The Sum of Squared Deviations from the Mean (2015)

Introduction to the Sum of Squared Deviations

In statistics, the sum of squared deviations (also known as the sum of squares) is a measure of the total variability (spread or variation) within a data set. In other words, the sum of squares is a measure of deviation or variation from the mean (average) value of the given data set.

Computation of Sum of Squared Deviations

A sum of squares is calculated by first computing the differences between each data point (observation) and the mean of the data set, i.e. $x=X-\overline{X}$. The computed $x$ is known as the deviation score for the given data set. Squaring each of these deviation scores and then adding them gives us the sum of squared deviations (SS), which is represented mathematically as

\[SS=\sum(x^2)=\sum(X-\overline{X})^2\]

Note that the small letter $x$ usually represents the deviation of each observation from the mean value, while the capital letter $X$ represents the variable of interest in statistics.

The Sum of Squared Deviations Example

Consider the following data set {5, 6, 7, 10, 12}. To compute the sum of squares of this data set, follow these steps:

  • Calculate the average of the given data by summing all the values in the data set and then dividing this sum by the total number of observations. Mathematically, it is $\frac{\sum X_i}{n}=\frac{40}{5}=8$, where 40 is the sum of all the numbers ($5+6+7+10+12$) and there are 5 observations.
  • Calculate the difference of each observation in the data set from the average computed in step 1. The differences are
    $5 - 8 = -3$; $6 - 8 = -2$; $7 - 8 = -1$; $10 - 8 = 2$; and $12 - 8 = 4$
    Note that the sum of these differences is always zero: $(-3) + (-2) + (-1) + 2 + 4 = 0$
  • Now square each of the differences obtained in step 2. The squares of these differences are
    9, 4, 1, 4 and 16
  • Now add the squared numbers obtained in step 3. The sum of these squared quantities is $9 + 4 + 1 + 4 + 16 = 34$, which is the sum of squares of the given data set.
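
The four steps above can be sketched in plain Python (an illustrative snippet, not part of the original article):

```python
# Sum of squared deviations for the example data set {5, 6, 7, 10, 12}
data = [5, 6, 7, 10, 12]

mean = sum(data) / len(data)               # step 1: 40 / 5 = 8
deviations = [x - mean for x in data]      # step 2: deviations from the mean
squares = [d ** 2 for d in deviations]     # step 3: square each deviation
ss = sum(squares)                          # step 4: add the squared deviations

print(mean)             # 8.0
print(sum(deviations))  # 0.0 (the deviations always sum to zero)
print(ss)               # 34.0
```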

Sums of Squares in Different Contexts

In statistics, the sum of squares occurs in different contexts, such as:

  • Partitioning of Variance (Partition of Sums of Squares)
  • Sum of Squared Deviations (Least Squares)
  • Sum of Squared Differences (Mean Squared Error)
  • Sum of Squared Errors (Residual Sum of Squares)
  • Sum of Squares due to Lack of Fit (Lack of Fit Sum of Squares)
  • Sum of Squares for Model Predictions (Explained Sum of Squares)
  • Sum of Squares for Observations (Total Sum of Squares)
  • Modeling involving Sums of Squares (Analysis of Variance)
  • Multivariate Generalization of the Sum of Squares (Multivariate Analysis of Variance)

As previously discussed, the sum of squares is a measure of the total variability of a set of scores around a specific number.

Summary

  • A higher sum of squares indicates that your data points are further away from the mean on average, signifying greater spread or variability in the data. Conversely, a lower sum of squares suggests the data points are clustered closer to the mean, indicating less variability.
  • The sum of squares plays a crucial role in calculating other important statistics like variance and standard deviation. These concepts help us understand the distribution of data and make comparisons between different datasets.
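
To illustrate the link between the sum of squares, variance, and standard deviation, the following Python sketch reuses the example data set from above: dividing SS by $n$ gives the population variance, dividing by $n-1$ gives the sample variance, and the square root of the variance gives the standard deviation.

```python
import math

data = [5, 6, 7, 10, 12]
n = len(data)
mean = sum(data) / n
ss = sum((x - mean) ** 2 for x in data)  # sum of squared deviations = 34

pop_var = ss / n            # population variance: 34 / 5 = 6.8
sample_var = ss / (n - 1)   # sample variance: 34 / 4 = 8.5
sample_sd = math.sqrt(sample_var)  # sample standard deviation
```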


Randomized Complete Block Design (RCBD)

The Randomized Complete Block Design (RCBD) may be defined as a design in which the experimental material is divided into blocks/groups of homogeneous experimental units (experimental units that share the same characteristics), and each block contains a complete set of treatments, which are assigned at random to the experimental units.

In a Completely Randomized Design (CRD), there is no restriction on the allocation of treatments to experimental units. In practice, however, there are situations with relatively large variability in the experimental material; in such cases it is possible to form blocks (in a simpler sense, groups) of relatively homogeneous experimental units. The design applied in such situations is called a Randomized Complete Block Design (RCBD).


RCBD is a one-restriction design, used to control a variable that influences the response variable. The aim of the restriction is to control the variable causing the variability in the response. Blocking is done to create homogeneity within each block; a blocking factor is a source of variability that is being controlled for. An example of a blocking factor is the gender of a patient: by blocking on gender, this source of variability is controlled for, leading to greater accuracy. RCBD is often analyzed as a mixed model in which the treatment factor is fixed and the block factor is random. A key assumption of the design is that there is no interaction between treatment and block effects.

The Randomized Complete Block Design is said to be a complete design because each block contains a complete set of treatments: each treatment occurs exactly once in every block, so the number of experimental units per block equals the number of treatments.

The general model is defined as

\[Y_{ij}=\mu+\eta_i+\xi_j+e_{ij}\]

where $i=1,2,\cdots, t$ and $j=1,2,\cdots, b$ with $t$ treatments and $b$ blocks. $\mu$ is the overall mean based on all observations, $\eta_i$ is the effect of the $i$th treatment, $\xi_j$ is the effect of the $j$th block, and $e_{ij}$ is the corresponding error term, which is assumed to be independent and normally distributed with mean zero and constant variance.

The main objective of blocking is to reduce the variability among experimental units within a block as much as possible and to maximize the variation among blocks; if the blocks do not differ, blocking would not contribute to improving the precision in detecting treatment differences.

Randomized Complete Block Design Experimental Layout

Suppose there are $t$ treatments and $r$ blocks in a randomized complete block design; then each block contains one homogeneous plot for each treatment. An experimental layout for such a design using four treatments in three blocks is as follows.

Block 1  Block 2  Block 3
A        B        C
B        C        D
C        D        A
D        A        B

From the RCBD layout, we can see that

  • The treatments are assigned at random within blocks of adjacent subjects and each of the treatments appears once in a block.
  • The number of blocks represents the number of replications
  • Any treatment can be adjacent to any other treatment, but not to the same treatment within the block.
  • Variation in an experiment is controlled by accounting for spatial effects.
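
The randomization these points describe can be sketched in Python (a minimal illustration; the treatment labels and block count follow the layout above, and the seed is only for reproducibility):

```python
import random

# RCBD randomization: each of b blocks receives a complete set of
# t treatments in an independent random order.
treatments = ["A", "B", "C", "D"]
blocks = 3

random.seed(1)  # reproducible layout for the illustration
layout = {f"Block {j + 1}": random.sample(treatments, k=len(treatments))
          for j in range(blocks)}

# Each treatment appears exactly once per block; the number of
# blocks equals the number of replications.
for block, order in layout.items():
    print(block, order)
```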


Covariance and Correlation (2015)

Introduction to Covariance and Correlation

Covariance and correlation are very important terminologies in statistics. Covariance measures the degree to which two variables co-vary (i.e. vary/change together). If the greater values of one variable (say, $X_i$) correspond with the greater values of the other variable (say, $X_j$), i.e. if the variables tend to show similar behavior, then the covariance between two variables ($X_i$, $X_j$) will be positive.

Similarly, if the smaller values of one variable correspond with the smaller values of the other variable, the covariance between the two variables will again be positive. In contrast, if the greater values of one variable (say, $X_i$) mainly correspond to the smaller values of the other variable (say, $X_j$), i.e. the two variables tend to show opposite behavior, then the covariance will be negative.

In other words, positive covariance between two variables means they (both of the variables) vary/change together in the same direction relative to their expected values (averages). It means that if one variable moves above its average value, the other variable tends to be above its average value.

Similarly, if covariance is negative between the two variables, then one variable tends to be above its expected value, while the other variable tends to be below its expected value. If covariance is zero then it means that there is no linear dependency between the two variables.

Mathematical Representation of Covariance

Mathematically, the covariance between two random variables $X_i$ and $X_j$ can be represented as
\[COV(X_i, X_j)=E[(X_i-\mu_i)(X_j-\mu_j)]\]
where
$\mu_i=E(X_i)$ is the average of the first variable
$\mu_j=E(X_j)$ is the average of the second variable

\begin{aligned}
COV(X_i, X_j)&=E[(X_i-\mu_i)(X_j-\mu_j)]\\
&=E[X_i X_j - X_i E(X_j)-X_j E(X_i)+E(X_i)E(X_j)]\\
&=E(X_i X_j)-E(X_i)E(X_j) - E(X_j)E(X_i)+E(X_i)E(X_j)\\
&=E(X_i X_j)-E(X_i)E(X_j)
\end{aligned}


Note that the covariance of a random variable with itself is the variance of that random variable, i.e. $COV(X_i, X_i)=VAR(X_i)$. If $X_i$ and $X_j$ are independent, then $E(X_i X_j)=E(X_i)E(X_j)$ and $COV(X_i, X_j)=E(X_i X_j)-E(X_i) E(X_j)=0$.
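
These properties can be checked numerically. The Python sketch below estimates the covariance from the definition, using the $n-1$ divisor for a sample; the data values are illustrative and not from the article:

```python
# Sample covariance based on COV(X, Y) = E[(X - mu_x)(Y - mu_y)]
X = [2.0, 4.0, 6.0, 8.0]
Y = [1.0, 3.0, 5.0, 7.0]   # Y = X - 1, so X and Y co-vary perfectly

def cov(a, b):
    n = len(a)
    mean_a = sum(a) / n
    mean_b = sum(b) / n
    return sum((ai - mean_a) * (bi - mean_b)
               for ai, bi in zip(a, b)) / (n - 1)

print(cov(X, Y))   # positive: the variables move together
print(cov(X, X))   # covariance of a variable with itself = its variance
```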

Covariance and Correlation

Correlation and covariance are related measures but not equivalent statistical measures.

Equation of Correlation (Normalized Covariance)

The correlation between two variables (say, $X_i$ and $X_j$) is their normalized covariance, defined as
\begin{aligned}
\rho_{i,j}&=\frac{E[(X_i-\mu_i)(X_j-\mu_j)]}{\sigma_i \sigma_j}\\
&=\frac{n \sum XY - \sum X \sum Y}{\sqrt{(n \sum X^2 -(\sum X)^2)(n \sum Y^2 - (\sum Y)^2)}}
\end{aligned}
where $\sigma_i$ is the standard deviation of $X_i$ and $\sigma_j$ is the standard deviation of $X_j$.

Note that correlation is dimensionless, i.e. a number that is free of the measurement unit, and its values lie between -1 and +1 inclusive. In contrast, covariance has a unit of measure: the product of the units of the two variables.
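
The computational form of the correlation given above can be written directly in Python (the data values are illustrative):

```python
import math

# Pearson correlation from the computational formula:
# rho = (n*sum(XY) - sum(X)*sum(Y)) /
#       sqrt((n*sum(X^2) - sum(X)^2) * (n*sum(Y^2) - sum(Y)^2))
X = [1.0, 2.0, 3.0, 4.0, 5.0]
Y = [2.0, 4.0, 5.0, 4.0, 5.0]

n = len(X)
sxy = sum(x * y for x, y in zip(X, Y))
sx, sy = sum(X), sum(Y)
sx2 = sum(x * x for x in X)
sy2 = sum(y * y for y in Y)

rho = (n * sxy - sx * sy) / math.sqrt(
    (n * sx2 - sx ** 2) * (n * sy2 - sy ** 2))

print(rho)  # dimensionless, always between -1 and +1
```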


Data Transformation (Variable Transformation)

Data transformation is a rescaling of the data using a function or some mathematical operation on each observation. When data are very strongly skewed (negatively or positively), we sometimes transform them so that they are easier to model. Alternatively, if a variable does not fit a normal distribution, one should try a data transformation to satisfy the assumptions of a parametric statistical test.

The most common data transformation is log (or natural log) transformation, which is often applied when most of the data values cluster around zero relative to the larger values in the data set and all of the observations are positive.

Data Transformation Techniques

Variable transformation can also be applied to one or more variables in scatter plot, correlation, and regression analysis to make the relationship between the variables more linear, and hence easier to model with a simple method. Transformations other than the log include the square root, the reciprocal, etc.

Reciprocal Transformation

The reciprocal transformation, $x$ to $\frac{1}{x}$ (or $-\frac{1}{x}$), is a very strong transformation with a drastic effect on the shape of the distribution. Note that this transformation cannot be applied to zero values, though it can be applied to negative values. The reciprocal reverses the order among values of the same sign (the largest becomes the smallest, and so on), and it is generally not useful unless all of the values are positive.

Logarithmic Transformation

The logarithmic transformation, $x$ to $\log_{10}(x)$ (or natural log, or log base 2), is another strong transformation that affects the shape of the distribution. It is commonly used for reducing right skewness, but cannot be applied to zero or negative values.

Square Root Transformation

The square root transformation, $x$ to $x^{\frac{1}{2}}=\sqrt{x}$, has a moderate effect on the distribution shape and is weaker than the logarithm. It can be applied to zero values but not to negative values.
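
A minimal Python sketch of the three transformations discussed above, with each domain restriction noted in a comment (the sample values are illustrative):

```python
import math

# A small right-skewed sample (all values positive, so every
# transformation below is applicable)
data = [1.0, 2.0, 4.0, 8.0, 64.0]

logged = [math.log10(x) for x in data]  # requires x > 0
rooted = [math.sqrt(x) for x in data]   # requires x >= 0
recip = [1.0 / x for x in data]         # requires x != 0; reverses order

print(logged)
print(rooted)
print(recip)  # the largest original value becomes the smallest
```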


The purpose of data transformation is:

  • Convert data from one format or structure to another (like changing a messy spreadsheet into a table).
  • Clean and prepare data for analysis (fixing errors, inconsistencies, and missing values).
  • Standardize data for easier integration and comparison (making sure all your data uses the same units and formats).

Goals of transformation

The goals of transformation may be

  • one might want to see the data structure differently
  • one might want to reduce the skew that assists in modeling
  • one might want to straighten a nonlinear (curvilinear) relationship in a scatter plot. In other words, a transformation may be used to achieve approximately equal dispersion, making data easier to handle and interpret

There are many techniques used in data transformation; these include:

  • Cleaning and Filtering: Identifying and removing errors, missing values, and duplicates.
  • Data Normalization: Ensuring data consistency across different fields.
  • Aggregation: Summarizing data by combining similar values.

Benefits of Data Transformation

The benefits of data transformation and data cleaning are:

  • Improved data quality: Fewer errors and inconsistencies lead to more reliable results.
  • Easier analysis: Structured data is easier to work with for data analysts and scientists.
  • Better decision-making: Accurate insights from clean data lead to better choices.

Data transformation is a crucial step in the data pipeline, especially in tasks like data warehousing, data integration, and data wrangling.

FAQs about Data Transformation

  • What is data transformation?
  • When is data transformation done?
  • What is the most common data transformation?
  • What is the reciprocal data transformation?
  • When is the reciprocal transformation not useful?
  • What is a logarithmic transformation?
  • When can the logarithmic transformation not be applied to the data?
  • What is the square root transformation?
  • When can the square root transformation not be applied?
  • What is the main purpose of data transformation?
  • What are the goals of transformation?
  • What is data normalization?
  • What is data aggregation?
  • What is cleaning and filtering?
  • What are the benefits of data transformation?
