Pearson Correlation Coefficient (2012)

Introduction to Pearson Correlation Coefficient

The correlation coefficient, or Pearson Correlation Coefficient, was developed by Karl Pearson around 1900. The Pearson Correlation Coefficient measures the strength of the linear relationship between two continuous random variables. It is denoted by $\rho_{XY}$ for a population and by $r_{XY}$ for a sample.

The Pearson Correlation coefficient can take values in the interval $[-1, 1]$. If the coefficient value is $1$ or $-1$, there is a perfect linear relationship between the variables. A positive sign indicates a positive (direct) relationship, while a negative sign indicates a negative (indirect, or inverse) relationship between the variables.

A zero value implies the absence of a linear relationship, but it does not mean that the variables are independent: there may be some other sort of relationship between the variables of interest, such as a curvilinear or circular relationship.

Figure: Pearson Correlation Coefficient scatter diagrams

Pearson’s Correlation Formula

Mathematically, if two random variables $X$ and $Y$ follow a joint distribution, then the simple linear correlation coefficient equals the covariance between $X$ and $Y$ divided by the product of their standard deviations, i.e.,

\[\rho=\frac{Cov(X, Y)}{\sigma_X \sigma_Y}\]

where $Cov(X, Y)$ is the covariance between $X$ and $Y$, and $\sigma_X$ and $\sigma_Y$ are the respective standard deviations of the random variables.

For a sample of size $n$, $(X_1, Y_1),(X_2, Y_2),\cdots,(X_n, Y_n)$, from the joint distribution, the quantity given below is an estimate of $\rho$, called the sample correlation coefficient and denoted by $r$.

\begin{eqnarray*}
r&=&\frac{\sum_{i=1}^{n}(X_i-\bar{X})(Y_i-\bar{Y})}{\sqrt{\sum_{i=1}^{n}(X_i-\bar{X})^2 \times \sum_{i=1}^{n}(Y_i-\bar{Y})^2}}\\
&=& \frac{Cov(X,Y)}{S_X S_Y}
\end{eqnarray*}
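
As a quick illustration of the sample correlation formula above, the following Python sketch computes $r$ directly and cross-checks it against SciPy's built-in implementation. The data values and variable names are illustrative assumptions, not from the article.

```python
import numpy as np
from scipy import stats

# Illustrative (made-up) paired observations
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.0, 9.8])

# r = sum((X - Xbar)(Y - Ybar)) / sqrt(sum((X - Xbar)^2) * sum((Y - Ybar)^2))
r = np.sum((x - x.mean()) * (y - y.mean())) / np.sqrt(
    np.sum((x - x.mean()) ** 2) * np.sum((y - y.mean()) ** 2)
)

# Cross-check against SciPy's implementation
r_scipy, p_value = stats.pearsonr(x, y)
print(r, r_scipy)  # both approximately 0.999
```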

Note that

  • The existence of a statistical correlation does not mean that there exists a cause-and-effect relation between the variables. Cause and effect mean that a change in one variable does cause a change in the other variable.
  • The changes in the variables may be due to a common cause or random variations.
  • There are many kinds of correlation coefficients. The choice of which to use for a particular set of data depends on different factors such as
    • Type of Scale (Level of Measurement or Measurement Scale) used to express the variables
    • Nature of the underlying distribution (continuous or discrete)
    • Characteristics of the distribution of the scores (linear or non-linear)
  • Correlation is perfectly linear if a constant change in $X$ is accompanied by a constant change in $Y$. In this case, all the points in the scatter diagram will lie on a straight line.
  • A high correlation coefficient does not necessarily imply a direct dependence between the variables. For example, there may be a high correlation between the number of crimes and shoe prices. Such a correlation is referred to as a nonsense or spurious correlation.

Properties of Pearson Correlation Coefficient

The following are important properties of the Pearson correlation coefficient:

  1. The Pearson correlation coefficient is symmetric in $X$ and $Y$, i.e., $r_{XY}=r_{YX}$.
  2. The correlation coefficient is a pure number; it does not depend upon the units in which the variables are measured.
  3. The correlation coefficient is the geometric mean of the two regression coefficients. Thus, if the two regression lines of $Y$ on $X$ and of $X$ on $Y$ are written as $Y=a+bX$ and $X=c+dY$ respectively, then $bd=r^2$.
  4. The correlation coefficient is independent of the choice of origin and scale of measurement, i.e., $r$ remains unchanged if constants are added to or subtracted from the variables, or if the variables are multiplied or divided by positive constants (illustrated in the sketch after this list).
  5. The correlation coefficient lies between -1 and +1, symbolically $-1\le r \le 1$.
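
As a quick numerical check of properties 1 and 4 above, the following Python sketch (with assumed, randomly generated data) verifies that $r_{XY}=r_{YX}$ and that $r$ is unchanged by a change of origin and scale:

```python
import numpy as np

rng = np.random.default_rng(seed=1)
x = rng.normal(size=100)
y = 2 * x + rng.normal(size=100)  # y is linearly related to x, plus noise

r_xy = np.corrcoef(x, y)[0, 1]
r_yx = np.corrcoef(y, x)[0, 1]    # property 1: r_XY = r_YX

# Property 4: shift the origin and rescale x; r should be unchanged
x_transformed = 10 + 5 * x
r_transformed = np.corrcoef(x_transformed, y)[0, 1]

print(np.isclose(r_xy, r_yx), np.isclose(r_xy, r_transformed))  # True True
```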

What is Standard Error of Sampling? (2012)

The standard error (SE) of a statistic is the standard deviation of the sampling distribution of that statistic. The standard error reflects how much sampling fluctuation a statistic will show. Inferential procedures such as constructing confidence intervals and significance testing are based on standard errors. Increasing the sample size decreases the standard error.

In practical applications, the true value of the standard deviation of the error is unknown. As a result, the term standard error is often used to refer to an estimate of this unknown quantity.

The size of the SE is affected by two factors:

  1. The standard deviation of the population affects the SE. The larger the population's standard deviation ($\sigma$), the larger the SE, since $SE=\frac{\sigma}{\sqrt{n}}$. If the population is homogeneous (which results in a small population standard deviation), the SE will also be small.
  2. The SE is affected by the number of observations in a sample. A large sample will result in a small SE of estimate (indicating less variability in the sample means).

Application of Standard Error of Sampling

The SEs are used in different statistical procedures, such as

  • to measure the distribution of the sample means
  • to build confidence intervals for means, proportions, differences between means, etc., for cases when population standard deviation is known or unknown.
  • to determine the sample size
  • in control charts for control limits for means
  • in comparison tests such as the z-test, t-test, and Analysis of Variance
  • in relationship tests such as Correlation and Regression Analysis (standard error of regression), etc.

(1) Standard Error Formula for the Mean

The SE for the mean, i.e., the standard deviation of the sampling distribution of the mean, measures the variation in the sampling distribution of the sample mean. It is denoted by $\sigma_{\bar{x}}$ and calculated as a function of the population standard deviation and the sample size, i.e.,

$\sigma_{\bar{x}}=\frac{\sigma}{\sqrt{n}}$                      (used when the population is infinite)

If the population is finite, of size $N$, then $\sigma_{\bar{x}}=\frac{\sigma}{\sqrt{n}} \times \sqrt{\frac{N-n}{N}}$. The correction factor $\sqrt{\frac{N-n}{N}}$ tends towards 1 as $N$ tends to infinity, so the two formulas agree for large populations.

When the population’s standard deviation ($\sigma$) is unknown, we estimate it with the sample standard deviation $S$. In this case, the SE formula is $\sigma_{\bar{x}}=\frac{S}{\sqrt{n}}$.
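
A minimal Python sketch of these SE-of-the-mean formulas; the sample values, the known $\sigma$, and the population size $N$ are assumed for illustration:

```python
import numpy as np

sample = np.array([12.0, 15.0, 11.0, 14.0, 13.0, 16.0, 12.0, 15.0])
n = len(sample)

# Case 1: population standard deviation known (assumed sigma = 2.0)
sigma = 2.0
se_known = sigma / np.sqrt(n)

# Case 2: sigma unknown -- estimate it with the sample standard deviation S
s = sample.std(ddof=1)            # ddof=1 gives the (n - 1) sample estimate
se_estimated = s / np.sqrt(n)

# Case 3: finite population of size N -- apply the correction factor
N = 100
se_finite = se_estimated * np.sqrt((N - n) / N)

print(se_known, se_estimated, se_finite)
```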

(2) Standard Error Formula for Proportion

The SE for a proportion, denoted by $\sigma_p$, is calculated in the same manner as the SE of the mean, with the standard deviation of the underlying trait given by $\sigma=\sqrt{p(1-p)}=\sqrt{pq}$, where $p$ is the probability that an element possesses the studied trait and $q=1-p$ is the probability that it does not.

For an infinite population, $\sigma_p=\frac{\sigma}{\sqrt{n}}=\sqrt{\frac{pq}{n}}$.
For a finite population of size $N$, $\sigma_p=\frac{\sigma}{\sqrt{n}}\sqrt{\frac{N-n}{N}}$.
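
A short sketch of these proportion formulas, with assumed values for $p$, $n$, and $N$:

```python
import math

p = 0.30      # assumed sample proportion possessing the trait
q = 1 - p
n = 200       # assumed sample size

se_infinite = math.sqrt(p * q / n)                # infinite population
N = 1000                                          # assumed population size
se_finite = se_infinite * math.sqrt((N - n) / N)  # finite population correction

print(round(se_infinite, 4), round(se_finite, 4))
```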

(3) Standard Error Formula for Difference Between Means

The SE for the difference between two independent quantities is the square root of the sum of the squared standard errors of the two quantities, i.e., $\sigma_{\bar{x}_1-\bar{x}_2}=\sqrt{\frac{\sigma_1^2}{n_1}+\frac{\sigma_2^2}{n_2}}$, where $\sigma_1^2$ and $\sigma_2^2$ are the respective variances of the two independent populations to be compared, and $n_1$ and $n_2$ are the respective sizes of the two samples drawn from those populations.

Unknown Population Variances
Suppose the variances of the two populations are unknown. In that case, we estimate them from the two samples, i.e., $\sigma_{\bar{x}_1-\bar{x}_2}=\sqrt{\frac{S_1^2}{n_1}+\frac{S_2^2}{n_2}}$, where $S_1^2$ and $S_2^2$ are the respective variances of the two samples drawn from their respective populations.

Equal Variances Assumed
When the variances of the two populations are assumed to be equal, we can estimate their common value with a pooled variance $S_p^2$, calculated as a function of $S_1^2$ and $S_2^2$, i.e.,

\[S_p^2=\frac{(n_1-1)S_1^2+(n_2-1)S_2^2}{n_1+n_2-2}\]
\[\sigma_{\bar{x}_1-\bar{x}_2}=S_p \sqrt{\frac{1}{n_1}+\frac{1}{n_2}}\]
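
The following Python sketch (with assumed sample data) computes both versions: the SE from the two separate sample variances and the pooled SE under the equal-variance assumption:

```python
import numpy as np

# Two assumed independent samples
x1 = np.array([23.0, 25.0, 28.0, 30.0, 26.0])
x2 = np.array([20.0, 22.0, 24.0, 21.0, 23.0, 25.0])
n1, n2 = len(x1), len(x2)
s1_sq, s2_sq = x1.var(ddof=1), x2.var(ddof=1)

# Unknown, unequal variances: SE = sqrt(S1^2/n1 + S2^2/n2)
se_unpooled = np.sqrt(s1_sq / n1 + s2_sq / n2)

# Equal variances assumed: pool the two sample variances first
sp_sq = ((n1 - 1) * s1_sq + (n2 - 1) * s2_sq) / (n1 + n2 - 2)
se_pooled = np.sqrt(sp_sq) * np.sqrt(1 / n1 + 1 / n2)

print(se_unpooled, se_pooled)
```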

(4) Standard Error for Difference between Proportions

The SE of the difference between two proportions is calculated in the same way as the SE of the difference between means, i.e.,
\begin{eqnarray*}
\sigma_{p_1-p_2}&=&\sqrt{\sigma_{p_1}^2+\sigma_{p_2}^2}\\
&=& \sqrt{\frac{p_1(1-p_1)}{n_1}+\frac{p_2(1-p_2)}{n_2}}
\end{eqnarray*}
where $p_1$ and $p_2$ are the proportions (for infinite populations) calculated from the two samples of sizes $n_1$ and $n_2$.
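
A brief sketch with assumed sample proportions and sample sizes:

```python
import math

p1, n1 = 0.40, 150    # assumed proportion and size of sample 1
p2, n2 = 0.30, 200    # assumed proportion and size of sample 2

se_diff = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
print(round(se_diff, 4))
```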

FAQs about Standard Error

  1. Define the Standard Error of Mean.
  2. Standard Error is affected by which two values?
  3. Write the formula of the standard error of mean, proportion, and difference between means.
  4. What is the application of standard error of mean in Sampling?
  5. Discuss the importance of the standard error.

Multivariate Analysis (2012)

The term Multivariate Analysis covers all statistical methods in which more than two variables are analyzed simultaneously.

Much of multivariate analysis is based upon an underlying probability model known as the Multivariate Normal Distribution (MND). The objectives of scientific investigations to which multivariate methods most naturally lend themselves include the following.

Objectives of Multivariate Analysis

The following are some basic objectives of multivariate analysis.

  • Data reduction or structural simplification
    The phenomenon being studied is represented as simply as possible without sacrificing valuable information. It is hoped that this will make interpretation easier.
  • Sorting and Grouping
    Graphs of similar objects or variables are created, based on measured characteristics. Alternatively, rules for classifying objects into well-defined groups may be required.
  • Investigation of the dependence among variables
    The nature of the relationships among variables is of interest. Are all the variables mutually independent, or are one or more variables dependent on the others?
  • Prediction
    Relationships between variables must be determined for predicting the values of one or more variables based on observation of the other variables.
  • Hypothesis Construction and testing
    Specific statistical hypotheses, formulated in terms of the parameters of the multivariate population, are tested. This may be done to validate assumptions or to reinforce prior convictions.

Applications: Multivariate analysis is used in various fields:

  • Social sciences (understanding factors influencing voting behavior)
  • Business (analyzing customer demographics and purchase patterns)
  • Finance (evaluating risk factors in investment portfolios)
  • Natural sciences (studying the relationships between different environmental variables)

Multivariate Data Sets

We are concerned with analyzing measurements made on several variables or characteristics. These measurements (data) must frequently be arranged and displayed in various ways (graphs, tabular form, etc.). The preliminary concepts underlying these first steps of data organization are described below.

Array

Multivariate data arise whenever an investigator, seeking to understand a social or physical phenomenon, selects a number $p\ge 1$ of variables or characteristics to record. The values of these variables are all recorded for each distinct item, individual, or experimental unit.

The notation $x_{jk}$ is used to indicate the particular value of the $k$th variable observed on the $j$th item or trial, i.e., $x_{jk}$ is the measurement of the $k$th variable on the $j$th item. So, $n$ measurements on $p$ variables can be displayed as

\[\begin{array}{ccccccc}
 & V_1 & V_2 & \dots & V_k & \dots & V_p \\
\text{Item } 1 & x_{11} & x_{12} & \dots & x_{1k} & \dots & x_{1p} \\
\text{Item } 2 & x_{21} & x_{22} & \dots & x_{2k} & \dots & x_{2p} \\
\vdots & \vdots & \vdots & & \vdots & & \vdots \\
\text{Item } j & x_{j1} & x_{j2} & \dots & x_{jk} & \dots & x_{jp} \\
\vdots & \vdots & \vdots & & \vdots & & \vdots \\
\text{Item } n & x_{n1} & x_{n2} & \dots & x_{nk} & \dots & x_{np} \\
\end{array}\]

These data can be displayed as rectangular arrays $X$ of $n$ rows and $p$ columns

\[X=\begin{pmatrix}
x_{11} & x_{12} & \dots & x_{1k} & \dots & x_{1p} \\
x_{21} & x_{22} & \dots & x_{2k} & \dots & x_{2p} \\
\vdots & \vdots & \ddots & \vdots & & \vdots \\
x_{j1} & x_{j2} & \dots & x_{jk} & \dots & x_{jp} \\
\vdots & \vdots & & \vdots & \ddots & \vdots \\
x_{n1} & x_{n2} & \dots & x_{nk} & \dots & x_{np}
\end{pmatrix}\]

This $X$ array contains the data consisting of all of the observations on all of the variables.

Example: Suppose we have data on the number of books sold and the total dollar amount of each sale.

Variable 1 (Sales in Dollars)
\[\begin{array}{ccccc}
\text{Data Values:} & 42 & 52 & 48 & 63 \\
\text{Notation:} & x_{11} & x_{21} & x_{31} & x_{41}
\end{array}\]

Variable 2 (Number of Books sold)
\[\begin{array}{ccccc}
\text{Data Values:} & 4 & 2 & 8 & 3 \\
\text{Notation:} & x_{12} & x_{22} & x_{32} & x_{42}
\end{array}\]

The information available in multivariate data sets can be assessed by calculating certain summary numbers, known as multivariate descriptive statistics, such as the sample mean of each variable (a measure of location) and the average of the squared distances of all of the observations from the mean (a measure of spread or variation).
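
Tying these pieces together, the following Python sketch builds the $4 \times 2$ data array $X$ from the book-sales example above and computes its sample mean vector (a measure of location) and sample covariance matrix (a measure of spread). The use of NumPy here is my own illustration, not part of the original article:

```python
import numpy as np

# Rows are items (sales), columns are variables (dollars, books sold)
X = np.array([[42, 4],
              [52, 2],
              [48, 8],
              [63, 3]], dtype=float)

mean_vector = X.mean(axis=0)          # [xbar_1, xbar_2] = [51.25, 4.25]
cov_matrix = np.cov(X, rowvar=False)  # 2 x 2 sample covariance matrix

print(mean_vector)
print(cov_matrix)
```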
