Basic Statistics and Data Analysis

Lecture Notes, MCQs of Statistics

Why do Correlation Coefficient values lie between +1 and -1?

We know that the ratio of the explained variation to the total variation is called the coefficient of determination. This ratio is non-negative and is therefore denoted by $r^2$, thus

\begin{align*}
r^2&=\frac{\text{Explained Variation}}{\text{Total Variation}}\\
&=\frac{\sum (\hat{Y}-\overline{Y})^2}{\sum (Y-\overline{Y})^2}
\end{align*}

It can be seen that if the total variation is all explained, the ratio $r^2$ (the coefficient of determination) is one, and if the total variation is all unexplained, the explained variation is zero and so the ratio $r^2$ is zero.

The square root of the coefficient of determination is called the correlation coefficient, given by

\begin{align*}
r&=\sqrt{ \frac{\text{Explained Variation}}{\text{Total Variation}} }\\
&=\pm \sqrt{\frac{\sum (\hat{Y}-\overline{Y})^2}{\sum (Y-\overline{Y})^2}}
\end{align*}

Since the total variation decomposes into explained and unexplained parts (the cross-product term vanishes for the least-squares fit), we have

\[\sum (\hat{Y}-\overline{Y})^2=\sum(Y-\overline{Y})^2-\sum (Y-\hat{Y})^2\]

therefore

\begin{align*}
r&=\sqrt{ \frac{\sum(Y-\overline{Y})^2-\sum (Y-\hat{Y})^2} {\sum(Y-\overline{Y})^2} }\\
&=\sqrt{1-\frac{\sum (Y-\hat{Y})^2}{\sum(Y-\overline{Y})^2}}\\
&=\sqrt{1-\frac{\text{Unexplained Variation}}{\text{Total Variation}}}=\sqrt{1-\frac{s_{y.x}^2}{s_y^2}}
\end{align*}

where $s_{y.x}^2=\frac{1}{n} \sum (Y-\hat{Y})^2$ and $s_y^2=\frac{1}{n} \sum (Y-\overline{Y})^2$

\begin{align*}
\Rightarrow r^2&=1-\frac{s_{y.x}^2}{s_y^2}\\
\Rightarrow s_{y.x}^2&=s_y^2(1-r^2)
\end{align*}

Since variances are non-negative

\[\frac{s_{y.x}^2}{s_y^2}=1-r^2 \geq 0\]

Solving the inequality, we have

\begin{align*}
1-r^2 & \geq 0\\
\Rightarrow r^2 \leq 1\, \text{or}\, |r| &\leq 1\\
\Rightarrow & -1 \leq r\leq 1
\end{align*}
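
As a quick numerical check of this derivation, the following minimal sketch (assuming Python with NumPy is available; the data values are invented purely for illustration) fits a least-squares line and verifies that $r^2$ equals both the explained/total variation ratio and $1-s_{y.x}^2/s_y^2$, and that $|r| \leq 1$.

import numpy as np

# Hypothetical sample data (for illustration only)
X = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
Y = np.array([2.1, 2.9, 4.2, 4.8, 6.1, 6.8])

# Least-squares fit: Y-hat = a + b*X (np.polyfit returns slope first)
b, a = np.polyfit(X, Y, 1)
Y_hat = a + b * X

total       = np.sum((Y - Y.mean())**2)      # total variation
explained   = np.sum((Y_hat - Y.mean())**2)  # explained variation
unexplained = np.sum((Y - Y_hat)**2)         # unexplained variation

r2_ratio = explained / total                 # r^2 as explained/total
r2_resid = 1 - unexplained / total           # r^2 as 1 - unexplained/total
r = np.corrcoef(X, Y)[0, 1]                  # correlation coefficient

print(r2_ratio, r2_resid, r**2)              # all three values agree
print(-1 <= r <= 1)                          # True: |r| <= 1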

Alternative Proof

Since $\rho(X,Y)=\rho(X^*,Y^*)$ where $X^*=\frac{X-\mu_X}{\sigma_X}$ and $Y^*=\frac{Y-\mu_Y}{\sigma_Y}$

and as covariance is bilinear and $X^*$, $Y^*$ have zero mean and variance 1, therefore

\begin{align*}
\rho(X^*,Y^*)&=Cov(X^*,Y^*)=Cov\{\frac{X-\mu_X}{\sigma_X},\frac{Y-\mu_Y}{\sigma_Y}\}\\
&=\frac{Cov(X-\mu_X,Y-\mu_Y)}{\sigma_X\sigma_Y}\\
&=\frac{Cov(X,Y)}{\sigma_X \sigma_Y}=\rho(X,Y)
\end{align*}

We also know that the variance of any random variable is non-negative; it is zero, i.e. $Var(X)=0$, if and only if X is constant (almost surely). Therefore

\[V(X^* \pm Y^*)=V(X^*)+V(Y^*)\pm2Cov(X^*,Y^*)\]

As $Var(X^*)=1$ and $Var(Y^*)=1$, this gives $V(X^*\pm Y^*)=2\pm 2Cov(X^*,Y^*)\geq 0$, which would be violated if $Cov(X^*,Y^*)$ were greater than 1 or less than -1. Hence \[-1\leq \rho(X,Y)=\rho(X^*,Y^*)\leq 1.\]

If $\rho(X,Y)=Cov(X^*,Y^*)=1$ then $Var(X^*-Y^*)=0$, making $X^*=Y^*$ almost surely. Similarly, if $\rho(X,Y)=Cov(X^*,Y^*)=-1$ then $X^*=-Y^*$ almost surely. In either case, Y would be a linear function of X almost surely.
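
This argument can also be checked numerically. The sketch below (assuming Python with NumPy; the simulated data and parameters are arbitrary choices for illustration) standardizes two variables, confirms that $Var(X^*\pm Y^*)=2\pm 2Cov(X^*,Y^*)\geq 0$, and shows that an exact linear relation gives a correlation of 1.

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=10_000)
Y = 0.6 * X + rng.normal(scale=0.8, size=10_000)   # a correlated pair

# Standardize: zero mean, unit variance
Xs = (X - X.mean()) / X.std()
Ys = (Y - Y.mean()) / Y.std()

rho = np.mean(Xs * Ys)                      # Cov(X*, Y*) = correlation of X and Y
print(np.var(Xs + Ys), 2 + 2 * rho)         # Var(X* + Y*) = 2 + 2*rho >= 0
print(np.var(Xs - Ys), 2 - 2 * rho)         # Var(X* - Y*) = 2 - 2*rho >= 0

# An exact linear relation Y = aX + b with a > 0 gives rho = +1
Z = 3.0 * X + 1.0
print(np.corrcoef(X, Z)[0, 1])              # approximately 1.0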

For a proof using the Cauchy-Schwarz inequality, please follow the link.

We can see that the correlation coefficient's value lies between -1 and +1.


Bias (Statistical Bias)

Bias is defined as the difference between the expected value of a statistic and the true value of the corresponding parameter. Therefore, the bias is a measure of the systematic error of an estimator: it indicates how far, on average, the estimator is from the true value of the parameter. For example, if we average a large number of estimates obtained from an unbiased estimator, we will obtain (approximately) the correct value.

Gauss, C.F. (1821) introduced the concept of an unbiased estimator during his work on the least squares method.

The bias of an estimator of a parameter should not be confused with its degree of precision, as the degree of precision is a measure of the sampling error.

There are several types of bias, which should not be considered mutually exclusive:

  • Selection Bias (arises due to systematic differences between the groups compared)
  • Exclusion Bias (arises due to the systematic exclusion of certain individuals from the study)
  • Analytical Bias (arises due to the way that the results are evaluated)

Mathematically, Bias can be Defined as

Let a statistic T be used to estimate a parameter θ. If $E(T)=\theta + b(\theta)$, then $b(\theta)$ is called the bias of the statistic T, where $E(T)$ represents the expected value of T. Note that if $b(\theta)=0$, then $E(T)=\theta$, so T is an unbiased estimator of θ.
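
As an illustration of this definition, the following sketch (assuming Python with NumPy; the normal population, sample size, and number of replications are arbitrary choices) approximates the bias of the variance estimator that divides by n, whose theoretical bias is $-\sigma^2/n$, and shows that the version dividing by n-1 is (approximately) unbiased.

import numpy as np

rng = np.random.default_rng(1)
sigma2, n, reps = 4.0, 10, 100_000           # true variance, sample size, replications

# Each row is one random sample of size n from a normal population
x = rng.normal(loc=5.0, scale=np.sqrt(sigma2), size=(reps, n))
t_biased   = x.var(axis=1, ddof=0)           # divides by n
t_unbiased = x.var(axis=1, ddof=1)           # divides by n - 1

# Bias b(theta) = E(T) - theta, approximated by averaging over replications
print(t_biased.mean()   - sigma2)            # close to -sigma2/n = -0.4
print(t_unbiased.mean() - sigma2)            # close to 0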

Reference:
Gauss, C.F. (1821, 1823, 1826). Theoria Combinationis Observationum Erroribus Minimis Obnoxiae, Parts 1, 2 and suppl. Werke 4, 1-108.

To whom is the researcher similar in hypothesis testing: the defense attorney or the prosecuting attorney? Why?

Testing of Hypothesis

The researcher is similar to the prosecuting attorney in the sense that the researcher brings the null hypothesis "to trial" when she believes there is probably strong evidence against the null.

  • Just as the prosecutor usually believes that the person on trial is not innocent, the researcher usually believes that the null hypothesis is not true.
  • In the court system the jury must assume (by law) that the person is innocent until the evidence clearly calls this assumption into question; analogously, in hypothesis testing the researcher must assume (in order to use hypothesis testing) that the null hypothesis is true until the evidence calls this assumption into question.

Why do educational researchers usually use .05 as their significance level?

Type I Error

It has become part of the statistical hypothesis testing culture.

  • It is a longstanding convention.
  • It reflects a concern over making type I errors (i.e., wanting to avoid the situation where you reject the null when it is true, that is, wanting to avoid “false positive” errors).
  • If you set the significance level at .05, then you will only reject a true null hypothesis 5% of the time (i.e., you will only make a type I error 5% of the time) in the long run, as the simulation sketch below illustrates.
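
A minimal simulation sketch of this long-run behaviour (assuming Python with NumPy and SciPy; the normal population, sample size, and number of replications are arbitrary choices): when the null hypothesis is true, a test conducted at the .05 level rejects in roughly 5% of repeated samples.

import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
alpha, n, reps = 0.05, 30, 20_000
mu0 = 0.0                                    # H0: mu = 0 is true for the simulated data

rejections = 0
for _ in range(reps):
    sample = rng.normal(loc=mu0, scale=1.0, size=n)   # data generated under the null
    result = stats.ttest_1samp(sample, popmean=mu0)
    if result.pvalue < alpha:
        rejections += 1                      # rejecting a true null: a Type I error

print(rejections / reps)                     # close to 0.05 in the long run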

Estimation and types of estimation

Estimation: Point and Interval Estimation

Estimation

The procedure of making a judgement or decision about a population parameter is referred to as statistical estimation, or simply estimation. Statistical estimation procedures provide estimates of population parameters with a desired degree of confidence. The degree of confidence can be controlled, in part, (i) by the size of the sample (a larger sample gives greater accuracy of the estimate) and (ii) by the type of estimate made. Population parameters are estimated from sample data because it is not possible (it is impracticable) to examine the entire population in order to make such an exact determination. Statistical estimation of a population parameter is further divided into two types: (i) Point Estimation and (ii) Interval Estimation.

Point Estimation

The objective of point estimation is to obtain a single number from the sample which will represent the unknown value of the population parameter. Population parameters (population mean, variance, etc.) are estimated from the corresponding sample statistics (sample mean, variance, etc.).
A statistic used to estimate a parameter is called a point estimator or simply an estimator; the actual numerical value obtained by the estimator is called an estimate.
The population parameter is denoted by θ, which is an unknown constant. The available information is in the form of a random sample $x_1, x_2, \ldots, x_n$ of size n drawn from the population. We formulate a function of the sample observations $x_1, x_2, \ldots, x_n$; the estimator of θ is denoted by $\hat{\theta}$. Different random samples provide different values of the statistic $\hat{\theta}$. Thus $\hat{\theta}$ is a random variable with its own sampling probability distribution.

Interval Estimation

A point estimator (such as the sample mean) calculated from the sample data provides a single number as an estimate of the population parameter, which cannot be expected to be exactly equal to the population parameter, because the mean of a sample taken from a population may assume different values for different samples. Therefore, we estimate an interval/range of values (a set of values) within which the population parameter is expected to lie with a certain degree of confidence. This range of values used to estimate a population parameter is known as an interval estimate, or an estimate by confidence interval, and is defined by two numbers between which the population parameter is expected to lie. For example, $a<\mu<b$ is an interval estimate of the population mean μ, indicating that the population mean is greater than a but less than b. The purpose of an interval estimate is to provide information about how close the point estimate is to the true parameter.

Note that the information developed about the shape of the sampling distribution of the sample mean, i.e. the sampling distribution of $\bar{x}$, allows us to locate an interval that has some specified probability of containing the population mean $\mu$.
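
To make the distinction between the two types concrete, here is a small sketch (assuming Python with NumPy and SciPy; the data values are invented for the example) that computes the point estimate $\bar{x}$ and a 95% interval estimate for the population mean $\mu$ based on the t distribution.

import numpy as np
from scipy import stats

# Hypothetical sample data (for illustration only)
x = np.array([12.1, 11.4, 13.0, 12.6, 11.9, 12.8, 12.2, 11.7])
n = len(x)

x_bar = x.mean()                             # point estimate of mu
se    = x.std(ddof=1) / np.sqrt(n)           # estimated standard error of x-bar

t_crit = stats.t.ppf(0.975, df=n - 1)        # 97.5th percentile of t(n-1)
lower, upper = x_bar - t_crit * se, x_bar + t_crit * se

print(x_bar)                                 # single best guess of mu
print(lower, upper)                          # 95% interval estimate: a < mu < b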

Which of the two types of estimation do you like the most, and why?

  • Point estimation is nice because it provides an exact point estimate of the population value. It provides you with the single best guess of the value of the population parameter.
  • Interval estimation is nice because it allows you to make statements of confidence that an interval will include the true population value.
