Estimating the Mean

The mean is the first statistic we learn and the cornerstone of many analyses. But how well do we understand its estimation? For statisticians, estimating the mean is more than summing and dividing: it involves navigating assumptions, choosing appropriate methods, and understanding the implications of those choices. Let us delve deeper into the art and science of estimating the mean.

The Simple Sample Mean: A Foundation

The formula of the sample mean is $\overline{x}= \frac{\sum\limits_{i=1}^n x_i}{n}$. Under ideal conditions (simple random sampling; independent and identically distributed data), the sample mean is an unbiased estimator of the population mean ($\mu$). Violating these assumptions can lead to biased estimates. For large samples, the Central Limit Theorem (CLT) guarantees that the distribution of the sample mean is approximately normal, regardless of the population distribution.
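As a quick sketch in R (using a small hypothetical sample and base R only), the sample mean is computed with mean(), and a short simulation illustrates the CLT at work:

```r
# Sample mean of a small hypothetical sample
x <- c(4.2, 5.1, 3.8, 6.0, 5.5, 4.9)
mean(x)                      # identical to sum(x) / length(x)

# CLT sketch: means of many samples from a skewed (exponential) population
sample_means <- replicate(5000, mean(rexp(50, rate = 1)))
hist(sample_means, main = "Sampling distribution of the mean",
     xlab = "Sample mean")   # approximately normal despite the skewed population
```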

Weighted Means

Beyond simple random sampling, observations may have varying importance (e.g., survey data with different sampling weights); in such cases a weighted mean is used. The formula of the weighted mean is $ \overline{x}_w = \frac{\sum\limits_{i=1}^n w_ix_i}{\sum\limits_{i=1}^n w_i}$. Weighted means arise in survey sampling and in adjusting for non-response. In stratified sampling, the mean is estimated separately within each stratum and then combined, which reduces variance and improves precision. Cluster sampling, where observations are grouped into clusters, poses its own challenges for estimating the mean.
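A minimal R sketch of the weighted mean, using hypothetical observations x and sampling weights w:

```r
# Weighted mean: hypothetical observations x with sampling weights w
x <- c(10, 12, 15, 11)
w <- c(1, 2, 2, 5)
weighted.mean(x, w)    # built-in function
sum(w * x) / sum(w)    # same result, written out from the formula
```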

Robust Estimation

Robust estimation is required when the sample mean is vulnerable to extreme values. A common alternative to the sample mean is the median, valued for its robustness to outliers. The trimmed mean, which discards a fixed fraction of the smallest and largest observations before averaging, balances robustness and efficiency.
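A short R illustration with a hypothetical sample containing one outlier:

```r
# Hypothetical sample with one extreme value
x <- c(4.8, 5.1, 5.3, 4.9, 5.0, 25.0)
mean(x)               # pulled upward by the outlier
median(x)             # robust to the outlier
mean(x, trim = 0.2)   # 20% trimmed mean: drops the most extreme values at each end
```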

Confidence Intervals for Estimating the Mean

Confidence intervals use the standard error of the mean to reflect the precision of the estimate. For small samples the t-distribution is used, while for large samples the z-distribution (standard normal) is used to construct confidence intervals. Bootstrapping, a non-parametric method, can also be used to construct confidence intervals and is especially useful when distributional assumptions are violated.
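A minimal base-R sketch comparing a t-based interval with a bootstrap percentile interval (hypothetical data; dedicated packages such as boot are typically used in practice):

```r
set.seed(1)
x <- c(4.2, 5.1, 3.8, 6.0, 5.5, 4.9, 5.2, 4.7)   # hypothetical sample

# t-based 95% confidence interval
t.test(x)$conf.int

# Bootstrap percentile 95% CI: resample with replacement, compute the mean each time
boot_means <- replicate(10000, mean(sample(x, replace = TRUE)))
quantile(boot_means, c(0.025, 0.975))
```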

Point Estimate: To estimate the population mean $\mu$ for a random variable $x$ using a sample of values, the best possible point estimate is the sample mean $\overline{x}$.

Interval Estimate: An interval estimate for the mean $\mu$ is constructed by starting with the sample mean $\overline{x}$ and adding a margin of error $E$ (based on the standard error) above and below $\overline{x}$. The interval is of the form $(\overline{x} - E, \overline{x} + E)$.

Example: Suppose that the mean height of Pakistani men is between 67.5 and 70.5 inches with a level of confidence of $c = 0.90$. To estimate the men's height, the sample mean $\overline{x}$ is 69 inches with a margin of error $E = 1.5$ inches. That is, $(\overline{x} - E, \overline{x} + E) = (69 - 1.5, 69 + 1.5) = (67.5, 70.5)$.

Note that the margin of error used for constructing an interval estimate depends on the level of confidence. A larger level of confidence results in a larger margin of error and hence a wider interval.

(Figure: boxplot of a sample with the mean marked)

Calculating the Margin of Error for Large Samples

If a random variable $x$ is normally distributed (with a known population standard deviation $\sigma$), or if the sample size $n$ is at least 30 (so that the Central Limit Theorem applies), then:

  • $\overline{x}$ is approximately normally distributed
  • $\mu_{\overline{x}} = \mu$
  • $\sigma_{\overline{x}}=\frac{\sigma}{\sqrt{n}}$

That is, the mean of the sampling distribution of $\overline{x}$ equals the population mean $\mu$. Given the desired level of confidence $c$, we try to find the margin of error $E$ necessary to ensure that the probability of $\overline{x}$ being within $E$ of the mean is $c$.

There are two critical $z$-scores, $\pm z_c$, that bound the central probability $c$ under the standard normal distribution; the corresponding margin of error for the distribution of $\overline{x}$ is $E = z_c \times \sigma_{\overline{x}}$, or

$$E=z_c \frac{\sigma}{\sqrt{n}}$$

Usually, $\sigma$ is unknown, but if $n\ge 30$ then the sample standard deviation $s$ is generally a reasonable estimate.
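A short R sketch of this calculation with hypothetical summary values (qnorm() supplies the critical value $z_c$):

```r
# Hypothetical large-sample summary statistics
n    <- 50
xbar <- 69     # sample mean
s    <- 2.8    # sample standard deviation, used in place of sigma since n >= 30

c_level <- 0.90
z_c <- qnorm(1 - (1 - c_level) / 2)    # critical z-score (about 1.645 for c = 0.90)
E   <- z_c * s / sqrt(n)               # margin of error
c(lower = xbar - E, upper = xbar + E)  # interval estimate
```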

(Figure: histogram illustrating the estimation of the mean)

Dealing with Missing Data

When dealing with missing data, one can impute the mean: replace each missing value with the mean of the observed values. Mean imputation is simple, but it understates the variance of the imputed variable. Multiple imputation can be used instead to account for the uncertainty due to missingness.
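A minimal sketch of mean imputation in base R with hypothetical data (packages such as mice implement multiple imputation):

```r
x <- c(4.2, NA, 5.1, 6.0, NA, 5.5)                    # hypothetical data with NAs
x_imputed <- x
x_imputed[is.na(x_imputed)] <- mean(x, na.rm = TRUE)  # replace NAs with observed mean
x_imputed
var(x, na.rm = TRUE)   # variance of the observed values
var(x_imputed)         # smaller: mean imputation understates variance
```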

Bayesian Estimation

In Bayesian estimation, a prior distribution for the mean is combined with the data to obtain a posterior distribution. This incorporates prior information, yields updated beliefs about the mean, and handles uncertainty directly.
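As a concrete sketch, assume normal data with known variance and a normal prior on the mean; the posterior is then normal with a precision-weighted mean (a standard conjugate result; the numbers below are hypothetical):

```r
# Normal-normal conjugate update for the mean (data variance assumed known)
x      <- c(68.1, 70.2, 69.4, 68.8, 69.9)  # hypothetical data
sigma2 <- 4     # assumed known data variance
mu0    <- 67    # prior mean
tau2   <- 9     # prior variance

n <- length(x)
post_var  <- 1 / (1 / tau2 + n / sigma2)                # posterior variance
post_mean <- post_var * (mu0 / tau2 + sum(x) / sigma2)  # precision-weighted mean
c(post_mean = post_mean, post_var = post_var)
```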

Summary

Estimating the mean is a fundamental statistical task, but it requires careful consideration of assumptions, data characteristics, and the goals of the analysis. By understanding the nuances of different estimation methods, statisticians can provide more accurate and reliable insights.


Consistency: A Property of a Good Estimator

Consistency refers to the property of an estimator that as the sample size increases, the estimator converges in probability to the true value of the parameter being estimated. In other words, a consistent estimator will yield results that become more accurate and stable as more data points are collected.

Characteristics of a Consistent Estimator

A consistent estimator has some important characteristics:

  • Convergence: The estimator will produce values that get closer to the true parameter value with larger samples.
  • Reliability: A consistent estimator provides reassurance that estimates remain valid as more data are collected.

Examples of Consistent Estimators

  1. Sample Mean ($\overline{x}$): The sample mean is a consistent estimator of the population mean ($\mu$). The mean of a larger sample from a population tends to be closer to the actual population mean than that of a smaller one.
  2. Sample Proportion ($\hat{p}$): The sample proportion is also a consistent estimator of the true population proportion. As the number of observations increases, the sample proportion gets closer to the true population proportion.

Question: $\hat{\theta}$ is a consistent estimator of the parameter $\theta$ of a given population if

  1. $\hat{\theta}$ is unbiased, and
  2. $var(\hat{\theta}) \rightarrow 0$ when $n\rightarrow \infty$

Answer: Suppose $X$ is a random variable with mean $\mu$ and variance $\sigma^2$. If $X_1,X_2,\cdots,X_n$ is a random sample from $X$, then

\begin{align*}
E(\overline{X}) &= \mu\\
Var(\overline{X}) & = \frac{\sigma^2}{n}
\end{align*}

That is, $\overline{X}$ is unbiased, and $\lim\limits_{n\rightarrow\infty} Var(\overline{X}) = \lim\limits_{n\rightarrow\infty} \frac{\sigma^2}{n} = 0$; hence $\overline{X}$ is a consistent estimator of $\mu$.
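A quick R simulation sketch of this convergence, using a hypothetical population with $\mu = 5$:

```r
set.seed(42)
# Running mean of draws from a hypothetical population with true mean 5
x <- rnorm(10000, mean = 5, sd = 2)
running_mean <- cumsum(x) / seq_along(x)
plot(running_mean, type = "l", xlab = "Sample size n", ylab = "Sample mean")
abline(h = 5, lty = 2)   # the sample mean settles toward the true mean
```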

Question: Show that the sample mean $\overline{X}$ of a random sample of size $n$ from the density function $f(x; \theta) = \frac{1}{\theta} e^{-\frac{x}{\theta}}, \qquad 0<x<\infty$ is a consistent estimator of the parameter $\theta$.

Answer: First, we need to check that $E(\overline{X})=\theta$, that is, that the sample mean $\overline{X}$ is unbiased.

\begin{align*}
E(X) &= \mu = \int x\cdot f(x; \theta)\, dx = \int\limits_{0}^{\infty}x\cdot \frac{1}{\theta} e^{-\frac{x}{\theta}}\, dx\\
&= \frac{1}{\theta} \int\limits_{0}^{\infty} xe^{-\frac{x}{\theta}}\, dx\\
&= \frac{1}{\theta} \left[ \Big[-\theta x e^{-\frac{x}{\theta}}\Big]_{0}^{\infty} + \theta \int\limits_{0}^{\infty} e^{-\frac{x}{\theta}}\, dx \right]\\
&= \frac{1}{\theta} \left[0 + \theta \Big[-\theta e^{-\frac{x}{\theta}}\Big]_0^{\infty} \right] = \frac{1}{\theta}\cdot \theta^2 = \theta\\
E(X^2) &= \int x^2 f(x; \theta)\, dx = \int\limits_{0}^{\infty}x^2\, \frac{1}{\theta} e^{-\frac{x}{\theta}}\, dx\\
&= \frac{1}{\theta}\left[ \Big[-\theta x^2 e^{-\frac{x}{\theta}}\Big]_{0}^{\infty} + \int\limits_0^\infty 2\theta x\, e^{-\frac{x}{\theta}}\, dx \right]\\
&= \frac{1}{\theta} \left[ 0 + 2\theta^2 \int\limits_0^\infty \frac{x}{\theta}\, e^{-\frac{x}{\theta}}\, dx\right]
\end{align*}

The remaining integral is the same as the one evaluated for $E(X)$, so it equals $\theta$. Thus

\begin{align*}
E(X^2) &= \frac{1}{\theta}\cdot 2\theta^2 \cdot \theta = 2\theta^2\\
Var(X) &= E(X^2) - [E(X)]^2 = 2\theta^2 - \theta^2 = \theta^2\\
\text{and}\quad Var(\overline{X}) &= \frac{\sigma^2}{n} = \frac{\theta^2}{n}\\
\lim\limits_{n\rightarrow \infty} Var(\overline{X}) &= \lim\limits_{n\rightarrow \infty} \frac{\theta^2}{n} = 0
\end{align*}

Since $\overline{X}$ is unbiased and $Var(\overline{X})$ approaches 0 as $n\rightarrow \infty$, $\overline{X}$ is a consistent estimator of $\theta$.
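A quick numerical check in R, assuming a hypothetical $\theta = 3$ (note that rexp() is parameterized by the rate $1/\theta$):

```r
set.seed(7)
theta <- 3
# Sample means for increasing n; rexp() uses rate = 1/theta
for (n in c(10, 100, 1000, 100000)) {
  cat("n =", n, "  sample mean =", mean(rexp(n, rate = 1 / theta)), "\n")
}
```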

Importance of Consistency in Statistics

The following are a few key points about the importance of consistency in statistics:

Reliable Inferences: Consistent estimators ensure that as the sample size increases, the estimates come closer and closer to the true population parameter. This helps researchers and statisticians make sound inferences about a population based on sample data.

Foundation for Hypothesis Testing: Most statistical methods rely on consistent estimators. Consistency helps validate the conclusions drawn from statistical tests, leading to confidence in decision-making.

Improved Accuracy: As the sample size increases and more data points become available, the estimates converge more closely to the true value. This leads to more accurate statistical models, which improves analysis and prediction.

Mitigating Sampling Error: Consistent estimators help to reduce the impact of random sampling error. As sample sizes increase, the variability in estimates tends to decrease, leading to more dependable conclusions.

Building Statistical Theory: Consistency is a fundamental concept in the development of statistical theory. It provides a rigorous foundation for designing and validating statistical methods and procedures.

Trust in Results: Consistency builds trust in the findings of statistical analyses. Because the results are stable and reliable across different (large) samples, people are more likely to accept and act upon them.

Framework for Model Development: In statistics and data science, building models on consistent estimators yields more accurate models.

Long-Term Decision Making: Businesses and organizations often make strategic decisions based on statistical analyses, so consistency in data interpretation supports long-term planning, risk assessment, and resource allocation.



MCQs Estimation Quiz 8

The MCQs Estimation Quiz from Statistical Inference covers the topics of estimation (confidence intervals) and the Bayes Factor, for the preparation of exams and statistical job tests in government, semi-government, and private organizations. The test will also help with admission to different colleges and universities. The online MCQs Estimation Quiz will help learners understand the related concepts and enhance their knowledge.

Online MCQs Estimation Quiz with Answers

1. A Bayes Factor that provides strong evidence for the null model does not mean the null hypothesis is true.

2. Suppose that a research article indicates a value of $p = 0.001$ in the results section ($\alpha = 0.05$). The p-value of a statistical test is the probability of the observed result or a more extreme result, assuming the null hypothesis is true.

3. Suppose that a research article indicates a value of $p = 0.30$ in the results section ($\alpha = 0.05$). You have absolutely proven the null hypothesis (that is, you have proven that there is no difference between the population means).

4. Two researchers are investigating if people can see in the future. Person A believes there is no effect, which would mean that p-values are distributed as a —————-. B finds a test statistic at the very far end of the distribution, which means that —————-.

5. Suppose the Bayesian method is used to estimate a population mean of 10 with a 95% credible interval from 8 to 12, which means ————–. This interval depends on —————.

6. To conclude that the difference between the two estimates is non-significant ($\alpha = 0.05$), the two 95% confidence intervals around the means do not overlap.

7. A Bayes Factor close to 1 (inconclusive evidence) means that the effect size is small.

8. Suppose that a research article indicates a value of $p = 0.001$ in the results section ($\alpha = 0.05$). The value $p = 0.001$ does not directly confirm that the effect size was large.

9. A Bayes Factor that provides strong evidence for the alternative model does not mean the alternative hypothesis is true.

10. When a Bayesian t-test yields a $BF = 10$, it is ten times more likely that there is an effect than that there is no effect.

11. The probability of finding a significant result when there is no true effect is called ————–. The probability of finding a significant result when there is a true effect is called —————.

12. The likelihood ratio of the two hypotheses gives information about ————–, but not about —————-.

13. If two 95% confidence intervals around the means overlap, then the difference between the two estimates is necessarily non-significant ($\alpha = 0.05$).

14. When a Bayesian t-test yields a $BF = 0.1$, it is ten times more likely that there is no effect than that there is an effect.

15. The specific 95% confidence interval observed in a study has a 95% chance of containing the true effect size.

16. Suppose a research article indicates a value of $p = 0.001$ in the results section ($\alpha = 0.05$). The probability that the given study's results are replicable is not equal to $1-p$.

17. How are the three paths to statistical inference (frequentist, likelihood, Bayesian) related to each other?

18. An observed 95% confidence interval does not predict that 95% of the estimates from future studies will fall inside the observed interval.

19. Suppose that a research article indicates a value of $p = 0.001$ in the results section ($\alpha = 0.05$). The p-value gives the probability of obtaining a significant result whenever a given experiment is replicated.

20. Suppose that a research article indicates a value of $p = 0.30$ in the results section ($\alpha = 0.05$). The probability that the given study's results are replicable is not equal to $1-p$.


Statistical inference is the branch of statistics in which we draw conclusions (make decisions) about a population parameter using sample information. Statistical inference can be further divided into estimation of population parameters and hypothesis testing.

Estimation is a way of finding the unknown value of a population parameter from sample information by using an estimator (a statistical formula). One can estimate a population parameter using two approaches: (i) point estimation and (ii) interval estimation.
