Unbiasedness is probably the most important property that a good estimator should possess. In statistics, the bias (or bias function) of an estimator is the difference between the estimator's expected value and the true value of the parameter being estimated. An estimator is said to be unbiased if its expected value equals the corresponding population parameter; otherwise, it is said to be biased. Let us discuss the unbiasedness of an estimator in detail.
Unbiasedness of the Estimator
Suppose that in the realization of a random variable $X$ taking values in a probability space $(\chi, \mathfrak{F}, P_\theta)$, with $\theta \in \Theta$, a function $f:\Theta \rightarrow \Omega$ mapping the parameter set $\Theta$ into a certain set $\Omega$ has to be estimated, and that a statistic $T=T(X)$ is chosen as an estimator of $f(\theta)$. If $T$ is such that
\[E_\theta[T]=\int_\chi T(x)\, dP_\theta(x)=f(\theta)\]
holds for every $\theta \in \Theta$, then $T$ is called an unbiased estimator of $f(\theta)$. An unbiased estimator is frequently said to be free of systematic error.
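As a concrete instance of this definition, take a random sample $X_1, X_2, \ldots, X_n$ from a population with mean $\mu$ and let $T=\overline{X}$ be the sample mean. By the linearity of expectation,
\[E\left[\overline{X}\right]=E\left[\frac{1}{n}\sum_{i=1}^{n}X_i\right]=\frac{1}{n}\sum_{i=1}^{n}E[X_i]=\frac{1}{n}\cdot n\mu=\mu,\]
so $\overline{X}$ is an unbiased estimator of $f(\theta)=\mu$.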
Unbiased Estimator
Let $\hat{\theta}$ be an estimator of a parameter $\theta$. Then $\hat{\theta}$ is said to be an unbiased estimator if its bias $E(\hat{\theta})-\theta$ equals zero. More precisely:
- If $E(\hat{\theta})=\theta$ then $\hat{\theta}$ is an unbiased estimator of a parameter $\theta$.
- If $E(\hat{\theta})<\theta$ then $\hat{\theta}$ is a negatively biased estimator of a parameter $\theta$.
- If $E(\hat{\theta})>\theta$ then $\hat{\theta}$ is a positively biased estimator of a parameter $\theta$.
The bias of an estimator $\hat{\theta}$ is therefore given by $$\text{Bias}(\hat{\theta})=E(\hat{\theta})-\theta.$$
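One rough way to check unbiasedness numerically is to simulate many samples, apply the estimator to each, and compare the average estimate with the true parameter. The following is a minimal sketch assuming NumPy; the helper `empirical_bias` and the exponential-mean example are illustrative choices, not part of the discussion above.

```python
import numpy as np

rng = np.random.default_rng(42)

def empirical_bias(estimator, sampler, theta, n=30, reps=50_000):
    """Monte Carlo approximation of E(theta_hat) - theta."""
    estimates = np.array([estimator(sampler(n)) for _ in range(reps)])
    return estimates.mean() - theta

# Illustrative case: the sample mean as an estimator of an
# exponential population mean (true mean = 2.0).
bias = empirical_bias(np.mean,
                      lambda n: rng.exponential(scale=2.0, size=n),
                      theta=2.0)
print(f"empirical bias of the sample mean: {bias:+.4f}")  # close to 0
```

Some standard examples of unbiased estimators: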
- $\overline{X}$ is an unbiased estimator of the mean of a population (whose mean exists).
- $\overline{X}$ is an unbiased estimator of $\mu$ in a Normal distribution i.e. $N(\mu, \sigma^2)$.
- $\overline{X}$ is an unbiased estimator of the parameter $p$ of the Bernoulli distribution.
- $\overline{X}$ is an unbiased estimator of the parameter $\lambda$ of the Poisson distribution.
In each of these cases, the parameter $\mu, p$ or $\lambda$ is the mean of the respective population being sampled.
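The unbiasedness of $\overline{X}$ in these three families can be illustrated by a small simulation. This is a sketch assuming NumPy; the particular parameter values ($\mu=3$, $\sigma=2$, $p=0.3$, $\lambda=4$) are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 50, 20_000

cases = {
    "Normal(mu=3, sigma=2)": (3.0, lambda: rng.normal(3.0, 2.0, size=n)),
    "Bernoulli(p=0.3)":      (0.3, lambda: rng.binomial(1, 0.3, size=n)),
    "Poisson(lambda=4)":     (4.0, lambda: rng.poisson(4.0, size=n)),
}

for name, (param, draw) in cases.items():
    # Average the sample mean over many independent samples.
    avg_xbar = np.mean([draw().mean() for _ in range(reps)])
    print(f"{name}: average X-bar = {avg_xbar:.4f}, parameter = {param}")
```

In each case, the average of $\overline{X}$ over many samples settles near the parameter being estimated.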
However, the sample variance $S^2=\frac{1}{n}\sum_{i=1}^{n}(X_i-\overline{X})^2$ is not an unbiased estimator of the population variance $\sigma^2$, although it is consistent: its expectation is $E(S^2)=\frac{n-1}{n}\sigma^2$, so it systematically underestimates $\sigma^2$. Dividing by $n-1$ instead of $n$ (Bessel's correction) removes the bias.
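This shortfall by the factor $(n-1)/n$ is easy to see in simulation. A minimal sketch assuming NumPy, where $\sigma^2=4$ and $n=10$ are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)
sigma2, n, reps = 4.0, 10, 100_000

samples = rng.normal(0.0, np.sqrt(sigma2), size=(reps, n))
var_n  = samples.var(axis=1, ddof=0)  # divisor n: the biased S^2
var_n1 = samples.var(axis=1, ddof=1)  # divisor n-1: Bessel's correction

print(f"divisor n:   mean = {var_n.mean():.3f}, "
      f"theory (n-1)/n * sigma^2 = {(n - 1) / n * sigma2:.3f}")
print(f"divisor n-1: mean = {var_n1.mean():.3f}, theory sigma^2 = {sigma2:.3f}")
```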
It is possible to have more than one unbiased estimator for an unknown parameter. For example, if the population distribution is symmetric, the sample mean and the sample median are both unbiased estimators of the population mean $\mu$, as the sketch below illustrates.
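A small simulation for normal data, again assuming NumPy (the values $\mu=5$, $\sigma=1$, $n=25$ are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
mu, sigma, n, reps = 5.0, 1.0, 25, 50_000

samples = rng.normal(mu, sigma, size=(reps, n))
means   = samples.mean(axis=1)
medians = np.median(samples, axis=1)

print(f"sample mean:   E ~ {means.mean():.4f},   Var ~ {means.var():.5f}")
print(f"sample median: E ~ {medians.mean():.4f}, Var ~ {medians.var():.5f}")
```

Both averages land near $\mu=5$, so both estimators are unbiased here, but the sample mean has the smaller variance for normal data, which is one reason to prefer it even when several unbiased estimators are available.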