MCQs Estimation 1

MCQs from Statistical Inference covering the topics of Estimation and Hypothesis Testing, for the preparation of exams and various statistical job tests in government, semi-government, and private organizations. These tests are also helpful for gaining admission to colleges and universities. The Estimation and Hypothesis Testing Quiz will help learners understand the related concepts and enhance their knowledge.

1. For two estimators $T_1=t_1(X_1,X_2,\cdots,X_n)$ and $T_2=t_2(X_1,X_2,\cdots,X_n)$, if $R_{t_1}(\theta)\leq R_{t_2}(\theta)$ for all $\theta$ in $\Theta$, then the estimator $T_1$ is said to be

 
 
 
 
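For learners who want to see the idea behind Question 1 in action, here is a minimal Python sketch (not part of the quiz): under squared-error loss, the risk function is the mean squared error, and we can estimate it by simulation for two competing estimators of a normal mean. The population parameters below are hypothetical.

```python
# Sketch: comparing two estimators by simulated risk (MSE) under squared-error loss.
# Assumes a normal population; the sample mean should dominate the sample median here.
import numpy as np

rng = np.random.default_rng(42)
theta, n, reps = 5.0, 30, 10_000

samples = rng.normal(loc=theta, scale=2.0, size=(reps, n))
risk_mean = np.mean((samples.mean(axis=1) - theta) ** 2)          # R_{t1}(theta)
risk_median = np.mean((np.median(samples, axis=1) - theta) ** 2)  # R_{t2}(theta)

print(f"Risk of sample mean:   {risk_mean:.4f}")
print(f"Risk of sample median: {risk_median:.4f}")
```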

2. If $E(\hat{\theta})=\theta$, then $\hat{\theta}$ is said to be

 
 
 
 
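As a quick illustration of the property in Question 2, the following sketch (with hypothetical parameter values) checks unbiasedness empirically: averaging an estimator over many repeated samples should recover the true parameter when $E(\hat{\theta})=\theta$.

```python
# Sketch: checking unbiasedness empirically. E(theta_hat) = theta means the
# estimator's average over repeated samples matches the true parameter.
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, n, reps = 10.0, 3.0, 20, 50_000
samples = rng.normal(mu, sigma, size=(reps, n))

print("mean of sample means:", samples.mean(axis=1).mean())         # ~ mu (unbiased)
print("mean of var (ddof=0):", samples.var(axis=1, ddof=0).mean())  # < sigma^2 (biased)
print("mean of var (ddof=1):", samples.var(axis=1, ddof=1).mean())  # ~ sigma^2 (unbiased)
```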

3. For a biased estimator $\hat{\theta}$ of $\theta$, which one is correct

 
 
 
 

4. If $f(x_1,x_2,\cdots,x_n;\theta)=g(\hat{\theta};\theta)h(x_1,x_2,\cdots,x_n)$, then $\hat{\theta}$ is

 
 
 
 

5. Let $Z_1,Z_2,\cdots$ be independently and identically distributed random variables satisfying $E[|Z_i|]<\infty$. Let $N$ be an integer-valued random variable whose value $n$ depends only on the values of the first $n$ $Z_i$'s. Suppose $E(N)<\infty$. Then $E(Z_1+Z_2+\cdots+Z_N)=E(N)E(Z_1)$ is called

 
 
 
 
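The identity in Question 5 can be checked numerically. Below is a minimal sketch (hypothetical distribution and stopping level) in which $N$ is a genuine stopping time: we stop the first time the running sum of iid exponentials exceeds a level.

```python
# Sketch: numerically checking Wald's equation E(Z_1 + ... + Z_N) = E(N) E(Z_1).
# Here N is a stopping time: stop the first time the running sum exceeds a level.
import numpy as np

rng = np.random.default_rng(1)
level, reps = 50.0, 20_000
totals, ns = [], []
for _ in range(reps):
    total, n = 0.0, 0
    while total <= level:              # N depends only on Z_1, ..., Z_n seen so far
        total += rng.exponential(2.0)  # Z_i iid with E(Z_i) = 2
        n += 1
    totals.append(total)
    ns.append(n)

print("E(S_N)      ~", np.mean(totals))
print("E(N)*E(Z_1) ~", np.mean(ns) * 2.0)
```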

6. A test is said to be the most powerful test of size $\alpha$, if

 
 
 
 

7. If the conditional distribution of $X_1, X_2,\cdots,X_n$ given $S=s$ does not depend on $\theta$, for any value of $S=s$, the statistic $S=s(X_1,X_2,\cdots,X_n)$ is called

 
 
 
 

8. In statistical inference, the best asymptotically normal estimator is denoted by

 
 
 
 

9. If $Var(T_2) < Var(T_1)$, then $T_2$ is

 
 
 
 

10. Let $X_1,X_2,\cdots,X_n$ be a random sample from the density $f(x;\theta)$, where $\theta$ may be a vector. If the conditional distribution of $X_1,X_2,\cdots,X_n$ given $S=s$ does not depend on $\theta$ for any value $s$ of $S$, then the statistic $S$ is called

 
 
 
 

11. Which of the following assumptions are required to show the consistency, unbiasedness, and efficiency of the OLS estimator?

  1. $E(\mu_t)=0$
  2. $Var(\mu_t)=\sigma^2$
  3. $Cov(\mu_t,\mu_{t-j})=0;t\neq t-j$
  4. $\mu_t \sim N(0,\sigma^2)$
 
 
 
 
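To see the assumptions in Question 11 at work, here is a minimal Python sketch (not part of the quiz) that simulates a regression whose errors satisfy the classical conditions listed above; the intercept, slope, and error variance are hypothetical values chosen for illustration.

```python
# Sketch: OLS under the classical error assumptions listed above
# (zero mean, constant variance, no serial correlation; normality for exact inference).
import numpy as np

rng = np.random.default_rng(7)
n, beta0, beta1, sigma = 500, 1.0, 2.5, 1.0
x = rng.uniform(0, 10, n)
u = rng.normal(0.0, sigma, n)                    # mu_t ~ N(0, sigma^2), iid
y = beta0 + beta1 * x + u

X = np.column_stack([np.ones(n), x])
beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]  # OLS estimates
print("beta_hat:", beta_hat)                     # close to (1.0, 2.5)
```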

12. $Var_\theta (T) \geq \frac{[\tau'(\theta)]^2}{nE\left[\left(\frac{\partial}{\partial \theta}\log f(X;\theta)\right)^2\right]}$, where $T=t(X_1,X_2,\cdots, X_n)$ is an unbiased estimator of $\tau(\theta)$. The above inequality is called

 
 
 
 
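The bound in Question 12 can be made concrete. In the sketch below (hypothetical parameter values), we compute the lower bound for estimating the mean of a normal population, where $\tau(\theta)=\theta$ and the bound reduces to $\sigma^2/n$, and compare it with the simulated variance of the sample mean, which attains it.

```python
# Sketch: the Cramer-Rao lower bound for estimating the mean theta of a
# N(theta, sigma^2) population is sigma^2 / n; the sample mean attains it.
import numpy as np

rng = np.random.default_rng(3)
theta, sigma, n, reps = 0.0, 2.0, 25, 50_000

crlb = sigma**2 / n                # [tau'(theta)]^2 / (n * Fisher information)
means = rng.normal(theta, sigma, size=(reps, n)).mean(axis=1)
print("CRLB:               ", crlb)
print("Var of sample mean ~", means.var())
```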

13. Let $X_1,X_2,\cdots,X_n$ be a random sample from a density $f(x|\theta)$, where $\theta$ is a value of the random variable $\Theta$ with known density $g_\Theta(\theta)$. Then the estimator of $\tau(\theta)$ with respect to the prior $g_\Theta(\theta)$, defined as $E[\tau(\Theta)|X_1,X_2,\cdots,X_n]$, is called

 
 
 
 
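For Question 13, the posterior mean has a closed form in conjugate models. Here is a minimal sketch of $E(\theta \mid X_1,\cdots,X_n)$ in the normal-normal model, with hypothetical prior and data parameters.

```python
# Sketch: posterior mean E(theta | data) in the conjugate normal-normal model.
# Prior theta ~ N(m0, t0^2); data X_i | theta ~ N(theta, sigma^2) iid.
import numpy as np

rng = np.random.default_rng(5)
m0, t0, sigma, theta_true, n = 0.0, 1.0, 2.0, 1.5, 40
x = rng.normal(theta_true, sigma, n)

prec = 1 / t0**2 + n / sigma**2                  # posterior precision
post_mean = (m0 / t0**2 + x.sum() / sigma**2) / prec
print("posterior mean (Bayes estimator):", post_mean)
print("sample mean (MLE):               ", x.mean())
```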

14. If $Var(\hat{\theta})\rightarrow 0$ as $n \rightarrow \infty$, then $\hat{\theta}$ is said to be

 
 
 
 
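The shrinking-variance property in Question 14 is easy to visualize by simulation. In this sketch (hypothetical population values), $Var(\hat{\theta})=\sigma^2/n$ for the sample mean, so the estimated variance falls toward zero as $n$ grows.

```python
# Sketch: consistency in mean square. Var(theta_hat) = sigma^2 / n shrinks
# toward 0 as n grows, so the sample mean concentrates on the true mean.
import numpy as np

rng = np.random.default_rng(9)
mu, sigma, reps = 4.0, 3.0, 5_000
for n in (10, 100, 1000):
    means = rng.normal(mu, sigma, size=(reps, n)).mean(axis=1)
    print(f"n={n:>4}: Var(theta_hat) ~ {means.var():.5f}")
```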

15. A set of jointly sufficient statistics is defined to be minimal sufficient if and only if

 
 
 
 

16. Let $L(\theta;x_1,x_2,\cdots,x_n)$ be the likelihood function for a sample $X_1,X_2,\cdots, X_n$ having joint density $f(x_1,x_2,\cdots,x_n;\theta)$, where $\theta$ belongs to the parameter space $\Theta$. Then a test defined by $\lambda=\lambda_n=\lambda(x_1,x_2,\cdots,x_n)=\frac{\sup_{\theta\in \Theta_0}L(\theta;x_1,x_2,\cdots,x_n)}{\sup_{\theta\in \Theta}L(\theta;x_1,x_2,\cdots,x_n)}$ is called

 
 
 
 
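The ratio in Question 16 can be computed directly in simple models. Below is a minimal sketch (using SciPy, with hypothetical data) for testing $H_0:\theta=\theta_0$ with $X_i \sim N(\theta,1)$; the unrestricted supremum is attained at the sample mean, and $-2\log\lambda$ is approximately chi-square with 1 degree of freedom under $H_0$.

```python
# Sketch: generalized likelihood-ratio statistic for H0: theta = theta0
# with X_i ~ N(theta, 1). Under H0, -2 log(lambda) is approximately chi-square(1).
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
theta0, n = 0.0, 50
x = rng.normal(theta0, 1.0, n)

loglik = lambda th: stats.norm.logpdf(x, loc=th, scale=1.0).sum()
lam = np.exp(loglik(theta0) - loglik(x.mean()))  # sup over Theta_0 / sup over Theta
stat = -2 * np.log(lam)
print("-2 log lambda:", stat)
print("p-value:      ", stats.chi2.sf(stat, df=1))
```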

17. If $f(x_1, x_2,\cdots, x_n;\theta)$ is the joint density of $n$ random variables $X_1,X_2,\cdots, X_n$, considered as a function of $\theta$, then $L(\theta; x_1,x_2,\cdots, x_n)=f(x_1,x_2,\cdots,x_n;\theta)$ is called

 
 
 
 
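To make Question 17 concrete, the sketch below (hypothetical exponential data) evaluates the log-likelihood as a function of $\theta$ over a grid and maximizes it; the maximizer agrees with the closed-form MLE $1/\bar{x}$.

```python
# Sketch: the likelihood L(theta; x_1, ..., x_n) viewed as a function of theta,
# here for an exponential sample; its maximizer is the MLE 1 / x_bar.
import numpy as np

rng = np.random.default_rng(13)
x = rng.exponential(scale=2.0, size=100)      # rate theta_true = 0.5

def log_likelihood(theta):
    return np.sum(np.log(theta) - theta * x)  # log L(theta; x) for Exp(rate=theta)

grid = np.linspace(0.1, 2.0, 400)
theta_hat = grid[np.argmax([log_likelihood(t) for t in grid])]
print("grid MLE:       ", theta_hat)
print("closed-form MLE:", 1 / x.mean())
```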

Most of the MCQs on this page cover estimates and estimation, testing of hypotheses, parametric and non-parametric tests, etc. Let’s start the MCQs Hypothesis Testing quiz now.

Statistical inference is a branch of statistics in which we draw conclusions (make wise decisions) about a population parameter by making use of sample information. Statistical inference can be further divided into the estimation of parameters and the testing of hypotheses.

Estimation is a way of finding the unknown value of a population parameter from sample information by using an estimator (a statistical formula) to estimate the parameter. One can estimate a population parameter using two approaches: (i) point estimation and (ii) interval estimation.
In point estimation, a single numerical value is computed for each parameter, while in interval estimation a set of values (an interval) for the parameter is constructed. The width of the confidence interval depends on the sample size and the confidence coefficient; it can be decreased by increasing the sample size. The estimator is the formula used to estimate the population parameter by making use of sample information.
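As a small illustration of interval estimation (a sketch with hypothetical population values, assuming a known standard deviation so that a z-based interval applies), the width of a 95% confidence interval for a mean shrinks as the sample size grows:

```python
# Sketch: a z-based interval estimate for a population mean, showing the
# width shrink as the sample size n increases (confidence coefficient 95%).
import numpy as np

rng = np.random.default_rng(17)
mu, sigma, z = 50.0, 8.0, 1.96
for n in (25, 100, 400):
    x = rng.normal(mu, sigma, n)
    half = z * sigma / np.sqrt(n)  # half-width of the interval
    print(f"n={n:>4}: [{x.mean() - half:.2f}, {x.mean() + half:.2f}]  width={2 * half:.2f}")
```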


Muhammad Imdad Ullah

Currently working as Assistant Professor of Statistics at Ghazi University, Dera Ghazi Khan. Completed my Ph.D. in Statistics at the Department of Statistics, Bahauddin Zakariya University, Multan, Pakistan. I like applied statistics, mathematics, and statistical computing. Statistical and mathematical software I use includes SAS, STATA, Python, GRETL, EVIEWS, R, SPSS, and VBA in MS-Excel. I also like to use the LaTeX typesetting system for composing articles, theses, etc.
