This post is about MCQs Statistics with Answers. There are 20 multiple-choice questions covering topics related to the basics of statistics, levels of measurement, variables, data, data collection methods, measures of central tendency and dispersion, numerical quantities, samples, populations, etc. Let us start with the MCQs Statistics with Answers.
MCQs Statistics with Answers
In inferential statistics, we study
In descriptive statistics, we study
When data are collected in a statistical study for only a portion or subset of all elements of interest we are using:
In statistics, a sample means
The height of a student is 60 inches. This is an example of
Data in the Population Census Report is:
Which of the following is not based on all the observations?
A statistic is a numerical quantity that is calculated from:
Which one of the following measurements does not divide a set of observations into equal parts?
Which branch of statistics deals with the techniques that are used to organize, summarize, and present the data:
You asked five of your classmates about their height. Based on this information, you stated that the average height of all students in your university or college is 67 inches. This is an example of:
In statistics, conducting a survey means
The algebraic sum of deviations from the mean is:
In statistics, a population consists of:
Which one is not a measure of dispersion?
If there is no gap between the consecutive classes, the limits are called
If all the items in a variable are non-zero and non-negative then ————-.
The mean is affected by the change of ————.
The data which have NOT undergone any statistical treatments are:
If any value in a series is zero then we cannot compute the ————-.
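Two of the questions above have answers that are easy to verify numerically: the algebraic sum of deviations from the arithmetic mean, and what happens to the geometric mean when a value is zero. A minimal sketch (the data values are made up for illustration):

```python
import math

data = [4, 8, 15, 16, 23]  # hypothetical observations
mean = sum(data) / len(data)

# The algebraic sum of deviations from the arithmetic mean is always zero.
deviations = [x - mean for x in data]
print(abs(sum(deviations)) < 1e-9)  # True (zero up to floating-point rounding)

# The geometric mean cannot be computed if any value is zero:
# its log-based formula breaks down because log(0) is undefined.
with_zero = data + [0]
try:
    gm = math.exp(sum(math.log(x) for x in with_zero) / len(with_zero))
except ValueError:
    print("geometric mean undefined when a value is zero")
```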
The post contains MCQs Probability Questions with Answers. There are 20 multiple-choice questions covering topics related to the statistical experiment, basics of probability, sample space, addition rule of probability, multiplication rule of probability, and conditional probability. Let us start with MCQs Probability Questions.
The probability of an intersection of two events is computed by using the
If two events $A$ and $B$ are mutually exclusive events, then
The range of probability is
In a statistical experiment, each time the experiment is repeated
The set of all possible outcomes (sample points) is called
The sample space (experimental outcomes) refers to
An experiment consists of tossing 4 coins successively. The number of sample points in this experiment is
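The sample space for the coin-toss question above can be enumerated directly; each of the 4 tosses has 2 outcomes, giving $2^4$ sample points. A minimal sketch:

```python
from itertools import product

# Each sample point is one ordered sequence of heads/tails over 4 tosses.
sample_space = list(product("HT", repeat=4))
print(len(sample_space))  # 16 sample points (2**4)
```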
On a December day, the probability of snow is 0.30. The probability of a “cold” day is 0.50. The probability of snow and “cold” weather is 0.15. Are snow and “cold” weather independent events?
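The independence check for the snow/cold question above is mechanical: events $A$ and $B$ are independent exactly when $P(A \cap B) = P(A)P(B)$. A sketch using the probabilities from the question:

```python
# Independence test: P(A and B) must equal P(A) * P(B).
p_snow, p_cold, p_both = 0.30, 0.50, 0.15

independent = abs(p_both - p_snow * p_cold) < 1e-12
print(independent)  # True, since 0.30 * 0.50 = 0.15
```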
If $P(A)=0.5$ and $P(B)=0.5$, then $P(A \cap B)$ is
If $A$ and $B$ are independent events with $P(A)=0.6$ and $P(B)=0.6$, then $P(A \cap B)=$?
If events $A$ and $B$ are independent events with $P(A)=0.2$ and $P(B)=0.6$, then $P(A \cup B)=$?
If $A$ and $B$ are independent events with $P(A)=0.4$ and $P(B)=0.25$, then $P(A \cup B)=$?
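The union questions above combine the addition rule $P(A \cup B) = P(A) + P(B) - P(A \cap B)$ with the multiplication rule $P(A \cap B) = P(A)P(B)$ for independent events. A sketch using the probabilities from the two preceding questions:

```python
def union_independent(p_a, p_b):
    # Addition rule, with P(A and B) = P(A) * P(B) by independence.
    return p_a + p_b - p_a * p_b

print(round(union_independent(0.2, 0.6), 2))   # 0.68
print(round(union_independent(0.4, 0.25), 2))  # 0.55
```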
Events $A$ and $B$ are mutually exclusive. Which of the following statements is true?
If events $A$ and $B$ are independent events with $P(A)=0.05$ and $P(B)=0.65$, then $P(A|B)=$?
A six-sided die is tossed three times. The probability of observing three ones in a row is
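For the die question above, successive tosses are independent, so the multiplication rule gives $P(\text{three ones}) = (1/6)^3$. A sketch using exact fractions to avoid rounding:

```python
from fractions import Fraction

# Independent tosses of a fair die: multiply the per-toss probabilities.
p = Fraction(1, 6) ** 3
print(p)  # 1/216
```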
If $P(A|B)=0.3$
If events $A$ and $B$ are independent events with $P(A)=0.1$ and $P(B)=0.4$, then
If $P(A|B)=0.3$ and $P(B)=0.8$, then
If $P(A)=0.6$, $P(B)=0.3$, and $P(A \cap B)=0.2$, then $P(B|A)=$?
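The conditional-probability questions above all reduce to the definition $P(B|A) = P(A \cap B)/P(A)$. A sketch using the values from the last question:

```python
def conditional(p_a_and_b, p_a):
    # Definition of conditional probability: P(B|A) = P(A and B) / P(A).
    return p_a_and_b / p_a

print(round(conditional(0.2, 0.6), 4))  # 0.3333
</n>```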
Online Estimation MCQs for Preparation of the PPSC and FPSC Statistics Lecturer Post. There are 20 multiple-choice questions covering topics related to the properties of a good estimator (unbiasedness, efficiency, sufficiency, consistency, and invariance), expectation, point estimates, and interval estimates. Let us start with the Online Estimation MCQs Quiz.
If $Var(\hat{\theta})\rightarrow 0$ as $n \rightarrow \infty$, then $\hat{\theta}$ is said to be
If $E(\hat{\theta})=\theta$, then $\hat{\theta}$ is said to be
If $Var(T_2) < Var(T_1)$, then $T_2$ is
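The efficiency comparison in the question above can be illustrated by simulation: of two unbiased estimators, the one with the smaller sampling variance is the more efficient. A sketch comparing the sample mean and sample median as estimators of a normal population mean (sample size, replication count, and seed are arbitrary choices):

```python
import random
import statistics

random.seed(1)
n, reps = 50, 2000
means, medians = [], []
for _ in range(reps):
    sample = [random.gauss(0, 1) for _ in range(n)]
    means.append(statistics.mean(sample))
    medians.append(statistics.median(sample))

# Smaller sampling variance means a more efficient estimator.
var_mean = statistics.pvariance(means)
var_median = statistics.pvariance(medians)
print(var_mean < var_median)  # True: the sample mean is more efficient here
```

For a normal population the median's sampling variance is roughly $\pi/2$ times that of the mean, which the simulation reproduces.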
If $f(x_1,x_2,\cdots,x_n;\theta)=g(\hat{\theta};\theta)h(x_1,x_2,\cdots,x_n)$, then $\hat{\theta}$ is
Which of the following assumptions are required to show the consistency, unbiasedness, and efficiency of the OLS estimator?
i. $E(\mu_t)=0$
ii. $Var(\mu_t)=\sigma^2$
iii. $Cov(\mu_t,\mu_{t-j})=0; \; t\neq t-j$
iv. $\mu_t \sim N(0,\sigma^2)$
For a biased estimator $\hat{\theta}$ of $\theta$, which one is correct
A test is said to be the most powerful test of size $\alpha$, if
In statistical inference, the best asymptotically normal estimator is denoted by
If the conditional distribution of $X_1, X_2,\cdots,X_n$ given $S=s$ does not depend on $\theta$ for any value $s$ of $S$, the statistic $S=s(X_1,X_2,\cdots,X_n)$ is called
A set of jointly sufficient statistics is defined to be minimal sufficient if and only if
If $f(X_1, X_2,\cdots, X_n;\theta)$ is the joint density of $n$ random variables $X_1,X_2,\cdots, X_n$, considered as a function of $\theta$, then $L(\theta; X_1,X_2,\cdots, X_n)$ is called
For two estimators $T_1=t_1(X_1,X_2,\cdots,X_n)$ and $T_2=t_2(X_1,X_2,\cdots,X_n)$, an estimator $t_1$ satisfying $R_{t_1}(\theta)\leq R_{t_2}(\theta)$ for all $\theta$ in $\Theta$ is defined to be
Let $X_1,X_2,\cdots,X_n$ be a random sample from the density $f(x;\theta)$, where $\theta$ may be a vector. If the conditional distribution of $X_1,X_2,\cdots,X_n$ given $S=s$ does not depend on $\theta$ for any value $s$ of $S$, then the statistic $S$ is called
$Var_\theta (T) \geq \frac{[\tau'(\theta)]^2}{nE\left[\left(\frac{\partial}{\partial \theta}\log f(X;\theta)\right)^2\right]}$, where $T=t(X_1,X_2,\cdots, X_n)$ is an unbiased estimator of $\tau(\theta)$. The above inequality is called
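The bound in the question above can be checked by simulation for a concrete model. Assuming a normal model with known variance (an illustrative choice, not part of the question), the bound for an unbiased estimator of the mean $\mu$ is $\sigma^2/n$, and the sample mean attains it:

```python
import random
import statistics

random.seed(2)
mu, sigma, n, reps = 0.0, 2.0, 40, 4000

# For X ~ N(mu, sigma^2) with sigma known, the Fisher information per
# observation is 1/sigma^2, so the lower bound on Var(T) is sigma^2 / n.
bound = sigma**2 / n

# Empirical sampling variance of the sample mean over many replications.
xbars = [statistics.mean(random.gauss(mu, sigma) for _ in range(n))
         for _ in range(reps)]
print(abs(statistics.pvariance(xbars) - bound) < 0.02)  # True
```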
Let $X_1,X_2,\cdots,X_n$ be a random sample from a density $f(x|\theta)$, where $\theta$ is a value of the random variable $\Theta$ with known density $g_\Theta(\theta)$. Then the estimator of $\tau(\theta)$ with respect to the prior $g_\Theta(\theta)$, defined as $E[\tau(\Theta)|X_1,X_2,\cdots,X_n]$, is called
Let $L(\theta;X_1,X_2,\cdots,X_n)$ be the likelihood function for a sample $X_1,X_2,\cdots, X_n$ having joint density $f(x_1,x_2,\cdots,x_n;\theta)$, where $\theta$ belongs to the parameter space $\Theta$. Then a test defined as $\lambda=\lambda_n=\lambda(x_1,x_2,\cdots,x_n)=\frac{\sup_{\theta\in \Theta_0}L(\theta;x_1,x_2,\cdots,x_n)}{\sup_{\theta\in \Theta}L(\theta;x_1,x_2,\cdots,x_n)}$ is called
Let $Z_1,Z_2,\cdots,Z_N$ be independently and identically distributed random variables satisfying $E[|Z_t|]<\infty$. Let $N$ be an integer-valued random variable whose value $n$ depends only on the values of the first $n$ $Z_i$'s. If $E(N)<\infty$, then $E(Z_1+Z_2+\cdots+Z_N)=E(N)E(Z_i)$ is called
What is the maximum expected difference between a population parameter and a sample estimate?
Which of the following statements describes an interval estimate?
What are the main components of a confidence interval?