Design of Experiments MCQs 6

The post is about Design of Experiments MCQs with Answers. There are 20 multiple-choice questions. The quiz covers the basics of the design of experiments, analysis of variance, the assumptions of ANOVA, and the principles of DOE. Let us start with the Design of Experiments MCQs.

Online Multiple-Choice Questions about Design of Experiments

1. Blocking reduces
2. What should be the final step of the design of an experiment?
3. Repeating the same experiment more than once is called
4. Name the test(s) of the equality of two population means.
5. Tests of population mean(s) include
6. To control variation from extraneous sources, we use
7. When fractionating a design, which resolution should be preferred?
8. Pure error is estimated through
9. Measuring a quantitative response will improve the power of your experiment compared with
10. Analysis of the experimental data is usually performed using
11. To check the reliability of results under the same environment, we use
12. What is the last step of hypothesis testing?
13. The arrangement of experimental units in groups that are homogeneous internally and heterogeneous externally is called
14. Which test is used for testing population mean(s) when the sample size is small?
15. A hypothesis is constructed about
16. What is the first step in hypothesis testing?
17. When the population variance is unknown but the sample size is large, for testing the population mean we use:
18. What is the first step of designing an experiment?
19. How can the power of a design be improved?
20. The sampling distribution of the sample mean approaches the normal distribution if the sample size is large enough.



Method of Least Squares

Introduction to Method of Least Squares

The method of least squares is a statistical technique used to find the best-fitting curve or line for a set of data points. It does this by minimizing the sum of the squares of the offsets (residuals) of the points from the curve.

The method of least squares is used for

  • solution of equations, and
  • curve fitting

The principles of least squares consist of minimizing the sum of squares of deviations, errors, or residuals.

Mathematical Functions/ Models

Many types of mathematical functions (or models) can be used to model the response, i.e., the response as a function of one or more independent variables. These models can be classified into two categories: deterministic and probabilistic. For example, suppose $Y$ and $X$ are related according to the relation

$$Y=\beta_o + \beta_1 X,$$

where $\beta_o$ and $\beta_1$ are unknown parameters, $Y$ is a response variable, and $X$ is an independent/auxiliary variable (regressor). The model above is called a deterministic model because it does not allow for any error in predicting $Y$ as a function of $X$.

Probabilistic and Deterministic Models

Suppose that we collect a sample of $n$ values of $Y$ corresponding to $n$ different settings of the independent variable $X$, and that the graph of the data is as shown below.

Method of Least Squares

In the figure above, it is clear that $E(Y)$ may increase as a function of $X$, but the deterministic model is far from an adequate description of reality.

If we repeated the experiment at, say, $X=20$, we would find that $Y$ fluctuates about some mean value because of random error; this leads us to a probabilistic model (that is, a model that is not an exact representation of the relation between the two variables). Further, if the model is used to predict $Y$ when $X=20$, the prediction would be subject to an unknown error. This, of course, leads us to statistical methods: predicting $Y$ for a given value of $X$ is an inferential process, and we need to assess the error of prediction if the prediction is to be of value in real life. In contrast to the deterministic model, the probabilistic model is

$$Y=\beta_o + \beta_1 X + \varepsilon,$$

where $\varepsilon$ is a random variable having a specified distribution with zero mean. One may think of this as the deterministic component $\beta_o + \beta_1 X$ plus a random error $\varepsilon$.

The probabilistic model accounts for the random behaviour of $Y$ exhibited in the figure and provides a more accurate description of reality than the deterministic model.

The properties of the error of prediction of $Y$ can be derived for many probabilistic models. If a deterministic model can be used to predict with negligible error for all practical purposes, we use it; if not, we seek a probabilistic model, which will not be a correct/exact characterization of nature but will enable us to assess the error of prediction.

Estimation of Linear Model: Least Squares Method

For the estimation of the parameters of a linear model, we consider fitting the line

$$E(Y) = \beta_o + \beta_1 X, \qquad \text{where } X \text{ is fixed}.$$

For a set of points ($x_i, y_i$), we consider the real situation

$$Y=\beta_o+\beta_1X+\varepsilon, \qquad \text{with } E(\varepsilon)=0,$$

where $\varepsilon$ possesses a specific probability distribution with zero mean, and $\beta_o$ and $\beta_1$ are unknown parameters.

Minimizing the Vertical Distances of Data Points

Now if $\hat{\beta}_o$ and $\hat{\beta}_1$ are the estimates of $\beta_o$ and $\beta_1$, respectively then $\hat{Y}=\hat{\beta}_o+\hat{\beta}_1X$ is an estimate of $E(Y)$.


Suppose we have a set of $n$ data points $(x_i, y_i)$ and we want to minimize the sum of squares of the vertical distances of the data points from the fitted line $\hat{y}_i = \hat{\beta}_o + \hat{\beta}_1x_i$, $i=1,2,\cdots, n$, where $\hat{y}_i$ is the predicted value of $Y$ when $X=x_i$. The deviation of an observed value of $Y$ from the fitted line (sometimes called an error) is the vertical distance $y_i - \hat{y}_i$, and the sum of squares of these deviations to be minimized is

\begin{align*}
SSE &= \sum\limits_{i=1}^n (y_i-\hat{y}_i)^2\\
&= \sum\limits_{i=1}^n (y_i - \hat{\beta}_o - \hat{\beta}_1x_i)^2
\end{align*}

The quantity SSE is called the sum of squares of errors. If SSE possesses a minimum, it will occur at the values of $\hat{\beta}_o$ and $\hat{\beta}_1$ that satisfy the equations $\frac{\partial SSE}{\partial \hat{\beta}_o}=0$ and $\frac{\partial SSE}{\partial \hat{\beta}_1}=0$.
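The minimization idea can be checked numerically. The Python sketch below (with a small made-up data set) computes SSE for two candidate lines; the line nearer the least-squares fit gives the smaller SSE.

```python
# Compare the sum of squared errors (SSE) of two candidate lines
# on a small, made-up data set (values are illustrative only).

def sse(x, y, b0, b1):
    """Sum of squared vertical deviations from the line y = b0 + b1*x."""
    return sum((yi - (b0 + b1 * xi)) ** 2 for xi, yi in zip(x, y))

x = [1, 2, 3, 4, 5]
y = [2.1, 3.9, 6.2, 7.8, 10.1]

# An arbitrary guess versus a line close to the least-squares fit
sse_guess = sse(x, y, b0=1.0, b1=1.5)
sse_fit = sse(x, y, b0=0.0, b1=2.0)

print(sse_guess, sse_fit)  # the second line has the smaller SSE
```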

Taking the partial derivatives of SSE with respect to $\hat{\beta}_o$ and $\hat{\beta}_1$ and setting them equal to zero gives us

\begin{align*}
\frac{\partial SSE}{\partial \hat{\beta}_o} &= \frac{\partial}{\partial \hat{\beta}_o}\sum\limits_{i=1}^n (y_i - \hat{\beta}_o - \hat{\beta}_1 x_i)^2\\
&= -2 \sum\limits_{i=1}^n (y_i - \hat{\beta}_o - \hat{\beta}_1 x_i) = 0\\
\Rightarrow & \sum\limits_{i=1}^n y_i - n\hat{\beta}_o - \hat{\beta}_1 \sum\limits_{i=1}^n x_i = 0\\
\Rightarrow \overline{y} &= \hat{\beta}_o + \hat{\beta}_1\overline{x} \tag*{eq (1)}
\end{align*}

and

\begin{align*}
\frac{\partial SSE}{\partial \hat{\beta}_1} &= -2 \sum\limits_{i=1}^n (y_i - \hat{\beta}_o - \hat{\beta}_1 x_i)x_i = 0\\
\Rightarrow & \sum\limits_{i=1}^n (y_i - \hat{\beta}_o - \hat{\beta}_1 x_i)x_i = 0\\
\Rightarrow \sum\limits_{i=1}^n x_iy_i &= \hat{\beta}_o \sum\limits_{i=1}^n x_i + \hat{\beta}_1 \sum\limits_{i=1}^n x_i^2\tag*{eq (2)}
\end{align*}

The equations $\frac{\partial SSE}{\partial \hat{\beta}_o}=0$ and $\frac{\partial SSE}{\partial \hat{\beta}_1}=0$ are called the least squares (normal) equations for estimating the parameters of a straight line. Solving the least squares equations, we have from equation (1),

$$\hat{\beta}_o = \overline{Y} - \hat{\beta}_1 \overline{X}$$

Putting $\hat{\beta}_o$ in equation (2)

\begin{align*}
\sum\limits_{i=1}^n x_i y_i &= (\overline{Y} - \hat{\beta}_1\overline{X}) \sum\limits_{i=1}^n x_i + \hat{\beta}_1 \sum\limits_{i=1}^n x_i^2\\
&= n\overline{X}\,\overline{Y} - n \hat{\beta}_1 \overline{X}^2 + \hat{\beta}_1 \sum\limits_{i=1}^n x_i^2\\
&= n\overline{X}\,\overline{Y} + \hat{\beta}_1 \left(\sum\limits_{i=1}^n x_i^2 - n\overline{X}^2\right)\\
\Rightarrow \hat{\beta}_1 &= \frac{\sum\limits_{i=1}^n x_iy_i - n\overline{X}\,\overline{Y} }{\sum\limits_{i=1}^n x_i^2 - n\overline{X}^2} = \frac{\sum\limits_{i=1}^n (x_i-\overline{X})(y_i-\overline{Y})}{\sum\limits_{i=1}^n(x_i-\overline{X})^2}
\end{align*}
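The closed-form estimates above translate directly into code. The following Python sketch (with a small illustrative data set) computes $\hat{\beta}_1$ and then $\hat{\beta}_o$ from the formulas just derived.

```python
# Least-squares estimates of the intercept and slope, computed
# directly from the closed-form formulas derived above.

def least_squares_fit(x, y):
    n = len(x)
    x_bar = sum(x) / n
    y_bar = sum(y) / n
    # beta1_hat = sum((x_i - x_bar)(y_i - y_bar)) / sum((x_i - x_bar)^2)
    sxy = sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y))
    sxx = sum((xi - x_bar) ** 2 for xi in x)
    b1 = sxy / sxx
    # beta0_hat = y_bar - beta1_hat * x_bar
    b0 = y_bar - b1 * x_bar
    return b0, b1

x = [1, 2, 3, 4, 5]
y = [3, 5, 7, 9, 11]   # data lie exactly on y = 1 + 2x
b0, b1 = least_squares_fit(x, y)
print(b0, b1)  # 1.0 2.0 -- the fit recovers the line
```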

Applications of Least Squares Method

The method of least squares is a powerful statistical technique. It provides a systematic way to find the best-fitting curve or line for a set of data points. It enables us to model relationships between variables, make predictions, and gain insights from data. The method of least squares is widely used in various fields, such as:

  • Regression Analysis: To model the relationship between variables and make predictions.
  • Curve Fitting: To find the best-fitting curve for a set of data points.
  • Data Analysis: To analyze trends and patterns in data.
  • Machine Learning: As a foundation for many machine learning algorithms.
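As an illustration of the curve-fitting application, the sketch below fits a quadratic $y = a + bx + cx^2$ by least squares. It uses a small pure-Python Gaussian-elimination solver for the normal equations rather than any particular library; the data values are made up so that the points lie exactly on a known curve.

```python
# Least-squares curve fitting: fit a quadratic y = a + b*x + c*x^2
# by solving the 3x3 normal equations with Gaussian elimination.

def solve(A, v):
    """Solve the linear system A @ coeffs = v by Gaussian elimination."""
    n = len(A)
    M = [row[:] + [v[i]] for i, row in enumerate(A)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    coeffs = [0.0] * n
    for r in range(n - 1, -1, -1):
        coeffs[r] = (M[r][n] - sum(M[r][c] * coeffs[c]
                                   for c in range(r + 1, n))) / M[r][r]
    return coeffs

def quad_fit(x, y):
    # Normal equations: for each basis function phi_j in (1, x, x^2),
    # sum_i phi_j(x_i)*(a + b*x_i + c*x_i^2) = sum_i phi_j(x_i)*y_i
    basis = [lambda t: 1.0, lambda t: t, lambda t: t * t]
    A = [[sum(p(xi) * q(xi) for xi in x) for q in basis] for p in basis]
    v = [sum(p(xi) * yi for xi, yi in zip(x, y)) for p in basis]
    return solve(A, v)

x = [0, 1, 2, 3, 4]
y = [1, 3, 9, 19, 33]          # exactly y = 1 + 2*x^2
a, b, c = quad_fit(x, y)
print(a, b, c)                 # approximately a = 1, b = 0, c = 2
```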

Frequently Asked Questions about Least Squares Method

  • What is the method of Least Squares?
  • Write down the applications of the Least Squares method.
  • How is the vertical distance of the data points from the regression line minimized?
  • What is the principle of the Method of Least Squares?
  • What is meant by probabilistic and deterministic models?
  • Give an example of deterministic and probabilistic models.
  • What is the mathematical model?
  • What is the statistical model?
  • What is curve fitting?
  • State and prove the Least Squares Method?


MCQs Basic Statistics Quiz 19

This Statistics Test is about the MCQs Basic Statistics Quiz with Answers. There are 20 multiple-choice questions covering the basics of statistics, measures of central tendency, measures of dispersion, measures of position, and the distribution of data. Let us start with the MCQs Basic Statistics Quiz with Answers.


Online MCQs Basic Statistics Quiz

  • If any value in the data is negative, it is not possible to calculate
  • Mode of the values 2, 6, 8, 6, 12, 15, 18, and 8 is
  • Mode of the values 3, 5, 8, 10, and 12 is
  • The first step in computing the median is
  • If $x=3$ then which of the following is the minimum
  • The dispersion expressed in the form of a ratio or coefficient and independent from units of measurement is called
  • The half of the difference between the third and first quartiles is called
  • The difference between the largest and smallest value in the data is called
  • The most important measure of dispersion is
  • Which of the following is a relative measure of dispersion
  • Which of the following is an absolute measure of dispersion
  • If all observations in the data are multiplied by 6, the mean is multiplied by
  • Which of the properties of the Average Deviation rests on a mathematically wrong assumption?
  • What would be the changes in the standard deviation if different values are increased by a constant?
  • Two sets of distribution are as follows. For both of the sets, the Range is the same. Which of the demerits of Range is shown here in these sets of distribution? Distribution 1: 30 14 18 25 12 Distribution 2: 30 7 19 27 12
  • For a distribution, if the value of the mean is 20 and the mode is 14, then what is the value of the median?
  • Who used the term Statistics for the first time?
  • The median is larger than the arithmetic mean when
  • Fill in the missing words to the quote: “Statistical methods may be described as methods for drawing conclusions about —————- based on ————– computed from the —————“.
  • In general, which of the following statements is FALSE?
MCQs Basic Statistics Quiz with Answers


Inferential Statistics Terminology

This post is about Inferential Statistics (or statistical inference) and some of its related terminologies. This is a field of statistics that allows us to understand and make predictions about the world around us.

Parameter and Statistic

Any measurable characteristic of a population is called a parameter. For example, the mean of a population is a parameter. OR

Numerical values that describe the characteristics of a whole population are called parameters; they are commonly represented by Greek letters.

Any measurable characteristic of a sample is called a statistic. For example, the mean of a sample is a statistic. OR

Numerical measures describing the characteristics of a sample are called statistics; they are represented by Roman letters.

Population and Sample

Population: The entire group of individuals, objects, or data points that one is interested in studying. A population under study can be finite or infinite; however, it is often too large or impractical to study directly.

Sample: A smaller, representative subset of the population. It is used to gain insights about the population without having to study every member. A sample should accurately reflect the characteristics of the population.  

Inference

The process of drawing conclusions about a population based on the information contained in a sample taken from that population.

Estimator

An estimator is a rule (method, formula) that tells how to calculate the value of an estimate based on the measurements contained in a sample. The sample mean is one possible estimator of the population mean $\mu$.

An estimator is a good estimator in the sense that the distribution of its estimates is concentrated near the value of the parameter.

Estimate

An estimate is the numerical value an estimator yields from a particular sample. There are many ways to estimate a parameter, and an estimate may be close to or far from reality (it may be biased or crude). Decisions based on an estimate are accurate only if the estimate is close to reality.

$X_1, X_2, \cdots, X_n$ is a sample and $\overline{X}$ is an estimator, while $x_1, x_2, \cdots, x_n$ are the sample observations and $\overline{x}=\frac{\Sigma x_i}{n}$ is an estimate.
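The distinction can be made concrete in code: the estimator is the rule (a function), while the estimate is the number that rule returns for one particular observed sample (the sample values below are illustrative).

```python
# Estimator vs estimate: the estimator is the rule (a function);
# the estimate is the value the rule yields for one observed sample.

def sample_mean(observations):
    """The estimator: a rule for estimating the population mean."""
    return sum(observations) / len(observations)

# One observed sample (illustrative values)
sample = [12, 15, 11, 14, 13]

estimate = sample_mean(sample)  # the estimate for this particular sample
print(estimate)  # 13.0
```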

Estimation

Estimation is the process of finding an estimate or approximation, which is a value that is usable for some purpose even if input data may be incomplete, uncertain, or unstable.

Statistical Inference (or Inferential Statistics)

Any process (art) of drawing inferences (conclusions) about the population based on the limited information contained in a sample taken from that population is called statistical inference (or inferential statistics). It is difficult to draw an inference about a population because studying the entire universe (population) is not simple. To get some idea about the characteristics (parameters) of the population, we choose a part of it of reasonable size, generally referred to as a sample, by some appropriate method.

Statistical inference is a powerful set of tools for drawing conclusions about a population based on data collected from a sample of that population. It allows us to make informed decisions and predictions about the larger group even when we have not examined every single member.
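As a small illustration of drawing an inference about a population mean from a sample, the sketch below builds an approximate 95% confidence interval using the large-sample normal critical value 1.96 (the sample values are made up for illustration).

```python
import math
import statistics

# Approximate 95% confidence interval for a population mean,
# using the large-sample normal critical value z = 1.96.

sample = [52, 48, 55, 50, 47, 53, 49, 51, 54, 46,
          50, 52, 48, 51, 49, 53, 47, 50, 52, 48]

n = len(sample)
mean = statistics.mean(sample)
se = statistics.stdev(sample) / math.sqrt(n)  # standard error of the mean

lower, upper = mean - 1.96 * se, mean + 1.96 * se
print(f"95% CI for the population mean: ({lower:.2f}, {upper:.2f})")
```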

Why Estimate?

  • Speed: Often, an estimate is faster to get than an exact calculation.
  • Simplicity: It can simplify complex problems.
  • Decision-Making: Estimates help one to make choices when one does not have all the details.
  • Checking: One can use estimates to check if a more precise answer is reasonable.

Why is Statistical Inference Important?

  • Decision-making: It helps us make informed decisions in various fields, such as medicine, business, and social sciences.
  • Research: It is crucial for conducting research and drawing meaningful conclusions from data.
  • Understanding the World: It allows us to understand and make predictions about the world around us.
Inferential Statistics or Statistical Inference
