Data Preparation Quiz 24

Test your knowledge with this Data Preparation Quiz featuring 20 MCQs designed for researchers, statisticians, data analysts, and data scientists. Covering key concepts like data understanding, feature engineering, outlier detection, and best practices in data preparation, this Data Preparation Quiz helps assess your expertise in preparing datasets for analysis. Whether you are refining data for machine learning or ensuring accuracy in statistical models, these questions will challenge your understanding of this critical phase in the data science methodology. Perfect for professionals and students looking to sharpen their data preparation skills!

Online Data Preparation Quiz with Answers

Online MCQs about Data Preparation and Data Understanding with Answers

1. A branch of statistics in which data is collected according to an ordinal scale or a nominal scale is classified as

 
 
 
 

2. Which of the following has the same units as the units of the original data?

 
 
 
 

3. What is the definition of data preparation?

 
 
 
 

4. What is the primary role of the data understanding phase in the data science methodology?

 
 
 
 

5. In the case study, while working through the Data Preparation stage, data scientists learned that the initial definition did not capture all expected congestive heart failure admissions.

 
 

6. Advertising expenditure is an example of

 
 
 
 

7. When data is arranged, the middle value in a set of observations is classified as

 
 
 
 

8. Reports published by the International Labor Organization and the International Monetary Fund are considered

 
 
 
 

9. Data measurement, which arises from a specific measuring process, is classified as

 
 
 
 

10. Select the correct statement about the Data Preparation stage of the data science methodology.

 
 
 
 

11. The Data Preparation stage is the least time-consuming phase of a data science project, typically taking between 5 and 10 percent of the overall project time.

 
 

12. Types of structured questions do not include

 
 
 
 

13. What is the purpose of feature engineering during the Data Preparation stage?

 
 
 
 

14. If the range and maximum value of a dataset are 30 and 20, respectively, then the minimum value of the dataset is

 
 
 
 

15. In the case study, during the Data Understanding stage, data scientists discovered that not all the expected congestive heart failure admissions were being captured. What action did they take to resolve the issue?

 
 
 
 

16. Which method is used to detect outliers by calculating the range between the 25th and 75th percentiles?

 
 
 
 

17. How does automating data collection and preparation processes affect the overall project time?

 
 
 
 

18. How does the Data Preparation stage affect the next steps in a data science project?

 
 
 
 

19. What is the best practice for handling extreme outliers in a dataset when analyzing average compensation?

 
 
 
 

20. Why is the Data Preparation stage considered time-consuming for a data science project?

 
 
 
 


Chebyshev’s Theorem

Chebyshev’s Theorem (also known as Chebyshev’s Inequality) is a statistical rule that applies to any dataset, regardless of the shape of its distribution (not just normal distributions). It provides a way to estimate the minimum proportion of data points that fall within a certain number of standard deviations from the mean.

Chebyshev’s Theorem Statement

For any dataset (with mean $\mu$ and standard deviation $\sigma$), at least $1-\frac{1}{k^2}$ of the data values will fall within $k$ standard deviations from the mean, where $k>1$. It can be defined in probability form as

$$P\left[|X-\mu| < k\sigma \right] \ge 1 - \frac{1}{k^2}$$

  • At least 75% of data lies within 2 standard deviations of the mean (since $1-\frac{1}{2^2}=0.75$).
  • At least 89% of data lies within 3 standard deviations of the mean (since $1-\frac{1}{3^2}\approx 0.89$).
  • At least 96% of data lies within 5 standard deviations of the mean (since $1-\frac{1}{5^2}=0.96$).
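
These percentages are easy to verify programmatically. Below is a minimal Python sketch (the helper name `chebyshev_bound` is my own, purely for illustration) that computes the lower bound $1-\frac{1}{k^2}$ for the values of $k$ listed above:

```python
def chebyshev_bound(k: float) -> float:
    """Chebyshev lower bound on the proportion of values within k standard deviations (k > 1)."""
    if k <= 1:
        raise ValueError("Chebyshev's Theorem is informative only for k > 1")
    return 1 - 1 / k**2

for k in (2, 3, 5):
    print(f"k = {k}: at least {chebyshev_bound(k):.2%} of the data lies within {k} standard deviations")
# k = 2: at least 75.00% of the data lies within 2 standard deviations
# k = 3: at least 88.89% of the data lies within 3 standard deviations
# k = 5: at least 96.00% of the data lies within 5 standard deviations
```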

Key Points about Chebyshev’s Theorem

  • Works for any distribution (normal, skewed, uniform, etc.).
  • Provides a conservative lower bound (actual proportions may be higher).
  • Useful when the data distribution is unknown.

Unlike the Empirical Rule (which applies only to bell-shaped distributions), Chebyshev’s Theorem is universal—great for skewed or unknown distributions.

Note: Chebyshev’s Theorem gives only lower bounds for the proportion of data values, whereas the Empirical Rule gives approximations. If a data distribution is known to be bell-shaped, the Empirical Rule should be used.

Real-Life Applications of Chebyshev’s Theorem

  • Quality Control & Manufacturing: Manufacturers use Chebyshev’s Theorem to determine the minimum percentage of products that fall within acceptable tolerance limits. For example, if a factory produces bolts with a mean length of 5cm and a standard deviation of 0.1cm, Chebyshev’s Theorem guarantees that at least 75% of bolts will be between 4.8 cm and 5.2 cm (within 2 standard deviations).
  • Finance & Risk Management: Investors use Chebyshev’s Theorem to assess the risk of stock returns. For example, if a stock has an average return of 8% with a standard deviation of 2%, Chebyshev’s Theorem ensures that at least 89% of returns will be between 2% and 14% (within 3 standard deviations).
  • Weather Forecasting: Meteorologists use Chebyshev’s Theorem to predict temperature variations. For example, if the average summer temperature in a city is 30${}^\circ$C with a standard deviation of 3${}^\circ$C, at least 75% of days will have temperatures between 24${}^\circ$C and 36${}^\circ$C (within 2 standard deviations).
  • Education & Grading Systems: Teachers can use Chebyshev’s Theorem to estimate grade distributions when the exact distribution of test scores is unknown. For example, if an exam has a mean score of 70 with a standard deviation of 10, at least 75% of students scored between 50 and 90 (within 2 standard deviations). Chebyshev’s Theorem can therefore help assess performance ranges.
  • Healthcare & Medical Studies: Medical researchers use Chebyshev’s Theorem to analyze biological data (e.g., blood pressure, cholesterol levels). For example, if the average blood pressure is 120 mmHg with a standard deviation of 10, at least 75% of patients have blood pressure between 100 and 140 mmHg (within 2 standard deviations).
  • Insurance & Actuarial Science: Insurance companies use Chebyshev’s Theorem to estimate claim payouts. For example, if the average claim is 5,000 with a standard deviation of 1,000, at least 89% of claims will be between 2,000 and 8,000 (within 3 standard deviations).
  • Environmental Studies: When tracking irregular phenomena like daily pollution levels, Chebyshev’s inequality helps understand the concentration of values – even when the data is erratic.

Numerical Example of Chebyshev’s Theorem

Consider the daily delivery times (in minutes) for a courier.
Data: 30, 32, 35, 36, 37, 39, 40, 41, 43, 50

Calculate the mean and standard deviation:

  • Mean $\mu$ = 38.3
  • Standard Deviation $\sigma \approx 5.74$ (sample standard deviation)

Let $k=2$ (we want to know how many values will lie within 2 standard deviations of the mean):
\begin{align}
\mu - 2\sigma &= 38.3 - (2\times 5.74) \approx 26.8\\
\mu + 2\sigma &= 38.3 + (2\times 5.74) \approx 49.8
\end{align}

So, the interval from roughly 26.8 to 49.8 should contain at least 75% of the data, according to Chebyshev’s inequality.
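
For readers who want to check this arithmetic, the following short Python sketch (standard library only; the variable names are my own) recomputes the mean, the sample standard deviation, and the $\pm 2\sigma$ interval, and counts how many of the delivery times actually fall inside it:

```python
import statistics

delivery_times = [30, 32, 35, 36, 37, 39, 40, 41, 43, 50]

mu = statistics.mean(delivery_times)       # 38.3
sigma = statistics.stdev(delivery_times)   # sample standard deviation, about 5.74

k = 2
lower, upper = mu - k * sigma, mu + k * sigma
inside = sum(lower <= t <= upper for t in delivery_times)

print(f"mean = {mu:.2f}, sd = {sigma:.2f}")
print(f"±{k} sd interval: ({lower:.2f}, {upper:.2f})")
print(f"{inside} of {len(delivery_times)} values ({inside / len(delivery_times):.0%}) fall inside;")
print(f"Chebyshev guarantees at least {1 - 1 / k**2:.0%}")
```

Nine of the ten delivery times (90%) fall inside the band, comfortably above the 75% minimum guaranteed by the theorem.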

The figure below illustrates Chebyshev’s Theorem for this dataset: the data points, the mean, and shaded bands for $\pm 1\sigma$, $\pm 2\sigma$, and $\pm 3\sigma$.

From the visual representation of Chebyshev’s Theorem, one can see how most of the data points cluster around the mean value and how the $\pm 2\sigma$ range captures 90% of the data.

Summary

Chebyshev’s Inequality/Theorem is a powerful tool in statistics because it applies to any dataset, making it useful in fields like finance, manufacturing, healthcare, and more. While it doesn’t give exact probabilities like the normal distribution, it provides a worst-case scenario guarantee, which is valuable for risk assessment and decision-making.

FAQs about Chebyshev’s Theorem

  • What is Chebyshev’s Inequality/Theorem?
  • What is the range of values of Chebyshev’s Inequality?
  • Give some real-life applications of Chebyshev’s Theorem.
  • What is the Chebyshev Theorem Formula?


Empirical Rule

The Empirical Rule (also known as the 68-95-99.7 Rule) is a statistical principle that applies to normally distributed data (bell-shaped curves). The Empirical Rule tells us how data is spread around the mean in such distributions.

The Empirical Rule states that:

  • About 68% of the data falls within ±1 standard deviation ($\sigma$) of the mean ($\mu$). Range: $\mu-1\sigma$ to $\mu+1\sigma$.
  • About 95% of the data falls within ±2 standard deviations ($2\sigma$) of the mean ($\mu$). Range: $\mu-2\sigma$ to $\mu+2\sigma$.
  • About 99.7% of the data falls within ±3 standard deviations ($3\sigma$) of the mean ($\mu$). Range: $\mu-3\sigma$ to $\mu+3\sigma$.

Visual Representation of Empirical Rule

The empirical rule can be visualized from the following graphical representation:

Visual Representation of Empirical Rule

Key Points

  • Empirical Rule only applies to normal (symmetric, bell-shaped) distributions.
  • It helps estimate probabilities and identify outliers.
  • About 0.3% of data lies beyond ±3σ (considered rare events).

Numerical Example of Empirical Rule

Suppose adult human heights are normally distributed with Mean ($\mu$) = 70 inches and standard deviation ($\sigma$) = 3 inches. Then:

  • 68% of heights are between 67–73 inches ($\mu \pm \sigma \Rightarrow 70 \pm 3$ ).
  • 95% are between 64–76 inches ($\mu \pm 2\sigma\Rightarrow 70 \pm 2\times 3$).
  • 99.7% are between 61–79 inches ($\mu \pm 3\sigma \Rightarrow 70 \pm 3\times 3$).

This rule is a quick way to understand variability in normally distributed data without complex calculations. For non-normal distributions, other methods (like Chebyshev’s inequality) may be used.
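
As a quick check, the Python sketch below (assuming SciPy is available for the exact normal probabilities) recomputes these height ranges and contrasts the exact coverage under a normal distribution with the distribution-free Chebyshev lower bound discussed earlier:

```python
from scipy.stats import norm

mu, sigma = 70, 3  # adult height example: mean 70 inches, sd 3 inches

for k in (1, 2, 3):
    lower, upper = mu - k * sigma, mu + k * sigma
    normal_coverage = norm.cdf(upper, loc=mu, scale=sigma) - norm.cdf(lower, loc=mu, scale=sigma)
    chebyshev = max(0.0, 1 - 1 / k**2)  # Chebyshev is uninformative for k = 1
    print(f"±{k} sd ({lower}-{upper} in): normal ≈ {normal_coverage:.1%}, Chebyshev ≥ {chebyshev:.0%}")
# ±1 sd (67-73 in): normal ≈ 68.3%, Chebyshev ≥ 0%
# ±2 sd (64-76 in): normal ≈ 95.4%, Chebyshev ≥ 75%
# ±3 sd (61-79 in): normal ≈ 99.7%, Chebyshev ≥ 89%
```

The exact normal coverage (68.3%, 95.4%, 99.7%) is much tighter than Chebyshev’s worst-case guarantee, which is why the Empirical Rule is preferred whenever the data are known to be bell-shaped.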

Real-Life Applications & Examples

  • Quality Control in Manufacturing: Manufacturers measure product dimensions (e.g., bottle fill volume, screw lengths). If the process is normally distributed, the Empirical Rule helps detect defects. For example, if soda bottles have a mean volume of 500ml with $\sigma$ = 10ml:
    • 68% of bottles will be between 490ml–510ml.
    • 95% will be between 480ml–520ml.
    • Bottles outside 470ml–530ml (3$\sigma$) are rare and may indicate a production issue (a short sketch for flagging such values follows this list).
  • Human Height Distribution: Heights of people in a population often follow a normal distribution. If the average male height is 70 inches (5’10”) with $\sigma$ = 3 inches:
    • 68% of men are between 67–73 inches.
    • 95% are between 64–76 inches.
    • 99.7% are between 61–79 inches.
  • Test Scores (Standardized Exams): Exam scores (SAT, IQ tests) are often normally distributed. If SAT scores have $\mu$ = 1000 and $\sigma$ = 200:
    • 68% of students score between 800–1200.
    • 95% score between 600–1400.
    • Extremely low (<400) or high (>1600) scores are rare.
  • Financial Market Analysis (Stock Returns): Daily stock returns often follow a normal distribution. If a stock has an average daily return of 0.1% with σ = 2%:
    • 68% of days will see returns between -1.9% to +2.1%.
    • 95% will be between -3.9% to +4.1%.
    • Extreme crashes or surges beyond ±6% are very rare (0.3%).
  • Medical Data (Blood Pressure, Cholesterol Levels): Many health metrics are normally distributed. If the average systolic blood pressure is 120 mmHg with $\sigma$ = 10:
    • 68% of people have readings between 110–130 mmHg.
    • 95% fall within 100–140 mmHg.
    • Readings above 150 mmHg may indicate hypertension.
  • Weather Data (Temperature Variations): Daily temperatures in a region often follow a normal distribution. If the average July temperature is 85°F with σ = 5°F:
    • 68% of days will be between 80°F–90°F.
    • 95% will be between 75°F–95°F.
    • Extremely hot (>100°F) or cold (<70°F) days are rare.
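
A common practical use of these ranges is flagging unusual observations, as mentioned for the soda-bottle example above. The sketch below is illustrative only: the helper `flag_outliers` and the sample measurements are hypothetical, and the $3\sigma$ cutoff follows the rule described in this post.

```python
def flag_outliers(values, mu, sigma, k=3):
    """Return the values lying outside mu ± k*sigma (rare under the Empirical Rule)."""
    lower, upper = mu - k * sigma, mu + k * sigma
    return [v for v in values if v < lower or v > upper]

# Hypothetical bottle fill volumes (ml); process mean 500 ml, sd 10 ml
fill_volumes = [498, 503, 511, 492, 533, 468, 500, 507]
print(flag_outliers(fill_volumes, mu=500, sigma=10))  # [533, 468] -> outside the 470-530 ml band
```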

Why the Empirical Rule Matters

  • It helps in predicting probabilities without complex calculations.
  • It is used in risk assessment (finance, insurance).
  • It guides quality control and process improvements.
  • It assists in setting thresholds (e.g., medical diagnostics, passing scores).

FAQs about Empirical Rule

  • What is the empirical rule?
  • For what kind of probability distribution is the empirical rule used?
  • What is the area under the curve (or percentage) if data falls within 1, 2, and 3 standard deviations?
  • Represent the rule graphically.
  • Give real-life applications and examples of the rule.
  • Describe why the empirical rule matters.
