Percentages, Fractions, and Decimals Made Easy

Percentages, fractions, and decimals are all different ways of representing parts of a whole. Because they are interconnected, each can be converted to the others, which is useful for solving many mathematical problems.

We often see phrases like

  • up to 75% off on all items
  • 90% housing loan with low-interest rates
  • 10% to 50% discount advertisements

Examples of Percentages, Fractions, and Decimals

Here is an example of how a fraction becomes a percentage.

Suppose there are 200 students in a college, and 80 of them remain in college to participate in extra-curricular activities (ECA). The fraction of students who participated in ECA can be written as $\frac{80}{200}$, or $\frac{40}{100}$, or $\frac{2}{5}$. We read this as 80 out of 200 students participated in ECA (or 2 out of 5 participated in ECA). Multiplying this fraction by 100 converts it to a percentage. Therefore, 40% of the students participated in ECA.
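The arithmetic above can be checked with a few lines of Python, using the figures from the example:

```python
# Students participating in extra-curricular activities (ECA),
# using the numbers from the example above.
participated = 80
total = 200

fraction = participated / total   # 80/200 = 2/5 = 0.4
percentage = fraction * 100       # multiply by 100 to get a percentage

print(fraction, percentage)       # 0.4 40.0
```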

Percent means per hundred, that is, for every hundred or out of every hundred.

A percentage is therefore a fraction whose denominator is always 100, so a percentage can be converted to a fraction (or a decimal) by dividing it by 100. Conversely, a fraction or a decimal is changed to a percentage by multiplying it by 100. The following figure shows the conversion cycle of percentages to fractions or decimals and vice versa.

[Figure: conversion cycle between percentages, fractions, and decimals]
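The conversion cycle can be sketched as two tiny helper functions; this is a minimal Python sketch (the function names are ours, purely for illustration):

```python
from fractions import Fraction

def percent_to_decimal(p):
    """Convert a percentage to a decimal by dividing by 100."""
    return p / 100

def decimal_to_percent(d):
    """Convert a decimal to a percentage by multiplying by 100."""
    return d * 100

print(percent_to_decimal(75))    # 0.75
print(decimal_to_percent(0.75))  # 75.0
print(Fraction(75, 100))         # 3/4 -- the fraction form, in lowest terms
```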

Real-life Examples of Percentages, Fractions, and Decimals

Suppose you are told that 70% of the students in a class of 50 passed a Mathematics test. How many of them failed?

Number of students who passed the Mathematics test = 70% of 50 = $\frac{70}{100}\times 50 = 35$

Number of students who failed the Mathematics test = $50 - 35 = 15$.

The number of students who failed can also be found another way:

\[(100-70)\%\times 50 = \frac{30}{100}\times 50 = 15\]
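Both routes to the answer can be verified with a short Python snippet (integer arithmetic keeps the results exact):

```python
total_students = 50
pass_rate = 70  # percent

passed = pass_rate * total_students // 100              # 70% of 50 = 35
failed_direct = total_students - passed                 # 50 - 35 = 15
failed_alt = (100 - pass_rate) * total_students // 100  # 30% of 50 = 15

print(passed, failed_direct, failed_alt)  # 35 15 15
```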


Secrets of Skewness and Measures of Skewness (2021)

If the curve is symmetrical, a deviation below the mean exactly matches the corresponding deviation above the mean; this property is called symmetry. Here, we will discuss skewness and the measures of skewness.

Skewness is the degree of asymmetry, or departure from symmetry, of a distribution. A distribution has positive skewness when the tail on the right side is longer or fatter; in that case the mean and median are greater than the mode. It has negative skewness when the tail on the left side is longer or fatter than the tail on the right side.

Measures of Skewness

Karl Pearson Measures of Relative Skewness

In a symmetrical distribution, the mean, median, and mode coincide. In skewed distributions, these values are pulled apart; the mean tends to be on the same side of the mode as the longer tail. Thus, a measure of the asymmetry is supplied by the difference ($\text{mean}-\text{mode}$). This can be made dimensionless by dividing by a measure of dispersion (such as the standard deviation, SD).

The Karl Pearson measure of relative skewness is
$$\text{SK} = \frac{\text{Mean}-\text{mode}}{SD} =\frac{\overline{x}-\text{mode}}{s}$$
The value of skewness may be either positive or negative.

The empirical formula for skewness (called the second coefficient of skewness) is

$$\text{SK} = \frac{3(\text{mean}-\text{median})}{SD}=\frac{3(\overline{x}-\tilde{x})}{s}$$

where $\overline{x}$ denotes the mean and $\tilde{x}$ the median.
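Both Pearson coefficients can be computed with the standard library; here is a minimal sketch on a small hypothetical sample:

```python
import statistics

data = [2, 3, 3, 4, 5, 6, 6, 6, 9, 12]  # hypothetical sample

mean = statistics.mean(data)      # 5.6
median = statistics.median(data)  # 5.5
mode = statistics.mode(data)      # 6
s = statistics.stdev(data)        # sample standard deviation

sk1 = (mean - mode) / s           # first (mode-based) coefficient
sk2 = 3 * (mean - median) / s     # second (empirical) coefficient

print(round(sk1, 3), round(sk2, 3))
```

Note that the two coefficients need not agree in sign: in this sample the mode lies above the mean while the median lies below it.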

Bowley Measures of Skewness

In a symmetrical distribution, the quartiles are equidistant from the median ($Q_2-Q_1 = Q_3-Q_2$). If the distribution is not symmetrical, the quartiles will not be equidistant from the median (unless the entire asymmetry is located in the extreme quarters of the data). Bowley's suggested measure of skewness is

$$\text{Quartile Coefficient of SK} = \frac{(Q_3-Q_2)-(Q_2-Q_1)}{Q_3-Q_1}=\frac{Q_3-2Q_2+Q_1}{Q_3-Q_1}$$

This measure is always zero when the quartiles are equidistant from the median and is positive when the upper quartile is farther from the median than the lower quartile. This measure of skewness varies between $+1$ and $-1$.
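Because Bowley's measure uses only the quartiles, it can be computed with `statistics.quantiles`; a minimal sketch on a hypothetical right-skewed sample:

```python
import statistics

data = [1, 2, 4, 4, 5, 6, 7, 9, 15]  # hypothetical right-skewed sample

# statistics.quantiles with n=4 returns [Q1, Q2, Q3]
q1, q2, q3 = statistics.quantiles(data, n=4)

bowley = ((q3 - q2) - (q2 - q1)) / (q3 - q1)
print(q1, q2, q3, bowley)  # 3.0 5.0 8.0 0.2

# the coefficient is bounded between -1 and +1
assert -1 <= bowley <= 1
```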

Moment Coefficient of Skewness

In any symmetrical curve, the sum of odd powers of deviations from the mean equals zero. That is, $m_3=m_5=m_7=\cdots=0$. However, this does not hold for asymmetrical distributions. For this reason, a measure of skewness is devised based on $m_3$. That is,

\begin{align}
\text{Moment Coefficient of SK}&= a_3=\frac{m_3}{s^3}=\frac{m_3}{\sqrt{m_2^3}}\\
b_1 &= a_3^2=\frac{m_3^2}{m_2^3}
\end{align}

For perfectly symmetrical curves (normal curves), $a_3$ and $b_1$ are zero.
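The moment coefficients $a_3$ and $b_1$ are easy to compute from the central moments; a minimal Python sketch (note that $b_1$ is just $a_3^2$, so it is never negative):

```python
import math

data = [2, 3, 3, 4, 5, 6, 6, 6, 9, 12]  # hypothetical sample
n = len(data)
mean = sum(data) / n

m2 = sum((x - mean) ** 2 for x in data) / n  # second central moment
m3 = sum((x - mean) ** 3 for x in data) / n  # third central moment (odd power)

a3 = m3 / math.sqrt(m2 ** 3)  # moment coefficient of skewness
b1 = m3 ** 2 / m2 ** 3        # equals a3 squared

print(round(a3, 3), round(b1, 3))
```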

FAQs about Skewness

  1. What is skewness?
  2. If a curve is symmetrical then what is the behavior of deviation below and above the mean?
  3. What is Bowley’s Measure of Skewness?
  4. What is Karl Pearson’s Measure of Relative Skewness?
  5. What is the moment coefficient of skewness?
  6. What are positive and negative skewness?


Key Points of Heteroscedasticity (2021)

The following are some key points about heteroscedasticity: its definition, an example, its properties, the underlying assumption, and tests for its detection (hetero for short below).

One important assumption of regression is that the variance of the error term is constant across observations. If the errors have a constant variance, they are called homoscedastic; otherwise, they are heteroscedastic. In the case of heteroscedastic errors (non-constant variance), the standard estimation methods become inefficient. Typically, the residuals are plotted to assess the assumption of homoscedasticity.

Heteroscedasticity

  • The disturbance term of OLS regression $u_i$ should be homoscedastic. By Homo, we mean equal, and scedastic means spread or scatter.
  • By hetero, we mean unequal.
  • Heteroscedasticity means that the conditional variance of $Y_i$ (i.e., $var(u_i)$), given $X_i$, does not remain the same across the values taken by the variable $X$.
  • In the case of heteroscedasticity, $E(u_i^2)=\sigma_i^2=var(u_i)$, where $i=1,2,\cdots, n$.
  • In the case of homoscedasticity, $E(u_i^2)=\sigma^2=var(u_i)$, where $i=1,2,\cdots, n$.
  • Homoscedasticity means that the conditional variance of $Y_i$ (i.e., $var(u_i)$), given $X_i$, remains the same across the values taken by the variable $X$.
  • The error terms are heteroscedastic when the scatter of the errors differs across observations, varying with the value of one or more of the explanatory variables.
  • Heteroscedasticity is a systematic change in the scatteredness of the residuals over the range of measured values
  • The presence of hetero may be due to (i) the presence of outliers in the data, (ii) an incorrect functional form of the regression model, (iii) an incorrect transformation of the data, or (iv) mixing observations with different measures of scale.
  • The presence of hetero does not destroy the unbiasedness and consistency of OLS estimators.
  • Hetero is more common in cross-section data than time-series data.
  • Hetero may affect the variance and standard errors of the OLS estimates.
  • The standard errors of OLS estimates are biased in the case of hetero.
  • Statistical inferences (confidence intervals and hypothesis testing) of estimated regression coefficients are no longer valid.
  • The OLS estimators are no longer BLUE as they are no longer efficient in the presence of hetero.
  • The regression predictions are inefficient in the case of hetero.
  • The usual OLS method assigns equal weights to each observation.
  • In GLS the weight assigned to each observation is inversely proportional to $\sigma_i$.
  • In GLS a weighted sum of squares is minimized with weight $w_i=\frac{1}{\sigma_i^2}$.
  • In GLS each squared residual is weighted by the inverse of $Var(u_i|X_i)$
  • GLS estimates are BLUE.
  • Heteroscedasticity can be detected by plotting the estimated squared residuals $\hat{u}_i^2$ against the fitted values $\hat{Y}_i$.
  • If no systematic pattern appears in the plot of $\hat{u}_i^2$ against $\hat{Y}_i$, there is no evidence of hetero.
  • In the case of prior information about $\sigma_i^2$, one may use WLS.
  • If $\sigma_i^2$ is unknown, one may proceed with heteroscedastic corrected standard errors (that are also called robust standard errors).
  • Drawing inferences in the presence of hetero (or if hetero is suspected) may be very misleading.
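The detection-by-plotting idea and the GLS/WLS weighting described in the points above can be sketched with NumPy alone. This is a minimal illustration, not a full diagnostic: the simulated model, seed, and sample size are ours, and $\sigma_i$ is known only because we simulate it.

```python
# (i) Detect hetero by relating squared residuals to fitted values.
# (ii) Apply the WLS idea of weighting each observation by 1/sigma_i^2.
import numpy as np

rng = np.random.default_rng(42)
n = 200
x = rng.uniform(1, 10, n)
sigma_i = 0.5 * x                        # error SD grows with x: heteroscedastic
y = 2.0 + 3.0 * x + rng.normal(0.0, sigma_i)

# OLS fit via least squares on X = [1, x]
X = np.column_stack([np.ones(n), x])
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
fitted = X @ beta_ols
resid = y - fitted

# Informal detection: under homoscedasticity the squared residuals show no
# systematic relation to the fitted values; a positive relation suggests hetero.
r = np.corrcoef(fitted, resid ** 2)[0, 1]
print(f"corr(fitted, e^2) = {r:.2f}")

# WLS: minimise a weighted sum of squares with w_i = 1/sigma_i^2,
# which amounts to scaling each row by sqrt(w_i) = 1/sigma_i.
sw = 1.0 / sigma_i
beta_wls = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0]
print("OLS:", beta_ols, "WLS:", beta_wls)  # both should be near (2, 3)
```

In practice $\sigma_i$ is unknown, which is why the points above mention robust (heteroscedasticity-corrected) standard errors as the usual fallback.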
