Histogram Graph: Useful Graphical Representation of Data

A histogram is similar to a bar chart; however, a histogram represents a frequency distribution based on quantitative data, whereas a bar chart shows the distribution of qualitative data. It is a useful graphical representation that helps to visualize how the data are distributed.

Important Points to Draw a Histogram Graph

The histogram is constructed from grouped data by taking the class boundaries (not class limits) along the x-axis and the corresponding frequencies along the y-axis. For ungrouped data, we first have to form a grouped frequency distribution before making a histogram. A histogram consists of a set of bars (like a bar chart), but these bars are adjacent to each other, and the height of each bar is proportional to the frequency of the respective class.

The area of each rectangle represents the respective class frequency. When the class intervals are equal, the rectangles all have the same width and their heights directly represent the class frequencies. When the class intervals are not all equal, the height of the rectangle (bar) over an unequal class interval has to be adjusted, because it is the area, not the height, that measures frequency. This means that the height of a rectangle must be proportionally decreased as the length of the corresponding class interval increases.

For example, if the length of a class interval is doubled, then the height of the rectangle must be halved so that the area, being the fundamental property of the rectangles of a histogram, remains unchanged. This rescaling is necessary to observe the correct pattern of the distribution.

Important Features of Histogram

The important feature of a histogram is that there is no gap (space) between the vertical bars, because the variable plotted on the horizontal axis is quantitative and is measured on an interval or ratio scale. Thus, a histogram provides an easily interpreted visual representation of a frequency distribution. Note that class midpoints are often used as labels for the classes.

It allows us to analyze extremely large datasets by reducing them to a single graphical representation, which can show primary, secondary, and tertiary peaks in the data and also gives a visual indication of the statistical significance of those peaks.

Alternative of Histogram

An alternative to the histogram is kernel density estimation, which uses a kernel to smooth the sample points. This constructs a smooth probability density function, which will, in general, reflect the underlying variable more accurately.
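As a minimal sketch in base R (with illustrative simulated data), a kernel density estimate from the density() function can be overlaid on a histogram for comparison:

```r
# Minimal sketch: histogram with a kernel density overlay in base R.
# The data are illustrative (simulated), not from the article.
set.seed(42)
x <- rnorm(200, mean = 50, sd = 10)

hist(x, freq = FALSE, col = "lightgray",
     main = "Histogram with kernel density estimate", xlab = "x")
lines(density(x), lwd = 2)   # Gaussian kernel by default
```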

Histograms for Continuous Grouped Data

To draw a histogram graph from the continuous grouped frequency distribution, the following steps are taken.

  1. Mark the class boundaries of the classes along the x-axis.
  2. Mark frequencies along the y-axis.
  3. Draw a rectangle for each class such that the height of each rectangle is proportional to the frequency corresponding to that class. This is the case when classes are of equal width as they often are.
  4. If the classes are of unequal width, then the area instead of the height of each rectangle is proportional to the frequency corresponding to that class, and the height of each rectangle is obtained by dividing the frequency of the class by the width of that class.

It may be noted that the total area under a histogram can be calculated by adding up the areas of all the rectangles that constitute it. The area of a single rectangle is obtained by multiplying the width of the class by the corresponding frequency, i.e.,

Area of a single rectangle = width of the class × frequency of the class
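These steps can be sketched in base R. Since grouped data are assumed here (the class boundaries and frequencies below are illustrative), the rectangles are drawn directly, with height equal to frequency divided by class width so that area, not height, represents frequency when the classes are of unequal width:

```r
# Sketch: histogram from grouped data with unequal class widths.
# Class boundaries and frequencies are illustrative values.
boundaries <- c(10, 20, 30, 50, 90)   # unequal class widths
freq       <- c(5, 12, 16, 8)         # one frequency per class

width  <- diff(boundaries)            # class widths
height <- freq / width                # area = width * height = frequency

plot(NULL, xlim = range(boundaries), ylim = c(0, max(height)),
     xlab = "Class boundaries", ylab = "Frequency density",
     main = "Histogram for grouped data")
rect(head(boundaries, -1), 0, tail(boundaries, -1), height, col = "lightgray")
```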

Histogram for Discrete Data

Bar graphs are usually drawn for discrete and categorical data, but in some situations where an approximation is acceptable, a histogram may be constructed. To construct a histogram for discrete grouped data, the following steps are taken (a small R sketch follows the list):

  1. Mark possible values on the x-axis.
  2. Mark frequencies along the y-axis.
  3. Draw a rectangle centered on each value, with equal width on each side, typically 0.5 to either side of the value.
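A minimal sketch of these steps in base R, using an illustrative vector of discrete observations:

```r
# Sketch: histogram-style plot for discrete data, with bars of width 1
# centered on each observed value. The data are illustrative.
x    <- c(2, 3, 3, 4, 4, 4, 5, 5, 6)
tab  <- table(x)                      # frequency of each distinct value
vals <- as.numeric(names(tab))

plot(NULL, xlim = c(min(vals) - 1, max(vals) + 1), ylim = c(0, max(tab)),
     xlab = "Value", ylab = "Frequency", main = "Histogram for discrete data")
rect(vals - 0.5, 0, vals + 0.5, as.numeric(tab), col = "lightgray")
```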

The advantages of a histogram as compared to the unprocessed data are:

  1. It gives a range of data.
  2. It gives the location of the data.
  3. It gives a clue about the skewness of the data.
  4. It gives information about the out-of-control situation.
  5. Histograms are density estimates and give a good impression of the distribution of the data.
  6. They can be compared with the normal curve.

The disadvantages are:

  • Exact values cannot be read from a histogram because the data are grouped into categories, and the individuality of the data vanishes in grouped data.
  • It is more difficult to compare two data sets.
  • It is typically used only for continuous data.

FAQs about Histogram

  1. What is a histogram graph?
  2. What is the difference between a bar chart and a histogram?
  3. What are the important features of histograms?
  4. What are the advantages and disadvantages of histogram graphs?
  5. How can one draw a histogram for a discrete data set?
  6. How can one draw a histogram for a continuous data set?


Correlation Coefficient Range (2012)

We know that the ratio of the explained variation to the total variation is called the coefficient of determination, which is the square of the correlation coefficient; here we show that the correlation coefficient's range lies between $-1$ and $+1$. This ratio (the coefficient of determination) is non-negative and is therefore denoted by $r^2$; thus

\begin{align*}
r^2&=\frac{\text{Explained Variation}}{\text{Total Variation}}\\
&=\frac{\sum (\hat{Y}-\overline{Y})^2}{\sum (Y-\overline{Y})^2}
\end{align*}

It can be seen that if the total variation is all explained, the ratio $r^2$ (the coefficient of determination) is one, and if the total variation is all unexplained, then the explained variation and the ratio $r^2$ are both zero.

The square root of the coefficient of determination is called the correlation coefficient, given by

\begin{align*}
r&=\sqrt{ \frac{\text{Explained Variation}}{\text{Total Variation}} }\\
&=\pm \sqrt{\frac{\sum (\hat{Y}-\overline{Y})^2}{\sum (Y-\overline{Y})^2}}
\end{align*}

and

\[\sum (\hat{Y}-\overline{Y})^2=\sum(Y-\overline{Y})^2-\sum (Y-\hat{Y})^2\]

Therefore

\begin{align*}
r&=\sqrt{ \frac{\sum(Y-\overline{Y})^2-\sum (Y-\hat{Y})^2} {\sum(Y-\overline{Y})^2} }\\
&=\sqrt{1-\frac{\sum (Y-\hat{Y})^2}{\sum(Y-\overline{Y})^2}}\\
&=\sqrt{1-\frac{\text{Unexplained Variation}}{\text{Total Variation}}}=\sqrt{1-\frac{s_{y.x}^2}{s_y^2}}
\end{align*}

where $s_{y.x}^2=\frac{1}{n} \sum (Y-\hat{Y})^2$ and $s_y^2=\frac{1}{n} \sum (Y-\overline{Y})^2$

\begin{align*}
\Rightarrow r^2&=1-\frac{s_{y.x}^2}{s_y^2}\\
\Rightarrow s_{y.x}^2&=s_y^2(1-r^2)
\end{align*}

Since variances are non-negative

\[\frac{s_{y.x}^2}{s_y^2}=1-r^2 \geq 0\]

Solving the inequality, we have

\begin{align*}
1-r^2 & \geq 0\\
\Rightarrow r^2 \leq 1\, \text{or}\, |r| &\leq 1\\
\Rightarrow & -1 \leq r\leq 1
\end{align*}

Therefore, the correlation coefficient lies between $-1$ and $+1$, inclusive.
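This result can be checked numerically in R (a sketch with simulated, illustrative data): fit a simple linear regression with lm() and compare $1-\frac{\text{Unexplained Variation}}{\text{Total Variation}}$ with the squared correlation coefficient.

```r
# Sketch: verify r^2 = 1 - Unexplained/Total variation on simulated data.
set.seed(1)
x <- rnorm(100)
y <- 2 + 3 * x + rnorm(100)

fit <- lm(y ~ x)                      # simple linear regression
sse <- sum(residuals(fit)^2)          # unexplained variation
sst <- sum((y - mean(y))^2)           # total variation

c(r_squared = 1 - sse / sst,          # from the decomposition above
  cor_squared = cor(x, y)^2)          # both values agree
```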


Alternative Proof: Correlation Coefficient Range

Since $\rho(X,Y)=\rho(X^*,Y^*)$, where $X^*=\frac{X-\mu_X}{\sigma_X}$ and $Y^*=\frac{Y-\mu_Y}{\sigma_Y}$,

and since covariance is bilinear and $X^*, Y^*$ have mean zero and variance one, we have

\begin{align*}
\rho(X^*,Y^*)&=Cov(X^*,Y^*)=Cov\{\frac{X-\mu_X}{\sigma_X},\frac{Y-\mu_Y}{\sigma_Y}\}\\
&=\frac{Cov(X-\mu_X,Y-\mu_Y)}{\sigma_X\sigma_Y}\\
&=\frac{Cov(X,Y)}{\sigma_X \sigma_Y}=\rho(X,Y)
\end{align*}

We also know that the variance of any random variable is $\geq 0$; it can be zero (i.e., $Var(X)=0$) if and only if $X$ is a constant (almost surely). Therefore,

\[V(X^* \pm Y^*)=V(X^*)+V(Y^*)\pm2Cov(X^*,Y^*)\]

Since $Var(X^*)=1$ and $Var(Y^*)=1$, this gives $V(X^* \pm Y^*)=2 \pm 2Cov(X^*,Y^*)$, which would be negative if $Cov(X^*,Y^*)$ were greater than $1$ or less than $-1$. As a variance cannot be negative, it follows that \[1\geq \rho(X,Y)=\rho(X^*,Y^*)\geq -1.\]

If $\rho(X,Y)=Cov(X^*,Y^*)=1$, then $Var(X^*-Y^*)=0$, making $X^* = Y^*$ almost surely. Similarly, if $\rho(X,Y)=Cov(X^*,Y^*)=-1$, then $X^* = -Y^*$ almost surely. In either case, $Y$ would be a linear function of $X$ almost surely.

For a proof of the Cauchy-Schwarz Inequality, please follow the link.

We can see that the Correlation Coefficient range lies between $-1$ and $+1$.



Multivariate Data Sets: Descriptive Statistics (2010)

Much of the information contained in multivariate data sets can be assessed by calculating certain summary numbers, known as multivariate descriptive statistics, such as the arithmetic mean (a measure of location) and the average of the squared distances of all the numbers from the mean (a measure of spread or variation). Here we will discuss descriptive statistics for multivariate data sets.

Multivariate data sets are used in various fields, such as:

  • Social Sciences: Analyzing factors influencing social phenomena like voting behavior, educational attainment, or health outcomes.
  • Business: Understanding customer demographics and purchase patterns, market research, risk assessment, and financial modeling.
  • Natural Sciences: Studying relationships between environmental variables, analyzing climate data, or exploring genetic factors influencing diseases.

Multivariate Data Sets: Descriptive Analysis

We shall rely most heavily on descriptive statistics that measure location, variation, and linear association. For a multivariate data set, let us start with a measure of location, a measure of spread, the sample covariance, and the sample correlation coefficient.

Measure of Location

The arithmetic average of $n$ measurements $(x_{11}, x_{21}, x_{31}, \cdots, x_{n1})$ on the first variable (defined in Multivariate Analysis: An Introduction) is

Sample Mean $= \bar{x}_{1}=\frac{1}{n} \sum _{j=1}^{n}x_{j1}$

The sample mean for the $n$ measurements on each of the $p$ variables (there will be $p$ sample means) is

$\bar{x}_{k} =\frac{1}{n} \sum _{j=1}^{n}x_{jk} \mbox{ where } k = 1, 2, \cdots , p$

Measure of Spread

The measure of spread (variance) for the $n$ measurements on the first variable of a multivariate data set can be found as
$s_{1}^{2} =\frac{1}{n} \sum _{j=1}^{n}(x_{j1} -\bar{x}_{1} )^{2}$, where $\bar{x}_{1}$ is the sample mean of the $x_{j1}$'s.

The measure of spread (variance) for the $n$ measurements on each of the $p$ variables can be found as

$s_{k}^{2} =\frac{1}{n} \sum _{j=1}^{n}(x_{jk} -\bar{x}_{k} )^{2} \mbox{ where } k=1,2,\cdots ,p$

In double-subscript notation, the sample variance of the $k$th variable is written as

$s_{k}^{2} =s_{kk} =\frac{1}{n} \sum _{j=1}^{n}(x_{jk} -\bar{x}_{k} )^{2} \mbox{ where } k=1,2,\cdots ,p$

and the square root of the sample variance, $s_k=\sqrt{s_{kk}}$, is the sample standard deviation.
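A minimal sketch in R of the sample means and variances defined above, on illustrative data; note that the formulas here use divisor $n$, whereas R's built-in var() uses $n-1$:

```r
# Sketch: sample means and variances for an n x p data matrix.
# The data are illustrative; the text's formulas use divisor n.
set.seed(7)
X <- matrix(rnorm(50 * 3), nrow = 50, ncol = 3)   # n = 50, p = 3
n <- nrow(X)

xbar <- colMeans(X)                       # the p sample means
s2   <- colMeans(sweep(X, 2, xbar)^2)     # variances with divisor n
s2_unbiased <- apply(X, 2, var)           # R's var() uses divisor n - 1
rbind(divisor_n = s2, divisor_n_minus_1 = s2_unbiased)
```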


Sample Covariance

Consider $n$ pairs of measurements on each of Variable 1 and Variable 2:
\[\left[\begin{array}{c} {x_{11} } \\ {x_{12} } \end{array}\right],\left[\begin{array}{c} {x_{21} } \\ {x_{22} } \end{array}\right],\cdots ,\left[\begin{array}{c} {x_{n1} } \\ {x_{n2} } \end{array}\right]\]
That is, $x_{j1}$ and $x_{j2}$ are observed on the $j$th experimental item $(j=1,2,\cdots ,n)$. A measure of linear association between the measurements of $V_1$ and $V_2$ for multivariate data sets is provided by the sample covariance
\[s_{12} =\frac{1}{n} \sum _{j=1}^{n}(x_{j1} -\bar{x}_{1} )(x_{j2} -\bar{x}_{2}  )\]
(the average product of the deviations from their respective means); therefore,

$s_{ik} =\frac{1}{n} \sum _{j=1}^{n}(x_{ji} -\bar{x}_{i} )(x_{jk} -\bar{x}_{k}  )$;  $i=1,2,\cdots, p$ and $k=1,2,\cdots, p$.

It measures the linear association between the $i$th and $k$th variables.

Variance is the most commonly used measure of dispersion (variation) in the data and it is directly proportional to the amount of variation or information available in the data.

Sample Correlation Coefficient

For Multivariate Data Sets, the sample correlation coefficient for the ith and kth variables is

\[r_{ik} =\frac{s_{ik}}{\sqrt{s_{ii}}\sqrt{s_{kk}}} =\frac{\sum _{j=1}^{n}(x_{ji} -\bar{x}_{i})(x_{jk} -\bar{x}_{k})}{\sqrt{\sum _{j=1}^{n}(x_{ji} -\bar{x}_{i})^{2}} \sqrt{\sum _{j=1}^{n}(x_{jk} -\bar{x}_{k})^{2}}}\]
where $i=1,2,\cdots,p$ and $k=1,2,\cdots,p$.

Note that $r_{ik} = r_{ki}$ for all $i$ and $k$, and that $r$ lies between $-1$ and $+1$. $r$ measures the strength of the linear association; if $r=0$, there is no linear association between the components. The sign of $r$ indicates the direction of the association.
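As a brief sketch in R (on illustrative data), the built-in cov() and cor() functions return the sample covariance and correlation matrices (using divisor $n-1$), and the relation $r_{ik}=s_{ik}/\sqrt{s_{ii}\,s_{kk}}$ can be verified directly:

```r
# Sketch: sample covariance and correlation matrices in base R.
# cov() and cor() use divisor n - 1; the text's formulas use n.
set.seed(3)
X <- matrix(rnorm(100 * 2), ncol = 2)     # illustrative bivariate data

S <- cov(X)                               # sample covariance matrix
R <- cor(X)                               # sample correlation matrix
all.equal(R[1, 2], S[1, 2] / sqrt(S[1, 1] * S[2, 2]))  # r_ik = s_ik / sqrt(s_ii s_kk)
```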

Other Multivariate Analyses

Multiple Regression: It is used to model the relationship between a dependent variable (DV) and multiple independent variables (IV).

Principal Component Analysis (PCA): It reduces the dimensionality of data by identifying a smaller set of uncorrelated variables that capture most of the data’s variance.

Cluster Analysis: It groups the data points into clusters based on their similarities, helping identify subgroups within the data.

Discriminant Analysis: It classifies data points into predefined groups based on their characteristics.
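As a minimal sketch (not a full analysis), some of these techniques can be run in R with built-in functions, using the iris data set that ships with R:

```r
# Sketch: common multivariate analyses with base R functions on iris.
fit <- lm(Sepal.Length ~ Sepal.Width + Petal.Length, data = iris)  # multiple regression
pca <- prcomp(iris[, 1:4], scale. = TRUE)                          # principal component analysis
km  <- kmeans(iris[, 1:4], centers = 3)                            # cluster analysis
summary(pca)$importance[2, ]          # proportion of variance captured by each PC
```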


Pearson Correlation Coefficient (2012)

Introduction to Pearson Correlation Coefficient

The correlation coefficient, or Pearson Correlation Coefficient, was developed by Karl Pearson around 1900. The Pearson Correlation Coefficient is a measure of the (degree of) strength of the linear relationship between two continuous random variables, denoted by $\rho_{XY}$ for a population and by $r_{XY}$ for a sample.

The Pearson correlation coefficient can take values in the interval $[-1, 1]$. If the coefficient value is $1$ or $-1$, there is a perfect linear relationship between the variables. A positive sign of the coefficient indicates a positive (direct, or supportive) relationship, while a negative sign indicates a negative (indirect, or opposite) relationship between the variables.

A zero value implies the absence of a linear relationship; it does not, however, imply that the variables are independent, as there may be some other sort of relationship between the variables of interest, such as a systematic or circular relationship.

[Figure: scatter diagrams illustrating different values of the Pearson correlation coefficient]

Pearson’s Correlation Formula

Mathematically, if two random variables $X$ and $Y$ follow an unknown joint distribution, then the simple linear correlation coefficient equals the covariance between $X$ and $Y$ divided by the product of their standard deviations, i.e.,

\[\rho=\frac{Cov(X, Y)}{\sigma_X \sigma_Y}\]

where $Cov(X, Y)$ is the covariance between $X$ and $Y$, and $\sigma_X$ and $\sigma_Y$ are the respective standard deviations of the random variables.

For a sample of size $n$, $(X_1, Y_1), (X_2, Y_2), \cdots, (X_n, Y_n)$, from the joint distribution, the quantity given below is an estimate of $\rho$, called the sample correlation coefficient and denoted by $r$:

\begin{eqnarray*}
r&=&\frac{\sum_{i=1}^{n}(X_i-\bar{X})(Y_i-\bar{Y})}{\sqrt{\sum_{i=1}^{n}(X_i-\bar{X})^2 \times \sum_{i=1}^{n}(Y_i-\bar{Y})^2}}\\
&=& \frac{Cov(X,Y)}{S_X S_Y}
\end{eqnarray*}
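As a quick check in R (a sketch on simulated, illustrative data), $r$ computed directly from this definition agrees with the built-in cor() function:

```r
# Sketch: Pearson's r from the definition, compared with R's cor().
set.seed(5)
X <- rnorm(30)
Y <- 0.5 * X + rnorm(30)                  # illustrative paired sample

r_manual <- sum((X - mean(X)) * (Y - mean(Y))) /
  sqrt(sum((X - mean(X))^2) * sum((Y - mean(Y))^2))
c(manual = r_manual, builtin = cor(X, Y))  # the two values are identical
```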

Note that

  • The existence of a statistical correlation does not mean that there is a cause-and-effect relationship between the variables. Cause and effect means that a change in one variable causes a change in the other variable.
  • The changes in the variables may be due to a common cause or random variations.
  • There are many kinds of correlation coefficients. The choice of which to use for a particular set of data depends on different factors such as
    • Type of Scale (Level of Measurement or Measurement Scale) used to express the variables
    • Nature of the underlying distribution (continuous or discrete)
    • Characteristics of the distribution of the scores (linear or non-linear)
  • Correlation is perfectly linear if a constant change in $X$ is accompanied by a constant change in $Y$. In this case, all the points in the scatter diagram will lie in a straight line.
  • A high correlation coefficient does not necessarily imply a direct dependence between the variables. For example, there may be a high correlation between the number of crimes and shoe prices. Such a correlation is referred to as a non-sense, or spurious, correlation.

Properties of Pearson Correlation Coefficient

The following are important properties of the Pearson correlation coefficient (a small R sketch verifying two of them follows the list):

  1. The Pearson correlation coefficient is symmetrical for $X$ and $Y$ i.e. $r_{XY}=r_{YX}$.
  2. The Correlation coefficient is a pure number and it does not depend upon the units in which the variables are measured.
  3. The correlation coefficient is the geometric mean of the two regression coefficients. Thus, if the two regression lines of $Y$ on $X$ and of $X$ on $Y$ are written as $Y=a+bX$ and $X=c+dY$ respectively, then $bd=r^2$.
  4. The correlation coefficient is independent of the choice of origin and scale of measurement of the variables, i.e., $r$ remains unchanged if constants are added to or subtracted from the variables, or if the variables are multiplied or divided by positive constants.
  5. The correlation coefficient lies between -1 and +1, symbolically $-1\le r \le 1$.
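Two of these properties, symmetry (property 1) and invariance to change of origin and scale (property 4), can be verified in R with a small sketch on illustrative data:

```r
# Sketch: symmetry of r and its invariance to change of origin and
# (positive) scale. The data are illustrative.
set.seed(9)
X <- rnorm(40)
Y <- X + rnorm(40)

cor(X, Y) == cor(Y, X)                           # property 1: r_XY = r_YX
all.equal(cor(X, Y), cor(5 + 2 * X, 10 * Y - 3)) # property 4: r unchanged
```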
