Basic Statistics and Data Analysis


Standard Deviation: A Measure of Dispersion


The standard deviation is a widely used measure in statistics: it tells how much variation (spread or dispersion) there is in a data set. It can be defined as the positive square root of the mean (average) of the squared deviations of the values from their mean.
To calculate the standard deviation, follow these steps:

  1. First, find the mean of the data.
  2. Take the difference of each data point from the mean of the given data set (computed in step 1). Note that the sum of these differences must be equal to zero, or near zero due to rounding of numbers.
  3. Square each of the differences obtained in step 2. Each squared difference will be greater than or equal to zero, that is, a non-negative quantity.
  4. Add up all the squared quantities obtained in step 3. We call this the sum of squared differences.
  5. Divide the sum of squared differences (obtained in step 4) by the total number of observations in the data if you want the population standard deviation ($\sigma$). If you want the sample standard deviation ($S$), divide the sum of squared differences by the total number of observations minus one ($n-1$), i.e. the degrees of freedom. Note that $n$ is the number of observations available in the data set.
  6. Find the square root of the quantity obtained in step 5. The resulting quantity is the standard deviation of the given data set (a worked sketch in Python follows the formulas below).

The sample standard deviation of a set of $n$ observations $X_1, X_2, \cdots, X_n$ is denoted by $S$. The population and sample standard deviations are
\begin{aligned}
\sigma &=\sqrt{\frac{\sum_{i=1}^n (X_i-\overline{X})^2}{n}}; & Population\, Standard\, Deviation\\
S&=\sqrt{\frac{\sum_{i=1}^n (X_i-\overline{X})^2}{n-1}}; & Sample\, Standard\, Deviation
\end{aligned}
The standard deviation can also be computed from the variance, as $S= \sqrt{\text{Variance}}$.
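As a minimal sketch, the six steps above can be carried out in plain Python (the data values here are purely illustrative):

```python
import math

# Illustrative (hypothetical) data set
data = [4, 8, 6, 5, 3, 7]
n = len(data)

# Step 1: mean of the data
mean = sum(data) / n

# Step 2: deviations from the mean (these sum to zero, up to rounding)
deviations = [x - mean for x in data]

# Steps 3 and 4: square the deviations and sum them
sum_squares = sum(d ** 2 for d in deviations)

# Step 5: divide by n for the population SD, or by n - 1 for the sample SD
# Step 6: take the square root
population_sd = math.sqrt(sum_squares / n)
sample_sd = math.sqrt(sum_squares / (n - 1))

print(population_sd, sample_sd)  # 1.707..., 1.870...
```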

The practical meaning of the standard deviation is that, for data following an approximately normal (bell-shaped) distribution, about 68% of the data values lie within the range $\overline{X} \pm \sigma$, i.e. within one standard deviation of the mean. Similarly, about 95% of the data values lie within the range $\overline{X} \pm 2 \sigma$ and about 99.7% within $\overline{X} \pm 3 \sigma$.

Examples of Standard Deviation and Variance

A large value of the standard deviation indicates more spread in the data set, which can be interpreted as inconsistent behaviour of the data collected: the data points tend to lie far away from the mean value. With a smaller standard deviation, data points tend to be close (very close) to the mean, indicating consistent behaviour of the data set.
The standard deviation and variance are both used to measure the risk of a particular investment in finance. A mean of 15% and a standard deviation of 2% indicate that the investment is expected to earn a 15% return and that there is about a 68% chance that the return will actually be between 13% and 17%. Similarly, there is about a 95% chance that the investment will yield an 11% to 19% return.
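The return ranges quoted above follow directly from the empirical rule; a quick sketch of the arithmetic, using the hypothetical figures from the example:

```python
# Hypothetical investment: expected return 15%, standard deviation 2%
mean_return, sd = 15.0, 2.0

# Empirical rule: about 68%, 95%, and 99.7% of values fall within 1, 2, and 3 SDs
for k, coverage in [(1, "68%"), (2, "95%"), (3, "99.7%")]:
    low, high = mean_return - k * sd, mean_return + k * sd
    print(f"About {coverage} of returns fall between {low:.0f}% and {high:.0f}%")
```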

Skewness: Measure of Asymmetry

The terms skewed and askew are widely used to refer to something that is out of order or distorted on one side. Similarly, when referring to the shape of frequency distributions or probability distributions, skewness refers to the asymmetry of a distribution. A distribution with an asymmetric tail extending out to the right is referred to as “positively skewed” or “skewed to the right”, while a distribution with an asymmetric tail extending out to the left is referred to as “negatively skewed” or “skewed to the left”. The range of skewness is from minus infinity ($-\infty$) to positive infinity ($+\infty$). In simple words, skewness is the lack of symmetry in a distribution.

Karl Pearson (1857-1936) first suggested measuring skewness by standardizing the difference between the mean and the mode: $skewness=\frac{\mu-mode}{\text{standard deviation}}$. Since population modes are not well estimated from sample modes, Stuart and Ord (1994) suggested estimating the difference between the mean and the mode as three times the difference between the mean and the median. The estimate of skewness then becomes $skewness=\frac{3(M-median)}{\text{standard deviation}}$, where $M$ denotes the mean. Many statisticians use this measure after eliminating the ‘3’, that is, $skewness=\frac{M-median}{\text{standard deviation}}$. This statistic ranges from $-1$ to $+1$. According to Hildebrand (1986), absolute values of skewness above 0.2 indicate great skewness.

Skewness has also been defined with respect to the third moment about the mean, that is, $\gamma_1=\frac{\sum(X-\mu)^3}{n\sigma^3}$, which is simply the expected value of the distribution of cubed $z$ scores. Skewness measured in this way is also sometimes referred to as “Fisher’s skewness”. When the deviations from the mean are greater in one direction than in the other, this statistic deviates from zero in the direction of the larger deviations. From sample data, Fisher’s skewness is most often estimated by $g_1=\frac{n\sum z^3}{(n-1)(n-2)}$, where $z$ denotes the standardized scores $\frac{X-\overline{X}}{S}$. For large sample sizes ($n > 150$), $g_1$ may be distributed approximately normally, with a standard error of approximately $\sqrt{\frac{6}{n}}$. While one could use this sampling distribution to construct confidence intervals for, or tests of hypotheses about, $\gamma_1$, there is rarely any value in doing so.

Arthur Lyon Bowley (1869-1957) also proposed a measure of skewness, based on the median and the two quartiles. In a symmetrical distribution, the two quartiles are equidistant from the median, but in an asymmetrical distribution this is not the case. Bowley’s coefficient of skewness is $skewness=\frac{Q_1+Q_3-2\,\text{median}}{Q_3-Q_1}$. Its value lies between $-1$ and $+1$.
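As a minimal sketch, the three sample measures discussed above can be computed with Python's standard library (the data set is hypothetical; note also that quartile conventions differ between texts, so the value of Bowley's coefficient depends slightly on the method used):

```python
import statistics

# Hypothetical, mildly right-skewed sample
x = [2, 3, 3, 4, 4, 4, 5, 5, 6, 9]
n = len(x)
mean = statistics.mean(x)
median = statistics.median(x)
s = statistics.stdev(x)  # sample standard deviation

# Pearson's median-based coefficient (with the '3' eliminated); ranges -1 to +1
pearson_skew = (mean - median) / s

# Fisher's skewness g1, estimated from the sample z scores
z = [(xi - mean) / s for xi in x]
g1 = n * sum(zi ** 3 for zi in z) / ((n - 1) * (n - 2))

# Bowley's quartile-based coefficient
q1, _, q3 = statistics.quantiles(x, n=4)  # first and third quartiles
bowley_skew = (q1 + q3 - 2 * median) / (q3 - q1)

print(pearson_skew, g1, bowley_skew)  # all positive for this right-skewed sample
```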

The most commonly used measures of skewness (those discussed here) may produce some surprising results, such as a negative value when the shape of the distribution appears skewed to the right.

It is important for researchers in the behavioral and business sciences to measure skewness when it appears in their data. A great amount of skewness may motivate the researcher to investigate the existence of outliers. When deciding which measure of location to report and which inferential statistic to employ, one should take into consideration the estimated skewness of the population. Normal distributions have zero skewness. Of course, a distribution can be perfectly symmetric yet far from normal. Transformations of the variable under study are commonly employed to reduce (positive) skewness; these transformations include the square root, log, and reciprocal of the variable.

For more about skewness, see Skewness.

Convert PDFs to Editable File Formats in 3 Easy Steps

Since the introduction of computers into our lives, we’ve been able to do things that we couldn’t do before. Slowly but surely, our PC skills have improved and today we are using new technologies that are enabling us to be better and more productive in almost every aspect of our lives.

One huge part of modern technology is digital documents, a legacy of the digital revolution. At some point, paper documents were replaced by digital files, since they are easier to use, edit and share between colleagues and friends.

One of the most used and best-known digital file formats is the Portable Document Format, better known as PDF. Developed and published in the nineties, PDF is still the number-one format for managers, students, accountants, writers and many others. For more than 20 years it has been building up supporters, who use it for 3 main reasons:

  1. It’s universal — it can be opened on any device (including mobile devices).
  2. It’s shareable — documents are easily shared across all platforms.
  3. It’s standardized — the files always maintain original formatting.

Aside from attractive features that make this file format popular, there is one major downside to using PDF — the format is not so easy to edit.

If you want to make changes to your financial or project reports saved in PDF, the best thing to do is to edit your documents using software designed for that purpose. One such tool is Able2Extract Professional 11, known for its powerful and modern PDF editing features.

With Able2Extract’s integrated PDF editor you can:

  • Resize and scale multiple pages at once
  • Add 10 different annotations
  • Customize any individual page
  • Add and delete your PDF content
  • Extract and combine multiple PDFs
  • Redact any sensitive content

The software also converts PDF to over 10 different file formats (MS Office, AutoCAD, Image, HTML, CSV) and it’s available for all three desktop platforms.

It’s so easy to use that all you need to do is follow this three step conversion process:

  1. Click Open and select the PDF document that you want to convert.
  2. Select either the entire document or just a part of it, using the Selection panel. After making the selection, click on the desired output format.
  3. Choose where you want your document to be saved, and the conversion will begin.

Besides editing and conversion, the developers of Able2Extract also provide complete document encryption and decryption upon PDF creation.

Now you can set up file owners, configure passwords and share your documents freely. When you click the “Create” button in Able2Extract, the software automatically makes a PDF document from your file.

To conclude this quick guide: the conversion of PDF files is precise, quick and most importantly — it can boost your office productivity. On the downside, the tool is aimed at experienced business professionals, with the full, lifetime license costing around $150.

To see if Able2Extract is a tool that can help you with your everyday document struggles, you can download the free trial version. It lasts for 7 days, which is more than enough time to make the right call.

See the video for further information on the features and workings of the Able2Extract software.


The Correlogram

A correlogram is a graph used to interpret a set of autocorrelation coefficients, in which $r_k$ is plotted against the lag $k$. A correlogram is often very helpful for visual inspection. Some general advice for interpreting a correlogram:

  • A Random Series: If a time series is completely random, then for large $N$, $r_k \cong 0$ for all non-zero values of $k$. For a random time series, $r_k$ is approximately $N\left(0, \frac{1}{N}\right)$, so 19 out of 20 of the values of $r_k$ can be expected to lie between $\pm \frac{2}{\sqrt{N}}$. However, when plotting the first 20 values of $r_k$, one can expect to find one significant value on average even when the time series really is random (see the sketch after this list).
  • Short-term Correlation: Stationary series often exhibit short-term correlation, characterized by a fairly large value of $r_1$ followed by 2 or 3 more coefficients that, while significantly greater than zero, tend to become successively smaller, with the values of $r_k$ at longer lags approximately zero. A time series that gives rise to such a correlogram is one in which an observation above the mean tends to be followed by one or more further observations above the mean, and similarly for observations below the mean. A model called an autoregressive model may be appropriate for series of this type.
  • Alternating Series: If a time series has a tendency to alternate, with successive observations on different sides of the overall mean, then the correlogram also tends to alternate. The value of $r_1$ will be negative; however, the value of $r_2$ will be positive, as observations at lag 2 will tend to be on the same side of the mean.
  • Non-Stationary Series: If a time series contains a trend, then the values of $r_k$ will not come down to zero except at very large lags. This is because an observation on one side of the overall mean tends to be followed by a large number of further observations on the same side of the mean, because of the trend. The sample autocorrelation function $\{ r_k \}$ should only be calculated for stationary time series, so any trend should be removed before calculating $\{ r_k\}$.
  • Seasonal Fluctuations: If a time series contains a seasonal fluctuation then the correlogram will also exhibit an oscillation at the same frequency. If $x_t$ follows a sinusoidal patterns then so does $r_k$.
    $x_t=a \cos(t\omega)$, where $a$ is a constant and $\omega$ is the frequency such that $0 < \omega < \pi$. Then $r_k \cong \cos(k\omega)$ for large $N$.
    If the seasonal variation is removed from seasonal data then the correlogram may provide useful information.
  • Outliers: If a time series contains one or more outliers, the correlogram may be seriously affected. If there is one outlier in the time series and it is not adjusted, then the plot of $x_t$ vs $x_{t+k}$ will contain two extreme points, which will tend to depress the sample correlation coefficients towards zero. If there are two outliers, this effect is even more noticeable.
  • General Remarks: Experience is required to interpret autocorrelation coefficients. We need to study the probability theory of stationary series and the classes of models, too. We also need to know the sampling properties of $r_k$.
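As a minimal sketch of the random-series case described in the list above, the following computes the first 20 sample autocorrelation coefficients of a simulated white-noise series and flags any that fall outside the approximate $\pm \frac{2}{\sqrt{N}}$ bounds (numpy is assumed; a library routine such as the one in statsmodels could be used instead):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=200)  # a completely random (white-noise) series
N = len(x)

def acf(series, k):
    """Sample autocorrelation coefficient r_k at lag k >= 1."""
    d = series - series.mean()
    return float(np.sum(d[:-k] * d[k:]) / np.sum(d ** 2))

bound = 2 / np.sqrt(N)  # approximate 95% limits for a random series
for k in range(1, 21):
    r_k = acf(x, k)
    flag = "  <-- outside bounds" if abs(r_k) > bound else ""
    print(f"lag {k:2d}: r_k = {r_k:+.3f}{flag}")
```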

Standard Error of Estimate

Standard error (SE) is a statistical term used to measure the accuracy with which a sample statistic estimates a population parameter. The standard error of the mean measures the variation in the sampling distribution of the sample mean; it is usually denoted by $\sigma_\overline{x}$ and is calculated as

\[\sigma_\overline{x}=\frac{\sigma}{\sqrt{n}}\]

Drawing (obtaining) different samples from the same population of interest usually results in different values of the sample mean, indicating that there is a distribution of sample means having its own mean (average value) and variance. The standard error of the mean is the standard deviation of the means of all possible samples drawn from the same population.

The size of the standard error is affected by the standard deviation of the population and by the number of observations in a sample, called the sample size. The larger the standard deviation of the population ($\sigma$), the larger the standard error will be, indicating more variability in the sample means. However, the larger the number of observations in a sample, the smaller the standard error will be, indicating less variability in the sample means, where by less variability we mean that the sample is more representative of the population of interest.

If the sampled population is not very large, we need to make an adjustment in computing the SE of the sample mean. For a finite population, in which the total number of objects (observations) is $N$ and the number of objects (observations) in the sample is $n$, the adjustment factor is $\sqrt{\frac{N-n}{N-1}}$. This adjustment is called the finite population correction factor. The adjusted standard error is then

\[\frac{\sigma}{\sqrt{n}} \sqrt{\frac{N-n}{N-1}}\]
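As a quick numeric sketch of the correction (the population and sample figures here are hypothetical):

```python
import math

# Hypothetical finite population: N = 1000 units with sigma = 10; sample of n = 100
N, n, sigma = 1000, 100, 10.0

se_uncorrected = sigma / math.sqrt(n)  # ignores the population size: 1.0
fpc = math.sqrt((N - n) / (N - 1))     # finite population correction: ~0.949
se_adjusted = se_uncorrected * fpc     # ~0.949

print(se_uncorrected, fpc, se_adjusted)
```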

The SE is used to:

  1. measure the spread of values of statistic about the expected value of that statistic
  2. construct confidence intervals
  3. test the null hypothesis about population parameter(s)

The standard error is computed from sample statistics. The formulas below give the SE for simple random samples, assuming that the size of the population ($N$) is at least 20 times larger than the sample size ($n$).
\begin{align*}
Sample\, mean,\, \overline{x} & \Rightarrow SE_{\overline{x}} = \frac{s}{\sqrt{n}}\\
Sample\, proportion,\, p &\Rightarrow SE_{p} = \sqrt{\frac{p(1-p)}{n}}\\
Difference\, b/w \, means,\, \overline{x}_1 - \overline{x}_2 &\Rightarrow SE_{\overline{x}_1-\overline{x}_2}=\sqrt{\frac{s_1^2}{n_1}+\frac{s_2^2}{n_2}}\\
Difference\, b/w\, proportions,\, p_1-p_2 &\Rightarrow SE_{p_1-p_2}=\sqrt{\frac{p_1(1-p_1)}{n_1}+\frac{p_2(1-p_2)}{n_2}}
\end{align*}

The standard error is computed in the same way as the standard deviation of the sampling distribution, except that it uses sample statistics, whereas the standard deviation uses population parameters.
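A minimal sketch implementing the four SE formulas above (the numeric inputs in the usage lines are hypothetical):

```python
import math

def se_mean(s, n):
    """SE of a sample mean: s / sqrt(n)."""
    return s / math.sqrt(n)

def se_proportion(p, n):
    """SE of a sample proportion."""
    return math.sqrt(p * (1 - p) / n)

def se_diff_means(s1, n1, s2, n2):
    """SE of the difference between two sample means."""
    return math.sqrt(s1 ** 2 / n1 + s2 ** 2 / n2)

def se_diff_proportions(p1, n1, p2, n2):
    """SE of the difference between two sample proportions."""
    return math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)

# Hypothetical usage
print(se_mean(12.0, 36))                        # 2.0
print(se_proportion(0.4, 100))                  # ~0.049
print(se_diff_means(10.0, 50, 12.0, 40))        # ~2.37
print(se_diff_proportions(0.4, 100, 0.5, 120))  # ~0.067
```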


For more about the SE, follow the link Standard Error of Estimate.

