Basic Statistics and Data Analysis

Lecture Notes, MCQs of Statistics

Cronbach’s Alpha Reliability Analysis of Measurement Scales

Reliability analysis is used to study the properties of measurement scales (such as Likert scale questionnaires) and the items (questions) that make them up. The reliability analysis procedure computes a number of commonly used measures of scale reliability and also provides information about the relationships between the individual items in the scale. Intraclass correlation coefficients can be used to compute inter-rater reliability estimates.

Suppose you want to know whether your questionnaire measures customer satisfaction in a useful way. For this purpose, you can use reliability analysis to determine the extent to which the items (questions) in your questionnaire are correlated with each other. An overall index of the reliability or internal consistency of the scale as a whole can be obtained, and you can also identify problematic items that should be removed (deleted) from the scale.

As an example, open the data file “satisf.sav”, already available among the SPSS sample files. To check the reliability of Likert scale items, follow the steps given below:

Step 1: On the menu bar of SPSS, click the Analyze > Scale > Reliability Analysis… option
[Screenshot: Reliability Analysis menu in SPSS]
Step 2: Select two or more variables that you want to test and shift them from the left pane to the right pane of the Reliability Analysis dialogue box. Note that multiple variables (items) can be selected by holding down the CTRL key and clicking the variables you want. Clicking the arrow button between the left and right panes will shift the variables to the items pane (right pane).
[Screenshot: Reliability Analysis dialogue box]
Step 3: Click the “Statistics” button to select further statistics, such as descriptives (for item, scale, and scale if item deleted), summaries (for means, variances, covariances, and correlations), inter-item statistics (correlations and covariances), and an ANOVA table (none, F-test, Friedman chi-square, or Cochran chi-square).

[Screenshot: Reliability Analysis Statistics dialogue box]

Click the “Continue” button to save the current statistics options for the analysis. Then click the OK button in the Reliability Analysis dialogue box to run the analysis on the selected items. The output will be shown in the SPSS Output window.

[Screenshot: Reliability Analysis output]

The Cronbach’s Alpha reliability ($\alpha$) is about 0.827, which is good enough. Note that deleting the item “organization satisfaction” would increase the reliability of the remaining items to 0.860.

A rule of thumb for interpreting alpha for dichotomous items (questions with only two possible answers) or Likert scale items (questions with 3, 5, 7, or 9 response options) is:

  • If Cronbach’s Alpha is $\alpha \ge 0.9$, the internal consistency of the scale is Excellent.
  • If Cronbach’s Alpha is $0.9 > \alpha \ge 0.8$, the internal consistency of the scale is Good.
  • If Cronbach’s Alpha is $0.8 > \alpha \ge 0.7$, the internal consistency of the scale is Acceptable.
  • If Cronbach’s Alpha is $0.7 > \alpha \ge 0.6$, the internal consistency of the scale is Questionable.
  • If Cronbach’s Alpha is $0.6 > \alpha \ge 0.5$, the internal consistency of the scale is Poor.
  • If Cronbach’s Alpha is $\alpha < 0.5$, the internal consistency of the scale is Unacceptable.

However, the rules of thumb listed above should be used with caution, since Cronbach’s Alpha is sensitive to the number of items in a scale. A larger number of questions can result in a larger alpha, while a smaller number of items may result in a smaller $\alpha$.
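
To see what SPSS computes here, the short sketch below (a minimal illustration in Python, not part of the original notes) calculates Cronbach’s Alpha from its standard formula $\alpha=\frac{k}{k-1}\left(1-\frac{\sum_{i=1}^{k}s_i^2}{s_T^2}\right)$, where $k$ is the number of items, $s_i^2$ is the sample variance of item $i$, and $s_T^2$ is the sample variance of the respondents’ total scores. The response matrix is hypothetical, not the satisf.sav data:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for a 2-D array: one row per respondent,
    one column per item (question)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                               # number of items
    item_variances = items.var(axis=0, ddof=1)       # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)   # variance of total scores
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical 5-point Likert responses (5 respondents x 4 items)
responses = [
    [4, 5, 4, 4],
    [3, 3, 2, 3],
    [5, 5, 5, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
]
print(round(cronbach_alpha(responses), 3))  # about 0.936 for this toy data
```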

Standard Deviation: A Measure of Dispersion


The standard deviation is a widely used concept in statistics; it tells us how much variation (spread or dispersion) there is in a data set. It can be defined as the positive square root of the mean (average) of the squared deviations of the values from their mean.
To calculate the standard deviation, follow these steps:

  1. First, find the mean of the data.
  2. Take the difference of each data point from the mean of the given data set (computed in step 1). Note that the sum of these differences must equal zero, or be near zero due to rounding of numbers.
  3. Square each of the differences obtained in step 2. Each squared difference will be greater than or equal to zero, that is, a non-negative quantity.
  4. Add up all the squared quantities obtained in step 3. We call this the sum of squared differences.
  5. Divide the sum of squared differences (obtained in step 4) by the total number of observations in the data if you are calculating the population standard deviation ($\sigma$). If you want to compute the sample standard deviation ($S$), divide the sum of squared differences by the total number of observations minus one ($n-1$), i.e., the degrees of freedom. Note that $n$ is the number of observations available in the data set.
  6. Find the square root of the quantity obtained in step 5. The resulting quantity is known as the standard deviation of the given data set.

For a set of $n$ observations $X_1, X_2, \cdots, X_n$, the population standard deviation ($\sigma$) and the sample standard deviation ($S$) are
\begin{aligned}
\sigma &=\sqrt{\frac{\sum_{i=1}^n (X_i-\overline{X})^2}{n}} \qquad \text{(Population Standard Deviation)}\\
S&=\sqrt{\frac{\sum_{i=1}^n (X_i-\overline{X})^2}{n-1}} \qquad \text{(Sample Standard Deviation)}
\end{aligned}
The standard deviation can be computed from variance too as $S= \sqrt{Variance}$.
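
As a worked illustration of the six steps listed above, here is a minimal Python sketch with a made-up data set:

```python
import math

data = [4, 8, 6, 5, 3, 7]

# Step 1: find the mean of the data
mean = sum(data) / len(data)                         # 5.5

# Steps 2-4: sum of the squared deviations from the mean
sum_of_squares = sum((x - mean) ** 2 for x in data)  # 17.5

# Steps 5-6: divide by n (population) or n-1 (sample), then take the square root
population_sd = math.sqrt(sum_of_squares / len(data))      # about 1.708
sample_sd = math.sqrt(sum_of_squares / (len(data) - 1))    # about 1.871

print(round(population_sd, 3), round(sample_sd, 3))
```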

A practical interpretation of the standard deviation, for data that are roughly bell-shaped (approximately normal), is that about 68% of the data values will lie within the range $\overline{X} \pm \sigma$, i.e., within one standard deviation of the mean, or simply within one $\sigma$. Similarly, about 95% of the data values will lie within the range $\overline{X} \pm 2 \sigma$, and about 99.7% within $\overline{X} \pm 3 \sigma$.
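
These percentages (the 68-95-99.7 or “empirical” rule) are easy to verify by simulation. The sketch below assumes a normal population with mean 50 and standard deviation 10:

```python
import numpy as np

# Draw a large sample from a normal population (assumed mean 50, sd 10)
rng = np.random.default_rng(seed=1)
x = rng.normal(loc=50, scale=10, size=100_000)

# Proportion of values within 1, 2, and 3 standard deviations of the mean
for k in (1, 2, 3):
    proportion = np.mean(np.abs(x - 50) <= k * 10)
    print(f"within {k} sd: {proportion:.3f}")  # roughly 0.683, 0.954, 0.997
```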

Examples of Standard Deviation and Variance

A large value of the standard deviation indicates more spread in the data set, which can be interpreted as inconsistent behaviour of the collected data. It means that the data points tend to be far away from the mean value. In the case of a smaller standard deviation, data points tend to be close (very close) to the mean, indicating the consistent behaviour of the data set.
The standard deviation and variance are both used to measure the risk of a particular investment in finance. A mean return of 15% with a standard deviation of 2% indicates that the investment is expected to earn a 15% return, and there is roughly a 68% chance that the return will actually be between 13% and 17%. Similarly, there is roughly a 95% chance that the investment will yield an 11% to 19% return.

Classical Probability: Example, Definition, and Uses in Life

Classical probability is the statistical concept that measures the likelihood (probability) of something happening. In the classical sense, it assumes that every statistical experiment contains outcomes that are equally likely to happen (equal chances of occurrence). Therefore, classical probability is the simplest form of probability, in which all outcomes have equal odds of happening.

Classical Probability Examples

Example 1: A typical example of classical probability is the rolling of a fair die, because it is equally probable that the top face will be any of the 6 numbers on the die: 1, 2, 3, 4, 5, or 6.

Example 2: Another example of classical probability would be tossing an unbiased coin. There is an equal probability that your toss will yield either head or tail.

Example 3: In selecting bingo balls, each numbered ball has an equal chance of being chosen.

Example 4: Guessing a multiple choice quiz (MCQs) test with (say) four possible answers A, B, C or D. Each option (choice) has the same odds (equal chances) of being picked (assuming you pick randomly and do not follow any pattern).

Formula for Classical Probability

The probability of a simple event happening is the number of ways the event can happen divided by the total number of possible outcomes.

Mathematically $P(A) = \frac{f}{N}$,

where $P(A)$ means “probability of event $A$” (event $A$ is whatever event you are looking for, such as winning the lottery, that is, the event of interest), $f$ is the frequency, i.e., the number of outcomes favourable to event $A$, and $N$ is the total number of equally likely outcomes.

For example, the probability of rolling a 2 on a fair die is one out of 6 (1/6). In other words, one favourable outcome (there is only one way to roll a 2 on a fair die) divided by the six possible outcomes.

Classical probability can be used for very basic events, like rolling a die or tossing a coin; it can also be used whenever all outcomes are equally likely. Choosing a card from a standard deck gives you a 1/52 chance of getting a particular card, no matter which card you choose. On the other hand, figuring out whether it will rain tomorrow is not something you can answer with this basic type of probability, because the outcomes are not equally likely. There might be a 15% chance of rain (and therefore an 85% chance of it not raining).
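
The classical value $f/N$ can also be checked by simulation; this small sketch estimates the probability of rolling a 2 on a fair die and compares it with 1/6:

```python
import random

# Simulate many rolls of a fair die and count how often a 2 appears
trials = 100_000
hits = sum(1 for _ in range(trials) if random.randint(1, 6) == 2)

print("simulated:", hits / trials)  # close to 0.1667
print("classical:", 1 / 6)
```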

Other Examples of Classical Probability

There are many other examples of classical probability problems besides rolling dice. These examples include flipping coins, drawing cards from a deck, guessing on a multiple choice test, selecting jellybeans from a bag, and choosing people for a committee, etc.

Where Classical Probability Cannot be Used

Dividing the number of favourable outcomes by the number of possible outcomes is very simplistic, and it is not suited to finding probabilities in many situations. For example, natural measurements like weights, heights, and test scores need normal distribution probability charts to calculate probabilities. In fact, most “real life” events are not simple ones like coins, cards, or dice; you will need something more sophisticated than classical probability theory to solve them.

For further detail, see Introduction to Probability

Skewness: Measure of Asymmetry

The terms skewed and askew are widely used and refer to something that is out of order or distorted on one side. Similarly, when referring to the shape of a frequency distribution or probability distribution, skewness refers to the asymmetry of that distribution. A distribution with an asymmetric tail extending out to the right is referred to as “positively skewed” or “skewed to the right”, while a distribution with an asymmetric tail extending out to the left is referred to as “negatively skewed” or “skewed to the left”. The range of skewness is from minus infinity ($-\infty$) to positive infinity ($+\infty$). In simple words, skewness is a measure of the lack of symmetry.

Karl Pearson (1857-1936) first suggested measuring skewness by standardizing the difference between the mean and the mode: $skewness=\frac{\mu-mode}{\text{standard deviation}}$. Since population modes are not well estimated from sample modes, Stuart and Ord (1994) suggested estimating the difference between the mean and the mode as three times the difference between the mean and the median. The estimate of skewness then becomes $skewness=\frac{3(\text{mean}-\text{median})}{\text{standard deviation}}$. Many statisticians use this measure after dropping the ‘3’, that is, $skewness=\frac{\text{mean}-\text{median}}{\text{standard deviation}}$. This statistic ranges from $-1$ to $+1$. According to Hildebrand (1986), absolute values above 0.2 indicate great skewness.

Skewness has also been defined with respect to the third moment about the mean: $\gamma_1=\frac{\sum(X-\mu)^3}{n\sigma^3}$, which is simply the expected value of the distribution of cubed $z$ scores. Skewness measured in this way is also sometimes referred to as “Fisher’s skewness”. When the deviations from the mean are greater in one direction than in the other, this statistic deviates from zero in the direction of the larger deviations. From sample data, Fisher’s skewness is most often estimated by $g_1=\frac{n\sum z^3}{(n-1)(n-2)}$. For large sample sizes ($n > 150$), $g_1$ may be distributed approximately normally, with a standard error of approximately $\sqrt{\frac{6}{n}}$. While one could use this sampling distribution to construct confidence intervals for, or tests of hypotheses about, $\gamma_1$, there is rarely any value in doing so.

Arthur Lyon Bowley (1869-1957) also proposed a measure of skewness based on the median and the two quartiles. In a symmetrical distribution, the two quartiles are equidistant from the median, but in an asymmetrical distribution this is not the case. Bowley’s coefficient of skewness is $skewness=\frac{Q_1+Q_3-2\,\text{median}}{Q_3-Q_1}$. Its value lies between $-1$ and $+1$.
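
The three measures discussed above (the Pearson mean-minus-median measure, Fisher’s $g_1$, and Bowley’s quartile coefficient) can be computed side by side. The sketch below uses a small made-up right-skewed sample:

```python
import numpy as np

def skewness_measures(data):
    """Sample versions of the three skewness measures discussed above."""
    x = np.asarray(data, dtype=float)
    n = len(x)
    mean, median = x.mean(), np.median(x)
    s = x.std(ddof=1)
    q1, q3 = np.percentile(x, [25, 75])
    z = (x - mean) / s
    return {
        "Pearson (mean - median)/s": (mean - median) / s,
        "Fisher g1": n * np.sum(z ** 3) / ((n - 1) * (n - 2)),
        "Bowley": (q1 + q3 - 2 * median) / (q3 - q1),
    }

# Made-up right-skewed sample: all three measures come out positive
sample = [1, 2, 2, 3, 3, 3, 4, 5, 9, 14]
for name, value in skewness_measures(sample).items():
    print(f"{name}: {value:.3f}")
```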

The most commonly used measures of skewness (those discussed here) may produce some surprising results, such as a negative value when the shape of the distribution appears skewed to the right.

It is important for researchers in the behavioural and business sciences to measure skewness when it appears in their data. A great amount of skewness may motivate the researcher to investigate the existence of outliers. When making decisions about which measure of location to report and which inferential statistic to employ, one should take the estimated skewness of the population into consideration. Normal distributions have zero skewness; of course, a distribution can be perfectly symmetric and yet far from normal. Transformations of the variable under study are commonly employed to reduce (positive) skewness; these transformations include the square root, logarithm, and reciprocal of the variable, as illustrated below.
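
A minimal sketch of this idea, using simulated lognormal (strongly right-skewed) data and the $g_1$ estimate defined earlier:

```python
import numpy as np

def g1(data):
    """Adjusted Fisher-Pearson skewness estimate (see the formula above)."""
    x = np.asarray(data, dtype=float)
    n = len(x)
    z = (x - x.mean()) / x.std(ddof=1)
    return n * np.sum(z ** 3) / ((n - 1) * (n - 2))

# Simulated lognormal data: strongly skewed to the right
rng = np.random.default_rng(seed=7)
x = rng.lognormal(mean=0.0, sigma=1.0, size=10_000)

print(f"raw:         {g1(x):.2f}")           # large positive skewness
print(f"square root: {g1(np.sqrt(x)):.2f}")  # skewness reduced
print(f"log:         {g1(np.log(x)):.2f}")   # near zero (log of lognormal is normal)
```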

For more about skewness, see Skewness

Creating Formula in Excel: Operators Order of Precedence

Creating customized (user-defined) formulas in Microsoft Excel is not too difficult: just combine the cell references of your data with the correct mathematical operators (such as +, -, *, / and ^).

Microsoft Excel Order of Precedence

The order of mathematical operations determines the sequence in which calculations are carried out. If more than one mathematical operator is used in a formula, there is a specific order (sequence) that Microsoft Excel follows to perform (compute) these operations. To change the order of operations, brackets (parentheses) are used in the Excel formula. An easy way to remember the order of operations (precedence) is the acronym BEDMAS (or PEMDAS), that is:

The order of operations (precedence) is:

Bracket or Parenthesis
Exponents (^)
Division (/)
Multiplication (*)
Addition (+)
Subtraction (-)

Suppose the following is a screenshot of an Excel sheet, with the formula shown in the formula bar. As an example, the addition (+), division (/), and multiplication (*) operators are used.

[Screenshot: order of precedence example] The formula in the screenshot performs the computation in the following order:

  • E1/F1 will be computed first (answer is 1.5),
  • the result of E1/F1 will then be multiplied by the value of G1 (answer is 1.5*2 = 3),
  • the result of E1/F1 * G1 will be added to D1 (answer is 7).

Any operation(s) enclosed in brackets (parentheses) will be carried out first, followed by any exponents. After that, Excel treats division and multiplication as being of equal importance; these operations are performed in the order they occur, left to right, in the formula. The same applies to addition and subtraction: both are considered equal in the order of operations, and the operator that appears first is computed first.

For example, see the screenshot [Screenshot: order of precedence with brackets]. The sequence of operations is:

  • First, the bracket will be computed, that is, the multiplication of F1 and G1 (2*2 = 4),
  • E1 will be divided by the result of that multiplication (3/4 = 0.75),
  • lastly, D1 will be added to the answer 0.75 (4 + 0.75 = 4.75).
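
Python happens to follow the same precedence rules, so both worked examples can be checked directly. The cell values below (D1=4, E1=3, F1=2, G1=2) are inferred from the arithmetic shown in the two examples:

```python
# Cell values inferred from the worked examples above
D1, E1, F1, G1 = 4, 3, 2, 2

print(D1 + E1 / F1 * G1)    # 7.0  -> / and * first (left to right), then +
print(D1 + E1 / (F1 * G1))  # 4.75 -> bracket first, then /, then +
```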

Now check the sequence of operations in the following screenshot yourself. [Screenshot: a further order-of-precedence example]

For creating formulas in Excel, see the link Creating Excel Formula

 
