Logistic regression Introduction

Logistic regression was introduced in the 1930s by Ronald Fisher and Frank Yates, and was proposed in the 1970s as an alternative technique to overcome the limitations of ordinary least squares regression in handling dichotomous outcomes. It is a probabilistic statistical classification model that is non-linear, but it can be converted into a linear model by a simple transformation. It is used to predict a binary (categorical) response variable on the basis of one or more predictor variables; that is, it is used in estimating the empirical values of the parameters in the model. Here the response variable takes the value zero or one, i.e. it is a dichotomous variable. A logistic regression model is written as

  \[\pi=\frac{1}{1+e^{-[\alpha +\sum_{i=1}^k \beta_i X_i]}}\]

where $\alpha$ is the intercept and the $\beta_i$ are the slope coefficients.
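As a small numeric sketch of this formula (the coefficient values below are made up purely for illustration), the probability can be computed directly:

```python
import math

def logistic_prob(alpha, betas, xs):
    """pi = 1 / (1 + e^{-(alpha + sum_i beta_i * x_i)})."""
    linear = alpha + sum(b * x for b, x in zip(betas, xs))
    return 1.0 / (1.0 + math.exp(-linear))

# With a linear predictor of 0 the probability is exactly 0.5
print(logistic_prob(0.0, [0.0], [1.0]))  # 0.5
```

Whatever the values of the predictors, the result always lies between 0 and 1, which is what makes the model suitable for probabilities.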

So, in simple words, logistic regression is used to find the probability of occurrence of the outcome of interest. For example, if we want to find the significance of different predictors (gender, sleeping hours, participation in extracurricular activities, etc.) for a binary response (pass or fail in exams, coded as 0 and 1), we use logistic regression.

By using a transformation, this non-linear regression model can easily be converted into a linear model. As $\pi$ is the probability of the event in which we are interested, if we take the natural log of the ratio of the probabilities of success and failure, the model becomes linear.


That is, the natural log of the odds converts the logistic regression model into linear form.
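Written out, this logit transformation of the model above is

  \[\ln\left(\frac{\pi}{1-\pi}\right)=\alpha +\sum_{i=1}^k \beta_i X_i\]

so the log odds are a linear function of the predictors.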




Odds Ratio Introduction

Medical students, students of the clinical and psychological sciences, professionals allied to medicine, and researchers from many fields encounter the Odds Ratio (OR) throughout their careers when reading the medical literature.

The odds ratio is a relative measure of effect, allowing the comparison of the intervention group of a study relative to the comparison or placebo group. When computing an Odds Ratio:

  • The numerator is the odds in the intervention arm
  • The denominator is the odds in the control or placebo arm

The OR is the numerator divided by the denominator.

If the odds are the same in both groups, the ratio will be 1, implying that there is no difference between the two arms of the study. If the OR>1, the event is more likely in the intervention group, and if the OR<1 it is less likely; for an adverse outcome, an OR>1 therefore favours the control group, while an OR<1 favours the intervention group.

The ratio of the probability of success to the probability of failure is known as the odds. If the probability of an event is $P_1$, then the odds are:

  \[Odds=\frac{P_1}{1-P_1}\]

The Odds Ratio is the ratio of two odds and can be used to quantify how strongly a factor is associated with the response in a given model. If the probabilities of occurrence of an event are $P_1$ (for the first group) and $P_2$ (for the second group), then the OR is:

  \[OR=\frac{P_1/(1-P_1)}{P_2/(1-P_2)}\]

If the predictors are binary, then the OR for the $i$th factor is defined as

  \[OR_i=e^{\beta_i}\]

The regression coefficient $b_1$ from logistic regression is the estimated increase in the log odds of the dependent variable per unit increase in the value of the independent variable. In other words, the exponential function of the regression coefficient $(e^{b_1})$ is the OR associated with a one unit increase in the independent variable.
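As a quick numeric sketch (the probabilities below are made up for illustration), the odds, the OR, and the coefficient-to-OR mapping can be computed directly:

```python
import math

def odds(p):
    """Odds = p / (1 - p)."""
    return p / (1 - p)

def odds_ratio(p1, p2):
    """OR = odds in group 1 divided by odds in group 2."""
    return odds(p1) / odds(p2)

# Hypothetical event probabilities: 0.6 in the intervention arm, 0.5 in the control arm
print(round(odds_ratio(0.6, 0.5), 2))  # 1.5
# A logistic regression coefficient b corresponds to an OR of e^b
print(round(math.exp(0.405), 3))
```

Note that a probability of 0.6 against 0.5 is a modest difference, yet the OR of 1.5 makes the comparison explicit in odds terms.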



Median Measure of Central Tendency

The median is the middle value in a data set when all of the values (observations) are arranged in ascending or descending order of magnitude. The median is a measure of central tendency that divides the data set into two halves, with 50% of the observations below the median value and 50% above it. If a data set has an odd number of observations (data points), the median is the single middle value of the sorted data.

Example: Consider the following data set 5, 9, 8, 4, 3, 1, 0, 8, 5, 3, 5, 6, 3.
To find the median of the given data set, first sort it (in ascending or descending order), that is,
0, 1, 3, 3, 3, 4, 5, 5, 5, 6, 8, 8, 9. The middle value of the sorted data is 5, which is the median of the given data set.

When the number of observations in a data set is even, the median is the average of the two middle values in the sorted data.

Example: Consider the following data set, 5, 9, 8, 4, 3, 1, 0, 8, 5, 3, 5, 6, 3, 2.
To find the median, first sort the data and then locate the two middle values, that is,
0, 1, 2, 3, 3, 3, 4, 5, 5, 5, 6, 8, 8, 9. The two middle values are 4 and 5, so the median is the average of these two, i.e. 4.5 in this case.

The median is less affected by extreme values in the data set, so it is the preferred measure of central tendency when the data set is skewed or not symmetrical.

For a large data set it is relatively difficult to locate the median in the sorted data by inspection, so it is helpful to find the median using a formula. For an odd number of observations the formula is

  \[\text{Median}=\left(\frac{n+1}{2}\right)\text{th value}=\left(\frac{13+1}{2}\right)\text{th}=7\text{th value}\]

The 7th value in the sorted data is the median of the given data, i.e. 5.

The median formula for an even number of observations is

\begin{align*}
\text{Median}&=\frac{1}{2}\left[\left(\frac{n}{2}\right)\text{th value} + \left(\frac{n}{2}+1\right)\text{th value}\right]\\
&=\frac{1}{2}\left[\left(\frac{14}{2}\right)\text{th} + \left(\frac{14}{2}+1\right)\text{th}\right]\\
&=\frac{1}{2}\left(7\text{th} + 8\text{th}\right)\\
&=\frac{1}{2}(4 + 5)= 4.5
\end{align*}
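The two formulas above can be checked with a short sketch (written in Python purely for illustration), using the data sets from the two examples:

```python
def median(values):
    """Middle value for odd n; mean of the two middle values for even n."""
    s = sorted(values)
    n = len(s)
    mid = n // 2
    if n % 2 == 1:
        return s[mid]
    return (s[mid - 1] + s[mid]) / 2

odd_data = [5, 9, 8, 4, 3, 1, 0, 8, 5, 3, 5, 6, 3]
even_data = odd_data + [2]
print(median(odd_data))   # 5
print(median(even_data))  # 4.5
```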

Note that the median, as a measure of central tendency, cannot be found for nominal (categorical) data.



Pseudo Random Process

Random Number

Every random experiment results in two or more outcomes.
A variable whose values depend upon the outcomes of a random experiment is called a random variable. Random variables are denoted by capital letters X, Y, or Z, and their values by the corresponding small letters x, y, or z.

Random Numbers and their Generation

Random numbers are a sequence of digits from the set {0, 1, 2, ⋯, 9} such that, at each position in the sequence, each digit has the same probability 0.1 of being selected, irrespective of the sequence constructed so far.

The simplest ways of achieving such numbers are games of chance, such as dice, coins, or cards, or repeatedly drawing numbered slips out of a jar. The digits are usually grouped purely for convenience of reading, but producing long runs this way becomes very tedious. Fortunately, tables of random digits are now widely available.

Pseudo Random Process

A pseudo random process is a process that appears to be random but actually is not. Pseudo random sequences typically exhibit statistical randomness while being generated by an entirely deterministic causal process. Such a process is easier to produce than a genuinely random one, and it has the benefit that it can be used again and again to produce exactly the same numbers, which is useful for testing and debugging software.

For implementation on computers, the most common methods for providing such sequences of digits easily and quickly are called pseudo random techniques.

Here, the digits will eventually re-appear in the same order (a cycle). For a good technique the cycle might be tens of thousands of digits long.
Of course, pseudo random digits are not truly random; in fact, they are completely deterministic, but they exhibit most of the properties of random digits. Generally, these methods involve a recursive formula (a linear congruential generator), e.g.

\[x_{n+1}= (a x_n +b)\bmod m; \quad n=0, 1, 2, \ldots\]

where a, b, and m are suitably chosen integer constants and the seed $x_0$ (the starting number, i.e. n = 0) is an integer. (Note: mod m means that the result from the formula is divided by m and the remainder is kept as the random number.)

Use of this formula gives rise to a sequence of integers, each of which is in the range 0 to m – 1.


Example: let a = 13, b = 5, and m = 1000. Generate 500 random numbers.


\[x_{n+1}=(13 x_n + 5)\bmod 1000; \quad n=0,1,2,\ldots\]

Let the seed be $x_0=5$; then for n = 0 and n = 1 we have

\begin{align*}
x_{1}&=(13 \times 5 + 5)\bmod 1000=70\\
x_{2}&=(13 \times 70 + 5)\bmod 1000=915
\end{align*}
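Continuing the recursion for all 500 values by hand would be tedious; a short sketch (Python here, purely for illustration) implements the recurrence with the constants above:

```python
def lcg(a, b, m, seed, count):
    """Linear congruential generator: x_{n+1} = (a*x_n + b) mod m."""
    x = seed
    out = []
    for _ in range(count):
        x = (a * x + b) % m
        out.append(x)
    return out

# Constants from the worked example: a = 13, b = 5, m = 1000, seed x0 = 5
nums = lcg(13, 5, 1000, 5, 500)
print(nums[:2])  # [70, 915] -- matches the hand calculation
```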

Applications of Random Numbers

Random numbers have wide applicability in simulation techniques (also called Monte Carlo methods), which have been applied to many problems in the various sciences. They are useful in situations where direct experimentation is not possible, where the cost of conducting an experiment is very high, or where the experiment takes too much time.

R code to Generate Random Numbers

x <- numeric(501)  # store the pseudo random output
x[1] <- 5          # seed x0 = 5
for(i in 1:500) x[i + 1] <- (13 * x[i] + 5) %% 1000




Mode Measure of Central Tendency

The mode is the most frequent observation in a data set, i.e. the value (number) that appears most often. It is possible that a data set has more than one mode, or no mode at all. Usually the mode is used for categorical data (data on a nominal or ordinal scale), but this is not necessary: the mode can also be used for interval and ratio scale data, provided there are repeated values in the data set or the data can be classified into groups. If none of the data points share a value (no repetition in the data values), then the mode does not exist or may not be meaningful. A data set having more than one mode is called multimodal.

Example 1: Consider the following data set showing the weights of children at age 10: 33, 30, 23, 23, 32, 21, 23, 30, 30, 22, 25, 33, 23, 23, 25. We can find the mode by tabulating the data in a frequency distribution table, whose first column is the weight and whose second column is the number of times that weight appears in the data, i.e. the frequency of each weight in the first column.

Weight of 10 year old child Frequency
21 1
22 1
23 5
25 2
30 3
32 1
33 2
Total 15

From the above frequency distribution table we can easily find the most frequently occurring observation (data point), which is the mode of the data set. The mode of the given data set is therefore 23, meaning that more of these 10 year old children weigh 23kg than any other weight. Note that a frequency distribution table is not necessary for finding the mode, but it helps to find the mode quickly, and the table can also be used in further calculations such as the percentage and cumulative percentage of each weight group.
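The frequency count above can be reproduced with a short sketch (Python here, purely for illustration), tallying the weights from Example 1:

```python
from collections import Counter

# Weights of 10 year old children from Example 1
weights = [33, 30, 23, 23, 32, 21, 23, 30, 30, 22, 25, 33, 23, 23, 25]
freq = Counter(weights)

# most_common(1) returns the (value, count) pair with the highest frequency
mode_value, mode_count = freq.most_common(1)[0]
print(mode_value, mode_count)  # 23 5
```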

Example 2: Suppose we have information about each person's gender, where M stands for male and F for female. The sequence of genders recorded is as follows: F, F, M, F, F, M, M, M, M, F, M, F, M, F, M, M, M, F, F, M. The frequency distribution table of gender is

Gender Frequency
Male 11
Female 9
Total 20

The mode of the gender data is male, showing that the majority of the people in this data set are male.

The mode can be found by simply sorting the data in ascending or descending order. It can also be found by counting the most frequent value without sorting, especially when the data contain a small number of observations, though it may then be difficult to keep track of how many times each observation occurs. Note that the mode is not affected by extreme values (outliers or influential observations).

The mode is also a measure of central tendency, but it may not reflect the center of the data very well. For example, the mean of the data set in Example 1 is 26.4kg, while the mode is 23kg.

One should use the mode as a measure of central tendency if the data points are expected to repeat or to have some classification. For example, in a production process, products can be classified as defective or non-defective. Similarly, student grades can be classified as A, B, C, D, etc. For such data, one should use the mode instead of the mean or median.

Example 3: Consider the following data: 3, 4, 7, 11, 15, 20, 23, 22, 26, 33, 25, 13. There is no mode for these data, as each value occurs only once. However, by grouping the data in some useful and meaningful form we can obtain a mode; for example, the grouped frequency table is

Group Values Frequency
0 to 9 3, 4, 7 3
10 to 19 11, 13, 15 3
20 to 29 20, 22, 23, 25, 26 5
30 to 39 33 1
Total 12

From this table we cannot find a single most frequent value, but we can say that "20 to 29" is the group in which most of the observations fall. This group contains the mode, which can be found by using the mode formula for grouped data.
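A commonly used formula for the mode of grouped data (stated here for reference) is

  \[\text{Mode}= l + \frac{f_m-f_1}{(f_m-f_1)+(f_m-f_2)}\times h\]

where $l$ is the lower boundary of the modal class, $f_m$ is the frequency of the modal class, $f_1$ and $f_2$ are the frequencies of the classes immediately before and after it, and $h$ is the class width.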

