Discovering the Odds Ratio

An odds ratio is a relative measure of effect that allows the intervention group of a study to be compared with the comparison (control) or placebo group. The odds ratio quantifies the strength and direction of the association between two groups or conditions.

Introduction to the Odds Ratio

The odds ratio (OR) is a measure of association used in statistics to compare the odds of an event occurring in one group to the odds of it occurring in another group. It is commonly used in case-control studies and logistic regression.

  • an OR of 1 indicates no difference between groups,
  • an OR greater than 1 suggests higher odds in the first group, and
  • an OR less than 1 suggests lower odds in the first group.

Medical students, students of clinical and psychological sciences, professionals allied to medicine who wish to enhance their understanding of the medical literature, and researchers from many fields encounter the odds ratio (OR) throughout their careers.

When computing the OR:

  • The numerator is the odds of the event in the intervention arm.
  • The denominator is the odds of the event in the control or placebo arm.

Dividing the numerator by the denominator yields the OR.

Calculating the Odds Ratio

The ratio of the probability of success to the probability of failure is known as the odds. If the probability of an event is $p_1$, then the odds are:
\[\text{Odds}=\frac{p_1}{1-p_1}\]

If the odds of the outcome are the same in both groups, the ratio will be 1, implying that there is no difference between the two arms of the study. For an adverse outcome, if the $OR>1$, the control group fares better than the intervention group, while if the $OR<1$, the intervention group fares better than the control group.

The odds ratio is the ratio of two odds and can be used to quantify how strongly a factor is associated with the response variable in a given model. If the probabilities of occurrence of an event are $p_1$ (for the first group) and $p_2$ (for the second group), then the OR is:
\[OR=\frac{\frac{p_1}{1-p_1}}{\frac{p_2}{1-p_2}}\]
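
As a worked illustration, here is a minimal R sketch of the two formulas above, using hypothetical probabilities $p_1=0.40$ and $p_2=0.25$:

  # Hypothetical probabilities, for illustration only
  p1 <- 0.40                 # probability of the event in the first group
  p2 <- 0.25                 # probability of the event in the second group

  odds1 <- p1 / (1 - p1)     # odds in the first group: 0.40/0.60 = 0.667
  odds2 <- p2 / (1 - p2)     # odds in the second group: 0.25/0.75 = 0.333

  OR <- odds1 / odds2        # odds ratio
  OR                         # [1] 2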

If the predictors are binary, then the OR for the $i$th factor is defined as
\[OR_i=e^{\beta_i}\]


Real-Life Examples of Odds Ratio

  1. Medical Research
    • Suppose we are interested in comparing the odds of developing a disease (e.g., lung cancer) in smokers versus non-smokers. If the OR is 2.5, smokers have 2.5 times the odds of developing lung cancer compared to non-smokers (see the sketch after this list).
  2. Public Health
    • Suppose we are interested in assessing the effectiveness of a vaccine, for example, by comparing the odds of contracting a disease (e.g., COVID-19) in vaccinated versus unvaccinated individuals. An OR less than 1 would indicate that the vaccine reduces the odds of infection.
  3. Social Sciences
    • Suppose we are studying the odds of students passing an exam based on attendance. For instance, if students who attend extra tutoring have an OR of 3.0 for passing, they have 3 times the odds of passing compared to those who do not attend.
  4. Marketing
    • Suppose we need to analyze the odds of customers purchasing a product after seeing an advertisement versus not seeing it. An OR greater than 1 suggests the ad increases the likelihood of purchase.
  5. Environmental Studies
    • Suppose we evaluate the odds of developing asthma among people living in high-pollution areas compared to those in low-pollution areas. An OR greater than 1 would indicate higher odds of asthma in high-pollution areas.
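
As a sketch of the smoking example above, the OR can be computed in R from a 2×2 table; the counts below are hypothetical, chosen so that the OR works out to 2.5:

  # Hypothetical 2x2 case-control counts (illustration only):
  #               cancer   no cancer
  # smokers           50         150
  # non-smokers       20         150
  cases_smokers <- 50;  controls_smokers <- 150
  cases_nonsmk  <- 20;  controls_nonsmk  <- 150

  odds_smokers <- cases_smokers / controls_smokers   # 50/150
  odds_nonsmk  <- cases_nonsmk  / controls_nonsmk    # 20/150
  OR <- odds_smokers / odds_nonsmk                   # (50 * 150) / (150 * 20)
  OR                                                 # [1] 2.5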

The regression coefficient $b_1$ from logistic regression is the estimated increase in the log odds of the outcome per unit increase in the value of the independent variable. In other words, the exponentiated regression coefficient $e^{b_1}$ is the OR associated with a one-unit increase in the independent variable.
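
This is also how $OR_i=e^{\beta_i}$ from the earlier section is obtained in practice. A minimal simulated R sketch (the data are generated for illustration, not taken from any study):

  set.seed(123)
  n <- 500
  x <- rbinom(n, 1, 0.5)                 # binary exposure indicator
  true_logit <- -1 + 0.9 * x             # true beta_1 = 0.9, so true OR = exp(0.9), about 2.46
  y <- rbinom(n, 1, plogis(true_logit))  # binary outcome

  fit <- glm(y ~ x, family = binomial)   # logistic regression

  exp(coef(fit))                         # exponentiated coefficients = odds ratios
  exp(confint.default(fit))              # Wald 95% confidence intervals for the ORs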


Application of Regression in Medical Sciences: A Quick Guide (2024)

The application of regression cannot be ignored: regression is a powerful statistical tool widely used in medical research to understand the relationships between variables. It helps identify risk factors, predict outcomes, and optimize treatment strategies.

Considering the application of regression analysis in medical sciences, Chan et al. (2006) used multiple linear regression to estimate standard liver weight for assessing the adequacy of graft size in live donor liver transplantation and of the remnant liver in major hepatectomy for cancer. Standard liver weight (SLW) in grams, body weight (BW) in kilograms, gender (male = 1, female = 0), and other anthropometric data of 159 Chinese liver donors who underwent donor right hepatectomy were analyzed. The formula (fitted model)

 \[SLW = 218 + 12.3 \times BW + 51 \times gender\]

 was developed with a coefficient of determination $R^2=0.48$.


These results mean that in Chinese people, on average, for each 1-kg increase of BW, SLW increases about 12.3 g, and, on average, men have a 51-g higher SLW than women. Unfortunately, SEs and CIs for the estimated regression coefficients were not reported. Using Formula 6 in their article, the SLW for Chinese liver donors can be estimated if BW and gender are known. About 50% of the variance of SLW is explained by BW and gender.
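
Given BW and gender, the fitted formula can be applied directly. A small R sketch with hypothetical donors:

  # Predicted standard liver weight (grams) from the fitted model above
  slw <- function(bw, gender) 218 + 12.3 * bw + 51 * gender   # gender: male = 1, female = 0

  slw(bw = 60, gender = 1)   # hypothetical 60-kg male donor:   218 + 738 + 51 = 1007 g
  slw(bw = 60, gender = 0)   # hypothetical 60-kg female donor: 218 + 738      =  956 g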

Regression analysis helps in:

  • Identifying risk factors: Determine which factors contribute to the development of a disease (for example, gender, age, smoking, and blood pressure for heart disease).
  • Predicting disease occurrence: Estimate the likelihood of a patient developing a disease based on specific risk factors. For example, logistic regression can be used to predict the risk of diabetes based on factors like BMI, age, and family history.

The following types of regression models are widely used in medical sciences (a minimal R sketch follows the list):

  • Linear regression: Used when the outcome variable is continuous (e.g., blood pressure, cholesterol levels).
  • Logistic regression: Used when the outcome variable is binary (e.g., disease present/absent, survival/death).
  • Cox proportional hazards regression: Used for survival analysis (time-to-event data).
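
A minimal R sketch of the three model families on simulated data; the variable names are hypothetical, and the survival package (which ships with R) provides Surv() and coxph():

  library(survival)   # for Surv() and coxph()

  # Simulated toy data (hypothetical variables, illustration only)
  set.seed(1)
  n   <- 200
  dat <- data.frame(
    age     = rnorm(n, 50, 10),
    smoker  = rbinom(n, 1, 0.3),
    sbp     = rnorm(n, 120, 15),    # continuous outcome (systolic blood pressure)
    disease = rbinom(n, 1, 0.2),    # binary outcome
    time    = rexp(n, 0.1),         # follow-up time
    status  = rbinom(n, 1, 0.7)     # event indicator (1 = event observed)
  )

  fit_linear   <- lm(sbp ~ age + smoker, data = dat)                     # linear regression
  fit_logistic <- glm(disease ~ age + smoker, family = binomial,
                      data = dat)                                        # logistic regression
  fit_cox      <- coxph(Surv(time, status) ~ age + smoker, data = dat)   # Cox PH regression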


Reference

  • Chan SC, Liu CL, Lo CM, et al. (2006). Estimating liver weight of adults by body weight and gender. World J Gastroenterol 12, 2217–2222.


Regression Model Assumptions

Linear Regression Model Assumptions

The linear regression model (LRM) is based on certain statistical assumptions. Some of them concern the distribution of the random error term $u_i$, some concern the relationship between the error term $u_i$ and the explanatory variables (independent variables, the $X$'s), and some concern the independent variables themselves. The linear regression model assumptions can be classified into two categories:

  1. Stochastic assumptions
  2. Non-stochastic assumptions

These linear regression model assumptions (that is, assumptions about the ordinary least squares (OLS) method) are critical to interpreting the regression coefficients; a short R sketch for checking some of them follows the list below.

Regression Model Assumptions
  • The error term ($u_i$) is a random real number, i.e., $u_i$ may assume any positive, negative, or zero value upon chance. Each value has a certain probability; therefore, the error term is a random variable.
  • The mean value of $u$ is zero, i.e., $E(u_i|X_i)=0$: the mean value of $u_i$, conditional on the given $X_i$, is zero. It means that for each value of the variable $X_i$, $u$ may take various values, some greater than zero and some smaller than zero. Considering all possible values of $u$ for any particular value of $X$, the disturbance term $u_i$ has a zero mean value.
  • The variance of $u_i$ is constant, i.e., for a given value of $X$, the variance of $u_i$ is the same for all observations: $E(u_i^2)=\sigma^2$. The disturbance term $u_i$ shows the same dispersion about its mean at all values of $X$ (homoscedasticity).
  • The variable $u_i$ has a normal distribution, i.e., $u_i\sim N(0,\sigma_u^2)$. The values of $u$ (for each $X_i$) have a bell-shaped symmetrical distribution.
  • The random terms of different observations ($u_i, u_j$) are independent, i.e., $E(u_i u_j)=0$ for $i\neq j$; there is no autocorrelation between the disturbances. The random term assumed in one period does not depend on its values in any other period.
  • $u_i$ and $X_i$ have zero covariance, i.e., $u$ is independent of the explanatory variable: $E(u_i X_i)=0$, i.e., $Cov(u_i, X_i)=0$. The disturbance term $u$ and the explanatory variable $X$ are uncorrelated; the $u$'s and $X$'s do not tend to vary together, as their covariance is zero. This assumption is automatically fulfilled if the $X$ variable is non-random (non-stochastic), given that the mean of the random term is zero.
  • All the explanatory variables are measured without error. That is, the regressors are assumed to be error-free, while $y$ (the dependent variable) may or may not include measurement errors.
  • The number of observations $n$ must be greater than the number of parameters to be estimated; that is, the number of observations must exceed the number of explanatory (independent) variables.
  • There should be variability in the $X$ values; the $X$ values in a given sample must not all be the same. Statistically, $Var(X)$ must be a finite positive number.
  • The regression model must be correctly specified, meaning there is no specification bias or error in the model used in empirical analysis.
  • No perfect or near-perfect multicollinearity or collinearity exists among the two or more explanatory (independent) variables.
  • Values taken by the regressor $X$ are considered fixed in repeated sampling, i.e., $X$ is assumed to be non-stochastic. Regression analysis is conditional on the given values of the regressor(s) $X$.
  • The linear regression model is linear in the parameters, e.g., $y_i=\beta_1+\beta_2x_i+u_i$.
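
As noted above, here is a short R sketch for checking a few of these assumptions empirically; the data are simulated (hypothetical) so that the assumptions hold by construction:

  set.seed(42)
  x <- runif(100, 0, 10)
  y <- 3 + 2 * x + rnorm(100)   # y_i = beta_1 + beta_2 x_i + u_i with u ~ N(0, 1)

  fit <- lm(y ~ x)
  res <- residuals(fit)

  mean(res)                 # ~ 0: E(u) = 0 (exactly zero by OLS construction)
  shapiro.test(res)         # normality of the disturbance term
  cor(res, x)               # ~ 0: zero covariance between u and X
  plot(fitted(fit), res)    # constant spread suggests homoscedasticity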
