The Secular Trend Example and Uses (2020)

For the estimation of the secular trend of a time series, the most common approach is to fit a mathematical curve such as a straight line $\hat{y} = a+bx$, an exponential curve $\hat{y}=ab^x$, or a second-degree parabola $\hat{y}=a+bx+cx^2$, where $y$ is the value of the time series variable, $x$ represents time, and the remaining quantities (such as the intercept $a$ and the slope $b$) are constants. The method of least squares is widely used to determine the values of the constants appearing in such an equation.
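
As a minimal sketch, all three trend equations can be fitted by least squares in R with lm(); the data below are hypothetical, and the exponential curve is fitted in its log-linear form:

```r
# Hypothetical yearly series (illustrative values only)
y <- c(112, 118, 127, 135, 148, 156, 170, 183, 199, 214)
x <- 1:length(y)                 # coded time points

linear    <- lm(y ~ x)           # y-hat = a + b x
parabola  <- lm(y ~ x + I(x^2))  # y-hat = a + b x + c x^2
loglinear <- lm(log(y) ~ x)      # log y = log a + x log b, i.e. y-hat = a b^x

coef(linear)
coef(parabola)
exp(coef(loglinear))             # back-transformed: a and b of y-hat = a b^x
```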

The Secular Trend is used

  • for prediction (or projection) into the future;
  • for detrending, that is, removing the trend from a time series so that the other, non-trend fluctuations can be studied;
  • for historical description of how the series has evolved.

The secular trend can be represented either by a straight line or by some type of smooth curve. It is commonly measured by the following method:

Least Squares Method (secular trend)

The secular trend may be used to determine how a time series has grown in the past or to make forecasts. The trend line is also used to adjust a series, that is, to eliminate the effect of the secular trend so that the non-trend fluctuations can be isolated.
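
A minimal sketch of these two uses (again with hypothetical values): predict() projects the fitted trend forward, and the residuals give the detrended series:

```r
y <- c(112, 118, 127, 135, 148, 156, 170, 183, 199, 214)  # hypothetical values
x <- 1:length(y)
trend <- lm(y ~ x)                                        # fitted trend line

predict(trend, newdata = data.frame(x = 11:12))  # forecast two periods ahead

detrended <- residuals(trend)    # trend removed: non-trend fluctuations remain
plot(x, detrended, type = "b")
```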

Note that

  • These trends can be positive or negative. For example, the advancement of technology offers new opportunities but also raises concerns about job displacement and privacy.
  • These trends can be interrelated. For instance, urbanization might be fueled by technological advancements that allow people to work remotely.
  • Identifying secular trends can be challenging, as they unfold over a long period. However, by analyzing historical data, monitoring current developments, and considering expert opinions, one can gain valuable insights into the long-term direction of change.

By understanding and utilizing secular trends, individuals, businesses, and policymakers can make informed decisions, prepare for future challenges, and capitalize on emerging opportunities in a constantly evolving world.

Heteroscedasticity Consequences

Heteroscedasticity refers to a situation in which the variability of the errors (residuals) in a regression model is not constant across all levels of the independent variable(s); that is, the assumption of homoscedasticity in the linear regression model (LRM) is violated.

In brief, the consequences of heteroscedasticity are:

  • The OLS estimators and regression predictions based on them remain unbiased and consistent.
  • The OLS estimators are no longer the BLUE (Best Linear Unbiased Estimators) because they are no longer efficient, so the regression predictions will be inefficient too.
  • Because the usual estimator of the covariance matrix of the estimated regression coefficients is biased and inconsistent, the tests of hypotheses (t-test, F-test) are no longer valid.

A more detailed discussion of the consequences of heteroscedasticity follows:

  1. Inefficient Estimates: As a result of the violation of the homoscedasticity assumption, the OLS estimates become inefficient; that is, the estimators are no longer the Best Linear Unbiased Estimators (BLUE) and therefore may have larger standard errors. These larger standard errors may lead to incorrect conclusions about the statistical significance of the regression coefficients.
  2. Imprecise Estimates: In the presence of heteroscedasticity, the ordinary least squares estimators (OLSE) are still unbiased, but they are no longer the most efficient estimators, as they may have larger variances than alternative estimators. Consequently, the estimated coefficients in any given sample may lie far from the true population parameters.
  3. Incorrect Standard Errors: The standard errors of the regression coefficients are biased in the presence of heteroscedasticity, which leads to inaccurate inference in hypothesis testing, including incorrect t-tests, F-tests, and p-values. Researchers may mistakenly conclude that a variable is not statistically significant when it is, or vice versa.
  4. Invalid Inference: Unreliable standard errors may also lead to invalid inferences about the population parameters, because confidence intervals and hypothesis tests based on these estimates become unreliable and may fail to cover the population parameter at the stated confidence level.
  5. Model Misspecification: Heteroscedasticity may indicate a misspecification of the underlying model. If the assumption of constant variance is violated, it suggests that there may be unaccounted-for factors or omitted variables influencing the variability of the errors. It suggests that the model may not be capturing all the variability in the data adequately.
  6. Inflated Type I Errors: Heteroscedasticity can lead to inflated Type I errors (false positives) in hypothesis tests. Researchers might mistakenly reject null hypotheses when they should not, leading to incorrect conclusions.
  7. Suboptimal Forecasting: Models affected by heteroscedasticity may provide suboptimal forecasts since the variability of the errors is not accurately captured. This can impact the model’s ability to make reliable predictions.
  8. Robustness Issues: Heteroscedasticity can make regression models less robust, meaning that their performance deteriorates when applied to different datasets or when the underlying assumptions are not met.

Test for heteroscedasticity using, for example, the Breusch-Pagan test or the White test, and consider corrective measures such as weighted least squares regression or transforming the data.
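
As a minimal sketch (simulated data; the lmtest package is assumed to be installed), the Breusch-Pagan test and a White-type test can be run in R as follows:

```r
library(lmtest)   # provides bptest()

set.seed(1)
d <- data.frame(x = runif(200, 1, 10))
d$y <- 2 + 3 * d$x + rnorm(200, sd = 0.5 * d$x)  # error spread grows with x
model <- lm(y ~ x, data = d)

bptest(model)                          # Breusch-Pagan test
bptest(model, ~ x + I(x^2), data = d)  # White-type test (squared term added)
```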

OLS Estimation in the Presence of Heteroscedasticity (2020)

The Ordinary Least Squares (OLS) method is widely used in regression analysis for the estimation of the parameters of a linear regression model. However, when heteroscedasticity exists (the situation where the variance of the error terms is not constant across observations), an assumption of OLS is violated. This violation leaves the coefficient estimates unbiased but inefficient, and it renders the usual standard errors, hypothesis tests, and confidence intervals unreliable. For more information see Consequences of Heteroscedasticity.

For the OLS Estimation in the presence of heteroscedasticity, consider the two-variable model, where lowercase letters denote deviations from the sample means ($x_i = X_i - \bar{X}$, $y_i = Y_i - \bar{Y}$):

\begin{align*}
Y_i &= \beta_1 +\beta_2X_i + u_i\\
\hat{\beta}_2&=\frac{\sum x_i y_i}{\sum x_i^2}\\
Var(\hat{\beta}_2)&= \frac{\sum x_i^2\, \sigma_i^2}{(\sum x_i^2)^2}
\end{align*}

Under the assumption of homoscedasticity, the variance of the OLS estimator is $Var(\hat{\beta}_2)=\frac{\sigma^2}{\sum x_i^2}$. If $\sigma_i^2=\sigma^2$ for all $i$, the two expressions for $Var(\hat{\beta}_2)$ coincide.
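
A small numeric illustration in R (with made-up numbers): under constant $\sigma_i^2$ the two expressions agree, and under heteroscedasticity they differ:

```r
X    <- c(2, 4, 6, 8, 10)
xdev <- X - mean(X)                  # x_i = X_i - X-bar

# Homoscedastic case: sigma_i^2 = 4 for all i
sum(xdev^2 * 4) / sum(xdev^2)^2      # general formula
4 / sum(xdev^2)                      # sigma^2 / sum(x_i^2): same value

# Heteroscedastic case: the general formula no longer reduces
sigma2_i <- c(1, 2, 4, 8, 16)
sum(xdev^2 * sigma2_i) / sum(xdev^2)^2
```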

Note that in the case of heteroscedasticity, the OLS estimator $\hat{\beta}_2$ has the following properties (a short simulation after the list illustrates them):

  • $\hat{\beta_2}$ is BLUE if the assumptions of the classical model, including homoscedasticity, hold.
  • To establish the unbiasedness of $\hat{\beta}_2$, it is not necessary for the disturbances ($u_i$) to be homoscedastic.
  • The variance of $u_i$, homoscedasticity, or heteroscedasticity plays no part in the determination of the unbiasedness property.
  • $\hat{\beta}_2$ will be a consistent estimator despite heteroscedasticity.
  • As the sample size increases indefinitely, $\hat{\beta}_2$ (the estimated $\beta_2$) converges to its true value.
  • $\hat{\beta}_2$ is asymptotically normally distributed.
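A short simulation sketch (an assumed setup, not from the original text): the OLS slope remains centred on its true value under heteroscedastic errors, and its sampling spread shrinks as $n$ grows:

```r
set.seed(123)
slope <- function(n) {
  X <- runif(n, 1, 10)
  u <- rnorm(n, sd = 0.8 * X)       # heteroscedastic: error sd grows with X
  Y <- 1 + 3 * X + u                # true beta_2 = 3
  coef(lm(Y ~ X))[2]
}

mean(replicate(2000, slope(30)))    # approximately 3: unbiased
sd(replicate(2000, slope(30)))      # sampling spread at n = 30
sd(replicate(2000, slope(500)))     # much smaller at n = 500: consistency
```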

Overall, addressing the existence of heteroscedasticity in regression analysis is crucial to ensure the validity and reliability of the estimated parameters and inference results. Various methods and techniques are available to account for heteroscedasticity and obtain accurate estimates in regression analysis. For more details see Tests of Heteroscedasticity.

The best approach to address heteroscedasticity depends on the specific situation and the characteristics of the data being studied. The general guidelines are:

  • For mild heteroscedasticity, robust standard errors might be sufficient.
  • If the form of heteroscedasticity is known and you are comfortable with the required assumptions, consider weighted least squares (WLS) or generalized least squares (GLS); see the sketch after this list.
  • Data transformation can be a simple solution, but weigh the benefits against the potential drawbacks of interpreting the transformed coefficients.
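
A minimal sketch of the first two guidelines in R (simulated data; the sandwich and lmtest packages are assumed to be installed, and the WLS weights assume $Var(u_i) \propto x_i^2$):

```r
library(sandwich)   # vcovHC(): heteroscedasticity-consistent covariance
library(lmtest)     # coeftest(): tests with a supplied covariance matrix

set.seed(42)
x <- runif(200, 1, 10)
y <- 2 + 3 * x + rnorm(200, sd = 0.5 * x)   # error sd proportional to x
ols <- lm(y ~ x)

# Guideline 1: robust (White) standard errors for mild heteroscedasticity
coeftest(ols, vcov = vcovHC(ols, type = "HC1"))

# Guideline 2: WLS when the variance form is known, here Var(u) ~ x^2
wls <- lm(y ~ x, weights = 1 / x^2)
summary(wls)
```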

Remember that the OLS estimates remain unbiased under heteroscedasticity; however, addressing it can improve the efficiency and reliability of the regression analysis, leading to more robust and interpretable results.

Nature of Heteroscedasticity (2020)

Let us start with the nature of heteroscedasticity.

The assumption of homoscedasticity (equal spread, equal variance) is

$$E(u_i^2)=E(u_i^2|X_{2i},X_{3i},\cdots, X_{ki})=\sigma^2,\quad i=1,2,\cdots, n$$

[Figure: homoscedasticity — equal spread of the disturbances around the regression line]

The above Figure shows that the conditional variance of $Y_i$ (which is equal to that of $u_i$), conditional upon the given $X_i$, remains the same regardless of the values taken by the variable $X$.

[Figure: heteroscedasticity — spread of the disturbances increases with $X$]

The Figure shows that the conditional variance of $Y_i$ increases as $X$ increases. The variance of $Y_i$ is not the same for all observations; there is heteroscedasticity:

$$E(u_i^2)=E(u_i^2|X_{2i},X_{3i},\cdots, X_{ki})=\sigma_i^2$$
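
A hedged visual sketch in R (simulated data): side-by-side scatter plots of homoscedastic and heteroscedastic disturbances around the same line:

```r
set.seed(7)
X <- runif(150, 1, 10)
y_homo <- 5 + 2 * X + rnorm(150, sd = 2)        # equal spread for all X
y_het  <- 5 + 2 * X + rnorm(150, sd = 0.6 * X)  # spread increases with X

par(mfrow = c(1, 2))
plot(X, y_homo, main = "Homoscedastic")
plot(X, y_het,  main = "Heteroscedastic")
```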


The nature of heteroscedasticity refers to the violation of the assumption of homoscedasticity in linear regression models. In the case of heteroscedasticity, the errors have unequal variances for different levels of the regressors; the OLS estimators of the regression coefficients remain unbiased but become inefficient, and the usual estimates of their standard errors are biased. There are several reasons why the variances of $u_i$ may be variable:

  • Following the error-learning models, as people learn, their errors of behavior become smaller over time, or the number of errors becomes more consistent. In such cases, $\sigma_i^2$ is expected to decrease.
  • As income grows, people have more discretionary income (income remaining after the deduction of taxes) and hence more scope for choice in the disposition of their income. Similarly, companies with larger profits are generally expected to show greater variability in their dividend policies than companies with lower profits.
  • As data-collecting techniques improve, $\sigma_i^2$ is likely to decrease. For example, banks with sophisticated data-processing equipment are likely to commit fewer errors in the monthly or quarterly statements of their customers than banks without such equipment.
  • Heteroscedasticity can also arise as a result of the presence of outliers. The inclusion or exclusion of such an observation, especially if the sample size is small, can substantially alter the results of the regression analysis.
  • The omission of important variables also results in heteroscedasticity: the effect of the omitted variable is absorbed into the error term, so the residuals may vary systematically with the regressors.
  • Heteroscedasticity may also arise from the violation of the CLRM assumption that the model is correctly specified.
  • Skewness in the distribution of one or more regressors is another source of heteroscedasticity. For example, the distribution of income is uneven, with a few units earning far more than the majority.
  • Incorrect data transformation (e.g., ratio or first-difference transformations) and an incorrect functional form (e.g., linear vs. log-linear) are also sources of heteroscedasticity.
  • The problem of heteroscedasticity is likely to be more common in cross-sectional data than in time series data.