## Important Time Series Analysis Quiz 4

These MCQs on Time Series Analysis and Forecasting will help you prepare for exams and job tests for statistics lecturer and statistical officer posts. This Time Series Analysis Quiz will help learners enhance their knowledge in the field of time series. Let us start the Time Series Analysis Quiz with answers.

Online MCQs Time Series Analysis and Forecasting

1. In an ARIMA model, what does the “MA” part of the acronym ARIMA represent?

2. Residual methods for measuring cycles in a time series consist of:

3. The moving average method suffers from:

4. In a Moving Average (MA) model, what does the “order q” represent?

5. Which of the following is a key step in the ARIMA modeling process?

6. Link relatives in a time series remove the influence of

7. The component of a time series attached to long-term variations is termed as:

8. Time series analysis helps to:

9. The best method for finding out seasonal variation is:

10. The general decline in sales of a product is attached to the component of the time series:

11. The time series analysis helps to

12. Irregular variations in a time series are caused by:

13. The linear trend of a time series indicates towards:

14. What does seasonality in data refer to?

15. Which of the following is a key limitation of the Moving Average (MA) model?

16. The secular trend is indicative of long-term variation towards:

17. The component of a time series that is attached to short-term variation is:

18. The forecasts on the basis of a time series are:

19. The moving averages in a time series are free from the influence of:

20. Seasonal variation means the variation occurring within:

Time series analysis deals with data observed at time-related units such as minutes, days, months, quarters, or years. Time series data is recorded over a series of particular periods or intervals; in other words, it is a set of observations on the values that a variable takes at different times.



## Random Walk Model (2016)

The random walk model is widely used in finance: asset prices such as stock prices and exchange rates are often said to follow a random walk. A random walk is a common and serious departure from stationary behavior, since today's stock price equals yesterday's stock price plus a random shock.

### Types of Random Walk Model

There are two types of random walk models:

1. Random walk without drift (no constant or intercept)
2. Random walk with drift (with a constant term)

### Definition

A time series is said to follow a random walk if the first differences (difference from one observation to the next observation) are random.

Note that in a random walk model, the time series itself is not random; however, the first differences of the time series are random (the changes from one period to the next are random).

A random walk model for a time series $X_t$ can be written as

$X_t=X_{t-1}+e_t\, \, ,$

where $X_t$ is the value in time period $t$, $X_{t-1}$ is the value in time period $t-1$, and $e_t$ is a random shock (the value of the error term in time period $t$).

Since the random walk is defined in terms of first differences, it is easier to see the model as

$X_t-X_{t-1}=e_t\, \, ,$

where the original time series has been transformed into a first-differenced series.

The transformed (first-differenced) time series matters because:

• If the time series follows a random walk, the original series offers little or no insight.
• We may therefore need to analyze the first-differenced time series.
• Forecasts based on the differenced series can then aid decision-making.
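As a rough illustration, the model $X_t = X_{t-1} + e_t$ can be simulated in a few lines of Python. This is a minimal sketch using only the standard library; the function name `random_walk` and all parameter values are illustrative, not part of any particular package:

```python
import random

random.seed(42)

def random_walk(n, drift=0.0, sigma=1.0, x0=0.0):
    # X_t = drift + X_{t-1} + e_t, with e_t ~ N(0, sigma^2)
    xs = [x0]
    for _ in range(n):
        xs.append(drift + xs[-1] + random.gauss(0.0, sigma))
    return xs

series = random_walk(500)  # random walk without drift
# First differences recover the random shocks e_t
diffs = [b - a for a, b in zip(series, series[1:])]
mean_diff = sum(diffs) / len(diffs)
```

Setting `drift` to a non-zero value gives the random walk with drift; the differenced series then fluctuates randomly around the drift constant instead of around zero.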

### Real World Example

Consider a real-world example: the daily US-dollar-to-Euro exchange rate. A plot of its entire history from January 1, 1999, to December 5, 2014, shows an interesting pattern with many peaks and valleys. A plot of the daily changes (first differences), by contrast, shows that although the volatility (variance) has not been constant over time, the day-to-day changes are almost completely random.

### Key Characteristics of a Random Walk

• No Pattern: The path taken by a random walk is unpredictable.
• Independence: Each step is independent of the previous one.
• Probability distribution: The size and direction of each step can be defined by a probability distribution.

### Applications of Random Walk Models

Beyond finance, random walk models have applications in:

• Physics: Brownian motion and diffusion processes
• Biology: Population dynamics and genetic drift
• Computer science: Algorithms and simulations

Remember that random walk patterns are also widely found in nature, for example in the phenomenon of Brownian motion, which was first explained by Einstein.


## Stationary Stochastic Process (2016)


A stochastic process is said to be stationary if its mean and variance are constant over time and the covariance between two time periods depends only on the distance (gap or lag) between them, not on the actual time at which the covariance is computed. Such a stochastic process is also known as weakly stationary, covariance stationary, second-order stationary, or wide-sense stationary.

In other words, a sequence of random variables {$y_t$} is covariance stationary if there is no trend, and if the covariance does not change over time.

### Strictly Stationary Process

A time series is strictly stationary if all the moments of its probability distribution are invariant over time, not just the first two (the mean and variance).

Let $y_t$ be a stochastic time series with

$E(y_t) = \mu$    $\Rightarrow$ Mean
$V(y_t) = E(y_t -\mu)^2=\sigma^2$  $\Rightarrow$ Variance
$\gamma_k = E[(y_t-\mu)(y_{t+k}-\mu)] = Cov(y_t, y_{t+k})$  $\Rightarrow$ Covariance

$\gamma_k$ is covariance or autocovariance at lag $k$.

If $k=0$, then $\gamma_0 = Cov(y_t, y_t) = Var(y_t) = \sigma^2$.

If $k=1$ then we have covariance between two adjacent values of $y$.

If $y_t$ is to be stationary, the mean, variance, and autocovariance of $y_{t+m}$ (for any shift $m$ of the origin of $y$) must be the same as those of $y_t$. In other words:

If a time series is stationary, its mean, variance, and autocovariance remain the same no matter at what point we measure them, i.e., they are time-invariant.
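The sample versions of these quantities are straightforward to compute. Below is a minimal Python sketch (the helper name `autocovariance` and the toy series are illustrative) that verifies $\gamma_0 = Var(y_t)$ on a small example:

```python
def autocovariance(y, k):
    # Sample autocovariance: gamma_k = (1/n) * sum (y_t - mu)(y_{t+k} - mu)
    n = len(y)
    mu = sum(y) / n
    return sum((y[t] - mu) * (y[t + k] - mu) for t in range(n - k)) / n

# A small toy series with mean 4
y = [2.0, 4.0, 6.0, 4.0, 2.0, 4.0, 6.0, 4.0]

gamma0 = autocovariance(y, 0)  # gamma_0 equals the sample variance
gamma2 = autocovariance(y, 2)  # covariance between values two periods apart
```

For this toy series `gamma0` is 2.0 (the sample variance) and `gamma2` is negative, reflecting the alternating pattern of values around the mean.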

### Non-Stationary Time Series

A time series having a time-varying mean or a time-varying variance or both is called a non-stationary time series.

### Purely Random / White Noise Process

A stochastic process with zero mean, constant variance ($\sigma^2$), and serially uncorrelated values is called a purely random (white noise) process.

If its values are also independent, the process is called strictly white noise.

White noise is often denoted by $\mu_t$, with $\mu_t \sim N(0, \sigma^2)$; that is, $\mu_t$ is independently and identically distributed as a normal distribution with zero mean and constant variance.
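A quick way to see these properties is to simulate a white noise series and check that its sample mean is near zero, its sample variance is near $\sigma^2 = 1$, and adjacent values are nearly uncorrelated. This is a sketch using only Python's standard library; the sample size and seed are arbitrary:

```python
import random

random.seed(1)
# Draw u_t ~ N(0, 1): zero mean, unit variance, serially uncorrelated
u = [random.gauss(0.0, 1.0) for _ in range(10000)]

mean = sum(u) / len(u)
var = sum((x - mean) ** 2 for x in u) / len(u)
# Lag-1 sample autocorrelation; for white noise this should be near zero
r1 = sum((u[t] - mean) * (u[t + 1] - mean)
         for t in range(len(u) - 1)) / (var * len(u))
```

With 10,000 draws, the sample mean and lag-1 autocorrelation both come out very close to zero, and the sample variance close to one, as the definition requires.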

A stationary time series is important because if a time series is non-stationary, we can study its behavior only for the time period under consideration. Each set of time series data will, therefore, be for a particular episode. As a consequence, it is not possible to generalize it to other time periods. Therefore, for forecasting, such (non-stationary) time series may be of little practical value. Our interest is in stationary time series.


## The Correlogram

A correlogram is a graph used to interpret a set of autocorrelation coefficients, in which $r_k$ is plotted against the lag $k$. A correlogram is often very helpful for visual inspection.

### Some general advice for interpreting the correlogram

• A Random Series: If a time series is completely random, then for large $N$, $r_k \cong 0$ for all non-zero values of $k$. For a random time series, $r_k$ is approximately $N\left(0, \frac{1}{N}\right)$, so 19 out of 20 of the values of $r_k$ can be expected to lie between $\pm \frac{2}{\sqrt{N}}$. However, when plotting the first 20 values of $r_k$, one can expect to find about one significant value on average even when the time series is random.
• Short-term Correlation: Stationary series often exhibit short-term correlation, characterized by a fairly large value of $r_1$ followed by two or three more coefficients that are significantly greater than zero, while the values of $r_k$ for larger lags tend to be approximately zero. A time series that gives rise to such a correlogram is one in which an observation above the mean tends to be followed by one or more further observations above the mean, and similarly for observations below the mean. An autoregressive model may be appropriate for a series of this type.
• Alternating Series: If a time series tends to alternate with successive observations on different sides of the overall mean, then the correlogram also tends to alternate. The value of $r_1$ will be negative, however, the value of $r_2$ will be positive as observation at lag 2 will tend to be on the same side of the mean.
• Non-Stationary Series: If a time series contains a trend, then the values of $r_k$ will not come down to zero except for very large lags. This is because a large number of further observations lie on the same side of the mean due to the trend. The sample autocorrelation function $\{ r_k \}$ should only be calculated for stationary time series, so any trend should be removed before calculating $\{ r_k \}$.
• Seasonal Fluctuations: If a time series contains a seasonal fluctuation then the correlogram will also exhibit an oscillation at the same frequency. If $x_t$ follows a sinusoidal pattern then so does $r_k$.
$x_t = a\cos(t\omega)$, where $a$ is a constant and $\omega$ is the frequency, with $0 < \omega < \pi$. Then $r_k \cong \cos(k\omega)$ for large $N$.
If the seasonal variation is removed from seasonal data then the correlogram may provide useful information.
• Outliers: If a time series contains one or more outliers, the correlogram may be seriously affected. If there is one outlier in the time series and it is not adjusted, the plot of $x_t$ vs $x_{t+k}$ will contain two extreme points, which will tend to depress the sample correlation coefficients towards zero. If there are two outliers, this effect is even more noticeable.
• General Remarks: Experience is required to interpret autocorrelation coefficients. We need to study the probability theory of stationary series and the relevant classes of models, and we also need to know the sampling properties of $r_k$.
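The first point above can be checked numerically: compute $r_k$ for a purely random series and count how many values fall outside the approximate 95% limits $\pm \frac{2}{\sqrt{N}}$. This is a minimal Python sketch; the helper name `acf`, the seed, and the sample size are all illustrative:

```python
import math
import random

def acf(y, max_lag):
    # r_k = gamma_k / gamma_0: sample autocorrelation at lags 1..max_lag
    n = len(y)
    mu = sum(y) / n
    c0 = sum((v - mu) ** 2 for v in y) / n
    return [sum((y[t] - mu) * (y[t + k] - mu) for t in range(n - k)) / (n * c0)
            for k in range(1, max_lag + 1)]

random.seed(7)
y = [random.gauss(0.0, 1.0) for _ in range(200)]  # a purely random series
bound = 2 / math.sqrt(len(y))                     # approximate 95% limits
r = acf(y, 20)
outside = sum(1 for rk in r if abs(rk) > bound)   # expect about 1 in 20
```

Plotting `r` against the lag with horizontal lines at `+bound` and `-bound` gives exactly the correlogram described in the text; for a random series nearly all spikes stay inside the band.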

There are two main types of correlograms depending on the type of correlation being analyzed:

• Pearson Correlation: This is the most common type and measures linear correlations between continuous variables.
• Spearman Rank Correlation: This is a non-parametric measure suitable for ordinal or continuous data and assesses monotonic relationships (not necessarily linear).

In summary, a correlogram is a valuable tool for exploratory data analysis. It helps us:

• Understand the relationships between multiple variables in the data.
• Identify potential issues with multicollinearity before building statistical models.
• Gain insights into the underlying structure of the data.