Learn Cholesky Transformation (2020)

Given the covariances between variables, one can write an invertible linear transformation that "uncorrelates" the variables. Conversely, one can transform a set of uncorrelated variables into variables with given covariances. This transformation is called the Cholesky transformation; it is represented by a matrix that is the "square root" of the covariance matrix.

The Square Root Matrix

Given a covariance matrix $\Sigma$, it can be factored uniquely into a product $\Sigma=U'U$, where $U$ is an upper triangular matrix with positive diagonal entries. The matrix $U$ is the Cholesky (or square root) matrix. If one prefers to work with the lower triangular matrix $L$, then one can define $$L=U' \Rightarrow \quad \Sigma = LL'.$$
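As a quick check, the factorization can be computed in R with the built-in chol() function, which returns the upper triangular factor $U$ (a minimal sketch; the $2\times 2$ matrix below is an arbitrary example, not from the original):

# An arbitrary positive definite covariance matrix (example values)
Sigma <- matrix(c(4, 2,
                  2, 3), nrow = 2)

# chol() returns the upper triangular factor U with Sigma = U'U
U <- chol(Sigma)
L <- t(U)                      # lower triangular factor, Sigma = LL'

all.equal(t(U) %*% U, Sigma)   # TRUE
all.equal(L %*% t(L), Sigma)   # TRUE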

This is the form of the Cholesky decomposition given by Golub and Van Loan (1996), who provided a proof of the Cholesky decomposition and various ways to compute it.

The Cholesky matrix transforms uncorrelated variables into variables whose variances and covariances are given by $\Sigma$. In particular, if one generates standard normal variates, the Cholesky transformation maps them into variables from the multivariate normal distribution with covariance matrix $\Sigma$ centered at the origin ($MVN(0, \Sigma)$).

Generally, pseudo-random numbers are used to generate two variables sampled from a population with a given degree of correlation. The same idea applies to a set of variables (correlated or uncorrelated): a given correlation matrix can be imposed on the data by post-multiplying the data matrix $X$ by the upper triangular Cholesky decomposition of the correlation matrix $R$. That is:

  • Create two variables using pseudo-random numbers; call them $X$ and $Y$.
  • Impose the desired correlation between the variables using $Y_{new}=rX + \sqrt{1-r^2}\,Y,$
    where $r$ is the desired correlation value (see the sketch after this list). The variables $X$ and $Y_{new}$ will have the desired population correlation; over repeated samples, the distribution of the observed correlations will be centered on $r$.
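A minimal sketch of these two steps in R (the sample size and the target correlation $r = 0.7$ are illustrative choices):

set.seed(123)
r <- 0.7                       # desired correlation (illustrative)

# Step 1: two independent standard normal variables
X <- rnorm(1000)
Y <- rnorm(1000)

# Step 2: impose the correlation r on Y
Ynew <- r * X + sqrt(1 - r^2) * Y

cor(X, Ynew)                   # approximately 0.7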

The Cholesky Transformation: The Simple Case

Suppose you want to generate multivariate normal data that are uncorrelated but have non-unit variance. The covariance matrix is the diagonal matrix of variances: $\Sigma = diag(\sigma_1^2,\sigma_2^2,\cdots, \sigma_p^2)$. The square root $\sqrt{\Sigma}$ is the diagonal matrix $D$ that consists of the standard deviations: $\Sigma = D'D$, where $D=diag(\sigma_1,\sigma_2,\cdots, \sigma_p)$.

Geometrically, the matrix $D$ scales each coordinate direction independently of the other directions. For example, suppose the $x$-axis is scaled by a factor of 3, whereas the $y$-axis is unchanged (scale factor of 1). The transformation is $D = diag(3,1)$, which corresponds to a covariance matrix of $diag(9,1)$.
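This can be sketched in R as follows (assuming 1,000 simulated points; the sample covariance should be close to $diag(9,1)$):

set.seed(42)
D <- diag(c(3, 1))             # scale x by 3, leave y unchanged

# Uncorrelated standard normal data: 1000 points in 2 dimensions
z <- matrix(rnorm(2000), ncol = 2)

# Apply the diagonal (square root) transformation
x <- z %*% D

round(cov(x), 2)               # approximately diag(9, 1)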

Think of the circles in Figure 'a' as probability contours for the multivariate distribution $MVN(0, I)$, and the ellipses in Figure 'b' as the corresponding probability contours for the distribution $MVN(0, D'D)$.

Cholesky Transformation
# Define the correlation matrix
C <- matrix(c(1.0, 0.6, 0.3,
              0.6, 1.0, 0.5,
              0.3, 0.5, 1.0), nrow = 3, ncol = 3)

# Find its Cholesky decomposition (upper triangular factor U, C = U'U)
U <- chol(C)

# Generate correlated random numbers from uncorrelated
# numbers by multiplying them with the Cholesky matrix
x <- matrix(rnorm(3000), nrow = 1000, ncol = 3)
xcorr <- x %*% U
cor(xcorr)   # sample correlations, close to C

Reference: Cholesky Transformation to Correlate and Uncorrelate Variables


Canonical Correlation Analysis (2016)

Bivariate correlation analysis measures the strength of the relationship between two variables. One may, however, need to measure the strength of the relationship between two sets of variables; in this case, canonical correlation is the appropriate technique. Canonical correlation is suitable in the same situations where multiple regression would be used, but where there are multiple inter-correlated outcome variables. Canonical correlation analysis determines a set of canonical variates, orthogonal linear combinations of the variables within each set that best explain the variability both within and between sets.

Examples: Canonical Correlation Analysis

  • In medicine, individuals’ lifestyles and eating habits may affect their health, as measured by several health-related variables such as hypertension, weight, anxiety, and tension level.
  • In business, the marketing manager of a consumer goods firm may be interested in finding the relationship between the types of products purchased and consumers’ lifestyles and personalities.

In the two examples above, one set of variables is the predictor or independent set, while the other is the criterion or dependent set. The objective of canonical correlation is to determine whether the predictor set of variables affects the criterion set of variables.

Note that it is unnecessary to designate the two sets of variables as dependent and independent. In this case, the objective of canonical correlation is to ascertain the relationship between the two sets of variables.


The objective of canonical correlation is similar to that of conducting a principal components analysis on each set of variables. In principal components analysis, the first new axis results in a new variable that accounts for the maximum variance in the data. In canonical correlation analysis, by contrast, a new axis is identified for each set of variables such that the correlation between the two resulting new variables is maximized.

Canonical correlation analysis can also be considered a data reduction technique as only a few canonical variables may be needed to adequately represent the association between the two sets of variables. Therefore, an additional objective of canonical correlation is to determine the minimum number of canonical correlations needed to adequately represent the association between two sets of variables.
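As a brief illustration, base R's cancor() function computes canonical correlations between two sets of variables (a minimal sketch on simulated data; the variable sets and the induced association are arbitrary examples):

set.seed(1)
# Two sets of variables: X (3 predictors) and Y (2 criterion variables)
X <- matrix(rnorm(300), ncol = 3)
Y <- 0.5 * X[, 1:2] + matrix(rnorm(200), ncol = 2)  # induce association

cc <- cancor(X, Y)
cc$cor     # canonical correlations, one per pair of canonical variates
cc$xcoef   # coefficients of the canonical variates for the X set
cc$ycoef   # coefficients of the canonical variates for the Y set

The first canonical correlation is the largest correlation attainable between any linear combination of the X set and any linear combination of the Y set.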

