# Cholesky Transformation

Given the covariances between variables, one can write an invertible linear transformation that "uncorrelates" the variables. Conversely, one can transform a set of uncorrelated variables into variables with given covariances. This transformation is called the Cholesky transformation; it is represented by a matrix that is the "square root" of the covariance matrix.

## The Square Root Matrix

Given a covariance matrix $\Sigma$, it can be factored uniquely into a product $\Sigma = U'U$, where $U$ is an upper triangular matrix with positive diagonal entries. The matrix $U$ is the Cholesky (or square root) matrix. If one prefers to work with the lower triangular matrix $L$, then one can define $$L = U' \quad\Rightarrow\quad \Sigma = LL'.$$

This is the form of the Cholesky decomposition given by Golub and Van Loan in 1996. They provide a proof of the Cholesky decomposition and various ways to compute it.
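As a small illustration (the 2×2 matrix below is an arbitrary example, not one from the text), R's built-in `chol()` returns the upper triangular factor $U$, and $U'U$ recovers $\Sigma$:

```r
# An example covariance matrix (symmetric positive definite)
Sigma <- matrix(c(4, 2,
                  2, 3), 2, 2)

# chol() returns the upper triangular Cholesky factor U
U <- chol(Sigma)

# Reconstruct Sigma as U'U; all.equal() confirms the factorization
all.equal(t(U) %*% U, Sigma)   # TRUE
```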

The Cholesky matrix transforms uncorrelated variables into variables whose variances and covariances are given by $\Sigma$. If one generates standard normal variates, the Cholesky transformation maps them to variables for the multivariate normal distribution with covariance matrix $\Sigma$ and centered at the origin, $MVN(0, \Sigma)$.

Generally, pseudo-random numbers are used to generate two variables sampled from a population with a given degree of correlation. This property holds for any set of variables (correlated or uncorrelated) in the population: a given correlation matrix can be imposed by post-multiplying the data matrix $X$ by the upper triangular Cholesky decomposition of the correlation matrix $R$. That is:

- Create two variables using a pseudo-random number generator; call them $X$ and $Y$.
- Impose the desired correlation between the variables using $Y = Xr + Y\sqrt{1-r^2}$, where $r$ is the desired correlation value.

The $X$ and $Y$ variables will then have exactly the desired relationship between them. Over a large number of repetitions, the distribution of the sample correlation will be centered on $r$.

## The Cholesky Transformation: The Simple Case

Suppose you want to generate multivariate normal data that are uncorrelated but have non-unit variance. The covariance matrix is the diagonal matrix of variances: $\Sigma = diag(\sigma_1^2, \sigma_2^2, \cdots, \sigma_p^2)$. The square root $\sqrt{\Sigma}$ is the diagonal matrix $D$ of standard deviations: $\Sigma = D'D$, where $D = diag(\sigma_1, \sigma_2, \cdots, \sigma_p)$.

Geometrically, the $D$ matrix scales each coordinate direction independently of the others. The $X$-axis is scaled by a factor of 3, whereas the $Y$-axis is unchanged (scale factor of 1). The transformation $D$ is $diag(3, 1)$, which corresponds to a covariance matrix of $diag(9, 1)$. Think of the circles in Figure 'a' as probability contours for the multivariate distribution $MVN(0, I)$, and of Figure 'b' as the corresponding probability ellipses for the distribution $MVN(0, D)$.
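A minimal R sketch of this simple case (the sample size and seed below are arbitrary choices): scaling uncorrelated standard normal variates by $D = diag(3, 1)$ yields data whose sample covariance is approximately $diag(9, 1)$:

```r
set.seed(1)            # for reproducibility (arbitrary seed)
D <- diag(c(3, 1))     # square root of the covariance matrix diag(9, 1)

z <- matrix(rnorm(2000), 1000, 2)  # uncorrelated standard normals
x <- z %*% D                       # scale each coordinate direction

cov(x)   # approximately diag(9, 1)
```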

```r
# Define the correlation matrix
C <- matrix(c(1.0, 0.6, 0.3,
              0.6, 1.0, 0.5,
              0.3, 0.5, 1.0), 3, 3)

# Find its Cholesky decomposition (upper triangular factor)
U <- chol(C)

# Generate correlated random numbers from uncorrelated
# numbers by multiplying them with the Cholesky matrix
x <- matrix(rnorm(3000), 1000, 3)
xcorr <- x %*% U
cor(xcorr)
```
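The two-variable recipe described earlier can be sketched the same way (the sample size, seed, and target value $r = 0.7$ are illustrative assumptions, not values from the text):

```r
set.seed(2)                 # arbitrary seed
r <- 0.7                    # desired correlation (illustrative value)

x <- rnorm(10000)           # first pseudo-random variable
y <- rnorm(10000)           # independent second variable

# Impose the desired correlation: y_new = r*x + sqrt(1 - r^2)*y
y_new <- r * x + sqrt(1 - r^2) * y

cor(x, y_new)               # close to 0.7
```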