Easy Multivariate Analysis MCQs – 1

The term Multivariate Analysis covers statistical methods in which more than two variables are analyzed simultaneously. This post contains Multivariate Analysis MCQs. Let us start with the Online Multivariate Analysis MCQs test.

Multiple Choice Questions about Multivariate Data and Multivariate Analysis

1. A set of vectors $X_1, X_2, \cdots, X_n$ is linearly independent if

 
 
 
 

2. The eigenvalue is the factor by which the Eigenvector is

 
 
 
 

3. Let $x$ be distributed as $N_p(\mu, \Sigma)$ with $|\Sigma| > 0$; then $(x-\mu)' \Sigma^{-1} (x-\mu)$ is distributed as:

 
 
 
 

4. Length of vector $\underline{X}$ is calculated as

 
 
 
 

5. The rank of a matrix $\begin{bmatrix}1 & 0 & 1 & 0 & 2 \\ 0 & 0 & 1 & 1 & 2 \\ 1 & 1 & 0 & 0 & 2 \\ 0 & 1 & 1 & 1 & 3\end{bmatrix}$ is

 
 
 
 

6. The pdf of the multivariate normal distribution exists only when $\Sigma$ is

 
 
 
 

7. What are Eigenvalues?

 
 
 
 

8. How many Eigenvalues does a 2 by 2 matrix have?

 
 
 
 

9. If $A$ and $B$ are two $n \times n$ matrices, which of the following is not always true?

 
 
 
 

10. A square matrix $A$ and its transpose have the Eigenvalues

 
 
 
 

11. If $A$ is a square matrix, then $det(A - \lambda I)=0$ is known as

 
 
 
 

12. The set of all linear combinations of $X_1, X_2, \cdots, X_k$ is called

 
 
 
 

13. Eigenvalues and Eigenvectors are only for the matrices

 
 
 
 

14. Let $A$ be a $k\times k$ symmetric matrix and $X$ be a $k\times 1$ vector. Then

 
 
 
 

15. A matrix $A_{m\times n}$ is defined to be orthogonal if

 
 
 
 

16. If $A$ is a square matrix of order ($m \times m$) then the sum of diagonal elements is called

 
 
 
 

17. A matrix in which the number of rows equals the number of columns is called

 
 
 
 

18. Eigenvalue is often introduced in the context of

 
 
 
 

19. Let $x_1, x_2, \cdots, x_n$ be a random sample of size $n$ from a $p$-variate normal distribution with mean $\mu$ and covariance matrix $\Sigma$; then

 
 
 
 

20. Let $x_1, x_2, \cdots, x_n$ be a random sample from a joint distribution with mean vector $\mu$ and covariance matrix $\Sigma$. Then $\overline{x}$ is an unbiased estimator of $\mu$, and its covariance matrix is:

 
 
 
 


EigenValues and EigenVectors (2020)

Introduction to Eigen Values and Eigen Vectors

Eigenvalues and eigenvectors of matrices are needed for methods such as Principal Component Analysis (PCA), Principal Component Regression (PCR), and the assessment of collinearity among input variables.

Eigenvalues and Eigenvectors

For a real, symmetric matrix $A_{n\times n}$ there exists a set of $n$ scalars $\lambda_i$ and $n$ non-zero vectors $Z_i\, (i=1,2,\cdots,n)$ such that

\begin{align*}
AZ_i &= \lambda_i Z_i\\
AZ_i - \lambda_i Z_i &= 0\\
\Rightarrow (A-\lambda_i I)Z_i &= 0
\end{align*}

The $\lambda_i$ are the $n$ eigenvalues (characteristic roots or latent roots) of the matrix $A$, and the $Z_i$ are the corresponding (column) eigenvectors (characteristic vectors or latent vectors).

There are non-zero solutions to $(A-\lambda_i I)Z_i=0$ only if the matrix $(A-\lambda_i I)$ is less than full rank, that is, only if the determinant of $(A-\lambda_i I)$ is zero. The $\lambda_i$ are obtained by solving the determinantal equation $|A-\lambda I|=0$.

The determinant of $(A-\lambda I)$ is an $n$th-degree polynomial in $\lambda$. Solving this equation gives the $n$ values of $\lambda$, which are not necessarily distinct. Each value of $\lambda$ is then used in the equation $(A-\lambda_i I)Z_i=0$ to find the corresponding eigenvector $Z_i$.

When the eigenvalues are distinct, the vector solution to $(A-\lambda_i I)Z_i=0$ is unique except for an arbitrary scale factor and sign. By convention, each eigenvector is defined to be the solution vector scaled to have unit length; that is, $Z_i'Z_i=1$. Furthermore, the eigenvectors are mutually orthogonal: $Z_i'Z_j=0$ when $i \ne j$.

When the eigenvalues are not distinct, there is an additional degree of arbitrariness in defining the subsets of vectors corresponding to each subset of non-distinct eigenvalues.
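
These properties can be checked numerically with the base-R eigen() function, which returns the eigenvalues and unit-length eigenvectors of a matrix. The sketch below uses an arbitrary symmetric matrix (the values are chosen only for illustration) and verifies that $AZ_i=\lambda_i Z_i$, that each eigenvector has unit length, and that the eigenvectors are mutually orthogonal.

# Minimal sketch: defining properties of eigenvalues/eigenvectors
# for an arbitrary real symmetric matrix (illustrative values only)
A <- matrix(c(4, 1, 2,
              1, 3, 0,
              2, 0, 5), nrow = 3, byrow = TRUE)

e      <- eigen(A)        # eigenvalues and unit-length eigenvectors
lambda <- e$values
Z      <- e$vectors

all.equal(A %*% Z, Z %*% diag(lambda))  # A Z_i = lambda_i Z_i (up to rounding)
round(crossprod(Z), 10)                 # Z'Z = identity: unit length and orthogonality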

Eigen Values and Eigen Vectors Examples

Example: Let the matrix $A=\begin{bmatrix}10 & 3\\ 3 & 8\end{bmatrix}$.

The eigenvalues of $A$ can be found from $|A-\lambda I|=0$. Therefore,

\begin{align*}
|A-\lambda I| &= \begin{vmatrix}10-\lambda & 3\\ 3 & 8-\lambda\end{vmatrix}\\
\Rightarrow (10-\lambda)(8-\lambda)-9 &= \lambda^2 - 18\lambda + 71 = 0
\end{align*}

By the quadratic formula, $\lambda_1 = 12.16228$ and $\lambda_2=5.83772$, ordered from largest to smallest. Thus the matrix of eigenvalues of $A$ is

$$L=\begin{bmatrix}12.16228 & 0 \\ 0 & 5.83772\end{bmatrix}$$

The eigenvector corresponding to $\lambda_1=12.16228$ is obtained by solving

$(A-\lambda_1 I)Z_1=0$ for the elements of $Z_1$:

\begin{align*}
(A-12.16228\,I)\begin{bmatrix}Z_{11}\\Z_{21}\end{bmatrix} &= 0\\
\left(\begin{bmatrix}10 & 3\\ 3 & 8\end{bmatrix}-\begin{bmatrix}12.16228 & 0\\ 0 & 12.16228\end{bmatrix}\right)\begin{bmatrix}Z_{11}\\Z_{21}\end{bmatrix} &= 0\\
\begin{bmatrix}-2.16228 & 3\\ 3 & -4.16228\end{bmatrix}\begin{bmatrix}Z_{11}\\Z_{21}\end{bmatrix} &= 0
\end{align*}

Arbitrarily setting $Z_{11}=1$ and solving for $Z_{21}$ using the first equation gives $Z_{21}=0.720759$. Thus the vector $Z_1'=\begin{bmatrix}1 & 0.720759\end{bmatrix}$ satisfies the first equation.

$Length(Z_1)=\sqrt{Z_1'Z_1}=\sqrt{1.519494}=1.232677$. Dividing each element of $Z_1$ by this length gives the unit-length eigenvector (its squared length is $0.999997$, which is unity up to rounding).

\begin{align*}
Z_1 &= \begin{bmatrix} 0.81124 & 0.58471\end{bmatrix}\\
Z_2 &= \begin{bmatrix}-0.58471 & 0.81124\end{bmatrix}
\end{align*}

The elements of $Z_2$ are found in the same manner. Thus the matrix of eigenvectors for $A$ is

$$Z=\begin{bmatrix}0.81124 & -0.58471\\ 0.58471 & 0.81124\end{bmatrix}$$

Note that the matrix $A$ is of rank two because both eigenvalues are non-zero. The matrix $A$ can therefore be decomposed into the sum of two orthogonal matrices, each of rank one:

\begin{align*}
A &= A_1+A_2\\
A_1 &= \lambda_1 Z_1 Z_1' = 12.16228 \begin{bmatrix}0.81124\\0.58471\end{bmatrix}\begin{bmatrix}0.81124 & 0.58471\end{bmatrix}\\
&= \begin{bmatrix}8.0042 & 5.7691\\ 5.7691 & 4.1581\end{bmatrix}\\
A_2 &= \lambda_2 Z_2 Z_2' = \begin{bmatrix}1.9958 & -2.7691\\ -2.7691 & 3.8419\end{bmatrix}
\end{align*}


Thus the sum of the eigenvalues, $\lambda_1+\lambda_2=18$, equals $trace(A)$; the sum of the eigenvalues of any square symmetric matrix equals the trace of the matrix. Likewise, the trace of each rank-one component matrix equals its eigenvalue: $trace(A_1)=\lambda_1$ and $trace(A_2)=\lambda_2$.
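
As a quick check (not part of the original derivation), the worked example can be reproduced in R. Note that eigen() may return the eigenvectors with opposite signs, which does not affect the decomposition.

# Reproduce the 2 x 2 worked example
A <- matrix(c(10, 3,
               3, 8), nrow = 2, byrow = TRUE)

e <- eigen(A)
e$values       # 12.16228  5.83772
e$vectors      # columns are Z1 and Z2 (signs may differ from the text)

# Rank-one spectral components and the trace check
A1 <- e$values[1] * e$vectors[, 1] %*% t(e$vectors[, 1])
A2 <- e$values[2] * e$vectors[, 2] %*% t(e$vectors[, 2])
all.equal(A, A1 + A2)                   # A = A1 + A2
all.equal(sum(diag(A)), sum(e$values))  # trace(A) = lambda1 + lambda2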

In summary, understanding eigenvalues and eigenvectors is essential for many mathematical and scientific applications. They provide valuable tools for analyzing linear transformations, solving systems of equations, and understanding complex systems across fields.


Learn Cholesky Transformation (2020)

Given the covariances between variables, one can write an invertible linear transformation that “uncorrelates” the variables. Conversely, one can transform a set of uncorrelated variables into variables with given covariances. This transformation is called the Cholesky transformation; it is represented by a matrix that is the “square root” of the covariance matrix.

The Square Root Matrix

Given a covariance matrix $\Sigma$, it can be factored uniquely into a product $\Sigma=U'U$, where $U$ is an upper triangular matrix with positive diagonal entries. The matrix $U$ is the Cholesky (or square root) matrix. If one prefers to work with the lower triangular matrix ($L$), then one can define $$L=U' \quad \Rightarrow \quad \Sigma = LL'.$$
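
In R, the chol() function returns the upper triangular factor $U$, so the factorization can be checked directly. The covariance matrix below is an arbitrary, illustrative choice.

# Sketch: Cholesky factor of a covariance matrix (illustrative values only)
Sigma <- matrix(c(4, 2, 1,
                  2, 3, 1,
                  1, 1, 2), nrow = 3, byrow = TRUE)

U <- chol(Sigma)                 # upper triangular factor with positive diagonal
all.equal(crossprod(U), Sigma)   # U'U reproduces Sigma
L <- t(U)                        # lower triangular form
all.equal(L %*% t(L), Sigma)     # LL' also reproduces Sigma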

This is the form of the Cholesky decomposition given by Golub and Van Loan (1996), who provided a proof of the Cholesky decomposition and various ways to compute it.

The Cholesky matrix transforms uncorrelated variables into variables whose variances and covariances are given by $\Sigma$. In particular, if one generates standard normal variates, the Cholesky transformation maps them to variables from the multivariate normal distribution with covariance matrix $\Sigma$ centered at the origin, $MVN(0, \Sigma)$.

Generally, pseudo-random numbers are used to generate two variables sampled from a population with a given degree of correlation. The same idea extends to a whole set of (correlated or uncorrelated) variables: a given correlation matrix can be imposed by post-multiplying the data matrix $X$ by the upper triangular Cholesky decomposition of the correlation matrix $R$. For two variables, the recipe is:

  • Create two variables using pseudo-random numbers; call them $X$ and $Y$.
  • Impose the desired correlation between the variables using $Y=X\,r + Y\sqrt{1-r^2},$
    where $r$ is the desired correlation value. The variables $X$ and (the updated) $Y$ will then have approximately the desired correlation; over many repetitions, the distribution of the sample correlation will be centered on $r$ (see the R sketch after this list).
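
A minimal R sketch of this two-variable recipe follows; the sample size, seed, and target correlation $r = 0.7$ are illustrative choices, not values from the text above.

# Sketch: impose a desired correlation r between two pseudo-random variables
set.seed(123)                   # illustrative seed for reproducibility
n <- 10000
r <- 0.7                        # desired correlation (illustrative)

X <- rnorm(n)                   # uncorrelated standard normal variates
Y <- rnorm(n)

Y <- X * r + Y * sqrt(1 - r^2)  # combine so that cor(X, Y) is about r
cor(X, Y)                       # close to 0.7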

The Cholesky Transformation: The Simple Case

Suppose you want to generate multivariate normal data that are uncorrelated but have non-unit variance. The covariance matrix is the diagonal matrix of variances: $\Sigma = diag(\sigma_1^2,\sigma_2^2,\cdots, \sigma_p^2)$. The square root $\sqrt{\Sigma}$ is the diagonal matrix $D$ consisting of the standard deviations: $\Sigma = D'D$, where $D=diag(\sigma_1,\sigma_2,\cdots, \sigma_p)$.

Geometrically, the $D$ matrix scales each coordinate direction independently of the other directions. For example, with $D = diag(3,1)$ the $X$-axis is scaled by a factor of 3, whereas the $Y$-axis is unchanged (scale factor of 1); this transformation corresponds to a covariance matrix of $diag(9,1)$.

Think of the circles in Figure ‘a’ as probability contours for the multivariate distribution $MVN(0, I)$, and the ellipses in Figure ‘b’ as the corresponding probability contours for the distribution $MVN(0, \Sigma)$ with $\Sigma = D'D = diag(9,1)$.
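
A short R sketch of this diagonal case, using the scale factors 3 and 1 described above (the sample size and seed are illustrative):

# Sketch: uncorrelated data with non-unit variances via D = diag(3, 1)
set.seed(123)
z <- matrix(rnorm(2000), ncol = 2)  # draws from MVN(0, I)
D <- diag(c(3, 1))                  # square root of Sigma = diag(9, 1)

x <- z %*% D                        # scale each coordinate direction
round(cov(x), 2)                    # approximately diag(9, 1)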

Cholesky Transformation
# Define the correlation matrix
C <- matrix(c(1.0, 0.6, 0.3,
              0.6, 1.0, 0.5,
              0.3, 0.5, 1.0), nrow = 3, ncol = 3)

# Find its Cholesky decomposition (upper triangular factor)
U <- chol(C)

# Generate correlated random numbers from uncorrelated
# numbers by post-multiplying them with the Cholesky matrix
x <- matrix(rnorm(3000), nrow = 1000, ncol = 3)
xcorr <- x %*% U
cor(xcorr)   # approximately reproduces C

Reference: Cholesky Transformation to Correlate and Uncorrelate Variables
