Creating Matrices in Mathematica

A matrix is a rectangular array of numbers arranged in rows and columns. In Mathematica, a matrix is expressed as a list of rows, each of which is itself a list; in other words, a matrix is a list of lists. If a matrix has n rows and m columns, we call it an n by m matrix. The value in the ith row and jth column is called the (i, j) entry.

In Mathematica, matrices can be entered with the { } notation, constructed from a formula, or imported from a data file. There are also commands for creating diagonal matrices, constant matrices, and other special matrix types.

Creating matrices in Mathematica

  • Create a matrix using { } notation
    mat = {{1, 2, 3}, {4, 5, 6}, {7, 8, 9}}
    The output will not be displayed in matrix form; to display it as a two-dimensional matrix, use a command like
    mat // MatrixForm
  • Create a matrix using the Table command
    mat1 = Table[b[row, column], {row, 1, 4, 1}, {column, 1, 2, 1}]
  • Create a symbolic matrix, such as
    mat2 = Table[x[i] + x[j], {i, 1, 4}, {j, 1, 3}]
  • Create a diagonal matrix with the given entries on its diagonal
    DiagonalMatrix[{1, 2, 3, r}] // MatrixForm
  • Create a constant matrix, i.e., a matrix whose entries are all the same
    ConstantArray[3, {2, 4}] // MatrixForm
  • Create an identity matrix of order n × n
    IdentityMatrix[n], e.g., IdentityMatrix[3] // MatrixForm (a quick check of these commands follows the list)
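As a quick check, the following minimal sketch evaluates two of the constructions above and shows, in comments, the kind of output to expect:

    mat2 = Table[x[i] + x[j], {i, 1, 2}, {j, 1, 2}]
    (* {{2 x[1], x[1] + x[2]}, {x[1] + x[2], 2 x[2]}} *)
    IdentityMatrix[3] // MatrixForm
    (* 3 x 3 matrix with 1s on the diagonal and 0s elsewhere *)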

Matrix Operations in Mathematica

In Mathematica, matrix operations can be performed on both numeric and symbolic matrices.

  • To find the determinant of a matrix: Det[mat]
  • To find the transpose of a matrix: Transpose[mat]
  • To find the inverse of a matrix (e.g., for solving a linear system): Inverse[mat]
  • To find the trace of a matrix, i.e., the sum of its diagonal elements: Tr[mat]
  • To find the eigenvalues of a matrix: Eigenvalues[mat]
  • To find the eigenvectors of a matrix: Eigenvectors[mat]
  • To find both eigenvalues and eigenvectors together: Eigensystem[mat]

Note that the +, *, and ^ operators all work element-wise on matrices automatically; for matrix multiplication, use the . (Dot) operator, as illustrated below.
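To illustrate, the following sketch applies these commands to a small invertible matrix (the matrix itself is arbitrary, chosen only for the example):

    m = {{2, 1}, {1, 3}};
    Det[m]          (* 5 *)
    Transpose[m]    (* {{2, 1}, {1, 3}}, since m is symmetric *)
    Inverse[m]      (* {{3/5, -1/5}, {-1/5, 2/5}} *)
    Tr[m]           (* 5, the sum of the diagonal elements *)
    Eigensystem[m]  (* eigenvalues and eigenvectors together *)
    m . m           (* matrix product via Dot: {{5, 5}, {5, 10}} *)
    m * m           (* element-wise product: {{4, 1}, {1, 9}} *)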

Displaying a matrix and its elements

  • mat[[1]]        displays the first row of the matrix mat created above
  • mat[[1, 2]]     displays the element in the first row and second column, i.e., the $m_{12}$ element of the matrix
  • mat[[All, 2]]   displays the second column of the matrix
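For the matrix mat = {{1, 2, 3}, {4, 5, 6}, {7, 8, 9}} defined earlier, these part specifications give:

    mat[[1]]        (* {1, 2, 3} *)
    mat[[1, 2]]     (* 2 *)
    mat[[All, 2]]   (* {2, 5, 8} *)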



Student t test

William Sealy Gosset published his work in 1908 under the pseudonym “Student” to solve problems of inference based on samples drawn from a normally distributed population when the population standard deviation is unknown. He developed the t-test and the t-distribution, which can be used to compare two small sets of quantitative data collected independently of one another; in this case the t-test is called the independent samples t-test (also called the unpaired samples t-test).

Student’s t-test is one of the most commonly used statistical techniques for testing hypotheses about the difference between sample means. The t-statistic can be computed from just the means, standard deviations, and numbers of data points in the two samples, using the following formula

\[t=\frac{\overline{X}_1-\overline{X}_2 }{\sqrt{s_p^2 (\frac{1}{n_1}+\frac{1}{n_2})}}\]

where $s_p^2$ is the pooled (combined) variance and can be computed as

\[s_p^2=\frac{(n_1-1)s_1^2 + (n_2-1)s_2^2}{n_1+n_2-2}\]

Using this test statistic, we test the null hypothesis $H_0:\mu_1=\mu_2$, i.e., that both samples came from the same population, at a given level of significance (level of risk).

If the absolute value of the computed t-statistic from the above formula is greater than the critical value (the value from the t-table with $n_1+n_2-2$ degrees of freedom at the given level of significance, say $\alpha=0.05$), the null hypothesis is rejected; otherwise, we fail to reject the null hypothesis.

Note that the t-distribution is a family of curves depending on the degrees of freedom (the number of independent observations in the sample minus the number of estimated parameters). As the sample size increases, the t-distribution approaches the bell-shaped normal distribution.

Example: A production manager wants to compare the number of defective products produced on the day shift with the number on the afternoon shift. A sample of the production from 6 day shifts and 8 afternoon shifts revealed the following numbers of defects. The production manager wants to check, at the 0.05 significance level, whether there is a significant difference in the mean number of defects per shift.

Day shift:        5   8   7   6   9   7
Afternoon shift:  8  10   7  11   9  12  14   9

Some required calculations are:

Mean of samples:

$\overline{X}_1=7$, $\overline{X}_2=10$,

Standard Deviation of samples

$s_1=1.4142$, $s_2=2.2678$ and $s_p^2=\frac{(6-1) (1.4142)^2+(8-1)(2.2678)^2}{6+8-2}=3.8333$

Step 1: Null and alternative hypothesis are: $H_0:\mu_1=\mu_2$ vs $H_1:\mu_1 \ne \mu_2$

Step 2: Level of significance: $\alpha=0.05$

Step 3: Test Statistic

\[t=\frac{\overline{X}_1-\overline{X}_2 }{\sqrt{s_p^2 (\frac{1}{n_1}+\frac{1}{n_2})}}=\frac{7-10}{\sqrt{3.8333\left(\frac{1}{6}+\frac{1}{8}\right)}}=-2.837\]

Step 4: Critical value or rejection region: reject $H_0$ if the absolute value of the t-statistic computed in Step 3 is greater than the absolute table value, i.e., $|t_{calculated}|\ge |t_{tabulated}|$. In this example the tabulated t-value is $\pm 2.179$ (two-tailed) with 12 degrees of freedom at the 5% significance level.

Step 5: Conclusion: Since the computed value $|-2.837| > 2.179$, we reject $H_0$ and conclude that the mean number of defects is not the same on the two shifts.
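The same calculation can be reproduced in Mathematica; the sketch below (a minimal illustration, with variable names chosen for this example) recomputes the pooled variance, the t-statistic, and the two-tailed critical value:

    day = {5, 8, 7, 6, 9, 7};
    afternoon = {8, 10, 7, 11, 9, 12, 14, 9};
    {n1, n2} = {Length[day], Length[afternoon]};
    (* pooled variance, as in the formula above *)
    sp2 = ((n1 - 1) Variance[day] + (n2 - 1) Variance[afternoon])/(n1 + n2 - 2);
    (* pooled two-sample t-statistic *)
    t = N[(Mean[day] - Mean[afternoon])/Sqrt[sp2 (1/n1 + 1/n2)]]
    (* -2.837 *)
    (* two-tailed critical value at alpha = 0.05 with 12 degrees of freedom *)
    tcrit = N[Quantile[StudentTDistribution[n1 + n2 - 2], 0.975]]
    (* 2.179 *)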



Sum of Squares

In statistics, the sum of squares is a measure of the total variability (spread, variation) within a data set. In other words, the sum of squares is a measure of deviation or variation from the mean value of the given data set. A sum of squares is calculated by first computing the difference between each data point (observation) and the mean of the data set, i.e., $x=X-\overline{X}$. The computed $x$ is the deviation score for the given data set. Squaring each of these deviation scores and then adding the squared deviation scores gives us the sum of squares (SS), which is represented mathematically as

\[SS=\sum x^2=\sum (X-\overline{X})^2\]
Note that the lowercase letter $x$ usually represents the deviation of each observation from the mean value, while the uppercase letter $X$ represents the variable of interest in statistics.

Sum of Squares Example

Consider the following data set {5, 6, 7, 10, 12}. To compute the sum of squares of this data set, follow these steps

  • Calculate the average of the given data by summing all the values in the data set and then dividing this sum by the total number of observations in the data set. Mathematically, it is $\frac{\sum X_i}{n}=\frac{40}{5}=8$, where 40 is the sum of all the numbers ($5+6+7+10+12$) and there are 5 observations.
  • Calculate the difference of each observation in the data set from the average computed in step 1. The differences are
    5 – 8 = –3; 6 – 8 = –2; 7 – 8 = –1; 10 – 8 = 2 and 12 – 8 = 4
    Note that the sum of these differences should be zero. (–3 + –2 + –1 + 2 + 4 = 0)
  • Now square each of the differences obtained in step 2. The squares of these differences are
    9, 4, 1, 4 and 16
  • Now add the squared numbers obtained in step 3. The sum of these squared quantities is 9 + 4 + 1 + 4 + 16 = 34, which is the sum of squares of the given data set (see the sketch below).
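The whole computation can be checked in Mathematica; a minimal sketch, assuming the data set above:

    data = {5, 6, 7, 10, 12};
    Total[(data - Mean[data])^2]  (* squared deviations summed: 34 *)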

In statistics, the sum of squares occurs in different contexts, such as

  • Partitioning of Variance (Partition of Sums of Squares)
  • Sum of Squared Deviations (Least Squares)
  • Sum of Squared Differences (Mean Squared Error)
  • Sum of Squared Error (Residual Sum of Squares)
  • Sum of Squares due to Lack of Fit (Lack of Fit Sum of Squares)
  • Sum of Squares for Model Predictions (Explained Sum of Squares)
  • Sum of Squares for Observations (Total Sum of Squares)
  • Sum of Squared Deviation (Squared Deviations)
  • Modeling involving Sum of Squares (Analysis of Variance)
  • Multivariate Generalization of Sum of Square (Multivariate Analysis of Variance)

As previously discussed, the sum of squares is a measure of the total variability of a set of scores around a specific number (usually the mean).



Randomized Complete Block Design

In a Completely Randomized Design (CRD), there is no restriction on the allocation of treatments to experimental units. But in practice there are situations with relatively large variability in the experimental material, where it is possible to form blocks (in a simple sense, groups) of relatively homogeneous experimental material or units. The design applied in such situations is called the Randomized Complete Block Design (RCBD).

The Randomized Complete Block Design may be defined as a design in which the experimental material is divided into blocks/groups of homogeneous experimental units (experimental units that have the same characteristics), and each block/group contains a complete set of treatments, which are assigned at random to the experimental units.

In fact, RCBD is a one-restriction design, used to control a variable that influences the response variable. The aim of the restriction is to control the variable causing the variability in the response. Blocking is done to create homogeneity within each block, so a block represents a source of variability that is accounted for. An example of a blocking factor might be the gender of a patient: by blocking on gender, this source of variability is controlled for, leading to greater accuracy. RCBD is a mixed model in which one factor (treatment) is fixed and the other (block) is random. The main assumption of the design is that there is no interaction between the treatment and block effects.

The Randomized Complete Block Design is said to be a complete design because each block contains as many experimental units as there are treatments, so each treatment occurs exactly once in each block.

The general model is defined as

\[y_{ij}=\mu+\eta_i+\xi_j+e_{ij}\]

where $i=1,2,\cdots, t$ and $j=1,2,\cdots, b$ with $t$ treatments and $b$ blocks. $\mu$ is the overall mean based on all observations, $\eta_i$ is the effect of the ith treatment, $\xi_j$ is the effect of the jth block, and $e_{ij}$ is the corresponding error term, which is assumed to be independent and normally distributed with mean zero and constant variance.

The main objective of blocking is to reduce the variability among experimental units within a block as much as possible and to maximize the variation among blocks; otherwise, the design would not help improve the precision in detecting treatment differences.

Randomized Complete Block Design Experimental Layout

Suppose there are $t$ treatments and $r$ blocks in a randomized complete block design; then each block contains $t$ homogeneous plots, one for each treatment. An experimental layout for such a design using four treatments in three blocks may be as follows.

Block 1    Block 2    Block 3
T2         T1         T4
T4         T3         T2
T1         T4         T3
T3         T2         T1

(one possible randomization: each of the four treatments T1–T4 appears exactly once in every block, in random order)

From RCBD layout we can see that

  • The treatments are assigned at random within blocks of adjacent subjects, and each treatment appears exactly once in each block.
  • The number of blocks represents the number of replications.
  • Any treatment can be adjacent to any other treatment, but no treatment appears twice within a block.
  • Variation in the experiment is controlled by accounting for spatial (block) effects.



Covariance and Correlation

Covariance measures the degree to which two variables co-vary (i.e., vary together). If the greater values of one variable (say, $X_i$) correspond with the greater values of the other variable (say, $X_j$), i.e., if the variables tend to show similar behaviour, then the covariance between the two variables ($X_i$, $X_j$) will be positive. Likewise, if the smaller values of one variable correspond with the smaller values of the other variable, the covariance will be positive. In contrast, if the greater values of one variable (say, $X_i$) mainly correspond to the smaller values of the other variable (say, $X_j$), i.e., the two variables tend to show opposite behaviour, then the covariance will be negative.

In other words, a positive covariance between two variables means they vary/change together in the same direction relative to their expected values (averages): if one variable moves above its average value, the other variable tends to be above its average value too. Similarly, if the covariance between the two variables is negative, then when one variable is above its expected value, the other tends to be below its expected value. If the covariance is zero, there is no linear dependency between the two variables. Mathematically, the covariance between two random variables $X_i$ and $X_j$ can be represented as
\[COV(X_i, X_j)=E[(X_i-\mu_i)(X_j-\mu_j)]\]
where
$\mu_i=E(X_i)$ is the average of the first variable and
$\mu_j=E(X_j)$ is the average of the second variable.

Expanding the product inside the expectation gives

\begin{align*}
COV(X_i, X_j)&=E[(X_i-\mu_i)(X_j-\mu_j)]\\
&=E[X_i X_j - X_i E(X_j) - X_j E(X_i) + E(X_i)E(X_j)]\\
&=E(X_i X_j) - E(X_i)E(X_j) - E(X_j)E(X_i) + E(X_i)E(X_j)\\
&=E(X_i X_j)-E(X_i)E(X_j)
\end{align*}

Note that the covariance of a random variable with itself is the variance of the random variable, i.e., $COV(X_i, X_i)=VAR(X_i)$. If $X_i$ and $X_j$ are independent, then $E(X_i X_j)=E(X_i)E(X_j)$, so $COV(X_i, X_j)=E(X_i X_j)-E(X_i)E(X_j)=0$.

Covariance and Correlation

Correlation and covariance are related, but not equivalent, statistical measures. The correlation between two variables (say, $X_i$ and $X_j$) is their normalized covariance, defined as

\begin{align*}
\rho_{i,j}&=\frac{E[(X_i-\mu_i)(X_j-\mu_j)]}{\sigma_i \sigma_j}\\
&=\frac{n \sum XY - \sum X \sum Y}{\sqrt{(n \sum X^2 -(\sum X)^2)(n \sum Y^2 - (\sum Y)^2)}}
\end{align*}

where $\sigma_i$ is the standard deviation of $X_i$ and $\sigma_j$ is the standard deviation of $X_j$; the second line is the computational form for sample data, with $X$ and $Y$ standing for the two variables.

Note that correlation is dimensionless, i.e., a number free of measurement units, and its value lies between -1 and +1 inclusive. In contrast, covariance has a unit of measure: the product of the units of the two variables.
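As a small illustration in Mathematica (the data here are made up; the built-in Covariance and Correlation functions compute the sample versions):

    x = {1, 2, 3, 4, 5};
    y = {2, 1, 4, 3, 5};
    Covariance[x, y]   (* 2, carries the units of x times the units of y *)
    Correlation[x, y]  (* 4/5, dimensionless and between -1 and +1 *)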


