Chapter 1
Some Results on Linear Algebra, Matrix Theory and Distributions
We need some basic knowledge to understand the topics in the analysis of variance.
Vectors:
A vector $Y$ is an ordered n-tuple of real numbers. A vector can be expressed as a row vector or a column vector. Thus
$$Y = \begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{pmatrix}$$
is a column vector of order $n \times 1$ and
$$Y' = (y_1, y_2, \ldots, y_n)$$
is a row vector of order $1 \times n$.
If $y_i = 0$ for all $i = 1, 2, \ldots, n$, then $Y' = (0, 0, \ldots, 0)$ is called the null vector.
If
$$X = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix}, \quad Y = \begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{pmatrix}, \quad Z = \begin{pmatrix} z_1 \\ z_2 \\ \vdots \\ z_n \end{pmatrix}$$
then
$$X + Y = \begin{pmatrix} x_1 + y_1 \\ x_2 + y_2 \\ \vdots \\ x_n + y_n \end{pmatrix}, \quad kY = \begin{pmatrix} ky_1 \\ ky_2 \\ \vdots \\ ky_n \end{pmatrix}$$
$$X + (Y + Z) = (X + Y) + Z$$
$$X'(Y + Z) = X'Y + X'Z$$
$$k(X'Y) = (kX)'Y = X'(kY)$$
$$k(X + Y) = kX + kY$$
$$X'Y = x_1 y_1 + x_2 y_2 + \cdots + x_n y_n$$
where k is a scalar.
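The rules above can be checked numerically. The following sketch (assuming NumPy is available; the array names are chosen only for this example) verifies each identity for a particular choice of vectors:

```python
import numpy as np

# Example vectors and scalar for checking the identities above.
x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, 5.0, 6.0])
z = np.array([7.0, 8.0, 9.0])
k = 2.0

# Associativity of addition: X + (Y + Z) = (X + Y) + Z
assert np.allclose(x + (y + z), (x + y) + z)

# Distributivity of the inner product: X'(Y + Z) = X'Y + X'Z
assert np.isclose(x @ (y + z), x @ y + x @ z)

# A scalar moves freely across the inner product: k(X'Y) = (kX)'Y = X'(kY)
assert np.isclose(k * (x @ y), (k * x) @ y)
assert np.isclose(k * (x @ y), x @ (k * y))

# Inner product as a sum of elementwise products.
inner = x @ y
assert np.isclose(inner, sum(xi * yi for xi, yi in zip(x, y)))
print(inner)  # 1*4 + 2*5 + 3*6 = 32.0
```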
Analysis of Variance | Chapter 1 | Linear Algebra, Matrix Theory and Dist. | Shalabh, IIT Kanpur
Orthogonal vectors:
Two vectors X and Y are said to be orthogonal if $X'Y = Y'X = 0$.
The null vector is orthogonal to every vector X and is the only such vector.
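As a quick numerical illustration (vectors chosen only for this example), assuming NumPy:

```python
import numpy as np

# Two vectors whose inner product vanishes, hence orthogonal.
x = np.array([1.0, -1.0, 0.0])
y = np.array([1.0, 1.0, 5.0])
print(x @ y)  # 0.0 -> x and y are orthogonal

# The inner product is symmetric, so X'Y = Y'X.
assert x @ y == y @ x

# The null vector is orthogonal to every vector.
null = np.zeros(3)
assert null @ x == 0.0 and null @ y == 0.0
```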
Linear combination:
If $x_1, x_2, \ldots, x_m$ are m vectors and $k_1, k_2, \ldots, k_m$ are m scalars, then
$$t = \sum_{i=1}^{m} k_i x_i$$
is called a linear combination of $x_1, x_2, \ldots, x_m$.
Linear independence:
The m vectors $x_1, x_2, \ldots, x_m$ are said to be linearly independent if
$$\sum_{i=1}^{m} k_i x_i = 0 \implies k_i = 0 \text{ for all } i = 1, 2, \ldots, m.$$
If there exist scalars $k_1, k_2, \ldots, k_m$, with at least one $k_i$ nonzero, such that $\sum_{i=1}^{m} k_i x_i = 0$, then $x_1, x_2, \ldots, x_m$ are said to be linearly dependent.
Any set of vectors containing the null vector is linearly dependent.
Any set of non-null pair-wise orthogonal vectors is linearly independent.
If m > 1 vectors are linearly dependent, it is always possible to express at least one of them as a
linear combination of the others.
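One practical way to test these properties (a sketch assuming NumPy; rank-based checking is a standard numerical technique, not part of the definitions above): stack the vectors as columns of a matrix and compare its rank with the number of vectors — full column rank is equivalent to linear independence.

```python
import numpy as np

# Example vectors; v3 is deliberately a linear combination of v1 and v2.
v1 = np.array([1.0, 0.0, 0.0])
v2 = np.array([0.0, 1.0, 0.0])
v3 = v1 + 2 * v2

A_indep = np.column_stack([v1, v2])
A_dep = np.column_stack([v1, v2, v3])

# Rank equals the number of vectors <=> linearly independent.
print(np.linalg.matrix_rank(A_indep) == 2)  # True
print(np.linalg.matrix_rank(A_dep) == 3)    # False: v3 = v1 + 2*v2

# Any set containing the null vector is linearly dependent.
A_null = np.column_stack([v1, np.zeros(3)])
print(np.linalg.matrix_rank(A_null) == 2)   # False
```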
Linear function:
Let K (k1 , k2 ,..., km ) ' be a m 1 vector of scalars and X ( x1 , x2 ,..., xm ) be a m 1 vector of variables,
m
then K ' Y ki yi is called a linear function or linear form. The vector K is called the coefficient
i 1
vector. For example, the mean of x1 , x2 ,..., xm can be expressed as
x1
1 m 1 x 1
x xi (1,1,...,1) 2 1'm X
m i 1 m m
xm
where 1'm is a m 1 vector of all elements unity.
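The mean-as-linear-function identity can be verified directly (a minimal sketch assuming NumPy; the data values are arbitrary):

```python
import numpy as np

m = 4
x = np.array([2.0, 4.0, 6.0, 8.0])
ones = np.ones(m)        # the vector 1_m of all unit elements

xbar = (ones @ x) / m    # (1/m) * 1'_m X
print(xbar)              # 5.0
assert np.isclose(xbar, x.mean())
```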
Contrast:
The linear function $K'X = \sum_{i=1}^{m} k_i x_i$ is called a contrast in $x_1, x_2, \ldots, x_m$ if $\sum_{i=1}^{m} k_i = 0$.
For example, the linear functions
$$x_1 - x_2, \quad 2x_1 - 3x_2 + x_3, \quad x_2 - \frac{x_1}{2} - \frac{x_3}{2}$$
are contrasts.
A linear function $K'X$ is a contrast if and only if it is orthogonal to the linear function $\sum_{i=1}^{m} x_i$, or equivalently to the linear function $\bar{x} = \frac{1}{m}\sum_{i=1}^{m} x_i$ (i.e., the coefficient vectors are orthogonal: $K'1_m = 0$).
The contrasts $x_1 - x_j$, $j = 2, 3, \ldots, m$, are linearly independent.
Every contrast in $x_1, x_2, \ldots, x_m$ can be written as a linear combination of the $(m - 1)$ contrasts
$$x_1 - x_2, \; x_1 - x_3, \ldots, \; x_1 - x_m.$$
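The contrast condition is easy to check in code. The sketch below (assuming NumPy; `is_contrast` is a helper name introduced here for illustration) tests whether a coefficient vector defines a contrast, both by summing the coefficients and by the equivalent orthogonality-to-$1_m$ check:

```python
import numpy as np

def is_contrast(k):
    """A linear function K'X is a contrast iff sum(k_i) = 0."""
    k = np.asarray(k, dtype=float)
    return bool(np.isclose(k.sum(), 0.0))

print(is_contrast([1, -1, 0]))   # True:  x1 - x2
print(is_contrast([2, -3, 1]))   # True:  2x1 - 3x2 + x3
print(is_contrast([1, 1, 1]))    # False: x1 + x2 + x3 is not a contrast

# Equivalent check: orthogonality of K to the vector of ones.
k = np.array([2.0, -3.0, 1.0])
print(np.isclose(k @ np.ones(3), 0.0))  # True
```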
Matrix:
A matrix is a rectangular array of real numbers. For example,
$$A = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{pmatrix}$$
is a matrix of order $m \times n$ with m rows and n columns.
If m = n, then A is called a square matrix.
If m = n and $a_{ij} = 0$ for $i \neq j$, then A is a diagonal matrix and is denoted as
$$A = \text{diag}(a_{11}, a_{22}, \ldots, a_{mm}).$$
If m = n (square matrix) and $a_{ij} = 0$ for $i > j$, then A is called an upper triangular matrix. On the other hand, if m = n and $a_{ij} = 0$ for $i < j$, then A is called a lower triangular matrix.
If A is an $m \times n$ matrix, then the matrix obtained by interchanging the rows and columns of A is called the transpose of A and is denoted as $A'$.
If $A = A'$, then A is a symmetric matrix.
If $A = -A'$, then A is a skew-symmetric matrix.
A matrix whose elements are all equal to zero is called a null matrix.