Matrix
History of Matrices :
The concept of a matrix dates back to ancient times, but the term "matrix" was first used in 1850 by James Joseph Sylvester. Matrices were first used between 300 BC and AD 200 in a Chinese text called the Nine Chapters on the Mathematical Art (Chiu Chang Suan Shu), written during the Han Dynasty, which contained the idea of determinants and of solving systems of equations with a matrix. The Babylonians also studied matrices, but the Chinese went much further. One of the Chinese matrix methods is commonly known today as Gaussian Elimination. The theory of matrices was later developed by the mathematician Gottfried Leibniz, who first took the coefficients of linear equations and arranged them in a matrix. Carl Gauss then developed matrix theory further in the late 1700s, adding matrix multiplication, the inverse and Gaussian Elimination. Gaussian Elimination is a method for solving systems of equations with three or more variables, and it involves reducing most of the matrix entries to zero or to an identity matrix. This is done by performing "elementary row operations". The method actually appeared in the Chinese text as well, but Gauss rediscovered it.
Matrix (Introduction) :
A matrix is a rectangular array or table of numbers, symbols, or expressions arranged in rows and columns, which is used to represent a mathematical object or a property of such an object.
Example :
1 7 -10
3 5 -20
The above is a matrix of two rows and three columns. This is a 2 x 3 matrix, or a matrix of dimension 2 x 3.
Size :
The size of a matrix is defined by the number of rows and columns it contains. A matrix with 'm' rows and 'n' columns is called an 'm x n' matrix.
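As a rough sketch of how this looks in practice (Python with NumPy assumed; the variable names are illustrative only), the 2 x 3 example above can be stored and its size queried like this:

import numpy as np

A = np.array([[1, 7, -10],
              [3, 5, -20]])   # the 2 x 3 example above
print(A.shape)                # (2, 3): 2 rows, 3 columns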
Square matrix :
A matrix with the same number of rows and columns is called a square matrix.
A =
9 13 5
1 10 7
3 7 9
The above is a 3 x 3 square matrix, denoted A.
Notation :
Matrices are commonly written in square brackets.
Matrices are usually symbolised using upper-case letters (such as A in the example above), while the corresponding lower-case letters with two subscript indices represent the entries.
Basic operations :
Addition : The sum A + B of two m-by-n matrices A and B is calculated entry-wise:
(A + B)ᵢⱼ = Aᵢⱼ + Bᵢⱼ , where 1 ≤ i ≤ m , 1 ≤ j ≤ n
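A minimal sketch of entry-wise addition (Python with NumPy assumed; the values are illustrative):

import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])
print(A + B)   # entry-wise sum: [[ 6  8] [10 12]]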
Scalar multiplication :
The product cA of a number c (also called a scalar) and a matrix A is computed by multiplying every entry of A by c, i.e.
(cA)ᵢⱼ = c Aᵢⱼ
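For example, multiplying every entry by the scalar c = 3 (same illustrative setup, NumPy assumed):

import numpy as np

A = np.array([[1, 2], [3, 4]])
print(3 * A)   # every entry multiplied by 3: [[ 3  6] [ 9 12]]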
Transposition :
The transpose of an 'm-by-n' matrix A is the 'n-by-m' matrix Aᵗ formed by turning rows into columns and vice versa.
(Aᵗ)ᵢⱼ = Aⱼᵢ
i.e. the transpose of
1  2  3
0 -6  7
is
1  0
2 -6
3  7
Addition is commutative, that is, the matrix sum does not depend on the order of the summands:
A + B = B + A.
The transpose is compatible with addition and scalar multiplication, as expressed by
(cA)ᵗ = cAᵗ and (A + B)ᵗ = Aᵗ + Bᵗ.
Finally, (Aᵗ)ᵗ = A.
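A short numerical check of these identities (illustrative values, NumPy assumed):

import numpy as np

A = np.array([[1, 2, 3], [0, -6, 7]])
B = np.array([[4, 0, 1], [2, 5, -3]])

print(A.T)                                   # the 3 x 2 transpose shown above
print(np.array_equal((A + B).T, A.T + B.T))  # True: (A + B)ᵗ = Aᵗ + Bᵗ
print(np.array_equal((3 * A).T, 3 * A.T))    # True: (cA)ᵗ = cAᵗ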
Matrix multiplication :
Multiplication of two matrices is defined if and only if the number of columns of the left matrix is the same as the number of rows of the right matrix. If A is an 'm x n' matrix and B is an 'n x p' matrix, then their product AB is the m x p matrix whose entries are given by the dot product of the corresponding row of A and the corresponding column of B.
(AB)ᵢⱼ = aᵢ₁b₁ⱼ + aᵢ₂b₂ⱼ + ... + aᵢₙbₙⱼ
= Σₖ aᵢₖ bₖⱼ , where 1 ≤ i ≤ m and 1 ≤ j ≤ p
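A sketch that mirrors this definition with explicit loops (the helper name mat_mul is illustrative; NumPy's @ operator gives the same result):

def mat_mul(A, B):
    # A is m x n, B is n x p; the result C is m x p
    m, n, p = len(A), len(B), len(B[0])
    C = [[0] * p for _ in range(m)]
    for i in range(m):
        for j in range(p):
            # dot product of row i of A with column j of B
            C[i][j] = sum(A[i][k] * B[k][j] for k in range(n))
    return C

print(mat_mul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))   # [[19, 22], [43, 50]]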
Matrix multiplication satisfies the associative law, i.e.
(AB)C = A(BC).
Multiplication also obeys the distributive laws, i.e.
(A + B)C = AC + BC (right distributive law)
and C(A + B) = CA + CB (left distributive law).
Matrix multiplication is generally not commutative,
i.e. AB ≠ BA.
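For instance (illustrative values, NumPy assumed), swapping the order of the factors changes the product:

import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[0, 1], [1, 0]])
print(A @ B)   # [[2 1] [4 3]]
print(B @ A)   # [[3 4] [1 2]]  -- different, so AB ≠ BA here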
Invertible matrix and its inverse :
A square matrix A is called invertible if there exists a matrix B such that
AB = BA = Iₙ , where Iₙ is the n x n identity matrix; B is then called the inverse of A and is written A⁻¹.
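A small sketch (NumPy assumed, illustrative values) checking that a matrix times its inverse gives the identity:

import numpy as np

A = np.array([[4.0, 7.0], [2.0, 6.0]])
B = np.linalg.inv(A)                    # B = A⁻¹
print(np.allclose(A @ B, np.eye(2)))    # True: AB = I₂
print(np.allclose(B @ A, np.eye(2)))    # True: BA = I₂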
Determinant :
The determinant of a square matrix A (denoted det(A)) is a number encoding certain properties of the matrix. A matrix is invertible if and only if its determinant is non-zero.
The inverse of a matrix can be calculated from the following formula:
A⁻¹ = Adj(A)/det(A) , where Adj(A), the adjugate of A, is defined as the transpose of its cofactor matrix.
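For a 2 x 2 matrix this formula can be written out by hand; the sketch below (the helper name inverse_2x2 is illustrative) applies it directly:

def inverse_2x2(A):
    a, b = A[0]
    c, d = A[1]
    det = a * d - b * c                 # determinant of [[a, b], [c, d]]
    if det == 0:
        raise ValueError("matrix is not invertible (det = 0)")
    # Adj(A) for a 2 x 2 matrix is [[d, -b], [-c, a]]
    return [[d / det, -b / det], [-c / det, a / det]]

print(inverse_2x2([[4, 7], [2, 6]]))   # [[0.6, -0.7], [-0.2, 0.4]]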
Co-factor matrix :
It is the matrix having the co-factors as its elements. The co-factor of an element of the matrix is obtained when the minor Mᵢⱼ of the element is multiplied by (-1)ⁱ⁺ʲ, where i and j refer to the row and the column to which the given element belongs. The cofactor is denoted by
Cᵢⱼ = (-1)ⁱ⁺ʲ Mᵢⱼ
The minor Mᵢⱼ of a particular element of the matrix is defined as the determinant of the matrix obtained after deleting the row and the column to which the element belongs. The matrix of minors is the new matrix formed from the minor of each element.
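A sketch of a minor and the corresponding cofactor for the 3 x 3 example from earlier (illustrative helper names; NumPy is used only for the determinant, and 0-based indices are used, which leaves the sign (-1)ⁱ⁺ʲ unchanged):

import numpy as np

def minor(A, i, j):
    # determinant of the matrix left after deleting row i and column j
    sub = np.delete(np.delete(A, i, axis=0), j, axis=1)
    return np.linalg.det(sub)

def cofactor(A, i, j):
    return (-1) ** (i + j) * minor(A, i, j)

A = np.array([[9, 13, 5], [1, 10, 7], [3, 7, 9]])
print(minor(A, 0, 0))      # det of [[10, 7], [7, 9]] ≈ 41
print(cofactor(A, 0, 0))   # (+1) * 41 ≈ 41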
Application of matrices :
To solve linear equations.
Matrices can be used to compactly write and work with multiple linear equations. We can solve linear equations with the use of matrix algebra.
If A is an 'm x n' matrix, X denotes the column vector (n x 1 matrix) of the n variables x₁ , x₂ , ... , xₙ , and b is an m x 1 column vector, then the matrix equation
AX = b is equivalent to the system
a₁₁x₁ + a₁₂x₂ + ... + a₁ₙxₙ = b₁
...
aₘ₁x₁ + aₘ₂x₂ + ... + aₘₙxₙ = bₘ
This can be solved as
X = A⁻¹ b
where A⁻¹ is the inverse of A (this requires A to be square and invertible).
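A minimal sketch (NumPy assumed, illustrative 2 x 2 system) solving AX = b either with the explicit inverse or with the library's solver:

import numpy as np

A = np.array([[2.0, 1.0], [1.0, 3.0]])   # coefficients of 2x + y = 5, x + 3y = 10
b = np.array([5.0, 10.0])

x = np.linalg.inv(A) @ b        # X = A⁻¹ b
print(x)                        # [1. 3.]
print(np.linalg.solve(A, b))    # same result, computed without forming A⁻¹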
Conclusion :
From the study of matrices we may conclude that the matrix is a fundamental tool of modern mathematics and physics. We have discussed only one such application. In physics, matrix algebra was used by Heisenberg to develop quantum mechanics, and the equations of motion of coupled harmonic systems can be described in matrix form. In mathematics, matrices are used in probability theory and statistics, graph theory, etc. In electronics, mesh analysis and nodal analysis use linear equations in matrix form for their solution.