

Matrix equal to its transpose
This article is about a matrix symmetric about its diagonal. For a matrix symmetric about its center, see Centrosymmetric matrix.
For matrices with symmetry over the complex number field, see Hermitian matrix.

Figure: Symmetry of a 5×5 matrix

In linear algebra, a symmetric matrix is a square matrix that is equal to its transpose. Formally,

$$A \text{ is symmetric} \iff A = A^{\mathsf{T}}.$$

Because equal matrices have equal dimensions, only square matrices can be symmetric.

The entries of a symmetric matrix are symmetric with respect to the main diagonal. So if $a_{ij}$ denotes the entry in the $i$th row and $j$th column, then

$$A \text{ is symmetric} \iff a_{ji} = a_{ij}$$

for all indices $i$ and $j$.

Every square diagonal matrix is symmetric, since all off-diagonal elements are zero. Similarly, in characteristic different from 2, each diagonal element of a skew-symmetric matrix must be zero, since each is its own negative.

In linear algebra, a real symmetric matrix represents a self-adjoint operator expressed in an orthonormal basis over a real inner product space. The corresponding object for a complex inner product space is a Hermitian matrix with complex-valued entries, which is equal to its conjugate transpose. Therefore, in linear algebra over the complex numbers, it is often assumed that a symmetric matrix refers to one which has real-valued entries. Symmetric matrices appear naturally in a variety of applications, and typical numerical linear algebra software makes special accommodations for them.

Example

The following $3 \times 3$ matrix is symmetric:

$$A = \begin{bmatrix} 1 & 7 & 3 \\ 7 & 4 & 5 \\ 3 & 5 & 2 \end{bmatrix},$$

since $A = A^{\mathsf{T}}$.

Properties

Basic properties

  ‱ The sum and difference of two symmetric matrices are symmetric.
  ‱ This is not always true for the product: given symmetric matrices $A$ and $B$, the product $AB$ is symmetric if and only if $A$ and $B$ commute, i.e., if $AB = BA$ (see the numerical check after this list).
  ‱ For any integer $n$, $A^n$ is symmetric if $A$ is symmetric.
  ‱ If $A^{-1}$ exists, it is symmetric if and only if $A$ is symmetric.
  ‱ The rank of a symmetric matrix $A$ is equal to the number of non-zero eigenvalues of $A$.
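These properties are easy to check numerically. The following sketch (NumPy, illustrative only; the helper random_symmetric is defined here just for the example) exercises each item on generic random matrices:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_symmetric(n):
    """Return a random real symmetric n x n matrix."""
    M = rng.standard_normal((n, n))
    return (M + M.T) / 2

A = random_symmetric(4)
B = random_symmetric(4)

# Sum and difference of symmetric matrices are symmetric.
assert np.allclose(A + B, (A + B).T)
assert np.allclose(A - B, (A - B).T)

# The product is generally NOT symmetric unless A and B commute.
print("AB symmetric?", np.allclose(A @ B, (A @ B).T))
print("A and B commute?", np.allclose(A @ B, B @ A))

# Powers and the inverse (when it exists) stay symmetric.
assert np.allclose(np.linalg.matrix_power(A, 3), np.linalg.matrix_power(A, 3).T)
assert np.allclose(np.linalg.inv(A), np.linalg.inv(A).T)

# Rank equals the number of non-zero eigenvalues (generically none are zero here).
eigvals = np.linalg.eigvalsh(A)
assert np.linalg.matrix_rank(A) == np.sum(~np.isclose(eigvals, 0.0))
```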

Decomposition into symmetric and skew-symmetric

Any square matrix can uniquely be written as the sum of a symmetric and a skew-symmetric matrix. This decomposition is known as the Toeplitz decomposition. Let $\mathrm{Mat}_n$ denote the space of $n \times n$ matrices. If $\mathrm{Sym}_n$ denotes the space of $n \times n$ symmetric matrices and $\mathrm{Skew}_n$ the space of $n \times n$ skew-symmetric matrices, then $\mathrm{Mat}_n = \mathrm{Sym}_n + \mathrm{Skew}_n$ and $\mathrm{Sym}_n \cap \mathrm{Skew}_n = \{0\}$, i.e.

$$\mathrm{Mat}_n = \mathrm{Sym}_n \oplus \mathrm{Skew}_n,$$

where $\oplus$ denotes the direct sum. For $X \in \mathrm{Mat}_n$,

$$X = \tfrac{1}{2}\left(X + X^{\mathsf{T}}\right) + \tfrac{1}{2}\left(X - X^{\mathsf{T}}\right).$$

Notice that $\tfrac{1}{2}\left(X + X^{\mathsf{T}}\right) \in \mathrm{Sym}_n$ and $\tfrac{1}{2}\left(X - X^{\mathsf{T}}\right) \in \mathrm{Skew}_n$. This is true for every square matrix $X$ with entries from any field whose characteristic is different from 2.
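A minimal NumPy sketch of this decomposition:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((4, 4))      # arbitrary square matrix

sym  = (X + X.T) / 2                 # symmetric part
skew = (X - X.T) / 2                 # skew-symmetric part

assert np.allclose(sym, sym.T)       # symmetric
assert np.allclose(skew, -skew.T)    # skew-symmetric
assert np.allclose(X, sym + skew)    # the parts add back to X
```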

A symmetric $n \times n$ matrix is determined by $\tfrac{1}{2}n(n+1)$ scalars (the number of entries on or above the main diagonal). Similarly, a skew-symmetric matrix is determined by $\tfrac{1}{2}n(n-1)$ scalars (the number of entries above the main diagonal).

Matrix congruent to a symmetric matrix

Any matrix congruent to a symmetric matrix is again symmetric: if $X$ is a symmetric matrix, then so is $A X A^{\mathsf{T}}$ for any matrix $A$.
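A quick numerical illustration of this fact (NumPy, illustrative only; note that $A$ need not even be square for $A X A^{\mathsf{T}}$ to be symmetric):

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((4, 4))
X = (X + X.T) / 2                 # make X symmetric
A = rng.standard_normal((3, 4))   # arbitrary (here rectangular) matrix

Y = A @ X @ A.T                   # congruence-type transform A X A^T
assert np.allclose(Y, Y.T)        # the result is again symmetric
```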

Symmetry implies normality

A (real-valued) symmetric matrix is necessarily a normal matrix.

Real symmetric matrices

Denote by $\langle \cdot, \cdot \rangle$ the standard inner product on $\mathbb{R}^n$. The real $n \times n$ matrix $A$ is symmetric if and only if

$$\langle Ax, y \rangle = \langle x, Ay \rangle \quad \forall x, y \in \mathbb{R}^n.$$

Since this definition is independent of the choice of basis, symmetry is a property that depends only on the linear operator $A$ and a choice of inner product. This characterization of symmetry is useful, for example, in differential geometry, for each tangent space to a manifold may be endowed with an inner product, giving rise to what is called a Riemannian manifold. Another area where this formulation is used is in Hilbert spaces.

The finite-dimensional spectral theorem says that any symmetric matrix whose entries are real can be diagonalized by an orthogonal matrix. More explicitly: for every real symmetric matrix $A$ there exists a real orthogonal matrix $Q$ such that $D = Q^{\mathsf{T}} A Q$ is a diagonal matrix. Every real symmetric matrix is thus, up to choice of an orthonormal basis, a diagonal matrix.

If $A$ and $B$ are $n \times n$ real symmetric matrices that commute, then they can be simultaneously diagonalized by an orthogonal matrix: there exists a basis of $\mathbb{R}^n$ such that every element of the basis is an eigenvector for both $A$ and $B$.

Every real symmetric matrix is Hermitian, and therefore all its eigenvalues are real. (In fact, the eigenvalues are the entries in the diagonal matrix $D$ above, and therefore $D$ is uniquely determined by $A$ up to the order of its entries.) Essentially, the property of being symmetric for real matrices corresponds to the property of being Hermitian for complex matrices.
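In NumPy this diagonalization is provided by np.linalg.eigh; a minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((5, 5))
A = (A + A.T) / 2                           # real symmetric

eigenvalues, Q = np.linalg.eigh(A)          # Q is real orthogonal
D = Q.T @ A @ Q                             # should be diagonal

assert np.allclose(Q @ Q.T, np.eye(5))      # Q is orthogonal
assert np.allclose(D, np.diag(eigenvalues)) # D is diagonal with the eigenvalues
assert np.isrealobj(eigenvalues)            # the eigenvalues are real
```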

Complex symmetric matrices

A complex symmetric matrix can be 'diagonalized' using a unitary matrix: thus if $A$ is a complex symmetric matrix, there is a unitary matrix $U$ such that $U A U^{\mathsf{T}}$ is a real diagonal matrix with non-negative entries. This result is referred to as the Autonne–Takagi factorization. It was originally proved by LĂ©on Autonne (1915) and Teiji Takagi (1925) and rediscovered with different proofs by several other mathematicians.

In fact, the matrix $B = A^{\dagger} A$ is Hermitian and positive semi-definite, so there is a unitary matrix $V$ such that $V^{\dagger} B V$ is diagonal with non-negative real entries. Thus $C = V^{\mathsf{T}} A V$ is complex symmetric with $C^{\dagger} C$ real. Writing $C = X + iY$ with $X$ and $Y$ real symmetric matrices, $C^{\dagger} C = X^2 + Y^2 + i(XY - YX)$. Thus $XY = YX$. Since $X$ and $Y$ commute, there is a real orthogonal matrix $W$ such that both $W X W^{\mathsf{T}}$ and $W Y W^{\mathsf{T}}$ are diagonal. Setting $U = W V^{\mathsf{T}}$ (a unitary matrix), the matrix $U A U^{\mathsf{T}}$ is complex diagonal.

Pre-multiplying $U$ by a suitable diagonal unitary matrix (which preserves the unitarity of $U$), the diagonal entries of $U A U^{\mathsf{T}}$ can be made real and non-negative as desired. To construct this matrix, express the diagonal matrix as $U A U^{\mathsf{T}} = \operatorname{diag}(r_1 e^{i\theta_1}, r_2 e^{i\theta_2}, \dots, r_n e^{i\theta_n})$. The matrix we seek is simply $D = \operatorname{diag}(e^{-i\theta_1/2}, e^{-i\theta_2/2}, \dots, e^{-i\theta_n/2})$. Clearly $D U A U^{\mathsf{T}} D = \operatorname{diag}(r_1, r_2, \dots, r_n)$ as desired, so we make the modification $U' = DU$. Since the squares of the diagonal entries $r_i$ are the eigenvalues of $A^{\dagger} A$, they coincide with the singular values of $A$. (Note that, regarding the eigendecomposition of a complex symmetric matrix $A$, the Jordan normal form of $A$ may not be diagonal, and therefore $A$ may not be diagonalizable by any similarity transformation.)
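The construction above can be followed almost line by line in code. The sketch below (NumPy, illustrative only; the function name takagi is chosen for this example, and the generic case is assumed, namely that the random real combination of $X$ and $Y$ has distinct eigenvalues):

```python
import numpy as np

def takagi(A, seed=0):
    """Autonne-Takagi factorization sketch for a complex symmetric matrix A.

    Returns (U, s) with U unitary and s non-negative, so that U @ A @ U.T is
    (approximately) diag(s). Generic case only (see the distinctness assumption).
    """
    rng = np.random.default_rng(seed)
    A = np.asarray(A, dtype=complex)

    # B = A^dagger A is Hermitian positive semi-definite; diagonalize it with a unitary V.
    B = A.conj().T @ A
    _, V = np.linalg.eigh(B)

    # C = V^T A V is complex symmetric with C^dagger C real diagonal,
    # so its real and imaginary parts X, Y are real symmetric and commute.
    C = V.T @ A @ V
    X, Y = C.real, C.imag

    # Simultaneously diagonalize the commuting X and Y: the eigenvectors of a
    # generic real combination X + mu*Y diagonalize both.
    mu = rng.standard_normal()
    _, Qw = np.linalg.eigh(X + mu * Y)
    W = Qw.T                                   # real orthogonal

    U0 = W @ V.T                               # unitary; U0 A U0^T is complex diagonal
    d = np.diag(U0 @ A @ U0.T)

    # Absorb the phases: D = diag(exp(-i*theta/2)) makes the diagonal real >= 0.
    D = np.diag(np.exp(-0.5j * np.angle(d)))
    return D @ U0, np.abs(d)

# Usage on a random complex symmetric (not Hermitian) matrix.
rng = np.random.default_rng(1)
M = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A = (M + M.T) / 2

U, s = takagi(A)
assert np.allclose(U @ U.conj().T, np.eye(4))  # U is unitary
assert np.allclose(U @ A @ U.T, np.diag(s))    # U A U^T is real non-negative diagonal
assert np.allclose(np.sort(s), np.sort(np.linalg.svd(A, compute_uv=False)))
```

The final assertion checks the remark above: the resulting diagonal entries coincide with the singular values of $A$.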

Decomposition

Using the Jordan normal form, one can prove that every square real matrix can be written as a product of two real symmetric matrices, and every square complex matrix can be written as a product of two complex symmetric matrices.

Every real non-singular matrix can be uniquely factored as the product of an orthogonal matrix and a symmetric positive definite matrix, which is called a polar decomposition. Singular matrices can also be factored, but not uniquely.
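A sketch of the polar decomposition computed via the singular value decomposition (NumPy, illustrative only):

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((4, 4))            # generically non-singular

# Polar decomposition A = Q P via the SVD A = U S V^T:
# Q = U V^T is orthogonal, P = V S V^T is symmetric positive definite.
U, s, Vt = np.linalg.svd(A)
Q = U @ Vt
P = Vt.T @ np.diag(s) @ Vt

assert np.allclose(Q @ Q.T, np.eye(4))     # Q is orthogonal
assert np.allclose(P, P.T)                 # P is symmetric
assert np.all(np.linalg.eigvalsh(P) > 0)   # P is positive definite (A non-singular)
assert np.allclose(A, Q @ P)               # A = Q P
```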

Cholesky decomposition states that every real positive-definite symmetric matrix $A$ is a product of a lower-triangular matrix $L$ and its transpose,

$$A = L L^{\mathsf{T}}.$$
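For example (a minimal NumPy sketch):

```python
import numpy as np

rng = np.random.default_rng(5)
M = rng.standard_normal((4, 4))
A = M @ M.T + 4 * np.eye(4)        # symmetric positive definite by construction

L = np.linalg.cholesky(A)          # lower-triangular Cholesky factor
assert np.allclose(A, L @ L.T)     # A = L L^T
assert np.allclose(L, np.tril(L))  # L is lower triangular
```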

If the matrix is symmetric indefinite, it may still be decomposed as $P A P^{\mathsf{T}} = L D L^{\mathsf{T}}$, where $P$ is a permutation matrix (arising from the need to pivot), $L$ is a lower unit triangular matrix, and $D$ is a direct sum of symmetric $1 \times 1$ and $2 \times 2$ blocks; this is called the Bunch–Kaufman decomposition.
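A minimal sketch, assuming SciPy's scipy.linalg.ldl interface for this symmetric-indefinite factorization:

```python
import numpy as np
from scipy.linalg import ldl

rng = np.random.default_rng(6)
M = rng.standard_normal((5, 5))
A = (M + M.T) / 2                    # symmetric, typically indefinite

lu, d, perm = ldl(A, lower=True)     # Bunch-Kaufman style L D L^T factorization
assert np.allclose(A, lu @ d @ lu.T) # A is recovered from the factors
# d is block diagonal with 1x1 and 2x2 blocks; lu[perm, :] is unit lower triangular.
```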

A general (complex) symmetric matrix may be defective and thus not be diagonalizable. If $A$ is diagonalizable it may be decomposed as

$$A = Q \Lambda Q^{\mathsf{T}}$$

where $Q$ is an orthogonal matrix ($Q Q^{\mathsf{T}} = I$) and $\Lambda$ is a diagonal matrix of the eigenvalues of $A$. In the special case that $A$ is real symmetric, then $Q$ and $\Lambda$ are also real. To see orthogonality, suppose $\mathbf{x}$ and $\mathbf{y}$ are eigenvectors corresponding to distinct eigenvalues $\lambda_1$, $\lambda_2$. Then

$$\lambda_1 \langle \mathbf{x}, \mathbf{y} \rangle = \langle A\mathbf{x}, \mathbf{y} \rangle = \langle \mathbf{x}, A\mathbf{y} \rangle = \lambda_2 \langle \mathbf{x}, \mathbf{y} \rangle.$$

Since $\lambda_1$ and $\lambda_2$ are distinct, we have $\langle \mathbf{x}, \mathbf{y} \rangle = 0$.

Hessian

Symmetric $n \times n$ matrices of real functions appear as the Hessians of twice differentiable functions of $n$ real variables (the continuity of the second derivative is not needed, despite common belief to the contrary).

Every quadratic form $q$ on $\mathbb{R}^n$ can be uniquely written in the form $q(\mathbf{x}) = \mathbf{x}^{\mathsf{T}} A \mathbf{x}$ with a symmetric $n \times n$ matrix $A$. Because of the above spectral theorem, one can then say that every quadratic form, up to the choice of an orthonormal basis of $\mathbb{R}^n$, "looks like"

$$q(x_1, \ldots, x_n) = \sum_{i=1}^{n} \lambda_i x_i^2$$

with real numbers $\lambda_i$. This considerably simplifies the study of quadratic forms, as well as the study of the level sets $\{\mathbf{x} : q(\mathbf{x}) = 1\}$, which are generalizations of conic sections.
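A minimal NumPy sketch of this change of variables:

```python
import numpy as np

rng = np.random.default_rng(7)
A = rng.standard_normal((3, 3))
A = (A + A.T) / 2                       # symmetric matrix of the quadratic form

lam, Q = np.linalg.eigh(A)              # eigenvalues and an orthonormal eigenbasis

x = rng.standard_normal(3)
y = Q.T @ x                             # coordinates of x in the eigenbasis

q_original = x @ A @ x                  # q(x) = x^T A x
q_diagonal = np.sum(lam * y**2)         # sum_i lambda_i y_i^2
assert np.allclose(q_original, q_diagonal)
```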

This is important partly because the second-order behavior of every smooth multi-variable function is described by the quadratic form belonging to the function's Hessian; this is a consequence of Taylor's theorem.

Symmetrizable matrix

An $n \times n$ matrix $A$ is said to be symmetrizable if there exists an invertible diagonal matrix $D$ and a symmetric matrix $S$ such that $A = DS$.

The transpose of a symmetrizable matrix is symmetrizable, since $A^{\mathsf{T}} = (DS)^{\mathsf{T}} = SD = D^{-1}(DSD)$ and $DSD$ is symmetric. A matrix $A = (a_{ij})$ is symmetrizable if and only if the following conditions are met (a small numerical illustration follows the list):

  1. $a_{ij} = 0$ implies $a_{ji} = 0$ for all $1 \le i \le j \le n.$
  2. $a_{i_1 i_2} a_{i_2 i_3} \dots a_{i_k i_1} = a_{i_2 i_1} a_{i_3 i_2} \dots a_{i_1 i_k}$ for any finite sequence $(i_1, i_2, \dots, i_k).$
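A minimal sketch constructing a symmetrizable matrix directly from the definition $A = DS$ and spot-checking both conditions on a sample cycle (NumPy, illustrative only):

```python
import numpy as np

rng = np.random.default_rng(8)
n = 4

# Build A = D S with D invertible diagonal and S symmetric, so A is symmetrizable.
D = np.diag(rng.uniform(0.5, 2.0, size=n))
S = rng.standard_normal((n, n))
S = (S + S.T) / 2
A = D @ S

# Condition 1: a_ij = 0 implies a_ji = 0 (the zero pattern is symmetric).
assert np.array_equal(np.isclose(A, 0), np.isclose(A.T, 0))

# Condition 2 on the cycle (i1, i2, i3): the products around the cycle agree.
i1, i2, i3 = 0, 1, 2
forward  = A[i1, i2] * A[i2, i3] * A[i3, i1]
backward = A[i2, i1] * A[i3, i2] * A[i1, i3]
assert np.isclose(forward, backward)
```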

See also

Other types of symmetry/pattern in square matrices have special names; see for example:

See also symmetry in mathematics.

Notes

  1. JesĂșs Rojo GarcĂ­a (1986). Álgebra lineal (in Spanish) (2nd ed.). Editorial AC. ISBN 84-7288-120-2.
  2. Richard Bellman (1997). Introduction to Matrix Analysis (2nd ed.). SIAM. ISBN 08-9871-399-4.
  3. Horn, R.A.; Johnson, C.R. (2013). Matrix analysis (2nd ed.). Cambridge University Press. pp. 263, 278. MR 2978290.
  4. See:
  5. Bosch, A. J. (1986). "The factorization of a square matrix into two symmetric matrices". American Mathematical Monthly. 93 (6): 462–464. doi:10.2307/2323471. JSTOR 2323471.
  6. Golub, G.H.; Van Loan, C.F. (1996). Matrix Computations. The Johns Hopkins University Press, Baltimore, London.
  7. DieudonnĂ©, Jean A. (1969). Foundations of Modern Analysis (Enlarged and Corrected printing ed.). Academic Press. Theorem (8.12.2), p. 180. ISBN 978-1443724265.

References

  • Horn, Roger A.; Johnson, Charles R. (2013), Matrix analysis (2nd ed.), Cambridge University Press, ISBN 978-0-521-54823-6
