Cholesky Decomposition
A = LLᵀ for symmetric positive definite (SPD) A; L is lower triangular with a positive diagonal. The factorization is unique when A is SPD and costs roughly half as much as LU. Used in least squares, Kalman filtering, and Monte Carlo simulation.
Why This Mathematical Concept Matters
Why: Cholesky is faster and more stable than LU for SPD matrices. Used in least squares (normal equations), Monte Carlo, and optimization.
How: Compute L column by column: L_ii = √(A_ii − Σ_{k<i} L_ik²), and L_ij = (A_ij − Σ_{k<j} L_ik L_jk) / L_jj for i > j.
- A SPD ⟺ all leading principal minors > 0 (Sylvester's criterion).
- Cholesky fails if A is not SPD.
- Numerically stable for well-conditioned A.
About Cholesky Decomposition
Cholesky decomposition is a special case of LU decomposition applicable only to symmetric positive definite matrices. It decomposes a matrix into the product of a lower triangular matrix and its transpose.
What is Cholesky Decomposition?
For any symmetric positive definite matrix A, there exists a unique decomposition A = LL^T where:
- L is a lower triangular matrix with positive diagonal entries
- L^T is the transpose of L (an upper triangular matrix)
Key Concepts:
- Symmetric Matrix: A matrix A is symmetric if A = A^T (a_ij = a_ji)
- Positive Definite Matrix: A symmetric matrix A is positive definite if x^T A x > 0 for all non-zero vectors x
- Properties: All eigenvalues of a positive definite matrix are positive
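These definitions can be checked numerically. A minimal sketch using NumPy (the matrix values are illustrative):

```python
import numpy as np

# A small symmetric positive definite matrix (illustrative values).
A = np.array([[4.0, 2.0],
              [2.0, 3.0]])

# Symmetric: A equals its transpose.
assert np.allclose(A, A.T)

# Positive definite: all eigenvalues are strictly positive.
assert np.all(np.linalg.eigvalsh(A) > 0)

# Cholesky factor: lower triangular L with positive diagonal and A = L L^T.
L = np.linalg.cholesky(A)
assert np.allclose(L @ L.T, A)
assert np.allclose(L, np.tril(L))   # L is lower triangular
assert np.all(np.diag(L) > 0)       # positive diagonal entries
```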
Advantages of Cholesky Decomposition:
- Approximately twice as efficient as LU decomposition for applicable matrices
- Numerically stable and does not require pivoting
- Ensures a unique factorization
- Preserves the bandwidth of banded matrices (the factor of a banded matrix is banded)
- Can be used as a test for positive definiteness
Applications of Cholesky Decomposition
Scientific Computing
- Linear Systems: Efficiently solving Ax = b when A is symmetric positive definite
- Numerical Optimization: Essential for many constrained and unconstrained optimization methods
- Least Squares: Solving normal equations in least squares problems (A^T A x = A^T b)
- Finite Element Method: Solving stiffness equations in structural analysis
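The normal-equations route from the Least Squares item can be sketched as follows, assuming NumPy and an illustrative random system. (Note that forming AᵀA squares the condition number, so QR-based solvers are often preferred for ill-conditioned problems.)

```python
import numpy as np

# Least squares min ||A x - b|| via the normal equations A^T A x = A^T b,
# factoring the SPD Gram matrix A^T A = L L^T (illustrative sketch).
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 3))    # tall matrix; full column rank
b = rng.standard_normal(6)

G = A.T @ A                        # SPD when A has full column rank
L = np.linalg.cholesky(G)

# Solve L y = A^T b (forward), then L^T x = y (backward).
# np.linalg.solve is used for clarity; dedicated triangular solvers are faster.
y = np.linalg.solve(L, A.T @ b)
x = np.linalg.solve(L.T, y)

# Agrees with the library least-squares solver.
x_ref, *_ = np.linalg.lstsq(A, b, rcond=None)
assert np.allclose(x, x_ref)
```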
Machine Learning & Statistics
- Multivariate Gaussians: Sampling from multivariate normal distributions
- Monte Carlo Simulations: Generating correlated random variables
- Kalman Filtering: State estimation in control systems and tracking
- Covariance Matrices: Efficient handling in Bayesian statistics
Signal Processing
- Time Series Analysis: Modeling ARMA processes
- Filter Design: Minimum-phase filter implementation
- Spectral Estimation: Efficient processing of covariance matrices
- Signal Detection: Whitening filters in communication systems
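The whitening idea from the last item can be illustrated with sampled data: if x has covariance Σ = LLᵀ, then L⁻¹x has identity covariance. A sketch with illustrative numbers, not a real filter design:

```python
import numpy as np

# Whitening: samples with covariance Sigma = L L^T become uncorrelated
# with unit variance after applying L^{-1}.
rng = np.random.default_rng(1)
Sigma = np.array([[2.0, 0.8],
                  [0.8, 1.0]])
L = np.linalg.cholesky(Sigma)

Z = rng.standard_normal((2, 200_000))
corr = L @ Z                         # columns have covariance ~ Sigma
white = np.linalg.solve(L, corr)     # apply L^{-1} to whiten

# The empirical covariance of the whitened data approaches the identity.
emp = np.cov(white)
assert np.allclose(emp, np.eye(2), atol=0.05)
```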
Graphics & Physics
- Cloth Simulation: Solving mass-spring systems
- Fluid Dynamics: Pressure projection in incompressible flows
- Computer Vision: Structure from motion and bundle adjustment
- Molecular Dynamics: Constrained molecular systems
The Cholesky Algorithm
The Cholesky decomposition can be computed using the following recursive formulas:
- For diagonal elements: L_ii = √(A_ii − Σ_{k<i} L_ik²)
- For lower triangular elements (i > j): L_ij = (A_ij − Σ_{k<j} L_ik L_jk) / L_jj
- Working column by column, from left to right
This algorithm processes the matrix one column at a time. For each column, we first compute the diagonal element and then all elements below it.
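A direct transcription of these formulas, as a pure-Python sketch (production code would call a tuned library routine such as LAPACK's `dpotrf`):

```python
import math

def cholesky(A):
    """Return lower-triangular L with A = L L^T, computed column by column.

    Raises ValueError if A is not positive definite. A textbook sketch,
    not an optimized implementation.
    """
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for j in range(n):                                # one column at a time
        # Diagonal: L_jj = sqrt(A_jj - sum_{k<j} L_jk^2)
        s = A[j][j] - sum(L[j][k] ** 2 for k in range(j))
        if s <= 0.0:
            raise ValueError("matrix is not positive definite")
        L[j][j] = math.sqrt(s)
        # Below the diagonal: L_ij = (A_ij - sum_{k<j} L_ik L_jk) / L_jj
        for i in range(j + 1, n):
            L[i][j] = (A[i][j] - sum(L[i][k] * L[j][k] for k in range(j))) / L[j][j]
    return L

# A classic worked example with an exact integer factor.
A = [[4.0, 12.0, -16.0],
     [12.0, 37.0, -43.0],
     [-16.0, -43.0, 98.0]]
L = cholesky(A)
# L is [[2, 0, 0], [6, 1, 0], [-8, 5, 3]]
```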
Numerical Considerations
Stability
Cholesky decomposition is remarkably stable numerically. The algorithm will fail only if the matrix is not positive definite (attempting to compute the square root of a negative number). This property makes it useful as a test for positive definiteness: if the Cholesky algorithm completes successfully, the matrix is positive definite.
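That positive-definiteness test can be written as a small helper; a sketch assuming NumPy, whose `cholesky` raises `LinAlgError` when the factorization fails:

```python
import numpy as np

def is_positive_definite(A):
    """Test a symmetric matrix for positive definiteness by attempting
    its Cholesky factorization (a common practical test)."""
    try:
        np.linalg.cholesky(A)
        return True
    except np.linalg.LinAlgError:
        return False

assert is_positive_definite(np.array([[2.0, 1.0], [1.0, 2.0]]))      # SPD
assert not is_positive_definite(np.array([[1.0, 2.0], [2.0, 1.0]]))  # indefinite
```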
Efficiency
The Cholesky decomposition requires approximately n³/3 floating-point operations, which is roughly half the cost of LU decomposition. This efficiency, combined with its numerical stability, makes it the preferred method whenever applicable.
Real-World Example: Monte Carlo Simulation
Suppose we want to generate samples from a multivariate normal distribution with a given covariance matrix Σ. We can use the Cholesky decomposition as follows:
- Compute the Cholesky decomposition Σ = LLT
- Generate a vector z of independent standard normal random variables
- Compute x = μ + Lz, where μ is the mean vector
The resulting vector x will have the desired multivariate normal distribution. This approach is widely used in finance for risk modeling, in physics for particle simulations, and in machine learning for Bayesian inference.
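The three steps above can be sketched as follows (the mean and covariance values are illustrative):

```python
import numpy as np

# Sample x ~ N(mu, Sigma) via x = mu + L z, where Sigma = L L^T.
rng = np.random.default_rng(42)
mu = np.array([1.0, -2.0])
Sigma = np.array([[2.0, 0.6],
                  [0.6, 1.0]])

L = np.linalg.cholesky(Sigma)          # step 1: factor the covariance
Z = rng.standard_normal((2, 100_000))  # step 2: independent standard normals
X = mu[:, None] + L @ Z                # step 3: correlate and shift

# Sample statistics approach the requested mean and covariance.
assert np.allclose(X.mean(axis=1), mu, atol=0.05)
assert np.allclose(np.cov(X), Sigma, atol=0.05)
```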
Historical Note:
The Cholesky decomposition is named after André-Louis Cholesky, a French military officer and mathematician who developed the algorithm in the early 20th century for surveying applications. Although he formulated it around 1910, his method was not published until after his death in World War I. Today, it stands as one of the most important factorization methods in numerical linear algebra.
🧮 Fascinating Math Facts
- The factor L is unique once its diagonal entries are required to be positive.
- The inverse follows from the factor: A⁻¹ = (Lᵀ)⁻¹L⁻¹.
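The inverse identity can be verified numerically; a sketch assuming NumPy (in practice one solves systems with the factor rather than forming A⁻¹ explicitly):

```python
import numpy as np

# A^{-1} = (L^T)^{-1} L^{-1}: the inverse built from two triangular factors.
A = np.array([[4.0, 2.0],
              [2.0, 3.0]])
L = np.linalg.cholesky(A)
L_inv = np.linalg.inv(L)     # inverse of a triangular factor
A_inv = L_inv.T @ L_inv      # (L^T)^{-1} L^{-1}
assert np.allclose(A_inv @ A, np.eye(2))
```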