The calculator displays results in multiple formats depending on the operation. Here's how to interpret each type of output:
| Output | Meaning & Interpretation |
|---|---|
| Result Matrix | The output of the selected operation (e.g., A+B, AB, A⁻¹, A⊤, RREF). Dimensions match operation rules. For RREF, identify pivot positions (leading 1s) to determine rank and free variables. For inverse, verify AA⁻¹ = I. For transpose, verify dimensions flipped and elements (A⊤)ᵢⱼ = Aⱼᵢ. |
| Determinant | Scalar value measuring area/volume scaling. det(A) = 0 ⇔ matrix is singular (non-invertible, rank deficient). |det(A)| > 1 indicates expansion, |det(A)| < 1 indicates contraction. Sign indicates orientation (positive = preserves orientation, negative = reverses). |
| Rank | Number of linearly independent rows/columns. For m×n matrix A, rank(A) ≤ min(m,n). Full rank = min(m,n) indicates maximal independence. Rank < n ⇒ matrix is singular (if square). Nullity (dimension of null space) = n - rank(A). Rank reveals degrees of freedom in solutions to Ax = b. |
| Trace | Sum of diagonal elements: tr(A) = Σaᵢᵢ. For square matrices, tr(A) equals the sum of eigenvalues (even if complex). Invariant under similarity transformations: tr(A) = tr(PAP⁻¹). Appears in the characteristic polynomial and serves as a measure of matrix "size" in some contexts. |
| Eigenvalues / Eigenvectors | Eigenvalues λ are scalars such that Av = λv for a non-zero eigenvector v. Eigenvalues reveal scaling factors along eigenvector directions. Complex eigenvalues (a ± bi) occur for real matrices with rotational components. Sum of eigenvalues = tr(A); product = det(A). If an n×n matrix has n linearly independent eigenvectors, they form a basis and the matrix is diagonalizable. |
| RREF with Pivots | Reduced row echelon form shows pivot columns (leading 1s) and free variables (non-pivot columns). Rank = number of pivots. For augmented [A | b], consistency requires no row [0...0 | c] with c ≠ 0. Infinite solutions exist when free variables are present. Read solution: express pivot variables in terms of free variables. |
| LU Decomposition | A = LU (or PA = LU with pivoting) where L is lower triangular (1s on diagonal) and U is upper triangular. Use for efficient solving: Ly = b (forward substitution), then Ux = y (back substitution). Faster than computing A⁻¹. Pivoting (row swaps encoded in P) improves numerical stability. |
| QR Decomposition | A = QR where Q is orthogonal (Q⊤Q = I) and R is upper triangular. More stable than LU for least squares (minimize ||Ax − b||₂). Solution: x = R⁻¹Q⊤b. Q preserves lengths/angles; condition number κ(Q) = 1. Used in the QR algorithm for eigenvalues. |
| SVD (U, Σ, V) | A = UΣV⊤ where U (m×m) and V (n×n) are orthogonal, Σ (m×n) diagonal with singular values σᵢ ≥ 0. Rank = number of non-zero σᵢ. Columns of U are left singular vectors (orthonormal basis for the range), columns of V are right singular vectors (orthonormal basis for the row space). Condition number κ(A) = σₘₐₓ/σₘᵢₙ. Truncate to the top k singular values for the optimal rank-k approximation. Pseudoinverse: A⁺ = VΣ⁺U⊤. |
| Condition Number | κ(A) = ||A|| · ||A⁻¹|| (typically the spectral norm, ||A||₂ = largest singular value). Measures sensitivity to perturbations. κ(A) = 1 for orthogonal matrices (ideal). κ(A) < 100 is well-conditioned; κ(A) > 10¹⁰ is severely ill-conditioned: solutions become unreliable due to rounding errors. Use SVD or regularization for ill-conditioned problems. |
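Several of the cross-checks in the table (trace vs. eigenvalue sum, determinant vs. eigenvalue product, rank from singular values, condition number, AA⁻¹ = I) can be verified programmatically. A minimal NumPy sketch, using a made-up 3×3 matrix rather than actual calculator output:

```python
import numpy as np

# Made-up 3x3 example matrix standing in for calculator input
A = np.array([[4.0, 1.0, 2.0],
              [1.0, 3.0, 0.0],
              [2.0, 0.0, 5.0]])

eigvals = np.linalg.eigvals(A)

# tr(A) equals the sum of the eigenvalues
assert np.isclose(eigvals.sum(), np.trace(A))

# det(A) equals the product of the eigenvalues
assert np.isclose(np.prod(eigvals), np.linalg.det(A))

# Rank = number of non-zero singular values (returned in descending order)
s = np.linalg.svd(A, compute_uv=False)
assert np.linalg.matrix_rank(A) == np.sum(s > 1e-10)

# Condition number = sigma_max / sigma_min (spectral norm)
assert np.isclose(np.linalg.cond(A), s[0] / s[-1])

# A @ A^-1 = I confirms the computed inverse
assert np.allclose(A @ np.linalg.inv(A), np.eye(3))
```

If any of these identities fails badly for your own matrix, suspect ill-conditioning rather than a wrong result.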
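The QR least-squares recipe from the table (x = R⁻¹Q⊤b) can be sketched in NumPy; the four data points below are a hypothetical example, not calculator output:

```python
import numpy as np

# Hypothetical overdetermined system: four equations, two unknowns
A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0],
              [1.0, 4.0]])
b = np.array([6.0, 5.0, 7.0, 10.0])

# Reduced QR: Q (4x2) has orthonormal columns, R (2x2) is upper triangular
Q, R = np.linalg.qr(A)

# x = R^-1 Q^T b, computed by triangular solve rather than forming R^-1
x = np.linalg.solve(R, Q.T @ b)

# Matches NumPy's own least-squares solver
assert np.allclose(x, np.linalg.lstsq(A, b, rcond=None)[0])
```

Solving the triangular system directly avoids the extra rounding error of explicitly inverting R.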
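The SVD identities (reconstruction A = UΣV⊤ and pseudoinverse A⁺ = VΣ⁺U⊤) can be checked the same way; the 2×3 matrix here is a made-up full-rank example:

```python
import numpy as np

# Made-up 2x3 matrix with full row rank
A = np.array([[3.0, 1.0, 1.0],
              [-1.0, 3.0, 1.0]])

U, s, Vt = np.linalg.svd(A)  # s holds singular values, descending

# Rebuild the full m x n Sigma and reconstruct A = U Sigma V^T
Sigma = np.zeros(A.shape)
Sigma[:len(s), :len(s)] = np.diag(s)
assert np.allclose(U @ Sigma @ Vt, A)

# Pseudoinverse A+ = V Sigma+ U^T, where Sigma+ inverts each non-zero sigma_i
Sigma_plus = np.zeros(A.shape[::-1])
Sigma_plus[:len(s), :len(s)] = np.diag(1.0 / s)
A_plus = Vt.T @ Sigma_plus @ U.T
assert np.allclose(A_plus, np.linalg.pinv(A))
```

For rank-deficient matrices the same construction applies, except Σ⁺ leaves zero singular values at zero instead of inverting them.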
Error Indicators: If results display NaN (Not a Number), Infinity, or error messages like "matrix is singular" or "dimension mismatch," recheck your input matrix dimensions, ensure constraints are met (e.g., square for determinant/inverse, matching inner dimensions for multiplication), and verify that the matrix has the required properties (e.g., full rank for inverse, positive-definite for Cholesky). For ill-conditioned matrices (high condition number), consider using higher precision arithmetic or regularization techniques.
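The checks above can be sketched as a guard routine around a solve. `safe_solve`, the condition-number threshold, and the ridge parameter are all hypothetical illustrations, not part of the calculator:

```python
import numpy as np

def safe_solve(A, b, cond_limit=1e10, ridge=1e-8):
    """Solve Ax = b after the dimension checks the error messages refer to.

    Falls back to Tikhonov regularization when A is singular or severely
    ill-conditioned (a hypothetical policy, for illustration only).
    """
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    if A.ndim != 2 or A.shape[0] != A.shape[1]:
        raise ValueError("dimension mismatch: A must be square")
    if b.shape[0] != A.shape[0]:
        raise ValueError("dimension mismatch: len(b) must equal A's row count")
    if np.linalg.cond(A) < cond_limit:
        return np.linalg.solve(A, b)  # well-conditioned: solve directly
    # Regularized normal equations: (A^T A + ridge * I) x = A^T b
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + ridge * np.eye(n), A.T @ b)

# Well-conditioned case solves exactly
x = safe_solve([[2.0, 0.0], [0.0, 1.0]], [2.0, 1.0])

# Singular matrix (cond = inf) falls back to the regularized path
x2 = safe_solve([[1.0, 1.0], [1.0, 1.0]], [2.0, 2.0])
```

The regularized branch returns an approximate solution rather than raising "matrix is singular"; whether that trade-off is acceptable depends on your application.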