# linear-algebra
| Kind | ffi-zig |
|---|---|
| Capabilities | ffi |
| Categories | math numeric ffi |
| Keywords | linear-algebra blas lapack matrix vector math numeric zig-ffi |
High-performance linear algebra via BLAS/LAPACK (uses Accelerate on macOS, OpenBLAS on Linux)
Lazy by default - Build matrix expression trees, optimize via algebraic rewrites, and evaluate in a single pass with no intermediate allocations.
## Files
| File | Description |
|---|---|
| .editorconfig | Editor formatting configuration |
| .gitignore | Git ignore rules for build artifacts and dependencies |
| .tool-versions | asdf tool versions (Zig, Kit) |
| LICENSE | MIT license file |
| README.md | This file |
| examples/3d-vectors.kit | Example: 3d vectors |
| examples/basic.kit | Basic usage example |
| examples/decompositions.kit | Example: decompositions |
| examples/eigenvalues.kit | Example: eigenvalues |
| examples/lazy-expressions.kit | Example: lazy expressions |
| examples/least-squares.kit | Example: least squares |
| examples/matrix-norms.kit | Example: matrix norms |
| examples/transformations.kit | Example: transformations |
| kit.toml | Package manifest with metadata and dependencies |
| src/bidiagonal.kit | A bidiagonal matrix with non-zero elements on main diagonal and one off-diago... |
| src/complex-expr.kit | Module for complex expr |
| src/complex-matrix.kit | Module for complex matrix |
| src/diagonal.kit | A diagonal matrix represented by its diagonal elements. |
| src/expr.kit | Module for expr |
| src/geig.kit | Result of generalized eigenvalue problem. |
| src/hessenberg.kit | Result of Hessenberg decomposition. |
| src/ldlt.kit | Result of LDLᵀ factorization. |
| src/linear-algebra.kit | Kit Linear Algebra Package — Lazy by Default |
| src/main.kit | Kit Linear Algebra Package — Lazy by Default |
| src/schur.kit | Result of Schur decomposition. |
| src/sparse-expr.kit | Module for sparse expr |
| src/sparse.kit | A single entry in a sparse matrix (COO format element) |
| src/symmetric.kit | Module for symmetric |
| src/triangular.kit | A triangular matrix with its structure type. |
| src/tridiagonal.kit | A general tridiagonal matrix with three diagonals. |
| tests/bidiagonal.test.kit | Tests for bidiagonal |
| tests/complex-expr.test.kit | Tests for complex-expr |
| tests/complex-ffi.test.kit | Tests for complex-ffi |
| tests/complex-matrix.test.kit | Tests for complex-matrix |
| tests/debug-modules/both.test.kit | Tests for both |
| tests/debug-modules/direct.test.kit | Tests for direct |
| tests/debug-modules/module-a.kit | Tests for module-a |
| tests/debug-modules/module-b.kit | Tests for module-b |
| tests/debug-modules/reexport.kit | Tests for reexport |
| tests/debug.test.kit | Tests for debug |
| tests/diagonal.test.kit | Tests for diagonal |
| tests/expr.test.kit | Tests for expr |
| tests/geig.test.kit | Tests for geig |
| tests/hessenberg.test.kit | Tests for hessenberg |
| tests/ldlt.test.kit | Tests for ldlt |
| tests/linear-algebra.test.kit | Tests for linear-algebra |
| tests/schur.test.kit | Tests for schur |
| tests/sparse-expr.test.kit | Tests for sparse-expr |
| tests/sparse-ffi.test.kit | Tests for sparse-ffi |
| tests/sparse.test.kit | Tests for sparse |
| tests/symmetric.test.kit | Tests for symmetric |
| tests/triangular.test.kit | Tests for triangular |
| tests/tridiagonal.test.kit | Tests for tridiagonal |
| tests/vec.test.kit | Tests for vec |
| zig/kit_ffi.zig | Zig FFI module for kit ffi |
| zig/linear_algebra.zig | Zig FFI module for linear algebra |
## Dependencies
No Kit package dependencies.
## Architecture
Key insight: matrix operations are lazy. `LA.of`, `LA.mul`, `LA.add`, etc. build an expression tree without performing any computation. When you call `LA.eval` or a terminal operation, the optimizer applies algebraic rewrites, then evaluates in a single pass, so no intermediate matrices are allocated.
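The lazy pipeline can be sketched in a few lines. This is a hypothetical illustration in Python, not the package's actual Kit/Zig internals: constructors build a tree, `optimize` applies rewrites (two of the package's rules are shown), and a single recursive pass produces the result.

```python
from dataclasses import dataclass
from typing import Union

@dataclass
class Lit:
    m: list          # matrix literal [[float]]

@dataclass
class Trans:
    e: "Expr"        # lazy transpose node

@dataclass
class Scale:
    alpha: float     # lazy scalar-multiplication node
    e: "Expr"

Expr = Union[Lit, Trans, Scale]

def optimize(e: Expr) -> Expr:
    # Two of the rewrite rules: double transpose and nested-scale folding
    if isinstance(e, Trans) and isinstance(e.e, Trans):
        return optimize(e.e.e)                              # Trans(Trans(A)) -> A
    if isinstance(e, Scale) and isinstance(e.e, Scale):
        return optimize(Scale(e.alpha * e.e.alpha, e.e.e))  # a*(b*A) -> (a*b)*A
    return e

def eval_expr(e: Expr) -> list:
    # Single recursive pass over the (optimized) tree
    if isinstance(e, Lit):
        return e.m
    if isinstance(e, Trans):
        return [list(col) for col in zip(*eval_expr(e.e))]
    return [[e.alpha * x for x in row] for row in eval_expr(e.e)]

tree = Scale(2.0, Scale(3.0, Lit([[1.0, 2.0], [3.0, 4.0]])))
print(eval_expr(optimize(tree)))   # [[6.0, 12.0], [18.0, 24.0]]
```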
## Features
- **Lazy by Default** - Build matrix expression trees, optimize via algebraic rewrites, evaluate once
- **No Intermediate Allocations** - Single-pass evaluation after optimization
- **14 Algebraic Rewrite Rules** - Double transpose, identity multiplication, scalar folding, and more
- **Terminal Operations** - Automatically evaluate trees before computing determinants, decompositions, etc.
- **Eager Vector Operations** - Dot product, norms, scaling under `LA.Vec.*`
- **BLAS/LAPACK Performance** - Accelerate (macOS) or OpenBLAS (Linux)
- **Type-Safe** - Compile-time type checking
## Requirements
- Kit compiler
- macOS: Built-in Accelerate framework (no installation needed)
- Linux: OpenBLAS library (see install commands below)
### Linux OpenBLAS Installation
| Distribution | Command |
|---|---|
| Ubuntu/Debian | `sudo apt install libopenblas-dev` |
| Fedora | `sudo dnf install openblas-devel` |
| Arch | `sudo pacman -S openblas` |
| Alpine | `apk add openblas-dev` |
## Installation
In your project directory, add the package as a dependency:

```sh
kit add gitlab.com/kit-lang/packages/kit-linear-algebra.git
```

Then import it in your Kit code:

```kit
import Kit.LinearAlgebra as LA
```

## Quick Start
```kit
import Kit.LinearAlgebra as LA

# --- Matrix Operations (Lazy) ---

# Build expression trees - no computation yet
a = LA.of [[1.0, 2.0], [3.0, 4.0]]
b = LA.of [[5.0, 6.0], [7.0, 8.0]]

# Compose operations (still no computation)
expr = a |> LA.mul b |> LA.add (LA.scale 2.0 (LA.identity 2))

# Evaluate (single pass, optimized)
match LA.eval-optimized expr
| Ok result -> println "Result: ${result}"
| Err e -> println "Error: ${e}"

# Terminal operations auto-evaluate
match LA.det a
| Ok d -> println "Determinant: ${d}"
| Err e -> println "Error: ${e}"

# --- Vector Operations (Eager) ---

# Vector ops are eager (immediate computation)
match LA.Vec.dot [1.0, 2.0, 3.0] [4.0, 5.0, 6.0]
| Ok d -> println "Dot: ${d}"    # 32.0
| Err e -> println "Error: ${e}"

match LA.Vec.norm [3.0, 4.0]
| Ok n -> println "Norm: ${n}"   # 5.0
| Err e -> println "Error: ${e}"
```

## API Reference
### Building Expressions
All matrix operations are lazy - they build expression trees without computation.
| Function | Description |
|---|---|
| `of(m)` | Wrap a matrix literal `[[Float]]` as an expression |
| `add(left, right)` | Element-wise addition (pipe-friendly) |
| `sub(left, right)` | Element-wise subtraction |
| `mul(left, right)` | Matrix multiplication |
| `scale(alpha, expr)` | Scalar multiplication |
| `transpose(expr)` | Matrix transpose |
| `neg(expr)` | Negate (multiply by -1) |
| `inv(expr)` | Matrix inverse |
### Lazy Constructors
These build expressions without calling FFI until evaluation.
| Function | Description |
|---|---|
| `identity(n)` | n x n identity matrix |
| `zeros(rows, cols)` | Zero matrix |
| `ones(rows, cols)` | Matrix of ones |
| `fill(rows, cols, val)` | Matrix filled with value |
| `diag-matrix(vec)` | Diagonal matrix from vector |
### Evaluation
| Function | Description |
|---|---|
| `eval(expr)` | Evaluate expression tree |
| `eval-optimized(expr)` | Apply rewrites, then evaluate |
| `optimize(expr)` | Apply algebraic rewrite rules only |
### Terminal Operations (Matrix Properties)
Terminal operations automatically evaluate the expression, then compute the result.
| Function | Description |
|---|---|
| `det(expr)` | Determinant |
| `trace(expr)` | Trace (sum of diagonal) |
| `rank(expr)` | Matrix rank via SVD |
| `cond(expr)` | Condition number |
| `norm-fro(expr)` | Frobenius norm |
| `norm-1(expr)` | 1-norm (max column sum) |
| `norm-inf(expr)` | Infinity norm (max row sum) |
| `shape(expr)` | Shape as `{rows, cols}` |
| `extract-diag(expr)` | Extract diagonal as vector |
| `is-square?(expr)` | Check if matrix is square |
| `num-rows(expr)` | Number of rows |
| `num-cols(expr)` | Number of columns |
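The three matrix norms in the table follow directly from their definitions; here is a standalone plain-Python illustration (independent of the package) for a small example matrix:

```python
import math

def norm_fro(A):
    # Frobenius norm: sqrt of the sum of squared entries
    return math.sqrt(sum(x * x for row in A for x in row))

def norm_1(A):
    # 1-norm: maximum absolute column sum
    return max(sum(abs(row[j]) for row in A) for j in range(len(A[0])))

def norm_inf(A):
    # infinity norm: maximum absolute row sum
    return max(sum(abs(x) for x in row) for row in A)

A = [[1.0, -2.0], [3.0, 4.0]]
print(norm_fro(A))   # sqrt(30) ~ 5.477
print(norm_1(A))     # 6.0
print(norm_inf(A))   # 7.0
```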
### Terminal Operations (Decompositions)
| Function | Description |
|---|---|
| `lu(expr)` | LU decomposition with pivoting |
| `qr(expr)` | QR decomposition |
| `cholesky(expr)` | Cholesky decomposition |
| `svd(expr)` | Singular value decomposition |
| `eig(expr)` | Eigenvalues/vectors (symmetric) |
| `eig-general(expr)` | Eigenvalues/vectors (general) |
### Terminal Operations (Linear Systems)
| Function | Description |
|---|---|
| `solve(expr, b)` | Solve Ax = b |
| `lstsq(expr, b)` | Least squares solution |
| `matvec(expr, x)` | Matrix-vector multiply |
### Terminal Operations (Matrix Extraction)
| Function | Description |
|---|---|
| `triu(expr)` | Upper triangular part |
| `tril(expr)` | Lower triangular part |
| `flatten(expr)` | Flatten to vector |
| `mat-reshape(expr, m, n)` | Reshape to m x n |
### Terminal Operations (Advanced)
| Function | Description |
|---|---|
| `pinv(expr)` | Moore-Penrose pseudoinverse |
| `kron(expr1, expr2)` | Kronecker product |
### Vector Operations (`LA.Vec.*`)
Vector operations are eager - they compute immediately (no expression trees).
| Function | Description |
|---|---|
| `Vec.dot(x, y)` | Dot product |
| `Vec.norm(x)` | Euclidean norm (L2) |
| `Vec.scale(alpha, x)` | Scale vector |
| `Vec.axpy(alpha, x, y)` | alpha*x + y |
| `Vec.plus(x, y)` | Vector addition |
| `Vec.minus(x, y)` | Vector subtraction |
| `Vec.cross(x, y)` | Cross product (3D only) |
| `Vec.normalize(x)` | Normalize to unit length |
| `Vec.asum(x)` | L1 norm (sum of abs values) |
| `Vec.iamax(x)` | Index of max absolute value |
| `Vec.range(start, stop)` | Range vector |
### Raw Matrix Helpers
Pure Kit helpers for raw matrices (no FFI calls).
| Function | Description |
|---|---|
| `diag(m)` | Extract diagonal of `[[Float]]` |
| `rows(m)` | Number of rows |
| `cols(m)` | Number of columns |
## Optimization Rules
The optimizer applies these algebraic rewrites:
| # | Rule | Transformation |
|---|---|---|
| 1 | Nested scales | 2 * (3 * A) -> 6 * A |
| 2 | Identity scale | 1.0 * A -> A |
| 3 | Double transpose | Trans(Trans(A)) -> A |
| 4 | Double negation | Neg(Neg(A)) -> A |
| 5 | Add-neg to sub | A + (-B) -> A - B |
| 6 | Sub-neg to add | A - (-B) -> A + B |
| 7 | Neg into scale | -(k * A) -> (-k) * A |
| 8 | Identity transpose | Trans(I) -> I |
| 9 | Identity inverse | Inv(I) -> I |
| 10 | Left identity mul | I * A -> A |
| 11 | Right identity mul | A * I -> A |
| 12 | Zero negation | Neg(Zeros) -> Zeros |
| 13 | Zero scaling | k * Zeros -> Zeros |
| 14 | Zero transpose | Trans(Zeros(m,n)) -> Zeros(n,m) |
## Examples

### Running Examples
```sh
cd kit-linear-algebra

# Basic operations (lazy matrices, eager vectors)
kit run examples/basic.kit

# Lazy expression trees (build, optimize, evaluate)
kit run examples/lazy-expressions.kit

# Least squares regression (linear and polynomial fitting)
kit run examples/least-squares.kit

# Matrix decompositions (LU, QR, Cholesky, SVD)
kit run examples/decompositions.kit

# Eigenvalue problems (symmetric matrices, spectral properties)
kit run examples/eigenvalues.kit

# 3D vector operations (cross product, projections, angles)
kit run examples/3d-vectors.kit

# 2D transformations (rotation, scaling, reflection, shear)
kit run examples/transformations.kit

# Matrix norms and condition numbers
kit run examples/matrix-norms.kit
```

### Lazy Expression Example
```kit
import Kit.LinearAlgebra as LA

# Build expression trees - no computation yet
a = LA.of [[1.0, 2.0], [3.0, 4.0]]
b = LA.of [[5.0, 6.0], [7.0, 8.0]]
c = LA.of [[0.5, 0.5], [0.5, 0.5]]

# Compose: (A * B) + 2C
expr = a |> LA.mul b |> LA.add (LA.scale 2.0 c)
println "Expression: ${show expr}"

# Optimization demo: double transpose -> identity
double-trans = LA.transpose (LA.transpose a)
println "Before: ${show double-trans}"
println "After: ${show (LA.optimize double-trans)}"

# Evaluate in one pass
match LA.eval-optimized expr
| Ok result -> println "Result: ${result}"
| Err e -> println "Error: ${e}"
```

### Solving Linear Systems
```kit
import Kit.LinearAlgebra as LA

# Solve: 2x + y = 5, x + 3y = 6
coeffs = LA.of [[2.0, 1.0], [1.0, 3.0]]
rhs = [5.0, 6.0]

match LA.solve coeffs rhs
| Ok [x, y | _] -> println "x = ${x}, y = ${y}"
| _ -> println "Error"
```

### Vector Operations
```kit
import Kit.LinearAlgebra as LA

# Cross product of 3D vectors
match LA.Vec.cross [1.0, 0.0, 0.0] [0.0, 1.0, 0.0]
| Ok k -> println "i x j = ${k}"      # [0, 0, 1]
| Err e -> println "Error: ${e}"

# Normalize a vector
match LA.Vec.normalize [3.0, 4.0]
| Ok unit -> println "Unit: ${unit}"  # [0.6, 0.8]
| Err e -> println "Error: ${e}"
```

### Eigenvalues
```kit
import Kit.LinearAlgebra as LA

sym = LA.of [[2.0, 1.0], [1.0, 2.0]]

match LA.eig sym
| Ok {values, vectors} ->
    println "Eigenvalues: ${values}"
    println "Eigenvectors: ${vectors}"
| Err e -> println "Error: ${e}"
```

## Types
### Expression Type

```kit
type Expr =
  | Lit [[Float]]       # Matrix literal
  | Add Expr Expr       # A + B
  | Sub Expr Expr       # A - B
  | Scale Float Expr    # alpha * A
  | Mul Expr Expr       # A * B
  | Trans Expr          # A^T
  | Neg Expr            # -A
  | Inv Expr            # A^(-1)
  | Identity Int        # I(n) - lazy
  | Zeros Int Int       # 0(m,n) - lazy
  | Ones Int Int        # 1(m,n) - lazy
  | Fill Int Int Float  # fill(m,n,v) - lazy
  | DiagM [Float]       # diag(v) - lazy
```

### Error Type
```kit
type LinAlgError =
  | DimensionMismatch {message: String}
  | SingularMatrix {message: String}
  | InvalidArgument {message: String}
  | AllocationFailed {message: String}
  | ConvergenceFailed {message: String}
```

## Performance Notes
- Lazy evaluation eliminates intermediate matrix allocations
- Algebraic optimization simplifies expressions before evaluation
- macOS: Uses Apple's Accelerate framework with optimized BLAS/LAPACK
- Linux: Uses OpenBLAS for high-performance linear algebra
- All operations designed for numerical stability
## License
MIT License - see LICENSE for details.
This package provides bindings to:
- Apple Accelerate (macOS)
- OpenBLAS (Linux)
## Exported Functions & Types
Matrix
A matrix represented as a list of rows, where each row is a list of floats.
Variants
Matrix {rows}
Vector
A vector represented as a list of floats.
Variants
Vector {values}
LinAlgError
Error types for linear algebra operations.
Variants
DimensionMismatch {message}
SingularMatrix {message}
InvalidArgument {message}
AllocationFailed {message}
ConvergenceFailed {message}
LUResult
Result of LU decomposition with partial pivoting.
For a matrix $A$, computes $PA = LU$ where:
- $L$ is lower triangular with unit diagonal
- $U$ is upper triangular
- $P$ is a permutation matrix (represented by pivot indices)
Variants
LUResult {lu, pivots}
EigenResult
Result of eigenvalue decomposition.
For a symmetric matrix $A$, computes $A = V \Lambda V^T$ where:
- $\Lambda$ is a diagonal matrix of eigenvalues
- $V$ is an orthogonal matrix of eigenvectors
Variants
EigenResult {values, vectors}
SVDResult
Result of Singular Value Decomposition (SVD).
For a matrix $A$, computes $A = U \Sigma V^T$ where:
- $U$ is an orthogonal matrix of left singular vectors
- $\Sigma$ is a diagonal matrix of singular values
- $V^T$ is the transpose of right singular vectors
Variants
SVDResult {u, s, vt}
ShapeResult
Shape of a matrix as (rows, cols).
Variants
ShapeResult {rows, cols}
vec-plus
Add two vectors element-wise: $\mathbf{z} = \mathbf{x} + \mathbf{y}$
[Float] -> [Float] -> Result [Float] String
vec-minus
Subtract two vectors element-wise: $\mathbf{z} = \mathbf{x} - \mathbf{y}$
[Float] -> [Float] -> Result [Float] String
diag
Extract the diagonal elements from a matrix.
Returns $[a_{11}, a_{22}, \ldots, a_{nn}]$ for a square matrix.
[[Float]] -> [Float]
rows
Get the number of rows in a raw matrix.
[[Float]] -> Int
cols
Get the number of columns in a raw matrix.
[[Float]] -> Int
Expr
Lazy expression tree module for building and optimizing matrix computations.
See LA.of, LA.plus, LA.times, etc. for building expressions, and LA.eval-optimized for evaluating them.
Vec
Eager vector operations using BLAS-backed FFI.
All operations are computed immediately (not lazy).
Operations:
- dot: Inner product $\langle \mathbf{x}, \mathbf{y} \rangle = \sum_i x_i y_i$
- norm: Euclidean norm $\|\mathbf{x}\|_2 = \sqrt{\sum_i x_i^2}$
- scale: Scalar multiplication $\alpha \mathbf{x}$
- axpy: $\alpha \mathbf{x} + \mathbf{y}$ (BLAS axpy)
- cross: Cross product $\mathbf{x} \times \mathbf{y}$ (3D vectors only)
- normalize: Unit vector $\mathbf{x} / \|\mathbf{x}\|_2$
- asum: Sum of absolute values $\sum_i |x_i|$
- iamax: Index of maximum absolute value
- range: Generate a range of floats
- plus: Vector addition $\mathbf{x} + \mathbf{y}$
- minus: Vector subtraction $\mathbf{x} - \mathbf{y}$
of
Wrap a raw matrix [[Float]] as a lazy expression.
[[Float]] -> Expr
plus
Add two matrix expressions: $A + B$.
Expr -> Expr -> Expr
minus
Subtract two matrix expressions: $A - B$.
Expr -> Expr -> Expr
times
Multiply two matrix expressions: $A \cdot B$.
Expr -> Expr -> Expr
scale
Scale a matrix expression by a scalar: $\alpha A$.
Float -> Expr -> Expr
transpose
Transpose a matrix expression: $A^T$.
Expr -> Expr
negate
Negate a matrix expression: $-A$.
Expr -> Expr
inv
Invert a matrix expression: $A^{-1}$.
Expr -> Expr
identity
Create a lazy identity matrix $I_n$.
Int -> Expr
zeros
Create a lazy zero matrix of size $m \times n$.
Int -> Int -> Expr
ones
Create a lazy ones matrix of size $m \times n$.
Int -> Int -> Expr
fill
Create a lazy matrix filled with a constant value.
Int -> Int -> Float -> Expr
diag-matrix
Create a lazy diagonal matrix from a vector.
[Float] -> Expr
eval
Evaluate a matrix expression tree.
Expr -> Result [[Float]] String
optimize
Optimize a matrix expression tree without evaluating.
Expr -> Expr
eval-optimized
Optimize and evaluate a matrix expression tree.
Applies algebraic simplifications before evaluation.
Expr -> Result [[Float]] String
det
Compute the determinant of a matrix expression.
$$\det(A) = \sum_{\sigma \in S_n} \text{sgn}(\sigma) \prod_{i=1}^{n} a_{i,\sigma(i)}$$
Expr -> Result Float String
mat-trace
Compute the trace of a matrix expression.
$$\text{tr}(A) = \sum_{i=1}^{n} a_{ii}$$
Expr -> Result Float String
rank
Compute the rank of a matrix expression.
The rank is the number of linearly independent rows (or columns).
Expr -> Result Int String
cond
Compute the condition number of a matrix expression.
$$\kappa(A) = \|A\| \cdot \|A^{-1}\|$$
A large condition number indicates the matrix is ill-conditioned.
Expr -> Result Float String
norm-fro
Compute the Frobenius norm of a matrix expression.
$$\|A\|_F = \sqrt{\sum_{i,j} |a_{ij}|^2}$$
Expr -> Result Float String
norm-1
Compute the 1-norm (maximum column sum) of a matrix expression.
$$\|A\|_1 = \max_j \sum_i |a_{ij}|$$
Expr -> Result Float String
norm-inf
Compute the infinity-norm (maximum row sum) of a matrix expression.
$$\|A\|_\infty = \max_i \sum_j |a_{ij}|$$
Expr -> Result Float String
shape
Get the shape (rows, cols) of a matrix expression.
Expr -> Result {rows: Int, cols: Int} String
extract-diag
Extract the diagonal elements from a matrix expression.
Returns $[a_{11}, a_{22}, \ldots, a_{nn}]$ for a square matrix.
Expr -> Result [Float] String
is-square?
Check if a matrix expression is square.
Expr -> Result Bool String
num-rows
Get the number of rows in a matrix expression.
Expr -> Result Int String
num-cols
Get the number of columns in a matrix expression.
Expr -> Result Int String
lu
Compute the LU decomposition with partial pivoting.
For a matrix $A$, computes $PA = LU$ where $L$ is lower triangular with unit diagonal and $U$ is upper triangular.
Expr -> Result {lu: [[Float]], pivots: [Int]} String
qr
Compute the QR decomposition.
For a matrix $A$, computes $A = QR$ where $Q$ is orthogonal and $R$ is upper triangular.
Expr -> Result {q: [[Float]], r: [[Float]]} String
cholesky
Compute the Cholesky decomposition.
For a symmetric positive-definite matrix $A$, computes $A = LL^T$ where $L$ is lower triangular.
Expr -> Result [[Float]] String
svd
Compute the Singular Value Decomposition (SVD).
For a matrix $A$, computes $A = U \Sigma V^T$ where:
- $U$ contains left singular vectors
- $\Sigma$ contains singular values (returned as vector)
- $V^T$ contains right singular vectors
Expr -> Result {u: [[Float]], s: [Float], vt: [[Float]]} String
eig
Compute eigenvalues and eigenvectors of a symmetric matrix.
For a symmetric matrix $A$, computes $A = V \Lambda V^T$ where $\Lambda$ is diagonal (eigenvalues) and $V$ is orthogonal (eigenvectors).
Expr -> Result {values: [Float], vectors: [[Float]]} String
eig-general
Compute eigenvalues and eigenvectors of a general (non-symmetric) matrix.
Returns eigenvalues and eigenvectors as Kit Complex numbers. For real eigenvalues, the imaginary part will be zero. Complex eigenvalues and their eigenvectors come in conjugate pairs.
Expr -> Result {values: [Complex], vectors: [[Complex]]} String
solve
Solve the linear system $Ax = b$ for $x$.
Uses LU decomposition with partial pivoting.
Expr -> [Float] -> Result [Float] String
lstsq
Solve the least squares problem $\min_x \|Ax - b\|_2$.
For overdetermined systems, finds the best approximation.
Expr -> [Float] -> Result [Float] String
matvec
Compute matrix-vector multiplication $y = Ax$.
Expr -> [Float] -> Result [Float] String
pinv
Compute the Moore-Penrose pseudoinverse $A^+$.
For any matrix $A$, satisfies $AA^+A = A$ and $A^+AA^+ = A^+$.
Expr -> Result [[Float]] String
kron
Compute the Kronecker product $A \otimes B$.
The result is a block matrix where each element $a_{ij}$ of $A$ is replaced by $a_{ij} B$.
Expr -> Expr -> Result [[Float]] String
triu
Extract the upper triangular part of a matrix.
Returns a matrix where all elements below the diagonal are zero.
Expr -> Result [[Float]] String
tril
Extract the lower triangular part of a matrix.
Returns a matrix where all elements above the diagonal are zero.
Expr -> Result [[Float]] String
mat-flatten
Flatten a matrix to a 1D vector (row-major order).
Expr -> Result [Float] String
mat-reshape
Reshape a matrix to new dimensions.
Total number of elements must remain the same.
Expr -> Int -> Int -> Result [[Float]] String
inverse
Compute the matrix inverse $A^{-1}$.
For a non-singular square matrix $A$, computes $A^{-1}$ such that $AA^{-1} = I$.
Expr -> Result [[Float]] String
ComplexFFI
FFI-accelerated complex matrix operations using BLAS/LAPACK.
These provide faster implementations than pure Kit for large matrices. Uses zgemm, zgesv, zgeev, zgesvd (complex double precision routines).
Operations:
- mat-mul: Complex matrix multiply via zgemm
- mat-vec-mul: Complex matrix-vector multiply via zgemv
- solve: Solve complex linear system via zgesv
- eig: Complex eigenvalue decomposition via zgeev
- svd: Complex singular value decomposition via zgesvd
- mat-scale: Complex matrix scaling via zscal
- hermitian: Hermitian (conjugate transpose)
- inverse: Complex matrix inverse via zgetrf/zgetri
- det: Complex determinant via zgetrf
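As a reference point for what zgemm computes, here is a naive complex matrix multiply sketched in Python, with the built-in `complex` type standing in for Kit's Complex. This is an illustration of the operation's definition only, not the package's FFI code:

```python
def cmat_mul(A, B):
    # Naive O(n^3) triple loop; zgemm computes the same product, much faster
    n, m, p = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

A = [[1 + 1j, 0j], [0j, 1 - 1j]]
# Squaring a diagonal matrix squares its entries: (1+1j)^2 = 2j, (1-1j)^2 = -2j
print(cmat_mul(A, A))
```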
zeros
Create a complex zero matrix of size m x n.
Int -> Int -> [[Complex]]
identity
Create a complex identity matrix of size n x n.
Int -> [[Complex]]
fill
Create a complex matrix filled with a constant value.
Int -> Int -> Complex -> [[Complex]]
from-real
Create a complex matrix from a real matrix (zero imaginary parts).
[[Float]] -> [[Complex]]
from-parts
Create a complex matrix from separate real and imaginary matrices.
[[Float]] -> [[Float]] -> [[Complex]]
diagonal
Create a diagonal complex matrix from a list of complex values.
[Complex] -> [[Complex]]
num-rows
Get the number of rows in a complex matrix.
[[Complex]] -> Int
num-cols
Get the number of columns in a complex matrix.
[[Complex]] -> Int
get
Get element at position (i, j).
Int -> Int -> [[Complex]] -> Option Complex
real-part
Get the real part of a complex matrix.
[[Complex]] -> [[Float]]
imag-part
Get the imaginary part of a complex matrix.
[[Complex]] -> [[Float]]
mat-add
Add two complex matrices.
[[Complex]] -> [[Complex]] -> Result [[Complex]] String
mat-subtract
Subtract two complex matrices.
[[Complex]] -> [[Complex]] -> Result [[Complex]] String
mat-scale
Scale a complex matrix by a complex scalar.
Complex -> [[Complex]] -> [[Complex]]
mat-scale-real
Scale a complex matrix by a real scalar.
Float -> [[Complex]] -> [[Complex]]
mat-conjugate
Element-wise conjugate of a complex matrix.
[[Complex]] -> [[Complex]]
transpose
Transpose of a complex matrix.
[[Complex]] -> [[Complex]]
hermitian
Hermitian conjugate (conjugate transpose) of a complex matrix.
[[Complex]] -> [[Complex]]
mat-negate
Negate a complex matrix.
[[Complex]] -> [[Complex]]
mat-multiply
Multiply two complex matrices.
[[Complex]] -> [[Complex]] -> Result [[Complex]] String
matvec
Complex matrix-vector multiplication.
[[Complex]] -> [Complex] -> Result [Complex] String
is-hermitian?
Check if a matrix is Hermitian (equal to its conjugate transpose).
[[Complex]] -> Float -> Bool
is-unitary?
Check if a matrix is unitary (U * U^H = I).
[[Complex]] -> Float -> Bool
frobenius-norm
Compute the Frobenius norm of a complex matrix.
[[Complex]] -> Float
mat-trace
Compute the trace (sum of diagonal elements).
[[Complex]] -> Complex
mat-to-string
Convert a complex matrix to a string representation.
[[Complex]] -> String
Matrix
A matrix represented as a list of rows, where each row is a list of floats.
Represents an $m \times n$ matrix: $$A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{bmatrix}$$
Variants
Matrix {rows}
Vector
A vector represented as a list of floats.
Represents a column vector $\mathbf{v} \in \mathbb{R}^n$: $$\mathbf{v} = \begin{bmatrix} v_1 \\ v_2 \\ \vdots \\ v_n \end{bmatrix}$$
Variants
Vector {values}
LinAlgError
Error types for linear algebra operations.
These errors can occur during matrix decompositions, inversions, and other numerical operations.
Variants
DimensionMismatch {message}
SingularMatrix {message}
InvalidArgument {message}
AllocationFailed {message}
ConvergenceFailed {message}
LUResult
Result of LU decomposition with partial pivoting.
For a matrix $A$, computes $PA = LU$ where:
- $L$ is lower triangular with unit diagonal
- $U$ is upper triangular
- $P$ is a permutation matrix (represented by pivot indices)
Variants
LUResult {lu, pivots}
EigenResult
Result of eigenvalue decomposition.
For a symmetric matrix $A$, computes $A = V \Lambda V^T$ where:
- $\Lambda$ is a diagonal matrix of eigenvalues
- $V$ is an orthogonal matrix of eigenvectors
Variants
EigenResult {values, vectors}
SVDResult
Result of Singular Value Decomposition (SVD).
For a matrix $A$, computes $A = U \Sigma V^T$ where:
- $U$ is an orthogonal matrix of left singular vectors
- $\Sigma$ is a diagonal matrix of singular values
- $V^T$ is the transpose of right singular vectors
Variants
SVDResult {u, s, vt}
ShapeResult
Shape of a matrix as (rows, cols).
Variants
ShapeResult {rows, cols}
vec-plus
Add two vectors element-wise: $\mathbf{z} = \mathbf{x} + \mathbf{y}$
[Float] -> [Float] -> Result [Float] String
vec-minus
Subtract two vectors element-wise: $\mathbf{z} = \mathbf{x} - \mathbf{y}$
[Float] -> [Float] -> Result [Float] String
diag
Extract the diagonal elements from a matrix.
Returns $[a_{11}, a_{22}, \ldots, a_{nn}]$ for a square matrix.
[[Float]] -> [Float]
rows
Get the number of rows in a raw matrix.
[[Float]] -> Int
cols
Get the number of columns in a raw matrix.
[[Float]] -> Int
Expr
Lazy expression tree module for building and optimizing matrix computations.
See LA.of, LA.plus, LA.times, etc. for building expressions, and LA.eval-optimized for evaluating them.
Vec
Eager vector operations using BLAS-backed FFI.
All operations are computed immediately (not lazy).
Operations:
- dot: Inner product $\langle \mathbf{x}, \mathbf{y} \rangle = \sum_i x_i y_i$
- norm: Euclidean norm $\|\mathbf{x}\|_2 = \sqrt{\sum_i x_i^2}$
- scale: Scalar multiplication $\alpha \mathbf{x}$
- axpy: $\alpha \mathbf{x} + \mathbf{y}$ (BLAS axpy)
- cross: Cross product $\mathbf{x} \times \mathbf{y}$ (3D vectors only)
- normalize: Unit vector $\mathbf{x} / \|\mathbf{x}\|_2$
- asum: Sum of absolute values $\sum_i |x_i|$
- iamax: Index of maximum absolute value
- range: Generate a range of floats
- plus: Vector addition $\mathbf{x} + \mathbf{y}$
- minus: Vector subtraction $\mathbf{x} - \mathbf{y}$
FFI
Direct access to FFI functions for advanced users.
These functions operate on raw [[Float]] matrices and [Float] vectors, bypassing the lazy expression system. Use the lazy API (top-level exports) for most use cases.
of
Wrap a raw matrix [[Float]] as a lazy expression.
[[Float]] -> Expr
plus
Add two matrix expressions: $A + B$.
Expr -> Expr -> Expr
minus
Subtract two matrix expressions: $A - B$.
Expr -> Expr -> Expr
times
Multiply two matrix expressions: $A \cdot B$.
Expr -> Expr -> Expr
scale
Scale a matrix expression by a scalar: $\alpha A$.
Float -> Expr -> Expr
transpose
Transpose a matrix expression: $A^T$.
Expr -> Expr
negate
Negate a matrix expression: $-A$.
Expr -> Expr
inv
Invert a matrix expression: $A^{-1}$.
Expr -> Expr
identity
Create a lazy identity matrix $I_n$.
Int -> Expr
zeros
Create a lazy zero matrix of size $m \times n$.
Int -> Int -> Expr
ones
Create a lazy ones matrix of size $m \times n$.
Int -> Int -> Expr
fill
Create a lazy matrix filled with a constant value.
Int -> Int -> Float -> Expr
diag-matrix
Create a lazy diagonal matrix from a vector.
[Float] -> Expr
eval
Evaluate a matrix expression tree.
For matrices larger than 2x2, uses FFI (LAPACK) for inverse operations.
Expr -> Result [[Float]] String
eval-optimized
Optimize and evaluate a matrix expression tree.
Applies algebraic simplifications before evaluation. For matrices larger than 2x2, uses FFI (LAPACK) for inverse operations.
Expr -> Result [[Float]] String
optimize
Optimize a matrix expression tree without evaluating.
Expr -> Expr
ComplexMatrix
Complex matrix operations using Kit's built-in Complex type.
Provides constructors, accessors, and operations for [[Complex]] matrices:
- Constructors: zeros, identity, fill, from-real, from-parts, diagonal
- Operations: mat-add, mat-subtract, mat-scale, mat-multiply, hermitian, etc.
- Properties: is-hermitian?, is-unitary?, frobenius-norm, mat-trace
```kit
CM = LA.ComplexMatrix
m = CM.identity 3
h = CM.hermitian m
```

ComplexExpr
Lazy expression tree module for complex matrix computations.
Similar to Expr but for [[Complex]] matrices, with support for:
- Hermitian conjugate (CHerm)
- Element-wise conjugate (CConj)
- Complex and real scalar multiplication
- Promotion from real matrices
```kit
CExpr = LA.ComplexExpr
a = CExpr.from-real [[1.0, 2.0], [3.0, 4.0]]
b = CExpr.identity 2
result = a |> CExpr.times b |> CExpr.hermitian |> CExpr.eval-optimized
```

Sparse
Sparse matrix support using COO and CSR formats.
Provides memory-efficient storage for matrices with many zero elements:
- COO format: Good for construction, stores (row, col, value) triplets
- CSR format: Good for computation, stores compressed sparse rows
```kit
Sparse = LA.Sparse
entries = [Sparse.entry 0 0 1.0, Sparse.entry 1 1 2.0]
coo = Sparse.from-entries 3 3 entries
result = Sparse.matvec coo [1.0, 2.0, 3.0]
```

SparseExpr
Lazy expression tree module for sparse matrix computations.
Similar to Expr but for sparse matrices (COO format), with support for:
- Addition, subtraction, scaling, transpose, negation
- Optimization rules (double transpose, scalar folding, etc.)
- Matrix-vector multiplication (matvec, csr-matvec)
SExpr = LA.SparseExpr
Sparse = LA.Sparse
entries = [Sparse.entry 0 0 1.0, Sparse.entry 1 1 2.0]
a = SExpr.of (Sparse.from-entries 2 2 entries)
b = SExpr.identity 2
result = a |> SExpr.plus b |> SExpr.eval-optimized
ComplexFFI
FFI-accelerated complex matrix operations using BLAS/LAPACK.
These provide faster implementations than pure Kit for large matrices. Uses zgemm, zgesv, zgeev, zgesvd (complex double precision routines).
Operations:
- mat-mul: Complex matrix multiply via zgemm
- mat-vec-mul: Complex matrix-vector multiply via zgemv
- solve: Solve complex linear system via zgesv
- eig: Complex eigenvalue decomposition via zgeev
- svd: Complex singular value decomposition via zgesvd
- mat-scale: Complex matrix scaling via zscal
- hermitian: Hermitian (conjugate transpose)
- inverse: Complex matrix inverse via zgetrf/zgetri
- det: Complex determinant via zgetrf
SparseFFI
FFI-accelerated sparse matrix operations.
These provide faster implementations than pure Kit for large sparse matrices. Operations work with CSR (Compressed Sparse Row) format for efficiency.
Operations:
- csr-matvec: Sparse matrix-vector multiply (SpMV)
- csr-add: Sparse matrix addition
- csr-scale: Scalar multiplication
- csr-norm: Frobenius norm
- csr-to-dense: Convert to dense matrix
Diagonal
Diagonal matrix support with O(n) operations.
Diagonal matrices have non-zero elements only on the main diagonal, enabling efficient algorithms that are O(n) or O(n²) instead of O(n³).
Operations:
- from-vector: Create diagonal from vector
- mul-vec: Matrix-vector multiply (O(n))
- solve: Linear solve (O(n))
- inverse: Matrix inverse (O(n))
- det: Determinant (O(n))
- eigenvalues: Eigenvalue decomposition (O(n))
- mul-left, mul-right: Multiply with dense matrix (O(n²))
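Since the package's examples are written in Kit, here is a language-agnostic sketch in Python of why these operations are O(n): a diagonal matrix is represented by its diagonal vector alone, so each operation touches every element exactly once. The function names mirror the Kit API but are illustrative, not the package's implementation.

```python
from math import prod

def mul_vec(d, x):
    # (D @ x)[i] = d[i] * x[i], a single O(n) pass
    return [di * xi for di, xi in zip(d, x)]

def solve(d, b):
    # D x = b  =>  x[i] = b[i] / d[i]; fails if any diagonal entry is zero
    if any(di == 0.0 for di in d):
        raise ValueError("singular diagonal matrix")
    return [bi / di for di, bi in zip(d, b)]

def det(d):
    # Determinant of a diagonal matrix: product of diagonal entries, O(n)
    return prod(d)
```

The inverse and eigenvalues follow the same pattern: invert each diagonal entry, or read the eigenvalues directly off the diagonal.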
Triangular
Triangular matrix support with O(n²) solve operations.
Triangular matrices have zeros above (lower) or below (upper) the diagonal, enabling efficient back/forward substitution algorithms.
Types:
- Upper, Lower: Standard triangular matrices
- UnitUpper, UnitLower: Triangular with implicit 1s on diagonal
Operations:
- upper, lower: Create triangular from dense
- solve: Linear solve via back/forward substitution (O(n²))
- mul-vec: Matrix-vector multiply
- inverse: Matrix inverse
- det: Determinant (O(n) - product of diagonal)
- transpose: Transpose (upper ↔ lower)
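The O(n²) solve comes from back substitution, which resolves unknowns from the last row upward. A minimal Python sketch (illustrative, not the package's Kit implementation) for the upper-triangular case:

```python
def back_substitute(u, b):
    # Solve U x = b for an upper-triangular U (given as a list of rows).
    # Each x[i] depends only on the already-computed x[i+1..n-1]: O(n^2) total.
    n = len(b)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(u[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / u[i][i]
    return x
```

Forward substitution for lower-triangular systems is the mirror image, iterating from the first row downward.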
Symmetric
Symmetric and Hermitian matrix support.
Symmetric matrices satisfy A = A^T, Hermitian matrices satisfy A = A^H. Special properties: all eigenvalues are real, eigenvectors are orthogonal.
Types:
- SymmetricMatrix: Packed storage for symmetric matrices
- HermitianMatrix: Packed storage for complex Hermitian matrices
Operations:
- from-dense: Create from dense matrix
- eig: Eigenvalue decomposition (Jacobi algorithm)
- cholesky: Cholesky factorization for positive-definite matrices
- solve: Linear system solve
- inverse: Matrix inverse
- quadratic-form: Compute x^T A x
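Packed storage keeps only one triangle, n(n+1)/2 elements instead of n². A Python sketch of one common row-major packed-upper indexing scheme (the package's actual layout may differ; `packed_index` and `get` are hypothetical names):

```python
def packed_index(i, j, n):
    # Row-major packed upper triangle: row i starts after
    # n + (n-1) + ... + (n-i+1) = i*n - i*(i-1)/2 stored elements.
    if i > j:
        i, j = j, i  # exploit symmetry: A[i,j] = A[j,i]
    return i * n - i * (i - 1) // 2 + (j - i)

def get(packed, n, i, j):
    # Read element (i, j) of an n x n symmetric matrix from packed storage.
    return packed[packed_index(i, j, n)]
```

For example, the 3x3 symmetric matrix [[1,2,3],[2,4,5],[3,5,6]] packs to [1,2,3,4,5,6].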
Tridiagonal
Tridiagonal matrix support with O(n) algorithms.
Tridiagonal matrices have non-zero elements only on the main diagonal and the diagonals immediately above and below it. This structure enables O(n) algorithms instead of O(n³).
Types:
- TridiagonalMatrix: General tridiagonal (3 vectors)
- SymTridiagonalMatrix: Symmetric tridiagonal (2 vectors)
- TridiagLUResult: LU decomposition result
Key algorithms:
- solve: Thomas algorithm for O(n) linear solve
- det: Determinant via recurrence relation (O(n))
- mul-vec: Matrix-vector multiply (O(n))
- lu: LU decomposition
- inverse: Matrix inverse via column-by-column solve
Bidiagonal
Bidiagonal matrix support with O(n) algorithms.
Bidiagonal matrices have non-zero elements only on the main diagonal and either the superdiagonal (upper) or subdiagonal (lower). Important for SVD computation.
Types:
- BidiagonalMatrix: Upper or Lower variants
Key algorithms:
- solve: O(n) via back/forward substitution
- det: O(n) product of diagonal
- mul-vec: O(n) matrix-vector multiply
- inverse: Dense inverse via column-by-column solve
- transpose: Convert between upper and lower
LDLT
LDLᵀ factorization for symmetric matrices.
Decomposes A = L * D * Lᵀ where L is unit lower triangular and D is diagonal. Works for symmetric positive definite and some symmetric indefinite matrices. More stable than Cholesky for indefinite matrices, and avoids square roots.
Types:
- LDLtResult: Factorization result containing L and D
Key operations:
- factorize: Compute the LDLᵀ factorization
- solve: Solve Ax = b using the factorization
- det: Determinant (product of D diagonal)
- inertia: Number of positive/negative/zero eigenvalues
- is-positive-definite?: Check definiteness from D
Hessenberg
Hessenberg decomposition for eigenvalue preprocessing.
Transforms a square matrix A into A = Q * H * Qᵀ where:
- H is upper Hessenberg (zeros below the first subdiagonal)
- Q is orthogonal
This decomposition preserves eigenvalues while reducing the cost of subsequent QR iterations for eigenvalue computation.
Key operations:
- decompose: Compute the Hessenberg decomposition
- get-h: Get the upper Hessenberg matrix
- get-q: Get the orthogonal matrix
- is-hessenberg?: Verify H has correct structure
- is-orthogonal?: Verify Q^T * Q = I
Schur
Schur decomposition for eigenvalue computation.
Transforms a square matrix A into A = Z * T * Zᵀ where:
- T is quasi-upper triangular (Schur form)
- Z is orthogonal
The Schur form has 1x1 blocks for real eigenvalues and 2x2 blocks for complex conjugate eigenvalue pairs.
Key operations:
- decompose: Compute the Schur decomposition
- get-t: Get the quasi-upper triangular matrix
- get-z: Get the orthogonal matrix
- eigenvalues: Extract eigenvalues from Schur form
- det: Determinant from Schur form
GEig
Generalized eigenvalue problem solver.
Solves Ax = λBx where eigenvalues are represented as λ = α/β pairs to handle both regular (β ≠ 0) and infinite (β = 0) eigenvalues.
Key operations:
- solve: Solve the generalized eigenvalue problem
- eigenvalues: Get eigenvalues as α/β pairs
- eigenvalues-complex: Get eigenvalues as complex numbers
- has-infinite-eigenvalue?: Check for infinite eigenvalues
- count-finite: Count finite eigenvalues
Entry
A single entry in a sparse matrix (COO format element)
Variants
Entry {row, col, value}
SparseMatrix
Sparse matrix in COO (Coordinate) format.
Best for:
- Construction and incremental building
- Random access modification
- Converting between formats
Storage: O(nnz) where nnz = number of non-zeros
Variants
SparseMatrix {rows, cols, entries}
CSRMatrix
Sparse matrix in CSR (Compressed Sparse Row) format.
Best for:
- Matrix-vector multiplication (SpMV)
- Row slicing
- Arithmetic operations
Storage: O(nnz + rows) where nnz = number of non-zeros
Structure:
- values: non-zero values in row-major order
- col-indices: column index for each value
- row-ptrs: index into values where each row starts (length = rows + 1)
Variants
CSRMatrix {rows, cols, values, col-indices, row-ptrs}
entry
Create a sparse matrix entry.
Int -> Int -> Float -> Entry
entry-row
Get the row index of an entry.
Entry -> Int
entry-col
Get the column index of an entry.
Entry -> Int
entry-value
Get the value of an entry.
Entry -> Float
empty
Create an empty sparse matrix.
Int -> Int -> SparseMatrix
from-entries
Create a sparse matrix from a list of entries.
Entries outside the matrix bounds are filtered out. Duplicate entries at the same position are summed.
Int -> Int -> [Entry] -> SparseMatrix
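The filter-and-sum semantics of from-entries can be sketched in a few lines of Python (illustrative only; the Kit implementation operates on Entry values rather than plain tuples):

```python
def from_entries(rows, cols, entries):
    # entries: (row, col, value) triplets, i.e. COO format.
    acc = {}
    for r, c, v in entries:
        if 0 <= r < rows and 0 <= c < cols:          # drop out-of-bounds entries
            acc[(r, c)] = acc.get((r, c), 0.0) + v   # sum duplicates at same position
    return sorted((r, c, v) for (r, c), v in acc.items())
```

Summing duplicates is the conventional COO assembly rule: it lets callers accumulate contributions (e.g. from finite-element stencils) without pre-merging.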
from-dense
Create a sparse matrix from a dense matrix.
Only stores non-zero entries (values with |v| > epsilon).
[[Float]] -> SparseMatrix
identity
Create a sparse identity matrix.
Int -> SparseMatrix
diagonal
Create a sparse diagonal matrix from a vector.
[Float] -> SparseMatrix
num-rows
Get the number of rows.
SparseMatrix -> Int
num-cols
Get the number of columns.
SparseMatrix -> Int
nnz
Get the number of non-zero entries.
SparseMatrix -> Int
sparsity
Get the sparsity ratio (nnz / total elements).
SparseMatrix -> Float
get
Get an element at position (i, j).
Returns 0.0 if the element is not stored (sparse zero).
Int -> Int -> SparseMatrix -> Float
to-dense
Convert a sparse matrix to dense format.
SparseMatrix -> [[Float]]
to-csr
Convert COO format to CSR format.
CSR is more efficient for matrix-vector multiplication.
SparseMatrix -> CSRMatrix
from-csr
Convert CSR format back to COO format.
CSRMatrix -> SparseMatrix
scale
Scale a sparse matrix by a scalar.
Float -> SparseMatrix -> SparseMatrix
transpose
Transpose a sparse matrix.
SparseMatrix -> SparseMatrix
add
Add two sparse matrices.
Matrices must have the same dimensions.
SparseMatrix -> SparseMatrix -> Result SparseMatrix String
subtract
Subtract two sparse matrices.
SparseMatrix -> SparseMatrix -> Result SparseMatrix String
matvec
Sparse matrix-vector multiplication (SpMV).
Computes y = A * x where A is sparse and x is dense.
SparseMatrix -> [Float] -> Result [Float] String
csr-matvec
CSR matrix-vector multiplication (more efficient for repeated SpMV).
CSRMatrix -> [Float] -> Result [Float] String
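CSR SpMV is efficient because row-ptrs gives each row's slice of the values array directly. A self-contained Python sketch of the kernel (the Kit version wraps the same loop behind the CSRMatrix type):

```python
def csr_matvec(values, col_indices, row_ptrs, x):
    # y[i] = sum over row i's stored entries of values[k] * x[col_indices[k]].
    # Row i owns the index range row_ptrs[i] .. row_ptrs[i+1]-1.
    y = []
    for i in range(len(row_ptrs) - 1):
        y.append(sum(values[k] * x[col_indices[k]]
                     for k in range(row_ptrs[i], row_ptrs[i + 1])))
    return y
```

For the 2x3 matrix [[1,0,2],[0,3,0]]: values = [1,2,3], col-indices = [0,2,1], row-ptrs = [0,2,3].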
HessenbergResult
Result of Hessenberg decomposition.
- h: Upper Hessenberg matrix
- q: Orthogonal matrix such that A = Q * H * Q^T
- n: Size of the matrix
Variants
HessenbergResult {h, q, n}
decompose
Compute the Hessenberg decomposition of a square matrix.
Given a square matrix A, computes orthogonal Q and upper Hessenberg H such that A = Q * H * Q^T.
The algorithm uses Householder reflections to zero out elements below the first subdiagonal, column by column.
[[Float]] -> Result HessenbergResult String
get-h
Get the upper Hessenberg matrix H from the decomposition.
HessenbergResult -> [[Float]]
get-q
Get the orthogonal matrix Q from the decomposition.
HessenbergResult -> [[Float]]
size
Get the size of the decomposed matrix.
HessenbergResult -> Int
is-hessenberg?
Verify that H is in upper Hessenberg form (zeros below first subdiagonal).
HessenbergResult -> Bool
is-orthogonal?
Verify that Q is orthogonal (Q^T * Q ≈ I).
HessenbergResult -> Bool
reconstruct
Reconstruct the original matrix as Q * H * Q^T.
HessenbergResult -> [[Float]]
subdiagonal
Get the subdiagonal elements of H (useful for eigenvalue algorithms).
HessenbergResult -> [Float]
diagonal
Get the diagonal elements of H.
HessenbergResult -> [Float]
trace
Compute the trace of H (equals trace of A since similar matrices have same trace).
HessenbergResult -> Float
BidiagonalMatrix
A bidiagonal matrix with non-zero elements on main diagonal and one off-diagonal.
- Upper: Main diagonal and superdiagonal (above main)
- Lower: Main diagonal and subdiagonal (below main)
Both variants store:
- diag: Main diagonal elements (n elements)
- off: Off-diagonal elements (n-1 elements)
Variants
Upper {diag, off}
Lower {diag, off}
upper
Create an upper bidiagonal matrix from main diagonal and superdiagonal.
[Float] -> [Float] -> Result BidiagonalMatrix String
lower
Create a lower bidiagonal matrix from main diagonal and subdiagonal.
[Float] -> [Float] -> Result BidiagonalMatrix String
from-dense
Create a bidiagonal matrix from a dense matrix, extracting the relevant diagonals. Determines upper vs lower based on which off-diagonal has larger Frobenius norm.
[[Float]] -> BidiagonalMatrix
identity
Create an upper bidiagonal identity matrix.
Int -> BidiagonalMatrix
zeros
Create a zero bidiagonal matrix.
Int -> Bool -> BidiagonalMatrix
size
Get the size (dimension) of the bidiagonal matrix.
BidiagonalMatrix -> Int
is-upper?
Check if the matrix is upper bidiagonal.
BidiagonalMatrix -> Bool
is-lower?
Check if the matrix is lower bidiagonal.
BidiagonalMatrix -> Bool
diagonal
Get the main diagonal.
BidiagonalMatrix -> [Float]
off-diagonal
Get the off-diagonal.
BidiagonalMatrix -> [Float]
get
Get an element at position (i, j) from a bidiagonal matrix. Returns 0 for elements outside the two diagonals.
Int -> Int -> BidiagonalMatrix -> Option Float
to-dense
Convert a bidiagonal matrix to dense form.
BidiagonalMatrix -> [[Float]]
transpose
Transpose a bidiagonal matrix. Upper becomes Lower and vice versa.
BidiagonalMatrix -> BidiagonalMatrix
mul-vec
Multiply a bidiagonal matrix by a vector: $y = B \cdot x$
Complexity: O(n).
BidiagonalMatrix -> [Float] -> Result [Float] String
solve
Solve the linear system $Bx = b$ via back/forward substitution.
- Upper bidiagonal: back substitution
- Lower bidiagonal: forward substitution
Complexity: O(n).
BidiagonalMatrix -> [Float] -> Result [Float] String
det
Compute the determinant of a bidiagonal matrix.
The determinant of a bidiagonal matrix is the product of diagonal elements.
Complexity: O(n).
BidiagonalMatrix -> Float
logdet
Compute the log-determinant of a bidiagonal matrix.
More numerically stable for matrices with large/small determinants.
BidiagonalMatrix -> Float
trace
Compute the trace of a bidiagonal matrix.
BidiagonalMatrix -> Float
is-invertible?
Check if the bidiagonal matrix is invertible. A bidiagonal matrix is invertible iff all diagonal elements are non-zero.
BidiagonalMatrix -> Bool
scale
Scale a bidiagonal matrix by a scalar.
Float -> BidiagonalMatrix -> BidiagonalMatrix
mat-add
Add two bidiagonal matrices of the same type.
BidiagonalMatrix -> BidiagonalMatrix -> Result BidiagonalMatrix String
subtract
Subtract two bidiagonal matrices of the same type.
BidiagonalMatrix -> BidiagonalMatrix -> Result BidiagonalMatrix String
inverse
Compute the inverse of a bidiagonal matrix.
Uses solve to compute B^-1 column by column. Note: The inverse of a bidiagonal matrix is generally upper/lower triangular.
BidiagonalMatrix -> Result [[Float]] String
mat-mul
Multiply two bidiagonal matrices.
Note: The product of two bidiagonal matrices is generally tridiagonal. This returns a dense matrix.
BidiagonalMatrix -> BidiagonalMatrix -> Result [[Float]] String
singular-values-squared
Compute the squared singular values of a bidiagonal matrix.
Returns the diagonal of B^T B (for upper) or B B^T (for lower), which are the squared singular values.
BidiagonalMatrix -> [Float]
LDLtResult
Result of LDLᵀ factorization.
- l: Unit lower triangular matrix (stored as dense, but only lower part is meaningful)
- d: Diagonal entries of D
- n: Size of the matrix
Variants
LDLtResult {l, d, n}
factorize
Compute the LDLᵀ factorization of a symmetric matrix.
Given a symmetric matrix A, computes L (unit lower triangular) and D (diagonal) such that A = L * D * Lᵀ.
The algorithm used is the standard LDLᵀ without pivoting, which works for:
- Symmetric positive definite matrices (always succeeds)
- Some symmetric indefinite matrices (may fail if a zero pivot is encountered)
For guaranteed stability with indefinite matrices, use Bunch-Kaufman pivoting (not implemented in this version).
[[Float]] -> Result LDLtResult String
solve
Solve the linear system Ax = b using the LDLᵀ factorization.
Given A = LDLᵀ, solves:
1. Ly = b (forward substitution)
2. Dz = y (diagonal solve)
3. Lᵀx = z (back substitution)
LDLtResult -> [Float] -> Result [Float] String
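The unpivoted factorization and the three-step solve can be sketched together in Python (an illustrative implementation of the standard recurrences, not the package's Kit code):

```python
def ldlt(a):
    # Unpivoted LDL^T: d[j] = a[j][j] - sum L[j][k]^2 d[k],
    # L[i][j] = (a[i][j] - sum L[i][k] L[j][k] d[k]) / d[j] for i > j.
    n = len(a)
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    d = [0.0] * n
    for j in range(n):
        d[j] = a[j][j] - sum(L[j][k] ** 2 * d[k] for k in range(j))
        if d[j] == 0.0:
            raise ValueError("zero pivot: matrix needs pivoting")
        for i in range(j + 1, n):
            L[i][j] = (a[i][j] - sum(L[i][k] * L[j][k] * d[k]
                                     for k in range(j))) / d[j]
    return L, d

def ldlt_solve(L, d, b):
    n = len(b)
    y = [0.0] * n                               # 1. L y = b (forward)
    for i in range(n):
        y[i] = b[i] - sum(L[i][k] * y[k] for k in range(i))
    z = [yi / di for yi, di in zip(y, d)]       # 2. D z = y (diagonal)
    x = [0.0] * n                               # 3. L^T x = z (backward)
    for i in range(n - 1, -1, -1):
        x[i] = z[i] - sum(L[k][i] * x[k] for k in range(i + 1, n))
    return x
```

Note there are no square roots anywhere, which is the advantage over Cholesky mentioned above.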
get-l
Get the L matrix from the factorization.
LDLtResult -> [[Float]]
get-d
Get the D diagonal from the factorization.
LDLtResult -> [Float]
size
Get the size of the factorized matrix.
LDLtResult -> Int
det
Compute the determinant using the LDLᵀ factorization.
det(A) = det(L) * det(D) * det(Lᵀ) = 1 * prod(d[i]) * 1 = prod(d[i])
LDLtResult -> Float
logdet
Compute the log of the absolute determinant.
LDLtResult -> Float
is-positive-definite?
Check if the matrix is positive definite based on the factorization.
A symmetric matrix is positive definite iff all diagonal elements of D are positive.
LDLtResult -> Bool
is-negative-definite?
Check if the matrix is negative definite based on the factorization.
LDLtResult -> Bool
is-indefinite?
Check if the matrix is indefinite based on the factorization.
LDLtResult -> Bool
inertia
Compute the inertia of the matrix (number of positive, negative, zero eigenvalues).
Returns a tuple {positive: Int, negative: Int, zero: Int}.
LDLtResult -> {positive: Int, negative: Int, zero: Int}
reconstruct
Reconstruct the original matrix from the LDLᵀ factorization.
Computes A = L * D * Lᵀ.
LDLtResult -> [[Float]]
inverse
Compute the inverse of the original matrix using the LDLᵀ factorization.
Solves A * A⁻¹ = I column by column.
LDLtResult -> Result [[Float]] String
TridiagonalMatrix
A general tridiagonal matrix with three diagonals.
- lower: Sub-diagonal elements (n-1 elements, a_2 to a_n)
- main: Main diagonal elements (n elements, b_1 to b_n)
- upper: Super-diagonal elements (n-1 elements, c_1 to c_{n-1})
Variants
Tridiagonal {lower, main, upper}
SymTridiagonalMatrix
A symmetric tridiagonal matrix where lower = upper.
- main: Main diagonal elements (n elements)
- off: Off-diagonal elements (n-1 elements, same above and below)
Variants
SymTridiagonal {main, off}
from-diagonals
Create a tridiagonal matrix from three diagonal vectors.
The lower and upper diagonals must have length n-1 where n is the length of the main diagonal.
[Float] -> [Float] -> [Float] -> Result TridiagonalMatrix String
from-dense
Create a tridiagonal matrix from a dense matrix. Extracts the three diagonals, ignoring other elements.
[[Float]] -> TridiagonalMatrix
identity
Create a tridiagonal identity matrix.
Int -> TridiagonalMatrix
zeros
Create a tridiagonal zero matrix.
Int -> TridiagonalMatrix
constant
Create a constant tridiagonal matrix.
Int -> Float -> Float -> Float -> TridiagonalMatrix
sym-from-diagonals
Create a symmetric tridiagonal matrix from two diagonal vectors.
[Float] -> [Float] -> Result SymTridiagonalMatrix String
sym-from-dense
Create a symmetric tridiagonal matrix from a dense symmetric matrix.
[[Float]] -> SymTridiagonalMatrix
to-symmetric
Create a symmetric tridiagonal matrix from a general tridiagonal. Uses the upper diagonal as the off-diagonal.
TridiagonalMatrix -> SymTridiagonalMatrix
from-symmetric
Create a general tridiagonal from a symmetric tridiagonal.
SymTridiagonalMatrix -> TridiagonalMatrix
size
Get the size (dimension) of the tridiagonal matrix.
TridiagonalMatrix -> Int
sym-size
Get the size of a symmetric tridiagonal matrix.
SymTridiagonalMatrix -> Int
get
Get an element at position (i, j) from a tridiagonal matrix. Returns 0 for elements outside the three diagonals.
Int -> Int -> TridiagonalMatrix -> Option Float
sym-get
Get an element from a symmetric tridiagonal matrix.
Int -> Int -> SymTridiagonalMatrix -> Option Float
main-diagonal
Get the main diagonal.
TridiagonalMatrix -> [Float]
lower-diagonal
Get the lower diagonal.
TridiagonalMatrix -> [Float]
upper-diagonal
Get the upper diagonal.
TridiagonalMatrix -> [Float]
to-dense
Convert a tridiagonal matrix to dense form.
TridiagonalMatrix -> [[Float]]
sym-to-dense
Convert a symmetric tridiagonal matrix to dense form.
SymTridiagonalMatrix -> [[Float]]
mul-vec
Multiply a tridiagonal matrix by a vector: $y = T \cdot x$
Complexity: O(n) instead of O(n²) for dense matrices.
TridiagonalMatrix -> [Float] -> Result [Float] String
sym-mul-vec
Multiply a symmetric tridiagonal matrix by a vector.
SymTridiagonalMatrix -> [Float] -> Result [Float] String
solve
Solve the linear system $Tx = b$ using the Thomas algorithm.
The Thomas algorithm is a specialized form of Gaussian elimination for tridiagonal systems with O(n) complexity.
TridiagonalMatrix -> [Float] -> Result [Float] String
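The Thomas algorithm makes one forward elimination sweep followed by one back-substitution sweep, each O(n). A Python sketch (illustrative; it assumes a well-conditioned system, since the Thomas algorithm does not pivot):

```python
def thomas(lower, main, upper, b):
    # Solve T x = b for tridiagonal T in O(n).
    # lower/upper each hold n-1 entries, main and b hold n entries.
    n = len(main)
    c, d, m = list(upper), list(b), list(main)
    for i in range(1, n):                 # forward elimination
        w = lower[i - 1] / m[i - 1]
        m[i] -= w * c[i - 1]
        d[i] -= w * d[i - 1]
    x = [0.0] * n                         # back substitution
    x[-1] = d[-1] / m[-1]
    for i in range(n - 2, -1, -1):
        x[i] = (d[i] - c[i] * x[i + 1]) / m[i]
    return x
```

Stability without pivoting is guaranteed when the matrix is diagonally dominant (see is-diagonally-dominant? below).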
sym-solve
Solve a symmetric tridiagonal system.
SymTridiagonalMatrix -> [Float] -> Result [Float] String
det
Compute the determinant of a tridiagonal matrix using the recurrence relation.
For a tridiagonal matrix: det(T_n) = b_n * det(T_{n-1}) - a_n * c_{n-1} * det(T_{n-2})
Complexity: O(n).
TridiagonalMatrix -> Float
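The recurrence above needs only the two previous continuant values, so it runs in O(n) time and O(1) extra space. A Python sketch (illustrative, not the Kit implementation):

```python
def tridiag_det(lower, main, upper):
    # Continuant recurrence: f_i = b_i * f_{i-1} - a_i * c_{i-1} * f_{i-2},
    # with f_0 = 1 and f_1 = b_1. lower/upper hold n-1 entries each.
    f_prev2, f_prev1 = 1.0, main[0]
    for i in range(1, len(main)):
        f = main[i] * f_prev1 - lower[i - 1] * upper[i - 1] * f_prev2
        f_prev2, f_prev1 = f_prev1, f
    return f_prev1
```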
sym-det
Compute the determinant of a symmetric tridiagonal matrix.
SymTridiagonalMatrix -> Float
trace
Compute the trace of a tridiagonal matrix.
TridiagonalMatrix -> Float
sym-trace
Compute the trace of a symmetric tridiagonal matrix.
SymTridiagonalMatrix -> Float
is-diagonally-dominant?
Check if the tridiagonal matrix is diagonally dominant. A matrix is diagonally dominant if |b_i| >= |a_i| + |c_i| for all i.
TridiagonalMatrix -> Bool
mat-add
Add two tridiagonal matrices.
TridiagonalMatrix -> TridiagonalMatrix -> Result TridiagonalMatrix String
subtract
Subtract two tridiagonal matrices.
TridiagonalMatrix -> TridiagonalMatrix -> Result TridiagonalMatrix String
scale
Scale a tridiagonal matrix by a scalar.
Float -> TridiagonalMatrix -> TridiagonalMatrix
sym-scale
Scale a symmetric tridiagonal matrix.
Float -> SymTridiagonalMatrix -> SymTridiagonalMatrix
TridiagLUResult
LU factorization result for tridiagonal matrices. L is unit lower bidiagonal, U is upper bidiagonal.
Variants
TridiagLUResult {l-diag, u-main, u-upper}lu
Compute LU decomposition of a tridiagonal matrix without pivoting.
Returns L (unit lower bidiagonal) and U (upper bidiagonal) such that T = LU.
TridiagonalMatrix -> Result TridiagLUResult String
inverse
Compute the inverse of a tridiagonal matrix.
Uses the Thomas algorithm to solve T*X = I column by column. Note: The inverse of a tridiagonal matrix is generally dense.
TridiagonalMatrix -> Result [[Float]] String
GeneralizedEigenResult
Result of generalized eigenvalue problem.
- alphas: Numerators of eigenvalues (complex: {real, imag})
- betas: Denominators of eigenvalues (real, non-negative)
- n: Size of the matrices
Variants
GeneralizedEigenResult {alphas, betas, n}
solve
Solve the generalized eigenvalue problem Ax = λBx.
For nonsingular B, computes eigenvalues of B^(-1) * A. For singular B, some eigenvalues may be infinite (β = 0).
[[Float]] -> [[Float]] -> Result GeneralizedEigenResult String
eigenvalues
Get eigenvalues as α/β pairs.
GeneralizedEigenResult -> [{alpha: {real: Float, imag: Float}, beta: Float}]
eigenvalues-complex
Get eigenvalues as complex numbers (λ = α/β).
GeneralizedEigenResult -> [{real: Float, imag: Float}]
size
Get the size of the problem.
GeneralizedEigenResult -> Int
has-infinite-eigenvalue?
Check if any eigenvalue is infinite.
GeneralizedEigenResult -> Bool
count-finite
Count finite eigenvalues.
GeneralizedEigenResult -> Int
SparseData
Internal sparse data: stores (row, col, value) triples as nested lists. Format: [[row, col, value], ...] encoded as [[Float]] for simplicity.
Variants
SparseData {rows, cols, data}
SparseExpr
A lazy expression tree for sparse matrix computations.
Expressions are built without performing any computation. They can be optimized via algebraic rewrite rules and then evaluated in a single pass.
Variants
SLit {SparseData}
SAdd {SparseExpr, SparseExpr}
SSub {SparseExpr, SparseExpr}
SScale {Float, SparseExpr}
STrans {SparseExpr}
SNeg {SparseExpr}
SIdentity {Int}
SZeros {Int, Int}
SDiagM {_0}
SFromDense {_0}
of
Wrap a sparse matrix literal as an expression.
Sparse.SparseMatrix -> SparseExpr
SExpr.of sparse-matrix
from-dense
Create a sparse expression from a dense matrix.
[[Float]] -> SparseExpr
SExpr.from-dense [[1.0, 0.0], [0.0, 2.0]]
plus
Add two expressions. Pipe-friendly: piped value becomes left operand.
SparseExpr -> SparseExpr -> SparseExpr
SExpr.of a |> SExpr.plus (SExpr.of b) # a + b
minus
Subtract right from left. Pipe-friendly: piped value becomes left operand.
SparseExpr -> SparseExpr -> SparseExpr
SExpr.of a |> SExpr.minus (SExpr.of b) # a - b
scale
Scale an expression by a scalar.
Float -> SparseExpr -> SparseExpr
SExpr.scale 2.0 (SExpr.of a)
transpose
Transpose an expression.
SparseExpr -> SparseExpr
SExpr.of a |> SExpr.transpose
negate
Negate an expression (multiply by -1).
SparseExpr -> SparseExpr
SExpr.negate (SExpr.of a)
identity
Create a lazy sparse identity matrix I(n). No allocation until eval.
Int -> SparseExpr
SExpr.identity 3 # 3x3 sparse identity matrix (lazy)
zeros
Create a lazy sparse zero matrix. No allocation until eval.
Int -> Int -> SparseExpr
SExpr.zeros 2 3 # 2x3 sparse zero matrix (lazy)
diag-matrix
Create a lazy sparse diagonal matrix from a vector. No allocation until eval.
[Float] -> SparseExpr
SExpr.diag-matrix [1.0, 2.0, 3.0] # 3x3 sparse diagonal matrix (lazy)
optimize
Apply algebraic rewrite rules to simplify a sparse expression tree.
Rules applied (bottom-up):
- Scalar folding: SScale a (SScale b x) -> SScale (a*b) x
- Double transpose: STrans (STrans x) -> x
- Double negation: SNeg (SNeg x) -> x
- Identity scale: SScale 1.0 x -> x
- Add-neg to sub: SAdd x (SNeg y) -> SSub x y
- Sub-neg to add: SSub x (SNeg y) -> SAdd x y
- Neg into scale: SNeg (SScale a x) -> SScale (-a) x
- Identity transpose: STrans (SIdentity n) -> SIdentity n
- Zero operations: various rules for zeros
SparseExpr -> SparseExpr
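The bottom-up rewrite strategy (optimize children first, then try rules at the current node) can be sketched in Python with a handful of the rules above. The tuple tags are hypothetical stand-ins for the SScale/STrans/SNeg constructors:

```python
def optimize(e):
    # e is a nested tuple: ("lit", m) | ("scale", a, x) | ("trans", x) | ("neg", x)
    if e[0] in ("scale", "trans", "neg"):
        e = e[:-1] + (optimize(e[-1]),)           # rewrite children first
    if e[0] == "scale" and e[2][0] == "scale":    # a*(b*x) -> (a*b)*x
        return optimize(("scale", e[1] * e[2][1], e[2][2]))
    if e[0] == "trans" and e[1][0] == "trans":    # (x^T)^T -> x
        return e[1][1]
    if e[0] == "neg" and e[1][0] == "neg":        # -(-x) -> x
        return e[1][1]
    if e[0] == "scale" and e[1] == 1.0:           # 1.0 * x -> x
        return e[2]
    return e
```

Because scalar folding re-runs optimize on its result, chained rules compose: scaling by 2.0 then 0.5 folds to a scale by 1.0, which the identity-scale rule then removes entirely.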
eval
Evaluate a sparse expression tree by walking it and calling the underlying sparse matrix operations.
Returns the resulting sparse matrix (COO format), or an error if a dimension mismatch or other linear algebra error occurs.
SparseExpr -> Result Sparse.SparseMatrix String
eval-optimized
Optimize and then evaluate a sparse expression tree.
Applies algebraic rewrite rules before evaluation to eliminate unnecessary intermediate computations.
SparseExpr -> Result Sparse.SparseMatrix String
eval-lazy
Create a lazy (deferred) evaluation of a sparse expression tree.
The expression is optimized and evaluated only when the result is needed.
SparseExpr -> Result Sparse.SparseMatrix String
eval-memo
Create a memoized (cached) evaluation of a sparse expression tree.
The expression is optimized and evaluated once, then the result is cached.
SparseExpr -> Result Sparse.SparseMatrix String
to-dense
Convert a sparse expression result to a dense matrix.
Sparse.SparseMatrix -> [[Float]]
nnz
Get the number of non-zeros in a sparse matrix.
Sparse.SparseMatrix -> Int
sparsity
Get the sparsity ratio of a sparse matrix.
Sparse.SparseMatrix -> Float
matvec
Evaluate a sparse expression and multiply by a vector.
SparseExpr -> [Float] -> Result [Float] String
csr-matvec
Evaluate a sparse expression, convert to CSR, and multiply by a vector.
More efficient for repeated matrix-vector products with the same matrix.
SparseExpr -> [Float] -> Result [Float] String
SymmetricMatrix
A symmetric matrix stored in packed form.
- UpperSymmetric: Stores upper triangular part (row-major packed)
- LowerSymmetric: Stores lower triangular part (row-major packed)
The packed storage uses $\frac{n(n+1)}{2}$ elements instead of $n^2$.
Variants
UpperSymmetric {size, data}
LowerSymmetric {size, data}
HermitianMatrix
A Hermitian matrix (complex symmetric under conjugate transpose).
Hermitian matrices have real eigenvalues and orthogonal eigenvectors.
Variants
UpperHermitian {size, data}
LowerHermitian {size, data}
from-dense
Create a symmetric matrix from a dense matrix. Extracts the upper triangular part and stores in packed form.
[[Float]] -> SymmetricMatrix
from-dense-lower
Create a symmetric matrix from a dense matrix, storing lower triangle.
[[Float]] -> SymmetricMatrix
from-diagonal
Create a symmetric matrix from diagonal elements. All off-diagonal elements are zero.
[Float] -> SymmetricMatrix
identity
Create a symmetric identity matrix.
Int -> SymmetricMatrix
zeros
Create a symmetric zero matrix.
Int -> SymmetricMatrix
fill
Create a symmetric matrix filled with a constant value.
Int -> Float -> SymmetricMatrix
hermitian-from-dense
Create a Hermitian matrix from a dense complex matrix. Extracts the upper triangular part.
[[Complex]] -> HermitianMatrix
hermitian-identity
Create a Hermitian identity matrix.
Int -> HermitianMatrix
hermitian-from-real
Create a Hermitian matrix from a real symmetric matrix.
SymmetricMatrix -> HermitianMatrix
size
Get the size (dimension) of the symmetric matrix.
SymmetricMatrix -> Int
hermitian-size
Get the size of a Hermitian matrix.
HermitianMatrix -> Int
get
Get an element at position (i, j) from a symmetric matrix. Exploits symmetry: A[i,j] = A[j,i].
Int -> Int -> SymmetricMatrix -> Option Float
hermitian-get
Get an element from a Hermitian matrix. For off-diagonal access with i > j, returns conjugate.
Int -> Int -> HermitianMatrix -> Option Complex
to-dense
Convert a symmetric matrix to dense form.
SymmetricMatrix -> [[Float]]
hermitian-to-dense
Convert a Hermitian matrix to dense form.
HermitianMatrix -> [[Complex]]
diagonal
Extract the diagonal elements.
SymmetricMatrix -> [Float]
hermitian-diagonal
Extract the diagonal of a Hermitian matrix (always real for Hermitian).
HermitianMatrix -> [Float]
trace
Compute the trace of a symmetric matrix.
SymmetricMatrix -> Float
hermitian-trace
Compute the trace of a Hermitian matrix.
HermitianMatrix -> Float
is-positive-definite?
Check if a symmetric matrix is positive definite. A matrix is positive definite if all eigenvalues are positive.
SymmetricMatrix -> Bool
is-positive-semidefinite?
Check if the matrix is positive semi-definite (eigenvalues >= 0).
SymmetricMatrix -> Bool
mul-vec
Multiply a symmetric matrix by a vector: $y = A \cdot x$
Exploits symmetry: only reads each stored element once.
SymmetricMatrix -> [Float] -> Result [Float] String
hermitian-mul-vec
Multiply a Hermitian matrix by a complex vector.
HermitianMatrix -> [Complex] -> Result [Complex] String
mat-add
Add two symmetric matrices.
SymmetricMatrix -> SymmetricMatrix -> Result SymmetricMatrix String
subtract
Subtract two symmetric matrices.
SymmetricMatrix -> SymmetricMatrix -> Result SymmetricMatrix String
scale
Scale a symmetric matrix by a scalar.
Float -> SymmetricMatrix -> SymmetricMatrix
hermitian-scale
Scale a Hermitian matrix by a real scalar.
Float -> HermitianMatrix -> HermitianMatrix
EigResult
Result type for eigenvalue decomposition.
Variants
EigResult {values, vectors}
eig
Compute eigenvalues and eigenvectors of a symmetric matrix.
For a symmetric matrix, all eigenvalues are real and eigenvectors are orthogonal.
Uses the Jacobi eigenvalue algorithm for pure Kit implementation. For larger matrices, use LA.FFI.eig-symmetric for LAPACK acceleration.
Returns eigenvalues in ascending order.
SymmetricMatrix -> Result EigResult String
eigenvalues
Compute only the eigenvalues of a symmetric matrix.
SymmetricMatrix -> Result [Float] String
cholesky
Compute the Cholesky decomposition of a symmetric positive-definite matrix.
Returns the lower triangular matrix $L$ such that $A = LL^T$. Uses the Cholesky-Banachiewicz algorithm.
For larger matrices, use LA.FFI.cholesky for LAPACK acceleration.
SymmetricMatrix -> Result [[Float]] String
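The Cholesky-Banachiewicz variant builds L one row at a time, so each element is computed from already-finished entries. A Python sketch (illustrative; the Kit version works on packed storage rather than dense rows):

```python
from math import sqrt

def cholesky(a):
    # Row-by-row Cholesky-Banachiewicz: returns lower-triangular L with A = L L^T.
    n = len(a)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                pivot = a[i][i] - s
                if pivot <= 0.0:
                    raise ValueError("matrix is not positive definite")
                L[i][j] = sqrt(pivot)     # diagonal entry
            else:
                L[i][j] = (a[i][j] - s) / L[j][j]
    return L
```

A non-positive pivot is exactly how the factorization detects that the input is not positive definite, which is why cholesky doubles as a definiteness test.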
solve
Solve the linear system $Ax = b$ for a symmetric positive-definite matrix.
Uses Cholesky decomposition: A = L*L^T, then solves L*y = b and L^T*x = y.
For larger matrices, use LA.FFI.solve for LAPACK acceleration.
SymmetricMatrix -> [Float] -> Result [Float] String
inverse
Compute the inverse of a symmetric matrix.
The inverse of a symmetric matrix is also symmetric. Uses solve with identity matrix columns.
For larger matrices, use LA.FFI.inverse for LAPACK acceleration.
SymmetricMatrix -> Result SymmetricMatrix String
det
Compute the determinant of a symmetric matrix.
Uses the eigenvalue product: det(A) = product of eigenvalues.
SymmetricMatrix -> Result Float String
logdet
Compute the log-determinant of a symmetric positive-definite matrix.
More numerically stable than log(det(A)) for large matrices.
SymmetricMatrix -> Result Float String
cond
Compute the 2-norm condition number of a symmetric matrix.
For symmetric matrices, cond(A) = |lambda_max| / |lambda_min|.
SymmetricMatrix -> Result Float String
matrix-sqrt
Compute the matrix square root of a symmetric positive semi-definite matrix.
Returns $B$ such that $B^2 = A$ and $B$ is also symmetric.
SymmetricMatrix -> Result SymmetricMatrix String
quadratic-form
Compute the quadratic form $x^T A x$ for a symmetric matrix $A$.
SymmetricMatrix -> [Float] -> Result Float String
is-symmetric?
Check if a dense matrix is symmetric within tolerance.
[[Float]] -> Float -> Bool
SchurResult
Result of Schur decomposition.
- t: Quasi-upper triangular matrix (Schur form)
- z: Orthogonal matrix such that A = Z * T * Z^T
- n: Size of the matrix
Variants
SchurResult {t, z, n}
decompose
Compute the real Schur decomposition of a square matrix.
Given a square matrix A, computes orthogonal Z and quasi-upper triangular T such that A = Z * T * Z^T.
The algorithm:
1. Reduce A to upper Hessenberg form H
2. Apply QR iteration with shifts until convergence
3. Accumulate orthogonal transformations
[[Float]] -> Result SchurResult String
get-t
Get the quasi-upper triangular matrix T from the decomposition.
SchurResult -> [[Float]]
get-z
Get the orthogonal matrix Z from the decomposition.
SchurResult -> [[Float]]
size
Get the size of the decomposed matrix.
SchurResult -> Int
is-quasi-triangular?
Verify that T is in quasi-upper triangular form.
Checks that all elements below the first subdiagonal are zero, and that 2x2 diagonal blocks (if any) correspond to complex eigenvalues.
SchurResult -> Bool
is-orthogonal?
Verify that Z is orthogonal (Z^T * Z ≈ I).
SchurResult -> Bool
reconstruct
Reconstruct the original matrix as Z * T * Z^T.
SchurResult -> [[Float]]
eigenvalues
Extract eigenvalues from the Schur form.
For 1x1 diagonal blocks, the eigenvalue is real. For 2x2 diagonal blocks, computes complex eigenvalue pair.
Returns list of {real: Float, imag: Float} records.
SchurResult -> [{real: Float, imag: Float}]
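The block walk the entry describes is mechanical; here is a hedged Python sketch (plain lists, not the Kit representation) that emits `{real, imag}` records from a quasi-upper-triangular T:

```python
import math

def schur_eigenvalues(T, tol=1e-12):
    """Walk the diagonal of a quasi-upper-triangular T.

    A 1x1 block gives a real eigenvalue. A 2x2 block [[a,b],[c,d]]
    (detected by a nonzero subdiagonal entry) gives the pair
    m +/- sqrt(disc), where m = (a+d)/2 and disc = m^2 - (ad - bc).
    """
    n = len(T)
    out, i = [], 0
    while i < n:
        if i + 1 < n and abs(T[i + 1][i]) > tol:   # 2x2 block
            a, b = T[i][i], T[i][i + 1]
            c, d = T[i + 1][i], T[i + 1][i + 1]
            m = (a + d) / 2.0
            disc = m * m - (a * d - b * c)
            if disc >= 0:                          # two real eigenvalues
                r = math.sqrt(disc)
                out += [{"real": m + r, "imag": 0.0},
                        {"real": m - r, "imag": 0.0}]
            else:                                  # complex conjugate pair
                r = math.sqrt(-disc)
                out += [{"real": m, "imag": r},
                        {"real": m, "imag": -r}]
            i += 2
        else:                                      # 1x1 block
            out.append({"real": T[i][i], "imag": 0.0})
            i += 1
    return out

# A rotation-like 2x2 block (eigenvalues ±i) followed by a 1x1 block:
T = [[0.0, -1.0, 2.0],
     [1.0,  0.0, 3.0],
     [0.0,  0.0, 5.0]]
print(schur_eigenvalues(T))
```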
diagonal
Get the diagonal elements of T.
SchurResult -> [Float]
trace
Compute the trace of T (equals trace of A).
SchurResult -> Float
det
Compute the determinant using the Schur form.
For quasi-triangular T, det = product of 1x1 blocks * product of 2x2 block determinants.
SchurResult -> Float
ComplexExpr
A lazy expression tree for complex matrix computations.
Expressions are built without performing any computation. They can be optimized via algebraic rewrite rules and then evaluated in a single pass.
Variants
CLit {_0}
CAdd {ComplexExpr, ComplexExpr}
CSub {ComplexExpr, ComplexExpr}
CScale {Complex, ComplexExpr}
CScaleReal {Float, ComplexExpr}
CMul {ComplexExpr, ComplexExpr}
CTrans {ComplexExpr}
CHerm {ComplexExpr}
CConj {ComplexExpr}
CNeg {ComplexExpr}
CIdentity {Int}
CZeros {Int, Int}
CFill {Int, Int, Complex}
CDiagM {_0}
CFromReal {_0}
of
Wrap a complex matrix literal as an expression.
[[Complex]] -> ComplexExpr
CExpr.of [[Complex.new 1.0 2.0]]
from-real
Create a complex expression from a real matrix.
[[Float]] -> ComplexExpr
CExpr.from-real [[1.0, 2.0], [3.0, 4.0]]
plus
Add two expressions. Pipe-friendly: piped value becomes left operand.
ComplexExpr -> ComplexExpr -> ComplexExpr
CExpr.of a |> CExpr.plus (CExpr.of b) # a + b
minus
Subtract right from left. Pipe-friendly: piped value becomes left operand.
ComplexExpr -> ComplexExpr -> ComplexExpr
CExpr.of a |> CExpr.minus (CExpr.of b) # a - b
times
Multiply two expressions. Pipe-friendly: piped value becomes left operand.
ComplexExpr -> ComplexExpr -> ComplexExpr
CExpr.of a |> CExpr.times (CExpr.of b) # a * b
scale
Scale an expression by a complex scalar.
Complex -> ComplexExpr -> ComplexExpr
CExpr.scale (Complex.new 2.0 1.0) (CExpr.of a)
scale-real
Scale an expression by a real scalar.
Float -> ComplexExpr -> ComplexExpr
CExpr.scale-real 2.0 (CExpr.of a)
transpose
Transpose an expression.
ComplexExpr -> ComplexExpr
CExpr.of a |> CExpr.transpose
hermitian
Hermitian (conjugate transpose) of an expression.
ComplexExpr -> ComplexExpr
CExpr.of a |> CExpr.hermitian
conjugate
Conjugate an expression (element-wise complex conjugate).
ComplexExpr -> ComplexExpr
CExpr.of a |> CExpr.conjugate
negate
Negate an expression (multiply by -1).
ComplexExpr -> ComplexExpr
CExpr.negate (CExpr.of a)
identity
Create a lazy complex identity matrix I(n). No allocation until eval.
Int -> ComplexExpr
CExpr.identity 3 # 3x3 complex identity matrix (lazy)
zeros
Create a lazy complex zero matrix. No allocation until eval.
Int -> Int -> ComplexExpr
CExpr.zeros 2 3 # 2x3 complex zero matrix (lazy)
fill
Create a lazy complex fill matrix. No allocation until eval.
Int -> Int -> Complex -> ComplexExpr
CExpr.fill 2 3 (Complex.new 1.0 2.0)
diag-matrix
Create a lazy complex diagonal matrix from a vector. No allocation until eval.
[Complex] -> ComplexExpr
CExpr.diag-matrix [Complex.new 1.0 0.0, Complex.new 0.0 1.0]
optimize
Apply algebraic rewrite rules to simplify a complex expression tree.
Rules applied (bottom-up):
- Scalar folding: CScale a (CScale b x) -> CScale (a*b) x
- Double transpose: CTrans (CTrans x) -> x
- Double hermitian: CHerm (CHerm x) -> x
- Double conjugate: CConj (CConj x) -> x
- Double negation: CNeg (CNeg x) -> x
- Identity scale: CScaleReal 1.0 x -> x
- Add-neg to sub: CAdd x (CNeg y) -> CSub x y
- Sub-neg to add: CSub x (CNeg y) -> CAdd x y
- Neg into scale: CNeg (CScaleReal a x) -> CScaleReal (-a) x
- Identity transpose: CTrans (CIdentity n) -> CIdentity n
- Identity hermitian: CHerm (CIdentity n) -> CIdentity n
- Left identity mul: CMul (CIdentity _) X -> X
- Right identity mul: CMul X (CIdentity _) -> X
- Zero operations: various rules for zeros
- Hermitian of product: CHerm (CMul A B) -> CMul (CHerm B) (CHerm A)
- Scale extraction: CMul (CScale a A) B -> CScale a (CMul A B)
ComplexExpr -> ComplexExpr
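The hermitian-of-product rule rests on the identity $(AB)^H = B^H A^H$; a quick numerical check in Python (illustrating the algebra behind the rewrite, not the rewriter itself):

```python
import numpy as np

# CHerm (CMul A B) -> CMul (CHerm B) (CHerm A) is sound because the
# conjugate transpose of a product reverses the factors:
rng = np.random.default_rng(0)
A = rng.normal(size=(3, 4)) + 1j * rng.normal(size=(3, 4))
B = rng.normal(size=(4, 2)) + 1j * rng.normal(size=(4, 2))

lhs = (A @ B).conj().T           # hermitian of the product
rhs = B.conj().T @ A.conj().T    # product of hermitians, reversed
print(np.allclose(lhs, rhs))     # True
```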
eval
Evaluate a complex expression tree by walking it and calling the underlying complex matrix operations.
Returns the resulting complex matrix, or an error if a dimension mismatch or other linear algebra error occurs.
ComplexExpr -> Result [[Complex]] String
eval-optimized
Optimize and then evaluate a complex expression tree.
Applies algebraic rewrite rules before evaluation to eliminate unnecessary intermediate computations.
ComplexExpr -> Result [[Complex]] String
eval-lazy
Create a lazy (deferred) evaluation of a complex expression tree.
The expression is optimized and evaluated only when the result is needed.
ComplexExpr -> Result [[Complex]] String
eval-memo
Create a memoized (cached) evaluation of a complex expression tree.
The expression is optimized and evaluated once, then the result is cached.
ComplexExpr -> Result [[Complex]] String
promote-matrix
Promote a real matrix expression result to a complex matrix.
[[Float]] -> [[Complex]]
TriangularMatrix
A triangular matrix with its structure type.
Variants indicate the structure:
- Upper: Upper triangular (zeros below diagonal)
- Lower: Lower triangular (zeros above diagonal)
- UnitUpper: Upper triangular with implicit 1s on diagonal
- UnitLower: Lower triangular with implicit 1s on diagonal
Variants
Upper {data}
Lower {data}
UnitUpper {data}
UnitLower {data}
upper
Create an upper triangular matrix from a dense matrix. Elements below the diagonal are ignored.
[[Float]] -> TriangularMatrix
lower
Create a lower triangular matrix from a dense matrix. Elements above the diagonal are ignored.
[[Float]] -> TriangularMatrix
unit-upper
Create a unit upper triangular matrix (diagonal implicitly 1). Elements on and below the diagonal are ignored.
[[Float]] -> TriangularMatrix
unit-lower
Create a unit lower triangular matrix (diagonal implicitly 1). Elements on and above the diagonal are ignored.
[[Float]] -> TriangularMatrix
upper-identity
Create an upper triangular identity matrix.
Int -> TriangularMatrix
lower-identity
Create a lower triangular identity matrix.
Int -> TriangularMatrix
data
Get the underlying data matrix.
TriangularMatrix -> [[Float]]
size
Get the size (dimension) of the triangular matrix.
TriangularMatrix -> Int
is-upper?
Check if the matrix is upper triangular.
TriangularMatrix -> Bool
is-lower?
Check if the matrix is lower triangular.
TriangularMatrix -> Bool
is-unit?
Check if the matrix has unit diagonal.
TriangularMatrix -> Bool
get
Get an element at position (i, j). Respects triangular structure (returns 0 for zeros, 1 for unit diagonal).
Int -> Int -> TriangularMatrix -> Option Float
to-dense
Convert a triangular matrix to a dense matrix. Fills in zeros and unit diagonals as appropriate.
TriangularMatrix -> [[Float]]
diagonal
Extract the diagonal elements.
TriangularMatrix -> [Float]
solve
Solve the linear system $T \cdot x = b$ for x using back/forward substitution.
For upper triangular matrices: back substitution, starting from the last row. For lower triangular matrices: forward substitution, starting from the first row.
Complexity: O(n²) instead of O(n³) for general solve.
TriangularMatrix -> [Float] -> Result [Float] String
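Both substitutions are short enough to show in full; the following Python sketch (a plain-list illustration of the technique, not the Kit implementation) makes the $O(n^2)$ cost visible as one inner product per row:

```python
def solve_upper(U, b):
    """Back substitution for upper-triangular U x = b: O(n^2)."""
    n = len(b)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):          # last row first
        if U[i][i] == 0.0:
            raise ValueError("singular triangular matrix")
        s = sum(U[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / U[i][i]
    return x

def solve_lower(L, b):
    """Forward substitution for lower-triangular L x = b: O(n^2)."""
    n = len(b)
    x = [0.0] * n
    for i in range(n):                      # first row first
        if L[i][i] == 0.0:
            raise ValueError("singular triangular matrix")
        s = sum(L[i][j] * x[j] for j in range(i))
        x[i] = (b[i] - s) / L[i][i]
    return x

print(solve_upper([[2.0, 1.0], [0.0, 4.0]], [5.0, 8.0]))   # [1.5, 2.0]
```

For the unit variants, the diagonal divide would simply be skipped (implicit 1s).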
mul-vec
Multiply a triangular matrix by a vector: $y = T \cdot x$
TriangularMatrix -> [Float] -> Result [Float] String
det
Compute the determinant of a triangular matrix.
For triangular matrices, det = product of diagonal elements. Complexity: O(n) instead of O(n³).
TriangularMatrix -> Float
logdet
Compute the log-determinant of a triangular matrix.
TriangularMatrix -> Result Float String
trace
Compute the trace of a triangular matrix.
TriangularMatrix -> Float
is-invertible?
Check if the triangular matrix is invertible (non-singular).
TriangularMatrix -> Bool
inverse
Compute the inverse of a triangular matrix.
The inverse of a triangular matrix is also triangular with the same structure.
TriangularMatrix -> Result TriangularMatrix String
mat-mul
Multiply two triangular matrices of the same type.
Note: The product of two upper (lower) triangular matrices is upper (lower) triangular.
TriangularMatrix -> TriangularMatrix -> Result TriangularMatrix String
mul-left
Left-multiply a dense matrix by a triangular matrix: $C = T \cdot A$
TriangularMatrix -> [[Float]] -> Result [[Float]] String
mul-right
Right-multiply a dense matrix by a triangular matrix: $C = A \cdot T$
[[Float]] -> TriangularMatrix -> Result [[Float]] String
scale
Scale a triangular matrix by a scalar.
Float -> TriangularMatrix -> TriangularMatrix
transpose
Transpose a triangular matrix.
Transposing upper triangular gives lower triangular and vice versa.
TriangularMatrix -> TriangularMatrix
Expr
A lazy expression tree for matrix computations.
Expressions are built without performing any computation. They can be optimized via algebraic rewrite rules and then evaluated in a single pass.
Variants
Lit {_0}
Add {Expr, Expr}
Sub {Expr, Expr}
Scale {Float, Expr}
Mul {Expr, Expr}
Trans {Expr}
Neg {Expr}
Inv {Expr}
Identity {Int}
Zeros {Int, Int}
Ones {Int, Int}
Fill {Int, Int, Float}
DiagM {_0}
of
Wrap a matrix literal as an expression.
[[Float]] -> Expr
LA.of [[1.0, 2.0], [3.0, 4.0]]
plus
Add two expressions. Pipe-friendly: piped value becomes left operand.
Expr -> Expr -> Expr
LA.of a |> LA.plus (LA.of b) # a + b
minus
Subtract right from left. Pipe-friendly: piped value becomes left operand.
Expr -> Expr -> Expr
LA.of a |> LA.minus (LA.of b) # a - b
times
Multiply two expressions. Pipe-friendly: piped value becomes left operand.
Expr -> Expr -> Expr
LA.of a |> LA.times (LA.of b) # a * b
scale
Scale an expression by a scalar.
Float -> Expr -> Expr
LA.scale 2.0 (LA.of a) # 2a
transpose
Transpose an expression.
Expr -> Expr
LA.of a |> LA.transpose
negate
Negate an expression (multiply by -1).
Expr -> Expr
LA.negate (LA.of a)
inv
Invert an expression (matrix inverse).
Expr -> Expr
LA.inv (LA.of a)
identity
Create a lazy identity matrix I(n). No allocation until eval.
Int -> Expr
LA.identity 3 # 3x3 identity matrix (lazy)
zeros
Create a lazy zero matrix. No allocation until eval.
Int -> Int -> Expr
LA.zeros 2 3 # 2x3 zero matrix (lazy)
ones
Create a lazy ones matrix. No allocation until eval.
Int -> Int -> Expr
LA.ones 2 3 # 2x3 ones matrix (lazy)
fill
Create a lazy fill matrix. No allocation until eval.
Int -> Int -> Float -> Expr
LA.fill 2 3 5.0 # 2x3 matrix filled with 5.0 (lazy)
diag-matrix
Create a lazy diagonal matrix from a vector. No allocation until eval.
[Float] -> Expr
LA.diag-matrix [1.0, 2.0, 3.0] # 3x3 diagonal matrix (lazy)
optimize
Apply algebraic rewrite rules to simplify an expression tree.
Rules applied (bottom-up):
1. Scalar folding: Scale a (Scale b x) -> Scale (a*b) x
2. Double transpose: Trans (Trans x) -> x
3. Double negation: Neg (Neg x) -> x
4. Identity scale: Scale 1.0 x -> x
5. Add-neg to sub: Add x (Neg y) -> Sub x y
6. Sub-neg to add: Sub x (Neg y) -> Add x y
7. Neg into scale: Neg (Scale a x) -> Scale (-a) x
8. Identity transpose: Trans (Identity n) -> Identity n
9. Identity inverse: Inv (Identity n) -> Identity n
10. Left identity mul: Mul (Identity _) X -> X
11. Right identity mul: Mul X (Identity _) -> X
12. Zero negation: Neg (Zeros m n) -> Zeros m n
13. Zero scaling: Scale _ (Zeros m n) -> Zeros m n
14. Zero transpose: Trans (Zeros m n) -> Zeros n m
15. Zero addition (right): Add X (Zeros _ _) -> X
16. Zero addition (left): Add (Zeros _ _) X -> X
17. Zero subtraction: Sub X (Zeros _ _) -> X
18. Zero multiplication (left): Mul (Zeros m _) X -> Zeros m (cols X)
19. Zero multiplication (right): Mul X (Zeros _ n) -> Zeros (rows X) n
20. Transpose of product: Trans (Mul A B) -> Mul (Trans B) (Trans A)
21. Transpose of ones: Trans (Ones m n) -> Ones n m
22. Transpose of fill: Trans (Fill m n v) -> Fill n m v
23. Scale extraction (left): Mul (Scale a A) B -> Scale a (Mul A B)
24. Scale extraction (right): Mul A (Scale a B) -> Scale a (Mul A B)
Expr -> Expr
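The bottom-up shape of such a rewriter is simple: optimize the children first, then pattern-match the node. A toy Python sketch covering a handful of the rules above (rules 1, 2, 10, and 11; the constructor names mirror the variants but this is an illustration, not the Kit code):

```python
from typing import Any, NamedTuple

class Lit(NamedTuple): m: Any
class Trans(NamedTuple): e: Any
class Scale(NamedTuple): a: float; e: Any
class Mul(NamedTuple): l: Any; r: Any
class Identity(NamedTuple): n: int

def optimize(e):
    # 1) rewrite children first (bottom-up)
    if isinstance(e, Trans):
        e = Trans(optimize(e.e))
    elif isinstance(e, Scale):
        e = Scale(e.a, optimize(e.e))
    elif isinstance(e, Mul):
        e = Mul(optimize(e.l), optimize(e.r))
    # 2) then rewrite the node itself
    if isinstance(e, Trans) and isinstance(e.e, Trans):
        return e.e.e                        # Trans (Trans x) -> x
    if isinstance(e, Scale) and isinstance(e.e, Scale):
        return Scale(e.a * e.e.a, e.e.e)    # Scale a (Scale b x) -> Scale (a*b) x
    if isinstance(e, Mul) and isinstance(e.l, Identity):
        return e.r                          # Mul (Identity _) X -> X
    if isinstance(e, Mul) and isinstance(e.r, Identity):
        return e.l                          # Mul X (Identity _) -> X
    return e

expr = Mul(Identity(2), Trans(Trans(Scale(2.0, Scale(3.0, Lit("A"))))))
print(optimize(expr))   # Scale(a=6.0, e=Lit(m='A'))
```

Because children are simplified before the parent is matched, one pass suffices for rules that only inspect a node and its immediate children.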
eval
Evaluate an expression tree by walking it and calling the underlying matrix operations.
Returns the resulting matrix, or an error if a dimension mismatch or other linear algebra error occurs.
Expr -> Result [[Float]] String
eval-with-inverse-fn
Evaluate an expression tree with a custom inverse function.
This allows the caller (main.kit) to provide an FFI-backed inverse function for matrices larger than 2x2.
([[Float]] -> Result [[Float]] String) -> Expr -> Result [[Float]] String
eval-optimized
Optimize and then evaluate an expression tree.
Applies algebraic rewrite rules before evaluation to eliminate unnecessary intermediate computations.
Expr -> Result [[Float]] String
eval-lazy
Create a lazy (deferred) evaluation of an expression tree.
The expression is optimized and evaluated only when the result is needed.
Expr -> Result [[Float]] String
eval-memo
Create a memoized (cached) evaluation of an expression tree.
The expression is optimized and evaluated once, then the result is cached.
Expr -> Result [[Float]] String
expr-hash
Compute a structural hash of an expression tree.
This produces a unique string key for each distinct expression structure, allowing identical subexpressions to be identified and cached.
Expr -> String
eval-memoized
Evaluate an expression tree with subexpression memoization.
This evaluator caches the results of subexpressions during evaluation, preventing redundant computation when the same subexpression appears multiple times in a DAG-shaped expression tree.
For example, in the expression (A + B) * (A + B), the subexpression A + B is computed only once and the result is reused.
Expr -> Result [[Float]] String
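The mechanism can be sketched in Python (plain tuples and lists stand in for the Kit types; `expr_key` plays the role of expr-hash, keying literals by object identity for simplicity):

```python
# Expressions as nested tuples: ("lit", matrix) | ("add", l, r) | ("mul", l, r)
def expr_key(e):
    """Structural hash: a unique string per expression shape."""
    tag = e[0]
    if tag == "lit":
        return "lit:" + str(id(e[1]))       # identity of the literal
    return tag + "(" + ",".join(expr_key(c) for c in e[1:]) + ")"

def eval_memo(e, cache, counter):
    key = expr_key(e)
    if key in cache:                        # shared subexpression: reuse
        return cache[key]
    counter[0] += 1                         # count actual evaluations
    tag = e[0]
    if tag == "lit":
        v = e[1]
    elif tag == "add":
        l = eval_memo(e[1], cache, counter)
        r = eval_memo(e[2], cache, counter)
        v = [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(l, r)]
    elif tag == "mul":
        l = eval_memo(e[1], cache, counter)
        r = eval_memo(e[2], cache, counter)
        v = [[sum(a * b for a, b in zip(row, col)) for col in zip(*r)]
             for row in l]
    cache[key] = v
    return v

A = ("lit", [[1.0, 2.0], [3.0, 4.0]])
B = ("lit", [[0.0, 1.0], [1.0, 0.0]])
s = ("add", A, B)          # the shared subexpression A + B
expr = ("mul", s, s)       # (A + B) * (A + B)
cache, counter = {}, [0]
result = eval_memo(expr, cache, counter)
print(counter[0])          # 4: mul, add, A, B (second add is a cache hit)
print(len(cache))          # 4 unique subexpressions cached
```

Without memoization the same tree costs six evaluations; the cache size is exactly what a counterpart of eval-memoized-cache-size would report here.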
eval-memoized-cache-size
Get cache statistics after memoized evaluation.
Returns the number of unique subexpressions that were cached.
Expr -> Int
DiagonalMatrix
A diagonal matrix represented by its diagonal elements.
Stores only the n diagonal elements for an n×n matrix, using O(n) space instead of O(n²).
Variants
DiagonalMatrix {values}
from-vector
Create a diagonal matrix from a vector of diagonal elements.
[Float] -> DiagonalMatrix
of
Create a diagonal matrix from a list of diagonal elements. Alias for from-vector.
[Float] -> DiagonalMatrix
identity
Create an n×n identity matrix as a diagonal matrix.
Int -> DiagonalMatrix
zeros
Create an n×n zero diagonal matrix.
Int -> DiagonalMatrix
fill
Create an n×n diagonal matrix filled with a constant.
Int -> Float -> DiagonalMatrix
from-dense
Create a diagonal matrix from the diagonal of a dense matrix.
[[Float]] -> DiagonalMatrix
values
Get the diagonal elements as a vector.
DiagonalMatrix -> [Float]
size
Get the size (dimension) of the diagonal matrix.
DiagonalMatrix -> Int
get
Get a specific diagonal element (0-indexed).
Int -> DiagonalMatrix -> Option Float
to-dense
Convert a diagonal matrix to a dense matrix.
DiagonalMatrix -> [[Float]]
mul-vec
Multiply a diagonal matrix by a vector: $y = D \cdot x$
Each element $y_i = d_i \cdot x_i$ (element-wise product). Complexity: O(n) instead of O(n²) for dense matrix-vector multiply.
DiagonalMatrix -> [Float] -> Result [Float] String
solve
Solve the linear system $D \cdot x = b$ for x.
Solution: $x_i = b_i / d_i$. Complexity: O(n) instead of O(n³) for dense solve.
Returns error if any diagonal element is zero (singular matrix).
DiagonalMatrix -> [Float] -> Result [Float] String
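Both operations reduce to a single pass over the stored diagonal; a hedged Python sketch of the technique (lists standing in for the Kit types):

```python
def diag_mul_vec(d, x):
    """y = D x for a diagonal D stored as its diagonal d: O(n)."""
    if len(d) != len(x):
        raise ValueError("dimension mismatch")
    return [di * xi for di, xi in zip(d, x)]

def diag_solve(d, b):
    """Solve D x = b: x_i = b_i / d_i, O(n); error on zero diagonal."""
    if len(d) != len(b):
        raise ValueError("dimension mismatch")
    if any(di == 0.0 for di in d):
        raise ValueError("singular diagonal matrix")
    return [bi / di for di, bi in zip(d, b)]

d = [2.0, 4.0, 0.5]
print(diag_mul_vec(d, [1.0, 1.0, 2.0]))   # [2.0, 4.0, 1.0]
print(diag_solve(d, [2.0, 4.0, 1.0]))     # [1.0, 1.0, 2.0]
```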
mul
Multiply two diagonal matrices: $C = A \cdot B$
Result diagonal: $c_i = a_i \cdot b_i$. Complexity: O(n) instead of O(n³) for dense multiply.
DiagonalMatrix -> DiagonalMatrix -> Result DiagonalMatrix String
add
Add two diagonal matrices: $C = A + B$
Result diagonal: $c_i = a_i + b_i$
DiagonalMatrix -> DiagonalMatrix -> Result DiagonalMatrix String
subtract
Subtract two diagonal matrices: $C = A - B$
DiagonalMatrix -> DiagonalMatrix -> Result DiagonalMatrix String
mul-left
Left-multiply a dense matrix by a diagonal: $C = D \cdot A$
Each row i of the result is scaled by $d_i$. Complexity: O(n²) instead of O(n³).
DiagonalMatrix -> [[Float]] -> Result [[Float]] String
mul-right
Right-multiply a dense matrix by a diagonal: $C = A \cdot D$
Each column j of the result is scaled by $d_j$. Complexity: O(n²) instead of O(n³).
[[Float]] -> DiagonalMatrix -> Result [[Float]] String
scale
Scale a diagonal matrix by a scalar: $C = \alpha D$
Float -> DiagonalMatrix -> DiagonalMatrix
inverse
Compute the inverse of a diagonal matrix.
Inverse diagonal: $d^{-1}_i = 1 / d_i$. Complexity: O(n) instead of O(n³).
Returns error if any diagonal element is zero.
DiagonalMatrix -> Result DiagonalMatrix String
det
Compute the determinant of a diagonal matrix.
$\det(D) = \prod_i d_i$. Complexity: O(n) instead of O(n³).
DiagonalMatrix -> Float
logdet
Compute the log-determinant of a diagonal matrix.
$\log|\det(D)| = \sum_i \log|d_i|$. More numerically stable than log(det(D)) for large matrices.
DiagonalMatrix -> Result Float String
trace
Compute the trace of a diagonal matrix.
$\text{tr}(D) = \sum_i d_i$
DiagonalMatrix -> Float
eigenvalues
Get the eigenvalues of a diagonal matrix.
For a diagonal matrix, the eigenvalues are exactly the diagonal elements. Complexity: O(n) instead of O(n³).
DiagonalMatrix -> [Float]
eigenvectors
Get the eigenvectors of a diagonal matrix.
For a diagonal matrix, the eigenvectors are the standard basis vectors. Returns the identity matrix.
DiagonalMatrix -> [[Float]]
is-positive-definite?
Check if the diagonal matrix is positive definite.
A diagonal matrix is positive definite iff all diagonal elements are positive.
DiagonalMatrix -> Bool
is-invertible?
Check if the diagonal matrix is invertible (non-singular).
A diagonal matrix is invertible iff no diagonal element is zero.
DiagonalMatrix -> Bool
cond
Compute the condition number of a diagonal matrix.
$\kappa(D) = \frac{\max_i |d_i|}{\min_i |d_i|}$
DiagonalMatrix -> Result Float String
pow
Compute a matrix power for a diagonal matrix.
$D^p$ where $d^p_i = d_i^p$
DiagonalMatrix -> Float -> Result DiagonalMatrix String
sqrt
Compute the square root of a diagonal matrix.
$D^{1/2}$ where $d^{1/2}_i = \sqrt{d_i}$
Requires all diagonal elements to be non-negative.
DiagonalMatrix -> Result DiagonalMatrix String