Linear algebra refresher, robotics edition
Vectors, matrices, rotations, and eigenvectors — every example a robot. The calculator-level fluency you need for kinematics, dynamics, control, and SLAM.
Half of robotics is linear algebra dressed up in domain language. Here's the working knowledge — what you need cold to read papers, debug Kalman filters, derive Jacobians, or implement an arm controller. Calculator-level fluency, not proof-level.
Vectors: positions and velocities
A 3D point is a vector p = (x, y, z).
The robot's velocity is a vector. The force on a sensor is a vector. The state of an n-DOF arm is an n-vector of joint angles. If you can speak about "n-vectors of something," you can describe most robot states.
Two operations show up everywhere:
- Dot product a · b = |a||b| cos θ — projection of one vector onto another. Zero ⇔ perpendicular.
- Cross product (3D only) — produces a vector perpendicular to both, magnitude |a||b| sin θ. The torque from a force F at a lever arm r is τ = r × F. Both appear in the sketch after this list.
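A quick numpy check of both operations; the vectors and the 10 N force are made up for illustration:

```python
import numpy as np

a = np.array([1.0, 0.0, 0.0])
b = np.array([0.0, 2.0, 0.0])

print(np.dot(a, b))    # 0.0 -> perpendicular
print(np.cross(a, b))  # [0. 0. 2.] -> perpendicular to both a and b

# Torque from a 10 N force along +y applied 0.5 m out along +x:
r = np.array([0.5, 0.0, 0.0])
F = np.array([0.0, 10.0, 0.0])
tau = np.cross(r, F)   # [0. 0. 5.] -> 5 N·m about +z
```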
Matrices: linear transforms
A matrix M maps one vector to another: w = Mv. Two interpretations of the same equation:
- "Apply M to v." M is a function; v is the input.
- "Express v in M's coordinate system." M is a basis change.
Matrix multiplication composes transforms: M₂(M₁v) = (M₂M₁)v. Order matters — matrices generally don't commute.
Three matrix flavors you'll meet most:
- Rotation matrices R, with RᵀR = I, det R = +1. The set is SO(3) in 3D (constructed in the sketch after this list).
- Rigid-body transforms T (4×4 homogeneous, rotation + translation). The set is SE(3).
- Jacobian matrices J — partial derivatives of one vector function w.r.t. another. The matrix that connects joint velocities to end-effector velocities.
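A minimal sketch of the first two flavors: a rotation about z checked against the SO(3) conditions, then embedded in a 4×4 homogeneous transform. The angle and translation are arbitrary:

```python
import numpy as np

theta = np.deg2rad(30)
c, s = np.cos(theta), np.sin(theta)
R = np.array([[c,  -s,  0.0],
              [s,   c,  0.0],
              [0.0, 0.0, 1.0]])  # rotation about the z axis

# SO(3) membership: orthogonal, determinant +1
assert np.allclose(R.T @ R, np.eye(3))
assert np.isclose(np.linalg.det(R), 1.0)

# Embed rotation + translation in a 4x4 element of SE(3):
t = np.array([1.0, 2.0, 0.0])
T = np.eye(4)
T[:3, :3] = R
T[:3, 3] = t
```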
Inverse and transpose
M⁻¹ is the matrix that undoes M: M⁻¹M = MM⁻¹ = I. Computed via Gaussian elimination, LU, or analytically for small matrices.
For rotation matrices, the inverse equals the transpose: R⁻¹ = Rᵀ. This is huge — you never need to compute a matrix inverse to invert a rotation. Just transpose. Same goes for any orthogonal matrix.
For homogeneous transforms T = [R t; 0 1], the inverse is T⁻¹ = [Rᵀ −Rᵀt; 0 1]. Memorize this — it shows up in every TF lookup you'll ever debug.
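The closed form in code; invert_transform is a hypothetical helper name, not a library call:

```python
import numpy as np

def invert_transform(T):
    """Invert a 4x4 homogeneous transform using the transpose trick."""
    R, t = T[:3, :3], T[:3, 3]
    T_inv = np.eye(4)
    T_inv[:3, :3] = R.T        # rotation block: just transpose
    T_inv[:3, 3] = -R.T @ t    # translation block: -Rᵀ t
    return T_inv
```

Sanity check: invert_transform(T) @ T should come back as the 4×4 identity to machine precision.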
Eigenvalues and eigenvectors
An eigenvector v of M satisfies Mv = λv — the matrix scales v by the scalar λ but doesn't change its direction. The set of eigenvalues characterizes M's behavior:
- Stability of a controller: a linear system ẋ = Ax is stable iff every eigenvalue of A has negative real part.
- Conditioning: a matrix with one tiny eigenvalue is "near-singular" — solving systems with it amplifies noise.
- Principal axes: the eigenvectors of an inertia tensor are the principal axes of rotation. Same for covariance matrices in state estimation.
You rarely compute eigenvalues by hand. You call numpy.linalg.eig or scipy.linalg.eig.
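For instance, a stability check on a made-up 2×2 system matrix:

```python
import numpy as np

# Made-up system matrix, roughly a damped oscillator:
A = np.array([[ 0.0,  1.0],
              [-2.0, -0.5]])

eigvals, eigvecs = np.linalg.eig(A)
stable = np.all(eigvals.real < 0)  # stable iff every real part is negative
print(eigvals, stable)             # [-0.25+1.39j -0.25-1.39j] True
```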
Singular value decomposition (SVD)
For any matrix M, M = UΣVᵀ, where U, V are orthogonal and Σ is diagonal with non-negative entries. Three things SVD lets you do efficiently:
- Pseudoinverse: M⁺ = VΣ⁺Uᵀ, where Σ⁺ inverts the nonzero singular values. Solves least-squares problems. Used by iterative IK.
- Best low-rank approximation: keep the k largest singular values. Used in dimensionality reduction.
- Detect rank deficiency: tiny singular values mean the matrix is degenerate. Useful for spotting Jacobian singularities (see the sketch after this list).
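For example, detecting rank deficiency and getting the pseudoinverse, on a made-up near-singular matrix standing in for a Jacobian:

```python
import numpy as np

J = np.array([[1.0, 0.0,  1.0],
              [0.0, 1e-9, 0.0]])  # second row nearly zero: close to rank 1

U, S, Vt = np.linalg.svd(J)
num_rank = np.sum(S > 1e-6 * S[0])  # numerical rank: count non-tiny singular values
J_pinv = np.linalg.pinv(J)          # pseudoinverse, computed via SVD internally
print(S, num_rank)                  # [1.41e+00 1.00e-09] 1
```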
Linear systems
Solving Ax = b: when A is square and invertible, x = A⁻¹b. In code, never write np.linalg.inv(A) @ b; use np.linalg.solve(A, b). Faster, more numerically stable.
When A is non-square (more equations than unknowns), use least-squares: np.linalg.lstsq. This is what fits a model to noisy data — calibration, IK, sensor fusion all reduce to least-squares at some level.
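A minimal least-squares fit, with synthetic data standing in for real measurements:

```python
import numpy as np

# Fit y = m*x + c to noisy samples (synthetic data for illustration):
x = np.linspace(0.0, 1.0, 20)
y = 3.0 * x + 1.0 + 0.01 * np.random.randn(20)

A = np.column_stack([x, np.ones_like(x)])  # 20 equations, 2 unknowns
(m, c), res, rank, sv = np.linalg.lstsq(A, y, rcond=None)
print(m, c)                                # close to 3.0 and 1.0
```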
Symmetric and positive-definite matrices
A matrix M is symmetric if M = Mᵀ. It's positive-definite if vᵀMv > 0 for all nonzero v. Two facts you'll use constantly:
- Covariance matrices and inertia tensors are symmetric positive-semidefinite. They have real, non-negative eigenvalues.
- Cholesky decomposition works only for SPD matrices. Faster than LU. Used in Kalman filters and quadratic optimization.
If your covariance matrix's Cholesky decomposition fails, you have a numerical bug — usually negative eigenvalues from accumulated rounding error. The fix: re-symmetrize (M ← (M + Mᵀ)/2) and add a tiny diagonal regularizer (M ← M + εI).
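A sketch of that fix; safe_cholesky is an illustrative name, and the epsilon is a tunable guess, not a universal constant:

```python
import numpy as np

def safe_cholesky(P, eps=1e-9):
    """Re-symmetrize and regularize a covariance before factoring."""
    P = 0.5 * (P + P.T)               # kill asymmetry from rounding error
    P = P + eps * np.eye(P.shape[0])  # nudge tiny negative eigenvalues positive
    return np.linalg.cholesky(P)      # lower-triangular L with L @ L.T == P
```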
The four operations a roboticist runs in their head
- Apply a rotation: v' = R @ v. Composition: rotate first, then translate.
- Invert a rigid transform: Rᵀ on top, −Rᵀt for translation.
- Project a vector onto a direction: (v · û) û, with û a unit vector.
- Solve Ax = b with solve, never inv.
Master these and 80% of robot-paper math becomes legible.
The 30-minute drill
Before moving on, make sure you can do these on paper:
- Write the 2D rotation matrix for 60°. Compose two rotations and verify the result is a rotation by the sum (a numpy check follows this list).
- Compute Mv for a 2×2 matrix M and a 2-vector v.
- Find the eigenvalues of a 2×2 matrix by solving det(M − λI) = 0.
- Invert a homogeneous transform T = [R t; 0 1] symbolically.
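Afterwards, you can verify your paper answers in numpy; rot2d is a throwaway helper:

```python
import numpy as np

def rot2d(deg):
    a = np.deg2rad(deg)
    return np.array([[np.cos(a), -np.sin(a)],
                     [np.sin(a),  np.cos(a)]])

# Composing two rotations gives the rotation by the summed angle:
assert np.allclose(rot2d(60) @ rot2d(30), rot2d(90))
```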
If you can do those without notes, you have the linear algebra you need to start. Add depth as topics come up — Jacobians for kinematics, covariances for SLAM, gradients for ML — but don't pre-grind.
Resources
- 3Blue1Brown's Essence of Linear Algebra — 14 short videos. Builds geometric intuition no textbook matches.
- Strang's Introduction to Linear Algebra — readable. The MIT OCW lectures are also free.
- Modern Robotics Chapter 3 — concentrated dose of the rotation/transform math you'll actually use.
The right amount of linear algebra for robotics: enough to read a paper without skipping the math, not enough to prove the spectral theorem. You'll be there in 20 hours of focused study.
Next
Probability for state estimation — the other half of every Kalman filter, every particle filter, every SLAM paper.