RobotForge
Published·~13 min

Lyapunov stability for roboticists

The energy-function trick that turns 'I think this controller works' into 'I can prove it converges.' Useful exactly when intuition runs out — nonlinear systems, adaptive control, sliding-mode.

by RobotForge
#control#lyapunov#stability#math

Linear systems have a stability test: check the eigenvalues of A. Done. Nonlinear systems don't have a one-shot test — you can have stable origins, regions of attraction, limit cycles, chaos. Lyapunov's idea (1892, still the standard 130 years later): pick an "energy-like" function of the state, show it always decreases, and you've proved the system converges. Once the trick clicks, every "is this controller stable?" paper becomes legible.

The intuition: energy

A pendulum with friction always stops at the bottom because its mechanical energy strictly decreases. Total energy starts positive, is zero only at the rest state, and decreases along every trajectory. That's all you need.

Lyapunov generalized: any positive function with strictly negative derivative implies convergence. The function doesn't have to be physical energy — it just has to look like one.

The formal definition

For a system ẋ = f(x) with equilibrium at the origin, a function V(x) is a Lyapunov function if:

  • V(0) = 0 and V(x) > 0 for all x ≠ 0 (positive definite);
  • V̇(x) = ∇V(x) · f(x) ≤ 0 along trajectories.

If such a V exists, the origin is stable (trajectories don't blow up). If V̇ is strictly negative away from the origin, it is asymptotically stable (trajectories converge to zero). If both hold globally and V is radially unbounded, globally asymptotically stable.

The simplest worked example

System: ẋ = −x³. Try V(x) = ½x².

Check: V(0) = 0, V(x) > 0 otherwise. Compute the derivative along trajectories:

V̇ = x·ẋ = x·(−x³) = −x⁴

Strictly negative away from x = 0. The origin is globally asymptotically stable. Done. We never had to solve the differential equation.
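The proof above is analytic, but it's easy to sanity-check numerically. A minimal sketch (forward-Euler integration, step size and initial conditions chosen arbitrarily for illustration):

```python
# Numerical sanity check (not a proof): integrate xdot = -x**3 from several
# initial conditions and confirm V(x) = 0.5 * x**2 never increases.

def step(x, dt=1e-3):
    return x + dt * (-x**3)          # forward-Euler step of xdot = -x^3

for x0 in (-2.0, 0.5, 3.0):
    x = x0
    V_prev = 0.5 * x**2
    for _ in range(20000):           # simulate 20 s
        x = step(x)
        V = 0.5 * x**2
        assert V <= V_prev, "V increased -- Lyapunov argument violated"
        V_prev = V
    print(f"x0 = {x0:+.1f}  ->  x = {x:+.5f},  V = {V:.6f}")
```

Every trajectory rides V monotonically downhill toward the origin, exactly as the sign of V̇ predicts.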

Why this matters in practice

You'll use Lyapunov to:

  • Prove a controller stabilizes the system. Standard pattern: define V to be a quadratic in the error, design the control law so V̇ < 0.
  • Bound regions of attraction. The controller might only work near the equilibrium; the level sets of V tell you "trajectories starting inside this region converge."
  • Design adaptive controllers. When system parameters are unknown, Lyapunov-based adaptation laws guarantee both tracking convergence and bounded parameter estimates.
  • Justify nonlinear-control choices. Sliding-mode, backstepping, energy shaping — all are Lyapunov-based.

The robot-arm example everyone meets first

Trajectory tracking on an arm with ideal dynamics M(q)q̈ + C(q, q̇)q̇ + g(q) = τ. Define the error e = q − q_d. Pick a Lyapunov candidate:

V = ½ ėᵀM(q)ė + ½ eᵀK_p e

That's "kinetic energy of the error" plus "elastic energy." Take the derivative, plug in the dynamics, choose torques τ that make V̇ negative semidefinite. The result is a PD-plus-feedforward law that's provably stable for any positive-definite K_p and K_d.

You didn't tune by trial-and-error. You proved it from first principles. Slotine and Li's Applied Nonlinear Control works through this in detail — still the canonical reference.
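For a 1-DOF arm the whole argument fits in a few lines. A sketch with made-up values (inertia m, gains kp and kd, and the reference q_d(t) = sin t are illustration choices, not from Slotine and Li): the law τ = m·q̈_d − k_d·ė − k_p·e gives error dynamics m·ë = −k_d·ė − k_p·e, so V = ½m·ė² + ½k_p·e² has V̇ = −k_d·ė² ≤ 0.

```python
# 1-DOF "arm" (pure inertia, m*qddot = tau) tracking q_d(t) = sin(t) with
# the Lyapunov-derived PD-plus-feedforward law.  Gains are arbitrary
# positive-definite (here scalar) illustration values.
import math

m, kp, kd, dt = 1.0, 4.0, 2.0, 1e-3
q, qdot, t = 1.0, 0.0, 0.0           # start 1 rad off the reference

def reference(t):
    return math.sin(t), math.cos(t), -math.sin(t)   # q_d, qdot_d, qddot_d

V_hist = []
for _ in range(int(10.0 / dt)):      # simulate 10 s
    qd, qd_dot, qd_dd = reference(t)
    e, edot = q - qd, qdot - qd_dot
    tau = m * qd_dd - kd * edot - kp * e     # PD + feedforward
    qddot = tau / m                           # ideal dynamics
    q, qdot, t = q + dt * qdot, qdot + dt * qddot, t + dt
    V_hist.append(0.5 * m * edot**2 + 0.5 * kp * e**2)

print(f"V start = {V_hist[0]:.4f}, V end = {V_hist[-1]:.2e}")
```

The error "energy" V collapses by orders of magnitude; any positive kp, kd works, which is the point of the proof.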

The catch: finding V

Lyapunov gives sufficient conditions, not a recipe. You have to guess a candidate V. Some patterns:

  • Quadratic forms for linear systems. Solve the Lyapunov equation AᵀP + PA = −Q for any positive-definite Q; then V = xᵀPx. P exists iff A is stable (all eigenvalues in the open left half-plane).
  • Energy-like functions for mechanical systems. Total energy or modified energy.
  • Sum of squares (SOS). Modern computational technique: search over polynomial Lyapunov functions via convex optimization. Used by control researchers for systems where intuition fails.
  • Composite Lyapunov: for cascaded or hierarchical systems, sum smaller V's together. Common in adaptive control.

For most robotics work, the first two suffice.
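The linear-system pattern is one function call in SciPy. A sketch (the matrix A below is an arbitrary stable example, not from this article):

```python
# Quadratic Lyapunov function for a linear system xdot = A x:
# solve A^T P + P A = -Q for P, then V(x) = x^T P x certifies stability.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])         # eigenvalues -1 and -2: stable
Q = np.eye(2)                         # any positive-definite Q works
P = solve_continuous_lyapunov(A.T, -Q)   # solves A^T P + P A = -Q

# P must be symmetric positive definite for V = x^T P x to qualify.
assert np.allclose(P, P.T)
assert np.all(np.linalg.eigvalsh(P) > 0)

# Vdot = x^T (A^T P + P A) x = -x^T Q x < 0 at any nonzero state.
x = np.array([0.7, -1.3])
Vdot = x @ (A.T @ P + P @ A) @ x
print(f"P =\n{P}\nVdot at x = {Vdot:.4f}")
```

Note the convention: `solve_continuous_lyapunov(a, q)` solves a·X + X·aᵀ = q, hence the transpose on A and the minus sign on Q.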

LaSalle's invariance principle (the trick when V̇ ≤ 0 but not strict)

Sometimes V̇ = 0 on a non-trivial set (not just at the origin). The trajectory could in principle hang there. LaSalle's principle says: if the only invariant subset of {x : V̇(x) = 0} is the origin, you still have asymptotic stability.

This is what saves the arm-tracking proof above: V̇ only vanishes when ė = 0, but inspection of the dynamics shows the only invariant set there is the origin. LaSalle gets you home.
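The damped pendulum shows the same pattern in miniature. A sketch with made-up parameters: energy V = ½θ̇² + (1 − cos θ) has V̇ = −b·θ̇², which vanishes on the whole line θ̇ = 0, yet the only invariant set there (below the inverted position) is the rest state, so trajectories still settle.

```python
# LaSalle in action: damped pendulum thetaddot = -sin(theta) - b*thetadot
# (unit mass, length, gravity; b is an arbitrary illustration value).
# Vdot = -b*thetadot**2 is only semidefinite, but the trajectory cannot
# linger anywhere on {thetadot = 0} except the bottom equilibrium.
import math

b, dt = 0.5, 1e-3
theta, thetadot = 2.0, 0.0           # start well away from the bottom

for _ in range(int(60.0 / dt)):      # 60 s is plenty to settle
    thetaddot = -math.sin(theta) - b * thetadot
    theta, thetadot = theta + dt * thetadot, thetadot + dt * thetaddot

V = 0.5 * thetadot**2 + (1.0 - math.cos(theta))
print(f"theta = {theta:.5f}, thetadot = {thetadot:.5f}, V = {V:.2e}")
```

Along the way V repeatedly stalls (every time the pendulum reverses, θ̇ = 0), yet it ratchets down to zero: exactly the situation LaSalle formalizes.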

What Lyapunov doesn't give you

  • Global stability — only if V is radially unbounded (V(x) → ∞ as ‖x‖ → ∞); otherwise the guarantee is local.
  • Exact convergence rate — Lyapunov-based exponential bounds exist but are usually loose.
  • Robustness to disturbances — that's "input-to-state stability" (ISS), Lyapunov's modern relative.
  • The actual controller — finding V doesn't tell you τ; you still have to design the law so V̇ < 0.

Lyapunov in 2026 robotics

  • Stability proofs in safe RL. Recent work (Berkenkamp, Khansari-Zadeh) constrains learned policies to remain inside Lyapunov-stable sets.
  • Control barrier functions. Modern descendant of Lyapunov, but for safety constraints rather than convergence. V is replaced by a barrier function h(x), with the constraint that ḣ respects the safe-set boundary.
  • Bipedal walking. Provably stable walking controllers on Cassie, Digit, Atlas use Lyapunov-based gaits.
  • Adaptive control of soft robots. Heavy nonlinearity, parameter uncertainty — Lyapunov-based adaptation handles both.

What to learn first

The minimum you need to read papers:

  1. The three conditions (positive function, zero at equilibrium, derivative non-positive).
  2. Quadratic Lyapunov functions for linear systems via the Lyapunov equation.
  3. Energy-based Lyapunov for mechanical systems (the arm example above).
  4. LaSalle's principle for the "weak V̇" case.

Slotine and Li's textbook (free PDFs floating around) covers all four in 50 pages. After that you can read most modern nonlinear-control papers.

Exercise

For a 1-DOF pendulum with friction (ml²θ̈ + bθ̇ + mgl sin θ = 0, b ≥ 0), construct an energy-based Lyapunov function. Show that without friction it's marginally stable (V̇ ≡ 0); with friction it's asymptotically stable (V̇ ≤ 0 plus LaSalle). Then add a torque-based stabilizing controller and re-derive — you should find an explicit gain condition for global asymptotic stability.

Next

Underactuated swing-up — when even Lyapunov isn't enough because there are fewer actuators than degrees of freedom.
