RobotForge
Published · ~14 min read

NVIDIA Isaac Sim and Isaac Lab

Photorealistic sensors, USD assets, and scaling RL training to thousands of parallel environments. The simulator the industry adopted for production-scale robot learning.

by RobotForge
#simulators #isaac #nvidia

In 2021, NVIDIA released Isaac Gym — massive parallel RL on a single GPU. By 2024, they had renamed and consolidated it as Isaac Lab on top of Isaac Sim. The pitch: ten thousand parallel quadrupeds learning to walk on one RTX card, with photorealistic camera renders and industrial-grade physics. In 2026 it's the default simulator for production robotics RL when the team has NVIDIA hardware.

The two-tool stack

  • Isaac Sim: the simulator itself. Built on NVIDIA's Omniverse / USD foundation. Photorealistic rendering, accurate physics, massive parallelism.
  • Isaac Lab: the RL training framework. Wraps Isaac Sim with environment APIs, pre-built robots, RL algorithm integrations.

Isaac Lab replaced Isaac Gym (the original GPU-RL framework) and OmniIsaacGymEnvs (the bridge between Gym and Sim); both were consolidated into Lab as of 2024. New projects should use Isaac Lab.

What's special

1. Massive GPU parallelism

Run thousands of robot environments concurrently on one GPU. Each environment shares the underlying physics tensors; observations and actions batch into single forward passes.

Concrete numbers: ~10,000 quadrupeds at 50 Hz on an RTX 4090. ~30 minutes to train a walking policy that previously took days on a CPU farm.
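The key idea is that no environment gets its own Python loop: all states live in one batched tensor, and stepping N environments is a single array operation. A toy sketch of the pattern (plain NumPy standing in for Isaac Lab's GPU torch tensors; the environment here is illustrative, not an Isaac API):

```python
import numpy as np

NUM_ENVS = 4096
DT = 0.02  # 50 Hz control step

class BatchedPointMass:
    """Toy batched env: N point masses, each pushed by a scalar action."""
    def __init__(self, num_envs):
        self.pos = np.zeros(num_envs)
        self.vel = np.zeros(num_envs)

    def step(self, actions):
        # One vectorized update advances every environment at once.
        self.vel += actions * DT
        self.pos += self.vel * DT
        obs = np.stack([self.pos, self.vel], axis=1)  # (N, 2) batched observations
        rewards = -np.abs(self.pos)                   # (N,) batched rewards
        return obs, rewards

env = BatchedPointMass(NUM_ENVS)
obs, rewards = env.step(np.ones(NUM_ENVS))
print(obs.shape, rewards.shape)  # (4096, 2) (4096,)
```

The policy network then consumes the whole (N, obs_dim) batch in one forward pass — which is exactly why a single GPU can replace a CPU farm.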

2. Photorealistic rendering (RTX)

Real-time path-traced rendering with RTX cards. Materials, lighting, shadows that look like real cameras. Train perception networks on Isaac and they transfer to real-world cameras.

Caveat: enabling RTX rendering for thousands of parallel envs is too slow. Typical workflow: train policy with simple rasterized graphics; only enable RTX for evaluation / data collection.
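That two-phase workflow is usually expressed as a per-phase configuration switch. A minimal sketch of the idea — the flag names here are illustrative, not Isaac Sim's actual settings:

```python
# Illustrative sketch of the train-vs-eval rendering split.
# (Key and value names are hypothetical, not Isaac Sim's config schema.)
def render_config(phase):
    if phase == "train":
        # Thousands of envs: no viewer, cheap rasterized frames (or none).
        return {"headless": True, "renderer": "rasterized"}
    # Evaluation / data collection: few envs, full path-traced RTX frames.
    return {"headless": False, "renderer": "rtx_pathtraced"}

print(render_config("train")["renderer"])  # rasterized
```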

3. USD and the asset pipeline

Isaac uses Universal Scene Description (USD) — Pixar's open-source asset format. Robots, environments, and scenes are all USD files. The format supports:

  • Composition (override + reference patterns).
  • Variants (one robot, multiple configurations).
  • Physics metadata (mass, inertia, joints, sensors).
  • Layered editing (base + overrides for randomization).

USD is more powerful than URDF/MJCF; also more verbose. Conversion tools exist; for a production pipeline, learn USD natively.
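The layered-composition idea — the one that makes randomization cheap — is worth internalizing even before touching the USD API. Conceptually, each attribute resolves from the strongest layer that defines it, much like a `ChainMap` over dicts (plain Python here, not the `pxr` USD API):

```python
from collections import ChainMap

# Conceptual sketch of USD layer composition: stronger layers override
# weaker ones attribute-by-attribute, without editing the base asset.
base_robot = {"mass": 12.0, "material": "steel", "joint_damping": 0.5}
randomization_layer = {"material": "rubber"}   # per-run visual override
session_layer = {"joint_damping": 0.7}         # strongest: live edits

resolved = ChainMap(session_layer, randomization_layer, base_robot)
print(resolved["material"], resolved["joint_damping"], resolved["mass"])
# rubber 0.7 12.0
```

In real USD the layers are files, and removing the randomization layer restores the base asset untouched — that non-destructive editing is what URDF/MJCF lack.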

4. First-class sensors

  • RTX-rendered cameras: RGB, depth, segmentation, optical flow.
  • Simulated lidars: ray-traced; configurable for any commercial lidar (Velodyne VLP-16, Ouster OS1, Innoviz).
  • Synthetic data generation: built-in pipeline for generating labeled training data.
  • Domain randomization: shipped as configurable layer; randomize lighting, materials, camera intrinsics, object pose.
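The domain-randomization layer boils down to: at each reset, sample a fresh set of visual parameters per parallel environment. A hedged sketch of the pattern (parameter names and ranges are illustrative, not Isaac Lab's randomization API):

```python
import numpy as np

rng = np.random.default_rng(0)

def randomize_visuals(num_envs):
    # One sampled value per environment, drawn anew on every reset.
    # Names and ranges are hypothetical examples of typical targets.
    return {
        "light_intensity": rng.uniform(300.0, 3000.0, size=num_envs),
        "focal_length_mm": rng.uniform(18.0, 35.0, size=num_envs),
        "object_yaw_rad": rng.uniform(-np.pi, np.pi, size=num_envs),
    }

params = randomize_visuals(1024)
print(params["focal_length_mm"].shape)  # (1024,)
```

A perception network trained across this spread of lighting and camera intrinsics sees the real camera as just one more sample from the distribution.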

Getting started

The setup process is heavier than PyBullet:

  1. NVIDIA GPU with recent drivers (RTX 30-series or newer recommended).
  2. Install Isaac Sim (via the Omniverse Launcher or, in newer releases, pip), then clone and install Isaac Lab from its GitHub repository.
  3. ~30 GB of disk space for the install.
  4. Run an example: ./isaaclab.sh -p source/standalone/workflows/rsl_rl/train.py --task Isaac-Cartpole-v0.

First run takes 5–10 minutes for asset compilation. Subsequent runs start in seconds.

The Isaac Lab environment API

from omni.isaac.lab.envs import ManagerBasedRLEnv, ManagerBasedRLEnvCfg
from omni.isaac.lab.scene import InteractiveSceneCfg
from omni.isaac.lab.utils import configclass
from omni.isaac.lab_assets import ANYMAL_C_CFG  # pre-built ANYmal C asset

@configclass
class MyAnymalEnvCfg(ManagerBasedRLEnvCfg):
    # AnymalSceneCfg, ObsCfg, ActionsCfg, RewardsCfg, and TerminationsCfg
    # are user-defined @configclass groups declaring each manager's terms.
    scene: InteractiveSceneCfg = AnymalSceneCfg()
    observations: ObsCfg = ObsCfg()
    actions: ActionsCfg = ActionsCfg()
    rewards: RewardsCfg = RewardsCfg()
    terminations: TerminationsCfg = TerminationsCfg()

Config-driven. Each section (observations, actions, rewards, terminations) declares its components. The framework wires them into a Gym-style environment with thousands of parallel instances.
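The "manager" idea is simplest to see for rewards: each term is a function of the environment state with a weight, and the manager sums the weighted terms. A self-contained sketch of the pattern in plain Python (names are illustrative, not Isaac Lab's term registry):

```python
# Conceptual sketch of the reward-manager pattern: declare weighted terms,
# let the framework sum them. Term names here are hypothetical.
def forward_velocity(state):
    return state["vel_x"]            # reward moving forward

def energy_penalty(state):
    return -state["torque_sq"]       # penalize actuator effort

REWARD_TERMS = [(forward_velocity, 1.0), (energy_penalty, 0.05)]

def compute_reward(state):
    return sum(weight * term(state) for term, weight in REWARD_TERMS)

r = compute_reward({"vel_x": 1.5, "torque_sq": 2.0})
print(r)  # 1.0*1.5 + 0.05*(-2.0) = 1.4
```

Observations, actions, and terminations follow the same shape: declarative term lists that the framework evaluates in batch across all parallel environments.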

The pre-built libraries

Isaac Lab ships dozens of pre-built environments + robots:

  • Quadrupeds: ANYmal B/C/D, Spot, Unitree A1, Unitree Go1.
  • Manipulators: Franka Panda, Universal Robots UR5/UR10, Kinova.
  • Humanoids and bipeds: Unitree H1, Cassie.
  • Mobile: TurtleBot variants, Carter (NVIDIA's reference robot), Tiago.
  • Scenes: warehouses, tabletops, kitchens, outdoor terrains.

Each comes with reasonable defaults. Substitute your own robot's USD; reuse the framework.

Strengths over MuJoCo

  • Photorealism: MuJoCo's renderer is basic; Isaac's is movie-quality.
  • Asset richness: Isaac ships hundreds of pre-built scenes and robots.
  • Synthetic data pipeline: built-in generation of labeled vision training data.
  • Sensor diversity: more sensors out-of-the-box, including specialized ones (radar, RTX lidar).

Weaknesses vs MuJoCo

  • NVIDIA-only: needs NVIDIA GPU. AMD users can't run Isaac.
  • Heavy install: ~30 GB; takes time.
  • Steeper learning curve: USD, Omniverse, multiple frameworks layered. MuJoCo is simpler.
  • Closed-source pieces: Omniverse core is proprietary. Long-term licensing risk.

The 2026 industrial reality

For a serious robotics lab in 2026:

  • Have NVIDIA GPUs → use Isaac Lab. State-of-the-art for parallel RL training.
  • Don't have NVIDIA → use MuJoCo MJX. Comparable throughput; runs on whatever hardware JAX supports (CPU, TPU, AMD GPUs).
  • For perception model training requiring photorealism → Isaac is the only realistic option.

Both Isaac Lab and MuJoCo MJX scale to ~10,000 parallel envs. For a single learner-with-laptop, MuJoCo is friendlier; for a lab with NVIDIA GPUs and serious throughput needs, Isaac wins.

Common gotchas

  • Driver version mismatch: Omniverse requires specific NVIDIA driver versions. Update before starting.
  • USD asset versioning: assets can break between Isaac releases. Pin versions.
  • Simulation rate vs render rate: physics typically at 200–1000 Hz; rendering at 30 Hz. Don't confuse them.
  • VRAM: 10,000 parallel envs eats 12+ GB. RTX 4070 (12 GB) is the practical minimum.

Exercise

Install Isaac Lab. Run the cartpole training example end-to-end. Watch the training-throughput print: thousands of environment-steps per second. Compare to a single-process Gym cartpole running on CPU — the Isaac version is 100×+ faster. Then try the ANYmal walking task; train for an hour; see a working policy emerge. The first time you watch a thousand quadrupeds learn to walk in real-time, the field's progress is tangible.

Next

Drake — the MIT toolkit for serious control + optimization research, less common but powerful in its niche.
