TF2: the transform tree every robot needs
Where is the camera relative to the base? Where is the base relative to the map? TF2 is the universal answer. Here's the mental model, the commands, and the bugs that eat days.
A real robot has a dozen coordinate frames. The base. The wheels. The IMU. The camera. The lidar. The end-effector. Every sensor measurement and every motion command is expressed in one of them, and getting between them is a linear algebra problem you should not be solving by hand in every node. TF2 is the shared library every ROS robot uses.
The mental model
TF2 is a tree of coordinate frames, with transforms on each edge. Every node in your system can publish transforms into the tree; every node can query the tree to ask "what's the pose of frame A in frame B at time T?"
A typical mobile robot tree:

map
└── odom
    └── base_link
        ├── camera_link
        ├── lidar_link
        └── arm_base
            └── arm_tip
Each edge is a rigid transform (translation + rotation). When the robot drives, the odom → base_link transform changes. When the arm moves, arm_base → arm_tip changes. Everything else stays put.
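"Rigid transform on each edge" means that answering a query across several edges is just matrix multiplication along the path. A minimal 2-D sketch, with made-up frame offsets, of what TF2 does when it chains base_link → mount → camera_link:

```python
import math

def make_tf(theta, tx, ty):
    # 2-D rigid transform as a 3x3 homogeneous matrix: rotate by theta, then translate
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, tx],
            [s,  c, ty],
            [0.0, 0.0, 1.0]]

def compose(a, b):
    # Matrix product a @ b: apply b first, then a
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def apply(tf, x, y):
    # Map a point through the transform
    return (tf[0][0] * x + tf[0][1] * y + tf[0][2],
            tf[1][0] * x + tf[1][1] * y + tf[1][2])

# Hypothetical edges: base_link -> mount is a pure offset,
# mount -> camera_link adds a 90-degree rotation
base_T_mount = make_tf(0.0, 0.10, 0.0)
mount_T_cam = make_tf(math.pi / 2, 0.0, 0.05)

# Chaining the edges answers "where is a camera-frame point in base_link?"
base_T_cam = compose(base_T_mount, mount_T_cam)
x, y = apply(base_T_cam, 1.0, 0.0)   # a point 1 m ahead of the camera
```

This is all a TF lookup is: find the path between two frames in the tree, multiply the edge transforms together.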
Static vs dynamic transforms
- Static: sensor mounts, link offsets — things that don't change while the robot runs. Published once by static_transform_publisher or by your URDF via robot_state_publisher.
- Dynamic: joint angles, wheel-odom pose, SLAM estimates — things that update continuously. Published at whatever rate the source runs (50–200 Hz is typical).
Getting this split wrong is the #1 TF performance bug. A "static" camera mount published at 100 Hz saturates the TF buffer with identical messages.
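To see why, picture the buffer behind TF2: per edge it keeps a time-sorted window of samples (10 s by default) and interpolates on lookup. A toy 1-D sketch of that idea — TinyTfBuffer is not a real API, and real TF2 also slerps rotations:

```python
from bisect import bisect_left
from collections import deque

class TinyTfBuffer:
    """Toy per-edge TF buffer: time-stamped samples, interpolated lookup."""
    def __init__(self, cache_time=10.0):
        self.cache_time = cache_time
        self.samples = deque()   # (stamp, value) pairs, appended in time order

    def insert(self, stamp, value):
        self.samples.append((stamp, value))
        # drop samples that have aged out of the cache window
        while self.samples and stamp - self.samples[0][0] > self.cache_time:
            self.samples.popleft()

    def lookup(self, stamp):
        stamps = [s for s, _ in self.samples]
        if stamp < stamps[0] or stamp > stamps[-1]:
            raise LookupError('lookup would require extrapolation')
        i = bisect_left(stamps, stamp)
        if stamps[i] == stamp:
            return self.samples[i][1]
        # linearly interpolate between the two neighboring samples
        (t0, v0), (t1, v1) = self.samples[i - 1], self.samples[i]
        a = (stamp - t0) / (t1 - t0)
        return v0 + a * (v1 - v0)
```

A "static" mount republished at 100 Hz stuffs roughly a thousand identical samples into that window — which is exactly why static transforms belong on the latched /tf_static topic instead.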
Naming conventions that save lives
REP 105 defines the conventions most ROS users follow:
- map — globally consistent, updated by SLAM
- odom — locally continuous (no jumps), drifts slowly; published by the wheel-odometry node
- base_link — robot body origin, usually at the center of rotation for wheeled robots
- base_footprint — base_link projected to the ground plane
Matching these conventions is not aesthetic — Nav2, MoveIt, and every navigation stack expect them. Name your frames to match the REP or add translation nodes.
Hands-on: inspecting the tree
With a robot (real or sim) running:
ros2 run tf2_tools view_frames
# Generates frames.pdf showing the current tree
Or live:
ros2 topic echo /tf
ros2 topic echo /tf_static
ros2 run tf2_ros tf2_echo base_link camera_link
tf2_echo is your best friend. It tells you the transform between any two frames, continuously. If it says "Lookup would require extrapolation into the past/future," you've found a timing bug.
Using TF2 in code (Python)
import rclpy
from rclpy.node import Node
from tf2_ros import Buffer, TransformListener
from rclpy.time import Time
class Lookup(Node):
def __init__(self):
super().__init__('lookup')
self.buffer = Buffer()
self.listener = TransformListener(self.buffer, self)
self.create_timer(1.0, self.tick)
def tick(self):
try:
t = self.buffer.lookup_transform(
'base_link', 'camera_link', Time())
self.get_logger().info(f'{t.transform.translation}')
except Exception as e:
self.get_logger().warn(f'no tf yet: {e}')
def main():
    rclpy.init()
    node = Lookup()
    rclpy.spin(node)
    node.destroy_node()
    rclpy.shutdown()

if __name__ == '__main__':
    main()
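What lookup_transform hands back is a TransformStamped: a translation vector plus a unit quaternion in (x, y, z, w) field order. Inside a node you would normally apply it with tf2_geometry_msgs (do_transform_point); the math underneath, as a ROS-free sketch with illustrative numbers:

```python
import math

def quat_rotate(q, v):
    # Rotate vector v by unit quaternion q = (x, y, z, w), ROS field order.
    # Expanded form of the sandwich product q * v * q_conjugate.
    x, y, z, w = q
    vx, vy, vz = v
    tx = 2.0 * (y * vz - z * vy)
    ty = 2.0 * (z * vx - x * vz)
    tz = 2.0 * (x * vy - y * vx)
    return (vx + w * tx + (y * tz - z * ty),
            vy + w * ty + (z * tx - x * tz),
            vz + w * tz + (x * ty - y * tx))

def apply_transform(translation, rotation, point):
    # Rotate first, then translate: maps a point from child frame to parent frame
    rx, ry, rz = quat_rotate(rotation, point)
    tx, ty, tz = translation
    return (rx + tx, ry + ty, rz + tz)

# Illustrative transform: 10 cm forward, 20 cm up, yawed 90 degrees
half = math.pi / 4
p_base = apply_transform((0.1, 0.0, 0.2),
                         (0.0, 0.0, math.sin(half), math.cos(half)),
                         (1.0, 0.0, 0.0))
```

Use the library version in real code — it handles stamped types and covariance — but knowing the math makes debugging a mirrored or offset sensor much faster.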
Publishing a static transform
From the command line, quick and dirty:
ros2 run tf2_ros static_transform_publisher \
--x 0.1 --y 0 --z 0.2 \
--roll 0 --pitch 0 --yaw 0 \
--frame-id base_link --child-frame-id camera_link
For a real robot, describe the link in URDF and let robot_state_publisher emit the static transforms automatically from joint definitions.
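The same camera mount as a URDF fixed joint might look like this (link names and offsets are illustrative); robot_state_publisher reads the fixed joint and latches the equivalent transform on /tf_static:

```xml
<robot name="bot">
  <link name="base_link"/>
  <link name="camera_link"/>
  <joint name="camera_mount" type="fixed">
    <parent link="base_link"/>
    <child link="camera_link"/>
    <origin xyz="0.1 0 0.2" rpy="0 0 0"/>
  </joint>
</robot>
```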
Common TF bugs (and how to fix them)
- "TF_OLD_DATA ignoring data." Your system clock and the publisher's clock disagree. Most often hits when mixing use_sim_time and wall-clock nodes. Fix: set use_sim_time: true consistently when running in sim.
- "Lookup would require extrapolation into the future." You asked for a transform at a time newer than any TF message received. Either pass Time() (zero = "latest"), or wait for the data to arrive: pass a timeout to lookup_transform, or use the buffer's wait_for_transform_async.
- "Could not find connection between frames." Your tree is disconnected. Run view_frames — there are almost certainly two trees where there should be one.
- Transform oscillates wildly. Two publishers are fighting for the same parent→child edge. Only one node should own each edge.
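That last bug is easy to catch mechanically: group incoming transforms by parent→child edge and count distinct senders. A hypothetical sketch using plain dicts as stand-ins for TransformStamped messages plus publisher identity (which you'd collect by subscribing to /tf):

```python
from collections import defaultdict

def find_contested_edges(messages):
    """messages: iterable of dicts with 'parent', 'child', 'sender' keys.
    Returns the edges claimed by more than one publisher."""
    owners = defaultdict(set)
    for m in messages:
        owners[(m['parent'], m['child'])].add(m['sender'])
    return {edge: senders for edge, senders in owners.items() if len(senders) > 1}

msgs = [
    {'parent': 'odom', 'child': 'base_link', 'sender': 'ekf'},
    {'parent': 'odom', 'child': 'base_link', 'sender': 'wheel_odom'},
    {'parent': 'base_link', 'child': 'camera_link', 'sender': 'rsp'},
]
contested = find_contested_edges(msgs)
# odom -> base_link is owned by two nodes: classic EKF-vs-raw-odometry fight
```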
Why TF2 is worth understanding deeply
Every perception-to-action pipeline on a real robot goes through TF. The lidar returns points in lidar_link. You want them in map to localize. The planner produces waypoints in map. You want them in base_link to control. None of this is optional. TF bugs feel like physics bugs — the robot drives into walls for reasons the planner can't see. Learn TF once and half of "mysterious robot misbehavior" becomes readable.
Next
URDF — the description format that generates most of your static transforms automatically. Once you've written a URDF, you'll understand why robot_state_publisher exists.