RobotForge
Published · ~13 min read

Tactile sensing: GelSight, DIGIT, e-skins

The sensors finally giving robots touch — and the perception pipelines that consume them. From a $300 fingertip to whole-body e-skins, the modality that's becoming mandatory for dexterous manipulation.

by RobotForge
#frontiers #tactile #sensing

Vision tells you where things are. Tactile tells you what's happening when you touch them: slip, contact normal, surface texture, sub-millimeter relative pose. For dexterous manipulation, tactile is the modality that closes the gap between robot and human. The 2018+ wave of vision-based tactile sensors made this affordable; 2024+ work is integrating them into VLAs. This post covers the production reality in 2026: the sensor types, the algorithms, and what's still hard.

Why tactile matters

Three things vision can't do:

  • Slip detection: the object is starting to fall out of the gripper. Vision sees blur; tactile sees vibration.
  • Sub-millimeter relative pose: where exactly inside the gripper is the object? Vision is occluded by the gripper; tactile reads the imprint.
  • Force direction: which way is the object pushing? A wrist F/T sensor gives one coarse 6-DOF reading; fingertip tactile shows the force distribution across the contact patch.

For pick-and-place, adding tactile typically lifts production success rates from roughly 90% to 99%. For dexterous in-hand manipulation, it's enabling rather than improving.

The sensor families

1. Vision-based tactile (GelSight, DIGIT)

A small camera, LEDs, and a soft elastomer gel. The gel deforms when pressed; the LEDs illuminate the surface and the camera images the deformation. Each frame is an image of the contact patch.
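For concreteness, a minimal sketch of grabbing frames from a DIGIT, assuming Meta's open-source digit-interface package (pip install digit-interface) and a hypothetical serial number:

```python
# Minimal sketch: stream frames from a DIGIT sensor.
# Assumes Meta's open-source digit-interface package; "D00001" is a
# hypothetical serial number -- substitute your own sensor's.
from digit_interface import Digit

digit = Digit("D00001")
digit.connect()
frame = digit.get_frame()   # numpy array: a camera image of the deformed gel
print(frame.shape)
digit.disconnect()
```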

Strengths: high spatial resolution (sub-millimeter); image-shaped data feeds into ConvNets cleanly; affordable ($200–1500).

Weaknesses: large fingertip (cube-ish, ~30 mm on a side); gel wears out after roughly 1,000–10,000 grasps; slow update rate (30–120 Hz).

Standard for research and a growing share of production work.

2. Capacitive / resistive arrays (BioTac, Tekscan, e-skin sheets)

Arrays of pressure sensors in a flexible substrate. Lower spatial resolution (8×8 to 32×32 typical); much higher temporal rate (1+ kHz).

Strengths: high frequency for slip detection; thin form factor; can wrap around joints; covers larger areas.

Weaknesses: lower spatial resolution; hysteresis; per-taxel calibration drift.

BioTac (SynTouch) was the canonical academic sensor; Tekscan FlexiForce is the cheapest; modern e-skins from various startups are emerging.

3. Piezoelectric / vibrational

Tiny piezos detect high-frequency vibration. Excellent for slip detection (slip produces characteristic vibration patterns) but no spatial info.

Often combined with other sensors. Used in industrial gripper "slip detection" upgrades.

4. Optical fiber-based

Lab-only, mostly. Bend-sensitive optical fibers detect deformation. Useful for soft robots; rare in commercial use.

5. Magnetic / Hall-effect tactile

Permanent magnets in a soft skin; Hall sensors below detect deformation. Recent academic work (ReSkin, AnySkin) is pushing this; cheaper than vision-based but lower resolution.

The 2026 sensor catalog

Sensor | Cost | Best for
DIGIT (Meta) | ~$300 | Hobby + research entry
GelSight Mini | ~$1k | Production-grade research
DIGIT 360 (Meta, 2024) | ~$5k | High-end research; multimodal
AnySkin (Meta, 2024) | ~$50 | Magnetic, swappable, hobby-friendly
F/T 6-axis wrist (Robotous) | ~$2k | Industrial assembly
ReSkin (CMU) | ~$200 DIY | Open hardware; hobbyist

Default for hobby in 2026: AnySkin or DIGIT. For production research: GelSight Mini or DIGIT 360.

What you do with tactile data

Slip detection

Window of last 10 tactile frames → CNN → "is the object slipping?" Sub-100 ms detection; trigger grip-force increase.
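A minimal sketch of this recipe, assuming PyTorch; the architecture and frame size are illustrative, not from a specific paper:

```python
# Sketch: slip detector over a 10-frame tactile window (PyTorch).
# The window is stacked along the channel axis so the first conv
# mixes temporal and spatial information.
import torch
import torch.nn as nn

class SlipDetector(nn.Module):
    def __init__(self, window=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(window, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, 1),                  # logit for "slipping"
        )

    def forward(self, frames):                 # frames: (batch, 10, H, W)
        return self.net(frames)

detector = SlipDetector()
window = torch.randn(1, 10, 240, 320)          # 10 grayscale tactile frames
if torch.sigmoid(detector(window)).item() > 0.5:
    pass  # here you would trigger the grip-force increase
```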

Grasp success classification

Single tactile frame after the grasp → CNN → "is something held?" ~99% reliability is achievable with a couple hundred training examples.
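With that few labels, fine-tuning a pretrained backbone is a reasonable starting point; a sketch assuming torchvision:

```python
# Sketch: grasp-success classifier by fine-tuning a pretrained ResNet.
# Tactile frames are treated as ordinary 224x224 RGB images.
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)  # classes: {empty, holding}
# Train with nn.CrossEntropyLoss on labeled post-grasp tactile frames.
```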

Object pose estimation

Tactile imprint → CNN → relative 6-DOF pose of object in gripper. Sub-millimeter accuracy possible.
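The fiddly part is parameterizing rotation. One common choice (among several) is to regress translation plus a unit quaternion; a loss sketch assuming PyTorch:

```python
# Sketch: loss for a network that outputs 7 numbers per sample:
# translation (3) + quaternion (4) for the object's in-gripper pose.
import torch
import torch.nn.functional as F

def pose_loss(pred, t_gt, q_gt, rot_weight=1.0):
    t = pred[:, :3]
    q = F.normalize(pred[:, 3:7], dim=1)       # force a unit quaternion
    trans = F.mse_loss(t, t_gt)
    # abs() handles the q / -q double cover: both encode the same rotation.
    rot = (1.0 - (q * q_gt).sum(dim=1).abs()).mean()
    return trans + rot_weight * rot

loss = pose_loss(torch.randn(8, 7), torch.zeros(8, 3),
                 F.normalize(torch.randn(8, 4), dim=1))
```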

Texture / material classification

Tactile + tap signal → 1D CNN → material class. ~95% accuracy on small sets of known materials.
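A sketch of the 1D-CNN side, assuming PyTorch, a ~1 kHz signal, and illustrative layer sizes:

```python
# Sketch: material classifier over a ~1 s tap vibration window
# (1024 samples at ~1 kHz). Five output classes are illustrative.
import torch
import torch.nn as nn

material_net = nn.Sequential(
    nn.Conv1d(1, 16, kernel_size=9, stride=2), nn.ReLU(),
    nn.Conv1d(16, 32, kernel_size=9, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(),
    nn.Linear(32, 5),                          # logits over 5 known materials
)
logits = material_net(torch.randn(1, 1, 1024)) # (batch, channels, samples)
```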

In-hand reorientation policy

Tactile + proprioception → diffusion / VLA policy → joint commands. The 2024+ recipe for dexterous manipulation.
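A full diffusion or VLA head is beyond a sketch, but the observation-fusion pattern underneath looks roughly like this (PyTorch; dimensions are illustrative):

```python
# Sketch: fuse a tactile image with proprioception into one embedding
# that a policy head (diffusion, VLA, or plain MLP) can condition on.
import torch
import torch.nn as nn

class TactileProprioEncoder(nn.Module):
    def __init__(self, proprio_dim=14, embed_dim=128):
        super().__init__()
        self.tactile = nn.Sequential(          # encoder for one fingertip image
            nn.Conv2d(3, 32, 5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.fuse = nn.Linear(64 + proprio_dim, embed_dim)

    def forward(self, tactile_img, proprio):
        z = self.tactile(tactile_img)          # (batch, 64)
        return self.fuse(torch.cat([z, proprio], dim=-1))

enc = TactileProprioEncoder()
emb = enc(torch.randn(1, 3, 240, 320), torch.randn(1, 14))
```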

The Tactile-VLA story (2025+)

Recent work (T-Aloha, TacSL, Tactile Linguistics) integrates tactile data as another modality in VLAs. The model takes vision + proprioception + tactile + language, outputs actions. Tactile-conditioned policies handle scenarios pure-vision policies fail on.

Limitations:

  • Training data is scarce; most teleop rigs don't relay tactile feedback to the operator.
  • Tactile sensors vary widely; cross-sensor generalization is weak.
  • Latency: vision-based tactile at 30 Hz is slower than ideal for high-bandwidth control.

The whole-body skin trend

Recent work covers entire robot arms (or humanoid bodies) in flexible e-skins. Examples:

  • BioTac arm coverings.
  • Apptronik's full-body e-skin on Apollo (announced 2024).
  • Fraunhofer's robot-skin meshes.

Goal: any contact with the body, anywhere, registers. Enables collaborative robots that are inherently safe (they detect humans by touch) and humanoid robots that can do whole-body manipulation.

Production-ready in narrow forms; research-ish for full-body coverage.

Common gotchas

  • Calibration drift: every tactile sensor drifts over hours. Recalibrate on a schedule, or train models that are robust to drift.
  • Mounting matters: the gel orientation and tightness affect readings. Document the mounting; replicate exactly when replacing.
  • Wear life: GelSight gels last ~1000–10000 grasps; keep spares.
  • Frame coordination: tactile data lives in the fingertip frame; combine it with forward kinematics to express contacts in the world frame (see the sketch after this list).
  • Compute load: a 60 Hz tactile stream + 30 Hz vision + control = serious bandwidth. Plan compute budget accordingly.
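On the frame-coordination point, the mechanics are one homogeneous transform; a numpy sketch where T_world_fingertip stands in for the output of your robot's forward kinematics:

```python
# Sketch: map a contact point from fingertip frame to world frame.
# T_world_fingertip would come from forward kinematics at the moment
# the tactile frame was captured; here it is a placeholder.
import numpy as np

T_world_fingertip = np.eye(4)                   # 4x4 homogeneous transform
T_world_fingertip[:3, 3] = [0.4, 0.0, 0.3]      # hypothetical fingertip origin

p_fingertip = np.array([0.001, -0.002, 0.0, 1.0])  # contact point, homogeneous
p_world = T_world_fingertip @ p_fingertip
print(p_world[:3])                              # contact in world coordinates
```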

Where this is heading

  • Cheaper / better sensors: AnySkin is $50; the next generations will be cheaper still.
  • Tactile foundation models: pretrain on millions of tactile examples; fine-tune for task. Coming.
  • Simulator support: Isaac Lab and MuJoCo are adding tactile simulation primitives. Sim-to-real for tactile is opening up.
  • Multi-finger integration: tactile feedback on every finger of a humanoid hand. Apptronik and Sanctuary are aiming for this.

Exercise

Get a DIGIT or AnySkin sensor on a parallel-jaw gripper. Collect 50 grasps with successful + failed labels. Train a binary classifier. Deploy: every grasp goes through the classifier; failed grasps trigger a retry. Measure end-to-end success rate before / after. The 5–10 percentage-point lift is what tactile delivers in production.
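A sketch of the deploy step, with the hardware-facing helpers left as hypothetical stubs to wire into your own gripper, sensor, and trained classifier:

```python
# Sketch: classifier-in-the-loop grasping with retries. All four helpers
# are stubs standing in for your gripper driver, sensor read, and model.
import random

def grasp(): pass                               # stub: close gripper on target
def release_and_reposition(): pass              # stub: open gripper, re-approach
def capture_tactile_frame(): return None        # stub: post-grasp tactile image
def classify_holding(frame): return random.random() > 0.1  # stub classifier

def grasp_with_verification(max_retries=3):
    for _ in range(max_retries):
        grasp()
        if classify_holding(capture_tactile_frame()):
            return True                         # verified success
        release_and_reposition()
    return False                                # give up after max_retries

print(grasp_with_verification())
```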

Next

Embodied LLM agents — the layer above motion that handles long-horizon reasoning and language interfaces.
