We Don't Copy Human Hands — We Capture What Makes Them Work
Introducing the kinematic design and teleoperation stack for TR01, our first robot hand
March 2026 · Kinematics · Teleoperation
For truly dexterous manipulation, such as threading a nut, peeling a film, or inserting a connector, precision makes all the difference. The human hand performs subtle, minute adjustments effortlessly, but getting a robot to do the same is one of the hardest open problems in robotics. We built the TR01 hand to capture some of these capabilities.
TR01 is our first robot hand prototype, and, to the best of our knowledge, the first robot hand capable of exactly tracking human fingertip positions relative to each other across a meaningful manipulation workspace. This is not a narrow claim — it enables highly dexterous teleoperation, and gives us new ways to teach a robot how to be dexterous.
Thanks to recent developments in robot learning, teaching a robot to manipulate is, at its core, a data problem: you need many examples of the robot performing the desired task. One of the most natural and popular ways to generate that data is teleoperation, and, in particular, using the human hand as an input device: the human operator moves their hand, the robot hand follows, and the system records observations and actions. The collected data then feeds downstream learning algorithms.
That brings us to a key design question: how do you build a robot hand that a human can control intuitively and precisely to demonstrate dexterous tasks?
The most naive approach to building a dexterous robot hand is to try to replicate human anatomy as closely as possible: same joint axes, same finger proportions, same range of motion. If the robot hand is a perfect copy, controlling it should feel natural, and the robot should be able to follow any movement of the operator's fingertips.
The problem is that such a perfect replica is physically extremely difficult to achieve. Motors have finite size, electronics need to fit somewhere, and structural parts cannot be made arbitrarily small without losing strength. Making things worse, the anatomy of the human hand is complex, incompletely understood, and not built from engineering abstractions such as perfect revolute joints. No real-world robot hand can be a perfect replica of human anatomy, and even small approximations leave the robot unable to track the human exactly. When the mapping between your hand and the robot hand is imperfect, what should have been a clean pinch becomes a near-miss, and, for fine manipulation, such small errors are often the difference between success and failure.
This led us to ask a more basic question: what are we actually trying to replicate?
The Essence of Dexterity Comes from Fingertips
While the human hand has many important features and capabilities, for TR01 we focused on fingertip positioning ability as a key precursor of dexterity. Since we knew it would be near-impossible to build a robot hand that exactly replicates human fingertip movements, we instead set a different goal: designing a two-fingered robotic hand that can achieve any relative position of its fingertips within a workspace large enough for useful tasks. In other words, the two fingertips should be able to reach any position and orientation relative to each other — the complete six-dimensional space of position and rotation within a given workspace. If successful, this would implicitly mean that TR01 can track any relative movement of the human operator's fingertips.
Interestingly, to achieve that, the TR01 does not have to exactly match human kinematics, and does not need to look like a human hand on the inside. It just needs its fingertips to be able to go exactly where yours go in teleoperation, with high precision.
This kinematic design means that the TR01 also has some super-human skills, in terms of being able to achieve finger-to-finger positions that are impossible for humans. When the hand is used purely in teleoperation (or imitation learning), such super-human abilities are not used, but we can imagine future policies that optimize performance on-robot to take advantage of them.
What This Enables
Precise fingertip tracking can turn dexterous teleoperation from an approximation into a true skill transfer system. We have found that, with this system, operators can demonstrate the fine, nuanced manipulations that are the hardest to teach, and collect high-quality demonstration data, which is the raw material for upcoming learning-based dexterous autonomy.
Since this is a robot hand designed from the ground up for skilled fingertip manipulation, we also put work into the design and manufacturing of the fingertips themselves. We use a multi-material, multi-process approach: each fingertip has a “skeleton” 3D-printed in thermoplastic polyurethane (TPU), which provides some measure of compliance, coated in a “flesh” of molded silicone rubber, which is much softer and gives us large contact areas. Empirically, this approach gives us the right combination of compliance and durability in our early tests, though there is likely additional performance to be gained from further design iterations.
TR01 is Tangent Robotics' first hand, not our last. But its DNA — highly dexterous skills via exact fingertip tracking — is likely to carry forward into the hands we build next. We will be designing more complex and capable hands (potentially with more fingers) in the near future, but TR01 already gives us a new benchmark for dexterity.
Coming Up
This post focused on TR01's conceptual approach, kinematic design and teleoperation. Stay tuned for more on tactile sensing and autonomous visuotactile manipulation skills!
Technical Details
Concept. Consider the task space of your index fingertip relative to your thumb fingertip. Despite the complexity of the human hand's anatomy, human fingers trace out a specific subspace, or manifold, of this space when performing fine manipulation tasks — a structured region within the full space of possible relative fingertip poses. A robot hand with different kinematics will naturally inhabit a different manifold. The only way to bridge that gap reliably is to design the robot hand to mechanically span the full SE(3) space of one fingertip pose relative to the other, which implicitly means it will also be able to follow the human manifold at runtime.
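The relative-pose quantity at the heart of this idea is simple to write down. The sketch below (Python with NumPy; all names and dimensions are ours for illustration, not TR01 code) computes the pose of one fingertip expressed in the frame of the other, and shows that it is invariant to any common motion of the palm — which is exactly why it isolates "how the fingertips are arranged" from "where the hand is":

```python
import numpy as np

def rotz(theta):
    """Rotation about z as a 4x4 homogeneous transform."""
    c, s = np.cos(theta), np.sin(theta)
    T = np.eye(4)
    T[:2, :2] = [[c, -s], [s, c]]
    return T

def translate(x, y, z):
    """Pure translation as a 4x4 homogeneous transform."""
    T = np.eye(4)
    T[:3, 3] = [x, y, z]
    return T

def relative_pose(T_thumb, T_index):
    """Pose of the index fingertip expressed in the thumb-tip frame.

    This is the SE(3) quantity the text describes: it ignores where the
    palm is in the world and captures only how the two fingertips are
    positioned and oriented with respect to each other.
    """
    return np.linalg.inv(T_thumb) @ T_index

# Two arbitrary fingertip poses in the world frame (illustrative numbers).
T_thumb = translate(0.10, 0.00, 0.00) @ rotz(0.3)
T_index = translate(0.10, 0.05, 0.02) @ rotz(-0.5)
T_rel = relative_pose(T_thumb, T_index)

# Moving the whole hand by a common world transform W leaves T_rel unchanged.
W = translate(0.5, -0.2, 0.1) @ rotz(1.2)
assert np.allclose(T_rel, relative_pose(W @ T_thumb, W @ T_index))
```

The invariance in the last assertion is what lets a fingertip-relative controller track the operator's pinch geometry regardless of where the robot's palm ends up.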
Kinematic design. Each TR01 finger has four independently actuated degrees of freedom (DoF), in a roll-flex-yaw-flex configuration. The kinematic chain between the two fingertips thus spans eight DoFs, exceeding the theoretical minimum of six required for full SE(3) coverage. We found that the redundant DoFs improve conditioning near joint limits and also provide a two-dimensional nullspace useful for in-hand manipulation. One of these nullspace dimensions (moving the grasped object front-and-back) is more human-like, while the other (moving the grasped object side-to-side) is less so, but still potentially useful.
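The "8 DoF minus 6 task dimensions = 2-dimensional nullspace" arithmetic can be checked numerically on any two-finger chain. The sketch below builds a toy roll-flex-yaw-flex finger pair (link lengths, base offsets, and the sample configuration are invented for illustration and are not TR01's geometry), takes a finite-difference Jacobian of the 6-D relative fingertip pose with respect to the 8 joints, and confirms the Jacobian has rank 6, leaving a 2-D nullspace at a generic configuration:

```python
import numpy as np

def rot(axis, t):
    """4x4 homogeneous rotation about a principal axis ('x', 'y', or 'z')."""
    c, s = np.cos(t), np.sin(t)
    T = np.eye(4)
    i, j = {"x": (1, 2), "y": (2, 0), "z": (0, 1)}[axis]
    T[i, i] = c; T[j, j] = c; T[i, j] = -s; T[j, i] = s
    return T

def trans(v):
    T = np.eye(4); T[:3, 3] = v; return T

AXES = ["x", "y", "z", "y"]  # roll-flex-yaw-flex, as described in the text
LINK = 0.03                  # illustrative link length (m), not TR01's

def finger_fk(q, base):
    """Tip pose of one 4-DoF finger modeled as a toy serial chain."""
    T = base
    for angle, ax in zip(q, AXES):
        T = T @ rot(ax, angle) @ trans([LINK, 0.0, 0.0])
    return T

def rotvec(R):
    """Axis-angle vector of a rotation matrix (valid away from theta = pi)."""
    t = np.arccos(np.clip((np.trace(R) - 1) / 2, -1.0, 1.0))
    if t < 1e-9:
        return np.zeros(3)
    w = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return t * w / (2 * np.sin(t))

BASE_T = trans([0.0, -0.04, 0.0])  # toy thumb base
BASE_I = trans([0.0,  0.04, 0.0])  # toy index base

def rel_task(q):
    """6-vector (position + rotation vector) of index tip in thumb-tip frame."""
    T = np.linalg.inv(finger_fk(q[:4], BASE_T)) @ finger_fk(q[4:], BASE_I)
    return np.concatenate([T[:3, 3], rotvec(T[:3, :3])])

def jacobian(q, eps=1e-6):
    """Central-difference Jacobian of the relative-pose task, shape (6, 8)."""
    J = np.zeros((6, 8))
    for i in range(8):
        dq = np.zeros(8); dq[i] = eps
        J[:, i] = (rel_task(q + dq) - rel_task(q - dq)) / (2 * eps)
    return J

q = np.array([0.3, -0.4, 0.5, 0.2, -0.3, 0.4, 0.1, -0.2])  # generic config
rank = np.linalg.matrix_rank(jacobian(q), tol=1e-8)
assert rank == 6          # full SE(3) coverage at this configuration
assert 8 - rank == 2      # the two-dimensional nullspace from the text
```

Near joint limits or singular configurations the rank can drop below 6, which is precisely where the redundant DoFs earn their keep by improving conditioning.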
Teleoperation stack. Fingertip poses are measured by magnetic trackers (sub-mm accuracy, with high-frequency readings up to 900 Hz). At each timestep, we solve fingertip-relative inverse kinematics (IK), computing the 8-DoF joint configuration that achieves the target relative fingertip pose. We use IK in a slightly unconventional way: one fingertip is treated as the base (reference) frame, and the other as the target frame. The IK solution thus positions the fingertips exactly with respect to each other, and also tells us where the palm needs to go to achieve these fingertip positions. Palm pose is tracked and executed via a Cartesian controller running at 500 Hz on the robot arm; finger DoFs are commanded to the servos directly. The full teleoperation pipeline runs at 50 Hz, with plenty of headroom to push higher. We have also added user-friendly interfaces for data collection, such as foot pedals to control reset, start, and stop. We have found that this stack enables highly dexterous tasks and fine manipulation, giving us a wealth of data for kick-starting upcoming autonomous policies.
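To make the per-timestep IK step concrete, here is a minimal damped-least-squares iteration on a toy planar 3-link chain (a common numerical approach for redundant IK; this is our illustrative sketch, not TR01's actual solver, which operates on the full 6-D relative pose over 8 DoFs). Like TR01, the toy chain is redundant: three joints serve a two-dimensional position task.

```python
import numpy as np

LINKS = np.array([0.04, 0.03, 0.02])  # illustrative link lengths (m)

def fk(q):
    """Planar fingertip position for joint angles q (angles accumulate)."""
    a = np.cumsum(q)
    return np.array([np.dot(LINKS, np.cos(a)), np.dot(LINKS, np.sin(a))])

def jacobian(q):
    """Analytic 2x3 position Jacobian of the planar chain."""
    a = np.cumsum(q)
    J = np.zeros((2, 3))
    for i in range(3):
        J[0, i] = -np.dot(LINKS[i:], np.sin(a[i:]))
        J[1, i] =  np.dot(LINKS[i:], np.cos(a[i:]))
    return J

def ik_dls(target, q0, lam=0.01, iters=200):
    """Damped least-squares IK: dq = J^T (J J^T + lam^2 I)^-1 error.

    The damping term lam keeps the update well-behaved near singular
    configurations, at the cost of slightly slower convergence.
    """
    q = q0.astype(float).copy()
    for _ in range(iters):
        e = target - fk(q)
        if np.linalg.norm(e) < 1e-10:
            break
        J = jacobian(q)
        q += J.T @ np.linalg.solve(J @ J.T + lam**2 * np.eye(2), e)
    return q

# Pick a target that is reachable by construction, then solve from elsewhere.
target = fk(np.array([0.5, -0.3, 0.4]))
q_sol = ik_dls(target, np.array([0.2, 0.2, 0.2]))
assert np.linalg.norm(fk(q_sol) - target) < 1e-6
```

In a teleoperation loop, this solve runs once per timestep with the previous solution as the warm start, so each iteration only needs to absorb the small inter-frame motion of the operator's fingertips.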