
NeuralGrasps: Learning Implicit Representations for Grasps of Multiple Robotic Hands

Research output: Contribution to journal › Conference article › peer-review


Abstract

We introduce a neural implicit representation for grasps of objects from multiple robotic hands. Grasps from different robotic hands are encoded into a shared latent space. Each latent vector is trained to decode into the 3D shape of an object and the 3D shape of a robotic hand in a grasping pose, represented as the signed distance functions of the two shapes. In addition, the distance metric of the latent space is learned to preserve the similarity between grasps across different robotic hands, where grasp similarity is defined by the contact regions of the robotic hands on the object. This property enables our method to transfer grasps between different grippers, including a human hand; such grasp transfer has the potential to share grasping skills between robots and to let robots learn grasping skills from humans. Furthermore, the encoded signed distance functions of objects and grasps can be used for 6D object pose estimation with grasping contact optimization from partial point clouds, enabling robotic grasping in the real world.
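The abstract describes two mechanisms without implementation detail: a shared latent code decoded into paired signed distance functions (object and hand), and a latent-space metric aligned with contact-based grasp similarity. Below is a minimal PyTorch sketch of what such a model could look like; GraspSDFDecoder, metric_preservation_loss, and all layer sizes are hypothetical names chosen for illustration, not the authors' actual architecture or code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GraspSDFDecoder(nn.Module):
    """DeepSDF-style decoder sketch: a grasp latent code plus a 3D query
    point is mapped to two signed distances, one to the object surface and
    one to the robotic hand surface in its grasping pose."""

    def __init__(self, latent_dim=256, hidden_dim=512):
        super().__init__()
        # The query point (x, y, z) is concatenated with the latent code.
        self.backbone = nn.Sequential(
            nn.Linear(latent_dim + 3, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
        )
        self.object_head = nn.Linear(hidden_dim, 1)  # SDF of the object
        self.hand_head = nn.Linear(hidden_dim, 1)    # SDF of the hand

    def forward(self, latent, points):
        # latent: (B, latent_dim); points: (B, N, 3)
        z = latent.unsqueeze(1).expand(-1, points.shape[1], -1)
        h = self.backbone(torch.cat([z, points], dim=-1))
        return self.object_head(h), self.hand_head(h)

def metric_preservation_loss(latent_a, latent_b, grasp_distance):
    """Pulls the Euclidean distance between two grasp latents toward a
    precomputed grasp dissimilarity (e.g. a distance between contact-region
    maps on the object surface), so that nearby codes mean similar grasps."""
    return F.mse_loss(torch.norm(latent_a - latent_b, dim=-1), grasp_distance)
```

Under this reading, transferring a grasp to a different gripper amounts to finding a nearby latent code decoded with that gripper's hand shape; the sketch is only meant to make the shared-latent idea concrete.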

Original language: English
Pages (from-to): 516-526
Number of pages: 11
Journal: Proceedings of Machine Learning Research
Volume: 205
State: Published - 2023
Event: 6th Conference on Robot Learning, CoRL 2022 - Auckland, New Zealand
Duration: Dec 14 - Dec 18, 2022

Keywords

  • 6D Object Pose Estimation
  • Grasp Transfer
  • Grasping Contact Modeling
  • Neural Implicit Representations
  • Robot Grasping
