Physics-based scene-level reasoning for object pose estimation in clutter

Research output: Contribution to journal › Article

Abstract

This paper focuses on vision-based pose estimation for multiple rigid objects placed in clutter, especially in cases involving occlusions and objects resting on each other. Progress has been achieved recently in object recognition given advancements in deep learning. Nevertheless, such tools typically require a large amount of training data and significant manual effort to label objects. This limits their applicability in robotics, where solutions must scale to a large number of objects and a variety of conditions. Moreover, the combinatorial nature of the scenes that can arise from the placement of multiple objects is difficult to capture in the training dataset. Thus, the learned models might not produce the level of precision required for tasks such as robotic manipulation. This work proposes an autonomous process for pose estimation that spans from data generation to scene-level reasoning and self-learning. In particular, the proposed framework first generates a labeled dataset for training a convolutional neural network (CNN) for object detection in clutter. These detections are used to guide a scene-level optimization process, which considers the interactions between the different objects present in the clutter to output pose estimates of high precision. Furthermore, confident estimates are used to label online real images from multiple views and to re-train the process in a self-learning pipeline. Experimental results indicate that this process quickly identifies physically consistent object poses in cluttered scenes that are more precise than those found by reasoning over individual object instances. Moreover, the quality of the pose estimates increases over time thanks to the self-learning process.
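As a loose illustration of the scene-level reasoning step the abstract describes, the sketch below reduces the problem to one-dimensional "poses": noisy per-object estimates stand in for CNN detections, and a physics-style penalty (object interpenetration) is driven to zero by a simple local search. All names here (`scene_score`, `refine`, the toy inputs) are hypothetical and not the authors' API; the actual method operates on full 6-DOF poses with a physics engine.

```python
from itertools import combinations

def scene_score(poses, widths):
    """Physics-style inconsistency: total interpenetration between 1-D
    objects with centers `poses` and extents `widths`. Zero means the
    scene is physically plausible (no two objects overlap)."""
    penalty = 0.0
    for i, j in combinations(range(len(poses)), 2):
        overlap = (widths[i] + widths[j]) / 2 - abs(poses[i] - poses[j])
        if overlap > 0:
            penalty += overlap
    return penalty

def refine(poses, widths, max_sweeps=100):
    """Scene-level refinement: nudge overlapping pairs apart until the
    configuration is consistent, instead of trusting each per-object
    detection in isolation."""
    poses = list(poses)
    for _ in range(max_sweeps):
        moved = False
        for i, j in combinations(range(len(poses)), 2):
            overlap = (widths[i] + widths[j]) / 2 - abs(poses[i] - poses[j])
            if overlap > 1e-9:
                shift = overlap / 2
                lo, hi = (i, j) if poses[i] <= poses[j] else (j, i)
                poses[lo] -= shift
                poses[hi] += shift
                moved = True
        if not moved:
            break
    return poses

# Noisy per-object "detections" of three unit-width objects.
detections = [0.1, 0.8, 2.05]
widths = [1.0, 1.0, 1.0]
print(scene_score(detections, widths))                   # > 0: objects 0 and 1 interpenetrate
print(scene_score(refine(detections, widths), widths))   # ~0 after scene-level refinement
```

In the full pipeline, confident (physically consistent) estimates like these would additionally be projected into other camera views to label real images for re-training, closing the self-learning loop.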

Original language: English (US)
Journal: International Journal of Robotics Research
DOI: 10.1177/0278364919846551
State: Published - Jan 1 2019


All Science Journal Classification (ASJC) codes

  • Software
  • Modeling and Simulation
  • Mechanical Engineering
  • Electrical and Electronic Engineering
  • Artificial Intelligence
  • Applied Mathematics

Cite this

@article{1134a9dd75ec4e8087071b766b7b7baa,
title = "Physics-based scene-level reasoning for object pose estimation in clutter",
abstract = "This paper focuses on vision-based pose estimation for multiple rigid objects placed in clutter, especially in cases involving occlusions and objects resting on each other. Progress has been achieved recently in object recognition given advancements in deep learning. Nevertheless, such tools typically require a large amount of training data and significant manual effort to label objects. This limits their applicability in robotics, where solutions must scale to a large number of objects and a variety of conditions. Moreover, the combinatorial nature of the scenes that can arise from the placement of multiple objects is difficult to capture in the training dataset. Thus, the learned models might not produce the level of precision required for tasks such as robotic manipulation. This work proposes an autonomous process for pose estimation that spans from data generation to scene-level reasoning and self-learning. In particular, the proposed framework first generates a labeled dataset for training a convolutional neural network (CNN) for object detection in clutter. These detections are used to guide a scene-level optimization process, which considers the interactions between the different objects present in the clutter to output pose estimates of high precision. Furthermore, confident estimates are used to label online real images from multiple views and to re-train the process in a self-learning pipeline. Experimental results indicate that this process quickly identifies physically consistent object poses in cluttered scenes that are more precise than those found by reasoning over individual object instances. Moreover, the quality of the pose estimates increases over time thanks to the self-learning process.",
author = "Chaitanya Mitash and Abdeslam Boularias and Kostas Bekris",
year = "2019",
month = jan,
day = "1",
doi = "10.1177/0278364919846551",
language = "English (US)",
journal = "International Journal of Robotics Research",
issn = "0278-3649",
publisher = "SAGE Publications Inc.",

}

TY - JOUR

T1 - Physics-based scene-level reasoning for object pose estimation in clutter

AU - Mitash, Chaitanya

AU - Boularias, Abdeslam

AU - Bekris, Kostas

PY - 2019/1/1

Y1 - 2019/1/1

N2 - This paper focuses on vision-based pose estimation for multiple rigid objects placed in clutter, especially in cases involving occlusions and objects resting on each other. Progress has been achieved recently in object recognition given advancements in deep learning. Nevertheless, such tools typically require a large amount of training data and significant manual effort to label objects. This limits their applicability in robotics, where solutions must scale to a large number of objects and a variety of conditions. Moreover, the combinatorial nature of the scenes that can arise from the placement of multiple objects is difficult to capture in the training dataset. Thus, the learned models might not produce the level of precision required for tasks such as robotic manipulation. This work proposes an autonomous process for pose estimation that spans from data generation to scene-level reasoning and self-learning. In particular, the proposed framework first generates a labeled dataset for training a convolutional neural network (CNN) for object detection in clutter. These detections are used to guide a scene-level optimization process, which considers the interactions between the different objects present in the clutter to output pose estimates of high precision. Furthermore, confident estimates are used to label online real images from multiple views and to re-train the process in a self-learning pipeline. Experimental results indicate that this process quickly identifies physically consistent object poses in cluttered scenes that are more precise than those found by reasoning over individual object instances. Moreover, the quality of the pose estimates increases over time thanks to the self-learning process.

AB - This paper focuses on vision-based pose estimation for multiple rigid objects placed in clutter, especially in cases involving occlusions and objects resting on each other. Progress has been achieved recently in object recognition given advancements in deep learning. Nevertheless, such tools typically require a large amount of training data and significant manual effort to label objects. This limits their applicability in robotics, where solutions must scale to a large number of objects and a variety of conditions. Moreover, the combinatorial nature of the scenes that can arise from the placement of multiple objects is difficult to capture in the training dataset. Thus, the learned models might not produce the level of precision required for tasks such as robotic manipulation. This work proposes an autonomous process for pose estimation that spans from data generation to scene-level reasoning and self-learning. In particular, the proposed framework first generates a labeled dataset for training a convolutional neural network (CNN) for object detection in clutter. These detections are used to guide a scene-level optimization process, which considers the interactions between the different objects present in the clutter to output pose estimates of high precision. Furthermore, confident estimates are used to label online real images from multiple views and to re-train the process in a self-learning pipeline. Experimental results indicate that this process quickly identifies physically consistent object poses in cluttered scenes that are more precise than those found by reasoning over individual object instances. Moreover, the quality of the pose estimates increases over time thanks to the self-learning process.

UR - http://www.scopus.com/inward/record.url?scp=85065729149&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85065729149&partnerID=8YFLogxK

U2 - 10.1177/0278364919846551

DO - 10.1177/0278364919846551

M3 - Article

JO - International Journal of Robotics Research

JF - International Journal of Robotics Research

SN - 0278-3649

ER -