Project Details
Description
Robots need to effectively interact with a large variety of objects
that appear in warehouses and factories as well as homes and offices.
This requires robust grasping and dexterous manipulation of everyday
objects using low-cost robots and low-complexity solutions.
Traditionally, robots use rigid hands and analytical models for such
tasks, which often fail in the presence of even small errors. New
compliant hands promise improved performance and increased robustness
while minimizing complexity. Nevertheless, they are
inherently difficult to sense and model. This project combines ideas
from different robotics sub-fields to address this limitation. It
utilizes progress in machine learning and builds on a strong tradition
in robot modeling. The objective is to provide adaptive, compliant
robots that are better at grasping objects in the presence of multiple
unknown contact points and sliding or rolling objects in-hand. The
broader impact will be strengthened by the open release of new or
modified robot hand designs, improved control algorithms and software,
as well as corresponding data sets. Furthermore, academic
dissemination will be accompanied by educational outreach to
undergraduate and high school students.
Towards the above objective, the first step will be the definition of
new hybrid models appropriate for adaptive, compliant hands. This
will happen by improving analytical solutions and extending them to
allow adaptation based on data via novel, time-efficient learning
methods. The objective is to capture the model uncertainty inherent in
real-world interactions, a process that suffers from data scarcity.
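The hybrid modeling idea described here — an analytical prediction corrected by a data-driven term, with an explicit uncertainty estimate — can be illustrated with a minimal sketch. All names, the 1-D dynamics, and the bootstrap-ensemble choice below are illustrative assumptions, not the project's actual models.

```python
import numpy as np

def analytical_model(x, u):
    """Idealized rigid-body prediction (hypothetical form for illustration)."""
    return x + 0.1 * u

class HybridModel:
    """Analytical model plus a learned residual; an ensemble of
    bootstrap fits expresses model uncertainty under data scarcity."""

    def __init__(self, n_ensemble=5, seed=0):
        self.rng = np.random.default_rng(seed)
        self.coefs = [None] * n_ensemble  # one residual fit per member

    def fit(self, X, U, X_next):
        # Learn only the gap between observed outcomes and the analytical model.
        residual = X_next - analytical_model(X, U)
        features = np.stack([X, U, np.ones_like(X)], axis=1)
        n = len(X)
        for i in range(len(self.coefs)):
            # Bootstrap resampling gives each member a different fit,
            # so ensemble spread reflects how little data supports it.
            idx = self.rng.integers(0, n, size=n)
            self.coefs[i], *_ = np.linalg.lstsq(
                features[idx], residual[idx], rcond=None)

    def predict(self, x, u):
        phi = np.array([x, u, 1.0])
        preds = [analytical_model(x, u) + phi @ c for c in self.coefs]
        return np.mean(preds), np.std(preds)  # prediction, uncertainty
```

The design point of the sketch: the learner never replaces the analytical model, it only corrects it, so predictions degrade gracefully when data is scarce.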
In order to reduce the amount of data required for learning, different
models will be tailored to specific tasks through an automated
discovery of these tasks and of underlying motion primitives for each
one of them. This task identification process will operate iteratively
with learning and utilize improved models to discover new tasks. It
can also provide feedback for improved hand design. Once these
learning-based and task-focused models are available, they will be
used to learn and synthesize controllers for grasping and in-hand
manipulation. To learn controllers, this work will consider a
model-based reinforcement learning approach, which will be evaluated
against alternatives. For controller synthesis, existing tools for
this purpose will be integrated with task planning primitives and
extended through learning processes to identify the preconditions
under which different controllers can be chained together. The project
involves extensive evaluation on a variety of novel adaptive hands and
robotic arms designed in the PIs' labs. Modern vision-based solutions
will be used to track grasped objects and provide feedback for
learning and closed-loop control. The evaluation will measure whether
the developed hybrid models significantly improve the robustness of
grasping and the effectiveness of dexterous manipulation.
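The controller-chaining step above — learning the preconditions under which one controller can follow another — can be sketched as a search over predicted states. The controller names, state features, and depth-first strategy below are hypothetical placeholders for illustration only.

```python
class Controller:
    """A skill with a learned precondition test and a predicted effect."""

    def __init__(self, name, precondition, effect):
        self.name = name
        self.precondition = precondition  # learned classifier: state -> bool
        self.effect = effect              # predicted outcome: state -> state

def plan_chain(controllers, state, goal_test, max_depth=4):
    """Depth-first search for a controller sequence whose preconditions
    hold along the predicted state trajectory; None if no chain found."""
    if goal_test(state):
        return []
    if max_depth == 0:
        return None
    for c in controllers:
        if c.precondition(state):
            rest = plan_chain(controllers, c.effect(state),
                              goal_test, max_depth - 1)
            if rest is not None:
                return [c.name] + rest
    return None
```

In a learned system the `precondition` callables would be classifiers trained from execution outcomes rather than hand-written tests; the chaining logic itself is unchanged.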
| Status | Finished |
| --- | --- |
| Effective start/end date | 9/1/17 → 5/31/23 |
Funding
- National Science Foundation: $867,729.00