Robots need to effectively interact with a large variety of objects that appear in warehouses and factories as well as homes and offices. This requires robust grasping and dexterous manipulation of everyday objects through low-cost robots and low-complexity solutions. Traditionally, robots use rigid hands and analytical models for such tasks, which often fail in the presence of even small errors. New compliant hands promise improved performance and increased robustness while minimizing complexity. Nevertheless, they are inherently difficult to sense and model. This project combines ideas from different robotics sub-fields to address this limitation. It utilizes progress in machine learning and builds on a strong tradition in robot modeling. The objective is to provide adaptive, compliant robots that are better at grasping objects in the presence of multiple unknown contact points and at sliding or rolling objects in-hand. The broader impact will be strengthened by the open release of new or modified robot hand designs, improved control algorithms and software, as well as corresponding data sets. Furthermore, academic dissemination will be accompanied by educational outreach to undergraduate and high school students.

Towards the above objective, the first step will be the definition of new hybrid models appropriate for adaptive, compliant hands. This will happen by improving analytical solutions and extending them to allow adaptation based on data via novel, time-efficient learning methods. The objective is to capture the model uncertainty inherent in real-world interactions, a process that suffers from data scarcity. In order to reduce the amount of data required for learning, different models will be tailored to specific tasks through an automated discovery of these tasks and of the underlying motion primitives for each one of them. This task identification process will operate iteratively with learning and utilize improved models to discover new tasks.
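The hybrid-model idea described above — an analytical prior corrected with a small amount of real-world data, while tracking uncertainty — can be sketched as follows. This is a minimal illustration, not the project's actual method: the simplified push model, the linear residual learner, and the bootstrap-ensemble uncertainty estimate are all assumptions chosen for brevity.

```python
import numpy as np

def analytical_push_model(force):
    """Idealized rigid-contact prediction: displacement proportional to force.
    (Stands in for the analytical hand/object models mentioned above.)"""
    return 0.5 * force

def fit_residual(forces, observed, n_models=5, rng=None):
    """Fit a small ensemble of linear residual models on the gap between
    observations and the analytical prediction. The ensemble spread serves
    as a crude estimate of model uncertainty -- relevant when data is scarce."""
    rng = np.random.default_rng(rng)
    errors = observed - analytical_push_model(forces)
    models = []
    for _ in range(n_models):
        idx = rng.integers(0, len(forces), len(forces))  # bootstrap resample
        A = np.stack([forces[idx], np.ones(len(idx))], axis=1)
        coef, *_ = np.linalg.lstsq(A, errors[idx], rcond=None)
        models.append(coef)
    return models

def hybrid_predict(force, models):
    """Analytical prior plus mean learned residual, with an uncertainty estimate."""
    residuals = [m[0] * force + m[1] for m in models]
    return analytical_push_model(force) + np.mean(residuals), np.std(residuals)
```

Given a handful of observations from a compliant hand whose true behavior deviates from the rigid model, the hybrid prediction lands closer to the observed behavior than the analytical prior alone, while the ensemble spread flags states where the correction is untrustworthy.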
It can also provide feedback for improved hand design. Once these learning-based and task-focused models are available, they will be used to learn and synthesize controllers for grasping and in-hand manipulation. To learn controllers, this work will consider a model-based reinforcement learning approach, which will be evaluated against alternatives. For controller synthesis, existing tools for this purpose will be integrated with task-planning primitives and extended through learning processes to identify the preconditions under which different controllers can be chained together. The project involves extensive evaluation on a variety of novel adaptive hands and robotic arms designed in the PIs' labs. Modern vision-based solutions will be used to track grasped objects and provide feedback for learning and closed-loop control. The evaluation will measure whether the developed hybrid models can significantly improve the robustness of grasping and the effectiveness of dexterous manipulation.
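The controller-chaining idea — learning the preconditions under which one controller can safely hand off to the next — can be sketched as below. Everything here is a hypothetical placeholder: the one-dimensional state, the nearest-neighbor precondition classifier, and the controller names are illustrative assumptions, not the tools the project will actually build.

```python
from dataclasses import dataclass, field
from typing import Callable, List, Optional, Tuple

@dataclass
class Controller:
    """A manipulation skill plus data from past execution attempts,
    from which a precondition is learned."""
    name: str
    run: Callable[[float], float]  # maps current state to next state
    successes: List[float] = field(default_factory=list)
    failures: List[float] = field(default_factory=list)

    def record(self, state: float, succeeded: bool) -> None:
        (self.successes if succeeded else self.failures).append(state)

    def precondition(self, state: float) -> bool:
        """Crude learned precondition: the state is closer to a past
        success than to any past failure."""
        if not self.successes:
            return False
        d_ok = min(abs(state - s) for s in self.successes)
        d_bad = min((abs(state - s) for s in self.failures), default=float("inf"))
        return d_ok <= d_bad

def chain(controllers: List[Controller], state: float) -> Tuple[float, Optional[str]]:
    """Execute controllers in order, but only hand off to the next one
    when its learned precondition holds in the current state."""
    for c in controllers:
        if not c.precondition(state):
            return state, c.name  # report where the chain broke
        state = c.run(state)
    return state, None
```

For example, a "grasp" controller trained to succeed from open-hand states can be chained into a "reorient" controller whose precondition only holds once the object is secured; starting from a state outside the first precondition, the chain refuses to execute rather than failing mid-manipulation.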
- Effective start/end date: 9/1/17 → 8/31/21
- National Science Foundation (NSF)