I am happy to announce that this year another three of our papers (one journal and two conference papers) dealing with robot self-calibration were accepted.
This long-term project runs under the supervision of Mgr. Matej Hoffmann, Ph.D. (see his website), who focuses on how babies develop a sense of their bodies and the space around them, and how self-touch might contribute to this (GACR project, EXPRO project). This connects closely to self-calibration in humans, who might learn the kinematics of their bodies by touching their individual parts.
In our work, we try to transfer these ideas to robots and explore whether this new sensory modality can be useful for their calibration. We focus mainly on humanoid robots, but we also explore the usefulness of these methods for the calibration of industrial robots.
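To give a flavour of the idea, here is a minimal, hypothetical sketch (not the method from any of the papers): a planar 2-link arm "self-touches" body points whose positions are known from its model, and the link lengths are recovered by minimizing the mismatch between predicted and observed touch points. The arm geometry, the touch configurations, and the grid-search estimator are all illustrative assumptions.

```python
import math

def fk(q1, q2, l1, l2):
    """Forward kinematics of a planar 2-link arm: fingertip position."""
    x = l1 * math.cos(q1) + l2 * math.cos(q1 + q2)
    y = l1 * math.sin(q1) + l2 * math.sin(q1 + q2)
    return x, y

# Simulated self-touch observations: joint angles at which the fingertip
# touches a body point whose position is known from the robot's model.
TRUE_L1, TRUE_L2 = 0.30, 0.25
touches = [(q1, q2, fk(q1, q2, TRUE_L1, TRUE_L2))
           for q1, q2 in [(0.1, 0.5), (0.4, 1.0), (0.8, 0.3), (1.2, 0.7)]]

def residual(l1, l2):
    """Sum of squared distances between predicted and observed touch points."""
    err = 0.0
    for q1, q2, (px, py) in touches:
        x, y = fk(q1, q2, l1, l2)
        err += (x - px) ** 2 + (y - py) ** 2
    return err

# Simple grid search over the two link lengths (millimetre steps of 5 mm).
best = min((residual(l1 / 1000, l2 / 1000), l1 / 1000, l2 / 1000)
           for l1 in range(200, 401, 5) for l2 in range(150, 351, 5))
print(best)  # smallest residual at the true lengths (0.30, 0.25)
```

In a real calibration pipeline the grid search would be replaced by nonlinear least squares over the full kinematic parameter vector, but the principle is the same: each self-touch contributes a closed-loop constraint on the kinematics.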
This year, our long journey culminated in three papers: two conference papers at the Humanoids conference and one journal paper in Robotics and Computer-Integrated Manufacturing (RCIM).
Our paper “Embodied Reasoning for Discovering Object Properties via Manipulation” was accepted for ICRA 2021.
Abstract: In this paper we present an integrated system that includes reasoning from visual and natural language inputs, action and motion planning, executing tasks by a robotic arm, manipulating objects and discovering their properties. The vision-to-action module recognises the scene with objects and their attributes and analyses enquiries formulated in natural language. It performs multi-modal reasoning and generates a sequence of simple actions that can be executed by the embodied agent. The scene model and action sequence are sent to the planning and execution module that generates a motion plan with collision avoidance, simulates the actions as well as executes them by the embodied agent. We extensively use simulated data to train various components of the system, which makes it more robust to changes in the real environment and thus generalises better. We focus on the tabletop scene with objects that can be grasped by our embodied agent, which is a 7-DoF manipulator with a two-finger gripper. We evaluate the agent on 60 representative queries repeated 3 times (e.g., ‘Check what is on the other side of the soda can’) concerning different objects and tasks in the scene. We perform experiments in simulated and real environments and report the success rate for various components of the system. Our system achieves up to 80.6% success rate on challenging scenes and queries. We also analyse and discuss the challenges that such an intelligent embodied system faces.
Our paper “Simultaneous task and motion scheduling for complex tasks executed by multiple robots” was accepted for ICRA 2020.
Abstract: Coordination of multiple robots operating simultaneously in the same workspace requires the integration of task scheduling and motion planning. We focus on tasks in which the robot’s actions are not necessarily confined to small volumes, but can also occupy a large time-varying portion of the workspace, such as in welding along a line or drilling a hole. Optimization of such tasks presents a considerable challenge mainly due to the fact that different variants of action execution exist, for instance, there can be multiple starting points of lines or closed curves, different filling patterns of areas, etc. We propose a generic and computationally efficient optimization method which is based on constraint programming. It takes into account the kinematics of the robot and guarantees that the motion trajectories of the robots are collision-free while minimizing the overall makespan. We evaluate our approach on several tasks of varying complexity: cutting, additive manufacturing, spot welding, inserting and tightening bolts, performed by a dual-arm robot. In terms of the makespan, the result is superior to task execution by one robot arm as well as by two arms not working simultaneously.
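To illustrate the kind of problem the paper addresses, here is a tiny brute-force sketch (not the constraint-programming formulation from the paper): hypothetical tasks with durations and workspace zones are assigned to two arms, tasks sharing a zone must not overlap in time (a crude stand-in for collision avoidance), and the schedule with the smallest makespan is picked by exhaustive enumeration. All task names, durations, and zones are made up for illustration.

```python
from itertools import permutations, product

# Hypothetical tasks: (name, duration, workspace zone). Two tasks in the
# same zone must not overlap in time (a crude collision-avoidance proxy).
TASKS = [("weld_A", 4, "left"), ("weld_B", 3, "left"),
         ("drill_C", 5, "right"), ("bolt_D", 2, "right")]

def schedule(order):
    """Back-to-back (start, end, zone) intervals for one arm's task order."""
    t, out = 0, []
    for name, dur, zone in order:
        out.append((t, t + dur, zone))
        t += dur
    return out, t

def conflict(iv_a, iv_b):
    """Do any two same-zone intervals from different arms overlap?"""
    return any(za == zb and sa < eb and sb < ea
               for sa, ea, za in iv_a for sb, eb, zb in iv_b)

best = None
for mask in product([0, 1], repeat=len(TASKS)):  # arm assignment
    arm0 = [t for t, m in zip(TASKS, mask) if m == 0]
    arm1 = [t for t, m in zip(TASKS, mask) if m == 1]
    for p0 in permutations(arm0):                # task order per arm
        for p1 in permutations(arm1):
            iv0, end0 = schedule(p0)
            iv1, end1 = schedule(p1)
            if not conflict(iv0, iv1):
                makespan = max(end0, end1)
                if best is None or makespan < best[0]:
                    best = (makespan, p0, p1)

print(best[0])  # minimal makespan: 7 (each arm handles one zone)
```

Brute force obviously does not scale; the point of the constraint-programming approach in the paper is to encode exactly such assignment, ordering, and no-overlap constraints (plus robot kinematics and alternative action variants) so that a solver can search the space efficiently.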