Conference paper, Year: 2020

Crocoddyl: Fast computation, Efficient solvers, Receding horizon and Learning

Abstract

For a given motion task, an ideal Optimal Control (OC) solver would plan the entire future motion trajectory of a robot in real time. This ideal scenario cannot be reached, for a few factors:

  • These OC problems are typically highly non-convex.
  • Robot motion (in particular for legged robots) often requires interaction with the environment, and the resulting contact and contact-force constraints are difficult to solve.
  • The computation time increases exponentially with the degrees of freedom (DoF) of the robot and with the length of the future (the horizon) being planned.

In this presentation, we present our recent efforts towards approximating the ideal scenario stated above. Our presentation consists of four works, which all contribute to the same goal.

The first part of our contribution is our OC solver itself: Crocoddyl (Contact RObot COntrol by Differential DYnamic Library) [1]. Crocoddyl is an optimal control library, in Python and C++, for robot trajectory optimization with pre-defined contact phases. Its solver is based on an efficient Differential Dynamic Programming (DDP) algorithm [2] that takes into account the sparsity of the problem and the contact constraints. In addition, Crocoddyl introduces a variant of the DDP algorithm, called Feasibility-Prone DDP (FDDP), that can start from an infeasible guess and arrive at the optimal solution in a multiple-shooting manner [3]. The following features are also essential to Crocoddyl's design (a minimal usage sketch is given after the abstract):

  • easy transcription of the problem;
  • analytical derivatives using Pinocchio [4];
  • multi-threading support;
  • efficient memory management;
  • C code generation using CppADCodeGen [5].

The second part of our presentation consists of a highly efficient algorithm for computing the solution of the contact-constrained forward dynamics problem. The forward dynamics of a robot consists of finding the joint accelerations and the constraint forces for a given joint state and torque. We need to solve the forward dynamics problem (and its derivatives) many times in order to solve our OC problem. We present our novel algorithm and show how the Operational-Space Inertia Matrix [6] can be used to connect the derivatives of the contact forces with those of the accelerations (the underlying equations are written out after the abstract). We thus achieve the efficiency of unconstrained forward dynamics (see the benchmarks in [4]), with only a small computational burden that is linear in the number of contacts.

Receding horizon control, a.k.a. Model Predictive Control (MPC), is a control strategy that controls the robot using predictions of the dynamics and costs over a moving horizon. The third part of our presentation consists of the application of Crocoddyl as an MPC controller on the Talos robot. With experiments in simulation and on the real robot, we show how MPC can be used for real-time control. Moreover, we show how the computational efficiency can be increased by using the feedback gains obtained from DDP to re-compute the optimal trajectory (a schematic loop is sketched after the abstract).

Finally, MPC as a control strategy is limited: because of the high computation time, the length of our planning horizon cannot be long. In our final part, we describe our early results in mimicking the infinite horizon by learning the optimal "value" function [7] at a given state. We modify our Iterative RoadMap Extension and Policy Approximation (IREPA) algorithm [8] to iteratively prolong the horizon and learn the optimal value, thus arriving at an infinite horizon (the principle is sketched after the abstract).
Our initial experiments, based on simple systems like the unicycle, show that the learned value function indeed converges to an optimum.
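To make the first part concrete, here is a minimal usage sketch of Crocoddyl's Python transcription and the FDDP solver, based on the unicycle toy model that ships with the library; the cost weights and horizon length are illustrative choices, not values from the talk.

    import numpy as np
    import crocoddyl

    # Toy unicycle model shipped with Crocoddyl; the state is (x, y, theta).
    model = crocoddyl.ActionModelUnicycle()
    model.costWeights = np.array([10.0, 1.0])  # illustrative state/control weights

    # Transcription: one action model per running node, plus a terminal model.
    T = 30                               # horizon length (illustrative)
    x0 = np.array([-1.0, -1.0, 1.0])     # initial state
    problem = crocoddyl.ShootingProblem(x0, [model] * T, model)

    # FDDP accepts an infeasible initial guess and closes the gaps
    # in a multiple-shooting manner.
    solver = crocoddyl.SolverFDDP(problem)
    solver.solve()
    print(solver.xs[-1])                 # state reached at the end of the horizon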
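For the second part, the contact-constrained forward dynamics can be stated as a KKT system in standard rigid-body notation (a minimal statement of the problem, not the talk's exact derivation):

    \begin{bmatrix} M & J_c^\top \\ J_c & 0 \end{bmatrix}
    \begin{bmatrix} \ddot{q} \\ -\lambda \end{bmatrix}
    =
    \begin{bmatrix} \tau - b \\ -\dot{J}_c \, \dot{q} \end{bmatrix}

where M is the joint-space inertia matrix, b the Coriolis and gravity bias, J_c the contact Jacobian, and \lambda the contact forces. Eliminating \ddot{q} by a Schur complement makes the Operational-Space Inertia Matrix \Lambda = (J_c M^{-1} J_c^\top)^{-1} appear and gives the forces in closed form,

    \lambda = -\Lambda \left( J_c M^{-1} (\tau - b) + \dot{J}_c \, \dot{q} \right),

which is the coupling through which the derivatives of the contact forces can be connected to those of the accelerations.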
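For the third part, the receding-horizon loop can be sketched as below. This is only a schematic, assuming a solver that exposes its trajectory (xs, us) and Riccati gains (K) as Crocoddyl's DDP solvers do; the sign convention of the feedback term is an assumption to be checked against the solver in use.

    def mpc_control(solver, x_measured):
        # Riccati feedback around the first node of the last solution:
        # a cheap correction usable at a higher rate than full re-solves.
        # (Feedback sign convention is an assumption; check the solver.)
        return solver.us[0] - solver.K[0] @ (x_measured - solver.xs[0])

    def mpc_resolve(solver, x_measured, max_iters=1):
        # Re-solve from the measured state, warm-started with the previous
        # trajectory shifted by one node.
        xs = list(solver.xs[1:]) + [solver.xs[-1]]
        us = list(solver.us[1:]) + [solver.us[-1]]
        solver.problem.x0 = x_measured
        solver.solve(xs, us, max_iters)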
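For the final part, the horizon-extension principle can be outlined as follows. This is a hedged sketch of the idea only, not the IREPA algorithm of [8]; sample_states, solve_ocp, and the value_net regressor are hypothetical placeholders.

    # Hypothetical placeholders: sample_states() draws training states,
    # solve_ocp() solves a short-horizon OC problem with a given terminal
    # cost, and value_net is any regressor with a fit() method.
    def learn_value(value_net, horizon, n_rounds):
        for _ in range(n_rounds):
            data = []
            for x0 in sample_states():
                # Use the current value estimate as the terminal cost: the
                # optimal cost of the short problem is an improved target
                # for V(x0), effectively extending the horizon each round.
                cost = solve_ocp(x0, horizon, terminal_cost=value_net)
                data.append((x0, cost))
            value_net.fit(data)
        return value_net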
No file deposited

Dates and versions

hal-03166249, version 1 (11-03-2021)

Identifiers

  • HAL Id: hal-03166249, version 1

Cite

Nicolas Mansard. Crocoddyl: Fast computation, Efficient solvers, Receding horizon and Learning. JNRR, Jun 2020, online, France. ⟨hal-03166249⟩
