Title:
Training a Four Legged Robot via Deep Reinforcement Learning and Multibody Simulation
Contributors:
Benatti, S.; Tasora, A.; Mangoni, D.; Andrés Kecskeméthy (editor); Francisco Geu Flores (editor)
Publisher Information:
Springer
Publication Year:
2020
Collection:
Università di Parma: CINECA IRIS
Document Type:
Conference object
Language:
English
Relation:
ISBN: 978-3-030-23131-6; 978-3-030-23132-3
Part of book: Multibody Dynamics 2019 (ECCOMAS Multibody Dynamics 2019)
Series: Computational Methods in Applied Sciences, volume 53
Pages: 391-398 (8 pages)
Editors: Andrés Kecskeméthy, Francisco Geu Flores
Handle: http://hdl.handle.net/11381/2863509
Series URL: http://www.springer.com/series/6899
DOI:
10.1007/978-3-030-23132-3_47
Accession Number:
edsbas.97DEE73C
Database:
BASE

Further Information

In this paper we use the Proximal Policy Optimization (PPO) deep reinforcement learning algorithm to train a neural network to control a four-legged robot in simulation. Reinforcement learning in general can learn complex behavior policies from datasets of simple state-reward tuples, and PPO in particular has proved effective at solving complex tasks with continuous states and actions. Moreover, since it is model-free, it is general and can adapt to changes in the environment or in the robot itself. The virtual environment used to train the agent was modeled with our physics engine, Project Chrono. Chrono handles non-smooth dynamics simulation, allowing us to introduce stiff leg-ground contacts, and through its Python interface, PyChrono, it can easily be interfaced with the machine learning framework TensorFlow. We trained the neural network until it learned to control the motor torques, and then compared various choices of input state for the policy network.
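The abstract describes coupling a PyChrono multibody simulation with TensorFlow for PPO training but gives no implementation details. The sketch below is a minimal, hypothetical illustration of that coupling, not the authors' code: it wraps a Chrono non-smooth (NSC) system in a step/reset environment interface and shows the standard PPO clipped surrogate loss in TensorFlow. The class name, reward shaping, state layout, and all helper methods are assumptions; only the basic PyChrono calls (ChSystemNSC, Set_G_acc, GetChTime, DoStepDynamics) and standard TensorFlow ops are taken from the respective libraries, following the API names of the PyChrono releases contemporary with this paper.

```python
# Hypothetical sketch: Gym-style wrapper around a PyChrono simulation plus the
# PPO clipped-surrogate loss in TensorFlow. The robot model, torque mapping,
# reward and termination conditions are placeholders, not the authors' setup.
import numpy as np
import tensorflow as tf
import pychrono as chrono


class QuadrupedChronoEnv:
    """Minimal state/action/reward loop over a Chrono non-smooth (NSC) system."""

    def __init__(self, control_dt=0.01, substep_dt=0.002):
        self.control_dt = control_dt    # interval between policy actions
        self.substep_dt = substep_dt    # physics integration step
        self.system = None
        self.reset()

    def reset(self):
        # Non-smooth contact formulation allows stiff leg-ground contacts.
        self.system = chrono.ChSystemNSC()
        self.system.Set_G_acc(chrono.ChVectorD(0.0, -9.81, 0.0))
        # ... build terrain, trunk, legs and rotational motors here ...
        return self._observe()

    def step(self, action):
        # 'action' would be mapped to motor torques on the leg joints; the
        # mapping is omitted here because it depends on the robot model.
        self._apply_torques(action)
        t_end = self.system.GetChTime() + self.control_dt
        while self.system.GetChTime() < t_end:
            self.system.DoStepDynamics(self.substep_dt)
        obs = self._observe()
        return obs, self._reward(obs, action), self._fallen(obs)

    def _apply_torques(self, action):
        pass  # placeholder: set torque targets on the leg motors

    def _observe(self):
        # The paper compares several input-state choices (e.g. joint angles,
        # joint rates, trunk pose/velocity); this stub returns a dummy vector.
        return np.zeros(24, dtype=np.float32)

    def _reward(self, obs, action):
        return 0.0  # e.g. forward progress minus a torque penalty

    def _fallen(self, obs):
        return False  # e.g. trunk height or tilt beyond a threshold


def ppo_clipped_loss(new_logp, old_logp, advantages, clip_eps=0.2):
    """Standard PPO clipped surrogate objective (to be minimized)."""
    ratio = tf.exp(new_logp - old_logp)                    # pi_new / pi_old
    clipped = tf.clip_by_value(ratio, 1.0 - clip_eps, 1.0 + clip_eps)
    surrogate = tf.minimum(ratio * advantages, clipped * advantages)
    return -tf.reduce_mean(surrogate)
```

In a full training loop, the environment would be stepped to collect state-action-reward tuples, advantages would be estimated from a learned value function, and the policy network would be updated for several epochs per batch by minimizing this loss; those standard PPO components are omitted above for brevity.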