
Result: Reinforcement learning versus model predictive control: a comparison on a power system problem

Title:
Reinforcement learning versus model predictive control: a comparison on a power system problem
Source:
IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), 39 (2), 517-529 (2009)
Publisher Information:
IEEE, 2009.
Publication Year:
2009
Document Type:
journal article; peer reviewed; http://purl.org/coar/resource_type/c_6501
Language:
English
Relation:
http://www.montefiore.ulg.ac.be/~ernst/; urn:issn:1083-4419; urn:issn:1941-0492
DOI:
10.1109/TSMCB.2008.2007630
Rights:
open access
http://purl.org/coar/access_right/c_abf2
info:eu-repo/semantics/openAccess
Accession Number:
edsorb.13602
Database:
ORBi

Further Information

This paper compares reinforcement learning (RL) with model predictive control (MPC) in a unified framework and reports experimental results of their application to the synthesis of a controller for a nonlinear and deterministic electrical power oscillations damping problem. Both families of methods are based on the formulation of the control problem as a discrete-time optimal control problem. The considered MPC approach exploits an analytical model of the system dynamics and cost function and computes open-loop policies by applying an interior-point solver to a minimization problem in which the system dynamics are represented by equality constraints. The considered RL approach infers closed-loop policies in a model-free way from a set of system trajectories and instantaneous cost values by solving a sequence of batch-mode supervised learning problems. The results obtained provide insight into the pros and cons of the two approaches and show that RL can be competitive with MPC even in contexts where a good deterministic system model is available.
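The batch-mode RL scheme the abstract describes (inferring closed-loop policies from one-step trajectory samples by solving a sequence of supervised learning problems, in the style of fitted Q iteration) can be sketched on a toy problem. Everything below is an assumption for illustration: the scalar plant, the cost weights, the discrete action set, and the crude binned-average regressor are placeholders, not the power-oscillation damping system or the learners used in the paper.

```python
import random

# Toy deterministic plant (assumed for illustration, NOT the power system
# from the paper): scalar state x, discrete actions, quadratic cost.
ACTIONS = (-1.0, 0.0, 1.0)   # discrete control set
GAMMA = 0.95                 # discount factor

def step(x, u):
    """Deterministic dynamics x' = 0.9*x + 0.5*u (assumed toy model)."""
    return 0.9 * x + 0.5 * u

def cost(x, u):
    """Instantaneous cost penalizing state deviation and control effort."""
    return x * x + 0.1 * u * u

def collect_samples(n, seed=0):
    """One-step transitions (x, u, c, x') gathered by random interaction."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        x = rng.uniform(-3.0, 3.0)
        u = rng.choice(ACTIONS)
        out.append((x, u, cost(x, u), step(x, u)))
    return out

def fit(pairs):
    """Crude supervised learner: average targets in equal-width state bins."""
    bins = {}
    for x, y in pairs:
        bins.setdefault(int((x + 4.0) / 0.25), []).append(y)
    model = {b: sum(ys) / len(ys) for b, ys in bins.items()}
    default = sum(model.values()) / len(model)
    return lambda x: model.get(int((x + 4.0) / 0.25), default)

def fitted_q_iteration(samples, n_iter=40):
    """Each iteration is one batch-mode supervised learning problem:
    regress the target c + gamma * min_u' Q(x', u') on the sample set."""
    q = {u: (lambda x: 0.0) for u in ACTIONS}
    for _ in range(n_iter):
        q = {u: fit([(x, c + GAMMA * min(q[a](xn) for a in ACTIONS))
                     for (x, us, c, xn) in samples if us == u])
             for u in q}
    return q

def greedy(q, x):
    """Closed-loop policy: pick the action with the lowest learned Q-value."""
    return min(ACTIONS, key=lambda u: q[u](x))

# Closed-loop rollout: the learned policy should damp the state toward 0.
q = fitted_q_iteration(collect_samples(5000))
x = 2.0
for _ in range(20):
    x = step(x, greedy(q, x))
```

Note the contrast the abstract draws: this RL sketch never calls an analytical model inside the optimizer and yields a closed-loop policy `greedy(q, x)` usable from any state, whereas the MPC approach would repeatedly solve an equality-constrained minimization over a finite horizon to obtain an open-loop input sequence.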