Title:
Statistical inference of the value function for reinforcement learning in infinite‐horizon settings.
Authors:
Shi, Chengchun1 (AUTHOR), Zhang, Sheng2 (AUTHOR), Lu, Wenbin2 (AUTHOR), Song, Rui2 (AUTHOR) rsong@ncsu.edu
Source:
Journal of the Royal Statistical Society: Series B (Statistical Methodology). Jul2022, Vol. 84 Issue 3, p765-793. 29p.
Database:
Business Source Premier

Further Information

Reinforcement learning is a general technique that allows an agent to learn an optimal policy and interact with an environment in sequential decision‐making problems. The goodness of a policy is measured by its value function starting from some initial state. The focus of this paper is to construct confidence intervals (CIs) for a policy's value in infinite‐horizon settings where the number of decision points diverges to infinity. We propose to model the state–action value function (Q‐function) associated with a policy using the series/sieve method in order to derive its confidence interval. When the target policy depends on the observed data as well, we propose a SequentiAl Value Evaluation (SAVE) method to recursively update the estimated policy and its value estimator. As long as either the number of trajectories or the number of decision points diverges to infinity, we show that the proposed CI achieves nominal coverage even in cases where the optimal policy is not unique. Simulation studies are conducted to support our theoretical findings. We apply the proposed method to a dataset from mobile health studies and find that reinforcement learning algorithms could help improve patients' health status. A Python implementation of the proposed procedure is available at https://github.com/shengzhang37/SAVE. [ABSTRACT FROM AUTHOR]
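To make the abstract's idea concrete, the following is a minimal sketch, not the paper's SAVE procedure, of off-policy value estimation with a Wald-type CI using a linear sieve approximation of the Q-function. The function name `lstd_value_ci`, the least-squares temporal-difference estimator, and the sandwich variance are illustrative assumptions; the actual method and its theory are in the paper and the linked repository.

```python
import numpy as np

def lstd_value_ci(S, A, R, S_next, policy, featurize, s0, gamma=0.9):
    """Estimate the value of `policy` at initial state s0 with a 95% Wald CI.

    Illustrative plug-in approach: approximate Q(s, a) ~ featurize(s, a) @ beta
    on a linear sieve basis, fit beta by least-squares temporal differences,
    then read off V(s0) = Q(s0, policy(s0)).
    """
    n = len(R)
    R = np.asarray(R, dtype=float)
    Phi = np.array([featurize(s, a) for s, a in zip(S, A)])         # phi(s_t, a_t)
    Phi_next = np.array([featurize(s, policy(s)) for s in S_next])  # phi(s_{t+1}, pi(s_{t+1}))

    # Bellman-equation moment condition: E[phi (r + gamma*phi'@beta - phi@beta)] = 0
    A_mat = Phi.T @ (Phi - gamma * Phi_next) / n
    b = Phi.T @ R / n
    beta = np.linalg.solve(A_mat, b)

    # Sandwich variance of beta based on the temporal-difference residuals
    resid = R + gamma * (Phi_next @ beta) - Phi @ beta
    weighted = Phi * resid[:, None]
    Omega = weighted.T @ weighted / n
    A_inv = np.linalg.inv(A_mat)
    Sigma = A_inv @ Omega @ A_inv.T / n

    phi0 = featurize(s0, policy(s0))
    v_hat = float(phi0 @ beta)
    se = float(np.sqrt(phi0 @ Sigma @ phi0))
    z = 1.959964  # standard normal 0.975 quantile for a 95% CI
    return v_hat, (v_hat - z * se, v_hat + z * se)
```

With a trivial one-feature basis and a constant reward of 1 under discount 0.5, the estimator recovers the geometric-series value 1/(1 - 0.5) = 2, which is a quick sanity check of the Bellman moment condition above.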

Copyright of Journal of the Royal Statistical Society: Series B (Statistical Methodology) is the property of Oxford University Press / USA and its content may not be copied or emailed to multiple sites without the copyright holder's express written permission. Additionally, content may not be used with any artificial intelligence tools or machine learning technologies. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)

Full text is not available via guest access.