A Model-Based Approach for Continuous-Time Policy Evaluation with Unknown Lévy Process Dynamics
This talk presents a framework for policy evaluation in a continuous-time setting where the dynamics are unknown and modeled by Lévy processes. We first estimate the model from available trajectory data, then solve the associated PDE to perform the policy evaluation. Our approach handles not only conventional Brownian motion but also non-Gaussian, heavy-tailed Lévy processes. We develop an algorithm that outperforms existing techniques tailored to Brownian motion, and we provide a theoretical guarantee on the policy-evaluation error in terms of the model error. Experimental results on both light-tailed and heavy-tailed data will be presented. This work is a first step toward continuous-time model-based reinforcement learning, particularly in scenarios with irregular, heavy-tailed dynamics.
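To make the setting concrete, the sketch below simulates trajectories of a simple 1-D jump-diffusion (a Brownian part plus compound Poisson jumps, one example of a Lévy process) and recovers the drift and diffusion coefficients from increment data via a naive moment estimator with jump thresholding. This is only an illustration under assumed parameters; it is not the algorithm presented in the talk, and all names and constants here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters for dX_t = b dt + sigma dW_t + jumps
b, sigma = 0.5, 1.0               # true drift and diffusion (assumed)
jump_rate, jump_scale = 2.0, 0.8  # compound Poisson: intensity, jump std
dt, n_steps, n_paths = 1e-3, 2000, 200

# Euler-Maruyama increments: Gaussian part + Poisson-counted Gaussian jumps
dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
n_jumps = rng.poisson(jump_rate * dt, size=(n_paths, n_steps))
jumps = rng.normal(0.0, jump_scale, size=(n_paths, n_steps)) * n_jumps
dX = b * dt + sigma * dW + jumps

# Naive estimation: threshold out large increments to filter the jump part,
# then use sample moments of the remaining small increments (illustrative only;
# the threshold assumes a rough prior scale for sigma).
thresh = 4 * sigma * np.sqrt(dt)
small = dX[np.abs(dX) < thresh]
b_hat = small.mean() / dt
sigma_hat = np.sqrt(small.var() / dt)

print(f"b_hat = {b_hat:.3f}, sigma_hat = {sigma_hat:.3f}")
```

With enough increments the thresholded moments recover the continuous part of the dynamics reasonably well; handling the heavy-tailed jump component itself is the harder problem the talk addresses.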
Tuesday, November 28, 2023
11:00 AM, AP&M 2402 and Zoom ID 915 4615 4399
Center for Computational Mathematics
9500 Gilman Dr. #0112
La Jolla, CA 92093-0112
Tel: (858) 534-9056