
Introducing n-Step Temporal-Difference Methods | by Oliver S | Dec, 2024


Dissecting “Reinforcement Learning” by Richard S. Sutton with custom Python implementations, Episode V

In our previous post, we wrapped up the introductory series on fundamental reinforcement learning (RL) methods by exploring Temporal-Difference (TD) learning. TD methods merge the strengths of Dynamic Programming (DP) and Monte Carlo (MC) methods, leveraging their best features to form some of the most important RL algorithms, such as Q-learning.

Building on that foundation, this post delves into n-step TD learning, a versatile approach introduced in Chapter 7 of Sutton’s book [1]. This method bridges the gap between classical TD and MC methods. Like TD, n-step methods use bootstrapping (leveraging prior estimates), but they also incorporate the next n rewards, offering a unique blend of short-term and long-term learning. In a future post, we’ll generalize this concept even further with eligibility traces.
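To make this concrete: the n-step update target combines the next n (discounted) rewards with a bootstrapped value estimate, G_{t:t+n} = R_{t+1} + γ R_{t+2} + … + γ^{n-1} R_{t+n} + γ^n V(S_{t+n}). Below is a minimal sketch of tabular n-step TD prediction along the lines of Chapter 7 of [1]; the `env` and `policy` interfaces are hypothetical placeholders, not code from this series.

```python
def n_step_td_prediction(env, policy, n=4, alpha=0.1, gamma=0.99, num_episodes=1000):
    """Sketch of tabular n-step TD prediction (Sutton & Barto, Ch. 7).

    Assumes a hypothetical environment with `reset() -> state` and
    `step(action) -> (state, reward, done)`, plus a `policy(state)` callable.
    """
    V = {}  # state-value estimates, defaulting to 0.0

    for _ in range(num_episodes):
        state = env.reset()
        states, rewards = [state], [0.0]  # rewards[0] is unused; keeps indices aligned
        T = float("inf")  # episode length, unknown until termination
        t = 0
        while True:
            if t < T:
                next_state, reward, done = env.step(policy(states[t]))
                states.append(next_state)
                rewards.append(reward)
                if done:
                    T = t + 1
            tau = t - n + 1  # time step whose estimate is updated now
            if tau >= 0:
                # n-step return: discounted rewards, plus bootstrapped tail
                G = sum(gamma ** (i - tau - 1) * rewards[i]
                        for i in range(tau + 1, min(tau + n, T) + 1))
                if tau + n < T:
                    G += gamma ** n * V.get(states[tau + n], 0.0)
                s_tau = states[tau]
                V[s_tau] = V.get(s_tau, 0.0) + alpha * (G - V.get(s_tau, 0.0))
            if tau == T - 1:
                break
            t += 1
    return V
```

Setting n = 1 recovers ordinary TD(0), while letting n reach the episode length recovers the Monte Carlo update, which is exactly the bridge described above.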

We’ll follow a structured approach, starting with the prediction problem before moving to control. Along the way, we’ll:

  • Introduce n-step Sarsa,
  • Extend it to off-policy learning,
  • Explore the n-step tree backup algorithm, and
  • Present a unifying perspective with n-step Q(σ).

As always, you’ll find all accompanying code on GitHub. Let’s dive in!
