Allen's REINFORCE notes

From Humanoid Robots Wiki
Revision as of 23:12, 25 May 2024 by Allen12 (talk | contribs)

Motivation

Recall that the objective of Reinforcement Learning is to find an optimal policy $\pi_\theta$, which we encode in a neural network with parameters $\theta$. $\pi_\theta$ is a mapping from observations to actions. The optimal parameters are defined as $\theta^* = \arg\max_\theta \mathbb{E}_{\tau \sim \pi_\theta}[R(\tau)]$. Let's unpack what this means. In English, this is basically saying that the optimal policy is one such that the expected value of the total reward over a trajectory ($\tau$) determined by the policy is the highest over all policies.

Overview

Initialize neural network with input dimensions = observation dimensions and output dimensions = action dimensions
For each episode:
  While not terminated:
    Get observation from environment
    Use policy network to map observation to action distribution
    Randomly sample one action from action distribution
    Compute the log-probability of the sampled action
    Step environment using action and store reward
  Calculate the loss over the entire trajectory as a function of the log-probabilities and rewards
  Recall that the loss is differentiable with respect to each parameter - thus, compute the gradient of the loss with respect to the parameters
  Based on this gradient, take a gradient descent step to update the weights
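The loop above can be sketched in code. This is a minimal illustration, not the full algorithm: it uses a hypothetical two-armed bandit in place of a real environment, and a linear softmax policy in place of a neural network, so each "episode" is a single step. All names and reward values here are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

theta = np.zeros(2)          # policy parameters: one logit per action
TRUE_REWARDS = [0.1, 1.0]    # hypothetical rewards; action 1 is better

def softmax(logits):
    # numerically stable softmax over action logits
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

def run_episode():
    """One-step episode: sample an action from the policy, observe reward."""
    probs = softmax(theta)
    action = rng.choice(2, p=probs)
    reward = TRUE_REWARDS[action]
    return action, reward, probs

lr = 0.1
for _ in range(2000):
    action, reward, probs = run_episode()
    # gradient of log pi(action) for a softmax policy: one-hot(action) - probs
    grad_log_pi = -probs
    grad_log_pi[action] += 1.0
    # REINFORCE update: step in the direction that raises expected reward
    theta += lr * reward * grad_log_pi

print(softmax(theta))  # the better action should dominate after training
```

With a neural-network policy, the hand-written softmax gradient would be replaced by backpropagation through the log-probability, but the update has the same shape: reward times the gradient of the log-probability of the sampled action.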

Objective Function

The goal of reinforcement learning is to maximize the expected reward over the entire episode. We use $R(\tau)$ to denote the total reward over some trajectory $\tau$ defined by our policy. Thus we want to maximize $J(\theta) = \mathbb{E}_{\tau \sim \pi_\theta}[R(\tau)]$. We can use the definition of expected value to expand this as $J(\theta) = \sum_\tau P(\tau | \theta) R(\tau)$, where the probability of a given trajectory occurring can further be expressed as $P(\tau | \theta) = P(s_0) \prod^T_{t=0} \pi_\theta(a_t | s_t) P(s_{t + 1} | s_t, a_t)$.
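The trajectory-probability factorization can be checked numerically. The probabilities below are made-up values for a hypothetical 3-step trajectory, chosen only to illustrate that $P(\tau | \theta)$ is just the product of the initial-state, policy, and transition probabilities:

```python
# P(s_0): initial-state probability (hypothetical value)
p_s0 = 0.5
# pi_theta(a_t | s_t) at each timestep (hypothetical values)
pi = [0.9, 0.8, 0.7]
# P(s_{t+1} | s_t, a_t) at each timestep (hypothetical values)
p_trans = [0.6, 0.5, 0.4]

# P(tau | theta) = P(s_0) * prod_t pi_theta(a_t | s_t) * P(s_{t+1} | s_t, a_t)
p_tau = p_s0
for pi_t, pt in zip(pi, p_trans):
    p_tau *= pi_t * pt

print(p_tau)
```

Note that the policy terms are the only factors that depend on $\theta$; the initial-state and transition probabilities belong to the environment, which is what makes the policy gradient tractable.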

Loss Function

The goal of REINFORCE is to maximize the expected cumulative reward $J(\theta)$. We do so using gradient ascent on $J(\theta)$; since optimizers conventionally minimize, in practice we perform gradient descent on the negative objective, using the per-trajectory loss $L(\theta) = -\sum^T_{t=0} \log \pi_\theta(a_t | s_t) R(\tau)$ built from the log-probabilities and rewards stored during the episode.
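The gradient that REINFORCE follows can be derived from the expansion of $J(\theta)$ with the log-derivative trick, $\nabla_\theta P(\tau | \theta) = P(\tau | \theta) \nabla_\theta \log P(\tau | \theta)$. The environment terms $P(s_0)$ and $P(s_{t+1} | s_t, a_t)$ do not depend on $\theta$, so they vanish from the gradient of $\log P(\tau | \theta)$ and only the policy terms remain:

$\nabla_\theta J(\theta) = \sum_\tau \nabla_\theta P(\tau | \theta) R(\tau) = \sum_\tau P(\tau | \theta) \nabla_\theta \log P(\tau | \theta) R(\tau) = \mathbb{E}_{\tau \sim \pi_\theta} \left[ R(\tau) \sum^T_{t=0} \nabla_\theta \log \pi_\theta(a_t | s_t) \right]$

Sampling trajectories from $\pi_\theta$ gives an unbiased estimate of this expectation, which is exactly the update the training loop performs.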