Humanoid Robots Wiki β

Allen's REINFORCE notes

Revision as of 00:35, 26 May 2024 by Allen12 (talk | contribs)


Motivation

Recall that the objective of reinforcement learning is to find an optimal policy $\pi^*$, which we encode in a neural network with parameters $\theta$. The policy $\pi_\theta$ is a mapping from observations to actions. The optimal parameters are defined as $\theta^* = \arg\max_\theta \, \mathbb{E}_{\tau \sim \pi_\theta}[R(\tau)]$. Let's unpack what this means. Phrased in English, this says that the optimal policy is the one for which the expected total reward over a trajectory $\tau$, generated by following the policy, is highest over all policies.

Overview

Initialize a neural network with input dimension = observation dimension and output dimension = action dimension
For each episode:
  While not terminated:
    Get observation from environment
    Use the policy network to map the observation to an action distribution
    Randomly sample one action from the action distribution
    Compute the log probability of that action occurring
    Step the environment using the action and store the reward
  Calculate the loss over the entire trajectory as a function of the log probabilities and rewards
  Recall that the loss is differentiable with respect to each parameter; compute the gradient of the loss with respect to the parameters
  Use gradient descent on this loss to update the weights
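The loop above can be sketched end to end in a few lines. Everything here is illustrative: to stay dependency-free, the "environment" is a hypothetical two-armed bandit with one step per episode, and the "network" is just a pair of softmax logits, with the log-probability gradient written out by hand instead of relying on autograd.

```python
import math
import random

def softmax(logits):
    # numerically stable softmax over the action logits
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def reinforce_bandit(episodes=2000, lr=0.1, seed=0):
    rng = random.Random(seed)
    theta = [0.0, 0.0]       # "policy network": just two action logits
    rewards = [1.0, 0.0]     # made-up bandit: arm 0 pays, arm 1 doesn't
    for _ in range(episodes):
        probs = softmax(theta)
        # sample one action from the action distribution
        a = 0 if rng.random() < probs[0] else 1
        r = rewards[a]
        # for a softmax policy, d(log pi(a))/d theta_j = 1[j == a] - pi(j),
        # so the REINFORCE update can be applied without autograd
        for j in range(2):
            grad_log = (1.0 if j == a else 0.0) - probs[j]
            theta[j] += lr * grad_log * r   # ascend expected reward
    return softmax(theta)
```

After training, the policy should put nearly all of its probability on the paying arm. With a real observation-conditioned network, the only change is that the per-parameter gradients come from backpropagating the same $\log \pi_\theta(a_t \mid s_t)$ terms.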

Objective Function

The goal of reinforcement learning is to maximize the expected reward over the entire episode. We use $R(\tau)$ to denote the total reward over some trajectory $\tau$ generated by our policy. Thus we want to maximize $J(\theta) = \mathbb{E}_{\tau \sim \pi_\theta}[R(\tau)]$. We can use the definition of expected value to expand this as $J(\theta) = \int P(\tau \mid \theta) \, R(\tau) \, d\tau$, where the probability of a given trajectory occurring can further be expressed as $P(\tau \mid \theta) = p(s_0) \prod_{t=0}^{T-1} \pi_\theta(a_t \mid s_t) \, p(s_{t+1} \mid s_t, a_t)$.
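As a concrete instance of the trajectory-probability factorization, here is a tiny tabular MDP (all numbers invented for illustration) where $P(\tau \mid \theta)$ is computed as the product of the initial-state probability, the policy probabilities, and the transition probabilities:

```python
# hypothetical tabular quantities for a tiny 2-state, 2-action MDP
p0 = {0: 0.8, 1: 0.2}                    # initial state distribution p(s0)
pi = {(0, 0): 0.6, (0, 1): 0.4,          # policy pi(a | s)
      (1, 0): 0.3, (1, 1): 0.7}
P = {(0, 0, 1): 1.0, (1, 1, 0): 1.0}     # dynamics p(s' | s, a)

def traj_prob(traj):
    """P(tau) = p(s0) * prod_t pi(a_t | s_t) * p(s_{t+1} | s_t, a_t)."""
    states, actions = traj
    prob = p0[states[0]]
    for t, a in enumerate(actions):
        prob *= pi[(states[t], a)]
        if t + 1 < len(states):
            prob *= P.get((states[t], a, states[t + 1]), 0.0)
    return prob
```

For example, the trajectory $s_0{=}0, a_0{=}0, s_1{=}1, a_1{=}1, s_2{=}0$ has probability $0.8 \cdot 0.6 \cdot 1.0 \cdot 0.7 \cdot 1.0 = 0.336$.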

Now we want to find the gradient of $J(\theta)$, namely $\nabla_\theta J(\theta) = \nabla_\theta \int P(\tau \mid \theta) \, R(\tau) \, d\tau$. Since the reward function doesn't depend on the parameters, we can move the gradient inside the integral: $\nabla_\theta J(\theta) = \int \nabla_\theta P(\tau \mid \theta) \, R(\tau) \, d\tau$. The next step is what's called the log-derivative trick.

Suppose we'd like to find $\nabla_\theta \log P(\tau \mid \theta)$. By the chain rule this is equal to $\frac{\nabla_\theta P(\tau \mid \theta)}{P(\tau \mid \theta)}$. Thus, by rearranging, we can write the gradient of any positive function with respect to some variable as $\nabla_\theta P(\tau \mid \theta) = P(\tau \mid \theta) \, \nabla_\theta \log P(\tau \mid \theta)$.
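The identity is easy to sanity-check numerically. Using a sigmoid as a stand-in for $P(\cdot \mid \theta)$ (any smooth positive function works), finite differences confirm that $\nabla_\theta p = p \, \nabla_\theta \log p$:

```python
import math

def sigmoid(x):
    # stand-in for a smooth positive function p(theta)
    return 1.0 / (1.0 + math.exp(-x))

def log_derivative_trick_error(theta, eps=1e-6):
    # finite-difference gradient of p(theta)
    grad_p = (sigmoid(theta + eps) - sigmoid(theta - eps)) / (2 * eps)
    # finite-difference gradient of log p(theta)
    grad_log_p = (math.log(sigmoid(theta + eps))
                  - math.log(sigmoid(theta - eps))) / (2 * eps)
    # the trick says: grad p = p * grad log p
    return abs(grad_p - sigmoid(theta) * grad_log_p)
```

The discrepancy is at the level of floating-point noise for any choice of $\theta$.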

Thus, using this identity, we can rewrite our gradient as $\nabla_\theta J(\theta) = \int P(\tau \mid \theta) \, \nabla_\theta \log P(\tau \mid \theta) \, R(\tau) \, d\tau$. Finally, using the definition of expectation again, we have $\nabla_\theta J(\theta) = \mathbb{E}_{\tau \sim \pi_\theta}\left[\nabla_\theta \log P(\tau \mid \theta) \, R(\tau)\right]$.
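One step worth making explicit, since it is what makes the estimator computable: expanding $\log P(\tau \mid \theta)$ using the trajectory factorization from the Objective Function section, the initial-state and transition terms don't depend on $\theta$, so their gradients vanish:

```latex
\nabla_\theta \log P(\tau \mid \theta)
  = \nabla_\theta \Big[ \log p(s_0)
      + \sum_{t} \log \pi_\theta(a_t \mid s_t)
      + \sum_{t} \log p(s_{t+1} \mid s_t, a_t) \Big]
  = \sum_{t} \nabla_\theta \log \pi_\theta(a_t \mid s_t)
```

This means the gradient can be estimated from sampled trajectories without knowing the environment dynamics at all.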

Loss Function

The goal of REINFORCE is to optimize the expected cumulative reward. Since deep learning frameworks minimize a loss, we do this by running gradient descent on the negative of the objective: for a sampled trajectory the loss is $L(\theta) = -\left(\sum_t \log \pi_\theta(a_t \mid s_t)\right) R(\tau)$, so minimizing $L$ ascends $J(\theta)$.
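As a sanity check on the final expression, for a one-step decision over two actions the expectation can be computed exactly by enumeration and compared against a finite-difference gradient of $J(\theta)$. The rewards below are made up for illustration:

```python
import math

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

rewards = [1.0, 0.5]  # hypothetical one-step rewards for each of two actions

def J(theta):
    # expected reward under the softmax policy: sum_a pi(a) * R(a)
    return sum(p * r for p, r in zip(softmax(theta), rewards))

def estimator_error(theta, eps=1e-6):
    p = softmax(theta)
    worst = 0.0
    for j in range(len(theta)):
        # score-function form: grad_j J = sum_a pi(a) * grad_j log pi(a) * R(a)
        score = sum(p[a] * ((1.0 if j == a else 0.0) - p[j]) * rewards[a]
                    for a in range(len(theta)))
        # finite-difference gradient of J for comparison
        tp, tm = list(theta), list(theta)
        tp[j] += eps
        tm[j] -= eps
        fd = (J(tp) - J(tm)) / (2 * eps)
        worst = max(worst, abs(score - fd))
    return worst
```

The two gradients agree to floating-point precision, which is exactly the statement $\nabla_\theta J = \mathbb{E}\left[\nabla_\theta \log \pi_\theta(a) \, R\right]$ for this one-step case.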