Allen's REINFORCE notes

=== Links ===

=== Motivation ===

Recall that the objective of reinforcement learning is to find an optimal policy <math>\pi_\theta</math>, which we encode in a neural network with parameters <math>\theta</math>. <math>\pi_\theta</math> is a mapping from observations to actions. The optimal parameters are defined as <math>\theta^* = \arg\max_\theta \mathbb{E}_{\tau \sim \pi_\theta}[R(\tau)]</math>. Let's unpack what this means. In plain English, this says that the optimal policy is the one for which the expected total reward over a trajectory (<math>\tau</math>) generated by following that policy is highest over all policies.

=== Overview ===

Initialize neural network with input dimensions = observation dimensions and output dimensions = action dimensions
For each episode:
  While not terminated:
    Get observation from environment
    Use policy network to map observation to action distribution
    Randomly sample one action from action distribution
    Compute logarithmic probability of that action occurring
    Step environment using action and store reward
  Calculate loss over entire trajectory as function of probabilities and rewards
  Recall that the loss is differentiable with respect to each network parameter - thus, backpropagation gives the gradient of the loss with respect to the parameters
  Use a gradient descent step on that loss to update the weights (a concrete sketch of this loop follows below)
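
A minimal PyTorch sketch of the loop above, assuming a Gymnasium environment with a discrete action space; the environment name, network sizes, learning rate, and episode count are illustrative assumptions rather than anything prescribed by these notes:

<syntaxhighlight lang="python">
import gymnasium as gym
import torch
import torch.nn as nn

env = gym.make("CartPole-v1")            # assumed example environment (discrete actions)
obs_dim = env.observation_space.shape[0]
act_dim = env.action_space.n

# Policy network: observation in, action logits out
policy = nn.Sequential(nn.Linear(obs_dim, 64), nn.Tanh(), nn.Linear(64, act_dim))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-2)

for episode in range(500):
    obs, _ = env.reset()
    log_probs, rewards = [], []
    terminated = truncated = False
    while not (terminated or truncated):
        # Map observation to an action distribution
        logits = policy(torch.as_tensor(obs, dtype=torch.float32))
        dist = torch.distributions.Categorical(logits=logits)
        action = dist.sample()                    # randomly sample one action
        log_probs.append(dist.log_prob(action))   # log probability of that action
        obs, reward, terminated, truncated, _ = env.step(action.item())
        rewards.append(reward)

    # Loss over the entire trajectory: -(sum of log-probs) * (total reward)
    loss = -torch.stack(log_probs).sum() * sum(rewards)
    optimizer.zero_grad()
    loss.backward()    # gradient of the loss w.r.t. every parameter
    optimizer.step()   # gradient descent update of the weights
</syntaxhighlight>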

=== Objective Function ===

The goal of reinforcement learning is to maximize the expected reward over the entire episode. We use <math>R(\tau)</math> to denote the total reward over some trajectory <math>\tau</math> produced by following our policy. Thus we want to maximize <math>J(\theta) = \mathbb{E}_{\tau \sim \pi_\theta}[R(\tau)]</math>. We can use the definition of expected value to expand this as <math>J(\theta) = \int P(\tau \mid \theta)\, R(\tau)\, d\tau</math>, where the probability of a given trajectory occurring can further be expressed as <math>P(\tau \mid \theta) = p(s_0) \prod_{t} \pi_\theta(a_t \mid s_t)\, p(s_{t+1} \mid s_t, a_t)</math>.
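
Written out under this standard MDP factorization, taking the logarithm turns the product into a sum, which is what makes the gradient below tractable:

<math>
\log P(\tau \mid \theta) = \log p(s_0) + \sum_{t} \Big( \log \pi_\theta(a_t \mid s_t) + \log p(s_{t+1} \mid s_t, a_t) \Big).
</math>

Only the <math>\log \pi_\theta(a_t \mid s_t)</math> terms depend on <math>\theta</math>.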

Now we want to find the gradient of <math>J(\theta)</math>, namely <math>\nabla_\theta J(\theta)</math>. The important step here is called the Log Derivative Trick.

====Log Derivative Trick====

Suppose we'd like to find <math>\nabla_{x_1}\log(f(x_1, x_2, x_3, \ldots))</math>. By the chain rule this is equal to <math>\frac{\nabla_{x_1}f(x_1, x_2, x_3, \ldots)}{f(x_1, x_2, x_3, \ldots)}</math>. Thus, by rearranging, we can express the gradient of any function with respect to some variable as <math>\nabla_{x_1}f(x_1, x_2, x_3, \ldots) = f(x_1, x_2, x_3, \ldots)\,\nabla_{x_1}\log(f(x_1, x_2, x_3, \ldots))</math>.
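
Applying this trick to the objective gives the gradient that REINFORCE estimates; a sketch of the derivation, using the trajectory-probability factorization from above:

<math>
\begin{align}
\nabla_\theta J(\theta) &= \nabla_\theta \int P(\tau \mid \theta)\, R(\tau)\, d\tau \\
&= \int P(\tau \mid \theta)\, \nabla_\theta \log P(\tau \mid \theta)\, R(\tau)\, d\tau \\
&= \mathbb{E}_{\tau \sim \pi_\theta}\!\left[ \nabla_\theta \log P(\tau \mid \theta)\, R(\tau) \right] \\
&= \mathbb{E}_{\tau \sim \pi_\theta}\!\left[ \left( \sum_t \nabla_\theta \log \pi_\theta(a_t \mid s_t) \right) R(\tau) \right].
\end{align}
</math>

The last step uses the fact that the initial-state and transition terms in <math>\log P(\tau \mid \theta)</math> do not depend on <math>\theta</math>, so their gradients vanish.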

=== Loss Function ===

The goal of REINFORCE is to maximize the expected cumulative reward. We do so using gradient descent on a loss that is the negative of this objective.
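
For a single sampled trajectory, one common way to write this loss (so that minimizing it with gradient descent ascends the expected reward) is:

<math>
L(\theta) = -\left( \sum_t \log \pi_\theta(a_t \mid s_t) \right) R(\tau),
</math>

whose gradient is a one-sample Monte Carlo estimate of <math>-\nabla_\theta J(\theta)</math> from the derivation above. This is exactly the quantity computed from the stored log-probabilities and rewards in the loop sketch earlier.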