Revision as of 00:16, 25 May 2024

Allen's REINFORCE notes

Links

Motivation

Recall that the objective of Reinforcement Learning is to find an optimal policy π_θ, which we encode in a neural network with parameters θ. π_θ is a mapping from observations to actions. The optimal parameters are defined as θ* = argmax_θ E_{τ∼π_θ}[Σ_t r(s_t, a_t)]. Let's unpack what this means. In English, this is basically saying that the optimal policy is one such that the expected value of the total reward over a trajectory (τ) determined by the policy is the highest over all policies.
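Spelled out, the objective can be written as follows (reconstructed in standard RL notation; the trajectory τ = (s_0, a_0, s_1, a_1, …) is generated by sampling actions from the policy):

```latex
\theta^{\ast} = \arg\max_{\theta} \; \mathbb{E}_{\tau \sim \pi_{\theta}}\!\left[\sum_{t=0}^{T} r(s_t, a_t)\right]
```

That is, among all parameter settings θ, we want the one whose induced distribution over trajectories gives the highest expected total reward.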

Overview

Initialize neural network with input dimensions = observation dimensions and output dimensions = action dimensions
For each episode:
  While not terminated:
    Get observation from environment
    Use policy network to map observation to action distribution
    Randomly sample one action from action distribution
    Compute the log-probability of the sampled action
    Step environment using action and store reward
  Calculate loss over entire trajectory as function of probabilities and rewards
  Recall that the loss is differentiable with respect to each parameter - thus, compute the gradient of the loss with respect to the parameters
  Use gradient descent on this loss to update the network weights
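The loop above can be sketched in code. This is a minimal illustration, not the notes' own implementation: it assumes a hypothetical one-step toy environment (`ToyEnv`) and uses a linear softmax policy updated with a hand-derived gradient in place of a neural network and autodiff, so the REINFORCE structure stays visible.

```python
import numpy as np

class ToyEnv:
    """Hypothetical one-step environment: action 1 yields reward 1, action 0 yields 0."""
    def reset(self):
        return np.array([1.0])                    # constant observation
    def step(self, action):
        reward = 1.0 if action == 1 else 0.0
        return np.array([1.0]), reward, True      # observation, reward, terminated

def softmax(z):
    z = z - z.max()                               # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def run_episode(env, theta, rng):
    """Roll out one trajectory; collect grad-log-probs and rewards per step."""
    obs = env.reset()
    grads, rewards = [], []
    terminated = False
    while not terminated:
        logits = theta @ obs                      # map observation to action scores
        probs = softmax(logits)                   # action distribution
        action = rng.choice(len(probs), p=probs)  # sample one action from it
        # Gradient of log pi(action | obs) w.r.t. theta for a linear softmax policy
        one_hot = np.eye(len(probs))[action]
        grads.append(np.outer(one_hot - probs, obs))
        obs, reward, terminated = env.step(action)
        rewards.append(reward)
    return grads, rewards

def train(episodes=500, lr=0.5, seed=0):
    rng = np.random.default_rng(seed)
    env = ToyEnv()
    theta = np.zeros((2, 1))                      # 2 actions, 1 observation dimension
    for _ in range(episodes):
        grads, rewards = run_episode(env, theta, rng)
        total_return = sum(rewards)               # return of the whole trajectory
        for g in grads:
            theta += lr * g * total_return        # gradient ascent on expected return
    return theta
```

After training, the policy should assign most of its probability to action 1, the rewarded action. Note the update is written as gradient *ascent* on expected return; framing it as gradient descent on a loss just flips the sign, as the Loss Function section discusses.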

Loss Function