
Allen's REINFORCE notes

=== Links ===

=== Motivation ===

Recall that the objective of Reinforcement Learning is to find an optimal policy, which we encode in a neural network with parameters <math>\theta</math>. The optimal parameters are defined as <math>\theta^* = \arg\max_{\theta} \mathbb{E}_{\tau \sim \pi_{\theta}}\left[R(\tau)\right]</math>. Let's unpack what this means. In plain English, the optimal policy is the one for which the expected value of the total reward <math>R(\tau)</math> over a trajectory <math>\tau</math> determined by the policy is the highest over all policies.
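
To make this concrete, the expected return can be written out term by term. A brief expansion, assuming a finite-horizon, undiscounted return (the notes do not fix a horizon or a discount factor, so treat this as one common convention):

:<math>\theta^* = \arg\max_{\theta} J(\theta), \qquad J(\theta) = \mathbb{E}_{\tau \sim \pi_{\theta}}\left[R(\tau)\right] = \mathbb{E}_{\tau \sim \pi_{\theta}}\left[\sum_{t=0}^{T} r(s_t, a_t)\right],</math>

where <math>J(\theta)</math>, <math>r(s_t, a_t)</math>, and the horizon <math>T</math> are notation introduced here for illustration; the notes themselves only name the total reward of the trajectory.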

=== Overview ===

# Initialize a neural network with input dimensions = observation dimensions and output dimensions = action dimensions. Remember that a policy is a mapping from observations to actions. If the action space is continuous, it may make more sense for the output to be one mean and one standard deviation for each component of the action (see the policy-network sketch after the pseudocode below).

<syntaxhighlight lang="python" line>
# For # of episodes:
## While not terminated:
### Get observation from environment
### Use policy network to map observation to action distribution
### Randomly sample one action from action distribution
### Compute logarithmic probability of that action occurring
### Step environment using action and store reward
## Calculate loss over entire trajectory as function of probabilities and rewards
</syntaxhighlight>
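
The sketch below shows one way to realize step 1 for a continuous action space, with the network outputting one mean and one (log) standard deviation per action component. PyTorch, the hidden-layer sizes, and the names <code>GaussianPolicy</code>, <code>obs_dim</code>, and <code>act_dim</code> are illustrative assumptions rather than anything fixed by these notes.

<syntaxhighlight lang="python" line>
import torch
import torch.nn as nn


class GaussianPolicy(nn.Module):
    """Maps an observation to a Normal distribution over each action component."""

    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 64):
        super().__init__()
        # Small shared trunk; the sizes here are arbitrary illustrative choices.
        self.trunk = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
        )
        self.mean_head = nn.Linear(hidden, act_dim)     # one mean per action component
        self.log_std_head = nn.Linear(hidden, act_dim)  # one log-std per action component

    def forward(self, obs: torch.Tensor) -> torch.distributions.Normal:
        features = self.trunk(obs)
        mean = self.mean_head(features)
        std = self.log_std_head(features).exp()  # exponentiate so the std is positive
        return torch.distributions.Normal(mean, std)
</syntaxhighlight>

For a discrete action space, the same idea works with a single head of logits and a <code>torch.distributions.Categorical</code> in place of the Normal.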
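
The episode loop itself can then be sketched directly from the pseudocode, reusing <code>GaussianPolicy</code> from the previous block. The Gymnasium API, the <code>Pendulum-v1</code> environment, and every hyperparameter here are assumptions made for illustration; the final lines use one common form of the REINFORCE loss, the negative sum of log-probabilities weighted by the total trajectory reward (the last line of the pseudocode).

<syntaxhighlight lang="python" line>
import gymnasium as gym
import torch

env = gym.make("Pendulum-v1")  # illustrative continuous-control environment
policy = GaussianPolicy(env.observation_space.shape[0], env.action_space.shape[0])
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

for episode in range(500):                # "For # of episodes"
    obs, _ = env.reset()
    log_probs, rewards = [], []
    terminated = truncated = False

    while not (terminated or truncated):  # "While not terminated"
        # Map the observation to an action distribution, sample, and keep the log-probability.
        dist = policy(torch.as_tensor(obs, dtype=torch.float32))
        action = dist.sample()
        log_probs.append(dist.log_prob(action).sum())
        # Step the environment using the action and store the reward.
        obs, reward, terminated, truncated, _ = env.step(action.numpy())
        rewards.append(float(reward))

    # Loss over the entire trajectory as a function of the log-probabilities and rewards.
    total_reward = sum(rewards)
    loss = -torch.stack(log_probs).sum() * total_reward

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
</syntaxhighlight>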

=== Loss Function ===