Allen's REINFORCE notes
Links
Motivation
Recall that the objective of Reinforcement Learning is to find an optimal policy <math>\pi_\theta</math>, which we encode in a neural network with parameters <math>\theta</math>. <math>\pi_\theta</math> is a mapping from observations to actions. The optimal parameters are defined as <math>\theta^* = \arg\max_\theta \, \mathbb{E}_{\tau \sim \pi_\theta}\left[\sum_t r(s_t, a_t)\right]</math>. Let's unpack what this means. In plain English, this is saying that the optimal policy is the one under which the expected total reward over a trajectory (<math>\tau</math>) determined by the policy is highest over all policies.
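The algorithm below rests on the score-function (likelihood-ratio) identity, which rewrites the gradient of this expectation in terms of the log probabilities of the sampled actions. A standard statement of the identity, using the same notation as the objective above:

<math>
\nabla_\theta \, \mathbb{E}_{\tau \sim \pi_\theta}\left[ R(\tau) \right] = \mathbb{E}_{\tau \sim \pi_\theta}\left[ R(\tau) \sum_{t} \nabla_\theta \log \pi_\theta(a_t \mid s_t) \right], \qquad R(\tau) = \sum_t r(s_t, a_t)
</math>

This is why the overview computes the log probability of each sampled action: weighting the log-probability gradients by the trajectory's total reward gives an unbiased estimate of the policy gradient.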
Overview
<syntaxhighlight lang="text">
Initialize neural network with input dimensions = observation dimensions and output dimensions = action dimensions
For # of episodes:
    While not terminated:
        Get observation from environment
        Use policy network to map observation to action distribution
        Randomly sample one action from action distribution
        Compute logarithmic probability of that action occurring
        Step environment using action and store reward
    Calculate loss over entire trajectory as function of probabilities and rewards
</syntaxhighlight>
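A minimal sketch of this loop, assuming PyTorch for the policy network and Gymnasium's CartPole-v1 as a stand-in environment (neither is named in the notes); the network width, learning rate, and episode count are illustrative, and the gradient step at the end is the natural continuation of the loss calculation above:

<syntaxhighlight lang="python">
import gymnasium as gym
import torch
import torch.nn as nn

env = gym.make("CartPole-v1")  # stand-in environment, an assumption
obs_dim = env.observation_space.shape[0]  # input dimensions = observation dimensions
act_dim = env.action_space.n              # output dimensions = action dimensions

# Policy network mapping observations to action logits
policy = nn.Sequential(nn.Linear(obs_dim, 64), nn.Tanh(), nn.Linear(64, act_dim))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-2)

for episode in range(500):  # illustrative episode count
    obs, _ = env.reset()
    log_probs, rewards = [], []
    terminated = truncated = False
    while not (terminated or truncated):
        # Map the observation to an action distribution
        logits = policy(torch.as_tensor(obs, dtype=torch.float32))
        dist = torch.distributions.Categorical(logits=logits)
        action = dist.sample()                   # randomly sample one action
        log_probs.append(dist.log_prob(action))  # log probability of that action
        # Step the environment and store the reward
        obs, reward, terminated, truncated, _ = env.step(action.item())
        rewards.append(reward)

    # Loss over the entire trajectory: negative log probabilities
    # weighted by the total reward R(tau)
    loss = -torch.stack(log_probs).sum() * sum(rewards)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
</syntaxhighlight>

Weighting every log probability by the whole-trajectory reward matches "loss over entire trajectory" in the pseudocode; common refinements such as reward-to-go or a baseline change only how that weight is computed.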