Allen's PPO Notes

From Humanoid Robots Wiki
=== Advantage Function ===
<math> A(s, a) = Q(s, a) - V(s) </math>. Intuitively: the extra reward we get if we take action <math>a</math> at state <math>s</math> compared to the mean reward at that state. We use the advantage function to tell us how good an action is — if it's positive, the action is better than others at that state, so we want to move in that direction; if it's negative, the action is worse than others at that state, so we move in the opposite direction.
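As a minimal sketch of that definition (the Q-values and state below are made-up toy numbers, not from any real environment):

```python
# Toy example of the advantage A(s, a) = Q(s, a) - V(s) for a single state.
# Hypothetical action values for one state s:
q_values = {"left": 1.0, "right": 3.0, "stay": 2.0}  # Q(s, a)
v = sum(q_values.values()) / len(q_values)           # V(s) as the mean over actions

advantage = {a: q - v for a, q in q_values.items()}
# "right" has positive advantage (better than average at s, push probability up),
# "left" has negative advantage (worse than average at s, push probability down).
print(advantage)  # {'left': -1.0, 'right': 1.0, 'stay': 0.0}
```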
=== Motivation ===
 
Intuition: Want to avoid too large of a policy update
 
#Smaller policy updates are more likely to converge to an optimal policy
 
#Falling "off the cliff" might mean it's impossible to recover
 
How we solve this: Measure how much the policy changes w.r.t. the previous one via the probability ratio <math>r_t(\theta) = \frac{\pi_\theta(a_t \mid s_t)}{\pi_{\theta_\text{old}}(a_t \mid s_t)}</math>, and clip this ratio to <math>[1-\varepsilon, 1 + \varepsilon]</math>, removing the incentive to go too far.
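The clipping idea above can be sketched for a single (state, action) sample — a toy illustration, not a full PPO implementation; the probability and advantage values in the usage line are made up:

```python
# Sketch of PPO's clipped surrogate objective for one sampled (state, action).
# pi_new / pi_old: probabilities the new and old policies assign to the action;
# adv: its estimated advantage; eps: the clip range (0.2 is a common default).
def clipped_surrogate(pi_new, pi_old, adv, eps=0.2):
    ratio = pi_new / pi_old                      # r = pi_theta(a|s) / pi_theta_old(a|s)
    clipped = max(1 - eps, min(1 + eps, ratio))  # clamp r to [1 - eps, 1 + eps]
    # Taking the min of the unclipped and clipped terms removes the incentive
    # to push the ratio past the clip range: once r leaves the interval in the
    # direction the advantage favors, the objective stops improving.
    return min(ratio * adv, clipped * adv)

# Positive advantage: gains are capped once the ratio exceeds 1 + eps.
print(clipped_surrogate(pi_new=0.9, pi_old=0.5, adv=2.0))  # 1.2 * 2.0 = 2.4
```

In practice the negative of this quantity, averaged over a batch, is used as the loss to maximize the surrogate by gradient descent.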

Revision as of 19:38, 26 May 2024
