=== Motivation ===
Recall that the objective of Reinforcement Learning is to find an optimal policy <math> \pi^* </math>, which we encode in a neural network with parameters <math>\theta^*</math>. The policy <math> \pi_\theta </math> is a mapping from observations to actions. The optimal parameters are defined as
<math>\theta^* = \text{argmax}_\theta E_{\tau \sim p_\theta(\tau)} \left[ \sum_t r(s_t, a_t) \right] </math>. Let's unpack what this means. Here <math> \tau </math> denotes a trajectory, the sequence of states and actions <math> (s_1, a_1, s_2, a_2, \dots) </math> visited while acting in the environment, and <math> p_\theta(\tau) </math> is the distribution over trajectories induced by following <math> \pi_\theta </math>. In plain English: the optimal policy is the one for which the expected total reward along a trajectory sampled by following that policy is highest over all policies.
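To make the objective concrete, the following sketch estimates <math> E_{\tau \sim p_\theta(\tau)} \left[ \sum_t r(s_t, a_t) \right] </math> by averaging the total reward over sampled trajectories. It assumes a Gymnasium-style <code>env</code> and a <code>policy</code> callable mapping observations to actions; both names are placeholders, not part of any particular library.

<syntaxhighlight lang="python">
import numpy as np

def estimate_expected_return(env, policy, num_trajectories=100, horizon=200):
    """Monte Carlo estimate of E_{tau ~ p_theta(tau)}[ sum_t r(s_t, a_t) ]."""
    returns = []
    for _ in range(num_trajectories):
        obs, _ = env.reset()                # sample an initial state s_1
        total_reward = 0.0
        for _ in range(horizon):
            action = policy(obs)            # pi_theta: observation -> action
            obs, reward, terminated, truncated, _ = env.step(action)
            total_reward += reward          # accumulate sum_t r(s_t, a_t)
            if terminated or truncated:
                break
        returns.append(total_reward)
    # The sample mean over trajectories approximates the expectation
    return float(np.mean(returns))
</syntaxhighlight>

Finding <math>\theta^*</math> then amounts to searching for the parameters that make this estimate as large as possible.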
=== Overview ===
=== Loss Function ===