On the Expressivity of Markov Reward

Reward is the driving force for reinforcement learning (RL) agents. Given its central role in RL, reward is often assumed to be suitably general in its expressivity, as summarized by Sutton and Littman’s reward hypothesis:

“…all of what we mean by goals and purposes can be well thought of as maximization of the expected value of the cumulative sum of a received scalar signal (reward).”

– SUTTON (2004), LITTMAN (2017)

In our work, we take first steps towards a systematic study of this hypothesis. To do so, we consider the following thought experiment involving Alice, a designer, and Bob, a learning agent:

We suppose that Alice thinks of a task she might like Bob to learn to solve – this task could be in the form of a natural language description (“balance this pole”), an imagined state of affairs (“reach any of the winning configurations of a chess board”), or something more traditional like a reward or value function. Then, we imagine that Alice translates her choice of task into some generator that will provide a learning signal (such as reward) to Bob (a learning agent), who will learn from this signal throughout his lifetime. We then ground our study of the reward hypothesis by addressing the following question: given Alice’s choice of task, is there always a reward function that can convey this task to Bob?

What is a task?

To make our study of this question concrete, we first restrict focus to a few kinds of task. In particular, we introduce three task types that we believe capture sensible kinds of tasks: 1) a set of acceptable policies (SOAP), 2) a policy order (PO), and 3) a trajectory order (TO). These three forms of tasks represent concrete instances of the kinds of task we might want an agent to learn to solve.
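As a rough sketch of how these task types can be formalized (our own simplified notation, not necessarily the paper’s exact definitions), let Π be the set of deterministic policies of a finite environment with start state s₀, let a Markov reward function assign a scalar R(s, a) to each state–action pair, and let V^π_R(s₀) denote the resulting value of policy π:

```latex
% SOAP: a designated set of acceptable policies must strictly beat every other policy.
\text{SOAP: } \Pi_G \subseteq \Pi, \quad
  R \text{ captures } \Pi_G \iff
  V^{\pi_g}_R(s_0) > V^{\pi'}_R(s_0)
  \;\; \forall\, \pi_g \in \Pi_G,\; \pi' \in \Pi \setminus \Pi_G.

% PO: a (partial) ordering over policies that the values under R must respect.
\text{PO: } \pi_1 \succ \pi_2 \implies V^{\pi_1}_R(s_0) > V^{\pi_2}_R(s_0).

% TO: a (partial) ordering over trajectories that the returns under R must respect.
\text{TO: } \tau_1 \succ \tau_2 \implies G_R(\tau_1) > G_R(\tau_2),
  \qquad G_R(\tau) = \textstyle\sum_{t \ge 0} \gamma^{t} R(s_t, a_t).
```

Intuitively, a SOAP only says which behaviours are acceptable, while a PO or TO additionally expresses preferences among policies or individual trajectories.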

We then study whether reward is capable of capturing each of these task types in finite environments. Crucially, we restrict attention to Markov reward functions; for instance, given a state space that is sufficient to form the task, such as (x,y) pairs in a grid world, is there a reward function that depends only on this same state space that can capture the task?

First Main Result

Our first main result shows that for each of the three task types, there are environment-task pairs for which there is no Markov reward function that can capture the task. One example of such a pair is the “go all the way around the grid clockwise or counterclockwise” task in a typical grid world:

This task is naturally captured by a SOAP that consists of two acceptable policies: the “clockwise” policy (in blue) and the “counterclockwise” policy (in purple). For a Markov reward function to express this task, it would need to make these two policies strictly higher in value than all other deterministic policies. However, there is no such Markov reward function: the optimality of a single “move clockwise” action will depend on whether the agent was already moving in that direction in the past. Since the reward function must be Markov, it cannot convey this kind of information. Similar examples show that Markov reward cannot capture every policy order and trajectory order, either.
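To make the obstruction concrete, consider a stripped-down variant of this example (our own simplification to a three-state loop, not the argument given in the paper): states {0, 1, 2}, start state 0, a clockwise action (s → s+1 mod 3) and a counterclockwise action (s → s−1 mod 3), discount γ ∈ (0, 1), and the SOAP consisting of the always-clockwise and always-counterclockwise policies. Writing x_s and y_s for the Markov rewards of the clockwise and counterclockwise actions at state s, a short calculation shows the required strict inequalities cannot all hold:

```latex
% Start-state values of the two acceptable loop policies:
V^{\mathrm{cw}}(0) = \frac{x_0 + \gamma x_1 + \gamma^2 x_2}{1-\gamma^3},
\qquad
V^{\mathrm{ccw}}(0) = \frac{y_0 + \gamma y_2 + \gamma^2 y_1}{1-\gamma^3}.

% Two families of unacceptable policies: leave the start and then shuttle between the
% two far states, or shuttle between the start and an adjacent state.
V^{\mathrm{far}}_{\mathrm{cw}}(0) = x_0 + \frac{\gamma x_1 + \gamma^2 y_2}{1-\gamma^2},
\qquad
V^{\mathrm{far}}_{\mathrm{ccw}}(0) = y_0 + \frac{\gamma y_2 + \gamma^2 x_1}{1-\gamma^2},

V^{\mathrm{near}}_{\mathrm{cw}}(0) = \frac{x_0 + \gamma y_1}{1-\gamma^2},
\qquad
V^{\mathrm{near}}_{\mathrm{ccw}}(0) = \frac{y_0 + \gamma x_2}{1-\gamma^2}.

% Abbreviate S_0 = x_0 + y_0, S_1 = x_1 + y_2, S_2 = x_2 + y_1.
% Summing V^cw(0) > V^far_cw(0) with V^ccw(0) > V^far_ccw(0) simplifies to
\gamma S_0 + S_2 > (1+\gamma)\, S_1,
% while summing V^cw(0) > V^near_cw(0) with V^ccw(0) > V^near_ccw(0) simplifies to
(1+\gamma)\, S_1 > \gamma S_0 + S_2.
% These contradict each other, so no Markov reward makes both loop policies strictly
% best from the start state.
```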

Second Main Result

Given that some tasks can be captured and some cannot, we next explore whether there is an efficient procedure for determining whether a given task can be captured by reward in a given environment. Further, if there is a reward function that captures the given task, we would ideally like to be able to output such a reward function. Our second result is a positive result which says that for any finite environment-task pair, there is a procedure that can 1) decide whether the task can be captured by Markov reward in the given environment, and 2) output the desired reward function that exactly conveys the task, when such a function exists.
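To give a flavor of how such a procedure can work, here is a minimal sketch for the SOAP case (our own illustration under the simplified notion of “capture” above, not the paper’s algorithm): a policy’s start-state value is linear in the reward vector, so the strict-dominance constraints form a linear program over the entries of R(s, a), which an off-the-shelf LP solver can decide. The environment encoding, the `margin` parameter, and the brute-force enumeration of deterministic policies are simplifying assumptions made here for illustration only.

```python
import itertools

import numpy as np
from scipy.optimize import linprog


def policy_value_coeffs(P, policy, gamma, s0):
    """Coefficients c with V^pi(s0) = c . r, where r[s * A + a] = R(s, a).

    P is an (S, A, S) transition tensor and `policy` a length-S array of actions.
    Uses V^pi = (I - gamma * P_pi)^{-1} r_pi, so the value is linear in the reward.
    """
    S, A, _ = P.shape
    P_pi = P[np.arange(S), policy]                       # (S, S) transitions under pi
    occ = np.linalg.solve((np.eye(S) - gamma * P_pi).T,  # discounted occupancies from s0
                          np.eye(S)[s0])
    c = np.zeros(S * A)
    c[np.arange(S) * A + policy] = occ                   # reward is collected on chosen actions only
    return c


def design_soap_reward(P, good_policies, gamma=0.9, s0=0, margin=1e-3):
    """Search for a Markov reward R(s, a) in [-1, 1] under which every policy in
    `good_policies` beats every other deterministic policy's start-state value by
    at least `margin`. Returns R as an (S, A) array, or None if the LP is infeasible.

    Enumerating all A**S deterministic policies keeps the sketch simple, so this
    is only meant for tiny environments.
    """
    S, A, _ = P.shape
    good = {tuple(p) for p in good_policies}
    A_ub, b_ub = [], []
    for pg in good_policies:
        cg = policy_value_coeffs(P, np.array(pg), gamma, s0)
        for pb in itertools.product(range(A), repeat=S):
            if pb in good:
                continue
            cb = policy_value_coeffs(P, np.array(pb), gamma, s0)
            A_ub.append(cb - cg)                         # V^pb(s0) - V^pg(s0) <= -margin
            b_ub.append(-margin)
    res = linprog(c=np.zeros(S * A), A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(-1.0, 1.0)] * (S * A), method="highs")
    return res.x.reshape(S, A) if res.success else None


# Tiny loop: action 0 moves "clockwise" (s -> s+1 mod 3), action 1 "counterclockwise".
S, A = 3, 2
P = np.zeros((S, A, S))
for s in range(S):
    P[s, 0, (s + 1) % S] = 1.0
    P[s, 1, (s - 1) % S] = 1.0

# The "go all the way around, in either direction" SOAP: the LP is infeasible.
print(design_soap_reward(P, [(0, 0, 0), (1, 1, 1)]))   # -> None

# The "go around clockwise only" SOAP is expressible, and a reward is returned.
print(design_soap_reward(P, [(0, 0, 0)]))              # -> a 3x2 array of rewards
```

On this loop environment the procedure reports that no Markov reward realizes the two-direction SOAP, mirroring the grid example above, while a single-direction SOAP is realizable and a suitable reward function is produced.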

This work establishes preliminary pathways towards understanding the scope of the reward hypothesis, but there is much still to be done to generalize these results beyond finite environments, Markov rewards, and simple notions of “task” and “expressivity”. We hope this work provides new conceptual perspectives on reward and its place in reinforcement learning.
