Prediction and Intelligence
Barbados Reinforcement Learning Workshop 2019
The 12th Barbados Workshop on Reinforcement Learning took place February 8-15, 2019, at McGill's Bellairs Institute in Holetown, Barbados. This year's theme was Prediction and Intelligence; the workshop was organized by Doina Precup, Rich Sutton, Alexandra Kearney, Brian Tanner, and Joseph Modayil.
Learned predictions play many roles in theories of intelligence. The value functions prominent in reinforcement learning are predictions. Reward-prediction error mediated by dopamine is at the center of biological models of decision making.
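As a concrete reminder of what these predictions are (standard notation, not taken from the announcement): the state-value function predicts discounted future reward, and the temporal-difference error plays the role of a reward-prediction error.

```latex
% State-value function: the prediction of discounted future reward under policy \pi
v_\pi(s) = \mathbb{E}_\pi\!\left[ \sum_{k=0}^{\infty} \gamma^k R_{t+k+1} \;\middle|\; S_t = s \right]

% Temporal-difference (reward-prediction) error for an estimate \hat{v}
\delta_t = R_{t+1} + \gamma\, \hat{v}(S_{t+1}) - \hat{v}(S_t)
```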
Predictions have been proposed as representations of state and as models of the transition dynamics of the world. Certainly much, and some would say all, of an intelligent agent’s knowledge of the world is grounded in predictions, and it is often held that the substance of any scientific hypothesis is the predictions that it makes for the outcomes of experiments.
In reinforcement learning, predictive subtasks have been proposed for organizing the incremental construction of minds and their understanding of their worlds. In particular, predictive state representations (OOMs, GLMs, TD networks), general value functions (GVFs, forecasts), and options have been proposed as a language for representing, learning, and discovering knowledge. Once acquired, predictions could be used in action selection and in planning/reasoning.
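To make the general-value-function idea concrete, here is a minimal sketch (an illustration with assumed names such as `GVF`, `cumulant`, and `gamma_next`, not code from the workshop): a single linear TD(0) predictor of a discounted cumulant signal, the kind of unit a Horde-style architecture runs many of in parallel. It is written on-policy for simplicity; the Horde setting typically learns such predictions off-policy with gradient-TD methods.

```python
import numpy as np

class GVF:
    """One general value function: predicts the discounted sum of a cumulant signal."""

    def __init__(self, num_features, step_size=0.1):
        self.w = np.zeros(num_features)   # linear weights over the feature vector
        self.alpha = step_size            # TD step-size parameter

    def predict(self, x):
        # Current prediction for feature vector x
        return float(np.dot(self.w, x))

    def update(self, x, cumulant, gamma_next, x_next):
        # TD error: cumulant plus discounted next prediction, minus current prediction
        delta = cumulant + gamma_next * self.predict(x_next) - self.predict(x)
        # Semi-gradient TD(0) update of the linear weights
        self.w += self.alpha * delta * x
        return delta

# Usage: one GVF per predictive question; a "horde" is simply many such
# predictors updated in parallel from the same stream of experience.
gvf = GVF(num_features=4)
x, x_next = np.array([1., 0., 0., 0.]), np.array([0., 1., 0., 0.])
gvf.update(x, cumulant=1.0, gamma_next=0.9, x_next=x_next)
print(gvf.predict(x))
```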
The purpose of the workshop was to explore the capabilities and limitations of such ideas. Appropriate topics for talks at the workshop included, but were not limited to:
- Formal theories of predictive knowledge:
  - General value functions
  - Predictive state representations
  - Architectures of many predictions, such as Horde
  - Off-policy learning
  - Temporal-difference networks
- Worked examples of predictive knowledge in robots or simulations
- Thought experiments in structuring abstractions in predictive terms
- Relationships between prediction and action
- Planning with predictive models of the world