Agent Foundations - Clearer Thinking for Messy Minds
What is the nature of goal-directed agency and its implications for AI alignment?
What is the relevance of classic research into agency to AI alignment? What direction should future agency research go in?
The first "Agent Foundations for AI Alignment" workshop took place in October 2023. The theme of this workshop was to examine the theoretical underpinnings of Bayesian expected utility maximizers, see where the conditions need to be relaxed to accommodate more realistic models of agency and consider what consequences this may have for understanding how agents form, observe, believe, desire, act, interact, aggregate and reflect. Researchers from academia, industry and the wider community came together over a shared interest in the fundamental nature of goal-directed agency, and its application to the problem of ensuring current and future AI systems are safe and beneficial. Find out more, including talk recordings, here.