
My Research:

During my time at the University of Wyoming, I studied artificial intelligence under Jeff Clune. More specifically, I researched the biological mechanisms of animal curiosity and intrinsic motivation and tried to reproduce them in machines to create more effective and robust AI agents.

Natural animals are capable of a wide variety of interesting behaviors and skills: cheetahs have been known to use cars as vantage points, three-legged dogs can learn to catch frisbees, and humans can produce art, build and use complex tools, and create websites like this one. It is thought that many of these behaviors result from an intrinsic motivation to explore our environments (e.g. through curiosity). For example, many juvenile animals—including human babies—spend a large quantity of time playing. Through play, animals can experiment with new behaviors (crawling, running, jumping, etc.) and play with objects they do not understand. It is thought that this intrinsic motivation to explore and play allows animals to acquire a diverse set of interesting, skillful behaviors that aid in survival.

Unfortunately, the agents produced by current AI techniques are rarely so robust. Many AI algorithms produce specialists, which can only perform a small set of tasks (e.g. recognizing images), even if they perform those tasks very well. In contrast, natural animals are generalists: capable of recognizing images, locomoting through difficult terrain, and adapting to injury and unforeseen circumstances all at once. We would ultimately like AI agents to exhibit a similar level of adaptability and generality.

My work introduces Curiosity Search, an algorithm that attempts to reproduce the intrinsic motivation of animals by encouraging AI agents to "do something new". This algorithm rewards agents for expressing as many novel behaviors as possible within their lifetime. I showed that Curiosity Search increases domain exploration and boosts skill acquisition in a simulated 2-dimensional maze, in which the agent must learn how to open different doors. This work was published in PLoS ONE.
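The core idea of rewarding novel within-lifetime behaviors can be sketched in a few lines of Python. This is only an illustrative toy, not the paper's implementation; the class name, the use of grid cells as the "behavior" being tracked, and the reward value of 1.0 are all assumptions made for the example.

```python
class NoveltyBonus:
    """Toy intra-life novelty reward: pay out the first time a behavior
    (e.g. visiting a grid cell) is expressed in the current lifetime."""

    def __init__(self):
        self.seen = set()  # behaviors already expressed this lifetime

    def reward(self, behavior):
        """Return 1.0 the first time `behavior` occurs this lifetime, else 0.0."""
        if behavior not in self.seen:
            self.seen.add(behavior)
            return 1.0
        return 0.0

    def reset(self):
        """Clear the memory at the start of a new lifetime."""
        self.seen.clear()


# Example: an agent visiting cells of a maze.
bonus = NoveltyBonus()
print(bonus.reward((0, 0)))  # first visit to (0, 0): novel, pays 1.0
print(bonus.reward((0, 0)))  # revisiting pays nothing, pushing exploration
print(bonus.reward((0, 1)))  # a new cell pays again
```

The key property is that the reward is tied to the agent's own lifetime, so each new lifetime starts with a clean slate and must re-explore to earn reward.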

More recently, I scaled up Curiosity Search to deep reinforcement learning in Atari games. As of 2018, Curiosity Search was able to match Google DeepMind's state-of-the-art performance on Montezuma's Revenge (an extremely difficult game for AI agents to solve), while also improving performance on several other hard-exploration games. This work was accepted to the NIPS Deep Reinforcement Learning Workshop and presented as a poster.

Publications:

Stanton C, Clune J (2018). Deep curiosity search: Intra-life exploration can improve performance on Atari games. NIPS Deep Reinforcement Learning Workshop. PDF. Poster. Data/Videos. Code.

Stanton C, Clune J (2016). Curiosity search: Producing generalists by encouraging individuals to continually explore and acquire skills throughout their lifetime. PLoS ONE 11(9): e0162235. PDF. Data Set.