Jonathan Richard Schwarz

I'm a Research Engineer at DeepMind, where I work with Yee Whye Teh, Razvan Pascanu and many others on problems at the intersection of probabilistic modelling and meta/continual learning. A central question of my research is how to learn meaningful prior information from related problems. I'm also interested in Bayesian optimisation and approximate inference.

Previously, I obtained a Master's degree in Artificial Intelligence from the University of Edinburgh (top of the class, 2016), during which I worked on deep generative models for semi-supervised learning and sequential data modelling. I've also spent time working on robot navigation at the Fraunhofer IPA in Stuttgart, Germany, and on climate research with Takashi Nakajima at Tokai University, Japan.

Email  /  Google Scholar  /  Twitter  /  LinkedIn  /  Full CV: on request


Selected papers are highlighted.

Recent Preprints


Meta-Learning Surrogate Models for Sequential Decision Making
Jonathan Schwarz*, Alexandre Galashov*, Hyunjik Kim, Marta Garnelo, David Saxton, Pushmeet Kohli, SM Ali Eslami°, Yee Whye Teh°

ICLR 2019 Workshop on Structure & Priors in Reinforcement Learning

*, ° : Joint first/senior authorship


Functional Regularisation for Continual Learning using Gaussian Processes
Michalis K. Titsias, Jonathan Schwarz, Alexander G. de G. Matthews, Razvan Pascanu, Yee Whye Teh

All papers

Empirical Evaluation of Neural Process Objectives
Tuan Anh Le, Hyunjik Kim, Marta Garnelo, Dan Rosenbaum, Jonathan Schwarz, Yee Whye Teh

NeurIPS 2018 Workshop on Bayesian Deep Learning


Information asymmetry in KL-regularized RL
Alexandre Galashov, Siddhant M Jayakumar, Leonard Hasenclever, Dhruva Tirumala, Jonathan Schwarz, Guillaume Desjardins, Wojciech M Czarnecki, Yee Whye Teh, Razvan Pascanu, Nicolas Heess

International Conference on Learning Representations (ICLR) 2019


Attentive Neural Processes
Hyunjik Kim, Andriy Mnih, Jonathan Schwarz, Marta Garnelo, SM Ali Eslami, Dan Rosenbaum, Oriol Vinyals, Yee Whye Teh

International Conference on Learning Representations (ICLR) 2019



Neural Processes
Marta Garnelo, Jonathan Schwarz, Dan Rosenbaum, Fabio Viola, Danilo J Rezende, SM Ali Eslami, Yee Whye Teh

ICML 2018 Workshop on Theoretical Foundations and Applications of Deep Generative Models (Spotlight talk)

Code / Talk (credit to Marta)


Progress & Compress: A scalable framework for continual learning
Jonathan Schwarz, Jelena Luketina, Wojciech M. Czarnecki, Agnieszka Grabska-Barwinska, Yee Whye Teh, Raia Hadsell°, Razvan Pascanu°

International Conference on Machine Learning (ICML) 2018 (Long Oral Presentation)


° : Joint senior authorship


The NarrativeQA Reading Comprehension Challenge
Tomas Kocisky, Jonathan Schwarz, Phil Blunsom, Chris Dyer, Karl Moritz Hermann, Gabor Melis, Edward Grefenstette

Transactions of the Association for Computational Linguistics (TACL) 2018

Dataset / Talk (credit to Tomas)


A Recurrent Variational Autoencoder for Human Motion Synthesis
Ikhsanul Habibie, Daniel Holden, Jonathan Schwarz, Joe Yearsley, Taku Komura

British Machine Vision Conference (BMVC) 2017

Code / Dataset

Based on Jon Barron's website.