DeepMind’s latest research at NeurIPS 2022

Advancing best-in-class large models, compute-optimal RL agents, and more transparent, ethical, and fair AI systems

The thirty-sixth International Conference on Neural Information Processing Systems (NeurIPS 2022) is taking place from 28 November – 9 December 2022 as a hybrid event, based in New Orleans, USA.

NeurIPS is the world’s largest conference in artificial intelligence (AI) and machine learning (ML), and we’re proud to support the event as Diamond sponsors, helping foster the exchange of research advances in the AI and ML community.

Teams from across DeepMind are presenting 47 papers, including 35 external collaborations, in virtual panels and poster sessions. Here’s a brief introduction to some of the research we’re presenting:

Best-in-class large models

Large models (LMs) – generative AI systems trained on huge amounts of data – have delivered incredible performance in areas including language, text, audio, and image generation. Part of their success is down to their sheer scale.

However, in Chinchilla, we’ve created a 70 billion parameter language model that outperforms many larger models, including Gopher. We updated the scaling laws of large models, showing that previously trained models were too large for the amount of training performed. This work has already shaped other models that follow these updated rules, creating leaner, better models, and won an Outstanding Main Track Paper award at the conference.
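As a rough sketch of the compute-optimal recipe (the constants below are approximate values from the Chinchilla analysis): for a fixed compute budget, parameter count and training tokens should be scaled in roughly equal proportion. Chinchilla (70B parameters, ~1.4 trillion tokens) was trained with the same compute budget as the larger Gopher (280B parameters, ~300 billion tokens).

```latex
% Approximate compute-optimal scaling, per the Chinchilla analysis.
% C: training compute (FLOPs), N: parameters, D: training tokens.
C \approx 6ND, \qquad
N_{\mathrm{opt}} \propto C^{a}, \quad
D_{\mathrm{opt}} \propto C^{b}, \qquad
a \approx b \approx 0.5
```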

Building upon Chinchilla and our multimodal models NFNets and Perceiver, we also present Flamingo, a family of few-shot learning visual language models. Handling images, videos, and textual data, Flamingo represents a bridge between vision-only and language-only models. A single Flamingo model sets a new state of the art in few-shot learning on a wide range of open-ended multimodal tasks.

And yet, scale and architecture aren’t the only factors that matter for the power of transformer-based models. Data properties also play a significant role, which we discuss in a presentation on the data properties that promote in-context learning in transformer models.

Optimising reinforcement learning

Reinforcement learning (RL) has shown great promise as an approach to creating generalised AI systems that can address a wide range of complex tasks. It has led to breakthroughs in many domains, from Go to mathematics, and we’re always looking for ways to make RL agents smarter and leaner.

We introduce a new approach that boosts the decision-making abilities of RL agents in a compute-efficient way by drastically expanding the scale of information available for their retrieval.
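As a toy illustration of the general retrieval idea (the names and the exact nearest-neighbour scheme here are our own simplification, not the paper’s architecture): the agent embeds its current observation, looks up similar situations in a large external dataset of past experience, and conditions its policy on what it finds.

```python
import numpy as np

def retrieve(query_embedding, dataset_embeddings, dataset_records, k=4):
    """Nearest-neighbour lookup over a large dataset of past experience.

    A toy stand-in for large-scale retrieval: real systems use approximate
    nearest-neighbour indices to search billions of records efficiently.
    """
    # Cosine similarity between the query and every stored embedding.
    sims = dataset_embeddings @ query_embedding
    sims /= np.linalg.norm(dataset_embeddings, axis=1) * np.linalg.norm(query_embedding)
    top_k = np.argsort(-sims)[:k]
    return [dataset_records[i] for i in top_k]

def act(policy, encoder, observation, dataset_embeddings, dataset_records):
    """Condition the policy on the observation plus retrieved context."""
    query = encoder(observation)
    context = retrieve(query, dataset_embeddings, dataset_records)
    return policy(observation, context)
```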

We’ll also showcase a conceptually simple yet general approach for curiosity-driven exploration in visually complex environments – an RL agent called BYOL-Explore. It achieves superhuman performance while being robust to noise and much simpler than prior work.
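At a high level, BYOL-Explore derives its exploration bonus from the prediction error of a learned latent world model: transitions the model cannot yet predict earn a higher intrinsic reward, pushing the agent towards unfamiliar parts of the environment. The sketch below is a minimal illustration of that idea under our own naming, not a faithful reimplementation:

```python
import numpy as np

def intrinsic_reward(predict_latent, encode_target, obs, action, next_obs):
    """Curiosity bonus: the world model's latent prediction error.

    `predict_latent` is the online world model and `encode_target` a
    slow-moving target encoder, in the spirit of BYOL-style objectives.
    """
    prediction = predict_latent(obs, action)   # predicted embedding of next_obs
    target = encode_target(next_obs)           # target embedding of next_obs

    # Squared L2 distance between the unit-normalised vectors.
    p = prediction / np.linalg.norm(prediction)
    t = target / np.linalg.norm(target)
    return float(np.sum((p - t) ** 2))

def total_reward(extrinsic, intrinsic, beta=0.1):
    """Mix task reward with the exploration bonus (beta is a toy weight)."""
    return extrinsic + beta * intrinsic
```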

Algorithmic advances

From compressing data to running simulations for predicting the weather, algorithms are a fundamental part of modern computing. And so, incremental improvements can have an enormous impact when operating at scale, helping to save energy, time, and money.

We share a radically new and highly scalable method for the automatic configuration of computer networks, based on neural algorithmic reasoning, showing that our highly flexible approach is up to 490 times faster than the current state of the art, while satisfying the majority of the input constraints.

During the same session, we also present a rigorous exploration of the previously theoretical notion of “algorithmic alignment”, highlighting the nuanced relationship between graph neural networks and dynamic programming, and how best to combine them for optimising out-of-distribution performance.
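To see why graph neural networks and dynamic programming fit together, note that one Bellman-Ford relaxation step for shortest paths has exactly the shape of a message-passing step: each node aggregates messages from its neighbours, here with a fixed min instead of a learned reduction. A minimal illustration:

```python
def bellman_ford_step(dist, edges):
    """One dynamic-programming relaxation, written as message passing.

    dist:  current shortest-distance estimate per node.
    edges: list of (u, v, w) directed edges with weight w.

    Each node v "receives a message" dist[u] + w from every in-neighbour u
    and aggregates with min -- structurally the same as a GNN layer whose
    message and update functions are learned rather than fixed.
    """
    new_dist = dict(dist)
    for u, v, w in edges:
        new_dist[v] = min(new_dist[v], dist[u] + w)
    return new_dist

# Toy usage: shortest distances from node 0.
edges = [(0, 1, 4.0), (0, 2, 1.0), (2, 1, 2.0), (1, 3, 1.0)]
dist = {0: 0.0, 1: float("inf"), 2: float("inf"), 3: float("inf")}
for _ in range(3):  # |V| - 1 relaxation rounds
    dist = bellman_ford_step(dist, edges)
print(dist)  # {0: 0.0, 1: 3.0, 2: 1.0, 3: 4.0}
```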

Pioneering responsibly

At the heart of DeepMind’s mission is our commitment to acting as responsible pioneers in the field of AI. We’re committed to developing AI systems that are transparent, ethical, and fair.

Explaining and understanding the behaviour of complex AI systems is an essential part of creating fair, transparent, and accurate systems. We offer a set of desiderata that capture these ambitions, and describe a practical way to meet them, which involves training an AI system to build a causal model of itself, enabling it to explain its own behaviour in a meaningful way.

To act safely and ethically in the world, AI agents must be able to reason about harm and avoid harmful actions. We’ll introduce collaborative work on a novel statistical measure called counterfactual harm, and demonstrate how it overcomes problems with standard approaches to avoiding harmful policies.
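One simplified way to write down such a counterfactual contrast (our own illustrative form, not necessarily the paper’s exact definition): an action a is harmful relative to a default action a′ to the extent that, in the counterfactual world where a′ had been taken instead, the outcome would have been better.

```latex
% Illustrative counterfactual-harm contrast (our simplification):
% Y_a is the outcome under the chosen action a, Y_{a'} the counterfactual
% outcome under a default action a', and U a utility over outcomes.
\mathrm{harm}(a) \;=\; \mathbb{E}\left[\max\bigl(0,\; U(Y_{a'}) - U(Y_a)\bigr)\right]
```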

Finally, we’re presenting our new paper, which proposes ways to diagnose and mitigate failures in model fairness caused by distribution shifts, showing how important these issues are for the deployment of safe ML technologies in healthcare settings.

See the full range of our work at NeurIPS 2022 here.
