DeepMind engineers accelerate our research by building tools, scaling up algorithms, and creating challenging virtual and physical worlds for training and testing artificial intelligence (AI) systems. As part of this work, we constantly evaluate new machine learning libraries and frameworks.
Recently, we've found that an increasing number of projects are well served by JAX, a machine learning framework developed by Google Research teams. JAX resonates well with our engineering philosophy and has been widely adopted by our research community over the last year. Here we share our experience of working with JAX, outline why we find it useful for our AI research, and give an overview of the ecosystem we are building to support researchers everywhere.
Why JAX?
JAX is a Python library designed for high-performance numerical computing, especially machine learning research. Its API for numerical functions is based on NumPy, a collection of functions used in scientific computing. Both Python and NumPy are widely used and familiar, making JAX simple, flexible, and easy to adopt.
In addition to its NumPy API, JAX includes an extensible system of composable function transformations that help support machine learning research, including:
- Differentiation: Gradient-based optimisation is fundamental to ML. JAX natively supports both forward and reverse mode automatic differentiation of arbitrary numerical functions, via function transformations such as grad, hessian, jacfwd and jacrev.
- Vectorisation: In ML research we often apply a single function to lots of data, e.g. calculating the loss across a batch or evaluating per-example gradients for differentially private learning. JAX provides automatic vectorisation via the vmap transformation, which simplifies this kind of programming. For example, researchers need not reason about batching when implementing new algorithms. JAX also supports large-scale data parallelism via the related pmap transformation, elegantly distributing data that is too large for the memory of a single accelerator.
- JIT-compilation: XLA is used to just-in-time (JIT)-compile and execute JAX programs on GPU and Cloud TPU accelerators. JIT-compilation, together with JAX's NumPy-consistent API, allows researchers with no previous experience in high-performance computing to easily scale to one or many accelerators (see the sketch just after this list).
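To give a flavour of how these transformations compose, here is a minimal sketch; the toy loss function and array shapes are invented for illustration, while grad, vmap and jit are the JAX transformations described above:

```python
import jax
import jax.numpy as jnp

# Squared error for a single example (an illustrative toy loss).
def loss(w, x, y):
    return (jnp.dot(x, w) - y) ** 2

# Compose the transformations: differentiate with respect to w,
# vectorise over a batch of (x, y) pairs, then JIT-compile via XLA.
per_example_grads = jax.jit(jax.vmap(jax.grad(loss), in_axes=(None, 0, 0)))

w = jnp.zeros(3)
xs, ys = jnp.ones((8, 3)), jnp.zeros(8)
grads = per_example_grads(w, xs, ys)  # one gradient per example: shape (8, 3)
```

Note how per-example gradients, which would require careful manual batching in many frameworks, fall out of simply composing vmap with grad.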
We have found that JAX has enabled rapid experimentation with novel algorithms and architectures, and it now underpins many of our recent publications. To learn more, please consider joining our JAX Roundtable, Wednesday December 9th 7:00pm GMT, at the NeurIPS virtual conference.
JAX at DeepMind
Supporting state-of-the-art AI research means balancing rapid prototyping and quick iteration with the ability to deploy experiments at a scale traditionally associated with production systems. What makes these kinds of projects particularly challenging is that the research landscape evolves rapidly and is difficult to forecast. At any point, a new research breakthrough may, and regularly does, change the trajectory and requirements of entire teams. Within this ever-changing landscape, a core responsibility of our engineering team is to make sure that the lessons learned and the code written for one research project are reused effectively in the next.
One approach that has proven successful is modularisation: we extract the most important and critical building blocks developed in each research project into well-tested and efficient components. This empowers researchers to focus on their research while also benefiting from code reuse, bug fixes and performance improvements in the algorithmic ingredients implemented by our core libraries. We've also found that it's important to make sure that each library has a clearly defined scope and to ensure that they're interoperable but independent. Incremental buy-in, the ability to pick and choose features without being locked into others, is critical to providing maximum flexibility for researchers and always supporting them in choosing the right tool for the job.
Other considerations that have gone into the development of our JAX Ecosystem include making sure that it remains consistent (where possible) with the design of our existing TensorFlow libraries (e.g. Sonnet and TRFL). We've also aimed to build components that (where relevant) match their underlying mathematics as closely as possible, to be self-descriptive and minimise mental hops "from paper to code". Finally, we've chosen to open source our libraries to facilitate sharing of research outputs and to encourage the wider community to explore the JAX Ecosystem.
Our Ecosystem today
Haiku

The JAX programming model of composable function transformations can make dealing with stateful objects complicated, e.g. neural networks with trainable parameters. Haiku is a neural network library that allows users to use familiar object-oriented programming models while harnessing the power and simplicity of JAX's pure functional paradigm.
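Here is a minimal sketch of that pattern; the MLP sizes and inputs are invented for illustration. hk.transform converts code written with stateful Haiku modules into a pair of pure functions that fit JAX's functional model:

```python
import haiku as hk
import jax
import jax.numpy as jnp

# Object-oriented module code: an MLP with trainable parameters.
def forward(x):
    return hk.nets.MLP([128, 10])(x)

# hk.transform turns it into a pure init/apply pair.
model = hk.transform(forward)

rng = jax.random.PRNGKey(42)
x = jnp.ones([8, 32])
params = model.init(rng, x)           # parameters as an explicit JAX tree
logits = model.apply(params, rng, x)  # a pure function of params and inputs
```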
Haiku is actively used by hundreds of researchers across DeepMind and Google, and has already found adoption in several external projects (e.g. Coax, DeepChem, NumPyro). It builds on the API for Sonnet, our module-based programming model for neural networks in TensorFlow, and we've aimed to make porting from Sonnet to Haiku as simple as possible.
Find out more on GitHub
Optax

Gradient-based optimisation is fundamental to ML. Optax provides a library of gradient transformations, together with composition operators (e.g. chain) that allow implementing many standard optimisers (e.g. RMSProp or Adam) in just a single line of code.
The compositional nature of Optax naturally supports recombining the same basic ingredients in custom optimisers. It additionally offers a number of utilities for stochastic gradient estimation and second-order optimisation.
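For example, a custom optimiser can be assembled by chaining transformations. In this minimal sketch (the clipping threshold, learning rate and parameter tree are arbitrary), gradient clipping is composed with Adam:

```python
import jax.numpy as jnp
import optax

# Chain two gradient transformations: clip, then apply Adam scaling.
optimiser = optax.chain(
    optax.clip_by_global_norm(1.0),
    optax.adam(learning_rate=1e-3),
)

params = {'w': jnp.zeros(3)}  # any JAX tree of parameters works
opt_state = optimiser.init(params)

grads = {'w': jnp.ones(3)}    # gradients, e.g. from jax.grad
updates, opt_state = optimiser.update(grads, opt_state, params)
params = optax.apply_updates(params, updates)
```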
Many Optax users have adopted Haiku, but in line with our incremental buy-in philosophy, any library representing parameters as JAX tree structures is supported (e.g. Elegy, Flax and Stax). Please see here for more information on this rich ecosystem of JAX libraries.
Find out more on GitHub
RLax

Many of our most successful projects are at the intersection of deep learning and reinforcement learning (RL), also known as deep reinforcement learning. RLax is a library that provides useful building blocks for constructing RL agents.
The components in RLax cover a broad spectrum of algorithms and ideas: TD-learning, policy gradients, actor critics, MAP, proximal policy optimisation, non-linear value transformation, general value functions, and a number of exploration methods.
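As a small sketch of this building-block style (the numbers are invented), a one-step TD error can be computed directly; batching over a trajectory is then added with JAX's own vmap transformation rather than baked into the library:

```python
import jax
import jax.numpy as jnp
import rlax

# One-step TD error for a single transition (illustrative values).
td_error = rlax.td_learning(
    v_tm1=jnp.array(1.0),        # value estimate at time t-1
    r_t=jnp.array(0.5),          # reward at time t
    discount_t=jnp.array(0.99),  # discount at time t
    v_t=jnp.array(1.2),          # value estimate at time t
)

# Batched over a trajectory of 8 transitions via vmap.
batched_td = jax.vmap(rlax.td_learning)(
    jnp.ones(8), jnp.ones(8), jnp.full(8, 0.99), jnp.ones(8))
```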
Although some introductory example agents are provided, RLax is not intended as a framework for building and deploying complete RL agent systems. One example of a fully-featured agent framework that builds upon RLax components is Acme.
Find out more on GitHub
Chex

Testing is critical to software reliability, and research code is no exception. Drawing scientific conclusions from research experiments requires being confident in the correctness of your code. Chex is a collection of testing utilities used by library authors to verify that common building blocks are correct and robust, and by end-users to check their experimental code.
Chex provides an assortment of utilities including JAX-aware unit testing, assertions on properties of JAX datatypes, mocks and fakes, and multi-device test environments. Chex is used throughout DeepMind's JAX Ecosystem and by external projects such as Coax and MineRL.
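For instance, a few Chex assertions can guard the shapes and dtypes flowing through experimental code. This is a minimal sketch with an invented attention-weight function:

```python
import chex
import jax
import jax.numpy as jnp

def attention_weights(q, k):
    w = jax.nn.softmax(q @ k.T)
    # Verify properties of the JAX array before it propagates further.
    chex.assert_rank(w, 2)                           # weights form a matrix
    chex.assert_shape(w, (q.shape[0], k.shape[0]))   # queries x keys
    return w

w = attention_weights(jnp.ones((4, 8)), jnp.ones((6, 8)))
chex.assert_type(w, jnp.float32)                     # dtype check
```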
Find out more on GitHub
Jraph

Graph neural networks (GNNs) are an exciting area of research with many promising applications. See, for instance, our recent work on traffic prediction in Google Maps and our work on physics simulation. Jraph (pronounced "giraffe") is a lightweight library to support working with GNNs in JAX.
Jraph provides a standardised data structure for graphs, a set of utilities for working with graphs, and a 'zoo' of easily forkable and extensible graph neural network models. Other key features include: batching of GraphsTuples that efficiently leverages hardware accelerators, JIT-compilation support for variable-shaped graphs via padding and masking, and losses defined over input partitions. Like Optax and our other libraries, Jraph places no constraints on the user's choice of a neural network library.
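As a minimal sketch (the feature sizes are invented), a single graph with three nodes and two directed edges can be expressed with the GraphsTuple data structure:

```python
import jax.numpy as jnp
import jraph

# A single graph with 3 nodes and 2 directed edges (0 -> 2 and 1 -> 2).
graph = jraph.GraphsTuple(
    nodes=jnp.ones((3, 4)),       # 3 nodes, 4 features each
    edges=jnp.ones((2, 5)),       # 2 edges, 5 features each
    senders=jnp.array([0, 1]),    # source node of each edge
    receivers=jnp.array([2, 2]),  # target node of each edge
    n_node=jnp.array([3]),        # node counts per graph (here, one graph)
    n_edge=jnp.array([2]),        # edge counts per graph
    globals=jnp.ones((1, 6)),     # one global feature vector
)
```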
Learn more about using the library from our rich collection of examples.
Find out more on GitHub
Our JAX Ecosystem is constantly evolving, and we encourage the ML research community to explore our libraries and the potential of JAX to accelerate their own research.