An empirical analysis of compute-optimal large language model training

In the last few years, a focus in language modelling has been on improving performance by increasing the number of parameters in transformer-based models. This approach has led to impressive results and state-of-the-art performance across many natural language processing tasks.

We have also pursued this line of research at DeepMind and recently showcased Gopher, a 280-billion parameter model that established leading performance on a wide range of tasks including language modelling, reading comprehension, and question answering. Since then, an even larger model named Megatron-Turing NLG has been published with 530 billion parameters.

Due to the substantial cost of training these large models, it is important to estimate the best possible training setup to avoid wasting resources. In particular, the training compute cost for transformers is determined by two factors: the model size and the number of training tokens.
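
A common rule of thumb in the scaling-laws literature (an assumption used for illustration here, not a figure stated in this post) is that training a dense transformer costs roughly 6 FLOPs per parameter per training token, i.e. C ≈ 6·N·D. The short sketch below applies this approximation.

```python
def train_flops(n_params: float, n_tokens: float) -> float:
    """Approximate training compute with the common C ~= 6 * N * D rule of thumb
    (forward plus backward pass for a dense transformer); a rough estimate only."""
    return 6.0 * n_params * n_tokens

# Example: a 280-billion-parameter model trained on 300 billion tokens.
print(f"{train_flops(280e9, 300e9):.2e} FLOPs")  # ~5.0e+23
```

Anything more precise would need to account for architectural details, but this form is enough for the back-of-the-envelope comparisons that follow.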

The current generation of large language models has allocated increased computational resources to growing the parameter count of large models while keeping the training data size fixed at around 300 billion tokens. In this work, we empirically investigate the optimal tradeoff between increasing model size and the amount of training data as computational resources increase. In particular, we ask the question: “What is the optimal model size and number of training tokens for a given compute budget?” To answer this question, we train models of various sizes and with various numbers of tokens, and estimate this trade-off empirically.
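
One way to make such an estimate concrete (similar in spirit to one of the fitting approaches in the paper, though the constants below are illustrative placeholders rather than fitted values) is to assume a parametric loss L(N, D) = E + A/N^α + B/D^β and minimise it under a fixed compute budget C ≈ 6·N·D, for example with a simple grid search:

```python
import numpy as np

# Parametric loss L(N, D) = E + A / N**alpha + B / D**beta.
# All constants below are illustrative placeholders, not fitted values.
def loss(n_params, n_tokens, E=1.7, A=400.0, B=410.0, alpha=0.34, beta=0.28):
    return E + A / n_params**alpha + B / n_tokens**beta

def optimal_allocation(compute_flops):
    """Grid-search the model size N under a fixed budget, with D = C / (6 * N)."""
    n_grid = np.logspace(9, 12, 4000)          # 1 billion to 1 trillion parameters
    d_grid = compute_flops / (6.0 * n_grid)    # tokens affordable at each model size
    best = np.argmin(loss(n_grid, d_grid))
    return n_grid[best], d_grid[best]

n_opt, d_opt = optimal_allocation(5e23)        # roughly Gopher-scale compute
print(f"N ~ {n_opt:.2e} parameters, D ~ {d_opt:.2e} tokens")
```

With real fitted constants, the same recipe yields the projections shown in Figure 1.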

Our main finding is that the current large language models are far too large for their compute budget and are not being trained on enough data. In fact, we find that for the number of training FLOPs used to train Gopher, a 4x smaller model trained on 4x more data would have been preferable.
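
This kind of swap is compute-neutral under the C ≈ 6·N·D approximation sketched above, since shrinking the model by 4x while training on 4x more tokens leaves the product N·D unchanged:

```python
# Shrinking the model 4x and training on 4x more tokens keeps C = 6 * N * D fixed.
N, D = 280 * 10**9, 300 * 10**9          # a Gopher-like setup (params, tokens)
assert 6 * (N // 4) * (4 * D) == 6 * N * D
```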

Figure 1: Based on our approach, we show our projections of the optimal number of training tokens and parameters. We show points representing the training setup of three different established large language models along with our new model, Chinchilla.

We test our data scaling hypothesis by training Chinchilla, a 70-billion parameter model trained for 1.3 trillion tokens. While the training compute cost for Chinchilla and Gopher is the same, we find that Chinchilla outperforms Gopher and other large language models on nearly every measured task, despite having 70 billion parameters compared to Gopher's 280 billion.
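
As a quick sanity check with the same C ≈ 6·N·D approximation (which is only a rule of thumb, so the figures are rough), the two training budgets come out in the same ballpark:

```python
# C ~= 6 * N * D rule of thumb, as in the sketch above.
gopher_flops = 6.0 * 280e9 * 300e9        # ~5.0e+23 FLOPs
chinchilla_flops = 6.0 * 70e9 * 1.3e12    # ~5.5e+23 FLOPs
print(f"ratio: {chinchilla_flops / gopher_flops:.2f}x")  # ~1.08x, i.e. comparable
```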

Figure 2: For various popular benchmarks including Question Answering (TriviaQA), Common Sense (HellaSwag, PIQA, Winogrande, and BoolQ), Reading Comprehension (LAMBADA), and the Massive Multitask Language Understanding (MMLU) general knowledge benchmark, we compare the performance of Gopher, Chinchilla, GPT-3, and Megatron-Turing NLG.

After the release of Chinchilla, a model named PaLM was released with 540 billion parameters and trained on 768 billion tokens. This model was trained with roughly 5x the compute budget of Chinchilla and outperformed Chinchilla on a range of tasks. While the training corpus is different, our methods do predict that such a model trained on our data would outperform Chinchilla despite not being compute-optimal. Given the PaLM compute budget, we predict a 140-billion-parameter model trained on 3 trillion tokens to be optimal and more efficient for inference.
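
A back-of-the-envelope check with the same 6·N·D rule of thumb (again an approximation, not an exact accounting) shows where these numbers come from: the predicted 140-billion-parameter, 3-trillion-token model lands close to the PaLM budget.

```python
# C ~= 6 * N * D rule of thumb, as above.
palm_flops = 6.0 * 540e9 * 768e9          # ~2.5e+24 FLOPs
chinchilla_flops = 6.0 * 70e9 * 1.3e12    # ~5.5e+23 FLOPs
predicted_flops = 6.0 * 140e9 * 3e12      # ~2.5e+24 FLOPs, close to PaLM's budget
print(f"PaLM vs Chinchilla compute: {palm_flops / chinchilla_flops:.1f}x")  # ~4.6x
```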

A further benefit of smaller, more performant models is that the inference time and memory costs are reduced, making querying the models both faster and possible on less hardware. In practice, while the training FLOPs for Gopher and Chinchilla are the same, the cost of using Chinchilla is substantially smaller, in addition to it performing better. Further simple optimisations may be possible that can continue to provide large gains.
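
For a rough sense of the inference saving, generating a token with a dense transformer is often approximated as costing about 2 FLOPs per parameter (a rule of thumb assumed here, not a figure from this post), so the per-token cost scales directly with model size:

```python
def inference_flops_per_token(n_params: float) -> float:
    # Rough rule of thumb: ~2 FLOPs per parameter to generate one token.
    return 2.0 * n_params

ratio = inference_flops_per_token(280e9) / inference_flops_per_token(70e9)
print(f"Gopher costs roughly {ratio:.0f}x more compute per generated token")  # 4x
```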
