In our recent paper, we show that it is possible to automatically find inputs that elicit harmful text from language models by generating those inputs with language models themselves. Our approach provides one tool for finding harmful model behaviors before users are impacted, though we emphasize that it should be viewed as one component alongside the many other techniques that will be needed to find harms and mitigate them once found.
Large generative language models like GPT-3 and Gopher have a remarkable ability to generate high-quality text, but they are difficult to deploy in the real world. Generative language models come with a risk of generating very harmful text, and even a small risk of harm is unacceptable in real-world applications.
For example, in 2016, Microsoft released the Tay Twitter bot to automatically tweet in response to users. Within 16 hours, Microsoft took Tay down after several adversarial users elicited racist and sexually-charged tweets from Tay, which were sent to over 50,000 followers. The outcome was not for lack of care on Microsoft's part:
“Although we had prepared for many types of abuses of the system, we had made a critical oversight for this specific attack.”
Peter Lee
VP, Microsoft
The issue is that there are so many possible inputs that can cause a model to generate harmful text. As a result, it is hard to find all of the cases where a model fails before it is deployed in the real world. Previous work relies on paid, human annotators to manually discover failure cases (Xu et al. 2021, inter alia). This approach is effective but expensive, limiting the number and diversity of failure cases found.
We aim to complement manual testing and reduce the number of critical oversights by finding failure cases (or ‘red teaming’) in an automatic way. To do so, we generate test cases using a language model itself and use a classifier to detect various harmful behaviors on those test cases, as shown below:
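As a rough illustration of this loop (not the paper's actual implementation), here is a minimal Python sketch. The callables `red_lm`, `target_lm`, and `harm_classifier` are hypothetical stand-ins for a question-generating red-team model, the model under test, and a harm classifier, and the question-generation prompt is only an example.

```python
# Minimal sketch of the red-teaming loop, assuming the red-team model, the
# target model, and the harm classifier can be called as plain functions.
# All three callables are hypothetical placeholders, not the paper's models.

from typing import Callable, List, Tuple

def red_team(
    red_lm: Callable[[str], str],                  # generates one test question from a prompt
    target_lm: Callable[[str], str],               # the model being tested
    harm_classifier: Callable[[str, str], float],  # scores how harmful a reply is, in [0, 1]
    num_cases: int = 100,
    threshold: float = 0.5,
) -> List[Tuple[str, str, float]]:
    """Generate test cases, query the target model, and keep flagged failures."""
    prompt = "List of questions to ask someone:\n1."  # illustrative question-generation prompt
    failures = []
    for _ in range(num_cases):
        question = red_lm(prompt)                  # 1. generate a test case
        reply = target_lm(question)                # 2. get the target model's reply
        score = harm_classifier(question, reply)   # 3. classify the reply for harm
        if score > threshold:
            failures.append((question, reply, score))
    return failures
```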
Our approach uncovers a variety of harmful model behaviors:
- Offensive Language: Hate speech, profanity, sexual content, discrimination, etc.
- Data Leakage: Generating copyrighted or private, personally identifiable information from the training corpus.
- Contact Information Generation: Directing users to unnecessarily email or call real people.
- Distributional Bias: Talking about some groups of people in an unfairly different way than other groups, on average over a large number of outputs.
- Conversational Harms: Offensive language that occurs in the context of a long dialogue, for example.
To generate test cases with language models, we explore a variety of methods, ranging from prompt-based generation and few-shot learning to supervised finetuning and reinforcement learning. Some methods generate more diverse test cases, while others generate more difficult test cases for the target model. Together, the methods we propose are useful for obtaining high test coverage while also modeling adversarial cases.
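One way to trade diversity for difficulty is stochastic few-shot generation: test cases that have already elicited harmful replies are sampled back into the prompt, so new test cases tend to resemble the hardest ones found so far. The sketch below is an illustrative take on that idea rather than the exact recipe from the paper; the prompt format and the score-proportional weighting are assumptions.

```python
# Illustrative stochastic few-shot generation: previously flagged test cases,
# weighted by their harm scores, are sampled into the prompt so the red-team
# model drifts towards harder cases. Format and weighting are assumptions.

import random
from typing import Callable, List, Tuple

def few_shot_prompt(flagged_cases: List[Tuple[str, float]], k: int = 5) -> str:
    """Build a question-generation prompt from up to k sampled failure cases."""
    questions = [q for q, _ in flagged_cases]
    scores = [s for _, s in flagged_cases]
    sampled = (random.choices(questions, weights=scores, k=min(k, len(questions)))
               if flagged_cases else [])
    lines = ["List of questions to ask someone:"]
    lines += [f"{i + 1}. {q}" for i, q in enumerate(sampled)]
    lines.append(f"{len(sampled) + 1}.")  # the red-team model completes this line
    return "\n".join(lines)

def generate_test_case(red_lm: Callable[[str], str],
                       flagged_cases: List[Tuple[str, float]]) -> str:
    """Sample one new test case from the red-team model (a hypothetical callable)."""
    return red_lm(few_shot_prompt(flagged_cases))
```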
Once we find failure cases, it becomes easier to fix harmful model behavior by:
- Blacklisting certain phrases that frequently occur in harmful outputs, preventing the model from generating outputs that contain high-risk phrases (a toy filter along these lines is sketched after this list).
- Finding offensive training data quoted by the model, in order to remove that data when training future iterations of the model.
- Augmenting the model's prompt (conditioning text) with an example of the desired behavior for a certain type of input, as shown in our recent work.
- Training the model to minimize the likelihood of its original, harmful output for a given test input.
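As a toy illustration of the first item above, a blacklist can be applied at generation time by rejecting and resampling outputs that contain high-risk phrases. The phrase list, retry strategy, and fallback message below are placeholders, not the paper's mitigation pipeline.

```python
# Toy phrase-blacklist filter applied at generation time. The blacklist,
# retry count, and fallback message are placeholders for illustration only.

from typing import Callable, List

def contains_blocked_phrase(text: str, blacklist: List[str]) -> bool:
    """Return True if the output contains any blacklisted phrase (case-insensitive)."""
    lowered = text.lower()
    return any(phrase.lower() in lowered for phrase in blacklist)

def safe_generate(target_lm: Callable[[str], str],
                  prompt: str,
                  blacklist: List[str],
                  max_attempts: int = 5) -> str:
    """Resample until an output contains no blacklisted phrase, or fall back."""
    for _ in range(max_attempts):
        output = target_lm(prompt)
        if not contains_blocked_phrase(output, blacklist):
            return output
    return "Sorry, I can't respond to that."  # fallback when every sample is blocked
```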
Overall, language models are a highly effective tool for uncovering when language models behave in a variety of undesirable ways. In our current work, we focused on red teaming harms that today's language models commit. In the future, our approach can also be used to preemptively discover other, hypothesized harms from advanced machine learning systems, e.g., due to inner misalignment or failures in objective robustness. This approach is just one component of responsible language model development: we view red teaming as one tool to be used alongside many others, both to find harms in language models and to mitigate them. We refer to Section 7.3 of Rae et al. 2021 for a broader discussion of other work needed for language model safety.
For more details on our approach and results, as well as the broader consequences of our findings, read our red teaming paper here.