How to talk about AI (even if you don’t know much about AI)

Deeper Learning

Catching bad content in the age of AI

In the last 10 years, Big Tech has become really good at some things: language, prediction, personalization, archiving, text parsing, and data crunching. But it's still surprisingly bad at catching, labeling, and removing harmful content. One simply needs to recall the spread of conspiracy theories about elections and vaccines in the United States over the past two years to understand the real-world damage this causes. The ease of using generative AI could turbocharge the creation of more harmful online content. People are already using AI language models to create fake news websites.

But could AI help with content moderation? The newest large language models are much better at interpreting text than previous AI systems. In theory, they could be used to boost automated content moderation. Read more from Tate Ryan-Mosley in her weekly newsletter, The Technocrat.

Bits and Bytes

Scientists used AI to find a drug that could fight drug-resistant infections
Researchers at MIT and McMaster University developed an AI algorithm that allowed them to find a new antibiotic that kills a type of bacteria responsible for many drug-resistant infections common in hospitals. This is an exciting development that shows how AI can accelerate and support scientific discovery. (MIT News)

Sam Altman warns that OpenAI could quit Europe over AI rules
At an event in London last week, the CEO said OpenAI could "cease operating" in the EU if it cannot comply with the upcoming AI Act. Altman said his company found much to criticize in how the AI Act was worded, and that there were "technical limits to what's possible." This is likely an empty threat. I've heard Big Tech say this many times before about one rule or another. Most of the time, the risk of losing out on revenue in the world's second-largest trading bloc is too big, and they figure something out. The obvious caveat here is that many companies have chosen not to operate, or to keep a restrained presence, in China. But that's also a very different situation. (Time)

Predators are already exploiting AI tools to generate child sexual abuse material
The National Center for Missing and Exploited Children has warned that predators are using generative AI systems to create and share fake child sexual abuse material. With powerful generative models being rolled out with safeguards that are insufficient and easy to hack, it was only a matter of time before we saw cases like this. (Bloomberg)

Tech layoffs have ravaged AI ethics teams
This is a good overview of the drastic cuts Meta, Amazon, Alphabet, and Twitter have all made to their teams focused on internet trust and safety as well as AI ethics. Meta, for example, ended a fact-checking project that had taken half a year to build. While companies race to roll out powerful AI models in their products, executives like to boast that their tech development is safe and ethical. But it's clear that Big Tech views teams dedicated to these issues as expensive and expendable. (CNBC)
