# Inside Google DeepMind’s approach to AI safety

This article features an interview with Lila Ibrahim, COO of Google DeepMind. Ibrahim will be speaking at TNW Conference, which takes place on June 15 & 16 in Amsterdam. If you want to experience the event (and say hi to our editorial team!), we’ve got something special for our loyal readers. Use the promo code READ-TNW-25 and get a 25% discount on your business pass for TNW Conference. See you in Amsterdam!

AI safety has become a mainstream concern. The rapid development of tools like ChatGPT and deepfakes has sparked fears about job losses, disinformation — and even annihilation. Last month, a warning that artificial intelligence posed a “risk of extinction” attracted newspaper headlines around the world.

The warning came in a statement signed by more than 350 industry heavyweights. Among them was Lila Ibrahim, the Chief Operating Officer of Google DeepMind. As a leader of the pioneering AI lab, Ibrahim has a front-row view of the threats — and opportunities.

DeepMind has delivered some of the field’s most striking breakthroughs, from conquering complex games to revealing the structure of the protein universe.

The company’s ultimate mission is to create artificial general intelligence, a nebulous concept that broadly refers to machines with human-level cognitive abilities. It’s a visionary ambition that needs to remain grounded in reality — which is where Ibrahim comes in. 

In 2018, Ibrahim was appointed DeepMind’s first-ever COO. She oversees business operations and growth, with a strong focus on building AI responsibly.

“New and emerging risks — such as bias, safety and inequality — should be taken extremely seriously,” Ibrahim told TNW via email. “Similarly, we want to make sure we’re doing what we can to maximize the beneficial outcomes.”
