5 impressive feats of DeepMind’s new self-evolving AI coding agent

Google DeepMind’s AI systems have taken big scientific strides in recent years — from predicting the 3D structures of almost every protein known to science to forecasting weather more accurately than ever before.
The UK-based lab today unveiled its latest advancement: AlphaEvolve, an AI coding agent that makes large language models (LLMs) like Gemini better at solving complex computing and mathematical problems.
AlphaEvolve is powered by the same models that it’s trying to improve. Using Gemini, the agent proposes candidate programs that try to solve a given problem. It runs each code snippet through automated tests that evaluate how accurate, efficient, or novel it is. AlphaEvolve keeps the top-performing snippets and uses them as the basis for the next round of generation. Over many cycles, this process “evolves” better and better solutions. In essence, it is a self-evolving AI.
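The generate-evaluate-select loop described above can be sketched in a few lines of Python. This is a toy illustration, not AlphaEvolve's actual system: the LLM's "propose a modified program" step is replaced by random mutation, and the "program" is just a pair of coefficients fitted to a hidden target function. All names here (`score`, `mutate`, `evolve`) are invented for the sketch.

```python
import random

random.seed(0)  # reproducible runs

TARGET = lambda x: 3 * x + 5  # the hidden "problem" the agent must solve
SAMPLES = [0, 1, 2, 3, 4]

def score(candidate):
    """Automated evaluator: total error of the candidate 'program'
    (here just coefficients a, b of a*x + b). Lower is better."""
    a, b = candidate
    return sum(abs((a * x + b) - TARGET(x)) for x in SAMPLES)

def mutate(candidate):
    """Toy stand-in for the LLM's 'propose a modified program' step."""
    a, b = candidate
    return (a + random.uniform(-1, 1), b + random.uniform(-1, 1))

def evolve(generations=200, pop_size=20, survivors=5):
    population = [(random.uniform(-10, 10), random.uniform(-10, 10))
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=score)      # run the automated tests
        elite = population[:survivors]  # keep the top performers...
        # ...and use them as the basis for the next round of generation
        population = elite + [mutate(random.choice(elite))
                              for _ in range(pop_size - survivors)]
    return min(population, key=score)

best = evolve()
```

After a few hundred generations, the surviving candidate sits close to the true coefficients (3, 5); the key structural idea — propose, test, keep the winners, repeat — is the same one AlphaEvolve applies to real code.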
DeepMind has already used AlphaEvolve to tackle data centre energy use, design better chips, and speed up AI training. Here are five of its top feats so far.
1. It discovered new solutions to some of the world’s toughest maths problems
AlphaEvolve was put to the test on over 50 open problems in maths, from combinatorics to number theory. In around 20% of cases, it improved on the best-known solutions.
One of those was the 300-year-old kissing number problem: how many non-overlapping spheres can simultaneously touch a single central sphere of the same size. In 11-dimensional space, AlphaEvolve discovered a configuration of 593 spheres — a new lower bound that even expert mathematicians hadn’t reached.
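What makes the kissing number problem a good fit for AlphaEvolve is that any candidate configuration can be checked mechanically: every sphere must touch the central one, and no two spheres may overlap. Here is a hypothetical checker of that kind, demonstrated in 2D (where the kissing number is famously 6 — a hexagon of circles); the function name and structure are this sketch's assumptions, not DeepMind's code.

```python
import math

def is_valid_kissing_configuration(centers, radius=1.0, tol=1e-9):
    """Check a candidate kissing arrangement of equal spheres around a
    central sphere at the origin: each centre must lie at distance 2*radius
    from the origin (touching), and pairwise distances must be >= 2*radius
    (no overlaps)."""
    for c in centers:
        if abs(math.dist(c, [0.0] * len(c)) - 2 * radius) > tol:
            return False  # sphere doesn't touch the central one
    for i in range(len(centers)):
        for j in range(i + 1, len(centers)):
            if math.dist(centers[i], centers[j]) < 2 * radius - tol:
                return False  # two surrounding spheres overlap
    return True

# In 2D the kissing number is 6: unit circles centred on a hexagon of radius 2.
hexagon = [(2 * math.cos(k * math.pi / 3), 2 * math.sin(k * math.pi / 3))
           for k in range(6)]
```

A verifier like this is exactly the kind of automated test AlphaEvolve's loop relies on: proposing a 593-sphere arrangement in 11 dimensions is hard, but scoring one is cheap and unambiguous.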
2. It made Google’s data centres more efficient
The AI agent devised a way to better manage power scheduling at Google’s data centres. That has allowed the tech giant to improve its data centre energy efficiency by 0.7% over the last year — a significant cost and energy saver given the size of its data centre operation.
3. It helped train Gemini faster
AlphaEvolve improved the way large matrix multiplications — a core operation in training AI models like Gemini — are split into subproblems. That optimisation sped up the targeted kernel by 23%, reducing Gemini’s total training time by 1%. In the world of generative AI, every percentage point can translate into major cost and energy savings.
4. It co-designed part of Google’s next AI chip
The agent is also using its code-writing skills to reshape things in the physical world. It rewrote a portion of an arithmetic circuit in Verilog — a hardware description language used for chip design — making it more efficient. That improved logic is now being used to develop a future Google TPU (Tensor Processing Unit), a chip purpose-built for machine learning.
5. It beat a legendary algorithm from 1969
For decades, Strassen’s 1969 algorithm was the gold standard for multiplying 4×4 matrices. AlphaEvolve found a more efficient method for 4×4 complex-valued matrices, using 48 scalar multiplications where Strassen’s recursive approach needs 49. This could lead to faster LLMs, which rely heavily on matrix multiplication to function.
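To see what "fewer scalar multiplications" means, here is Strassen's original 2×2 scheme, which multiplies two matrices with 7 multiplications instead of the naive 8. Applied recursively to 4×4 matrices it needs 7 × 7 = 49 scalar multiplications; AlphaEvolve's new algorithm (whose full recipe is a tensor decomposition, not shown here) brings that to 48.

```python
def strassen_2x2(A, B):
    """Strassen's 1969 scheme: multiply two 2x2 matrices with 7
    multiplications (p1..p7) instead of the naive 8. Entries may be
    real or complex numbers."""
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    p1 = (a + d) * (e + h)
    p2 = (c + d) * e
    p3 = a * (f - h)
    p4 = d * (g - e)
    p5 = (a + b) * h
    p6 = (c - a) * (e + f)
    p7 = (b - d) * (g + h)
    return [[p1 + p4 - p5 + p7, p3 + p5],
            [p2 + p4,           p1 - p2 + p3 + p6]]
```

The saving looks tiny for one 2×2 product, but because the trick compounds under recursion, shaving even one multiplication from a small base case ripples through every larger matrix built on top of it.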
According to DeepMind, these feats are just the tip of the iceberg for AlphaEvolve. The lab envisions the agent solving countless problems, from discovering new materials and drugs to streamlining business operations.
AI’s evolution will be a hot topic at TNW Conference, which takes place on June 19-20 in Amsterdam. Tickets for the event are now on sale — use the code TNWXMEDIA2025 at the checkout to get 30% off.