The Speed of AI

Explaining the Pace of Innovation in AI

It feels like a major AI development drops every other week. Perhaps, like me, you subscribed to The Intelligent Age to keep up. But can this dizzying pace of innovation continue?

Computing power is the engine driving AI progress

Progress in AI hinges on three things: algorithmic efficiency, data, and computing power (often called compute in the AI community). All three parts of this AI Triad matter for development, but whereas algorithmic efficiency and data are difficult to measure, compute is uniquely quantifiable. Experts like Richard Sutton, a pioneer of reinforcement learning and one of the founding figures of modern AI, argue that computation is the most critical of the three.

Indeed, recent advances in AI correlate with a staggering surge in the computing power used to train models. Since 2010, growth in training compute has outpaced Moore's law, doubling roughly every six months. Today's cutting-edge models use around five million times the compute of their predecessors from a mere decade ago.
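To see how quickly a six-month doubling compounds, here's a minimal back-of-the-envelope sketch in Python. The doubling times are the figures discussed above (six months for training compute, roughly two years for Moore's law); the helper function is just illustrative arithmetic:

```python
# Back-of-the-envelope: how a fixed doubling time compounds over a decade.
def growth_factor(elapsed_months: float, doubling_time_months: float) -> float:
    """Multiplicative growth after `elapsed_months` at the given doubling time."""
    return 2 ** (elapsed_months / doubling_time_months)

DECADE = 10 * 12  # 120 months

# Moore's law pace (transistor counts doubling roughly every two years):
print(growth_factor(DECADE, 24))   # 2**5 = 32x per decade

# Training compute doubling every six months:
print(growth_factor(DECADE, 6))    # 2**20 ≈ 1,000,000x per decade

# Doubling only slightly faster compounds into the multi-millions:
print(growth_factor(DECADE, 5.4))  # ≈ 5,000,000x per decade
```

The gap between those two curves is the story of the last decade: a roughly thirty-fold gain at Moore's-law pace versus a million-fold or greater increase in training compute.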

The increase in compute has been enabled in large part by running chips in parallel: linking thousands of GPUs together in data centres and sharing the computing load across all of them. Now, companies are racing to develop purpose-built AI hardware for further efficiency gains. While this kind of development is costly, investment in AI has also grown substantially in recent years; investment in 2021 was roughly 30 times greater than just eight years prior.
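To make "sharing the computing load" concrete, here is a toy sketch of data parallelism, the standard way a training step is spread across many chips: each device computes gradients on its own shard of the batch, and the results are averaged. Everything here (the linear model, the NumPy simulation, the names) is purely illustrative; real clusters perform the same averaging over dedicated interconnects using frameworks built for the job.

```python
import numpy as np

# Toy data-parallel training step for a linear model y ≈ X @ w.
# Each "device" gets one shard of the batch, computes a local gradient,
# and the gradients are averaged -- the all-reduce step that real GPU
# clusters perform over their interconnect.

rng = np.random.default_rng(0)
n_devices, batch_size, n_features = 4, 64, 8

X = rng.normal(size=(batch_size, n_features))
y = rng.normal(size=batch_size)
w = np.zeros(n_features)

def local_gradient(X_shard, y_shard, w):
    """Mean-squared-error gradient on one device's shard of the batch."""
    residual = X_shard @ w - y_shard
    return 2 * X_shard.T @ residual / len(y_shard)

# Split the batch across devices; each gradient can be computed in parallel.
shards = zip(np.array_split(X, n_devices), np.array_split(y, n_devices))
grads = [local_gradient(X_s, y_s, w) for X_s, y_s in shards]

# Average the per-device gradients (the "all-reduce"), then take one step.
w -= 0.01 * np.mean(grads, axis=0)
```

Because each shard's gradient is computed independently, adding devices scales the arithmetic almost linearly; the averaging step is the main communication cost, which is why data-centre interconnects matter so much.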

However, this meteoric rate of growth will face limits. The laws of physics will eventually curtail chip improvement, and economic forces could render data centres beyond a certain size unsustainable. Epoch, a research institute investigating the key trends that will shape AI's development, estimates that these limits will start to bind sometime in the early 2030s, suggesting we're not yet on the brink of them.

The external factors shaping AI’s future

Outside the triad of data, algorithms, and compute, other forces may indirectly shape AI's trajectory. Earlier this year, an open letter calling for a six-month pause on giant AI experiments garnered over 30,000 signatures. Elon Musk and Apple co-founder Steve Wozniak were among the signatories, as were several DeepMind researchers. Although the proposed moratorium hasn't halted large training runs, that's not to say a clampdown on AI development couldn't happen in the future.

To draw a parallel: in 2011, the Fukushima nuclear disaster caused a notable dip in global nuclear energy output as countries pulled their plants offline over safety concerns, even though radiation from the disaster has been linked to only a single fatality. It took nine years for global output to return to pre-Fukushima levels.

Should there be a fatal mishap involving AI, even one stopping short of catastrophe, it could prompt bans or stringent regulatory protocols that halt AI progress. It's critical that we take proactive steps to develop and regulate AI safely to mitigate the risk of such a disaster.

How you can help

Take action for AI Safety! We've partnered with Giving What We Can, who recommend charities making a real impact in this space. Check them out here.
