The New York Times Takes On OpenAI

PLUS: Navigating the Rapid Waves of AI Innovation and Safeguarding Its Future.

Welcome! Today is Thursday, December 28th, and we're diving into legal action against OpenAI and Microsoft by The New York Times, alongside Big Tech surpassing VC firms in AI startup investments. New to The Intelligence Age? Sign up here.
Drop us a line anytime at [email protected] with feedback.

News & Insights

The New York Times Wants OpenAI And Microsoft To Pay For Training Data

In a landmark case, The New York Times (NYT) is taking legal action against OpenAI and Microsoft for allegedly using its copyrighted material to train generative AI models without permission. The complaint, filed in Manhattan, demands the destruction of the offending models and training data and seeks damages for the "unlawful copying and use" of the newspaper's content. The NYT underscores the threat to independent journalism, arguing that the infringement undermines its business by creating AI-driven competitors that bypass the need for subscriptions and by misappropriating its brand through AI-generated inaccuracies.

OpenAI, while expressing disappointment over the lawsuit, maintains its respect for content creators and is open to discussions on beneficial collaborations. However, the conflict mirrors a broader struggle within the industry, where generative AI's reliance on extensive data scraping faces pushback from copyright holders. With legal skirmishes increasing, the outcome of this confrontation could set a precedent for future interactions between AI enterprises and content providers. (TechCrunch)

Big Tech Is Spending More Than VC Firms On AI Startups

In the race to lead generative AI, Big Tech has eclipsed VC firms in terms of investment firepower. Last year, behemoths like Microsoft, Google, and Amazon inked deals that accounted for over $18 billion of the total $27 billion raised by AI startups. This surge, partly sparked by the launch of OpenAI's ChatGPT, signals a shift in market dynamics: where once VCs were the go-to for early-stage funding, the tech giants are now leveraging their vast resources to forge partnerships and mold the AI landscape.

Nina Achadjian of Index Ventures notes the changing tide: "For traditional VCs, you had to be in early and you had to have conviction." Meanwhile, Big Tech's aggressive strategy is not just about the money; it's also about providing the indispensable infrastructure and cutting-edge technology that AI development demands. This symbiotic arrangement has propelled valuations and set new competitive standards, making it challenging for VCs to stake their claim on the AI frontier. (Ars Technica)

Together with Aragon

🚀 Transform your selfies into AI headshots

Aragon AI’s tech is built by leading AI researchers and delivers the best results within 90 minutes.

  • Get Stunning Professional Headshots Without The Hassle

  • Never Pay For A Photographer Again

  • Just Upload Some Selfies & See For Yourself

Try it now → (use code “INTELLIGENCEAGE” for 10% off)

Please support our sponsors!

Mastering AI

Top Tools

  • Learn No-Code & AI skills in 100 days: Learn No-Code and AI skills by committing to the 100DaysOfNoCode or 100DaysOfAI challenges. Receive a 30-minute bite-sized lesson straight to your inbox every day. It’s free, fun and effective! Both challenges start Jan 1st, 2024. Start Learning No-Code or AI skills (sponsored)

  • Tagbox: Organize creative assets with AI and smart tags. (website)

  • WordAi: AI-driven text rewriting tool for better quality and SEO. (website)

Practical AI

Understanding the Pace of AI Innovation

Guest Article by Harry Booth

It feels like a major AI development drops every other week. Perhaps, like me, you subscribed to The Intelligence Age to keep up. But can this dizzying pace of innovation continue?

Computing power is the engine driving AI progress

Progress in AI hinges on three things: algorithmic efficiency, data, and computing power (often called compute in the AI community). All three parts of this AI Triad are important for development, but whereas algorithmic efficiency and data are difficult to measure, compute is uniquely quantifiable. And experts like Richard Sutton, widely considered one of the founders of modern AI, argue that computation is the most critical.

Indeed, recent advancements in AI correlate with a staggering surge in the computing power used to train models. Since 2010, growth in training compute has outpaced Moore's law, doubling roughly every six months. Cutting-edge models today are trained with roughly five million times the computing power of their predecessors from a mere decade ago.
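To get a feel for how quickly a six-month doubling time compounds, here is a minimal Python sketch. The figures are illustrative only, plugging in the growth rates quoted above rather than any precise dataset:

```python
def growth_factor(years: float, doubling_time_years: float) -> float:
    """Total multiplicative growth after `years`, given a fixed doubling time."""
    return 2 ** (years / doubling_time_years)

# Doubling every six months for a decade is 2^20: about a million-fold.
print(f"{growth_factor(10, 0.5):,.0f}x")

# A slightly faster doubling time (~5.4 months) compounds to the
# multi-million-fold figures cited for today's cutting-edge models.
print(f"{growth_factor(10, 0.45):,.0f}x")
```

For comparison, Moore's law (transistor counts doubling roughly every two years) gives `growth_factor(10, 2)`, only about a 32x gain over the same decade, which is why the six-month doubling trend so dramatically outpaces it.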

The increase in compute has been facilitated in large part by running chips in parallel—sticking thousands of GPUs together in data centres and sharing the computing load across all of them. Now, companies are racing to develop purpose-built AI hardware for greater efficiency gains. While this kind of development will be costly, investment in AI has also grown substantially in recent years, with investment in 2021 roughly 30 times greater than just eight years prior.

However, this meteoric rate of growth will face limitations in the future. The laws of physics will eventually curtail chip improvement, and economic forces could render data centres beyond a certain size unsustainable. Epoch, a research institute investigating the key trends that will shape AI's development, estimates these constraints will take hold sometime in the early 2030s, suggesting we're not yet on the brink of such limits.

The external factors shaping AI’s future

Outside the triad of data, algorithms, and compute, other forces may indirectly shape AI's trajectory. Earlier this year, an open letter calling for a six-month pause on "giant AI experiments" garnered over 30,000 signatures. Elon Musk and Apple co-founder Steve Wozniak were among the signatories, as were several DeepMind researchers. Although the proposed moratorium hasn't halted large training runs, that's not to say a clampdown on AI development couldn't happen in future.

To draw a parallel: in 2011, the Fukushima nuclear disaster caused a notable dip in global nuclear energy output as countries pulled their plants offline over safety concerns, even though radiation from the disaster has been linked to only a single fatality. It took nine years for global output to return to pre-Fukushima levels.

Should a fatal mishap involving AI occur, even one stopping short of catastrophe, it could prompt bans or stringent regulatory protocols that halt AI progress. It's critical that we take proactive steps to develop and regulate AI safely to mitigate the risk of such a disaster.

How you can help

Take action for AI Safety! We’ve partnered with Giving What We Can, who recommend charities making a real impact in this space. Check them out here.

Around The World

More News

  • Nvidia Reaps Significant Financial Gains from Generative AI's Surge, While Other Firms Venture into Bold, Experimental Applications.

  • Olympic Committee Introduces Algorithmic Video Surveillance as a New Event to Enhance Security and Efficiency at Games.

  • Microsoft Launches Free AI-Powered Copilot App for Android Users to Streamline Tasks and Enhance Productivity.


Written by Isaac R. Ward, Casey Clifton, and Alex Brogan.

Send us feedback at [email protected] and help us provide the best coverage of Artificial Intelligence possible.

Interested in reaching smart business leaders like you? To become an Intelligence Age partner, apply here.
