In the long-ago days of… six weeks ago, the conventional wisdom was that every tech company should have been backing up a Brinks truck to Nvidia’s headquarters and desperately buying up as many GPUs as possible while sprinkling the ✨emoji on all their products. And then DeepSeek came along and, while their actual impact on the economics of LLMs may have been somewhat exaggerated, there’s no doubt that, well, things have changed. The next frontier of what matters in AI for ordinary companies is no longer just mindlessly throwing the most money at training the costliest new large language model.
And maybe that means, for the first time since ChatGPT blew up a couple years ago, we can start being a little more normal in how we think about AI. Rational decision-making is always a good thing, especially for businesses.
For most companies, AI adoption spans a range of phases, from teams still kicking the tires and discovering their first uses for LLMs, to deeper evaluations of how fundamental processes can be reimagined with new reasoning capabilities.
From technical practitioners, we’re hearing some pretty consistent themes, whether it’s from CTOs of Fortune 50 companies or indie developers:
Yes, LLMs now do very interesting things! But that’s an evolution of the long history of machine learning advances, and the clearest wins are in classically strong areas like coding. We still need more rigorous assessment of strengths and weaknesses.
The hype around AI is pretty exhausting, and the credibility of many offerings is undermined by people (vendors, investors, or various loud bullshitters in the media ecosystem) making assertions about AI technologies that simply aren’t supported by facts and data.
Rapidly maturing benchmarks are delivering a lot of value, making it much easier to assess and compare the various tools and platforms in the ecosystem.
Increasingly, the choice of AI platforms is “all of the above”, with a selection of commercial vendors, open source platforms, and (especially) internally trained models built around proprietary or industry-specific data. The gap between the headline-grabbing consumer AI products and the real-world business benefits is large and rapidly growing.
That fairly strong consensus leads to a consistent view of the risk profile for companies exploring the AI space going forward, and it’s very different from what the market looked like even six months ago. Yes, hallucinations are still on the list, but the emphasis of what leaders are concerned about has shifted a lot. What are AI worries going to look like for the rest of 2025? Probably something like this:
Whose content is this? How are consent and access going to be managed? Everybody knows that nearly every large-scale model is built on content that’s been used as training material without consent. And every company that holds a large amount of public IP knows that AI vendors are voraciously training on that material without negotiating consent. There’s one conversation to be had about what legal regimes and fair use policies allow, and another about what different organizations will consider a fair exchange of value. What’s clear is that a plaintext robots.txt file enforced through the honor system is not going to cut it as a licensing regime.
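To make the “honor system” point concrete, here’s a minimal Python sketch of how robots.txt actually works today: the file only declares a policy, and a crawler has to voluntarily ask permission before fetching. The site URL is a placeholder, and the user-agent names stand in for a few commonly cited AI crawlers.

```python
# A minimal sketch: robots.txt declares a policy, but nothing enforces it.
# The crawler itself must choose to check the file before fetching.
from urllib.robotparser import RobotFileParser

robots = RobotFileParser()
robots.set_url("https://example.com/robots.txt")  # placeholder site
robots.read()

# Illustrative crawler user-agent names; check each vendor's docs for
# the names they actually publish.
for bot in ["GPTBot", "ClaudeBot", "CCBot"]:
    allowed = robots.can_fetch(bot, "https://example.com/articles/")
    print(f"{bot}: {'allowed' if allowed else 'disallowed'}")

# A crawler that never calls can_fetch() is never stopped. The directive
# carries no consent record, no license terms, and no technical enforcement.
```

That last comment is the whole problem: there’s no way for this file to express “you may train on this under these terms,” let alone verify compliance.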
How are we going to maintain our sustainability commitments? It’s absurd to see coal plants coming back online in 2025 on Dolly Parton’s internet. Yes, yes, we know the public bluster about this topic, but look at what companies are actually doing, especially globally: they’re investing in sustainable, renewable energy and fulfilling their longstanding commitments to responsible energy usage. Whether it’s better caching of LLM results, more efficient model training, or simply having models tailored to the specific workloads and use cases of an actual problem to be solved, there are lots of ways to get wins in AI efficiency that translate into meaningful reductions in energy usage and carbon emissions.
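For a sense of what “better caching of LLM results” can mean in practice, here’s a minimal, hypothetical sketch: repeated prompts are served from a local cache, so an inference run (and its energy cost) never happens twice for the same request. The call_model function is a placeholder, not any real vendor’s API.

```python
# A minimal sketch of LLM result caching, assuming a hypothetical
# call_model() client. Identical requests hit the cache instead of
# triggering a fresh inference call.
import hashlib
import json

_cache: dict[str, str] = {}

def call_model(prompt: str, temperature: float = 0.0) -> str:
    """Hypothetical placeholder: wire this to your actual model client."""
    raise NotImplementedError

def cached_completion(prompt: str, temperature: float = 0.0) -> str:
    # Key on the exact request. Caching only makes sense for
    # deterministic (temperature=0) calls, where the same input
    # should always yield the same output.
    key = hashlib.sha256(
        json.dumps({"prompt": prompt, "temperature": temperature}).encode()
    ).hexdigest()
    if key not in _cache:
        _cache[key] = call_model(prompt, temperature)
    return _cache[key]
```

Real deployments would use a shared store with expiry rather than an in-process dict, but the efficiency logic is the same: every cache hit is compute you didn’t burn.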
How does the open source story evolve? We’re already seeing backdoored open-weight models, and precious few actual open source LLMs. But nearly every org wants to roll its own models for at least some of its projects, so the hope is that 2025 sees this space shake out and mature. Are we going to see consensus and consolidation around one or two major open source platforms, as we’ve seen in past markets like content management?
These are interesting questions! And more importantly, they’re not the same old questions we’ve been asking about AI for the past few years. They’re going to require our organizations to think differently, and to challenge the conventional wisdom handed to us in this first era of LLM hype. We’re focusing on listening to real-world developers who are building real things, and using that as our source of truth for deciding what comes next. The exciting part is that this already seems to be opening up a lot of new opportunities that had been getting crowded out by the noise, which is only just starting to quiet down.