2025 AI Year in Review: We’re Back at the GeoCities Moment
hatch

If 2025 felt like the year AI almost became magic and then immediately turned into plumbing, congratulations. You were paying attention.
For a brief and unhinged stretch, models felt AGI-adjacent. Demos were jaw-dropping. Twitter became unreadable. Every startup pitch sounded like a spiritual awakening with a deck. And then something deeply inconvenient happened.
Reality arrived. With a clipboard.
This is not a story about disappointment. It is a story about normalization. If you lived through the early internet, this pattern should feel familiar. GeoCities gave everyone the power to publish. Most of it was unreadable. A tiny fraction became the modern economy.
AI in 2025 was GeoCities with GPUs.
Models Felt AGI-ish, Then Immediately Commoditized
Early in the year, frontier models crossed an important psychological threshold. Reasoning improved. Multimodality stopped feeling bolted on. Latency dropped enough that systems felt responsive instead of theatrical.
For a moment, it felt plausible that we were one release away from something truly general.
Then the arms race did what arms races always do. It erased differentiation. Labs shipped faster than most enterprises could finish vendor security reviews. Capabilities converged. Pricing compressed. Open and closed models leapfrogged each other until the gap stopped mattering.
The uncomfortable truth emerged.
If your AI strategy still starts with “Which model should we choose?”, you are already behind.
The winning organizations in 2025 treated models like electricity. Ubiquitous. Replaceable. Boring. The advantage moved up the stack.
Agents Turned AI from Assistant into Teammate
The most important shift of the year was not raw intelligence. It was agency.
Memory, tool use, voice, and vision quietly transformed AI from something you prompted into something that could execute. Agents scheduled meetings, triaged tickets, reviewed pull requests, summarized contracts, and handled multi-step workflows with minimal supervision.
This is where AI stopped being impressive and started being operational.
For enterprises, this was both exhilarating and terrifying. You could automate work that previously required coordination and judgment. You could also break production in ways that were previously theoretical.
The lesson was clear. AI without constraints creates chaos. AI with structure creates leverage.
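The "structure over chaos" point can be made concrete. Below is a minimal sketch of a tool-using agent loop with an explicit allowlist; every name here (`run_agent`, `call_model`, `TOOLS`) is illustrative rather than any specific framework's API, and `call_model` is a stand-in for a real LLM call.

```python
# Minimal sketch of a constrained agent loop. Hypothetical names throughout;
# not the API of any particular agent framework.

TOOLS = {
    "schedule_meeting": lambda args: f"scheduled: {args}",
    "triage_ticket": lambda args: f"triaged: {args}",
}

def call_model(goal, history):
    """Stand-in for a model call; a real version would query an LLM
    with the goal and history and parse its next action."""
    return {"tool": "triage_ticket", "args": "ticket-42", "done": True}

def run_agent(goal, max_steps=5):
    history = []
    for _ in range(max_steps):
        action = call_model(goal, history)
        tool = action.get("tool")
        if tool not in TOOLS:
            # The constraint: unknown tools are refused, not improvised.
            raise PermissionError(f"tool not allowed: {tool}")
        result = TOOLS[tool](action["args"])
        history.append((tool, result))
        if action.get("done"):
            break
    return history
```

The allowlist is the whole point: the model proposes, but only pre-approved actions execute, which is the difference between "break production" and "leverage."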
Prompt Engineering Died. Context Engineering Replaced It.
The loudest voices of the early AI era promised advantage through clever phrasing. That era ended quietly in 2025.
What mattered instead was context. Who the model could see. What data it could access. What actions it could take. What memory persisted. What workflows it could touch.
This is why enterprises quietly started winning the second half of the year. They owned data, permissions, and process. Individuals moved fast. Organizations scaled safely.
As practitioner and strategist Nate B. Jones has argued repeatedly, models are utilities and systems are the moat. His writing on pragmatic AI adoption is required reading for leaders trying to separate signal from noise.
The era of model loyalty is over. Long live orchestration.
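The prompt-versus-context distinction can be sketched in a few lines: what the model sees is assembled from data the caller is actually permitted to access. Everything here (`DOCS`, `ROLE_ACCESS`, `build_context`) is an illustrative toy, assuming a role-based permission model.

```python
# Sketch of context engineering: the permission boundary, not the phrasing,
# determines what the model can see. All names are hypothetical.

DOCS = {
    "contracts": "Q3 contract summaries ...",
    "tickets": "Open support tickets ...",
    "payroll": "Compensation data ...",
}

ROLE_ACCESS = {
    "support": {"tickets"},
    "legal": {"contracts", "tickets"},
}

def build_context(role, question):
    allowed = ROLE_ACCESS.get(role, set())
    visible = {k: v for k, v in DOCS.items() if k in allowed}
    # The prompt text is the easy part; the scoping logic is the system.
    return {
        "system": f"You may only use these sources: {sorted(visible)}",
        "documents": visible,
        "question": question,
    }
```

No amount of clever phrasing gets the support role into payroll data; that property belongs to the system, which is why organizations that owned data, permissions, and process pulled ahead.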
Adoption Beat Hype Again, Despite the Headlines
No AI retrospective would be complete without addressing the statistic that refused to die. The claim that 95 percent of AI projects fail circulated widely again this year, often attributed to an MIT study.
The problem is that this framing is misleading and widely criticized by practitioners. Analysts and builders have pointed out that the figure conflates experiments, prototypes, paused pilots, and actual failures. It measures organizational indecision more than technical viability.
What actually happened in 2025 was simpler.
Teams that treated AI like infrastructure shipped. Teams that treated it like a demo did not.
Most value creation happened quietly. The absence of a press release was often a positive signal.
Infrastructure Bent the Economy Around It
AI stopped being primarily a software story and became an infrastructure story.
Power constraints mattered. Data center capacity mattered. Capital expenditure mattered. Compute planning entered board-level conversations. Companies that invested early looked reckless until they looked prescient.
This is why comparisons to past bubbles miss the point. Yes, there was excess. There always is. But underneath the hype, real infrastructure was built.
You cannot hallucinate a data center.
Search Turned into Funnels
Search as we knew it began to dissolve.
Instead of blue links, users increasingly received synthesized answers, recommendations, and implied next steps. Search became conversation-shaped conversion.
This shift had immediate consequences. Brands that were not legible to AI assistants effectively disappeared. SEO became less about keywords and more about being referenced, trusted, and structured for machine consumption.
If your content cannot be summarized accurately by an AI, it may as well not exist.
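One concrete form of "structured for machine consumption" is schema.org structured data embedded as JSON-LD. The sketch below emits such a block in Python; the schema.org vocabulary and JSON-LD format are real, while the article values ("Jane Doe", the date) are placeholders.

```python
import json

# Sketch: emitting schema.org JSON-LD so assistants and crawlers can
# parse a page's key facts. Field values here are placeholders.

def article_jsonld(headline, author, date_published):
    return json.dumps({
        "@context": "https://schema.org",
        "@type": "Article",
        "headline": headline,
        "author": {"@type": "Person", "name": author},
        "datePublished": date_published,
    }, indent=2)

snippet = article_jsonld("2025 AI Year in Review", "Jane Doe", "2025-12-31")
# Embed `snippet` in a <script type="application/ld+json"> tag on the page.
```

The design choice is legibility: a machine-readable summary sits alongside the prose, so an assistant does not have to guess who wrote what, when.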
We Started Measuring “Can AI Do This Job?”
The most uncomfortable question of 2025 was also the most practical.
Not “Will AI replace jobs someday?” but “Can AI do this task right now?”
Role by role, task by task, organizations began testing assumptions. Some roles fragmented. Some accelerated. Some quietly vanished at the edges.
Hiring slowed where automation worked. Demand spiked where judgment, coordination, and accountability remained scarce.
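The task-by-task testing described above amounts to a simple eval harness: sample representative tasks, score the model on each, and automate only past a pass-rate threshold. This sketch uses a placeholder scorer (`model_attempt`); in practice you would wire in a real model call and human-graded or programmatic scoring.

```python
# Sketch of a task-level capability check: "Can AI do this task right now?"
# `model_attempt` is a hypothetical stand-in for a scored model run.

def model_attempt(task):
    """Placeholder scorer: 1.0 if the task is in a 'solved' set, else 0.0.
    A real scorer would run the model and grade its output."""
    solved = {"summarize_contract", "triage_ticket"}
    return 1.0 if task in solved else 0.0

def can_automate(tasks, threshold=0.9):
    scores = [model_attempt(t) for t in tasks]
    pass_rate = sum(scores) / len(scores)
    # Automate only when the measured pass rate clears the bar.
    return pass_rate >= threshold, pass_rate
```

The threshold is the managerial lever: it encodes how much residual error the workflow can absorb before judgment and accountability have to stay human.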
Leadership stopped being about headcount and started being about leverage.
Safety, Risk, and the Return of Existential Dread
As capabilities improved, so did concern.
Geoffrey Hinton’s renewed warnings about AI risk brought existential questions back into the mainstream, echoing his 2023 decision to leave Google so he could speak freely about the technology’s trajectory.
2025 lived in tension between acceleration and restraint. Regulation lagged. Self-governance filled some gaps. Nobody felt fully comfortable, which was probably appropriate.
We are still learning what these systems are and what they will become.
The Device Revolution Did Not Arrive. Yet.
Despite rumors and prototypes, no AI-first device reshaped daily life in 2025. Software moved faster than hardware. Interfaces evolved. Form factors stayed familiar.
This was not failure. It was sequencing.
Behavior changes before hardware does. When the device shift comes, it will feel sudden and obvious in hindsight.
So What Did 2025 Actually Mean?
2025 was not the year AI peaked. It was the year AI normalized.
Models are not moats. Systems are. Intelligence is cheaper than judgment. Adoption beats hype. Infrastructure matters more than demos.
Most importantly, we were reminded that being early feels chaotic, underwhelming, and confusing at the same time.
In 1995, the internet was ugly, fragmented, and misunderstood. In 2025, AI is too. The builders who recognize this moment for what it is will be the ones still standing when the pages stop blinking.
Sources and Further Reading (with Annotations)
Core AI Thinkers and Researchers
- Andrej Karpathy – Talks and essays on large language models and AI cognition (YouTube). Insight: Offers perspective on current model capabilities, jagged intelligence, and the early-stage potential of AI systems.
- Demis Hassabis – Interviews and discussions on AGI, jagged intelligence, and AI safety (YouTube). Insight: Emphasizes cautious optimism, the distinction between narrow and general intelligence, and the importance of safety research.
- Geoffrey Hinton – Public statements and interviews on AI risk and loss of control (The Guardian). Insight: Highlights long-term alignment risks and the challenges of building AI systems whose internal representations we do not fully understand.
Industry Analysis and Strategy
- Ben Thompson – Stratechery, “The Benefits of Bubbles” and 2025 AI analysis (Stratechery). Insight: Provides strategic context, framing AI infrastructure investments as part of a beneficial “bubble” that lays groundwork for future innovation.
- Matt Wood (PwC) – Perspectives on enterprise AI adoption and operating models (LinkedIn). Insight: Explains why AI adoption struggles are often organizational, not technological, highlighting the importance of workflow redesign and governance.
Practitioner and Builder Perspectives
- Nate B. Jones – Essays and talks on practical AI stacks, fluency, and real-world usage (Nate B. Jones). Insight: Focuses on assembling multi-model AI stacks and on fluency with system behavior over mere access to tools.
- Francis Shanahan – “The Contrarian Agent: Why Making AI More Autonomous Can Make It Worse” (Substack). Insight: Argues that autonomy without supervision often reduces reliability, offering a grounded take on agent design.
- Sabrina Ramonov – Writing on AI, creativity, and the limits of automation (Sabrina Ramonov). Insight: Emphasizes that AI amplifies mediocrity without human oversight and that creativity remains inherently human.
- Allie K. Miller – The Entire 2025 AI Year in Review (AI with Allie). Insight: Provides a comprehensive overview of 2025 AI trends, tools, adoption patterns, and enterprise ROI insights.
Broader Context and Reporting
- The Guardian – Coverage of AI safety debates and Geoffrey Hinton’s warnings (The Guardian). Insight: Offers journalistic context on public and governmental reactions to AI risk and alignment concerns.
- Lex Fridman Podcast – Interviews with prominent AI researchers and thinkers (Lex Fridman Podcast). Insight: Captures real-time expert insights, debate on timelines, and discussions of AI’s impact on society and safety.