
Google Cloud Next ’25: Thomas Kurian Unveils the Future of Enterprise AI (Maybe)

🧑‍💻 Let’s Be Real for a Second

Every year, tech giants promise that this is the year AI changes everything. And every year, we get a few impressive demos, a flood of hype, and… a lot of half-baked tools that don’t quite work outside the keynote stage.

So when Thomas Kurian took the stage at Google Cloud Next ’25 and started dropping stats like “42.5 exaflops per pod” and “4 million Gemini devs,” I braced myself for another round of polished-but-vague announcements.

But here’s the thing: underneath all the AI fireworks and superlative metrics, there’s a clearer signal this time. Google Cloud is genuinely building out a full-stack AI platform — from chips to agents — that enterprises can actually use. That doesn’t mean it’s all there yet, or that every claim holds up under scrutiny. But it’s starting to look like more than just a flashy deck.


🚀 What’s Actually New (and What Might Just Be Marketing)

Gemini Everywhere, All the Time

Gemini 2.5 is the star of the show, now split into:

  • Pro: high precision for coding and complex tasks
  • Flash: cheaper, faster, tuned for quick customer interactions

That split actually makes sense — it shows Google is thinking in terms of how models are used, not just how big they are. That said, it’s always tricky to validate the “24x intelligence per dollar vs GPT-4o” claim without context. Your mileage (and your latency budget) may vary.

They also showed off the full multimodal suite: Imagen 3, Chirp 3, Lyria, and Veo 2. All cool, all available in Vertex AI. But we’ve seen generative demos before — and creative tools tend to overpromise. We’ll see how these hold up in actual production.

Ironwood, Blackwell, and the AI Hypercomputer

This part felt like Google saying, “Oh, you thought we weren’t serious?”

The Ironwood TPU pods are beasts: over 9,000 chips per pod, 42.5 exaflops, custom-built for large models like Gemini. Google is also leaning into NVIDIA Blackwell GPUs, and building high-performance storage with Hyperdisk Exapools and Anywhere Cache.

It’s impressive — but also… niche. Unless you’re training billion-parameter models or running inference at scale, most orgs won’t need this. Still, for the companies that do? Google’s stack is starting to look scary fast.

Agents, Agents, Agents

This is where Google got more interesting (and more future-facing). They’re making a big bet that AI agents — not just models — are the next interface layer.

Here’s what they rolled out:

  • Agent Development Kit (ADK) – low-code framework for building agents
  • Agent2Agent Protocol (A2A) – lets agents talk to each other
  • Agentspace – centralized UI for employees to use, manage, and create agents

It’s a cool idea, and the pieces look well thought out. But it’s early. Most companies are still figuring out basic AI adoption — jumping to multi-agent orchestration is like learning to drive and being handed a Formula 1 car.
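To make the A2A idea concrete: the core bet is that agents exchange structured task messages and advertise capabilities, rather than calling into each other’s internals. Here’s a toy sketch of that shape in plain Python — everything below (names, message format, the registry) is illustrative, not the actual A2A protocol:

```python
import json

class ToyAgent:
    """Illustrative stand-in for an A2A-style agent (not the real protocol)."""
    def __init__(self, name, skill, handler):
        self.name = name
        self.skill = skill      # advertised capability, like an "agent card"
        self.handler = handler  # function that does the actual work

    def handle(self, message):
        # Agents exchange structured messages, not raw function calls
        task = json.loads(message)
        result = self.handler(task["input"])
        return json.dumps({"from": self.name, "output": result})

def delegate(sender_name, registry, skill, payload):
    """Sender looks up a peer by advertised skill and hands off the task."""
    peer = registry[skill]
    message = json.dumps({"from": sender_name, "input": payload})
    return json.loads(peer.handle(message))

# A "summarizer" agent that another agent can delegate to
summarizer = ToyAgent("summarizer", "summarize",
                      lambda text: text.split(".")[0] + ".")
registry = {summarizer.skill: summarizer}

reply = delegate("planner", registry, "summarize",
                 "Agents talk via messages. Internals stay private.")
print(reply["output"])  # prints the first sentence only
```

The interesting design choice is the indirection: the planner never knows how the summarizer works, only what it claims it can do. That’s the part that would let agents from different vendors interoperate — if the protocol actually catches on.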

That said, if any company can normalize this shift, it’s probably Google.

Cloud WAN and the Infra Flex

Kurian also announced Cloud WAN, a fully managed, ultra-fast global network. It’s the same backbone that powers Google Search, now being opened up to customers — promising up to 40% performance gains and major cost reductions.

I’m not an enterprise network engineer, but I know this: WAN architecture is a pain, and if Google can offload that complexity without locking you in too hard, a lot of CIOs are going to be very interested.


🧪 Real-World Use: Hype Meets Workflow

We’ve been using Workspace’s Gemini features (like the writing assistant and data analysis in Sheets) for months. They’re legit helpful — not mind-blowing, but quietly useful, which honestly feels like a win.

Vertex AI is also maturing. You can actually ground responses in your data, fine-tune Gemini on proprietary info, and now track usage and optimize performance from a single dashboard. It finally feels like a platform that wants to meet enterprises where they are, not force them to rebuild everything.
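Stripped to its essence, grounding is retrieval plus prompt assembly: fetch the most relevant chunks of your own data, then pin the model to them. Vertex handles all of this for you; the hand-rolled sketch below just shows the shape of it, with toy word-overlap scoring and made-up example data — no real SDK calls:

```python
def score(query, doc):
    """Toy relevance: word overlap. Real systems use embeddings."""
    q = set(query.lower().split())
    d = set(doc.lower().split())
    return len(q & d) / (len(q) or 1)

def build_grounded_prompt(query, corpus, top_k=2):
    """Retrieve the best-matching snippets and restrict the model to them."""
    ranked = sorted(corpus, key=lambda doc: score(query, doc), reverse=True)
    context = "\n".join(f"- {doc}" for doc in ranked[:top_k])
    return (
        "Answer using ONLY the context below. "
        "If the answer isn't there, say so.\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )

# Hypothetical company docs standing in for "your proprietary data"
corpus = [
    "Our refund window is 30 days from purchase.",
    "Support hours are 9am to 5pm, Monday through Friday.",
    "Shipping is free on orders over $50.",
]
prompt = build_grounded_prompt("What is the refund window?", corpus)
print(prompt)
```

The "answer ONLY from the context" instruction is the part doing the heavy lifting — it’s what keeps the model from confidently making things up when your data doesn’t cover the question, which is exactly the failure mode enterprises care about.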

Still, a lot of the new multi-agent stuff is TBD. The demos are slick. The docs are promising. But will the average team have the time or know-how to build and manage these agents in a meaningful way? We’ll see.


🔢 Trevor Score: 8/10

This isn’t a formal review — it’s a gut-check from someone who’s actually been using these tools.

Bold, fast, and finally cohesive — but some of the claims need a reality check.

Google’s stack is coming together in a big way, especially for developers and enterprises that are ready to go all-in on AI. But like all these AI rollouts, it’s easy to overshoot. There’s still a lot that needs to be tested in the wild.


📦 Final Verdict

Kurian’s keynote wasn’t subtle — it was Google saying “We’ve got the stack, we’ve got the models, and we’re open for business.” And honestly, a lot of it holds up. If you’re an enterprise building AI into your core processes, there’s probably no cloud provider with a more complete vision right now.

But there’s a gap — as always — between keynote reality and production reality. The tools are here. The question is whether teams have the resources (and clarity) to use them well.


👋 Final Thought

Props to Google for aiming big — but let’s keep watching how this stuff actually plays out. Because in AI, shipping is the easy part. Scaling, integrating, and not overpromising? That’s the real challenge.

And hey, if we’re all working alongside AI agents by next year, I’ll happily eat my skeptical little hat.

Watch The Keynote Below!