Navigating the AI Tsunami: Investment, Product, and the Professional Services Imperative
The artificial intelligence boom continues its relentless march, prompting both excitement and apprehension. With unprecedented capital flowing into AI infrastructure and a rapid pace of technological advancement, a critical question emerges: are we witnessing an AI bubble, or is this a foundational shift? The answer, it seems, is nuanced, depending on which part of the AI ecosystem one examines.
The AI Investment Landscape: A Three-Tiered View
Andrew Ng offers a framework for understanding where investment is flowing and where the opportunities and risks lie, as detailed in his newsletter piece “Is there an AI bubble?”. He categorises AI investment into three distinct areas:
- AI Application Layer: Andrew believes this area is currently underinvested. There is immense potential for new applications built atop AI infrastructure, and these applications must, by their nature, be more valuable than the underlying technology to sustain the ecosystem. He observes “many green shoots across many businesses that are applying agentic workflows” and notes that venture capital investors often hesitate here, preferring the better-understood recipe of infrastructure deployment.
- AI Infrastructure for Inference: This sector still requires significant investment. Demand for processing power to generate tokens (AI outputs) already outstrips supply. Andrew highlights that businesses are “supply-constrained rather than demand-constrained,” a positive problem indicating strong underlying need. As agentic coders like Claude Code, OpenAI Codex (with GPT-5), and Google’s Gemini CLI (with Gemini 3) improve and gain market penetration, aggregate demand for token generation will only grow.
- AI Infrastructure for Model Training: While Andrew is “cautiously optimistic” about this sector, he considers it the riskiest of the three. The rise of open-source/open-weight models and continuous algorithmic and hardware improvements weaken the “technology moat” for training frontier models. This could lead to a scenario where companies pouring billions into training might not see attractive financial returns.
Andrew’s primary concern regarding a potential “bubble” is not a fundamental flaw in AI, but rather the risk that overinvestment and a subsequent collapse in one part of the stack (e.g., training infrastructure) could trigger negative market sentiment across the entire field. However, he remains “very confident about the long-term health of AI’s fundamentals.”
The Product Reality: What Actually Works (and What’s Next)
Sean Goedecke offers a pragmatic view on the types of AI products that have genuinely found traction, noting that despite massive investment, many “new AI products” are simply chatbots, as discussed in his article “Only three kinds of AI products actually work”. He identifies three successful archetypes:
- Chatbots: While ubiquitous, Sean argues that “the best chatbot product is the model itself.” AI labs possess decisive advantages in model access and simultaneous development of chatbot harnesses (e.g., Anthropic with Claude Code, OpenAI with Codex). Most bespoke chatbots struggle to compete with general-purpose models like ChatGPT.
  - Caveat: Explicit Roleplay: A niche exists for chatbots that fulfil requests (e.g., adult content) that mainstream models avoid. However, this segment is likely to be absorbed by major AI labs as they become more flexible.
  - Caveat: Chatbots with Tools: These “AI assistants” often fail because “chat is not a good user interface.” Savvy users can manipulate the model into misusing its tools, and simple actions are often performed more efficiently through traditional UIs.
- Completions: Products like GitHub Copilot, which predated ChatGPT, exemplify this. They act as “smart autocompletes,” allowing users to leverage AI power without changing their workflow. The genius lies in users never having to talk to the model. Sean expresses surprise that this hasn’t taken off more widely outside coding.
- Agents: This is the most recent successful archetype, with coding agents making significant strides in the last year. Agents differ from chatbots-with-tools in that they take an initial request and then autonomously implement and test it. Their success in coding stems from the ease of verifying changes (by running tests) and from AI labs’ incentive to produce effective coding models; a minimal sketch of this loop follows the list. The “multi-billion-dollar question” is whether agents can be useful for tasks beyond coding, with research agents (e.g., in law or medicine) showing promise.
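To make the chatbot-with-tools versus agent distinction concrete, here is a minimal sketch of the loop Sean describes. It is an illustration under assumptions, not any product’s actual implementation: `propose_patch`, `apply_patch`, and `run_tests` are hypothetical callables standing in for an LLM call, a workspace write, and a test run.

```python
from typing import Callable

def coding_agent(
    request: str,
    propose_patch: Callable[[str, str], str],    # hypothetical LLM call
    apply_patch: Callable[[str], None],          # writes to a working copy
    run_tests: Callable[[], tuple[bool, str]],   # returns (passed, output)
    max_iterations: int = 5,
) -> bool:
    """One request in, then an autonomous implement-and-verify loop.

    A chatbot-with-tools would pause and ask the user between steps;
    the agent instead uses the test suite as its feedback signal.
    """
    feedback = ""
    for _ in range(max_iterations):
        patch = propose_patch(request, feedback)
        apply_patch(patch)
        passed, output = run_tests()
        if passed:
            return True        # change verified by running the tests
        feedback = output      # failures become context for the next try
    return False               # give up and escalate to a human
```

The design point is that verification (running the tests) is cheap and mechanical, which is precisely why coding is where agents succeeded first.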
Sean also identifies two types of LLM-based products that “don’t work yet but may soon”:
- LLM-generated Feeds: With major players such as Mark Zuckerberg’s Meta, OpenAI (Sora), and xAI exploring infinite personalised content feeds, this could become a primary mode of interaction. The advantage, as with completions, is that users never need to talk to a chatbot; inputs come from user behaviour.
- Games: While speculative, the integration of LLMs into video games, from full world simulations to AI-generated dialogue, holds potential. However, long development cycles, gamer resistance to AI, and challenges in making AI-generated content genuinely “challenging” are hurdles.
The Professional Services Imperative: Productise or Perish
The rapid evolution of AI, particularly agentic capabilities, is profoundly reshaping professional services. Jonas Braadbaart starkly illustrates this with the recent news that Accenture fired 11,000 employees who couldn’t adapt to AI, as detailed in his post “Accenture Just Fired 11,000 People. You’re Next.” These were not underperformers, but consultants whose billable-hour work was rendered obsolete by AI performing tasks in minutes that once took hours.
This event underscores a critical shift: professional services firms are moving from billable-hour models to outcome-based business models. The economics are unforgiving: a €500-per-hour client engagement struggles against an AI competitor charging €20 per month.
Jonas argues that firms must “productise or perish.” This involves:
- Embracing Opportunity: Identifying services that can be productised through AI and agentic solutions.
- Value-Based Pricing: Aligning offerings with the value created for customers, rather than time spent. A “back-of-the-napkin” calculation reveals that €20 (the hourly rate for entry-level white-collar positions) buys 5 million Gemini 3 Pro tokens, translating to roughly 3 million input words and 0.75 million output words. Processing that volume of information would take a human 625 hours, costing €12,500 at €20 per hour: a staggering 625x difference in processing cost (the arithmetic is reconstructed after this list).
- Re-architecting for AI: Traditional firm structures, organised around human cognitive limits, are out of sync with AI’s capabilities.
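For the curious, the 625x figure can be reconstructed. The token-to-word split and the human reading speed below are assumptions chosen to reconcile Jonas’s published numbers, not figures taken from his post:

```python
# Reproducing Jonas's back-of-the-napkin figures. Assumed inputs:
# a 4M-input / 1M-output split of the 5M tokens that EUR 20 buys,
# roughly 0.75 words per token, and a human throughput of 100 words
# per minute; these are what make the published numbers reconcile.

INPUT_TOKENS = 4_000_000
OUTPUT_TOKENS = 1_000_000
WORDS_PER_TOKEN = 0.75            # common rough conversion rate
HUMAN_WORDS_PER_MINUTE = 100      # assumed reading/writing throughput
HOURLY_RATE_EUR = 20              # entry-level white-collar rate
AI_COST_EUR = 20                  # price of the 5M-token budget

input_words = INPUT_TOKENS * WORDS_PER_TOKEN      # 3,000,000 words
output_words = OUTPUT_TOKENS * WORDS_PER_TOKEN    # 750,000 words
total_words = input_words + output_words          # 3,750,000 words

human_hours = total_words / (HUMAN_WORDS_PER_MINUTE * 60)   # 625 hours
human_cost_eur = human_hours * HOURLY_RATE_EUR              # EUR 12,500

print(f"Human: {human_hours:.0f} hours, EUR {human_cost_eur:,.0f}")
print(f"Cost ratio: {human_cost_eur / AI_COST_EUR:.0f}x")   # 625x
```

Whatever the exact conversion rates, the gap is two to three orders of magnitude, which is the force behind the shift from billable hours to outcomes.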
While venture capital pours into vertical agentic AI startups, Jonas contends that these often struggle to gain significant market share due to the complexity of real-world economies. This presents a unique opportunity for existing professional services firms. By leveraging their “local industry and client expertise” and packaging it into “agentic, done-for-you productised services that create direct customer outcomes,” they can gain a significant advantage.
Jonas makes a bold prediction for 2026 and beyond: “The next big platform play will be to build the Shopify or Stripe of agentic service delivery: a platform that allows professional services firms to build and deliver agentic done-for-you services to their existing clients in a composable no-code environment.”
Synthesis: The Intertwined Future
The perspectives of Andrew Ng, Sean Goedecke, and Jonas Braadbaart paint a cohesive picture of the AI future. Andrew’s diagnosis of underinvestment in the AI application layer directly aligns with Sean’s identification of successful product archetypes like completions and agents. These are the “green shoots” Andrew observes, and they represent the kind of AI applications that professional services firms must learn to build and productise, as Jonas argues.
The increasing demand for AI inference capacity highlighted by Andrew is a direct consequence of the growing adoption of agentic tools and the massive token consumption demonstrated by Jonas’s calculations. The rapid advancements in models like Claude 3.7 Sonnet, GPT-5, and Gemini 3, which have made truly effective agents possible only in the past year, are the catalysts for this transformation.
The AI landscape is not merely a speculative bubble but a rapidly evolving ecosystem. Strategic investment, focused product development on truly effective archetypes, and a fundamental re-evaluation of business models in professional services are not just opportunities but necessities for navigating this transformative period. The future belongs to those who can effectively build, deploy, and productise AI’s capabilities to deliver tangible outcomes.