The 2026 Solow Paradox: Why Your AI Strategy is a Camel (and How to Build an Architecture of Speed)

Happy Saturday! In this edition of ‘Saturday Strategy’, let’s look at a situation where financial investment isn’t lifting productivity. According to Gartner, global AI spending has hit a staggering $2.5 trillion, so money isn’t the issue. On the other hand, your CFO is probably asking why the promised “productivity revolution” hasn’t shown up in the quarterly reports yet. Welcome to the 2026 Solow Paradox.

In 1987, Nobel laureate Robert Solow famously quipped, “You can see the computer age everywhere but in the productivity statistics.” Fast forward nearly forty years, and history is repeating itself. We have LLMs in our pockets, agents in our workflows, and massive GPU clusters in our basements. Yet the National Bureau of Economic Research found that roughly 90% of firms report zero measurable impact on their bottom-line productivity.

Why? Because most AI strategies aren’t built for speed or scale. They are built by committee. And as the old saying goes, a horse designed by a committee is a camel.

I’m seeing a widening gap between companies that treat AI as a “tool” and those that treat it as a fundamental restructuring of how work happens. If you want to stop building camels and start building an architecture of speed, you need to change the way you think about governance, identity, and decision-making. As the Gartner report notes, AI adoption is shaped by the readiness of both human capital and organisational processes. Financial investment is increasing, but it is not enough on its own: it takes maturity and self-awareness to adopt a discovery mindset that prioritises much-needed outcomes over speculative potential.

The J-Curve: Why Productivity Dips Before it Soars

The reason we’re feeling the Solow Paradox so acutely right now is the Productivity J-Curve. History shows that transformative technologies, like electricity or the original internet, initially trigger a dip in productivity, because organisations have to endure massive disruption and adjustment costs. Even the series Downton Abbey had a storyline about the introduction of the telephone: the Dowager Lady Grantham describes the new “instrument of torture”, commenting that “I feel as if I were living in an H.G. Wells novel”.

General purpose technologies (GPTs) such as AI both enable and require significant complementary investments. That can mean architecting new processes and business models. It can also mean retraining human capital; coming back to Downton Abbey, the staff were initially interested in the new telephones, but reacted with consternation when they realised they would actually have to answer them when they rang. AI investments such as these are often intangible and poorly measured in the accounts, even when they create valuable assets for the firm. Organisations are buying the tools, but haven’t yet redesigned the workflows to match them.

Predictable ROI must arrive before enterprises will scale AI up, so businesses can end up stuck in the Trough of Disillusionment throughout 2026. From my experience, AI is most often sold to enterprises by their incumbent software provider, e.g. Google or Microsoft 365, rather than bought as part of a new project. Even so, recent research by ManpowerGroup’s Global Talent Barometer found that worker confidence in AI’s utility actually plummeted by 18% in 2025, even as usage increased. Why? Because we’ve layered complex AI tools on top of old, clunky human processes. We’re asking a Ferrari to pull a plow.
To get past the dip, we have to move beyond the AI trust wall and start looking at how we actually manage this new digital workforce.

Your Strategy is a Camel: The Curse of Consensus Management

Let’s take a common example: someone’s great idea for an autonomous AI agent enters the pipeline. It goes to IT for a security review. It goes to Legal for a risk assessment. It goes to HR to discuss “human-in-the-loop” ethics. It goes to a steering committee for “alignment.” By the time the project is approved, it has been washed out into a shadow of its former self, to the point where it is no longer an autonomous agent at all. It has been redefined as a “supervised chatbot with restricted permissions that requires manual approvals to send an email.”

This is the Camel Problem. Consensus-driven management is designed to minimise risk, but in the AI era, the greatest risk is slowness. While your committee is debating the font on the AI’s user interface, your competitors are building lean, fast-moving systems that operate without the friction of “governance theatre.”

[Image: Robotic horse turned into a camel by committee hands, representing slow AI strategy and governance theatre.]

The Rise of Non-Human Identity (NHI)

One of the biggest shifts we’ve seen in 2026 is the concept of Non-Human Identities (NHIs): digital identities assigned to software actors, such as AI agents and service accounts, rather than to people. Research by the Cloud Security Alliance (CSA) established the current industry baseline at 45 NHIs for every 1 human identity. Like human team members, NHIs have specific roles, access levels, and the ability to act autonomously. To achieve speed, AI agents need to be able to “sign” documents, move data, and make low-level decisions without waiting for a human to click “OK.”

Why NHI Matters for Security and Speed

  1. Traceability: You need to know exactly which agent did what, when, and why.
  2. Granular Permissions: You don’t give a junior analyst the keys to the kingdom; you shouldn’t give it to a general-purpose LLM either.
  3. Autonomous Action: When an agent has its own identity, it can interact with other systems (and other agents) in a secure, verifiable way.
There is no “one size fits all” approach to automation, or to the journey toward a governance model that includes NHIs.
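The three properties above can be made concrete. Here is a minimal, illustrative sketch of an NHI record in Python; all names (`NonHumanIdentity`, the `crm:read`-style scopes, the agent ID) are hypothetical, not any particular vendor’s API:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class NonHumanIdentity:
    """Illustrative NHI: a named agent with explicit, scoped permissions."""
    agent_id: str
    scopes: set[str]                       # granular permissions, e.g. "crm:read"
    audit_log: list[dict] = field(default_factory=list)

    def act(self, scope: str, action: str) -> bool:
        allowed = scope in self.scopes     # deny anything outside the granted scopes
        self.audit_log.append({            # traceability: who, what, when, outcome
            "agent": self.agent_id,
            "scope": scope,
            "action": action,
            "allowed": allowed,
            "at": datetime.now(timezone.utc).isoformat(),
        })
        return allowed

agent = NonHumanIdentity("invoice-triage-01", scopes={"crm:read", "email:draft"})
assert agent.act("email:draft", "draft renewal reminder")         # within scope
assert not agent.act("payments:execute", "pay supplier invoice")  # blocked, but logged
```

The point of the sketch is that the deny decision and the audit entry happen in the same place: the agent can move fast inside its scopes, and every attempt, allowed or not, is traceable afterwards.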

From Governance Theatre to an Architecture of Speed

There is a risk that corporate governance becomes “theatre”: a series of tickbox exercises that diffuse risk rather than reduce it. To break the Solow Paradox, you need an Architecture of Speed, which means shifting your focus from preventing mistakes to enabling velocity.
The challenge is the organisational inertia that treats AI like a fancy version of Excel.
How do you avoid treating AI as if it were Excel on steroids?

Decentralise Decision-Making

If every AI implementation requires a signature from the C-suite, you will never scale. Instead, create “Guardrail Templates.” Define the boundaries, such as data privacy and ethical constraints, just as you would for human team members. Then, let your departments deploy agents within those boundaries as they see fit. Decentralisation helps businesses keep up with the pace of AI and Data Strategy.
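As a sketch of what a Guardrail Template might look like in practice, here is a hypothetical example in Python. The template fields (`allowed_data_classes`, `max_autonomy_level`, `requires_human_review`) and the sample agent spec are invented for illustration; the idea is simply that a central team defines the boundaries once, and a department can self-approve anything that fits inside them:

```python
# Illustrative "Guardrail Template": defined centrally, checked locally.
GUARDRAIL_TEMPLATE = {
    "allowed_data_classes": {"public", "internal"},   # no PII without review
    "max_autonomy_level": 2,       # 0 = suggest, 1 = draft, 2 = act with audit
    "requires_human_review": {"external_email", "contract_change"},
}

def within_guardrails(agent_spec: dict, template: dict = GUARDRAIL_TEMPLATE) -> bool:
    """A department can self-approve any agent spec that passes these checks."""
    data_ok = set(agent_spec["data_classes"]) <= template["allowed_data_classes"]
    autonomy_ok = agent_spec["autonomy_level"] <= template["max_autonomy_level"]
    sensitive = set(agent_spec["actions"]) & template["requires_human_review"]
    review_ok = not sensitive or agent_spec.get("human_in_loop", False)
    return data_ok and autonomy_ok and review_ok

marketing_agent = {
    "data_classes": ["public"],
    "autonomy_level": 1,
    "actions": ["draft_social_post"],
}
print(within_guardrails(marketing_agent))  # True: deploy without a committee
```

An agent spec that touches PII or sensitive actions fails the check and gets routed to humans; everything else ships. The committee reviews the template once a quarter, not every project.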

Prioritise Supervised Autonomy

Organisations can move from “Co-pilot” (where the human does the work and the AI helps) to “Supervised Autonomy” (where the AI does the work and the human manages the interesting, more complicated work and the exceptions). In 2026, the most productive firms are those where one human supervisor can manage 5 to 10 AI agents. This is where the real Total Factor Productivity (TFP) gains are hidden.

[Image: Human supervisor managing many non-human identities and AI agents in a modern office lobby.]

Embrace “Small Data” for Fast Wins

While everyone is obsessed with massive data lakes, the fastest gains often come from high-quality, clean data used in targeted, “small” applications. Don’t wait for a three-year data migration project to finish. Build your Architecture of Speed on the data you have today to deliver small, successful tasks to build confidence.

Actionable Advice for Enterprise Leaders

If you want to stop the 2026 Solow Paradox from swallowing your ROI, here is how you reorganise for speed:
  • Audit your “Camel” processes: Look at your last three AI projects. Where did they slow down? If the answer is “committees,” it’s time to rethink your approval workflows.
  • Invest in NHI Management: Implement a dedicated system for managing Non-Human Identities, ensuring they have the autonomy they need to be useful and the security they need to be safe.
  • Measure Velocity, Not Just Accuracy: Accuracy is important, but if your “accurate” AI takes months to deploy, then it could be stale by the time it is productionised. Start measuring “Time to Value” for every AI experiment.
  • Shift to “Restructuring First”: Don’t just automate an old process. Use AI as an excuse to burn the old process down and build something that was meant to be autonomous from day one.
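The “Time to Value” metric above needs no special tooling to get started. A minimal sketch, with hypothetical experiment names and dates:

```python
from datetime import date

# Hypothetical experiment log: when the idea was approved vs. when it first
# delivered measurable business value.
experiments = [
    {"name": "invoice-triage", "approved": date(2026, 1, 5),  "first_value": date(2026, 2, 2)},
    {"name": "churn-summary",  "approved": date(2026, 1, 12), "first_value": date(2026, 4, 20)},
]

def time_to_value_days(exp: dict) -> int:
    """Days from approval to first measurable value -- the velocity metric."""
    return (exp["first_value"] - exp["approved"]).days

for exp in experiments:
    print(exp["name"], time_to_value_days(exp))
```

Tracked per experiment, the number makes the camels visible: a 28-day project and a 98-day project may both be “accurate”, but only one of them is fast.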

Final Thoughts: The Future is Fast, or it’s Nothing

The 2026 Solow Paradox isn’t a sign that AI is overhyped. It’s a sign that our organisations are outdated: the “Camel” problem is a human problem, not a technical one. By embracing Non-Human Identities and building an Architecture of Speed, you can move past the J-Curve dip and start seeing the productivity gains that everyone else is still just talking about. If you’re ready to stop the consensus-driven slowdown and start scaling your AI strategy with precision and speed, let’s talk. I specialise in turning complex AI theory into practical, high-velocity reality. Are you ready to stop building camels?
Looking for more insights on AI and Data Strategy? Check out our latest book reviews and strategy guides on the Jen Stirrup Consulting blog.

