
The most striking insight from this year’s London AI Summit wasn’t about model complexity or compute power. It was this: the companies that win with AI won’t be the ones building more tools; they’ll be the ones operationalising intelligence.
From the opening remarks and throughout the day, it became clear that we are at a turning point in how businesses talk about AI and implement it. AI is no longer a pilot project; it is becoming infrastructure within your organisation. But infrastructure only creates value when it’s designed for the people who use it, and that means embedding AI into your operating model – not just bolting it onto your tech stack.
“AI isn’t the driver; it’s the crew chief”
One insight from HMS Racing stood out: they don’t expect to see an AI driver anytime soon, but an AI crew chief is already within reach. This is a useful shift in perspective – AI isn’t the driver of your business; it’s the virtual crew chief, advising when to pit, adjusting for conditions and helping you go faster with less fuel.
This distinction matters. When leaders treat AI as an autonomous solution, they overestimate its capability. AI isn’t here to replace judgment; it’s here to support it.
In motorsports, teams like HMS Racing are already using AI to calibrate engines for altitude or weather. The real edge? Not the data, but the way it’s interpreted – quickly, contextually and collaboratively.
Your operating model might be the real barrier
What we’re seeing now is the emergence of a new operating model, where humans and AI agents work side by side. But that collaboration isn’t automatic: it needs governance, orchestration and a complete rethink of ownership.
During the Summit, we saw a wave of demos from organisations deploying agentic systems – virtual ‘teammates’ (aka agents) embedded within business workflows. Some approaches even highlighted agents coordinated across multiple workflows, not stuck in function-specific sandboxes.
But orchestration without structure leads to chaos. If you’re deploying 2,000 agents, who updates them? Who governs the decisions they support?
This calls for more than just a new platform – it calls for a new AI-native organisation design, one where agents and employees sit within the same governance frameworks, are held to shared standards and work toward the same outcomes.
There’s also a growing opportunity to learn from companies born in the AI age. These businesses aren’t burdened by legacy tech or organisational inertia, and they’ve been building with agent-led thinking from day one. If traditional enterprises want to stay competitive, it may be time to stop retrofitting AI into old models – and start learning from those who are inventing new ones.
The real value lies in human + AI
Throughout the Summit, a common sentiment emerged: “Anything you can quantify can be replaced.” Neil Lawrence’s keynote, Business and the Atomic Human: Strategic Decision Making in the Age of AI, reminded us that while much of business can be quantified and delegated, strategic leadership remains fundamentally human. It’s in this space that AI becomes an amplifier – not a replacement.
Nowhere was this more evident than in a case study from social care. Rather than building from the perspective of a big tech provider (e.g. OpenAI), the team behind the project focused on the lived experience of users – in this case, frontline care workers – and how their day-to-day lives would be transformed. The result? An AI tool that allowed staff to increase the number of vulnerable adults they could see each week from five to 14.
That’s not just efficiency. That’s transformation through thoughtful augmentation – AI designed to empower the people who rely on it, not just the organisations that deploy it.
End experience vs UI
A key narrative throughout the Summit – reinforced by Sir Keir Starmer’s announcement of a £1bn investment in AI infrastructure (inc. £187m investment in AI training) – was the UK’s ambition to be an AI maker, not just a taker. But that vision won’t be realised through demos and prototypes alone. It depends on building the infrastructure that connects systems, people and purpose. That means addressing what one speaker aptly called “the plumbing” – data quality, integration and physical infrastructure like storage and cooling. AI systems don’t work in isolation – they rely on the so-called boring stuff to function, and to deliver real value.
Too often, organisations also rush into LLM adoption and agent deployment without thinking through the hard part: orchestration. Yes, you can plug a model into a dashboard – but if it can’t communicate with your CRM, ERP or compliance systems, it’s just theatre. And if you're experimenting with multiple LLMs, how do they talk to each other? Unlike older enterprise tech, these systems can't live in silos.
There’s also a sustainability dimension to this – where your infrastructure lives matters. Powering AI at scale comes with a carbon footprint, and the location of that “plumbing” has serious environmental implications. As we build out these systems, we need to ask: Where can we support AI’s growth using the most sustainable energy sources?
Our take? Infrastructure decisions must account not only for performance, but for long-term sustainability. The UK’s opportunity lies in building AI systems powered by clean, scalable energy – not just fast processors.
Conclusion
The future of AI isn’t about who builds the smartest agent. It’s about who builds the smartest digital operating model – one where humans, tools and data work together in real time. Stay tuned for more on our view of what the AI-led operating model looks and feels like – and how we can help you implement it.