Cloud and Infrastructure in 2026

2026-01-29

How AI Is Rewriting Architecture Assumptions

Where do things stand with AI in 2026 from a cloud, infrastructure, and operations perspective? We asked our analysts what is changing in architecture and operations as AI moves from experimentation into sustained use.

The consistent theme was that AI is no longer something organizations are “adding” to existing platforms. Rather, it is starting to change the architectural assumptions on which those platforms were built. Let’s see what they said, and what organizations can do in response.

Where Organizations Are With AI Today

Most organizations now have some form of AI in production, even if it is limited to specific teams or use cases. The early phase has been dominated by pilots, managed services, and tooling layered onto existing cloud and data platforms. That approach is showing strain, says William McKnight: “AI is evolving far faster than the architectural assumptions underneath it.”

Why? AI workloads behave differently from transactional or analytical systems, and they are more sensitive to latency, data locality, throughput, and governance constraints. Architectures that are acceptable for BI or application workloads often struggle when models need continuous access to large, evolving datasets.

The challenge is not that the models fail, but that organizations are building AI systems without a clear sense of what the architecture will become. Says Tom Garske: “Most organizations haven’t figured out what ‘good’ looks like yet. They’re building agents and pipelines without clear boundaries or separation, so the systems become unpredictable. Until they understand the architectural patterns, they’ll continue to hit the same operational limits.”

Together, these pressures are pushing AI out of the “innovation sandbox” and into the core of infrastructure planning.

Impact on Infrastructure Architecture and Design

One of the clearest shifts discussed was the reversal of traditional data movement patterns. Says Darrel Kent, “‘Move the data to AI’ workflows are being superseded as AI moves closer to the data.”

Rather than centralizing data and sending it to large AI platforms, organizations are increasingly looking to place compute closer to where data already lives.

This shift has practical implications for infrastructure design. GPU availability, network bandwidth, and data movement costs all constrain where AI workloads can run effectively. As workloads scale, compute placement becomes a design decision tied to data gravity, compliance boundaries, and cost, rather than a default cloud choice.

As William McKnight observes, this exposes mismatches in existing architectures and the associated costs. “Once AI workloads scale, the cost and performance curves stop making sense under traditional cloud assumptions. Plus, AI exposes how much technical debt organizations have been carrying in their data pipelines. When everything is stitched together manually, it works for a demo, but it falls apart in production.”

Storage also changes its role as a result. AI systems depend on fast access to embeddings, vectors, metadata, and versioned datasets, so data infrastructure platforms are increasingly expected to support lower-latency access patterns and tighter integration with compute. Storage now sits inside AI pipelines rather than at their edges: an active component of system design rather than a passive repository.
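
To make that concrete, here is a minimal, hypothetical sketch of the kind of lookup that now sits on the hot path of an AI pipeline: a cosine-similarity search over an in-memory embedding matrix, written in plain NumPy rather than against any particular vector database. The VectorStore class and document ids are illustrative, not drawn from any platform discussed here.

import numpy as np

# Toy embedding store: ids plus an (n, d) matrix of vectors.
# A production system would use an indexed vector store; NumPy
# stands in here to show the shape of the access pattern.
class VectorStore:
    def __init__(self, ids, embeddings):
        self.ids = list(ids)
        vecs = np.asarray(embeddings, dtype=np.float32)
        # Normalize once so each query is a single matrix multiply.
        self.vecs = vecs / np.linalg.norm(vecs, axis=1, keepdims=True)

    def top_k(self, query, k=3):
        # Cosine similarity against every stored embedding.
        q = np.asarray(query, dtype=np.float32)
        q = q / np.linalg.norm(q)
        scores = self.vecs @ q
        best = np.argsort(scores)[::-1][:k]
        return [(self.ids[i], float(scores[i])) for i in best]

# Usage: this lookup runs per request, so its latency feeds directly
# into model response time. That is why storage is pulled toward compute.
store = VectorStore(["doc-1", "doc-2", "doc-3"],
                    [[0.1, 0.9], [0.8, 0.2], [0.5, 0.5]])
print(store.top_k([0.7, 0.3], k=2))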

Organizational and Operational Impact

These technical changes affect how teams are structured and how infrastructure is operated day to day. Explains Andrew Green, “Infrastructure teams supersede traditional Network and ITOps roles in cloud-native environments.” Cloud-native, API-driven platforms and AI workloads require tighter coordination across these domains, particularly as models move from isolated experiments to continuous operation.

As AI systems move from pilots to persistent workloads, project-based approaches to data engineering also begin to break down. Pipelines need to be governed, repeatable, and resilient over time, rather than rebuilt for each new use case. As Darrel Kent explains, “AI needs persistent, governed pipelines, and that immediately breaks the project-based model. You can’t keep rebuilding data prep for every use case when the workloads run every day and depend on consistent lineage.”
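
As a sketch of what “persistent and governed” can look like in code, the snippet below attaches a lineage record to every step execution, so daily runs accumulate an audit trail instead of being rebuilt per project. All names here (Pipeline, LineageRecord, the dataset identifiers) are hypothetical stand-ins, not any specific tool’s API.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class LineageRecord:
    step: str
    inputs: list
    outputs: list
    ran_at: str

@dataclass
class Pipeline:
    lineage: list = field(default_factory=list)

    def run_step(self, name, fn, inputs, outputs):
        # Run the step, then record lineage unconditionally: the
        # record is a by-product of execution, not a manual task.
        fn()
        self.lineage.append(LineageRecord(
            step=name,
            inputs=list(inputs),
            outputs=list(outputs),
            ran_at=datetime.now(timezone.utc).isoformat(),
        ))

# Usage: the same governed step runs every day with consistent lineage.
pipe = Pipeline()
pipe.run_step("prep_features", lambda: None,
              inputs=["raw.events"], outputs=["features.daily"])
print(pipe.lineage[0])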

This operational pressure is closely tied to architectural clarity. Says Tom Garske: “Most organizations don’t have the separation or control layers they need to run AI safely at scale. Without clear boundaries and governance patterns, teams end up reacting to behavior instead of designing for it.”

This also pulls governance closer to engineering teams, with requirements around data handling, access control, and auditability increasingly needing to be expressed in code and enforced automatically, rather than managed through documentation and manual processes. Says Andrew Green, “AI pushes you into a world where pipelines have to be monitored and tested constantly. You can’t assume anything will stay stable without automation holding it together.”
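
One hedged illustration of governance expressed in code: the policy below is ordinary data, and the check runs automatically before a step reads anything, so enforcement does not depend on documentation or manual review. The policy table, roles, and dataset names are invented for the example.

# Dataset -> roles allowed to read it. Changing policy means
# changing code, which is versioned, reviewed, and auditable.
POLICY = {
    "customers.pii": {"governance", "fraud"},
    "events.clickstream": {"analytics", "ml"},
}

def assert_can_read(role: str, dataset: str) -> None:
    # Fail the run loudly when a step requests data its role may not see.
    allowed = POLICY.get(dataset, set())
    if role not in allowed:
        raise PermissionError(
            f"role {role!r} may not read {dataset!r}; allowed: {sorted(allowed)}"
        )

# Usage: the check executes on every run, so a policy change takes
# effect immediately across all pipelines that import it.
assert_can_read("ml", "events.clickstream")   # passes
try:
    assert_can_read("ml", "customers.pii")    # raises PermissionError
except PermissionError as err:
    print(err)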

Practical Next Steps

So, what to do? Across the discussion, the emphasis was on pragmatic action rather than large-scale transformation programs. Rather than treating AI as a separate initiative, analysts and Field CTOs consistently pointed to adjusting existing infrastructure thinking to account for sustained AI use. As William McKnight summarized, “The near-term work is dealing with the fragility that AI exposes. You can’t scale anything until the underlying architecture can run every day without heroic effort.”

Common themes included:

  • Reviewing existing architectural assumptions in light of AI workload behavior
  • Treating data governance, lineage, and sovereignty as design inputs rather than afterthoughts
  • Increasing automation in data pipelines to improve reliability and repeatability
  • Expanding infrastructure skills to include accelerated compute and policy-driven platforms
  • Making compute placement a conscious design decision rather than a default

None of these steps are radical. Taken together, however, they reflect a meaningful shift in how cloud and infrastructure environments are planned and operated once AI becomes a sustained, operational part of the system.