Introduction to AI Infrastructure
AI spending in Asia Pacific continues to rise, yet many companies still struggle to get value from their AI projects. Much of this comes down to the infrastructure that supports AI: most systems are not built to run inference at the speed or scale real applications need. Industry studies show that, because of this gap, many projects miss their ROI goals even after heavy investment in GenAI tools.
The Challenge of AI Infrastructure
The gap shows how much AI infrastructure influences performance, cost, and the ability to scale real-world deployments in the region. Akamai is trying to address this challenge with Inference Cloud, built with NVIDIA and powered by the latest Blackwell GPUs. The idea is simple: if most AI applications need to make decisions in real time, then those decisions should be made close to users rather than in distant data centres.
Why AI Projects Struggle Without the Right Infrastructure
Jay Jenkins, CTO of Cloud Computing at Akamai, explained why this moment is forcing enterprises to rethink how they deploy AI, and why inference, not training, has become the real bottleneck. Many AI initiatives fail to deliver the expected business value because enterprises underestimate the gap between experimentation and production. Even with strong interest in GenAI, large infrastructure bills, high latency, and the difficulty of running models at scale often block progress.
The Importance of Inference
Most companies still rely on centralised clouds and large GPU clusters. But as use grows, these setups become too expensive, especially in regions far from major cloud zones. Latency also becomes a major issue when models have to run multiple steps of inference over long distances. AI is only as powerful as the infrastructure and architecture it runs on, Jenkins says, adding that latency often weakens the user experience and the value the business hoped to deliver.
The Rise of Edge Infrastructure
Across Asia Pacific, AI adoption is shifting from small pilots to real deployments in apps and services. Jenkins notes that as this happens, day-to-day inference – not the occasional training cycle – is what consumes most computing power. With many organisations rolling out language, vision, and multimodal models in multiple markets, the demand for fast and reliable inference is rising faster than expected. This is why inference has become the main constraint in the region.
How Edge Infrastructure Improves AI Performance and Cost
Jenkins says moving inference closer to users, devices, or agents can reshape the cost equation. Doing so shortens the distance data must travel and allows models to respond faster. It also avoids the cost of routing huge volumes of data between major cloud hubs. Physical AI systems – robots, autonomous machines, or smart city tools – depend on decisions made in milliseconds. When inference runs in a distant region, these systems cannot respond in time.
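To see why distance alone matters, a back-of-the-envelope sketch helps. The figures below are illustrative assumptions, not from the article: light in optical fibre travels at roughly 200,000 km/s, so every 1,000 km of one-way distance adds about 10 ms of round-trip time before any model processing happens – and agentic workloads multiply that by each step in the chain.

```python
# Illustrative latency sketch (assumed rule of thumb, not a measurement):
# signals in optical fibre propagate at roughly 200,000 km/s (~2/3 of c),
# i.e. about 200 km per millisecond, ignoring routing and queuing overhead.

FIBRE_SPEED_KM_PER_MS = 200.0

def round_trip_ms(distance_km: float) -> float:
    """Minimum network round-trip time over fibre for a given one-way distance."""
    return 2 * distance_km / FIBRE_SPEED_KM_PER_MS

# A user 6,000 km from a centralised cloud region pays ~60 ms per round trip;
# an edge site 100 km away pays ~1 ms. An agent making 5 sequential inference
# calls multiplies that gap fivefold before any GPU time is spent.
for distance in (100, 1_000, 6_000):
    print(f"{distance:>5} km one-way -> {round_trip_ms(distance):5.1f} ms minimum round trip")
```

Under these assumptions, the network floor alone can exceed a whole latency budget when inference sits in a far-away region, which is the arithmetic behind moving it to the edge.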
Industries That Benefit from Edge Infrastructure
Early demand for edge inference is strongest from industries where even small delays can affect revenue, safety, or user engagement. Retail and e-commerce are among the first adopters because shoppers often abandon slow experiences. Personalised recommendations, search, and multimodal shopping tools all perform better when inference is local and fast. Finance is another area where latency directly affects value. Jenkins says workloads like fraud checks, payment approval, and transaction scoring rely on chains of AI decisions that should happen in milliseconds.
The Future of AI Infrastructure
As AI workloads grow, companies need infrastructure that can keep up. Jenkins says this has pushed cloud providers and GPU makers into closer collaboration. Akamai’s work with NVIDIA is one example, with GPUs, DPUs, and AI software deployed in thousands of edge locations. The idea is to build an “AI delivery network” that spreads inference across many sites instead of concentrating everything in a few regions.
The Importance of Security
Security is built into these systems from the start, Jenkins says. Zero-trust controls, data-aware routing, and protections against fraud and bots are becoming standard parts of the technology stacks on offer. Running agentic systems – which make many decisions in sequence – needs infrastructure that can operate at millisecond speeds. Jenkins believes the region’s diversity makes this harder but not impossible.
Conclusion
AI infrastructure plays a crucial role in the success of AI projects. The gap between experimentation and full-scale deployment is wider than many organisations expect, and inference has become the main constraint in the region. Edge infrastructure can improve AI performance and cost, and industries such as retail and finance are already benefiting from it. As AI workloads grow, companies need infrastructure that can keep up, with security built in from the start.
FAQs
Q: What is the main challenge facing AI projects in Asia Pacific?
A: The main challenge facing AI projects in Asia Pacific is the lack of suitable infrastructure to support AI, particularly in terms of running inference at the speed and scale required by real-world applications.
Q: What is Inference Cloud, and how does it address the challenge of AI infrastructure?
A: Inference Cloud is a solution developed by Akamai, built with NVIDIA and powered by the latest Blackwell GPUs, that aims to address the challenge of AI infrastructure by running inference closer to users, rather than in distant data centres.
Q: What is the importance of edge infrastructure in AI?
A: Edge infrastructure is critical in AI as it allows for faster and more reliable inference, which is essential for many AI applications, particularly those that require real-time decision-making.
Q: Which industries are most likely to benefit from edge infrastructure?
A: Industries such as retail, e-commerce, and finance are most likely to benefit from edge infrastructure, as they require fast and reliable inference to operate effectively.
Q: What is the future of AI infrastructure, and how will it impact businesses?
A: The future of AI infrastructure will be shaped by the growing demand for edge inference, and businesses will need to adapt to this shift by investing in infrastructure that can support fast and reliable inference.