AI Infrastructure
All briefings covering AI infrastructure.
MCP Hits 97 Million Installs and Becomes the AI Standard
The Model Context Protocol reached 97 million installs in March 2026, with every major AI provider now shipping MCP-compatible tooling. MCP has become the foundational standard for connecting AI agents to external tools, databases, and APIs. Operators building AI workflows on proprietary integration approaches are creating technical debt that will be expensive to unwind.
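MCP messages are framed as JSON-RPC 2.0 requests; a client asks a server to run a tool via the protocol's "tools/call" method. The sketch below shows what such a request looks like on the wire, using Python's standard library only. The tool name and arguments ("query_orders", "customer_id") are hypothetical placeholders, not part of any real server.

```python
import json

# Minimal sketch of an MCP-style tool invocation. MCP is built on JSON-RPC 2.0;
# "tools/call" is the protocol's method for invoking a server-side tool. The
# tool name and arguments below are invented for illustration.
def make_tool_call(request_id: int, tool_name: str, arguments: dict) -> str:
    """Serialise a JSON-RPC 2.0 request asking an MCP server to run a tool."""
    request = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }
    return json.dumps(request)

msg = make_tool_call(1, "query_orders", {"customer_id": "C-1042"})
print(msg)
```

Because every provider speaks this same framing, a tool wired up once can be called by any MCP-compatible agent, which is the lock-in argument the briefing makes against proprietary integrations.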
Gemini 3.1 Flash-Lite Makes Powerful AI 8x Cheaper to Run
Google launched Gemini 3.1 Flash-Lite on 3 March 2026, pricing it at $0.25 per million input tokens, one-eighth the cost of Gemini 3.1 Pro. The model is 2.5 times faster than its predecessor and outperforms rival efficiency models from OpenAI and Anthropic across most benchmarks. For operators building or buying AI-powered tools, the cost of running capable AI at scale has dropped significantly.
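To make the pricing gap concrete, here is a back-of-the-envelope sketch using only the figures quoted above: Flash-Lite at $0.25 per million input tokens, and one-eighth the cost of Pro implying Pro at $2.00 per million. The 500-million-token monthly volume is an illustrative assumption, and output-token pricing is not given in the briefing, so this covers input tokens only.

```python
# Input-token pricing from the briefing; Pro's rate is derived from
# "one-eighth the cost", i.e. Pro costs 8x Flash-Lite.
FLASH_LITE_PER_M = 0.25
PRO_PER_M = FLASH_LITE_PER_M * 8  # $2.00 per million input tokens

def input_cost(tokens: int, price_per_million: float) -> float:
    """Dollar cost of processing `tokens` input tokens at the given rate."""
    return tokens / 1_000_000 * price_per_million

# Hypothetical workload: 500 million input tokens per month.
tokens = 500_000_000
print(f"Flash-Lite: ${input_cost(tokens, FLASH_LITE_PER_M):,.2f}")  # $125.00
print(f"Pro:        ${input_cost(tokens, PRO_PER_M):,.2f}")         # $1,000.00
```

At that volume the monthly difference is $875 on input tokens alone, which is why the per-token price matters more than benchmark wins for operators running AI at scale.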
Cisco and NVIDIA Bring Secure AI to the Enterprise Edge
Cisco announced a major expansion of its Secure AI Factory with NVIDIA at GTC 2026 on 17 March, extending AI deployment capabilities from central data centres to edge locations including warehouses, hospitals, and vehicles. The platform compresses enterprise AI deployment timelines from months to weeks, with zero-trust security and agent-level guardrails built in from the start. AT&T is the first service provider to bring these capabilities to market.
NVIDIA GTC 2026: NemoClaw Brings Enterprise AI Agents to Every Business
NVIDIA launched NemoClaw, an open-source platform that lets businesses deploy AI agents without proprietary lock-in, at GTC 2026 today. Paired with the Vera Rubin chip platform, which delivers AI inference up to 10 times cheaper than its predecessor, NVIDIA has made a clear push to become the foundational layer for the agentic AI era. For operators, this means the infrastructure for autonomous AI workflows is becoming faster, cheaper, and more accessible.