McKinsey Now Runs 25,000 AI Agents Alongside Its Staff

Friday 20 March 2026 | McKinsey & Co.
Employee Amplification Systems · AI Growth Engine

McKinsey CEO Bob Sternfels has confirmed the firm operates 25,000 AI agents working alongside its 40,000 human employees, up from just 3,000 agents 18 months ago. The deployment has saved 1.5 million hours of work in a single year and prompted McKinsey to introduce an AI collaboration test as a formal stage in its graduate hiring process. The announcement signals that agentic AI has moved from competitive advantage to operational standard at the world's largest management consultancy.

Operator Insight

McKinsey went from 3,000 AI agents to 25,000 in 18 months, and the firm is not treating this as a technology project. It is treating it as a workforce redesign. The productivity gains are documented, the headcount ratios are shifting, and the hiring criteria have already changed. Operators who are still in the evaluation phase of AI adoption are not watching a competitor pilot a new tool. They are watching a firm restructure how professional services work at scale.

30-Second Summary

McKinsey CEO Bob Sternfels has confirmed the firm operates 25,000 AI agents alongside 40,000 human employees, a figure that grew from 3,000 agents just 18 months ago. The agents handle research, synthesis, data analysis, and the creation of client deliverables. The deployment saved 1.5 million hours of work last year and generated 2.5 million charts in six months alone. McKinsey has also introduced an AI collaboration test using its internal tool, Lilli, as a formal stage in its graduate recruitment process. For operators, this is not an abstract milestone. It is a documented playbook for what agent-first operations look like at scale.

At a Glance

  • Topic: AI Strategy
  • Company: McKinsey & Co.
  • Date: 20 March 2026
  • Announcement: McKinsey CEO confirms 25,000 AI agents operating alongside 40,000 human staff
  • What Changed: McKinsey's agent count grew from 3,000 to 25,000 in 18 months, now embedded in core client delivery workflows
  • Why It Matters: The world's largest management consultancy has operationalised AI agents at a scale that is reshaping its workforce model, hiring practices, and pricing approach
  • Who Should Care: Founders, CEOs, and operators evaluating how AI agents fit into their own service delivery and team structures

Key Facts

  • Company: McKinsey & Co.
  • Launch Date: Ongoing deployment, confirmed publicly in early 2026
  • What Changed: Agent workforce grew from 3,000 to 25,000 in 18 months; Lilli AI now part of graduate hiring process
  • Who It Affects: Professional services firms, knowledge-work businesses, and any operator evaluating AI agent deployment
  • Primary Source: McKinsey CEO Bob Sternfels, Harvard Business Review interview and Consumer Electronics Show remarks

What Happened

McKinsey & Co. CEO Bob Sternfels confirmed in early 2026 that the firm now operates approximately 25,000 AI agents working alongside its 40,000 human employees. The figure represents a more than eight-fold increase from 3,000 agents just 18 months prior. Sternfels has described the firm's total workforce as 65,000: "40,000 humans and 25,000 agents."

The agents are not simple chatbots. They are advanced systems capable of breaking down complex research problems, synthesising information across large document sets, producing structured analysis, and generating client-ready outputs. In practical terms, McKinsey's agents saved 1.5 million hours of search and synthesis work in a single year and generated 2.5 million charts in just six months.

Sternfels described McKinsey's approach as "25 squared": the firm has grown client-facing roles by roughly 25% while reducing non-client-facing roles by approximately the same proportion. Output from the non-client-facing side has still grown by 10%, reflecting the productivity gains from agent deployment.
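
Taken at face value, those two figures imply a sizeable per-person productivity gain on the non-client-facing side. The back-of-envelope sketch below makes the arithmetic explicit; it assumes the 25% reduction and the 10% output growth describe the same function, which the remarks suggest but do not state outright.

```python
# Back-of-envelope arithmetic on the "25 squared" figures above. Both ratios
# are illustrative readings of Sternfels' remarks, not McKinsey's own model.
headcount_ratio = 1 - 0.25   # non-client-facing headcount after the shift
output_ratio = 1 + 0.10      # output from that same function after the shift

productivity_per_head = output_ratio / headcount_ratio
print(f"Implied output per remaining head: {productivity_per_head:.2f}x "
      f"(~{(productivity_per_head - 1) * 100:.0f}% more per person)")
# Prints roughly 1.47x, i.e. about 47% more output per person.
```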

The firm has also introduced an AI collaboration test as a formal stage of its graduate recruitment process. Candidates are assessed on their ability to work with Lilli, McKinsey's internal AI tool, to solve applied business scenarios. The evaluation focuses on reasoning, judgement, and the quality of collaboration with the system, rather than technical AI knowledge.

McKinsey is simultaneously migrating its commercial model toward outcomes-based pricing, where fees are linked to measurable client impact rather than hours billed. Sternfels has indicated this shift is made possible, in part, by the productivity unlocked through AI agents.

Why It Matters

  • McKinsey's deployment demonstrates that agent-first operations are viable at enterprise scale, with documented productivity outcomes rather than projected estimates
  • The more than eight-fold growth in agents over 18 months sets a pace of adoption that other professional services and knowledge-work businesses will face competitive pressure to match
  • The restructuring of roles, where non-client-facing headcount shrinks while output grows, provides a concrete model for how agent deployment changes headcount planning
  • The introduction of an AI collaboration test in hiring signals that AI fluency is becoming a baseline professional expectation across knowledge-work disciplines
  • The shift toward outcomes-based pricing suggests that AI-enabled productivity is beginning to change the commercial logic of professional services, not just its internal operations
  • For operators running lean teams, McKinsey's documented gains (1.5 million hours saved in a single year) represent the type of leverage that determines whether a small firm can compete on equal terms with a larger one

The David and Goliath View

McKinsey's announcement is not primarily about technology. It is about a deliberate decision to treat AI agents as a workforce category, not a software feature. The firm did not pilot 25,000 agents through a series of cautious experiments. It scaled from 3,000 to 25,000 in 18 months because the outcomes justified continued deployment. That is the key data point: not the headline number, but the pace.

For operators running businesses of 10 to 200 people, the McKinsey story contains a more useful signal than most AI press releases. It shows what happens when a firm stops asking "how do we use AI" and starts asking "how do we design our operations assuming agents are part of the team." The work that was previously done by non-client-facing staff (research, synthesis, formatting, analysis) did not disappear. It was absorbed by agents, freeing human attention for higher-value work.

The practical implication is immediate. Operators should not wait for the right platform or the perfect use case. They should identify the category of work in their business that is high volume, well-defined, and currently handled by humans spending time they would rather redirect. That is where the first agent belongs. Build a baseline, measure the hours recovered, and scale from evidence.
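
One way to make that selection concrete is a simple volume-times-clarity ranking. The sketch below is illustrative only; the workflows, hours, and scoring weights are hypothetical assumptions, not McKinsey's method.

```python
# A minimal sketch for ranking where the first agent belongs. Assumes you can
# estimate weekly hours per workflow and rate how well-defined each one is.
candidates = [
    # (workflow, weekly hours spent, how well-defined it is on a 1-5 scale)
    ("research summaries", 12.0, 5),
    ("client onboarding emails", 4.0, 4),
    ("bespoke strategy work", 20.0, 1),
]

# Weight volume by how rule-governed the work is: high-volume, well-defined
# work ranks first; high-volume but judgement-heavy work drops down the list.
ranked = sorted(candidates, key=lambda c: c[1] * c[2], reverse=True)

for name, hours, clarity in ranked:
    print(f"{name}: {hours:.0f} h/week x clarity {clarity} = score {hours * clarity:.0f}")
```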

Where This Fits in the AI Stack

Employee Amplification Systems: McKinsey's agent workforce is the clearest real-world example of this concept in action. Agents handling research, synthesis, and document production free human staff to focus on client relationships, judgement calls, and strategic delivery. For operators, this is the direct template for how to structure an amplification system inside a lean team.

AI Growth Engine: The shift to outcomes-based pricing at McKinsey is enabled by the productivity gains from agents. For operators, AI-enabled capacity creates the same opportunity: to serve more clients, deliver faster, or price on impact rather than time.

Questions Operators Are Asking

Is McKinsey's agent model relevant to a business our size? Yes, and in some ways it is more accessible. McKinsey operates at a scale that required significant internal development. Operators running smaller businesses can access equivalent agent capabilities through existing platforms without building custom systems. The principles (identifying high-volume workflows, measuring outcomes, and scaling deliberately) apply at any size.

What types of tasks should we start with? Begin with work that is high volume, rule-governed, and currently consuming time that should go elsewhere. Research summaries, data collation, first-draft documents, and structured reporting are the most common starting points. McKinsey's agents began in search and synthesis, which generated the 1.5 million hours of savings cited by its CEO.

Should we be testing AI collaboration in our hiring process? It is worth considering. McKinsey's move reflects a broader shift in what competency looks like in knowledge-work roles. If your team uses AI tools as part of daily work, assessing how candidates engage with those tools during interviews gives you practical signal about their effectiveness in the role.

How do we measure whether agents are actually delivering value? Establish a baseline before deployment. Track the hours currently spent on the tasks you intend to automate or augment. After deployment, measure the same tasks. McKinsey's public figures (1.5 million hours, 2.5 million charts) are useful benchmarks for understanding what meaningful scale looks like, even if your initial targets are far more modest.
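
For operators who want a starting structure, the sketch below shows one minimal way to capture that before-and-after baseline. The task names, hours, and 46-week working year are all hypothetical assumptions, not figures from McKinsey.

```python
from dataclasses import dataclass

@dataclass
class TaskBaseline:
    """One category of work being handed to an agent (hypothetical record)."""
    name: str
    weekly_hours_before: float  # measured before agent deployment
    weekly_hours_after: float   # measured on the same tasks after deployment

    @property
    def weekly_hours_recovered(self) -> float:
        return self.weekly_hours_before - self.weekly_hours_after

    def annual_hours_recovered(self, working_weeks: int = 46) -> float:
        # 46 working weeks is an assumption; substitute your own calendar.
        return self.weekly_hours_recovered * working_weeks

# Illustrative numbers only.
tasks = [
    TaskBaseline("research summaries", weekly_hours_before=12.0, weekly_hours_after=3.0),
    TaskBaseline("first-draft reports", weekly_hours_before=8.0, weekly_hours_after=2.5),
]

for t in tasks:
    pct = 100 * t.weekly_hours_recovered / t.weekly_hours_before
    print(f"{t.name}: {t.weekly_hours_recovered:.1f} h/week recovered "
          f"({pct:.0f}% reduction, ~{t.annual_hours_recovered():.0f} h/year)")
```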

Does this change how we should think about headcount planning? It should inform it. If agents absorb a category of work currently handled by staff, the question becomes what that staff does instead, not whether you need fewer people. McKinsey's "25 squared" model grew client-facing roles while shrinking non-client-facing ones. For operators, the equivalent question is: what is the highest-value use of our team's time, and can agents handle everything else?

Citable Summary

What happened: McKinsey CEO Bob Sternfels confirmed the firm operates 25,000 AI agents alongside 40,000 human employees, up from 3,000 agents 18 months ago, with documented outcomes including 1.5 million hours saved and a new AI collaboration test in graduate hiring.

Why it matters: McKinsey's deployment represents the clearest documented example of agent-first workforce design at enterprise scale, with productivity gains, hiring shifts, and commercial model changes that set a visible benchmark for the broader professional services industry.

David and Goliath view: Operators should stop treating AI agents as a capability to explore and start treating them as a workforce category to plan around. Identify the highest-volume, well-defined work in your business, deploy agents there first, measure the hours recovered, and scale from evidence.

Offer relevance:

  • Employee Amplification Systems: agents absorbing research, synthesis, and structured output work free human teams to focus on relationships, judgement, and strategic delivery
  • AI Growth Engine: productivity gains from agents create the capacity to serve more clients, deliver faster, or shift toward outcomes-based commercial models

Why This Matters for Operators

  • Agent deployment is no longer an IT initiative. McKinsey frames its 25,000 agents as a workforce decision, not a software rollout. Operators should assign ownership of AI agent strategy to a business leader, not a technical one.

  • The productivity signals are real and measurable. McKinsey documented 1.5 million hours saved in a single year and 2.5 million charts generated in six months. Build your own baseline metrics now so you can demonstrate equivalent gains to your stakeholders.

  • Hiring criteria have already shifted. McKinsey now tests graduate candidates on their ability to collaborate with AI systems. Operators should consider how they assess AI fluency in their own hiring and performance reviews.

  • The move toward outcomes-based business models is AI-enabled. McKinsey is migrating from fee-for-service to impact-linked pricing. For operators, this means AI adoption is beginning to reshape pricing power and service delivery across professional industries.

Want to act on this?

Every briefing connects to systems we build. If this development is relevant to your business, let us show you what it looks like in practice.

Book a Strategy Call