Francesco Di Costanzo

The Agent Unlock: Why AI Needs Managers, Not Magicians

When a new graduate joins your team, you do not hand them a project, close your office door, and expect a perfect result by Friday. You brief them. You set milestones. You review early drafts. And yet with AI agents — tools now capable of persistent memory and independent action — most professionals do the opposite. They expect magic. They get disappointment. And they blame the technology.

The real breakthrough is not the model. It is the organisational maturity required to deploy it well.

The three tiers of human-AI interaction

At the first tier, AI is a search tool: one-shot queries, no memory, the human rebuilding context each time. At the second tier, it becomes a cowork tool — Claude Code, Cursor, Copilot — where the human remains present, instructing the AI to act on local files. Delegation surfaces, but the human must sit at the machine. The third tier is the agent unlock: a separate identity with persistent memory and independent channels, capable of working while you are away. This is where AI stops being a tool and becomes a direct report — someone you brief, trust, and verify.

The orchestration gap

Enterprise AI adoption follows a familiar pattern: companies rush to deploy agents expecting autonomous perfection, then hit what platforms like Coworker.ai and Google Agentspace call the "orchestration gap" — the space between a capable model and reliable business outcomes. McKinsey's State of AI report found that 65% of organisations now regularly use generative AI, yet fewer than 15% have scaled beyond pilots. The barrier is not model capability. It is organisational readiness.

The management failure most people miss

The failure is managerial, and the analogy is the new graduate. They arrive articulate and knowledgeable — much like a large language model. But they do not know your stakeholders, your unwritten rules, or the conversation you had about that project yesterday. As Andy Grove argued in High Output Management, a manager's output is the output of their team — and that output depends on how well tasks are delegated, not on how hard the manager works individually. An AI agent, like a new hire, arrives with high raw capability but low "task-relevant maturity" in your specific environment.

Ethan Mollick's research at Wharton confirms this. The strongest predictor of successful AI integration is not technical sophistication but "task decomposition" — the ability to break work into discrete, verifiable steps with clear checkpoints. The organisations winning with AI are the ones already good at briefing and feedback loops.

The dangerous illusion

The cognitive trap is subtle. Because the model sounds confident and broadly informed, users assume it possesses contextual maturity. It does not. When ambiguous delegation produces plausible-sounding but misaligned output, the user concludes "AI doesn't work here" rather than "I briefed this poorly." The technology gets blamed for a management failure.

What comes next

There is a counterargument: that future models will infer intent better and shrink the managerial burden. This may hold for search and cowork use cases. But the point of an agent is independent action across time and context — and even a very smart employee still needs to know what "good" looks like in your organisation.

The uncomfortable truth is that the people most enthusiastic about AI are often the worst at using it well, because their enthusiasm correlates with overestimating what unsupervised delegation can achieve. Meanwhile, the patient managers — the ones already skilled at breaking down tasks, setting checkpoints, and giving clear feedback — are the ones who will extract compounding returns from the agent unlock.

AI will not democratise management skill. It will widen the gap between those who have it and those who do not.


Sources

  1. https://coworker.ai

  2. https://cloud.google.com/blog/topics/generative-ai/google-agentspace-announcement

  3. https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai-in-2024

  4. https://www.microsoft.com/insidetrack/blog/how-our-employees-are-extending-enterprise-ai-with-custom-retrieval-agents/

  5. Grove, A.S. (1983) High Output Management. Random House.

  6. Mollick, E. (2024) Co-Intelligence: Living and Working with AI. Portfolio/Penguin.