Most leadership teams aren’t short on effort. They’re short on traction. I recently looked back at some of my previous predictions to see what landed and what shifted, and that review triggered further reflection.
Over the last two years, the most common executive pain I hear isn’t “we don’t have ideas.” It’s “we’re doing a lot — and still not moving fast enough.”
That’s the paradox of acceleration. When the environment speeds up, activity increases. But unless leaders change how the organization thinks and decides, momentum doesn’t follow.
The World Economic Forum’s Future of Jobs Report 2025 gives a sharp signal about why this is happening. By 2030, 86% of employers expect AI and information processing technologies to transform their business. And importantly, 60% expect broadening digital access to be transformative too. That pairing matters: AI isn’t only a “new capability.” It’s a new dependency. The winners won’t be those who simply adopt AI. They’ll be those who make it usable, safe, and repeatable — at scale.
That’s the leadership challenge for the year ahead: turn AI into an operating model, then increase decision velocity so value compounds.
AI leadership that compounds: from tools to an operating model
AI is tempting because it promises speed. But many organizations are discovering an uncomfortable truth: AI often increases speed in the wrong direction. Teams spin up pilots, experiment with vendors, generate prototypes — and months later, the business is still asking, “So what changed?”
This is where AI leadership becomes different from “innovation leadership.” AI leadership is about designing repeatability. It’s less about cleverness and more about making value show up consistently across the organization.
The WEF report notes that since the release of ChatGPT in November 2022, investment flows into AI have increased nearly eightfold. That isn’t just market hype — it’s competitive pressure. When investment rises that sharply, your competitors aren’t debating whether AI matters. They’re figuring out how to operationalize it before you do.
So the question isn’t “Should we use AI?” The question is:
Where will we use AI in ways that are measurable, trusted, and repeatable?
A simple operating-model approach has three executive moves.
Choose where humans stay fully human
AI works best when leaders define the “human-critical” work that must remain anchored in judgment and relationships: customer trust, negotiation, ethics, leadership communication, high-stakes decision-making, and accountability. When leaders don’t draw this line, AI adoption becomes accidental — automation creeps into places where it erodes trust or quality (especially where agentic AI is on the agenda).
WEF’s task outlook reinforces this: by 2030, tasks are expected to split roughly into thirds across human-only, technology-only, and human–technology collaboration. That’s a leadership design problem, not a software selection problem. The future of work is not “humans replaced.” It’s work reallocated — intentionally or by accident.
Leaders should make it intentional.
Set guardrails that create speed (not bureaucracy)
Guardrails sound boring until you notice what happens without them: endless debates, risk anxiety, inconsistent quality, and stalled adoption. In an AI-accelerated world, guardrails are what allow the organization to move with confidence.
The goal is not a heavy policy framework. The goal is clarity:
- What data can be used, and how?
- What customer/IP boundaries exist?
- What approvals are required for which use cases?
- What is the escalation path if output is wrong?
Strong guardrails don’t slow the business down. They remove uncertainty so teams stop improvising.
Measure outcomes, not pilots
This is the moment many AI programs fail: the organization celebrates activity. “We launched 12 pilots.” “We trained 1,000 employees.” Those are inputs. They don’t prove value.
Outcome metrics are what compound:
- Cycle time reduction
- Error rates and rework
- Customer response time and resolution quality
- Employee workload reduction (not just “productivity”)
When AI is treated as an operating model, measurement becomes cultural: teams learn what “good” looks like, and they repeat it.
And here’s the pivot: once AI is operationalized — once the guardrails and metrics exist — leaders can make the second compounding move.
Decision velocity: the advantage that turns AI into momentum
AI expands possibility. Possibility expands choices. And choices can paralyze.
In the year ahead, organizations won’t lose because they lacked ideas. They’ll lose because they couldn’t decide — what to prioritize, what to stop, what to scale, and what to ignore. That’s why decision velocity is the hidden advantage.
WEF’s labor-market projection makes the stakes plain: by 2030, the report estimates 22% job churn, with 170 million jobs created and 92 million displaced (net +78 million). That level of churn is not a one-off transformation. It’s a continuous reallocation of work. Leaders who can’t decide cleanly will drag the organization through churn without ever capturing the upside.
Decision velocity isn’t about rushing. It’s about removing friction — structurally.
The connected truth: AI raises the cost of slow decisions
In a slower world, slow decisions are annoying. In an AI-accelerated world, slow decisions are expensive because the environment changes while you’re still debating yesterday’s choices. Worse, slow decisions push teams into “shadow adoption” — people use AI tools informally without guardrails because leadership hasn’t created the official path. (Remember, we saw this pattern with “shadow IT” through the SaaS/cloud adoption era.)
So the operating model and decision velocity are not separate topics. They’re a loop:
- Guardrails reduce risk uncertainty
- Reduced uncertainty increases decision speed
- Faster decisions increase learning
- Learning strengthens the operating model
- The operating model makes scaling possible
That’s compounding.
Three practices that increase decision velocity without sacrificing quality
Fewer bets, clearer focus. Decision velocity begins with prioritization. The fastest organizations aren’t the ones with the most initiatives. They’re the ones with the fewest — and the best sequencing. A CEO-level discipline for the year ahead:
- Pick 2–3 AI use cases tied directly to business outcomes.
- Explicitly name what you will not do this quarter.
- Put the rest in a backlog — visible, but not active.
This reduces noise. Noise is the enemy of speed.
Separate reversible from irreversible decisions. Most decisions are reversible: vendors can be engaged or replaced, processes can be changed and adapted, early product features can be unwound or accelerated, and internal enablement tools can be deployed or withdrawn. Reversible decisions should be time-boxed and pushed closer to the work, which delivers speed to value.
Irreversible decisions, however, are a different order of magnitude. Areas that carry customer trust implications, regulated data exposure, and core platform commitments all deserve higher-level scrutiny. This is how leaders prevent two extremes:
- Treating every choice like a board decision (slow)
- Treating high-risk decisions like experiments (dangerous)
Use “kill criteria” to escape pilot purgatory. The most common leadership failure in technology adoption is not choosing the wrong tool. It’s failing to stop. And AI is no exception.
When planning your IT agenda, set kill criteria before the pilot starts:
- Performance thresholds
- Quality benchmarks
- Adoption requirements
- Compliance conditions
- A date when a decision must be made: scale, redesign, or stop
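To make the logic concrete, the checklist above can be sketched as a small decision rule. This is a hypothetical illustration, not a prescribed tool: the thresholds, metric names, and the "continue / scale / redesign / stop" outcomes are assumptions standing in for whatever criteria a leadership team agrees before the pilot starts.

```python
from datetime import date

def pilot_decision(metrics: dict, criteria: dict, today: date) -> str:
    """Apply pre-agreed kill criteria to a pilot.

    Returns one of: 'continue', 'scale', 'redesign', 'stop'.
    All names and thresholds here are illustrative assumptions.
    """
    # No decision is due before the agreed decision date.
    if today < criteria["decision_date"]:
        return "continue"
    # Compliance conditions are non-negotiable.
    if not metrics["compliant"]:
        return "stop"
    meets_quality = metrics["accuracy"] >= criteria["min_accuracy"]
    meets_adoption = metrics["adoption_rate"] >= criteria["min_adoption"]
    if meets_quality and meets_adoption:
        return "scale"
    if meets_quality or meets_adoption:
        return "redesign"  # promising on one axis; rework the other
    return "stop"
```

The point of writing it down this plainly is that the rule is fixed before results arrive, so stopping is a pre-agreed outcome rather than a political argument.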
Kill criteria create psychological safety because they make stopping a pilot normal — not political. And they protect focus, which is what allows the organization to get better at what it chooses.
Nina Nets It Out
If you want the year ahead to feel calmer and faster at the same time, keep it simple: operationalize AI, then accelerate decisions.
Use the WEF signals as your prompt. With 86% of employers expecting AI to transform business, the differentiator won’t be adoption — it will be execution. Choose a small number of use cases, put crisp guardrails around data and risk, and measure outcomes that matter. Then build decision velocity with clear owners, time-boxed choices, and kill criteria that protect focus.
AI will increase the pace either way. Your job is to ensure it increases momentum.