AI Is a Force Multiplier, If You Know Where to Apply It

I don’t think AI is controversial because it’s dangerous; I think it’s controversial because it makes leverage uneven, and that makes people fearful.

Whenever a new tool or technology dramatically increases speed, output, or reach, it tends not to lift all organizations equally: it rewards those that understand how to apply it, while those that don’t are left behind. We’ve seen this pattern over the last few decades with spreadsheets, databases, email, and cloud infrastructure. AI is going to follow the same trajectory, but at a much larger scale.

Organizations that are deploying AI deliberately are already moving faster and sharpening their business models and economics. Teams that adopt it without structure are experiencing confusion, mistrust, or stalled results. I read one statistic that a third of employees admitted to actively sabotaging their employers’ AI deployment efforts (even more astonishing, that figure rose to over 40% for millennials). The difference isn’t belief in AI’s potential; it’s clarity about where the leverage actually exists.

AI as Historical Leverage, Not Novelty

It’s easy to treat AI as something entirely new, but it’s just the latest in a long line of tools that have reduced human effort and expanded our capability. Spreadsheets made financial modeling cheap. Databases made storing and retrieving information cheap. The web made information widely accessible. Each of these altered what organizations could reasonably expect from small teams.

AI is making certain kinds of reasoning work cheaper: summarizing, comparing alternatives, drafting content, synthesizing inputs, and, when used well, exploring scenarios. It doesn’t replace expertise or judgment, though; it amplifies them even as it abstracts them away. The organizations that see the greatest gains won’t be the ones chasing novelty; they will recognize a familiar dynamic and act on it intentionally.

Why Some Teams See Explosive Gains and Others See Chaos

The gap between successful and unsuccessful AI adoption is unlikely to come down to model quality; I believe it’s going to come down to structure. High-performing teams often have clear goals, defined decision ownership, and workflows that make judgment visible. In those environments, AI can be inserted into existing processes almost seamlessly, accelerating work that already has clarity and direction.

In less structured environments, though, AI is likely to surface ambiguity rather than value. Questions about ownership, escalation, and evaluation become unavoidable. Who is responsible for the outcome? How are conflicts resolved? What happens when outputs are wrong? Without answers to these questions, AI feels unpredictable, not because the technology is erratic, but because the structure around it is.

Capability, Deployment, and Value Are Not the Same Thing

One common source of disappointment is the assumption that capability automatically translates into value. In reality, there are three distinct stages at play.

  1. Capability reflects what a model can do on its own, in isolation.

  2. Deployment reflects whether the model has been integrated into tools and workflows.

  3. Value reflects when outcomes improve in ways people trust and understand.

Most organizational effort goes into the first two, but most frustration (especially among executives) comes from the gap between the second and the third.

Value depends on applying AI to the right problems, at the right time, with clear expectations and accountability. Without that alignment, even the most powerful and capable models underdeliver, not because they’re ineffective, but because they’re being misapplied.

Structuring AI to Accelerate Good Judgment

The highest returns are likely to come when AI is used to prepare decisions rather than replace them.

Used well, AI can generate options, highlight tradeoffs, act as a sounding board for stress-testing assumptions, and reduce cognitive load before a decision is made. Positioned this way, it functions as an analyst or collaborator rather than an authority. Judgment remains with people, but it’s informed more quickly and more thoroughly through partnership with AI.

This distinction matters. When AI is framed as replacing judgment, it creates resistance and risk. When it’s framed as strengthening judgment, it enables faster and more confident action.

An Opportunity, Not a Warning

It’s common to frame AI discussions in cautionary terms, but I believe that often misses the larger strategic opportunity: AI is best understood as a force multiplier. It can amplify clarity, discipline, and intent, but it can also expose their absence just as quickly.

Organizations that recognize this early aren’t simply chasing efficiency gains. They’re building durable advantages by redesigning how decisions are prepared, evaluated, and executed. AI belongs in almost every organization if its value can be clearly understood and articulated.

The real question is whether it’s being applied in a way that compounds value over time. For teams that get this right, AI isn’t a risk to manage; it’s a step-change opportunity to lead.
