Call Center Execs Are Asking the Wrong Questions About AI


There’s a question that comes up in almost every AI conversation happening in contact centers right now: Can we automate this?

It’s the wrong question. And the gap between what’s possible and what’s prudent is exactly where most AI initiatives break down.

Technology has outpaced the thinking that’s supposed to guide it. AI can now handle complex conversations, take action in backend systems, write case notes, route contacts, monitor quality, and do it all at scale. The capability ceiling has essentially disappeared.

That’s precisely the problem.

Because when anything is technically achievable, the hard work shifts from engineering to judgment.

The AI Capability Trap

Most executives approach AI by asking what the technology can do: Can we automate this interaction type? Can we deflect more volume? Can we cut headcount? These questions aren’t wrong, but they lead organizations to build whatever is buildable, when what they should first be asking is: Should we?

The contact center is not a laboratory. It’s where customers land when something has gone wrong, when they need help, or when trust is on the line. Deploying an AI agent that’s technically capable doesn’t mean customers will accept it, or that the business outcome will justify it.

Consider voice AI. An organization can stand up an agentic voice experience that handles thousands of calls autonomously. But if customers call expecting a human and immediately encounter a bot—with no warning, no context, no choice—a meaningful portion will simply hang up. The automation rate looks fine in a pilot. At scale, it becomes a customer experience problem and a cost problem simultaneously.

The better question isn’t can we deploy this? It’s should we, and under what conditions?

When the Math Betrays the Strategy

The capability trap compounds when organizations adopt deflection-first strategies without pressure-testing the economics.

A 30-35% automation rate sounds like a clear win. But here’s what that number often obscures: every interaction still passes through the automation layer first. The 65% that requires human handling now carries extra latency and processing cost, without gaining anything from it. You’ve added friction for the majority to automate the minority. In many cases, the financial impact washes out.
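To see why, it helps to actually run the numbers. Here’s a minimal back-of-envelope sketch; every volume, rate, and per-contact cost below is an illustrative assumption, not a benchmark from any real deployment:

```python
# Back-of-envelope economics of a deflection-first rollout.
# All figures are illustrative assumptions, not benchmarks.

monthly_contacts = 100_000
automation_rate = 0.33       # share of contacts fully resolved by the AI layer
human_cost = 6.00            # fully loaded cost per human-handled contact ($)
ai_cost = 1.50               # platform cost per contact touching the AI layer ($)
friction_cost = 0.75         # extra handle-time cost when a contact passes
                             # through automation before reaching a human ($)

automated = monthly_contacts * automation_rate
escalated = monthly_contacts - automated

baseline = monthly_contacts * human_cost
with_ai = (monthly_contacts * ai_cost        # every contact hits the AI layer first
           + escalated * (human_cost + friction_cost))

print(f"Baseline cost: ${baseline:,.0f}")
print(f"With AI layer: ${with_ai:,.0f}")
print(f"Net saving:    ${baseline - with_ai:,.0f} "
      f"({(baseline - with_ai) / baseline:+.1%})")
```

With these deliberately unflattering but not implausible assumptions, a 33% headline automation rate produces a small net loss: the AI layer’s per-contact cost applies to every interaction, while the human cost disappears for only a third of them. Change the assumptions and the picture changes, which is precisely why the economics need pressure-testing before the deflection target is set.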

“Deflection” is also the wrong frame entirely. Marketing teams spend significant money driving customers to engage with a brand. Designing a system whose internal goal is to push those customers away is a contradiction that tends to surface in CSAT scores and churn data before it surfaces in boardroom conversations.

A better frame is containment: resolve the customer’s issue in the most effective way available. Sometimes that’s automation. Sometimes it’s human. The strategy should serve the outcome, not the automation rate.

What the Right Questions Look Like

Getting AI right in the contact center starts by defining, in writing, what an AI agent is allowed to do and what it should never do. Not based purely on what’s technically possible, but on what’s appropriate for the brand, acceptable to customers, and aligned with how the business actually operates. 
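What “in writing” can mean in practice is a policy explicit enough to be enforced in code, not just a slide. A hypothetical sketch, with invented action names and triggers:

```python
# Hypothetical policy sketch: what the AI agent may do, must never do,
# and when it must hand off. Action names and triggers are illustrative.

AGENT_POLICY = {
    "allowed_actions": [
        "answer_billing_questions",
        "check_order_status",
        "draft_case_notes",           # a human reviews before filing
    ],
    "forbidden_actions": [
        "issue_refund_over_limit",    # anything above the approval threshold
        "give_medical_or_legal_advice",
        "close_account",
    ],
    "escalate_to_human_when": [
        "customer_asks_for_a_person",
        "sentiment_drops_below_threshold",
        "topic_is_regulated",         # e.g. complaints covered by compliance rules
    ],
}

def is_permitted(action: str) -> bool:
    """A deployment would run this check before the agent takes any action."""
    return (action in AGENT_POLICY["allowed_actions"]
            and action not in AGENT_POLICY["forbidden_actions"])
```

The specifics will differ by business; the point is that the boundaries are explicit, reviewable, and enforceable rather than living in someone’s head.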

That clarity requires asking harder questions up front:

Does this align with your brand culture? A healthcare company and a gaming platform serve customers with fundamentally different expectations. In healthcare, AI may be most valuable working behind the scenes, assisting agents, surfacing information, and reducing handle time, rather than replacing the human interaction entirely. In gaming, younger, digitally native customers may actively prefer self-service. Neither is right by default. Both require a deliberate choice.

What does failure look like at scale? Pilots contain edge cases. Production surfaces them. A single unusual interaction that reaches the wrong place, like an escalation that goes to the CEO or a compliance failure in a recorded call, can undo months of deployment work. Edge cases will occur, so the question becomes whether you’ve mapped them carefully enough to build appropriate guardrails before they happen.

Are you optimizing or transforming? There’s a meaningful difference between using AI to do the same things faster and using AI to do things that weren’t previously possible. Automated case notes are a useful efficiency gain. But the real value isn’t speed. It’s that when documentation costs almost nothing to produce, case notes can become comprehensive, structured, searchable intelligence instead of brief notes written under time pressure. The organizations that capture that value are the ones that asked what case notes should be, not just how to write them faster.
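Concretely, “structured, searchable intelligence” means treating a case note as a record with a schema rather than a free-text blob. A minimal sketch, with hypothetical field names:

```python
# Hypothetical schema for an AI-generated case note. Field names are
# illustrative; the point is structure you can query, not prose you can't.
from dataclasses import dataclass, field

@dataclass
class CaseNote:
    contact_id: str
    channel: str                 # "voice", "chat", "email"
    issue_category: str          # from a controlled taxonomy, not free text
    root_cause: str
    resolution: str
    escalated: bool
    follow_up_needed: bool
    tags: list[str] = field(default_factory=list)
    summary: str = ""            # the narrative, last and least important

note = CaseNote(
    contact_id="C-10482",
    channel="voice",
    issue_category="billing/duplicate_charge",
    root_cause="payment retry double-posted the charge",
    resolution="refund issued, retry disabled on account",
    escalated=False,
    follow_up_needed=True,
    tags=["refund", "payments", "known_issue"],
    summary="Customer charged twice after app timeout; refunded.",
)
```

Once every contact produces a record like this, a question like “how many duplicate-charge calls did we take this month, and what share escalated?” becomes a query instead of archaeology.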

AI Is an Operation, Not a Task

One more question too few executives ask: How will we manage this after it’s deployed?

Traditional software projects have a beginning, a middle, and a handoff. AI doesn’t work that way. Products change. Customer behavior shifts. New scenarios emerge that the original deployment never anticipated. An AI system that isn’t continuously monitored, evaluated, and refined will drift. And in a customer-facing context, that drift becomes visible quickly.
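A minimal sketch of what “continuously monitored” can mean in practice: compare a rolling window of a key metric against the baseline established at deployment, and alert when it drifts past a threshold. The metric, baseline, and threshold below are assumptions:

```python
# Hypothetical drift check: flag when the weekly containment rate falls
# meaningfully below the baseline measured at deployment. Numbers are
# illustrative assumptions.

BASELINE_CONTAINMENT = 0.33   # measured during the validated pilot
DRIFT_THRESHOLD = 0.05        # alert if we drop 5 points below baseline

def check_drift(resolved_by_ai: int, total_contacts: int) -> None:
    rate = resolved_by_ai / total_contacts
    if rate < BASELINE_CONTAINMENT - DRIFT_THRESHOLD:
        # In production this would page the owning team, not print.
        print(f"DRIFT ALERT: containment {rate:.1%} vs baseline "
              f"{BASELINE_CONTAINMENT:.1%}")
    else:
        print(f"OK: containment {rate:.1%}")

check_drift(resolved_by_ai=2_540, total_contacts=9_800)   # ~25.9% -> alert
```

The same pattern applies to CSAT, escalation rates, and hang-ups; what matters is that someone owns the baseline and the alert fires before customers notice.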

Human agents go through ongoing quality assurance, calibration, and coaching. AI agents require the same ongoing attention. Organizations that build the operational infrastructure to support that—dedicated ownership, regular performance review, clear governance—are the ones that turn pilots into durable infrastructure.

The ones that don’t are still running pilots two years later, wondering why the results never scaled.

The competitive advantage in CX AI won’t come from access to the best models. It will come from the clarity to know what those models should and shouldn’t be doing, and the operational discipline to hold that line at scale.

The technology is ready. The question is whether the thinking is.