
Why your customers are conflicted about AI (and what to do about it)

Last updated 08 January 2026
Insights

I took a ride-share this morning, and it did that thing that always happens. The app said the driver was three minutes away. Then four. Then five. Finally, it took six minutes to arrive. And I was fuming. Sound familiar?

But then I thought: Why am I annoyed? I tapped a screen, and within seconds, satellites pinpointed my location and a dispatch system coordinated a driver to arrive at my exact spot, in a nice car, ready to drive me exactly where I needed to go, without me even needing to hand them any cash.

And yet… six minutes, and I was annoyed.

This happens because the technology has reprogrammed our expectations. We don’t judge experiences by the effort behind them anymore; we judge them by immediacy.

But speed is only half the battle. We are also living through what I call the “ChatGPT Effect.” Generative AI has permanently raised our baseline for digital interaction. Consumer tolerance for robotic scripts is dead; we now expect natural, human-like conversations that understand us instantly.

For enterprises, this “Expectation Revolution” has created a dangerous trap. There is a massive disconnect between what customers value today and what they expect tomorrow.

New research from SINTEF, Telenor and boost.ai sheds light on this contradiction. In a survey, when asked to define excellent service right now, customers overwhelmingly pointed to more human experiences, like warmth, politeness and a genuine desire to help. But when asked what they expect in the near future, the priority flips entirely to values like efficiency, availability and digitalization.

The uncomfortable truth is that customers are conflicted. They are demanding a digital future, yet they remain deeply ambivalent about the automation required to get there.

The empathy gap

The study puts a number on this feeling. Today, 66% of respondents point to the sense that a company “wants the best” for them as the primary driver of excellence. However, as early as 2028, customers anticipate a shift toward pragmatic efficiency.

This puts brands in a bind. Customers fear that as efficiency goes up, the human touch will vanish. They worry that relying on technology will strip away the quality of care, especially when their claim is denied or their card is blocked.

If your automation strategy is just deflecting calls, you are validating that fear. You are delivering efficiency at the cost of empathy.

Moving from Customer Experience to Conversational Experience

The old standard of long queues, robotic FAQs and “good enough” service is now brand-damaging. As we noted, the “ChatGPT Effect” means consumers now have hyper-intelligent tools in their personal lives, so their tolerance for dumb corporate bots has evaporated.

To bridge the gap between the empathy customers want and the efficiency they expect, we have to stop thinking in terms of call containment and start thinking in terms of Conversational Experience.

This is where the distinction between “chatting” and “doing” becomes critical. The research shows that future expectations are driven by pragmatic value, meaning customers don't just want advice; they want results. A legacy bot can only explain how to file a claim. An AI Agent actually files it for you. In a digital world, that’s the modern equivalent of wanting the best for the customer.
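To make the “chatting” versus “doing” distinction concrete, here is a minimal sketch. Every name in it (LegacyBot, AIAgent, ClaimsAPI) is a hypothetical placeholder, not a real vendor SDK:

```python
# Hypothetical sketch: "chatting" vs. "doing".
# LegacyBot, AIAgent and ClaimsAPI are illustrative placeholders.

class LegacyBot:
    def handle(self, user_message: str) -> str:
        # A legacy bot can only describe the process.
        return ("To file a claim, log in to the portal, open 'Claims', "
                "fill in the form and attach your receipts.")

class ClaimsAPI:
    def file_claim(self, customer_id: str, details: dict) -> str:
        # Stand-in for a real back-end call; returns a claim reference.
        return "CLM-0001"

class AIAgent:
    def __init__(self, claims_api: ClaimsAPI):
        self.claims_api = claims_api

    def handle(self, customer_id: str, details: dict) -> str:
        # An agent performs the task and reports the outcome.
        ref = self.claims_api.file_claim(customer_id, details)
        return f"Done! Your claim is filed. Your reference is {ref}."

print(LegacyBot().handle("How do I file a claim?"))   # explains
print(AIAgent(ClaimsAPI()).handle("cust-42", {"type": "travel"}))  # acts
```

The bot answers the question; the agent completes the task. That one-line difference is the gap between deflection and resolution.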

The infrastructure of trust

This is where the technology choice stops being about specs and starts being about philosophy. To deliver those outcomes, you need agentic AI capable of reasoning and planning. But you can't simply unleash a raw generative model on your customer base and hope for the best.

Consumer sentiment highlights that customers are ambivalent toward automation, specifically because they lack trust. The antidote to this anxiety isn’t just better AI; it’s a hybrid approach.

In fact, we need to stop defining “Hybrid” as just a tech stack. Instead, think of it as the mechanism that makes automation safe: it combines the creative reasoning of generative AI with the strict, deterministic guardrails of natural language understanding (NLU).

This structure allows an AI Agent to plan and execute complex goals, without the risk of hallucinating a policy that doesn't exist. It satisfies the customer’s need for speed without validating their fear of a broken, uncaring machine.
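As a rough illustration of how deterministic guardrails can constrain a generative model, consider the sketch below. The intent labels, approved answers and functions are all hypothetical, not boost.ai’s actual pipeline:

```python
# Hypothetical sketch of a hybrid pipeline: a deterministic NLU layer
# gates a generative model so it can only answer from vetted content.
# classify_intent, APPROVED_ANSWERS and generate_reply are placeholders.

APPROVED_ANSWERS = {
    "card_blocked": "Your card can be unblocked in the app under Security.",
    "opening_hours": "Our branches are open 09:00-16:00 on weekdays.",
}

def classify_intent(message: str) -> str:
    # Deterministic NLU: a classifier with a fixed label set, not a
    # free-form generative guess. Stubbed here for illustration.
    return "card_blocked" if "card" in message.lower() else "unknown"

def generate_reply(message: str, grounding: str) -> str:
    # Generative step, constrained to rephrase approved content only.
    return f"{grounding} Is there anything else I can help with?"

def respond(message: str) -> str:
    intent = classify_intent(message)
    if intent in APPROVED_ANSWERS:
        # The model may rephrase, but the facts come from vetted text,
        # so it cannot hallucinate a policy that does not exist.
        return generate_reply(message, APPROVED_ANSWERS[intent])
    # Out-of-scope requests fall through to escalation (next section).
    return "Let me connect you with a colleague who can help."

print(respond("My card was blocked this morning"))
```

The design choice is the point: the generative model shapes the conversation, but the deterministic layer decides what it is allowed to say.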

Intelligent escalation

Of course, even the best agentic AI has limits. Respondents were explicitly skeptical of AI handling complex or sensitive issues. This is where the strategy needs to pivot back to humans.

The goal of automation shouldn’t be to hide your human agents. It should be to elevate them. A true Conversational Experience is intelligent in its use of escalation because the AI Agent understands the context and the goal. It can recognize when a conversation requires a human touch, like in the case of a mortgage negotiation or a sensitive insurance denial.

Crucially, the AI should hand off the conversation with full context. Nothing kills goodwill faster than a customer having to repeat their policy number to a second person.
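Here is a sketch of what a context-preserving handoff might carry. The dataclass, the sensitive-intent list and the confidence threshold are all hypothetical illustrations:

```python
# Hypothetical sketch of context-preserving escalation.
from dataclasses import dataclass, field

SENSITIVE_INTENTS = {"mortgage_negotiation", "claim_denied"}

@dataclass
class Conversation:
    customer_id: str
    intent: str
    confidence: float
    slots: dict = field(default_factory=dict)    # e.g. policy number
    messages: list = field(default_factory=list)  # full transcript

def should_escalate(conv: Conversation) -> bool:
    # Hand off on sensitive topics, or whenever the AI is unsure.
    return conv.intent in SENSITIVE_INTENTS or conv.confidence < 0.6

def build_handoff(conv: Conversation) -> dict:
    # Everything the human agent needs, so the customer never has
    # to repeat their policy number to a second person.
    return {
        "customer_id": conv.customer_id,
        "intent": conv.intent,
        "slots": conv.slots,
        "transcript": conv.messages,
    }

conv = Conversation("cust-42", "claim_denied", 0.9,
                    slots={"policy_number": "P-123"})
if should_escalate(conv):
    print(build_handoff(conv))
```

The escalation itself is cheap; what makes it intelligent is that the human picks up exactly where the AI left off.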

The future is conversational

The SINTEF research proves that the future isn’t AI vs. Humans. It is AI for the Human.

Customers want the instant resolution of a machine and the genuine care of a person. But brands that try to solve this with legacy chatbots will fail. And those that try by throwing more humans at the problem will go broke.

The only way forward is a balanced approach. We need AI that is smart enough to act, controlled enough to trust, and aware enough to know when to step aside for a human.

The future is conversational. The question is, are you building a system that just talks, or one that actually listens?