5 Reasons Why Your Team is Afraid of AI
Continuing our 5 Reasons series, this month we talk about why your team is afraid of AI, or even fights it outright. You probably have a few fans of AI who can’t wait to try new tools and ideas; here we’ll be discussing the other camp: those who think AI is evil, dangerous, and in need of very tight guardrails. Here are five common pushbacks and what you can do to address them.
1. They think you will extract their knowledge and then fire them
With the steady stream of news about companies laying off large groups of workers, reportedly to replace them with AI (whether or not that’s actually the case), it’s normal to worry about survival. We saw the same concern with the arrival of what we now consider very basic self-service tools.
Be transparent: for most post-sales teams, AI will automate away many tasks, which means each team member will be able to serve more customers. Since post-sales teams are usually understaffed, that typically just means you will need to hire fewer people to handle growing volumes. But again, be candid about what you expect to happen.
Lesson: Reassure with facts
2. The current AI tools are dismal
AI-powered searches that return hallucinated results. AI summaries that leave out important details. AI-powered QA tools that focus on form over function. No wonder your team members hate AI: it’s actively hurting their performance. Listen to their concerns: while no AI tool can be 100% correct all the time, some are simply not worth the hassle. And consider letting team members choose which AI tools they use, as long as they maintain good productivity.
Lesson: Throw out tools that don’t deliver
3. They think AI will give wrong information, and then they’ll have to clean up
A special case of issue #2 above: customers may be exposed to outright falsehoods from failed searches, or a team member may pass along something incorrect and be blamed for it. AI is not infallible, and it relies on underlying information that is itself vulnerable to errors and gaps.
Subject externally facing tools to very thorough vetting. This will minimize errors and help convince your teams that the tools are safe. And have a plan to collect data on failures: it’s too easy for human beings to fixate on one bad anecdote and forget about the thousands of successful transactions.
Lesson: Test customer-facing tools very thoroughly
4. They don’t want to work only on tough cases or with tough customers
The promise of AI is often framed as removing all the tedium of the job, leaving only the most complex, demanding, emotionally challenging work. But what if those routine, mindless, unchallenging moments are seen as nice little breaks, or even pleasant ways to feel useful at little personal cost?
Take a sober look at what you are now asking of your team members, and compare it to the skills and preferences of the people currently in those roles. Is there a mismatch? You will need to retrain or redeploy. And, perhaps, redesign the way you handle high-complexity, high-demand, high-emotion tasks so they don’t stack up in unhealthy ways.
Lesson: Review the match between job requirements and team members
5. They have become blind to the automatable parts of their jobs
I hear this pushback mostly from old-timers, who have learned to navigate awkward tools, obscure information repositories, and internal human networks with ease. Indeed, very experienced team members may benefit less from (some) AI tools because they have mastered how to get their job done under the current setup; but that’s only because they’ve been at it forever!
Newcomers, on the other hand, welcome AI tools that simplify their work, shorten the onboarding period, and make it easier to figure out how to get help when needed. Focus on the newcomers: automation gains will eventually reach all team members.
Lesson: Zero in on tedious or hard-to-learn tasks
What kinds of pushback are you getting with AI? Please share them, along with what you are doing to overcome them.