For AI Advocates: A Guide to Co-Creating with People Who Fear AI

2025/12/20
Yoshika Ichihara

Introduction

Hello, I’m Masamitsu Nishihara.

A lot of people feel AI is “untrustworthy,” “dangerous,” or “just scary for some reason.” If you are in a position to promote AI adoption, a very practical challenge is how to work with colleagues, team members, or managers who carry that anxiety or resistance.

In this post, I will organize the main reasons people “fear” AI, and also look at fear itself, an emotion humans naturally have. Then I will summarize practical hints for AI advocates on how to face those concerns and expand usage step by step in a way that fits real operations.

The Main Reasons People Find AI Scary

When I asked ChatGPT why many people fear AI, the answers could be grouped into three main reasons:

1) Fear of job loss and widening inequality

If AI can work faster, cheaper, and with fewer mistakes than humans, people naturally worry: “Will my job disappear?”
Beyond job losses, many also fear that only a small group of people who can use AI well will be rewarded, widening gaps inside organizations.

2) The image of AI becoming uncontrollable or going rogue

It’s the classic “AI goes rogue” image: the fear that an AI smarter than humans might ignore human intent and start pursuing its own goals. Even though today’s AI is not a general superintelligence, it does have black-box characteristics, meaning it can be hard to explain why it produced a certain output. That uncertainty strengthens the fear: “Can we stop something we don’t fully understand?”

3) Concerns about surveillance and privacy violations

As face recognition, behavior logs, location data, and speech analysis become more advanced, the idea of a surveillance society feels increasingly real: “Will everything I do or think be tracked and scored?”
Many people also fear loss of privacy and human dignity.

In the following sections, we’ll look at these three reasons in more concrete terms.

Anxiety About Jobs Being Taken

The fear that “AI will take my job” is not just paranoia.

In reality, automation and efficiency improvements are progressing first in areas such as repetitive tasks and rule-based work: certain manufacturing steps, data entry, report checks, and simple inquiry handling are common examples.

At the same time, these jobs do not become zero overnight. Even in highly automated environments, work that requires “judgment” and “design”—such as exception handling, workflow improvement, and new process design—will always remain. In fact, the value of people who can design processes assuming AI exists will likely continue to rise.

The key is not “running away from work that can be replaced,” but moving to the side that rebuilds work with AI as a given.
For example, if you shift into a role where AI handles routine tasks while you validate the results, find improvement points, and provide the next instructions, your risk of being replaced drops significantly.

If you can change your mindset from AI as a “competitor” to AI as a “strong subordinate,” you can build a better mid-to-long-term career.

Fear of Losing Control and Going Rogue

Stories of uncontrollable AI have been repeated in sci-fi like Terminator. On the other hand, Doraemon and Astro Boy also portray futures where AI supports humans and coexists with them. Many of us carry both images at the same time.

Today’s AI is not an autonomous superintelligence like those in sci-fi. It essentially responds by finding patterns in the data and instructions it is given.
However, it is true that the rationale behind its outputs can be opaque to humans, like a black box, and in many cases it is difficult to explain why it judged the way it did.

What matters here is not immediately treating something unclear as an uncontrollable threat. In real business usage, users define purpose and constraints, such as the use case, what data is allowed as input, and how outputs will be used, and then operate within that scope.

Just as with managing human team members, you can control risk by designing the scope of delegation, accountability, and validation mechanisms.

If you treat AI as something to be managed—consciously designing what to delegate and where humans must decide—you can use it without being swallowed by excessive fear.
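
To make the “scope of delegation plus validation” idea concrete, here is a minimal Python sketch. It is only an illustration under assumptions: request_ai_draft, generate_text, and DELEGATED_TASKS are hypothetical names, and generate_text is a stand-in for whatever model or API your team actually uses. AI drafts are produced only for tasks inside the agreed scope, and nothing is used until a human reviews it.

```python
# A minimal sketch (illustration only) of "scope of delegation + human validation":
# AI drafts are produced only for agreed tasks and must pass a human check before use.
from dataclasses import dataclass

# Tasks we have decided to delegate to AI; everything else stays with humans.
DELEGATED_TASKS = {"meeting_summary", "draft_reply"}

@dataclass
class Draft:
    task: str
    text: str
    approved: bool = False

def generate_text(prompt: str) -> str:
    # Stand-in so the sketch runs without any external service.
    return f"(AI draft for: {prompt})"

def request_ai_draft(task: str, prompt: str) -> Draft:
    """Return an AI-generated draft only for tasks inside the agreed scope."""
    if task not in DELEGATED_TASKS:
        raise ValueError(f"Task '{task}' is outside the delegated scope")
    return Draft(task=task, text=generate_text(prompt))

def human_review(draft: Draft, reviewer: str) -> Draft:
    """A human validates the draft before it can be used anywhere."""
    print(f"[review] {reviewer} checking {draft.task}: {draft.text}")
    draft.approved = True  # in practice, the reviewer edits or rejects here
    return draft

if __name__ == "__main__":
    d = request_ai_draft("meeting_summary", "Summarize today's sync meeting")
    d = human_review(d, reviewer="team lead")
    print("Usable:", d.approved)
```

The point is not the code itself but that the delegated scope and the validation step are explicit and visible, just as they would be for a human team member.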

Concerns About Surveillance and Privacy Violations

Anxiety around surveillance and privacy is one of the issues that has intensified most rapidly in the AI era.

With technologies like facial recognition cameras, behavior tracking, and analysis of purchase history and location data, massive amounts of personal data can now be collected and processed. Every time you see news about data leaks or misuse, the worry “Is my data safe?” becomes stronger.

At the same time, AI is also used defensively: detecting cyberattacks, predicting suspicious access, identifying abnormal login patterns, and monitoring access to confidential data. These capabilities can help address risks that humans alone cannot fully handle.

The key point is this: the issue is not simply that “AI that collects data without permission is scary.” The real question is what rules and governance define how AI is used.
If you pair AI usage with human-side controls—internal policies, consent processes, log handling, and access design—then it can be reframed from “a surveillance technology” to “a technology that balances safety and convenience.”
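
As one way to picture such human-side controls, here is a small hypothetical Python sketch: an allow-list of data categories, crude masking of e-mail addresses, and an audit log. All the names (check_input, ALLOWED_CATEGORIES) and rules are invented for illustration; a real internal policy would be far more detailed.

```python
# A hypothetical sketch of pairing AI usage with human-side controls:
# a category allow-list, simple PII masking, and an audit trail.
import logging
import re

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai_usage_audit")

# Internal policy expressed as data: which categories may be sent to the AI tool.
ALLOWED_CATEGORIES = {"public", "internal"}  # "confidential" is deliberately excluded
EMAIL_PATTERN = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def check_input(text: str, category: str, user: str) -> str:
    """Apply the policy before any text is sent to an external AI service."""
    if category not in ALLOWED_CATEGORIES:
        audit_log.warning("Blocked %s input from %s", category, user)
        raise PermissionError(f"Category '{category}' may not be sent to AI tools")
    redacted = EMAIL_PATTERN.sub("[REDACTED]", text)  # crude masking example
    audit_log.info("Allowed %s input from %s (%d chars)", category, user, len(redacted))
    return redacted

if __name__ == "__main__":
    safe = check_input("Contact alice@example.com about the draft.", "internal", "a_user")
    print(safe)
```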

The Fear That Still Remains

As we’ve seen, fear of AI often comes from concrete factors like jobs, control, and privacy. Yet even when people understand these points to some degree, many still feel lingering discomfort or a vague sense that AI is somehow scary.

So, let’s step away from AI and look at the emotion of “fear” itself.

~What Is Fear?~

Humans fear many things: ghosts, heights, confined spaces, snakes and spiders, darkness, war, disasters, illness, death; the list goes on. People vary, but the common mechanism is that the brain judges something as “possibly dangerous to me.”

Fear is considered a defense reaction involving the amygdala in the brain. When excessive, it can disrupt daily life in the form of anxiety disorders or phobias. But originally, it is a system for detecting danger early and protecting yourself. An elevated heart rate near a cliff or heightened alertness in an unfamiliar place are examples of this system working normally.

In other words, fear is a necessary human emotion for survival. It is neither something you should try to eliminate completely nor something you can easily remove. Fear of AI can be seen as a natural response rooted in that survival instinct.

~How to Deal with Fear in a Healthy Way~

Fear is meant to protect us. But when it becomes too strong, it labels every new challenge or change as “danger,” and we can’t move at all. Fear of AI has the same structure.

Two basic steps help you work with fear:

1) Put into words what you are afraid of right now

Not “I feel uneasy,” but more specific: “I’m afraid my job will disappear,” or “I’m afraid my expertise will be denied.”

2) Examine the basis of that fear

Is it based on your past experience, something you heard from someone, or impressions from news or social media? Separate these sources. Even if the basis is unclear, it’s okay to treat the fear as a hypothesis. However, you still need to distinguish fact from opinion.

This approach applies not only to AI, but to fear of any change. If you have the capacity, I recommend validating the basis of the fear. You don’t need to accept a big change all at once. Try something low-risk and within your control, even once; then you can understand the true impact of what you fear. At that point, “fear” often changes into something you can name as an “issue.”

The key is not “crushing” fear, but “controlling” it while keeping it functional.

How to Face AI

AI is often used as a label for “cutting-edge technology we don’t fully understand.” But over time, many of these technologies become “normal.” OCR (Optical Character Recognition) and recommendation features were once treated as symbols of AI, yet today they are simply “standard functions.”

Advanced technologies come with uncertainty, and the rules around them take time to catch up, so in the early stages fear is expected. As discussed, fear is a natural defense response. What matters is not ignoring it, but understanding what exactly feels scary, learning how to engage step by step, and co-creating a practical approach.

~How to Work with People Who Fear AI~

If you want to promote AI adoption, the most important thing is not denying the other person’s feelings. Fear is a defense response. If you push them with “You’re overthinking” or “You’re behind the times,” you will only create backlash.

1) Listen fully to their fear first

Ask “Which part worries you?” or “What feels unpleasant about this?” and listen without judging.

2) Separate facts from impressions or opinions

Check whether “actual risks” are being mixed with “impressions from news or social media” or “personal opinions,” and organize them separately.

3) Create small success experiences together

Don’t try to apply AI to an entire workflow at once. Start with low-impact areas such as drafting meeting materials or brainstorming ideas.

While respecting their fear, find and share the line where both sides agree: “This scope feels safe enough to try.” This is a realistic approach for expanding AI usage inside an organization.

Closing

AI will keep evolving and become part of our daily “infrastructure”—like electricity and the internet. We will increasingly use AI without consciously thinking about it.

Even so, fear and anxiety about AI will not disappear easily. That is why AI advocates—especially middle managers and engineers—need to do more than understand the technology. They also need to face human emotions.

This post organized why people fear AI and how to engage with fear itself. Instead of denying anxiety, put it into words, understand it, and move forward through small experiments. If we can follow that process, we can co-create rather than compete, both with AI and with the people who feel negative toward it, and position AI not as a “scary technology” but as a tool that expands human and organizational potential.

I hope this helps you take the first step.