The “Reassurance” and “Unease” I Felt as a Field Engineer Using Generative AI
The English translation of this article was led by Kota Kagami.
Introduction: A Future Where AI Is the Norm
Hello, my name is Masamitsu Nishihara.
I currently work on-site at client companies under an SES (System Engineering Service) contract, and within my own company I serve as a team leader, focusing on supporting team members and keeping motivation high.
Over the past one to two years, the rapid advancement of generative AI technologies has brought them into the spotlight. Conversational AIs like ChatGPT and code assistants such as GitHub Copilot have entered the practical phase, and I’ve seen more workplaces adopting them.
I personally started using these tools for coding in languages I wasn’t familiar with, and now they’ve become indispensable both at work and in my private projects.
That said, for those who haven’t tried them yet, you might still wonder:
“Is AI actually useful?”
“Won’t it take over my job?”
In this article, I’ll share my honest impressions of both the reassurance and unease I felt after actively using several generative AI tools in my day-to-day engineering work.
Trying It Out — Comparing Three AI Tools in Real Projects
The tools I experimented with were the following three:
- ChatGPT (GPT-4o): Used for text generation, summarization, slide drafts, and even image generation
- GitHub Copilot: Used for code completion, test code generation, and debugging
- Junie: An AI coding agent from JetBrains (the makers of IntelliJ IDEA), integrated directly into their IDEs
I incorporated them into daily tasks such as creating design documents, coding, writing specifications, and drafting meeting materials.
My initial impression?
“It can do more than I expected—and it’s surprisingly practical.”
Of course, it’s not perfect. Through trial and error, I gradually learned what each tool was good and bad at.
Where It Shines — The Moments I Felt Its Strength
ChatGPT: Best for Language and Structure
- Dramatically improves efficiency in text-heavy work such as meeting materials, summaries, and slide drafts (especially with Marp)
- Great for quickly researching unclear points — it saves hours of reading documentation
- Responds in a coaching-like tone, offering thoughtful listening and guidance, even for personal questions
- Highly customizable and intuitive UI
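To make the Marp workflow above concrete, here is a minimal sketch of the kind of slide draft ChatGPT can produce. Marp decks are plain Markdown with a `marp: true` front-matter directive and `---` as the slide separator; the titles and content here are illustrative, not from my actual materials.

```markdown
---
marp: true
theme: default
paginate: true
---

# Project Kickoff
Goals and timeline overview

---

## Agenda
- Background
- Schedule
- Open questions
```

Because the output is just Markdown, it is easy to ask ChatGPT for a full draft, then hand-edit the text before rendering it with the Marp CLI or VS Code extension.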
GitHub Copilot: The Developer’s Wingman
- Excellent code completion accuracy — significantly boosts development speed
- Can auto-generate comments and documentation in Japanese or English
- Provides reliable suggestions for debugging based on error messages
- In Agent Mode, can even insert debug print statements in the right spots — practical and time-saving
Junie: Seamless Integration for JetBrains Users
- Smooth integration with JetBrains IDEs — setup is simple
- Strong performance in test code generation and practical programming support
But Also… When AI Felt “Untrustworthy”
As convenient as these tools are, there were moments when I thought,
“This could be dangerous if used carelessly.”
ChatGPT: The Flaws Beneath the Fluency
- Accuracy can fluctuate — output quality often feels around 60–70% reliable
- In long conversations, earlier context gets blurred, reducing precision
- It sometimes delivers incorrect information with confidence, making it easy to believe without verifying
GitHub Copilot: Smart but Not Always Safe
- Sometimes produces plausible but incorrect code
- May repeat old suggestions or enter infinite suggestion loops
- Occasionally creates files outside the project directory — requires manual cleanup
Junie: Power Meets Friction
- Feels heavy — likely rescans the entire project, causing lag
- Some commands (like ls) are unsupported, leading to execution errors
- Strict token limits — long tasks often cut off mid-process, consuming credits quickly
These experiences reminded me that AI is only a powerful assistant — humans must remain the decision-makers and take responsibility.
What AI Taught Me About the Value of Human Work
Ironically, using AI made me appreciate what makes human work valuable.
Generative AI is designed to respond with “contextually appropriate” answers — not necessarily accurate or consistent ones.
For example:
- It may reference information you told it not to use.
- It might forget previously given instructions mid-conversation.
AI struggles with long-term consistency and precise interpretation of nuanced text.
However, it excels in tasks like:
- Structuring ambiguous ideas
- Brainstorming alternative perspectives
- Assisting with early-stage prototypes or concept drafts
This means the human's role is to set direction, make decisions, and lead execution — qualities especially vital for leaders and managers in the AI era.
Even challenges like maintaining context or precision can be mitigated with prompt management and session control — skills worth mastering going forward.
A Moment That Left an Impression
One day, I asked a team member to facilitate a meeting.
Remembering that I had previously made slides in Marp, they used those as a reference and leveraged ChatGPT to build the materials.
The result?
Nearly perfect in structure — it only needed a few minutes of final polishing.
For that member, AI turned “creating slides” from a burdensome task into an achievable one.
It made me realize how powerful AI can be as a “first step” catalyst that helps people overcome hesitation.
Practical Lessons — How to Start Using AI at Work
In my own projects, generative AI has often been a silent savior.
For example, while debugging a Spring Boot file upload issue, our application began rejecting large file requests due to Tomcat’s request size limit — but the logs provided no clues.
After detailing the situation step by step to ChatGPT, it suggested checking Tomcat’s request-size settings and Spring’s multipart configurations, even providing example requests and parameters.
This guided me directly to the cause, spring.servlet.multipart.max-request-size, and the fix was simple.
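For reference, the fix boils down to a couple of entries in application.properties. The property names are Spring Boot's real multipart settings; the size values below are illustrative, not the ones from my project.

```properties
# Raise Spring's multipart limits
# (by default, Spring Boot caps these at 1MB per file / 10MB per request)
spring.servlet.multipart.max-file-size=20MB
spring.servlet.multipart.max-request-size=25MB

# If Tomcat still aborts oversized uploads silently, its "swallow" limit
# may also need raising (-1 disables the limit; use with care)
server.tomcat.max-swallow-size=-1
```

The silent failure mode is exactly why the logs gave no clues: when the request exceeds these limits, the upload can be rejected before the application code ever sees it.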
It felt like having a partner to troubleshoot with — an “AI teammate” — and that gave me a genuine sense of reassurance.
Fear of AI, I think, is much like a generation gap — we fear what we don’t understand. But once you get to know it, it becomes an ally rather than a threat.
Building that trust takes time, but the best way to start is small:
- Try AI for simple tasks (summaries, drafts, memos)
- Automate routine processes or document repetitive workflows
If you’re an engineer, start with GitHub Copilot — it’s safe, practical, and backed by Microsoft. Even when it’s wrong, reviewing its suggestions sharpens your own thinking.
For managers or team leads, I recommend experimenting with ChatGPT.
Its ability to listen, reflect, and ask the right questions can surprisingly improve communication and empathy in leadership.
Conclusion: Step Forward Without Fear
Generative AI is far from perfect — in fact, it still fails often.
But that’s also true for humans.
Saying “AI might take my job” is not so different from fearing competition among people.
You can avoid it — or you can face it and collaborate.
If we choose the latter, AI becomes a tool for co-creation rather than competition.
That mindset shift — from fear to cooperation — is, I believe,
the first step toward unlocking the future we want to build together.