What Work Means for Humans in the Age of AI (Part 1)

2026/05/11
西原 将光


Introduction

Hello, I’m Nishihara.

As the new fiscal year begins, many of you are likely facing fresh challenges in managing your teams and services under new structures.

I’ve noticed growing interest in the topics I regularly write about: AI and management. Generative AI continues to evolve at a pace that feels less like progress and more like acceleration.

With that in mind, I’d like to try something new: a three-part series covering the following themes.

  • Part 1: Human work in the age of AI
  • Part 2: Management in the age of AI
  • Part 3: Communication in the age of AI

To state the conclusion of Part 1 upfront: in the age of AI, the center of gravity in work is shifting away from doing tasks and toward defining purpose, determining outcomes, and making judgments.

In this article, I explore that shift through three lenses: work, output, and decision-making.


What Is Work, Really?

When asked “what is work?”, people offer many different answers.

Some examples:

  • Writing emails
  • Creating documents
  • Attending meetings
  • Conducting research
  • Reporting to stakeholders
  • Explaining things to customers
  • Maintaining procedure manuals
  • Documenting incident responses
  • Answering inquiries

All of these are part of work, but most of them are means, not ends.

Creating a document isn’t the goal in itself. The goal is to use that document to explain, persuade, or reach a decision. Reporting isn’t the goal. The goal is to share context and drive the next action.

If we step back and look at work more broadly, it can be defined as delivering outcomes that are valuable to someone.

For example:

  • Solving a customer’s problem
  • Moving a team’s decision forward
  • Containing the impact of an incident
  • Keeping a service running stably

All the tasks — the documents, the reports, the research — exist in service of these goals.

Generative AI is transforming the task side of this equation. Writing, organizing information, generating ideas, polishing explanations — AI is dramatically accelerating all of these. That’s precisely why defining human work as “doing tasks” becomes increasingly difficult to defend in the age of AI.


AI Is Reducing the Scarcity — Not the Value — of Tasks

AI doesn’t eliminate the value of tasks. But it does reduce their scarcity.

Work that once took hours of human effort — drafting documents, organizing information — can now be handled by AI in minutes. In IT operations, for example, AI can assist with:

  • Compiling incident timelines
  • Drafting initial customer-facing reports
  • Writing first responses to inquiries
  • Reviewing procedure documentation
  • Summarizing system change logs
  • Writing up meeting notes

These are all areas where AI can deliver real efficiency gains.
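
To make the first item concrete, here is a minimal sketch of asking a model to compile an incident timeline from raw log lines. The call_llm helper is a hypothetical stand-in for whichever model API your team already uses, and the prompt wording is an illustrative assumption, not a recommended template.

```python
from typing import List


def call_llm(prompt: str) -> str:
    """Placeholder for a real model call (e.g., a chat-completions API)."""
    raise NotImplementedError("wire this to your model provider")


def draft_incident_timeline(log_lines: List[str]) -> str:
    # Ask for a chronological draft and force inferred entries to be flagged,
    # so a human reviewer can tell observation from guesswork.
    prompt = (
        "Compile a chronological incident timeline from the log excerpts below.\n"
        "For each entry, give the timestamp, the observed event, and the actor.\n"
        "Mark anything inferred rather than observed with '(assumed)'.\n\n"
        + "\n".join(log_lines)
    )
    return call_llm(prompt)
```

The draft is a starting point, not the record: someone who knows the system still has to verify the timestamps, fill the gaps, and decide what goes into the official report.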

But does that mean anyone can produce the same results? No.

As tasks become faster, the differentiator between people shifts — away from execution speed, toward something else:

  • What do you ask AI to do?
  • What context do you provide?
  • How do you evaluate the output?
  • In what situation do you use the result?

In the age of AI, the gap shows up not in the task itself, but in the judgment that surrounds it. The gap between those who use AI effectively and those who are overwhelmed by it will only widen.


The Ceiling on AI Output Is the Ceiling on the Person Using It

If you’ve used AI regularly, you’ve probably experienced moments where the output misses the mark:

  • The response is off-target
  • The analysis stays shallow
  • The answer is generic and doesn’t fit your situation

Sometimes that’s a limitation of the AI itself. But often, the problem lies on the human side.

If you ask AI to “write an incident report” without context, you won’t get a useful one. What’s missing:

  • Who is the audience?
  • Is this a preliminary or final report?
  • How much technical detail is appropriate?
  • Should it include an apology?
  • Is a root cause analysis required?

Without that context, AI can produce something that looks like a report — but won’t actually work in practice.

AI mirrors the thinking of the person using it. Vague prompts produce vague answers. Structured thinking produces structured output.
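
As a small illustration, here is a minimal sketch of what that structured thinking might look like before the prompt is even written: the missing context from the list above is captured explicitly and folded into the request. The field names and wording are assumptions for illustration, not a standard schema.

```python
from dataclasses import dataclass


@dataclass
class ReportContext:
    audience: str            # e.g., "customer operations managers, non-technical"
    stage: str               # "preliminary" or "final"
    technical_detail: str    # e.g., "high-level summary" or "full root-cause detail"
    include_apology: bool
    require_root_cause: bool


def build_report_prompt(ctx: ReportContext, incident_summary: str) -> str:
    # Every field the human filled in becomes an explicit instruction,
    # instead of something the model has to guess.
    return (
        f"Write a {ctx.stage} incident report for {ctx.audience}.\n"
        f"Level of technical detail: {ctx.technical_detail}.\n"
        f"{'Include a brief apology.' if ctx.include_apology else 'Do not include an apology.'}\n"
        f"{'Include a root cause analysis.' if ctx.require_root_cause else 'Root cause analysis is not yet required.'}\n\n"
        f"Incident summary:\n{incident_summary}"
    )
```

Whether the output is usable still depends on the person filling in those fields actually knowing the audience and the purpose, which is exactly the point.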

To get useful output for real work, the person using AI needs:

  • Clarity on what needs to be addressed
  • Judgment about what context to provide
  • A standard for what “good” looks like

The resolution of your thinking directly shapes the quality of AI’s output.


AI Produces Volume. Who Decides Quality?

One of AI’s clear strengths is volume. It can rapidly generate:

  • Multiple draft versions of an email
  • Several structural options for a presentation
  • A list of hypotheses for the cause of an incident
  • A range of improvement proposals

In less time than it takes a human to produce one option, AI can produce many — sometimes more than you can even read through.

But which one do you choose? Which option has the highest quality?

Here’s the key: quality is not universal. It depends on context.

  • For an executive audience, conciseness matters most
  • For a technical team, precision is essential
  • For a customer-facing incident report, clear facts matter — but so does language that doesn’t amplify anxiety

AI can help improve quality — refining expression, catching gaps, offering new angles. But what counts as “good” must be decided by the person with a purpose in mind.

Formatting and generating options? AI can handle that. But the final calls — what to adopt, what to revise, what to discard — remain with the human.

In a world where AI produces the volume, human judgment about quality matters more than ever.


Decision Authority Stays with Humans

AI can propose. It can:

  • List out options
  • Organize risks
  • Draft explanations

But AI is not, at present, an organizational decision-maker.

Business decisions carry responsibility and consequences — ones shaped by unwritten organizational dynamics, customer relationships, and historical context that can’t realistically be captured in a prompt.

In incident response, AI can help organize possible causes and draft report templates. But decisions like the following are a different matter:

  • Which hypothesis do you investigate first?
  • When do you notify the customer?
  • How wide is the blast radius?
  • Do you prioritize recovery or root cause analysis?

AI can support these decisions. It cannot own them. Ultimately, a human — with full context — must decide.

Even as AI makes tasks easier, the responsibility of choosing doesn’t disappear.

This isn’t just about AI being useful or not. It’s about what humans are responsible for.


Conclusion: Work Is Shifting from Making to Deciding

AI is increasingly supporting the task layer of work:

  • Writing
  • Organizing information
  • Generating ideas
  • Refining explanations

This will only grow. But that doesn’t mean human work disappears. It means the center of gravity moves toward something more fundamental:

  • Who do we create value for, what do we deliver, and how?
  • What kind of output does that require?
  • Where do we need to make a judgment call?

This is where human work remains.

And the role long associated with this kind of responsibility — defining direction, making judgment calls, bearing accountability — is management.

So is management a safe harbor in the age of AI? A domain AI can’t touch?

In Part 2, I’ll explore what management looks like in the age of AI.