A new survey reveals many workers are using AI without reviewing the results—raising new risks for businesses as adoption outpaces oversight and governance.

ANN ARBOR – A national survey is raising concerns about how artificial intelligence is being used in the workplace—but the bigger story may be what it reveals about how quickly businesses are adopting AI without clear guardrails.

According to a recent report from Resume Now, more than one-third of U.S. workers say they rarely or only occasionally review AI-generated content before using it.

On its face, the finding points to what researchers are calling an emerging “AI oversight gap”—a growing disconnect between how widely AI tools are used and how carefully their outputs are checked.

But while the survey offers a useful snapshot of workplace behavior, experts say it should be viewed as a signal of a broader trend rather than a definitive measure of risk.

AI Adoption Is Moving Faster Than Oversight

There’s little debate that AI use is expanding rapidly across industries.

The Resume Now survey found:

  • 52% of workers use AI in their weekly tasks
  • Nearly 1 in 5 rely on it for at least a quarter of their workload

That aligns with broader research from firms like McKinsey & Company and Deloitte.

Erik Brynjolfsson, director of the Stanford Digital Economy Lab, has emphasized that AI’s value depends heavily on human oversight.

“AI can dramatically improve productivity, but only if humans remain in the loop to verify and guide its output.”

In Michigan, that tension is especially relevant across:

  • Automotive and advanced manufacturing
  • Healthcare systems
  • Financial services
  • Cannabis retail and compliance

A Survey With a Signal—But Not the Full Picture

While the headline figure—35% of workers not consistently reviewing AI output—is striking, it comes with important caveats.

Resume Now is a career-focused platform, not an academic research institution. Like many workplace surveys released by private firms, the report is designed partly to generate publicity as well as insight.

That doesn’t invalidate the findings—but it does mean they should be interpreted carefully.

Andrew Ng, founder of DeepLearning.AI and co-founder of the Google Brain project, has long warned that companies often underestimate the operational discipline required to deploy AI safely.

“AI is the new electricity. But just like electricity, it needs proper infrastructure and controls to be used safely and effectively.”

Surveys based on self-reported behavior can blur the line between:

  • Skipping review entirely
  • Doing a quick scan
  • Applying rigorous verification

That means the real risk is less about the exact percentage and more about the underlying behavior.

The Real Risk: Overconfidence in AI

Even with those limitations, the survey highlights a widely recognized issue: over-trust in AI systems.

Sam Altman, CEO of OpenAI, has publicly cautioned users against assuming AI outputs are always reliable.

“People should be careful about trusting AI too much. It can make mistakes.”

That risk is already showing up in workplaces.

Separate research has found:

  • Managers are seeing AI-related errors
  • Employees often treat outputs as final work
  • Mistakes slip through when content “sounds right”

For Michigan businesses, the implications are real:

  • Manufacturing: Incorrect specifications
  • Healthcare: Documentation errors
  • Finance: Compliance risks
  • Cannabis: Regulatory violations

The Rise of “Shadow AI” in the Workplace

Another finding from the survey points to a quieter but equally important issue: lack of transparency.

  • 40% of employees say they use AI tools at work
  • 15% admit they’ve used AI without informing their employer

This mirrors earlier waves of unsanctioned tech adoption.

Satya Nadella, CEO of Microsoft, has stressed that governance must evolve alongside adoption as AI becomes embedded in business workflows.

“Every company is going to become an AI-powered company—but governance and trust will be critical.”

For employers, that creates a hidden layer of operational risk.

Why This Matters Now

AI is no longer experimental—it’s embedded in daily workflows.

But many organizations still lack:

  • Formal policies
  • Clear accountability
  • Employee training

That gap between adoption and oversight is where problems emerge.

And it’s not just about individual mistakes—it’s about scale.

What Businesses Should Do Next

Experts say companies don’t need to slow AI adoption—but they do need to catch up on governance.

1. Define Acceptable Use

Establish a written policy that spells out which AI tools employees may use, and for which tasks.

2. Require Human Review

Make verification of AI-generated content a formal step before it reaches customers, regulators, or production systems.

3. Increase Transparency

Ask employees to disclose when and how they use AI, so “shadow AI” becomes visible to managers.

4. Train for Verification

Teach staff how to check AI output for accuracy, not just how to prompt for it.

The Resume Now survey may not offer a precise measurement of workplace risk—but it highlights something more important:

AI is moving into the workplace faster than many organizations are prepared to manage it.

For Michigan businesses, the takeaway isn’t the exact percentage.

It’s the broader reality:

The technology is already here.
The guardrails are still catching up.