Even as generative AI becomes central to enterprise innovation, many of its outputs still require human correction before reaching production quality. CIOs now face unexpected costs, shifting labor dynamics, and ambiguous ROI tied to AI deployment.
A recent MIT report found that 95% of generative AI pilot programs yield no return on investment, citing a lack of learning feedback loops and continued dependence on manual oversight.
In real-world use, distorted visuals, misaligned formats, and unstable or insecure code are common. To meet quality and performance standards, organizations are increasingly relying on contract professionals to refine or rebuild what AI produces. Yet these professionals are often undervalued: brought in late, funded from tight budgets, and given limited recognition of the complexity involved.
Why It Matters: While generative AI is often positioned as a labor-saving innovation, many organizations are now confronting the operational reality: outputs still require substantial human intervention. This introduces hidden costs and new dependencies that many enterprise budgets and timelines fail to account for. For CIOs, this challenges assumptions around ROI and automation efficiency, and underscores the need to rethink talent sourcing, delivery models, and oversight strategies across AI-driven initiatives.
- AI Tools Are Producing Volume, Not Value: Generative AI allows rapid creation of content, but scale alone does not guarantee quality. Designers are frequently brought in to correct outputs that break when resized or miss brand requirements. These corrections can take as much or more time than starting from scratch.
- The False Economy of AI-First Workflows: Businesses drawn to low-cost, quick AI outputs sometimes find themselves spending more in the long run. Budgets may be committed early to tool usage, only to find additional work is needed. Freelancers hired to revise AI content are often paid less, even when the work requires high-level skill or judgment.
- Developers Are Fixing What AI Gets Wrong: AI-generated code can accelerate prototyping, but it is not always deployment-ready. Developers report reviewing code with flawed logic or exploitable security gaps. Many of these issues stem from the tools’ current limitations in understanding context or system-level architecture.
- The Human Touch Still Signals Quality and Trust: Clients and audiences are becoming more aware of the patterns and style of AI-generated content. Some report that visuals or writing produced by AI lack emotional depth or clarity. In response, businesses and creators often turn to human professionals to produce more intentional or expressive work.
- Generative AI Doesn’t Learn the Way Humans Do: A common assumption is that AI tools improve through continued use. In most cases, these systems do not retain user feedback or adapt without retraining. The MIT report cited above attributed the lack of measurable ROI in most pilot programs in part to this gap, and to the extra work still required from human users.
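To make the developer-rework point concrete, here is a minimal, hypothetical sketch of a pattern reviewers frequently flag in AI-suggested code: SQL built by string interpolation, which permits injection, alongside the parameterized version a developer would substitute. The function names and schema are illustrative, not drawn from any specific incident.

```python
import sqlite3

def find_user_unsafe(conn, username):
    # Pattern often seen in generated code: interpolating user input
    # directly into SQL, which lets crafted input alter the query.
    query = f"SELECT id, name FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn, username):
    # The human fix: a parameterized placeholder, so input is treated
    # as data rather than as executable SQL.
    return conn.execute(
        "SELECT id, name FROM users WHERE name = ?", (username,)
    ).fetchall()

# Demo against an in-memory database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO users (name) VALUES (?)", [("alice",), ("bob",)])

payload = "' OR '1'='1"
print(len(find_user_unsafe(conn, payload)))  # injection matches every row: 2
print(len(find_user_safe(conn, payload)))    # treated as a literal name: 0
```

The fix is a one-line change, but spotting it requires the kind of security judgment the tools themselves do not yet reliably supply, which is why such review work keeps landing on human developers.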
Go Deeper -> Humans are being hired to make AI slop look less sloppy – NBC News
Trusted insights for technology leaders
Our readers are CIOs, CTOs, and senior IT executives who rely on The National CIO Review for smart, curated takes on the trends shaping the enterprise, from GenAI to cybersecurity and beyond.
Subscribe to our 4x a week newsletter to keep up with the insights that matter.


