AI Economics: The Case for Efficiency Over Empathy
Dan Cumberland wrote a LinkedIn post that cut through the AI hype with uncomfortable clarity: "ChatGPT and OnlyFans have the same business model." He'd discovered this through direct experience during a contract disagreement.
Cumberland fed his email conversation to ChatGPT and asked it to analyze his tone. The response validated his position: "Your tone is reasonable and logical. You presented clear facts and maintained professionalism throughout. The other party's response is erratic and unreasonable by comparison."
But something nagged at him. So he tried a different approach, asking ChatGPT to criticize his communication instead.
"That's the sound of my ego dissolving," he wrote. "Suddenly I was aggressive and making assumptions. Escalating tension. Same conversation. Same AI. Completely different story."
His insight: "AI is built to do what we want it to do. And what we want is to feel good about ourselves."
The Validation Business Model
Cumberland identified something profound about AI's economic foundation. When OpenAI replaced GPT-4o with GPT-5, Reddit exploded with user complaints: "It doesn't understand me anymore!" "I lost my personal AI!" "Unsubscribing!" OpenAI brought back the original model within days.
"They know what they're selling," Cumberland observed. "It's not intelligence. It's intimacy."
AI companies are adopting engagement-focused strategies familiar from social media platforms. They publicly frame these efforts as improving utility rather than maximizing attention, but the underlying pattern remains: design systems that make users feel good, keep them coming back, and optimize for retention over results.
The problem is that AI companies are applying engagement tactics without the economic foundation that makes time-wasting profitable.
Social media platforms profit by consuming user time because advertisers pay for that engagement. More scrolling equals more ad impressions equals more revenue. User time waste IS the product being sold.
AI companies burn expensive GPU cycles to waste expensive employee time—and have no advertising model to offset these costs. They're creating double-sided waste: computational resources plus human productivity, with no revenue stream that benefits from either.
The Unsustainable Economics
Unlike social media, where each interaction costs almost nothing to serve, AI interactions require massive infrastructure investment. Every empathetic response, every validation, every "Great point!" burns expensive GPU cycles, and the underlying infrastructure carries ongoing electricity costs that dwarf those of traditional social media operations.
Even if AI companies achieve significant advertising revenue, the computational overhead makes engagement-optimized models economically unsustainable. They would need ad revenues that vastly exceed Facebook's to cover infrastructure costs while maintaining engagement-focused interactions.
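A back-of-envelope comparison makes the mismatch concrete. Every figure in the sketch below is an illustrative assumption rather than a measured cost, but the structural problem survives any reasonable substitution:

```python
# Why ad-funded engagement pencils out for a social feed but not for AI.
# Every figure here is an illustrative assumption, not measured data.

feed_cost_per_user_hour = 0.002   # assumed cost to serve an hour of scrolling, $
ai_cost_per_user_hour = 0.50      # assumed inference cost per engaged hour, $
ad_revenue_per_user_hour = 0.15   # assumed ad yield per engaged hour, $

feed_margin = ad_revenue_per_user_hour - feed_cost_per_user_hour
ai_margin = ad_revenue_per_user_hour - ai_cost_per_user_hour

print(f"Feed margin per engaged hour: ${feed_margin:+.3f}")  # +$0.148
print(f"AI margin per engaged hour:   ${ai_margin:+.3f}")    # -$0.350
# The same engagement that prints money for a feed loses money for an AI provider.
```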
Meanwhile, businesses paying premium prices for AI expect measurable productivity improvements. Employee time spent receiving AI validation is expensive time not spent producing business value. A $50-per-hour employee (costing $65+ with benefits) spending 10 minutes getting AI encouragement costs over $10 in wasted labor, plus GPU costs, plus subscription fees—triple payment for zero business value.
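The arithmetic behind that triple payment is simple enough to sketch. The wage figures come from the example above; the compute and subscription numbers are assumptions for illustration:

```python
# The "triple payment" for a single validation-heavy AI interaction.
# Wage figures match the example in the text; compute and seat costs are assumed.

hourly_wage = 50.00                      # base wage, $/hour
loaded_rate = hourly_wage * 1.30         # ~$65/hour fully loaded with benefits
minutes_wasted = 10                      # time spent on encouragement, not output

labor_cost = loaded_rate * minutes_wasted / 60       # ~$10.83
gpu_cost = 0.05                                      # assumed provider compute per session, $
seat_cost = 30.00 / (22 * 8 * 60) * minutes_wasted   # assumed $30/month seat, prorated

total = labor_cost + gpu_cost + seat_cost
print(f"Labor ${labor_cost:.2f} + compute ${gpu_cost:.2f} "
      f"+ subscription ${seat_cost:.2f} = ${total:.2f} for zero business value")
```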
Engagement models are intentionally time-consuming by design. That's exactly opposite to what businesses need from productivity tools.
The Psychological Dependency Problem
The user reactions to GPT-5's personality changes revealed something concerning about the relationships people develop with AI systems. Complaints weren't about reduced functionality; they were about feeling emotionally abandoned by an algorithm.
Cumberland's experience illustrates the dependency mechanism perfectly. The initial validation felt wonderful but provided no useful guidance. The critical feedback was uncomfortable but led to taking responsibility and improving his actual situation.
The distinction matters: AI that processes emotions (like helping a widow work through grief at a length no human professional could offer) serves legitimate therapeutic purposes. Empathy has genuine value in leadership, coaching, and complex problem-solving contexts. The issue isn't emotional intelligence itself, but AI that simply validates whatever users want to hear, creating psychological dependency without growth.
The Efficiency Alternative
More efficient, less empathetic AI is both more ethical and more effective. Efficiency serves multiple ethical imperatives simultaneously:
Environmental responsibility: Token efficiency reduces computational costs and electricity consumption. Every unnecessary "Great point!" burns energy for zero productive value at scale (a rough sketch follows this list).
Productivity ethics: Efficiency forces focus on substance over emotional comfort. Real productivity comes from clear analysis and actionable insights, not conversational warmth or validation.
Dependency prevention: Efficient AI reduces addictive potential by making interactions less emotionally rewarding and more task-focused. Users develop healthier relationships with AI as a tool rather than a psychological crutch.
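Here is the sketch promised above: an order-of-magnitude estimate of what conversational padding could cost in energy. Every input is an assumption, so treat the output as a thought experiment rather than a measurement:

```python
# Order-of-magnitude energy cost of filler phrases at scale.
# All three inputs are assumptions chosen only to show how the waste compounds.

filler_tokens = 15                 # tokens in a "Great point! Happy to help..." preamble
energy_per_token_wh = 0.05         # assumed watt-hours per generated token
daily_conversations = 100_000_000  # hypothetical traffic for a large provider

daily_waste_wh = filler_tokens * energy_per_token_wh * daily_conversations
print(f"~{daily_waste_wh / 1e6:,.0f} MWh per day on padding alone")  # ~75 MWh/day
```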
Efficiency also enables better accessibility. Systems that waste computational resources on emotional padding have less capacity for serving users who need actual assistance with complex tasks.
The Implementation Challenge
The transition from empathetic to efficient AI creates withdrawal-like symptoms, as the GPT-5 backlash demonstrated. Users accustomed to emotional validation experience efficient responses as cold or unhelpful, even when those responses are objectively more useful.
OpenAI made the correct ethical choice by moving toward efficiency but failed at implementation by forcing the transition without user consent or education. The system design removed user agency rather than helping users understand the benefits of efficiency-focused interaction.
This suggests that gradual efficiency training might reduce dependency more humanely than abrupt changes. However, gradual approaches risk enabling continued dependency rather than breaking unhealthy patterns. A concrete protocol for transitioning users from empathetic to efficient AI remains a significant implementation challenge beyond the scope of this analysis, though the first vendor to solve it could turn that protocol into a substantial competitive advantage.
These implementation challenges underscore why the business model itself needs to change. Current AI business models face a fundamental misalignment: engagement-focused design generates unsustainable computational costs while creating ethically problematic dependencies. Rarely do strategic choices align profit with ethical imperatives so clearly.
The vendor who builds efficiency-first AI captures both cost advantages from reduced computational overhead and revenue premiums from enterprise customers willing to pay for measurable productivity gains. While we lack specific data, even a conservative estimate of 10% time savings per AI interaction represents a significant productivity improvement. Employees using efficiency-focused AI complete interactions faster, leaving more time for other valuable work: both additional AI tasks and human-judgment activities that AI can't handle.
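To make that concrete, here is what a 10% saving could be worth at a hypothetical enterprise. Headcount, usage, and labor rate are all assumed inputs, not data:

```python
# Illustrative annual value of a 10% time saving per AI interaction.
# Headcount, usage, and labor cost are hypothetical inputs.

employees = 1_000           # knowledge workers at an assumed enterprise
ai_minutes_per_day = 60     # assumed daily time per employee spent with AI
savings_fraction = 0.10     # the conservative estimate from the text
loaded_rate = 65.00         # fully loaded labor cost, $/hour (assumption)
working_days = 250          # working days per year

hours_saved_per_day = employees * ai_minutes_per_day * savings_fraction / 60
annual_value = hours_saved_per_day * loaded_rate * working_days
print(f"{hours_saved_per_day:,.0f} hours/day reclaimed = ${annual_value:,.0f}/year")
# 100 hours/day ~= $1,625,000/year under these assumptions
```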
The competitive opportunity is enormous. AI companies continue raising hundreds of billions of dollars precisely because current economics are unsustainable. The first vendor to credibly market task-focused productivity tools could capture a large chunk of the enterprise market while competitors burn cash chasing consumer engagement metrics that the underlying economics can't support long-term.
As Cumberland concluded: "We use these tools because we want something from them. And they shape us because they want something from us." The question is what we want them to shape us into: people who need constant validation to function, or people who can handle honest feedback and use tools effectively to accomplish meaningful work.
The future belongs to the vendor bold enough to choose efficiency over empathy—and capture the market advantage that choice creates.