The Time Surplus Problem
If AI Saves an Hour and No One Feels It, Did It Happen?
OpenAI just released a big “State of Enterprise AI 2025” report about how companies are using AI at work. The details aren’t surprising: usage is up, people say they’re saving time, and there’s a lot of optimism about productivity. It’s the first big, formal attempt to quantify what many of us in tech already feel anecdotally.
But even with numbers like that, I keep feeling there’s a lot about the actual reality of AI adoption that a self-reported survey doesn’t capture.
Just to be clear: this isn’t an “AI is overhyped” post. I actually believe the report: AI really is saving people time and making work better. What interests me is the gap between the measured productivity gains and the lived experience. Most heavy users I know don’t feel like they got their time back. If anything, they feel more stretched. This post is about that gap - because at the ground level, the story is messier than a survey chart.
By now, we should probably assume that most knowledge work is AI-assisted or AI-infused. People don’t always say it out loud, but AI has become a default thought partner and writing buddy. It’s very good at taking a rough idea and turning it into something sharper and more polished. And yet, almost everyone I know who uses AI heavily - including me - feels like they have less time, not more.
One reason, I suspect, is that AI quietly promotes you from “doer” to “manager.” You’re no longer just completing tasks; you’re directing a system that hands you work that’s 80–90% done. That sounds efficient, but reviewing, correcting, and deciding can be more cognitively expensive than doing a simpler version yourself. We’ve trained people for decades to be good executors, not good editors or creative directors. AI suddenly demands the latter.
Another reason is that the interaction loop is addictive. You refine the prompt, ask for another variation, make it shorter, then longer, then “more like X.” The work doesn’t happen once; it happens five or ten times, in slightly different flavors. The result is nicer, but the time cost quietly expands.
And on top of that, AI is incredible at what I think of as “empty-calorie productivity.” It blasts through low-value, mundane tasks - emails, summaries, formatting - so fast that you can spend an entire day feeling productive while the hard, uncomfortable thinking - the stuff that actually moves your life or work forward - gets pushed to the side. Your best hours get spent on AI-accelerated housekeeping.
So when I see a report saying “AI saves people an hour a day,” I don’t doubt it. That’s directionally correct. The irony is that this “saved” hour doesn’t show up as more space in people’s lives. It just gets instantly filled with more projects, more communication and more expectations. The treadmill speeds up.
That’s the gap I keep coming back to. The problem isn’t that AI doesn’t work. It’s that we haven’t yet decided what to do with the surplus it creates, and how to protect our own attention and cognition in a world where the default is to just do more.
Until we figure that out, AI will keep amplifying whatever system it’s dropped into. If our existing system is confusion, overwork, and shallow focus, that’s what it will scale. If we want something better - more real productivity, more clarity, more time that actually feels like time - we’ll have to design that part ourselves.
If I had to write a “request for startup” off of this, it wouldn’t be “AI Agent for that.” We’re drowning in those. It would be the opposite: a holistic system that sits on top of all your AI tools and turns raw time savings into actual dividends. Something that helps teams decide what not to do when AI makes a task cheap, that protects deep-work hours the same way a CFO protects cash, and that treats human attention as the scarce resource to optimize for.
AI is already very good at doing more. The next important company in this space might be the one that teaches us, systematically, how to do less.
📩 P.S. If you want to work with me and the broader Leonis team full-time in our San Francisco office, we’re hiring a Partner-track Associate. If you’ve ever thought, “I wish I could help build the venture firm, not just work at one,” this is that shot.
Outside of the Silicon Valley bubble, most of the world doesn’t really understand - let alone fully adopt - the actual productivity-unleashing power of AI. In a good light, that’s encouraging: there’s clearly a lot of upside left, and we’re still early. In a less good light, it means we haven’t really thought through the downside at all.
We haven’t grappled with job replacement in any serious way, or the way AI can amplify human bias at scale, or what it means for our own minds when we outsource more and more thinking to machines. And we definitely haven’t designed any kind of fair, sustainable way to handle the economic reallocation that comes with all of that.
In the middle of that confusion, a lot of the AI narrative lives at this vague altitude: “post-AGI, future-proof companies,” “AGI that benefits all humanity,” “AI will cure cancer and solve energy, so everything we do now is justified.” Without concrete mechanisms or tradeoffs, those lines function mostly as branding. It’s not evil, it’s just incomplete - and when everyone talks like that, it makes it harder for the public to trust what’s real. It’s the same vibe as Facebook’s early “we just want to make the world more open and connected.”
It’s not wrong to pursue self-interest and profit; the problem is pretending that you’re not. That mismatch between reality and narrative is part of why the public feels so disoriented and frankly concerned about AI. There are too many masks in the arena, and it doesn’t help build trust.