
What To Do With The Time AI Saves You?

Published on
February 17, 2026
Author
Krisztina Dányi
Digital Storyteller

At a recent webinar I joined, the lecturer posed a riddle: what do the Rubik's Cube, the iPhone, and Michael Jackson's Thriller have in common? The juxtaposition was intentional: they come from different industries, they are among the top-selling products of all time, and none of them responded to existing market demand. The implicit lesson: out-of-the-box thinking creates markets rather than serving them. But this framing has a survivorship-bias problem. For every iPhone, there were a hundred Microsoft Zunes. For every Thriller, a thousand forgotten albums that also "didn't respond to market demand" and failed spectacularly. So what separates the visionary from the delusional?

Although Steve Jobs famously eschewed traditional market research, his focus on "latent needs" and "human perceptions" rather than stated preferences translates into today's lingo as "better synthesis of data". It also means looking at trajectories instead of states. The creators (and, crucially, the curators) of breakthrough products excelled at connecting dots others couldn't see, instinctively identifying latent demand through deep user insight. Which brings me to AI.

How To Reinvest Your Capacity Freed Up by AI

Everyone agrees AI "saves time." At a recent Service Design x AI panel in Budapest, the UX designer and researcher participants were unanimous: AI finally gives you the capacity for a deep understanding of users, customers, and problems. But if we look at how enterprise AI tools like Gemini Enterprise* are actually being marketed (automate entire workflows, empower every employee to transform work and be more strategic), we see a gap.

Nobody is talking about what "being more strategic" actually means in practice. The brochure use cases of AI are already commoditized. Worse, they're evolutionary, not revolutionary. Currently, we're using a pattern-completion machine for incremental gains, when the real opportunity is using it as infrastructure for judgment: a way to reallocate your cognitive budget from rote synthesis to the irreducibly human work of taste, curation, and vision.

This article outlines a general framework and three ways to do exactly that, inspired by Zoe Scaman's genius solo work. I believe these are not just "contrarian AI hacks" but a deliberate strategy for investing the capacity AI frees up.

I. Mine Your Idea Graveyard

Most medium-to-large enterprises are sitting on a goldmine of failure. You have twenty years of abandoned patents, half-baked strategies, and "too early for the market" product specs gathering digital dust. Instead of asking AI to generate new ideas, feed it your archive. AI excels at analyzing not only states but trajectories: it can map the "structural signature" of why projects failed and identify whether conditions have changed. This means using AI as a future historian to spot the inflection points you were too early for.

Try asking:

  • "Here are five product concepts we shelved between 2015-2020. Why did each fail, and which would succeed today?"
  • "This patent was abandoned due to technical limitations. Are those limitations still barriers?"
  • "What market conditions would need to change for this strategy to work?"
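The graveyard pass above lends itself to batching: one structured prompt per shelved concept, queued for whatever enterprise AI system you use. The sketch below is a minimal, hypothetical illustration; the function name, the archive entries, and the prompt wording are all assumptions, and the actual LLM call is left to your own SDK.

```python
# Hypothetical sketch: build one "future historian" prompt per shelved
# concept. The archive data is invented; swap in your real records and
# send each prompt through your enterprise AI client of choice.

def graveyard_prompt(concept: str, shelved_year: int, reason: str) -> str:
    """Assemble a prompt asking why a shelved concept failed and
    whether the conditions that killed it have since changed."""
    return (
        f"We shelved the following product concept in {shelved_year} "
        f"because: {reason}.\n\n"
        f"Concept: {concept}\n\n"
        "1. Map the structural reasons it failed at the time.\n"
        "2. For each reason, state whether market or technical "
        "conditions have changed since then.\n"
        "3. Conclude: was this idea wrong, or merely early?"
    )

# Iterate over the archive and queue one prompt per concept.
archive = [
    ("Voice-controlled warehouse picking", 2017, "speech accuracy too low"),
    ("Subscription spare-parts service", 2019, "logistics costs too high"),
]
prompts = [graveyard_prompt(c, y, r) for c, y, r in archive]
```

The point of the structure is consistency: when every shelved concept is interrogated with the same three questions, the answers become comparable across the whole archive.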

II. Pattern Recognition as Creative Constraint

AI is better at seeing what exists; humans are better at imagining what doesn't. If you ask AI for a "good idea," it will give you the average of every idea it has ever seen. To find the next "Thriller" or the next "iPhone," you have to break the pattern. But how do you break patterns with something designed for pattern recognition?

Here’s an idea: if AI can analyze past failures, it can map current saturations too.

  • White Spacing: Use AI to map the entire landscape of existing competitor solutions, marketing claims, and product features. By having the AI visualize the "center of gravity" in your industry, you can deliberately design for the empty spaces. It’s about using the machine to define the status quo so you can move precisely in the opposite direction.
  • Cross-Domain Hybridization: Innovation often happens when a solution from one industry is applied to a problem in another. Feed the AI two entirely unrelated datasets. For example, "Supply Chain Logistics" and "Urban Sociology." Ask it to identify structural similarities or "cross-domain frustrations" that neither department sees. You aren't looking for a final answer, but for a new lens through which to view your old problems.
  • Mutation lab: Remember those movies in which a handful of heroes save the world with something that “shouldn’t work”? Use multimodal capabilities to generate hundreds of "evolutionary branches" of your core concept, especially combinations that seem counter-intuitive or "wrong." This flips the creative process: instead of struggling to ideate, your job shifts to high-level curation. You are looking for the "useful anomaly"—the 0.1% of output that doesn't fit the existing market pattern but solves a latent human need.
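The mutation lab can be made concrete even without an LLM: enumerate every single-attribute "wrong" variant of a core concept, then curate by hand. This toy sketch is purely illustrative; the concept attributes and mutation values are invented placeholders for your own product's dimensions.

```python
# Toy "mutation lab": cross a core concept's attributes with
# deliberately counter-intuitive values and enumerate the branches.
# All attribute names and values here are invented examples.

core = {"form": "mobile app", "pricing": "subscription", "audience": "enterprise"}

mutations = {
    "form": ["voice-only", "paper kit", "API"],
    "pricing": ["pay-per-failure", "free forever"],
    "audience": ["children", "regulators"],
}

def branches(core: dict, mutations: dict):
    """Yield every variant that mutates exactly one attribute of the core."""
    for key, options in mutations.items():
        for value in options:
            variant = dict(core)
            variant[key] = value
            yield variant

# 3 + 2 + 2 = 7 variants to curate by hand for the "useful anomaly".
all_variants = list(branches(core, mutations))
```

In practice you would feed each variant back to the model for elaboration, but the curation step (spotting the 0.1% anomaly) stays human.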

III. Red Teaming and Engineering Friction

In most enterprises, the approval chain is where bold ideas go to be sanded down into smooth, unremarkable pebbles. We usually use AI to bypass friction, but the most contrarian use of AI is to engineer purposeful friction, deploying it as a strategic challenger. In security, this is called "Red Teaming."

  • Challenge your assumptions: Define your project's mission and objectives, then ask AI to argue why they're flawed or incomplete. For instance, our team started using our sister company’s framework called LIZA. Its creators formalized this thinking into a persistent AI agent nicknamed "Brutally Honest Challenger." Before any project kicks off, we can feed it our brief and ask it to attack our assumptions.
  • Simulate failure modes: Feed it your product roadmap and prompt it to explain, with specific scenarios, how it will fail.
  • Model cultural mutation: Ask AI to simulate how your idea will be misunderstood across different contexts: how it'll be received by different stakeholder groups, how it might be weaponized by critics, and how it could be misinterpreted in different cultural or ideological frameworks.
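The three red-team passes above can be templated so that every project brief gets the same adversarial treatment before kickoff. A minimal sketch follows; the mode names and prompt wording are assumptions to adapt to your own agent setup (such as the "Brutally Honest Challenger" mentioned above), not a fixed scheme.

```python
# Hypothetical sketch: three adversarial prompts per project brief,
# mirroring the red-team passes in the text. Wording is illustrative.

RED_TEAM_MODES = {
    "challenge": ("Argue why the mission and objectives in the brief "
                  "below are flawed or incomplete."),
    "failure": ("Describe, with specific scenarios, how the roadmap "
                "in the brief below will fail."),
    "mutation": ("Simulate how the idea in the brief below will be "
                 "misunderstood, weaponized, or misinterpreted across "
                 "different stakeholder groups and cultural contexts."),
}

def red_team_prompts(brief: str) -> dict:
    """Return one adversarial prompt per red-team mode for a brief."""
    return {
        mode: f"{instruction}\n\n--- BRIEF ---\n{brief}"
        for mode, instruction in RED_TEAM_MODES.items()
    }

prompts = red_team_prompts("Launch an AI-assisted onboarding flow in Q3.")
```

Running all three modes on every brief makes the friction systematic rather than dependent on whoever happens to be in the review meeting.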

This allows for combinatorial thinking, letting people work in domains adjacent to their expertise: a product manager who can now do basic data analysis without a data scientist, a strategist who can model scenarios without an MBA quant team, a marketing generalist who can stress-test an idea's cultural impact without a degree in semiotics or sociology.

I don’t suggest replacing human feedback with AI feedback. I’m just saying, after you've stress-tested your concept against AI opposition, you've earned the right to take up someone's time. Now you can go to customers, users, or domain experts with a sharper hypothesis and more targeted questions.

This Is Your Reallocated Cognitive Budget

Looking at these examples, you've probably realized that I don't believe AI will democratize genius. I do, however, firmly believe it can democratize the conditions for genius, because it removes tedious bottlenecks so humans can focus on the irreducibly human work of taste, judgment, and vision. Human perception is a "predictive model" that seeks to reduce uncertainty. AI can act as a collaborative partner that updates those models with priors no human could process alone, such as millions of global financial news signals or real-time IoT sensor data.

I think, in reality, AI will make us not smarter but more *available*. Available to stress-test ideas before wasting others' time. Available to recognize patterns in decades of data. Available to curate anomalies that don't fit existing markets but might create new ones. And most importantly: available to spend time with the people your products are meant to serve. To ask questions AI can't answer. To understand context AI can't see. To build relationships AI can't form.

Of course, rebuilding a practice from the ground up requires a level of cognitive flexibility that is difficult to enforce at scale, especially where middle management resists change that feels "imposed rather than co-created". But the alternative, using revolutionary tools for evolutionary gains, is a waste of the moment we're in.

*A note on tools: This article focuses on Gemini Enterprise because it's what many organizations are deploying, but the principles apply to any enterprise AI system with large context windows and multimodal capabilities. 
