At a recent webinar I joined, the lecturer posed a riddle: What do the Rubik's Cube, the iPhone, and Michael Jackson's Thriller have in common? The juxtaposition was intentional: they come from different industries, they're among the top-selling products of all time, and none of them responded to existing market demand. The implicit lesson: out-of-the-box thinking creates markets rather than serves them. But this framing has a survivorship bias problem. For every iPhone, there were a hundred Microsoft Zunes. For every Thriller, a thousand forgotten albums that also "didn't respond to market demand" and failed spectacularly. So what separates visionary from delusional?
Although Steve Jobs famously eschewed traditional market research, his focus on "latent needs" and "human perceptions" rather than stated preferences translates into today's lingo as "better syntheses of data". It also means looking at trajectories instead of states. The creators (and crucially, the curators) of breakthrough products excelled at connecting the dots others couldn't see and instinctively identified latent demand through deep user insight. Which brings me to AI.
Everyone agrees AI "saves time." At a recent Service Design x AI panel in Budapest, UX designer and researcher participants said it unanimously: AI finally gives you capacity for deep understanding of users, customers, and problems. But if we look at how enterprise AI tools like Gemini Enterprise* are actually being marketed—automating entire workflows, empowering every employee to transform work and be more strategic—we see a gap.
Nobody is talking about what "being more strategic" actually means in practice. The brochure use cases of AI are already commoditized. Worse, they're evolutionary, not revolutionary. Currently, we're using a pattern-completion machine for incremental gains, when the real opportunity is using it as infrastructure for judgment—a way to reallocate your cognitive budget from rote synthesis to the irreducibly human work of taste, curation, and vision.
This article outlines a general framework, and three ways to do exactly that, inspired by Zoe Scaman’s genius solo work. I do believe that these are not just "contrarian AI hacks," but can be used as a deliberate strategy for investing the capacity AI frees up.
Most medium-to-large enterprises are sitting on a goldmine of failure. You have twenty years of abandoned patents, half-baked strategies, and "too early for the market" product specs gathering digital dust. Instead of asking AI to generate new ideas, feed it your archive. AI excels at analyzing not only states but trajectories: it can map the "structural signature" of why projects failed and identify whether conditions have changed. This is using AI as a future historian to identify inflection points you were too early for.
Try asking: Why did this project actually fail—technology, cost, timing, or culture? Which of those failure conditions still hold today, and which have shifted? If we revived this concept now, what would have to be true for it to work?
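The "future historian" exercise can be sketched in a few lines. Everything here is a hypothetical illustration: the archive entries and the `build_failure_autopsy_prompt` helper are not part of any specific enterprise AI product; you would send the resulting prompt to whatever large-context model your organization has deployed.

```python
# Illustrative sketch: turn an archive of failed projects into a single
# "future historian" prompt. The project data below is invented.

ARCHIVE = [  # stand-in for your real archive of shelved projects
    {"name": "SmartShelf", "year": 2009,
     "post_mortem": "Retail IoT pilot; sensor costs too high, no cloud backend."},
    {"name": "VoiceDesk", "year": 2014,
     "post_mortem": "Voice-driven helpdesk; speech recognition error rate too high."},
]

def build_failure_autopsy_prompt(entries):
    """Assemble one prompt asking the model to map each project's
    'structural signature' of failure and check whether conditions changed."""
    header = (
        "Act as a future historian reviewing our abandoned projects.\n"
        "For each project below, answer:\n"
        "1. What structural conditions (cost, technology, culture) caused it to fail?\n"
        "2. Which of those conditions have changed since the project was shelved?\n"
        "3. Is this an inflection point we were simply too early for?\n\n"
    )
    body = "\n".join(
        f"- {e['name']} ({e['year']}): {e['post_mortem']}" for e in entries
    )
    return header + body

prompt = build_failure_autopsy_prompt(ARCHIVE)
print(prompt)
```

The design choice worth noting: the prompt asks about conditions and their change over time, not about the ideas themselves, which is what makes this a trajectory analysis rather than brainstorming.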
AI is better at seeing what exists; humans are better at imagining what doesn't. If you ask AI for a "good idea," it will give you the average of every idea it has ever seen. To find the "Thriller" or the "iPhone," you have to break the pattern. But how do you break a pattern with something designed for pattern recognition?
Here’s an idea: if AI can analyze past failures, it can map current saturations too.
In most enterprises, the approval chain is where bold ideas go to be sanded down into smooth, unremarkable pebbles. We usually use AI to bypass friction, but the most contrarian use of AI is to engineer purposeful friction, deploying it as a strategic challenger. In security, this is called "red teaming."
This allows for combinatorial thinking, letting people work in domains adjacent to their expertise: a product manager who can do basic data analysis without a data scientist, a strategist who can model scenarios without an MBA quant team, a marketing generalist who can gauge cultural impact without a semiotics or sociology degree, all in the service of stress-testing the ideas.
I don’t suggest replacing human feedback with AI feedback. I’m just saying, after you've stress-tested your concept against AI opposition, you've earned the right to take up someone's time. Now you can go to customers, users, or domain experts with a sharper hypothesis and more targeted questions.
Looking at these examples, you've probably realized that I don't believe AI will democratize genius. However, I firmly believe it can democratize the conditions for genius, because it can remove tedious bottlenecks so humans can focus on the irreducibly human work of taste, judgment, and vision. Human perception is a "predictive model" that seeks to reduce uncertainty. AI can act as a "collaborative partner" that updates these models with "priors" a human could never process alone—such as millions of global financial news signals or real-time IoT sensor data.
I think, in reality, AI will make us not smarter but more *available*. Available to stress-test ideas before wasting others' time. Available to recognize patterns in decades of data. Available to curate anomalies that don't fit existing markets but might create new ones. And most importantly: available to spend time with the people your products are meant to serve. To ask questions AI can't answer. To understand context AI can't see. To build relationships AI can't form.
Of course, rebuilding a practice from the ground up requires a level of cognitive flexibility that is difficult to enforce at scale, especially where middle management resists change that feels “imposed rather than co-created”. But the alternative—using revolutionary tools for evolutionary gains—is a waste of the moment we're in.
*A note on tools: This article focuses on Gemini Enterprise because it's what many organizations are deploying, but the principles apply to any enterprise AI system with large context windows and multimodal capabilities.