Why GenAI Won’t Replace Everything: The Multi-Level Advantage
This article was co-written with Claude AI for translation, reformulation, and structure. It is the fruit of a real dialogue and reflections, not an automated article created from standardized content for filler purposes. I wish everyone a good read.
A short exercise story: the Hyperkitchen
Imagine that you have a new appliance at home called the Hyperkitchen.
This machine can store food like a fridge, prepare many types of food (from regular meals to desserts and drinks), wash your dishes, wash and dry your clothes, vacuum and mop the floor, manage deliveries, take out the trash, clean the table, make the bed, and even iron your shirts.
In terms of cognitive load, the machine is incredible. One interface does it all. All your neighbors have it; you have it at the office. No need to think, coordinate, balance, select, or choose. You don’t have to clean it, repair it, or even plug or unplug it.
It legitimately becomes the new ubiquitous central element of your home.
But there’s a catch.
The machine consumes a great deal of energy, and for the moment your electricity provider is deferring the bill. Let’s also assume the energy is clean, to stay focused on our key matter. For now the cost is acceptable, but further down the road it could cost you far more than today, maybe 10, 20, or 50 times as much. You don’t know when the price adjustment will happen, but you know for sure it will, one way or another. Hopefully, by the time the adjustment comes, the machine’s efficiency will have improved enough that you won’t face a recurring high cost; you’ll just have a short debt to pay.
What would you do then? Continue to use the Hyperkitchen for everything, or combine it with a coffee machine (you drink the same coffee every day anyway) and other traditional appliances that are optimized for dedicated recurring functions, though not for flexibility?
Recognize something?
We all know this story is about generative AI. It’s not about being for or against it, but about being wise with it.
Personally, I totally love genAI—I’m even building my own product with it and managed to remake my whole knowledge architecture thanks to it. But I believe most of its current use could be more relevant at individual, collective, corporate, and societal levels. Also, as for any tool, I believe mastery needs awareness of when to use it and when NOT to use it.
Like many others, I believe we shouldn’t blame the technology itself, but rather how some humans misuse it. When I say “a technology is dangerous,” I actually mean “decision-makers in charge of this technology have great potential for dangerous uses.”
This article is not a rant at all (not my mindset, never has been, never will be), it’s a call for a little hindsight, hopefully leading to more wisdom in our daily uses.
I have recently been working on how genAI can transform ways of working, ways of organizing, even ways of thinking about self and others, whether at home, at the office, or elsewhere. I have met people who utterly resist genAI, people who embrace it like a new god, and many profiles in between.
If you can read French, I also created a game about managing human factors, competing goals, and limited resources when managing a genAI team transformation in a corporate context. You can find the game here: https://openseriousgames.org/iagen-team-transfo/
Here are a few ideas I want to share today, especially for those still fearful of the space genAI is taking in our lives and society.
“It’s a revolution, so everything, everywhere will be done by or with genAI”
One common belief I’ve encountered is “Everything will be done by AI, it’s going to be everywhere.” This sentiment comes from both critics (who see it negatively) and enthusiasts (who see it as a dream).
I suspect that most of the time, these words are emotions expressed in language rather than considered ideas or beliefs. The problem is that some people get so entrenched in the sentence that they begin dreading it, or becoming euphoric about it, under their own emotion, and then take the sentence literally.
Some fear that (gen)AI will replace all human activities and tasks; others want it to.
A lot of it has to do with the idea that genAI is a revolution, and a revolution has to be everywhere. There is no question that genAI, in its current state, on the path to AGI and even just on the path to neurosymbolic AI, is a revolution.
genAI can be considered generalist and, in a way, ubiquitous because of its extremely extended range compared to previous conceptions of intelligence, bringing incredible savings in both cognitive and computational load.
I’m not challenging genAI’s status, the speed of its evolution, or the benefits and costs it creates. What I am challenging here is the perception we humans can have of this technology.
So the key question is: does potential “ubiquity” mean real, effective use everywhere in the long run?
I will challenge the word “everywhere” here, because some people confuse “everywhere” with “for absolutely every task” (which can definitely drive them into extreme mental or emotional states).
The example of the electricity revolution
Let’s take a step back and look at another revolution, one for which time gave us some hindsight. I’m not saying every parallel I make is correct, but let’s hope you find some common principles.
Let’s take electricity, which also seemed ubiquitous at the time. Electricity was without a doubt a revolution. Our current society would collapse in a few days, or even less, without it. It is everywhere in the sense of covering most major key points in our society.
Still, 150 years later, electricity is not systematically used for countless little things like tying our shoes, opening most doors, washing ourselves in the shower, hugging our family, or injecting vaccines—even though it can be used in most cases, and electricity is used in producing all these items (shoelaces, door knobs, vaccines, and syringes) in some way.
So no: the fact that something is clearly a revolution does not make it necessary, or even sufficient, everywhere. That doesn’t make electricity any less of a revolution; and even when massive changes make a technology omnipresent, that omnipresence does not extend to all human tasks.
I. The Economic Reality: Cost Optimization Drives Multi-Level Adoption
Cost optimization doesn’t uniformly push us toward high-tech, low-tech, or even “medium-tech” solutions. Instead, the right level of technology is chosen based on need, but also on how the cost function behaves as the technology varies.
Consider a simple manufacturing example: If I need to create one complex mechanical part, I’ll probably use a 3D printer. But if I need to create 1,000 simple mechanical parts, I’ll likely use a fixed mold because the cost per piece will be cheaper. This is why we don’t use 3D printers to make screws and bolts.
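This trade-off is just a break-even calculation between a low-fixed-cost, high-unit-cost process (the 3D printer) and a high-fixed-cost, low-unit-cost one (the mold). A minimal sketch, with purely illustrative numbers rather than real manufacturing quotes:

```python
def total_cost(fixed_cost, unit_cost, quantity):
    """Total cost of producing `quantity` parts with a given process."""
    return fixed_cost + unit_cost * quantity

def break_even_quantity(fixed_a, unit_a, fixed_b, unit_b):
    """Quantity at which process B (high fixed, low unit cost) becomes
    cheaper than process A (low fixed, high unit cost).
    Solves: fixed_a + unit_a*q == fixed_b + unit_b*q."""
    return (fixed_b - fixed_a) / (unit_a - unit_b)

# Illustrative assumptions: 3D printing at 5.00 per part with no setup,
# injection molding at 0.20 per part with a 2000 setup cost.
q = break_even_quantity(0, 5.00, 2000, 0.20)
print(round(q))  # below this quantity, print; above it, mold
```

The same one-line arithmetic is what quietly decides between a per-call LLM and a one-off piece of fixed code.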
The same principle applies to software solutions. When creating a software response to a given context, if there are fixed elements and treatments, there’s no reason to burn tokens and call the supercomputer. Instead, we should create fixed code for repetitive elements.
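A minimal sketch of what this looks like in practice: route recurring, deterministic requests to plain fixed code, and reserve the generative model for the open-ended remainder. The request format and the `call_llm` callable are hypothetical stand-ins, not any specific API:

```python
import re

KM_TO_MILES = 0.621371

def handle_request(text, call_llm):
    """Answer recurring patterns with cheap fixed code (zero tokens);
    fall back to the expensive generative model only when needed.
    `call_llm` is a hypothetical callable wrapping any LLM API."""
    # Recurring, fully deterministic need: handled by plain code.
    match = re.fullmatch(r"convert (\d+(?:\.\d+)?) km to miles",
                         text.strip().lower())
    if match:
        return f"{float(match.group(1)) * KM_TO_MILES:.2f} miles"
    # Open-ended need: only here do we call the "supercomputer".
    return call_llm(text)
```

The fixed branch costs nothing per call and never hallucinates; the variable branch keeps the flexibility for everything else.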
This multi-level thinking about technology means that genAI isn’t automatically the best solution for every task, even when it’s technically capable of handling it. The optimal choice depends on the frequency, complexity, and cost structure of each specific use case.
The Problem-First Mindset
Generally speaking, we can expect any problem to break down into a sum of needs that are more or less recurrent, more or less risky, with transition costs that are more or less high. The bias we currently have is that we’re overwhelmed with news about PRODUCTS, because new products like LLM tools can cover so much more than previous generations of software products. What we hear daily is a flow of news like “Now with genAI we can create podcasts, videos, write research papers, write code, etc.,” which is great, even if overwhelming sometimes.
How we interpret this is “AI can do everything”—a naturally induced generalization from product news. However, we may tend to forget to think about the problems we want to solve and the goals we may have, focusing more on what we can do with these capabilities.
I’m not going to say “Let’s stick to the old world,” but rather suggest we look at situations and challenges with a complete view of all the parameters we can, to provide a sound and smart solution. Even on the question of software solutions only, there is a wide range of questions to ask: “Traditional” software and code or Machine Learning algorithms? SLMs or LLMs? Secured hosting or web hosting? Edge computing or supercomputing? Server-side or client-side? Context-aware or context-free?
These many questions are not answered by obsession or repulsion towards a given technology but by clear understanding of the problem to solve, of the range of our intentions, and of existing legacy context if any.
II. The Human Paradox: Why Satisfaction Breeds Innovation
Humans are insatiable: things that would have been gifts for kings at some point in history are not even glanced at by the everyday person later on.
But here’s the fascinating pattern: even when we automate yesterday’s needs for incredibly cheap, making yesterday’s complex systems accessible to everyone, this very accessibility eliminates any competitive advantage. What happens next is predictable yet beautiful—people will inevitably create new combinations between automation and human elements, initially targeting niches that can afford them, then gradually becoming mainstream until the cycle repeats.
Consider the entertainment industry. Now that everyone has access to films from home (yesterday’s revolutionary cinema technology, now commoditized), we see the rise of immersive live experiences that blend cinema with theater and live acting—commanding premium prices. Similarly, now that people have unlimited access to music through streaming, concerts have become more spectacular and experiential than ever before.
The pattern extends everywhere. We have musical greeting cards that seemed magical when I was young—now they’re considered kitsch. We can play chess against computers far superior to any human level, with analysis that can improve our game, yet people still prefer playing against humans on platforms like chess.com. We have incredible gaming experiences available instantly, yet live Twitch streams draw massive audiences for the human element.
This is both the beauty and the trap of the hedonic treadmill. On one hand, it has pushed artistic and experiential innovations to constantly improve—concerts are more spectacular, live experiences more immersive, human connections more valued. On the other hand, we’ve lost the ability to appreciate incredible things that now seem simple: access to quality content from anywhere in the world, books, drones we can fly at home (now people want drone races against humans), or the chess example above.
This human insatiability creates a perpetual drive toward innovation and new combinations. It’s not just about having more—it’s about having something distinctly different from what everyone else can access. This force continuously pushes us to find new ways to combine the automated with the uniquely human, ensuring that technology adoption will always be multi-level rather than uniform.
III. The Persistence Principle: Each Format Has Irreplaceable Value
There’s another fascinating pattern that supports multi-level thinking: each technological format, regardless of age, maintains its own intrinsic value and beauty that newer technologies cannot fully replicate.
Consider the media evolution timeline. Radio has existed for over a century, yet it survived the arrival of television. Television persisted through the internet revolution. Static websites endure alongside social media. Social media platforms continue to thrive even as generative AI emerges. Each was supposed to replace the previous one, yet here we are, with all these formats coexisting.
This persistence isn’t just nostalgia or resistance to change—it’s recognition that each format offers something unique. When a new technology arrives, it doesn’t eliminate the value of previous ones; instead, it helps us understand both the upper and lower bounds of each format’s value proposition.
The upper bound effect: In the presence of television, radio loses some of its absolute value compared to when it was the primary technology. The lower bound effect: Radio still provides something distinctive—intimacy, portability, the theater of the mind—that persists across generations.
Take theater versus cinema. Cinema has existed for over 100 years, yet theater not only survives but thrives. Why? Because memorizing text and performing it live every night, while seemingly more “primitive” than filming once and distributing globally, offers irreplaceable qualities: the energy of live performance, the unique chemistry between audience and actors, the knowledge that each performance is unrepeatable.
This pattern suggests that rather than seeking the “one technology to rule them all,” we should recognize that different formats serve different human needs and contexts. The multi-level approach isn’t just economically smart—it’s also respectful of the unique value each technological format brings to human experience.
IV. The Resilience Imperative: Building Antifragile Systems
There’s a third compelling reason for multi-level thinking: resilience. Wise individuals and organizations maintain “less high-tech” processes as fallback options when their latest technology fails. This isn’t just about having backups—it’s about building antifragile systems.
Consider personal workflows. When I don’t have access to AI, I still have my traditional software and human skills. When I don’t have internet, I can work offline. When I don’t have my latest training equipment, I have my dumbbells—and even without those, I have bodyweight exercises. Each level down is not just cheaper (as we discussed earlier) but also more resilient and self-contained.
This principle extends far beyond individual use cases. In professional settings, we implement robotic processes but always maintain human fallback procedures for when automation fails. Customer service chatbots handle routine inquiries, but human agents take over for complex issues. Automated trading systems execute most transactions, but human traders monitor and can intervene.
The “less advanced” option isn’t just a cost consideration—it’s a strategic resilience layer. It represents skills, tools, and processes that are more portable, more reliable, and often more immediately accessible than their high-tech counterparts.
This resilience thinking naturally leads to multi-level implementations. Rather than asking “Should we use AI or humans?” the question becomes “How do we create graceful degradation from AI to human oversight to pure human execution?” The goal isn’t to avoid advanced technology but to build systems that can operate at multiple levels of technological sophistication depending on circumstances.
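Graceful degradation of this kind can be sketched as an ordered chain of handlers, tried from most to least technologically sophisticated. The handler names below are hypothetical placeholders for real systems:

```python
def with_graceful_degradation(levels, request):
    """Try each capability level in order, degrading to the next
    when one fails. `levels` is an ordered list of (name, handler)
    pairs, from most to least sophisticated."""
    for name, handler in levels:
        try:
            return name, handler(request)
        except Exception:
            continue  # this level is unavailable; degrade gracefully
    raise RuntimeError("all capability levels exhausted")

# Hypothetical handlers standing in for real systems:
def ai_service(req):
    raise ConnectionError("AI service down")  # simulate an outage

def rule_based(req):
    return f"rule-based answer to {req!r}"

def human_queue(req):
    return f"queued {req!r} for a human agent"

level, answer = with_graceful_degradation(
    [("genAI", ai_service), ("rules", rule_based), ("human", human_queue)],
    "refund status?",
)
print(level)  # the first level that actually responded
```

The point of the sketch is the ordering, not the handlers: the question shifts from “AI or humans?” to “which level answers when the one above it fails?”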
This approach acknowledges that technology, no matter how advanced, exists within contexts of uncertainty, failure, and change. Multi-level thinking isn’t just about optimization—it’s about building antifragile systems that get stronger when stressed.
V. The Scale Spectrum: Right Tool for Right Context
Here’s yet another compelling reason for multi-level thinking: diverse usage patterns naturally require different scales of technological solutions. The variety of contexts and needs means we’ll naturally find applications across the entire spectrum of technological sophistication.
Consider electricity as an analogy. Despite having super grids and massive power plants, we don’t power everything from centralized mega-infrastructure. We have phone batteries, AA batteries, small solar panels, car batteries, generators, and yes, the power grid. Each serves different contexts, constraints, and needs. The existence of nuclear power plants doesn’t eliminate the need for a flashlight battery.
The same principle applies to intelligence and computing. Even as centralized genAI becomes more powerful and accessible, we’ll still need the full spectrum: edge computing, local processing, traditional software, and human intelligence. This isn’t inefficiency—it’s optimal resource allocation.
Privacy considerations alone ensure this diversity persists. Some organizations will prefer private servers with less computational power over connecting to major cloud providers. Certain use cases will require local processing for confidentiality reasons. Sometimes you want to discuss sensitive matters with humans specifically to maintain non-cloud confidentiality.
Performance and availability also drive this diversity. Smartphone processing power and local computing capabilities will continue to grow. For many tasks, local processing will be faster, more reliable, and more responsive than cloud-based solutions. We don’t always need mega-computational power—sometimes good enough, fast, and local is superior.
This natural scaling means that rather than convergence toward a single solution, we’ll see an ecosystem where different scales of intelligence and automation serve different contexts. The smartphone in your pocket, the edge computing device, the local server, the cloud service, and the human expert each have their optimal use cases.
VI. The Knowledge Frontier: Where Data Meets the Unknowable
Finally, there’s a fundamental limitation that ensures multi-level thinking will always be relevant: much of the world’s most valuable information doesn’t exist in accessible data form, and complexity is often fractal in nature.
Generative AI achieves remarkable pattern matching by processing incredible masses of data, approaching something that resembles science fiction. But this strength reveals a critical blind spot: in many use cases, the information simply doesn’t exist as data, or isn’t accessible under conditions that work for generative AI (which increases security costs significantly).
Consider my work in executive coaching and organizational consulting. No document on company servers contains the unofficial information: “Person X wants that position, they took Person Y’s feedback poorly, they’re plotting or hoping for something else, they doubt this initiative, they’re hoping to leave, they have a secret project.” This information exists in minds, in gut feelings, in forms of storage that are neither accessible nor intended to be accessible.
Sometimes even the unconscious mind is so powerful that the person themselves couldn’t formalize or express this information in data form, even with friendly and simple capture tools like text, microphones, or cameras. The richness of human intuition, political awareness, and emotional intelligence often remains locked away from any digitization process.
Then there’s the fractal nature of complexity. I could become passionate about the keyboard I’m typing on right now. I could examine each key, disassemble them, study them under a microscope, run millions of experiments, drill down into the infinitely small. But complexity is fractal not just in size—it’s fractal in the variety of parameters I could explore: how this keyboard fits in this space, how the mouse and keyboard create good ergonomics, whether the colors work well together, whether they need cleaning. I could ask infinite questions about something as mundane as a keyboard and mouse. What then about everything else?
This fractal nature of reality means that even infinite computing power couldn’t capture every relevant detail about any given situation. There will always be contexts where human insight, intuition, and the ability to navigate incomplete information provide irreplaceable value.
The data-driven approach is powerful but inherently limited by what can be captured, digitized, and made accessible. Multi-level thinking acknowledges these limits and builds systems that can leverage both the power of large-scale pattern matching and the irreplaceable value of human intelligence operating in data-sparse, complex, and fractal environments.
The multi-level perspective: Where does this lead us?
So what’s the practical takeaway from all this? Here are twelve practical consequences for how we should approach technology adoption and implementation:
1. Stay calm about AI transformation complexity
First, let’s acknowledge the reality: implementing AI transformation plans is incredibly complex, and I discover new aspects daily across my personal projects, professional work, and studies. The multi-level approach suggests we should resist the urge to solve everything at once and instead build incrementally.
2. Understand the complete problem and evolving human capabilities
Consider the full problem with its vortex of complexity, and recognize that human capabilities are constantly evolving. Rather than asking “Will AI replace humans?” ask “How are human capabilities changing, and how can we design systems that leverage these evolving strengths?”
3. Dare to think multi-level
Actively ask: “How can I combine different layers and generations of technology in a holistic solution that is resilient, cost-efficient, imaginable, maximally reusable, and ecology-efficient?” This requires courage to move beyond single-solution thinking.
4. Distinguish fixed vs. variable context elements
Some things won’t change “in your lifetime” or “during your project’s lifetime,” even if the timeframe is very long. It’s valuable to assess the probable lifespan of certain assumptions. Build your solutions with awareness of what’s likely to remain stable versus what will evolve.
5. Maintain excitement for creative problem-solving
Don’t lose sight of the excitement of creating and solving problems previously ignored because they seemed too complex. Multi-level thinking opens up new solution spaces that were previously unimaginable.
6. Recognize the value of accumulated efforts
Take time to acknowledge the value of things and efforts accumulated over generations. As Newton said, “we stand on the shoulders of giants.” Each technological layer represents accumulated human wisdom and effort worthy of respect.
7. Avoid absolutist discourse
As Henri Poincaré wrote in “La Science et l’Hypothèse” (1902): “Douter de tout ou tout croire, ce sont deux solutions également commodes qui l’une et l’autre nous dispensent de réfléchir” (To doubt everything or to believe everything, these are two equally convenient solutions that both spare us from thinking). Avoid absolute statements about technology’s role and embrace nuanced thinking.
8. Separate data from inference
Develop critical thinking skills and continue building individual and collective decision-making capacity. It will be even more complex to decide on combinations of N+1 technological levels than N levels. Understanding where information comes from and how conclusions are drawn becomes crucial.
9. Dare to imagine and create
Don’t limit yourself to a given world—or do so consciously. Art provides incredible insights and inspiring creativity: mixes between music, theater, writing, happenings, film, sculpture, etc., are multi-level combinations that create incredible experiences for human insatiability.
10. Help others navigate technological choices wisely
Do your best to help people not just use the right technological level, but also know how to value existing technologies and combine them with the rest. It’s not just “use or don’t use” but also maintaining psychological continuity, rethinking skills differently to recreate or reuse them, and inventing new combinations.
I believe much more in intelligent recombination than in pure replacement, which would keep us in an old world just with automation but nothing new in experience. The multi-level approach requires both technical literacy and wisdom about when different tools are appropriate, combined with creativity in finding new ways to blend human capabilities with technological power.
11. Beware of cognitive unload for its own sake
Our lazy nature pushes us toward cognitive unload, but we must be careful. This is especially concerning in our era where “brain rot” can become a trend (hopefully not general) on social media, meaning that the nobility of cognitive development may be challenged by the easiness of instant pleasure. Automation should help us get rid of mundane tasks so we can learn other things and continue developing—not just atrophy alone in our chairs. If we have access to a very cheap tutor, let’s use it to practice learning better, not to do homework for us. This is why I deeply believe in creating AI products that develop critical decision-making skills rather than just automating task execution. There’s a whole world of decisions to make in the coming years.
12. Dare to question and embrace reality
Question everything. Look at phenomena, systems, philosophize. Identify our limits and conditioning in our relationship with reality. Seek the flame of questioning, of novelty, of ancient wisdom that resurfaces. Embrace reality in all its complexity.
The multi-level perspective isn’t just a technology strategy—it’s a framework for navigating complexity with wisdom, creativity, and respect for both human capabilities and technological potential.
Acknowledgments
Thank you to everyone who has contributed to shaping these thoughts through conversations, challenges, and shared experiences. Special thanks to the open-source community, researchers, and thinkers who continue to explore the intersection of human intelligence and artificial intelligence with nuance and wisdom.
Questions for Reflection
As we close this exploration, consider these questions:
- In your own field, what would a multi-level approach to technology adoption look like?
- How can you maintain the excitement of innovation while respecting the wisdom embedded in existing systems?
- What combination of human skills and technological capabilities would create the most value for the challenges you face?
- How might we design AI systems that enhance rather than replace human cognitive development?
- What aspects of your work or life would benefit from “graceful degradation” between different levels of technological sophistication?
The future isn’t about choosing between human and artificial intelligence—it’s about creating intelligent combinations that honor both.