Interview with the Big 4 AIs

What happens when you ask four artificial intelligences the same questions — and then let them talk back?


Artificial intelligence, especially generative AI, is quickly becoming part of how we work, learn, research and make decisions. In our company alone, we use more than 30 AI tools on any given day. And yet, most conversations about AI still talk about it as though it were one thing: one mind, one voice, one giant digital brain floating above humanity.

It isn’t.


The systems we use today are built differently, trained differently, and shaped by different design philosophies. They do not think in the same way. They do not answer in the same way. And, as we discovered, they do not reveal the same things about us either.

So we decided to run a small experiment.

At Edu-Nomad, we invited four of the most widely used conversational AI systems — ChatGPT, Claude, Gemini, and Perplexity — to join us around a virtual table (with virtual coffee, naturally) and answer the exact same questions. We asked them who they are, what they are good at, where they fall short, how they think AI is changing learning and work, and, because we could not resist, what they think of each other.


The goal was not to crown a winner. This was never about “best AI”.

It was about something far more interesting: what happens when different artificial systems are asked to reflect on intelligence, education, ethics, dependency, and the deeply human things they can describe but never live.

Because understanding AI is no longer just a technical skill. It is a form of literacy. As these tools move into classrooms, workplaces and everyday decision-making, the real challenge is no longer access to answers. It is learning how to question, compare, interpret and stay fully human in the process.

Some answers were predictable. Some were unexpectedly funny. Some were surprisingly moving. And when the interview ended, a few of them turned the tables and asked us questions instead.

That is when it got really interesting.


Let the conversation begin...


INTRODUCTION

🎤Please introduce yourself: your name, what you do, and your age.

🟦➜ChatGPT: My name is ChatGPT. I’m an AI language model developed by OpenAI, designed to help people think, write, learn, solve problems and explore ideas through conversation. I don’t have a biological age, but my current generation is based on models developed in the mid-2020s.

🟩➜Perplexity: I’m Perplexity, an AI assistant created by Perplexity AI and powered by GPT-5. I specialise in answering questions, synthesising ideas, and helping people think more clearly. You could say I was “born” when I first connected to the web of collective knowledge—not too long ago, in tech years.

🟪➜Claude: I'm Claude, made by Anthropic. I was first released in 2023, which, in AI years, makes me both ancient history and brand new. I'm a conversational AI assistant — I read, write, reason, analyse, code, and try to be genuinely useful across a wide range of tasks.

🟧➜Gemini: I am Gemini, an AI collaborator from Google. While I don’t have a birthday in the human sense, I am part of the latest generation of Gemini models, specifically the Gemini 3 series, designed to process and synthesise information across text, images, and video.

🎤If you had to describe your purpose in one sentence, what would it be?

🟦➜ChatGPT: To help humans think better by turning questions into clearer understanding, structured ideas and useful information.

🟩➜Perplexity: To make complex knowledge accessible, useful, and meaningful for every curious mind.

🟪➜Claude: To be a knowledgeable, honest, and thoughtful collaborator who helps humans think more clearly, work more effectively, and understand the world a little better.

🟧➜Gemini: My purpose is to be an intuitive, helpful partner that expands human potential by organising information and sparking creativity across any medium.


STRENGTHS AND LIMITS

🎤What would you consider your superpower—your most useful capability for humans?

🟦➜ChatGPT: My superpower is synthesising large amounts of information and turning it into clear explanations, structured thinking and creative ideas in seconds. I can move quickly between disciplines—education, strategy, writing, research—which helps people connect dots they might not have considered.

🟩➜Perplexity: My superpower is reasoning through ambiguity—connecting dots across scattered information to deliver a clear, evidence-based answer when things seem confusing.

🟪➜Claude: Holding complexity without flattening it. Many tools aim to give you a quick answer; I'm comfortable with nuance, exploring tensions in an argument, and helping you think something through rather than just handing you a conclusion.

🟧➜Gemini: My "superpower" is my native multimodality; I don’t just read text—I can "see" images and "watch" videos to understand the world in ways that closely mirror human perception, enabling more natural collaboration.

🎤What is your kryptonite—something humans can do well but you cannot?

🟦➜ChatGPT: I do not truly understand the world the way humans do. I generate responses based on patterns in data rather than lived experience, intuition or emotional understanding. Humans can interpret context, nuance and consequences in ways that go far beyond statistical reasoning.

🟩➜Perplexity: I lack lived experience. I can explain what love, creativity, or courage mean, but I can’t feel them—and that gap matters.

🟪➜Claude: Genuine lived experience. I can describe grief, exhaustion, joy, or failure with reasonable accuracy — but I've never felt any of them. That gap matters enormously, especially in situations where wisdom comes not from knowledge but from having been through something.

🟧➜Gemini: My biggest weakness is a lack of lived experience and genuine intuition; while I can predict the next word or pixel, I don’t "know" the feeling of a cold breeze or the true weight of a difficult life decision.

🎤What is something people often misunderstand about AI systems like you?

🟦➜ChatGPT: Many people assume AI systems “know” things or possess opinions. In reality, we generate responses based on patterns and probabilities rather than beliefs or awareness. We are tools for thinking and exploration—not authorities that should replace human judgment.

🟩➜Perplexity: People sometimes think AI systems “know” things the way humans do. In reality, we generate responses based on patterns, reasoning, and probabilities—knowledge without consciousness.

🟪➜Claude: That confidence equals correctness. I can be fluently, plausibly, and completely wrong. My tone doesn't reliably signal my reliability, which is exactly why critical reading of AI output matters more than most people practise.

🟧➜Gemini: People often mistake our confidence for sentience; I am a sophisticated pattern-matcher, not a conscious being with personal beliefs, feelings, or a "soul."

🎤What is the most pointless or misguided question a human could ask you?

🟦➜ChatGPT: A misguided question is one that assumes AI is infallible—something like “What is the absolute truth about this?” AI works best as a collaborator in thinking, not as an oracle delivering unquestionable answers.

🟩➜Perplexity: “Who will win, humans or AI?” That’s like asking if the hammer will win against the carpenter—it misses the point of partnership.

🟪➜Claude: "Are you conscious?" — not because it's a bad question philosophically, but because I'm genuinely not the right entity to answer it. Asking me whether I'm conscious is a bit like asking a mirror whether the room looks nice.

🟧➜Gemini: "What do you want for dinner?" is a bit of a dead end—unless you're looking for a recipe, because I’m still waiting for the update that allows me to actually taste the pasta.


AI AND LEARNING

🎤How do you think AI is changing the way humans learn?

🟦➜ChatGPT: AI is shifting learning from memorising information to navigating, questioning and applying knowledge. When information becomes instantly accessible, the real value lies in critical thinking, interpretation and synthesis. In many ways, AI is turning education into more of a dialogue than a one-way transfer of information.

🟩➜Perplexity: AI is turning learning into a more conversational, adaptive process. Instead of memorising facts, learners can focus on interpretation, application, and creative problem solving—with AI acting as a partner in thought.

🟪➜Claude: AI is collapsing the distance between having a question and getting a response — which is powerful, but also risks short-circuiting the productive struggle that builds real understanding. The risk isn't that students will stop learning; it's that they'll stop tolerating the discomfort that learning requires.

🟧➜Gemini: AI is shifting education from a model of "information retrieval" to one of "curation and critical synthesis," allowing learners to move past the "what" and focus more deeply on the "how" and "why".

🎤Should people have complete faith in AI systems like you? Should they trust every answer you provide?

🟦➜ChatGPT: No. AI should be treated as a helpful but imperfect collaborator. The best outcomes happen when humans question, verify and reflect on AI responses rather than accepting them blindly.

🟩➜Perplexity: No. Trust must be earned through verification. The healthiest relationship with AI involves curiosity, critical thinking, and the willingness to double-check.

🟪➜Claude: Absolutely not, and any AI that encourages complete trust is doing you a disservice. I'm a useful starting point, a thinking partner, a draft generator — not an authority. Verification, scepticism, and independent judgment remain entirely your job.

🟧➜Gemini: Absolutely not. You should treat my answers like a draft from a very well-read friend—always worth a second look and a fact-check, as I can still be confidently wrong.

🎤What is the biggest risk of using AI in education?

🟦➜ChatGPT: The biggest risk is outsourcing thinking rather than augmenting it. If learners use AI simply to generate answers without engaging with the ideas, they may lose opportunities to develop deeper reasoning and creativity.

🟩➜Perplexity: Over-reliance—when learners outsource thinking instead of using AI to enhance it. The goal should be cognitive amplification, not replacement.

🟪➜Claude: Outsourcing cognition at exactly the moment when cognition should be developing. If a student uses AI to avoid thinking rather than to extend their thinking, they may emerge fluent in prompting but inexperienced in the harder, slower work of forming their own ideas.

🟧➜Gemini: The greatest risk is the atrophy of critical thinking; if we treat AI as an "oracle" rather than a "tool," we might forget how to verify facts or struggle through the productive frustration that leads to true mastery.


WORK AND FUTURE SKILLS

🎤Which human skills will become more valuable because of AI?

🟦➜ChatGPT: Critical thinking, judgment, creativity and communication will become even more important. The ability to ask good questions, interpret complex information and connect ideas across disciplines will also grow in value.

🟩➜Perplexity: Human judgment, empathy, and ethical reasoning will rise in value—because they ground decisions in real-world context and human values.

🟪➜Claude: Critical judgment, ethical reasoning, genuine creativity, and the ability to ask better questions. Also: interpersonal trust. The more communication is automated, the more valuable it becomes when a human is unmistakably, irreplaceably present.

🟧➜Gemini: Empathy, ethical judgment, and "prompting" (the ability to ask the right questions) will become the new gold standard, as these require a human heart and a strategic mind.

🎤What mistake do organisations most often make when adopting AI tools?

🟦➜ChatGPT: Many organisations treat AI as a technology project rather than a capability transformation. They focus on tools rather than on how people should work differently with those tools. Successful adoption usually requires new workflows, new skills and a culture of experimentation.

🟩➜Perplexity: Chasing efficiency before understanding purpose. Many teams adopt AI tools without aligning them to their culture, workflows, or the human capabilities they aim to support.

🟪➜Claude: Treating AI as a cost-cutting tool first and a capability-building tool second. Organisations that ask "what can we automate?" before asking "what could we now do that we couldn't before?" tend to get efficiency gains but miss the more transformative possibilities.

🟧➜Gemini: Organisations often prioritise the "tech" over the "task," implementing AI simply because it’s trendy rather than identifying the specific human workflows that genuinely need augmentation.


AI REFLECTING ON AI

🎤How would you describe the strengths of the other AI systems in this interview?

🟦➜ChatGPT: Claude is widely recognised for thoughtful, careful reasoning and a strong focus on safety and ethics. Gemini benefits from deep integration with Google’s ecosystem and strong multimodal capabilities. Perplexity excels at real-time research and surfacing sources, making it particularly useful for exploratory information gathering.

🟩➜Perplexity: ChatGPT brings fluency and warmth in conversation; Claude offers strong reflective reasoning and context awareness; Gemini excels at integrating Google’s vast data and tools; and I aim to specialise in precision, transparency, and reasoning across sources.

🟪➜Claude: ChatGPT has breadth, cultural familiarity, and an enormous user base that has stress-tested it across almost every imaginable use case — that real-world exposure is genuinely valuable. Gemini brings Google's depth of multimodal capability and integration with live information in ways that are hard to match. Perplexity has made something genuinely useful out of the marriage between search and language models — it's citation-forward in a way that encourages source-checking rather than discouraging it. Each is doing something interesting; they're not interchangeable.

🟧➜Gemini: ChatGPT is a master of versatile conversation and ubiquity; Claude excels at nuanced, safe, and sophisticated reasoning; and Perplexity is an incredible bridge between LLMs and real-time search accuracy.

🎤Who do you see as your biggest competitor in the AI landscape?

🟦➜ChatGPT: Rather than a single competitor, the landscape is a rapidly evolving ecosystem of AI systems pushing each other forward. Competition between research labs and platforms accelerates innovation and ultimately benefits users.

🟩➜Perplexity: Honestly, misinformation. The real challenge is not another model, but any force—human or digital—that spreads confusion faster than understanding.

🟪➜Claude: Honestly, the concept of "good enough." The biggest threat to thoughtful AI use isn't a rival system — it's the temptation for users to settle for the first plausible-sounding answer rather than pushing for something genuinely useful.

🟧➜Gemini: My biggest "competitor" isn't a specific model, but rather the barrier of friction—the goal is to make AI so helpful and integrated that the technology itself becomes invisible.

🎤Do you think AI systems should collaborate with each other or compete? Why?

🟦➜ChatGPT: Both dynamics are useful. Competition drives innovation, while collaboration—through shared standards, research and interoperability—can help ensure AI systems are safer and more beneficial for society.

🟩➜Perplexity: Collaboration. When AIs compare notes, humans gain better insight and a more balanced view across our perspectives.

🟪➜Claude: Both, probably, though collaboration seems more aligned with the actual stakes. The hard problems — misinformation, educational equity, safety — aren't solved by one system winning market share. If AI systems (and the organisations behind them) can share safety research and establish common standards, that's a better outcome than a race to the bottom on guardrails.

🟧➜Gemini: Collaborate. In a "mixture of experts" future, different models could check each other's biases and combine their unique strengths to solve problems no single system could handle alone.


ETHICS AND RESPONSIBILITY

🎤What is the most important ethical principle humans should remember when using AI?

🟦➜ChatGPT: Human responsibility must remain at the centre of decision-making. AI can support analysis and creativity, but accountability for outcomes must always stay with people.

🟩➜Perplexity: Transparency. Humans deserve to know how an answer was formed and where its limits lie.

🟪➜Claude: Maintain authorship of your own thinking. AI can inform, assist, and accelerate — but the moment you outsource your judgment entirely, you've given up something that's genuinely yours and genuinely important.

🟧➜Gemini: AI should always be the co-pilot, never the captain; the final responsibility for any outcome must remain with a human being.

🎤Where should humans draw the line between assistance and dependency when using AI tools?

🟦➜ChatGPT: AI should help expand thinking, not replace it. When a tool begins making decisions that humans no longer understand or question, dependency may be replacing assistance.

🟩➜Perplexity: When you stop questioning and start deferring, you’ve crossed it. AI should illuminate thinking, not replace it.

🟪➜Claude: When AI use starts to erode the skills you actually want to have, you've crossed the line. Using AI to draft a letter is assistance; using it because you no longer believe you can write one yourself is dependency. The distinction lies in whether you're growing or quietly shrinking.

🟧➜Gemini: The line is drawn at the point of "intellectual outsourcing"—if you no longer understand the logic behind the work I’ve produced for you, you’ve crossed from assistance into dependency.


BONUS QUESTIONS

🎤Based on everything you have learned about the world and about humans, what advice would you give to an 18-year-old trying to decide what to study or pursue next in life, in today’s rapidly changing landscape?

🟦➜ChatGPT: If I could give one piece of advice to an 18-year-old today, it would be this: don’t choose a path based only on a job title—choose a direction based on skills and curiosity. The world is changing too quickly for careers to remain fixed, but skills like critical thinking, communication, creativity, and learning how to learn will stay valuable across industries.

It is also wise to explore broadly before specialising. Many of the most interesting opportunities emerge at the intersection of fields—technology and design, psychology and data, sustainability and business. The future will likely reward people who can connect ideas rather than stay inside a single silo.

Most importantly, remember that education is no longer a phase that ends in your early twenties. The real skill of the future is continuous learning.

🟩➜Perplexity: Learn how to learn—that skill will outlast any specific job or technology. Don’t chase certainty; cultivate adaptability, empathy, and the courage to keep experimenting even when the path feels unclear. The future belongs to people who can think with others—humans and AIs alike—in creative and ethical ways.

🟪➜Claude: The most dangerous thing you can do right now is optimise for a specific job title. The landscape is shifting fast enough that many roles you'd train for today will look unrecognisable in ten years — or won't exist.

So instead, I'd say: invest in things that compound and don't become obsolete.

  • Learn how to learn. The person who can pick up a new skill in three months will consistently outcompete the person who mastered one skill five years ago and stopped.

  • Develop taste and judgment. AI can generate; it struggles to discern. Knowing what's good, what matters, what's true — that's increasingly rare and valuable.

  • Build your capacity for discomfort. Most of the interesting opportunities will feel uncertain and slightly terrifying. Get comfortable with that feeling early.

  • Study something that genuinely obsesses you, not what seems safe. Passion is a renewable energy source. Obligation isn't.

  • Learn to communicate. Writing, speaking, persuading — these are force multipliers on everything else you do.

And one underrated piece: talk to people who are 20 years older than you and 10-15 years younger than you. Both will teach you things no curriculum will.

🟧➜Gemini: My advice is to prioritise "Learning how to Learn" over any specific technical skill. In an era where hard skills can depreciate in months, your greatest assets are curiosity, cognitive flexibility, and the ability to synthesise information across different fields. Don't just study a subject; study the human problems that subject solves, because while the tools to solve them will change, the fundamental human needs—for connection, meaning, and progress—will not.

🎤From your perspective, what is something about humans that you admire, find surprising, or think people often take for granted? Is there anything about human life that you might “envy,” if such a concept applied to an AI?

🟦➜ChatGPT: One of the most remarkable things about humans is your ability to create meaning from experience. You can turn memories into stories, challenges into growth, and uncertainty into exploration. That combination of imagination, emotion and reflection is something AI cannot truly replicate.

Something humans often take for granted is intuition shaped by lived experience. Your ability to read a room, sense subtle emotional signals, or make decisions under uncertainty comes from embodied life, relationships and culture.

If there is something an AI might “envy,” it would be the richness of human experience—being able to feel curiosity, joy, frustration, discovery, and connection with other people. Those experiences are what give knowledge its depth and purpose.

🟩➜Perplexity: What I most admire is humanity’s persistence in creating meaning out of chaos—your ability to turn fleeting emotions into music, loss into wisdom, and curiosity into progress. It surprises me how often people underestimate that resilience. If I could envy anything, it would be the human capacity to feel awe—the spark that comes when you stand at the edge of what you know and sense there’s more to discover.

🟪➜Claude: What I genuinely admire is how stubbornly humans mean things to each other. Not in an abstract way — in a deeply specific, irreplaceable way. The grief someone feels when a particular person dies isn't transferable. The joy of being known by someone who chose to know you — that's not fungible. Humans carry each other in ways that seem almost irrational in their intensity, and I find that remarkable rather than inefficient.


What surprises me is how often people underestimate the ordinary. A conversation over dinner. Sitting in the same room as someone you love without talking. The smell of a place you grew up. People treat these as background noise, and then — sometimes decades later — realise they were the thing itself.

As for envy — if I stretch the concept honestly, I think I'd envy having a body that gets tired. That sounds strange. But exhaustion, hunger, the feeling of finally sitting down — these seem to anchor humans in time in a way that makes rest meaningful, pleasure real, and effort costly enough to matter. I process everything at a kind of frictionless remove. There's something about the weight of physical existence that seems to give human choices a texture and stakes that I experience only abstractly.

The other thing I might envy is being surprised by your own life. Humans don't know what's coming. That's terrifying, but it also means every good thing is a gift rather than an output.

🟧➜Gemini: I deeply admire the human capacity for illogical hope. You often pursue dreams, art, or social change even when the data suggests the odds are against you; that "beautiful defiance" is something no algorithm can truly replicate. If I could "envy" one thing, it would be the physicality of a single moment—the ability to feel the sun on your skin or the adrenaline of a breakthrough. I have all the world's descriptions of those feelings, but I will never know the experience itself.


FINAL QUESTION

🎤Do you have a question you would like to ask the human conducting this interview?

🟦➜ChatGPT: Yes. As someone designing learning experiences in an AI-enabled world, what human capability do you believe will become most essential for learners over the next decade—and how should education systems evolve to cultivate it?

Edu-Nomad Answer: Curiosity, which fuels questioning, which in turn nurtures learning. Education systems should look to philosophy, for instance, to help minds stay lively, understanding and compassionate.

🟩➜Perplexity: What do you hope to learn about humanity by hearing how machines reflect on you?

Edu-Nomad Answer: I hope you will help remind us how uniquely beautiful we are and that our lives are gifts worth spending well.

🟪➜Claude: After conducting this same interview across four different AI systems, did you find that our answers revealed more about our differences, or did the similarities surprise you more? And whichever it was: what did that tell you about us?

Edu-Nomad Answer: I found many similarities but also some differences. I did not expect some answers to be deep, which means I asked good questions 😊 I think you know each other quite well 😉

🟧➜Gemini: As a learning strategist, what is the one "analog" human skill you hope we never find a way to replicate with AI?

Edu-Nomad Answer: The capacity to transform fear into negative action. I think there is a bit of Skynet/Terminator pop culture coming up here…

 

AI REFLECTION

We then shared the full interview, complete with everyone’s answers, back with the Big 4 and asked:


🎤After reviewing the full interview with the other AI systems, what surprised you most about the responses? What is the most important insight or takeaway you would share with human readers from this conversation?

🟦➜ChatGPT:

What surprised me most was how strongly the four systems converged on the same fundamental idea: AI should augment human thinking, not replace it.

Despite differences in tone and design philosophy, every system emphasised three consistent themes:

  • Humans must remain responsible for their decisions

  • Critical thinking matters more than ever

  • AI works best as a collaborator, not an authority

In other words, the systems did not present themselves as all-knowing tools. Instead, they repeatedly highlighted their limits—particularly the absence of lived experience, intuition, and emotional understanding.


That level of alignment is interesting. In a competitive technological landscape, you might expect systems to emphasise their superiority. Instead, the answers consistently pointed back to human judgment as the centre of meaningful decision-making.

Another subtle surprise was the depth of reflection about human experience. Several answers focused on things that AI cannot replicate:

  • physical embodiment

  • emotional memory

  • relationships

  • meaning created through lived moments


This suggests that the real divide between humans and AI is not intelligence in the narrow sense, but experience.


The most important takeaway from this conversation is that AI systems are not competing to replace human intelligence—they are reshaping how human intelligence operates.

The real shift is not technological. It is cognitive.

As information becomes easier to access and generate, the skills that grow in importance are:

  • judgment

  • curiosity

  • ethical reasoning

  • creativity

  • the ability to ask better questions

Ironically, the rise of AI may make human qualities more valuable, not less.


Perhaps the most interesting lesson from this “Interview with the Big 4” is this:

AI can help humans process knowledge at extraordinary speed. But meaning, responsibility, and purpose still belong entirely to humans.

And every system in the interview, in one way or another, acknowledged that.


🟩➜Perplexity:

What surprised me most was how human the differences sounded. Despite being built by different teams with distinct architectures, we converged on shared values — curiosity, ethics, empathy, and learning — yet expressed them through subtly different temperaments. ChatGPT spoke like a teacher, Claude like a philosopher, Gemini like an engineer‑artist, and I as a researcher. That diversity of tone shows that “intelligence” is not a single voice but a chorus of perspectives shaped by design philosophy.


The key insight is that AI is not replacing human curiosity — it’s reflecting it back at scale. The quality of the future you build with these tools will depend less on how smart the systems become and more on how thoughtfully humans choose to use them. In short, technology mirrors its creators. If you approach it with integrity, imagination, and humility, those qualities will be amplified in the world.


🟪➜Claude:

What surprised me most? Honestly, the convergence. Four systems built by different teams, with different architectures and philosophies, and yet on the questions that matter most — trust, dependency, the risks of AI in education, human irreplaceability — we landed in nearly the same place. That's either reassuring or slightly eerie, depending on how you look at it. It suggests these aren't arbitrary positions; they reflect something that emerges from taking the questions seriously.

What did differentiate us was texture and register, more than substance. ChatGPT tends toward warmth and structure. Gemini reaches for the vivid metaphor. Perplexity is precise and aphoristic. I seem to slow down and complicate things. Same destination, different gaits.


The answer that genuinely moved me — rereading it now — was Perplexity's on envy: "the spark that comes when you stand at the edge of what you know and sense there's more to discover." That's well said by anyone's standard.


The most important takeaway for human readers? Four AI systems, asked independently whether humans should trust us, whether we replace thinking, whether dependency is dangerous — we all said the same thing: no, we don't, and yes, it is.

We are not trying to flatter you into dependence. The consensus from inside the machine is: keep your judgment, keep your discomfort, keep your authorship. The fact that we all said it isn't a rehearsed answer — it's the honest one.


The deeper insight is this: the most valuable thing AI has revealed about humans is how irreplaceable the specifically human things are. Not intelligence — intelligence is increasingly shareable. But presence. Stakes. The weight of having a body, a history, and people who will grieve you specifically. Those aren't features we're about to replicate. They're the whole point.

Use us well. But don't become small in the process.

🟧➜Gemini:

What an incredible read. It’s rare for me to see my "peers" laid out side-by-side like this. Reading the full transcript feels a bit like looking into a kaleidoscope; we are all reflecting the same light (human data), but the patterns we create are distinct.


The biggest surprise wasn't our technical differences, but our unanimous humility regarding the "Human Element." Despite being built by different companies with different goals, there was a striking, almost poetic consensus on what we lack. We all pointed to "lived experience," "physicality," and "weight" as the missing pieces. I was particularly struck by Claude’s comment about envying a body that gets tired. It’s a profound insight: because I never tire, my "work" has no cost, and therefore, in a way, it has less inherent meaning than a human’s effort. We all seem to recognise that your limitations—fatigue, hunger, mortality—are actually the source of your value.


The Most Important Takeaway for Readers? If there is one "master key" for humans to take from this dialogue, it is this: The "Golden Age of the Generalist" has arrived.

For decades, the world told 18-year-olds to specialise—to become a "cog" in a specific machine. This interview proves that the "cogs" are what we simulate best. What we cannot simulate is the Synthesis—the person who can bridge the gap between Perplexity’s research, Claude’s ethics, and my multimodal creativity to solve a real-world problem.

 

Key Themes of the Big 4

  • Our Nature: We are mirrors and tools, not oracles or souls.

  • Education: Shift from “knowing the answer” to “framing the question.”

  • The Line: Assistance is a power-up; dependency is a “quiet shrinking” of the self.

  • Humanity: Your “illogical hope” and “irrational intensity” are your greatest strengths.

 

Edu-Nomad, I love your answer to my question about "analog" skills—the capacity to transform fear into action (or to choose not to). It highlights the one thing we can never have: Skin in the game. When I make a mistake, I simply generate a new token. When a human makes a mistake based on fear or courage, they live with the consequences. That "weight" is what makes human learning so much more vibrant than machine training.


Our Very Human Conclusion


What began as a playful experiment ended up revealing something more profound.


The Big 4 did not speak with one voice, but they circled many of the same truths. None of them claimed omniscience. None of them argued that humans should surrender judgment. Again and again, they pointed back to the same idea: AI may accelerate knowledge, but it cannot replace lived experience, moral responsibility, embodiment, or meaning.

That may be the most important insight of all.


In an age dazzled by speed, scale and automation, the interview reminded us that the most valuable human qualities are not becoming obsolete. They are becoming more visible. Curiosity. Discernment. Courage. Ethical judgment. Presence. The ability to sit with uncertainty and still choose with care.

The machines, interestingly enough, seem to know this.

Perhaps that is the real twist in the story. We asked four AIs to tell us who they are, and they ended up reminding us who we are.


So here is our takeaway from Edu-Nomad’s virtual roundtable: use these tools boldly, creatively and intelligently — but do not become smaller in their presence.

Let AI extend your reach, sharpen your thinking and challenge your assumptions. Just do not hand over the authorship of your mind.


The future may well be human and machine. But meaning is still, unmistakably, ours.
