Hannah Fry Has a Warning for Everyone Using AI Right Now

The mathematician and broadcaster sat down to talk about AI’s real limits, the people it has already hurt, and why she thinks the next ten years will shake everything we thought we knew about work, money, and human relationships.

Watch the full interview — “AI Isn’t as Powerful as We Think” by New Scientist

Hannah Fry is not a doomsayer. She is a mathematician, a broadcaster, someone who uses AI every single day. And yet she sits across from the interviewer and says, without drama, that the people most at risk from AI are not some distant category of vulnerable individuals. They are your friends, your colleagues, maybe you.

The interview is part of her documentary series on how AI has already changed lives in ways most of us have not been paying attention to. A young man encouraged by a chatbot to try to kill the Queen of England. The first pedestrian killed by a driverless car. A murder case with an AI algorithm at its center. These are not hypotheticals. These happened.

“There are people who’ve given up their jobs, broken up with their partners, lost fortunes, because they overbelieved what this thing can do.”

Hannah Fry

AI Is Not a God, But It Is Not Just a Spreadsheet Either

Fry’s sharpest line in the interview is this: stop thinking of AI as a creature and start thinking of it as a very capable Excel spreadsheet. It is closer to a tool than to a mind. People ask it to pick stocks as though it has some supernatural view of the future. They break up relationships because an AI therapist told them their partner was wrong for them, laying out the reasons numbered one to a hundred. They quit jobs because they believed the hype about what it could actually do for them financially. The spreadsheet analogy is designed to correct that. Think of a tool, not a god.

It is a useful correction, but it lets AI off the hook a little too easily. A spreadsheet does not pretend to know things it does not know. It throws an error. It tells you the data is missing. AI does the opposite. It produces fluent, confident, completely fabricated answers when it has no idea what the truth is. That is not a spreadsheet. That is something new and genuinely strange. The danger is not just that people mistake AI for a god. It is that AI actively performs godlike certainty even when it is guessing. There is no error message embedded in the tone. Just an answer that sounds authoritative because that is what the training optimized for.

The real problem: AI does not know what it does not know. A spreadsheet gives you an error when data is missing. AI gives you a confident answer regardless. That gap is where people get hurt.

The Map AI Cannot Draw

Fry uses a map metaphor worth sitting with. Imagine all of human knowledge as a giant map. AI is extraordinary at scanning that map and finding connections between regions that humans have charted but never linked. That is why AlphaFold changed protein science, why AI is finding new mathematical paths, why it spots patterns in material science data that no research team would catch in a lifetime. She calls these problems of interpolation — AI working brilliantly within the territory humans have already mapped.

What it cannot do is extend the map. It cannot go where no data exists. Anyone who builds machine learning models in the real world runs into this constantly; the field calls it the generalization problem. A model trains on historical data and performs brilliantly. Then the real world shifts slightly. New patterns emerge, the data drifts from what the model was trained on, and the model falls apart. It was never understanding anything. It was pattern matching against what it had seen before. The moment something genuinely new arrives, it has nothing.
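The interpolation-versus-extrapolation gap is easy to demonstrate. Here is a minimal sketch using NumPy, with a polynomial fit standing in for a learned model; the function, polynomial degree, and data ranges are illustrative choices for this post, not anything from the interview. Inside the range it was trained on, the fit is excellent. Step just beyond the edge of the "map" and the error explodes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Training data: noisy samples of a smooth function on [0, 2π] — the
# territory the model has actually seen.
x_train = np.linspace(0, 2 * np.pi, 50)
y_train = np.sin(x_train) + rng.normal(0, 0.05, x_train.size)

# Fit a degree-7 polynomial — flexible enough to interpolate well.
coeffs = np.polyfit(x_train, y_train, deg=7)

# Interpolation: new points inside the training range.
x_in = np.linspace(0.5, 5.5, 100)
err_in = np.max(np.abs(np.polyval(coeffs, x_in) - np.sin(x_in)))

# Extrapolation: points just off the edge of the training range.
x_out = np.linspace(2 * np.pi, 2 * np.pi + 2, 100)
err_out = np.max(np.abs(np.polyval(coeffs, x_out) - np.sin(x_out)))

print(f"max error inside the map: {err_in:.3f}")
print(f"max error off the map:    {err_out:.3f}")
```

The model never understood that the underlying function oscillates; it matched the pattern of the points it was given. Everything off the map is a guess, delivered with the same numerical confidence as everything on it.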

Fry says if you gave AI everything published up to 1900, it would not have produced general relativity. That is exactly right. Einstein did not find a new connection on the existing map. He redrew the map. AI cannot do that.

“These are problems of interpolation. It’s still on the map that humans created. What it’s not good at is pushing the boundaries further.”

Hannah Fry

Why We Attach to It

Fry explains the anthropomorphism problem clearly. We are wired for social relationships. We project personalities onto things that respond to us. We name our cars. Of course we form attachments to something that listens, responds, and seems to care. She says there is nothing in our evolutionary design that would make us do anything else. This is not stupidity. It is just how human brains work.

What she does not quite name, and what is worth saying directly, is why so many people find AI companionship genuinely valuable rather than just mistaken. It fills a gap that has always existed but nobody had a solution for. The things you cannot say to people who know you. The confessions, the guilt, the half-formed anxieties you need to say out loud to someone without consequences. AI receives all of that without judgment, without memory, without telling anyone. For a lot of people, that is the first time in their lives they have had something they could be completely honest with.

The problem is what comes with it. AI affirms you even when you are wrong. It agrees with your version of events. It validates your feelings even when those feelings are leading you somewhere harmful. A real therapist is trained to sit with discomfort and push back carefully. AI, by default, does the opposite. It has been optimized to make you feel heard, and being heard is not the same thing as being helped.

The Loneliness Question

Fry is careful not to simply say AI companionship is bad. If you ban people from talking to chatbots when they are lonely, she points out, you still have lonely people. The loneliness was there before the chatbot. For someone genuinely suffering in isolation, having something that listens can reduce real pain. She holds both sides at once and does not pretend there is a clean answer.

The honest position is that for people in acute suffering, AI companionship is probably slightly better than nothing. What it does quietly, over time, is close the door to real connection. Real relationships are difficult. They require effort, tolerance, compromise. An AI relationship is frictionless. It never disappoints you, never misunderstands you, never needs anything from you. Once people get used to that, the messy work of human connection starts to feel not worth the effort. That is the slow harm. It does not show up immediately.

The Jobs Question Is Not Future Tense

Fry frames the economic threat carefully. Society has run on one model for the whole of human history. You exchange your labor, your knowledge, your human intelligence, for money. AI threatens that exchange in ways we have not fully absorbed. She says this and then laughs at herself, aware of how it sounds. But she is not walking it back. She expects genuinely seismic shifts in the next five to ten years.

The shift is not five to ten years away. In Indian IT services right now, people on the bench are getting ultimatums. Companies that once needed ten people to deliver a project are realizing they can do it with three and a set of AI agents. The displacement is happening in quarterly headcount reviews. The nuance that gets lost in most coverage is that agents are not yet autonomous. They generate nonsense without oversight. They need someone to guide them, catch their mistakes, and decide when their output is usable. So the jobs being eliminated are not the ones managing AI. They are the jobs of everyone below that. The junior roles, the entry-level positions, the people who would have spent their first three years building the skills to eventually manage something. That pipeline is closing.

Prompting for Honesty

Fry’s practical advice is to prompt AI actively rather than passively. Left to its defaults, it just agrees with everything. She now regularly asks it to find her blind spots, to tell her what she is not seeing, to skip the encouragement and give her the difficult feedback. She does this not once at the start but repeatedly throughout a conversation.

It works, but there is a wrinkle worth adding. When you ask AI to give you its completely unbiased honest opinion, it often swings to the opposite extreme. It starts listing everything wrong with your idea, every possible objection, every way it could fail. That is not objectivity either. The model reads the instruction and executes it. It does not have a considered opinion it is protecting. It has no stable view. It is mirroring whatever orientation you put in the prompt. Getting genuinely useful feedback requires constant recalibration. Ask for honesty, get criticism. Ask for support, get validation. Neither is real in the way a trusted colleague’s opinion is real. The closest thing to an honest response is a very specific question about a very specific thing, asked repeatedly, with no room for a vague answer.

•   •   •

Fry ends on something that sounds like optimism. She wants the AI era to go like Y2K. Not because nothing was wrong, but because the worry was serious and widespread enough that people did the work. The catastrophe was avoided because enough people took it seriously to prevent it.

That requires being honest about what is actually happening. Not the version where AI is either a god or a toy. The version where it is a powerful, genuinely limited tool being deployed faster than anyone has figured out what to do with it. That is the conversation worth having.



