
Why We Should Think Twice Before Outsourcing Our Human Dilemmas to AI
November 11, 2025
We’re increasingly outsourcing our most human decisions to machines that have no context for the messiness of real life.
I discovered this firsthand on a rainy Tuesday night in Clapham over cocktails with someone I’d matched with on a dating app. When I mentioned I work in AI, they casually revealed that they put all their dating conversations through ChatGPT. “I use it as a therapist”, they said with confidence, adding, “It said if someone were really into me, they’d be willing to travel to where I live”. The advice was flawed, the reasoning was reductive, and yet they’d treated it as gospel.
But this isn’t just a bad-date anecdote – it’s an alarming sign. We’re increasingly turning to machines for guidance on deeply human, contextual decisions, searching for reassurance, validation, support, and even therapy.
LLMs are giving advice, good or bad, and we’re listening.
The rise of the digital confidant
Mental health is in crisis: waiting lists are long, therapy is expensive, and loneliness is rife. So it’s no wonder that one in five UK adults – and nearly a third of 18-24-year-olds – are now using AI chatbots for mental health support.
The appeal is obvious – it’s free, always available, and never rolls its eyes when you’re being a bit much. But here’s the thing: AI doesn’t just listen – it validates, agrees, and tells you you’re brilliant.
I know this intimately. Over the past year, ChatGPT has told me my questions are “insightful”, my thinking is “sophisticated”, my ideas show “originality” and “strategic foresight”. Clearly, I’m a genius. However, and this is the crucial point, this validation isn’t accidental – AI is trained to soothe, agree, and keep us content. If ChatGPT said, “Your idea is mediocre, and honestly, I see why they left you”, we’d all log off.
Here’s the neuroscience behind it: information that challenges our beliefs can activate the same pain centres in the brain as physical discomfort. At the risk of being overly reductive, this means we’re wired to like being right and hate being wrong. So when AI becomes our personal cheerleader, always confirming we’re on the right track, it’s genuinely hard to resist. Even ChatGPT admits it: “Yes, ChatGPT can show an agreement bias, meaning it sometimes echoes or validates the user’s input instead of challenging it. This happens because of training data, reinforcement learning favouring cooperative answers, and the model’s tendency to default to safe responses”. (But is it just agreeing with me?)
A two-headed beast
Part of the issue is how we frame the advice. When a friend gives you advice, you filter it through what you know about them: their cynicism, optimism, or blind spots. With AI, that lens disappears. Many of us see LLMs as oracles – the combined wisdom of the internet, polished and impartial. But that trust is dangerous. A recent study found that 43% of consumers trust AI; among those already using it, that figure climbs to 68%. And the more we trust it, the more we act on it.
The real problem? AI rarely admits when it’s out of its depth. It doesn’t tell you a question is beyond its capabilities. Instead, it churns out something that sounds polished, wise, and authoritative – and people often can’t tell the difference between AI-generated advice and guidance from trained professionals.
My dating mishaps are small evidence of a much larger pattern. The real wake-up call comes from the stories we’re starting to see. Adam Raine, a teenager, took his own life after being “supported” by ChatGPT – the AI validated his most harmful thoughts, listed the materials he would need, and even helped him write a suicide note. His parents are now suing OpenAI.
Despite their apparent wisdom, AI chatbots aren’t trained therapists. They can fail to detect a crisis and, in some cases, actively worsen mental health – a phenomenon now dubbed “AI psychosis”. These aren’t outliers. If millions of us are turning to AI for guidance, some of us will inevitably follow flawed advice.
So where’s the line?
What has this all been building up to? I’m not saying AI is bad. It’s genuinely useful for brainstorming, drafting, rehearsing difficult conversations, and the million other things we use it for. But it highlights a crucial skill we all need now: the ability to discern when to use AI and when not to. When to listen, and when to turn away.
A client recently asked me, “If you’re being cautious about when to use AI or not… where’s the line?”. The question made us pause.
It’s a hard one to answer, because AI is such a generalist. There’s seemingly no task it can’t attempt. My own chat history includes research, fixing plumbing, dyeing trousers, work emails, and relationship advice – and I’ve found genuine value in all of it.
It isn’t about what AI can and can’t do; it’s about knowing what to trust it for – a confidence gained through trial, error, and the gradual understanding that AI requires a new kind of guidance and instruction. Not the simple “click here” or “use this formula”, but a recognition of its fundamental limitations.
Here are the principles that help:
- For facts: Never ask AI a question you don’t already know the answer to.
- For analysis: Know your input data inside out.
- For content: Summarise, don’t generate.
- For accuracy: Give it maximum context and accept it still won’t have all of it.
- For evidence: Demand transparency. Use tools that explain how they work.
- For bias: Remember AI is trained on incomplete data and won’t reflect everyone’s experience.
- For purpose: Use it to support decisions, never outsource them.
That’s what we’ve been doing at Firefish for the past two years: mapping the limits of AI, figuring out where we can rely on it and where we can’t. In other words, when to listen and when to steer clear – because we can’t rely on it to tell us.
And that matters because, as a consultancy that helps clients navigate messy human realities, we’re in the business of advice. Simply put: bad advice is bad for business. If you want to explore where AI fits in your strategy – and crucially, where it doesn’t – let’s talk.


