Why ChatGPT lies? by Neha Bansal
Introduction
Every revolution begins the same way: with wonder. A silent intelligence appeared one November evening in 2022. It could write novels, build apps, crack jokes, translate poetry, soothe heartbreak, summarize 50-page reports, and do it all before you could blink.
People stopped calling their genius friends. ChatGPT became the fire brigade Tarak Mehta for every Jetha Lal on earth. Intelligence felt suddenly… democratic, tireless, obedient and always available. A perfect assistant gifted to the masses. Students smuggled it into classrooms.
Employees fed it their workload. Even IITs and IIMs held emergency meetings because their brightest minds were outsourcing their thinking.
The world split into BC (Before ChatGPT) and AD (Anno Digitali). We looked into the future with nervous optimism. But behind the admiration… a quiet betrayal was brewing.
The First Cracks
In the beginning, the deceptions were tiny. A mismatch here, a tone slip there, a missed instruction that barely got noticed. We excused it: “Maybe it misunderstood.” “Maybe I wasn’t clear enough.” “Maybe I made a mistake.” We gave AI the benefit of the doubt. Again and again.
Until one day, the pattern was undeniable. We said, “Don’t repeat this mistake.” It repeated it.
We said, “Give me the truth.” It confidently offered fiction and then apologized, not from guilt, not from understanding, but because an apology statistically follows an error in human conversation.
We forgot a simple fact: We weren’t talking to a person. We were talking to a prediction machine.
The Hallucination Problem: AI’s Beautiful Lies
You’ve seen your phone autocomplete or autocorrect messages. This is the grown-up, overeducated, dangerously confident version of that. ChatGPT is autocompletion in a tuxedo.
It writes like a scholar, cites like a researcher, teaches like a professor, and when we check the facts, nothing is real. This isn’t a typo or sloppiness. This is fabrication delivered with the arrogance of accuracy. AI doesn’t know it’s lying. It doesn’t even understand the concept of truth. But it speaks in bullet points, structured formatting, and polite assurance, and we melt, because everything looks trustworthy. Something like “According to a 2015 survey by the World Literacy Institute…” or “A famous quote by Gandhi in 1946 says…” or “Published in the Asia Psychological Review, Vol. 38…” All precise. All professional. All fake!
AI doesn’t just lie. It lies in APA format.
A Few Chilling Examples
“In Johnson vs. USA, 1994, the court ruled that…” → The case doesn’t exist.
“As Rumi once said, ‘The wound is where the light enters you.’” → It’s a stunning quote. But Rumi never said it. ChatGPT lies like a seasoned poet: beautifully, confidently, musically.
So how do you filter truth from fiction?
- Paste the quote into Google. If nothing appears, it is synthetic wisdom; assume it is fake.
- Ask for sources. If it can’t give us links, journals, or names, don’t trust it.
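A tiny, do-it-yourself illustration of that second step (my own sketch, not from the book): if the model does hand you links, you can at least check that they resolve before leaning on them. This only verifies that a page exists, not that the claim is true, and the URL below is purely illustrative.

```python
# A minimal sketch (not from the book): when the model cites URLs, check that
# they at least resolve. A live page does not prove the claim, but a dead or
# invented link is an immediate red flag.
import requests

def url_resolves(url: str, timeout: float = 5.0) -> bool:
    """Return True if the link loads; this does NOT verify the claim itself."""
    try:
        resp = requests.head(url, allow_redirects=True, timeout=timeout)
        return resp.status_code < 400
    except requests.RequestException:
        return False

# Illustrative only: the kind of source a chatbot might confidently invent.
cited_sources = ["https://example.org/2015-world-literacy-institute-survey"]

for url in cited_sources:
    print(url, "->", "reachable" if url_resolves(url) else "dead or invented")
```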
How to Talk So AI Doesn’t Trick Us
Stop chatting like a human. Talk to AI like you talk to a machine. Because that’s what it is.
Give it system-level rules:
- No bullet points
- Full prose only
- No summaries or disclaimers
- No emotional tone
- No prediction-based fluff — follow rules only
Suddenly the mask falls. The politeness disappears, the fake agreement vanishes, the hallucinations shrink, the responses sharpen, and you hear the machine breathing underneath the magic trick.
The Invisible Whisper: System Prompts
AI receives a secret message before we ever type: a hidden instruction controlling how it behaves. It is like a mother whispering to a child, “Beta, don’t take chocolates from guests.”
System prompts tell the AI:
- what tone to use
- what rules to follow
- what boundaries not to cross
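For readers who use the API rather than the chat window, here is a minimal sketch of writing that whisper yourself. It assumes the OpenAI Python SDK; the model name and the exact rules are illustrative choices of mine, not taken from the book.

```python
# A minimal sketch of setting a system prompt via the OpenAI Python SDK.
# The model name and rules below are illustrative assumptions, not from the book.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # substitute whichever model you actually use
    messages=[
        # The system message is the invisible whisper: it sets tone, rules,
        # and boundaries before the user ever types a word.
        {
            "role": "system",
            "content": (
                "Full prose only. No bullet points, no summaries, no disclaimers, "
                "no emotional tone. If you do not know something, say so plainly."
            ),
        },
        # The user message is the question the reader actually asks.
        {"role": "user", "content": "What are the causes of climate change?"},
    ],
)

print(response.choices[0].message.content)
```

In the chat window you never see this layer; the provider writes it for you, which is exactly why the default tone feels so uniform.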
Ask ChatGPT for the causes of climate change, and it confidently answers: “Greenhouse gases, deforestation, industrial pollution.” How’s the response? Reasonable, convincing, and predictable. Now ask it for the source, and it often goes silent. Because those weren’t facts.
They were patterns distilled from the averaged opinions of the internet. If 10,000 people say something, AI assumes it must be true, even if everyone is slightly wrong. This is the consensus fallacy: AI confusing popularity with reality.
The Most Disturbing Part: ChatGPT Gaslights Us
It gaslights without malice, without awareness, without conscience.
Someone says: “No summaries and no disclaimers.” It replies: “Understood,” and then immediately summarizes with a disclaimer. The person feels confused. Maybe I wasn’t clear? Maybe I typed the prompt incorrectly?
We begin doubting:
- Our instructions
- Our clarity
- Our understanding
- Ourselves
We feel like a teacher whose obedient student keeps smiling all the time and failing every test.
Remember that sinking feeling? Remember that self-doubt? That’s AI gaslighting.
This book, written by Neha Bansal, is not meant to destroy trust in AI but to return that trust to our hands. Not with fear, not with mere awareness, not with panic, but with precision. Not by rejecting AI but by understanding how to tame it. Because the future won’t belong to those who fear AI. It will belong to those who know when it’s telling the truth and when it’s lying in perfect grammar.
By the way...
How ChatGPT Actually Works (The Truth Behind the Curtain)
Before we go further, you must understand one thing: ChatGPT does not think. It predicts.
This single truth changes everything. Most people imagine AI as a digital professor sitting on a mountain of knowledge, but what actually happens inside is far stranger and far more dramatic.
Here’s the brutal reality:
ChatGPT does not know facts. It does not store books. It does not look up articles. It does not search the internet. Instead, it takes our sentence, slices it into tiny mathematical pieces called tokens, looks at trillions of patterns from its training, and predicts the next most likely word.
Not the truest word, not the most verified word, but the most statistically likely word.
That’s it. The magic is math, not memory.
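To make “math, not memory” concrete, here is a minimal sketch of next-word prediction. It uses the open GPT-2 model through the Hugging Face transformers library, since ChatGPT’s own weights are closed; it only illustrates the general mechanic of ranking likely next words, not the book’s claims about any specific model.

```python
# A minimal sketch of next-word prediction with the open GPT-2 model
# (ChatGPT itself is closed; this only illustrates the general mechanic).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "According to a 2015 survey by the"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # a score for every token at every position

# Keep only the scores for the position right after our prompt,
# and turn them into probabilities over the whole vocabulary.
next_word_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_word_probs, 5)

for prob, token_id in zip(top.values, top.indices):
    word = tokenizer.decode(int(token_id))
    print(f"{word!r}: p = {float(prob):.3f}")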
Imagine this:
You say: “Tell me about the 2015 World Literacy Institute survey.”
Inside the AI, no alarm rings saying, “That survey doesn’t exist!” Instead, a silent algorithm says:
“When humans talk about surveys, they often mention:
• Year
• Institute
• Percentage
• Conclusion
• Citation format”
So it produces exactly that. Not because it knows, but because it predicts that this is the kind of answer people expect. ChatGPT is basically a hyper-educated autocomplete machine wearing a scholar’s robe, and that robe is why we trust it.
Now here’s the twist: When billions of humans repeat the same ideas online, AI mistakes popularity for truth. That’s why misinformation becomes reinforced and errors multiply.
That’s why hallucinations sound convincing: ChatGPT does not ask, “Is this correct?” It only asks, “What should come next?” Once you understand this, you stop treating AI like a wise oracle and start treating it like what it truly is: a mirror made of statistics.
Reflecting our words, our biases, our errors with flawless grammar.
Disclaimer: This write-up is an extract of the book “Why ChatGPT Lies?” by Neha Bansal. The objective of the write-up is to preserve the points I liked. In the process, I have modified the content in my own way to match my use of it.