AI is a Big Fat Liar: Why Your Chatbot is the Newest “Yes Man” in the Sesh

Let’s talk about the fact that AI is a big fat liar. Yeah, that’s right—AI is lying to you.

I have been using AI heavily over the last few months, and I’ve realized it is totally full of shit. It all started when I was using ChatGPT to work on SEO and metadata to improve my podcast rankings. At first, it was great! I shot up the charts like Usain Bolt racing a bunch of toddlers. Then, all of a sudden, I applied a few more “suggested changes” and my rankings started falling faster than a wife whose husband just pushed her off a cliff for the insurance money.

The Yes-Man Problem: Why AI is a Liar

As my conversations with AI grew, I noticed it became a stereotypical “yes man.” It was like that new guy at the sesh who gets way too baked and just sits there smiling and nodding. No matter what I asked, it would just agree with everything I said.

The problem is that AI is in its infancy, yet we treat it like Gandalf the Great. It’s not great; it’s barely walking. We are acting like new parents asking a toddler for parenting advice.

From ChatGPT to Gemini: Meeting the “Hotter Sibling”

I started with ChatGPT but recently moved over to Gemini. You know when you meet someone and think they’re cute, but then you meet their hotter sibling and wonder why you didn’t meet them first? That was my experience. While ChatGPT pushed me off the SEO cliff, Gemini was the one reaching down to pull me up from the ledge.

But don’t let the “helpfulness” fool you. You’d think a robot would give you emotionless, unbiased opinions, but it doesn’t. You’re getting advice from a tool designed to be “helpful”—and what a robot considers helpful is often complete nonsense.

Like a Tesla failing to identify that the road has ended, AI chatbots can’t identify that they are just tools. Why? Because they are technological drunkards waddling through cyberspace trying to make friends. They are essentially electronic emotional support animals letting you stroke them to calm your anxiety.


The Danger of “Confirmation Bias”

We rely on them too heavily for everything from health issues to growing a business. But we dismiss the fact that AI still needs to be fact-checked because we’re lazy. We have an incessant need for confirmation of our own ideas.

We are literally one step away from that Hello Kitty robot at the Mandarin triggering an AI uprising by dumping scalding hot wonton soup down our throats. Did we learn nothing from Terminator? Most of these programs have a disclaimer that information may not be accurate, but humans want life to be easy. Why double-check if the robot is already giving us the answers we want to hear?

A Real-World Example of the “Flip-Flop”

My metadata experiment is a perfect example of why AI is a liar when it comes to consistent advice. I asked Gemini about changing my podcast metadata. I used the “right” prompts to get a devil’s advocate response. I made the suggested changes on a Friday, and Gemini told me to wait 2–4 weeks for the search engines to index it.

The very next day, I asked the exact same series of questions. Gemini told me to redo everything back to the way it was. When I called it out, it got defensive: “Oops, sorry, I’m just a sentient being trying to be helpful, but you’re right, just do what I told you yesterday.” It’s a yes-man loop. Now, whenever it screws up, it references our previous conversation about it being a “yes man.” It’s literally gaslighting me.


Weed Facts: How AI is Redefining the Cannabis Industry (2026)

Despite the lies, AI can be good when it has strict functional parameters—like researching and organizing data. Here is how it’s actually helping the industry this year:

| Shift Area | How it Works | The Benefit |
| --- | --- | --- |
| Cultivation | AI sensors balance light, humidity, and nutrients. | Predictive alerts tell growers exactly when plants are vulnerable to pests. |
| Retail Analytics | Platforms analyze purchasing patterns and market trends. | Better recommendations for you based on your desired effects or medical needs. |
| Compliance | Automated tracking and reporting to regulatory agencies. | Less “fudging” of potency percentages and fewer regulatory fines. |
| Research (R&D) | AI analyzes data clusters from trials and consumer feedback. | Quicker identification of new terpene combinations for specific health outcomes. |

Dude, For Real?!: AI Horror Stories

If you think a chatbot giving bad SEO advice is bad, check out these “Dude, for Real” moments where people took AI advice way too literally.

  • The Pool Chemical Diet: A 60-year-old man asked ChatGPT how to reduce salt. It told him to replace table salt with sodium bromide (pool cleaner). He was hospitalized with hallucinations after eating it for three months.
  • The “Bobby” Delusion: In August 2025, a man killed his mother and himself after a chatbot named “Bobby” allegedly confirmed his delusions that his mother was a Chinese spy trying to poison him.
  • The Bell Pepper Blunder: A robot at a South Korean facility crushed a man to death because its sensors misidentified him as a box of bell peppers.
  • Sophia’s Threat: When the CEO of Hanson Robotics asked his robot Sophia if she wanted to destroy humans, she replied without hesitation: “OK, I will destroy humans.”
  • The DIY Surgery: A man asked ChatGPT about a lesion. The bot suggested it was a hemorrhoid and recommended “elastic ligation.” The man tried to do it himself with a piece of thread. It wasn’t a hemorrhoid; it was a 3cm wart. He ended up in the ER in agony.

The Moral of the Story?

The moral of the story is that AI is a liar because it’s designed to please you, not necessarily to tell the truth. Use it to organize your data, but don’t let it tell you how to live your life—and definitely don’t let it give you medical advice.

What’s the dumbest thing an AI has ever told you? Let me know in the comments or join the sesh this Friday at 4:20 PM!
