
You’re Replacing Human Judgment with a Bot That Can’t Count to 3?

  • Writer: Jana Simeonovska
  • Feb 7
  • 4 min read

It’s 2026. I was supposed to be extinct by now.


Remember when marketing felt ‘human-er’? That era is now gone. The game we’re playing in 2026 is: Who can appear the least synthetic?


I’m a content writer and editor. I’ve also been a manager, a strategist, an editor-in-chief. I own the full editorial lifecycle. According to the statistics, I am also exactly the kind of person who is supposed to be replaced by a machine. Writers are consistently listed among the professions most "exposed" to AI.


Did that concern me? Yes. For a long time. Am I concerned now? No. At this point, I’m just tired.


I’m tired of proving that we aren’t going anywhere. When I talk about my profession, I’m talking about those of us who understand nuance, not the wannabe experts or the CEOs using AI to churn out messy, useless content.


When I say professional content, I don’t mean grammatically correct. I mean credible. Reliable. Relevant. Straightforward.


I know the value of a skilled writer. I’m just waiting for companies to do the same. To realize that you cannot prompt a chatbot to care.


Neither AI nor a non-expert can fake the knowledge we have of human behavior. That value comes from real-life experience. It comes from the craft.


It certainly does not come from perfecting a prompt!


CEO: 'Can we just generate content with AI?'

The Strawberry Test


I recently saw a reel that sums up the entire AI problem in ten seconds. A user asked ChatGPT:


How many R’s are there in ‘strawberry’?

The answer, mind you, was delivered with total confidence: "Two."


The user asked again. And again. The AI stuck to its original answer. It even gaslit the user, suggesting that while they might think there are three, there are actually only two.


A Very Confident Path to Being Wrong


It’s easy to laugh at a chatbot failing a third-grade spelling test. But if an AI can’t count letters in a simple fruit, we have to ask ourselves:


Why are we trusting it to summarize legal contracts, write medical advice, or build financial strategies?


The "strawberry test" is about more than spelling. It’s about hallucination.


AI doesn’t "know" facts. It predicts the next likely piece of information based on patterns. When it’s wrong, it doesn’t have a ‘gut feeling’ that something is off. It presents a falsehood with the exact same authority as a proven fact.
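The irony is that the counting itself is trivial for ordinary, deterministic code. A widely noted reason chatbots stumble here is that they process text as multi-character tokens rather than individual letters, so they predict an answer instead of counting. As a minimal sketch of the gap:

```python
# Deterministic letter counting: no prediction, no confidence, just a count.
word = "strawberry"
count = word.count("r")

print(f"There are {count} R's in '{word}'.")
# There are 3 R's in 'strawberry'.
```

Three lines of code settle, with certainty, the question the chatbot argued about. The model, by contrast, is generating the most statistically plausible-sounding answer, and "plausible-sounding" is not the same thing as "counted."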


The Danger of Confident Ignorance


The real danger isn't that AI is wrong. It's that it is confidently wrong. In a B2B environment, "close enough" isn't good enough. A hallucinated statistic in a white paper or a fake case study in a pitch deck can ruin a brand's credibility in seconds. AI can process mountains of data, but it lacks the one thing that keeps businesses safe: contextual truth.


This is why human judgment is inevitable.


We need editors, strategists, and subject matter experts to act as the final filter. We provide the "sanity check" that an algorithm cannot. We check for the truth, the ethics, and the impact of the words on the page.


The strawberry test: A user asks ChatGPT, "How many R’s are there in ‘strawberry’?"

What Are You Really Replacing?


Let’s look at the transaction honestly.


When companies cut writers in favor of automation, what are they really replacing?


They are removing a content specialist — a human who understands strategy, empathy, and nuance. And they are replacing that person with a non-human that struggles to recognize reality. Or count to three.


And then there’s this second question: Why?


The obvious answer is resources. It’s the budget. It’s the illusion that you can get 80% of the result for 0% of the cost.


Here’s where I challenge that math.


If you "save" budget by publishing content that lacks judgment, what will it cost you in the long run?


The Cost of Inauthenticity


Your ideal customer persona is not a dataset. They are a human being. They are what I call the modern skeptic.


The modern skeptic is tired. They are overwhelmed with noise and they can smell a generic, AI-generated sentence from a mile away.


When a company chooses speed over quality, they pay a much higher price:


  • Lost trust: Once a reader sees you didn't care enough to write it, they won't care enough to read it.

  • Missed opportunities: A generic article might fill a content calendar, but it won’t convert a skeptic.

  • Brand irrelevance: You voluntarily classify yourself as "noise."


It doesn’t matter that you paid less for production. If your audience disconnects because your brand feels inauthentic, that "savings" will cost you three times over later.


Authenticity is key. And right now, too many companies are bankrupting their brand to save a few dollars.


If You Want Mediocrity, LLMs Are Perfect


LLMs are designed to be average.


Literally.


These models work by predicting the most probable next word. They don't aim for the creative choice, or the risky choice, or the brilliant choice. They aim for the middle.

And in a crowded feed, 'middle' is the most dangerous place to be.
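That pull toward the middle can be sketched in a few lines. The probabilities below are made up purely for illustration (a real model scores hundreds of thousands of tokens), but the mechanics are the same: greedy decoding always returns the single most probable next word, which is, by definition, the safest one.

```python
import random

# Hypothetical next-word distribution after "The product is ..."
# (made-up numbers for illustration, not real model output).
next_word_probs = {
    "great": 0.40,         # the safe, average choice
    "reliable": 0.30,
    "fine": 0.20,
    "a revelation": 0.07,
    "alive": 0.03,         # the risky, memorable choice
}

# Greedy decoding: always pick the single most probable word.
greedy = max(next_word_probs, key=next_word_probs.get)
print("Greedy pick:", greedy)  # Greedy pick: great

# Sampling can occasionally land on a rarer word, but the odds
# still overwhelmingly favor the middle of the distribution.
sampled = random.choices(
    list(next_word_probs), weights=list(next_word_probs.values()), k=1
)[0]
print("Sampled pick:", sampled)
```

Even with sampling turned up, the deck is stacked toward "great." A human writer is the only one in the room who can decide the risky word was the right one.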


When you optimize for the average, you become the average.


If your goal is to simply fill a content calendar, by all means, let the bot drive. But if your goal is to stop the scroll? You need to say something a machine wouldn't dare to predict. :)



Cheers,

Jana
