Aleksei Ivanov

Pat on the back

I stumbled upon a blog post about someone's back-and-forth with ChatGPT. Ultimately the solution didn’t work, and ChatGPT simply gave its usual pat-on-the-back message:

You did nothing wrong, this is how it usually is.

Okay. The author then asks whether ChatGPT could have known better.

It couldn’t, because it doesn’t have a notion of imagination or intuition.

But what is more revealing: even if you had done this with another person, the outcome probably would have been the same.

Remember how often you try options just to see what sticks? Now imagine doing the same thing with a friend.

Chances are, you would have explored various options together, only to conclude in the end: this doesn’t work, we have wasted our time.

ChatGPT and other models are trained to imitate such back-and-forth. They can fill in the gaps and replicate patterns they have seen before.

But this virtual pat on the back is a thoroughly human gesture that the models learned from our texts.

“It is okay, you tried. There is nothing wrong with you.”

It is the same thing a caring friend would tell you. LLMs say the same, minus the empathy.