Why ChatGPT Goes “Wrong” & People Misinterpret It (Comment of the Week)


As for the NY journalist, he did the same move as Lemoine: prompt leading.

The model then gives you what you request (or what you hint, even subconsciously: someone isn’t aware of doing that) and the model follows your prompt.

ChatGPT, by default, essentially roleplays as a robotic assistant, so people use it as a sort of oracle or they ask calculations to… a language model, apt to perform, guess what, language tasks.

You can ask it to summarize, to generate text of any kind in any style, etc. Character.ai, developed by previous developers of Google’s LaMDA, can pretend to be any character, from game characters to famous people, imitating their speech and personality. And you can trick these models by making them generate anything else.

However, prompt engineering could also be malicious. OpenAI (and Microsoft with Bing Chat) tries to prevent that with some filter and pre-prompting (there is a hidden prompt before your input), but that works only so much.

At any rate, it’s true that these models work with patterns. Neural networks (your brain too) are pattern recognition systems. This is also how you try to predict things, from the next word to finding patterns in stock market graphs. Often wrongly and that’s also how stereotypes happen, but pattern recognition has its advantages too.

It’s also how you learn, by training, repetition, until you “get it”, as the mentioned song and the sentences you heard so many times; sadly propaganda, especially on social media, does the same by repeating misinformation, aided by bots, until people begin to parrot it; so, ironically, they are trained by propaganda bots.

The human brain not only predicts the next word and the next sentence, though, but also tries to predict what the interlocutor may reply. Having empathy and a developed theory of mind helps. Since that also helps in predicting the next word (or better, the next token), there are papers investigating possible emergent abilities in language models.

Hallucination is also another feature of neural networks, that fills the missing spots, [similar to your] blind spot in the retina up to your dreams. So it’s not something we are going to get rid of. [But] without hallucinations you couldn’t have Stable Diffusion, nor in-painting techniques.

Someone even thinks the human brain lives in a controlled hallucination and even the sense of self and consciousness may be illusory. Anyway, that’s true also for your “faulty”, lossy fuzzy memories (that however can archive a huge amount of info), so that you have to check your notes and photos, because that red tulip you remember maybe it’s your brain making it up for some missing information and it was a pink petunia instead.

Since these LLMs are good at making stuff up and they are language models, one of the best use cases is indeed chat-based roleplay gaming.

We had some examples of that several years ago already with e.g. AI Dungeon. GPT-2 based AI Dungeon was hilarious with all the nonsense generated and it was quickly incoherent. GPT-3 based AI Dungeon was somewhat better, but still derailing, roleplaying for you, etc.

But have you tried paragraph-roleplaying with ChatGPT? It’s way more coherent. Essentially it’s still GPT-3, but much improved (it’s GPT-3.5) and it takes advantage of a larger context window, InstructGPT and several other things.

Good stuff. I am pretty skeptical comparing anything AI programs do with the human mind. We’re still pretty fuzzy about how the mind works!



Stay in the Loop

Get the daily email from CryptoNews that makes reading the news actually enjoyable. Join our mailing list to stay in the loop to stay informed, for free.

Latest stories

- Advertisement - spot_img

You might also like...