Prompt Strategy
The Chat Bias: What It Is and How to Control It
Every time you ask ChatGPT, or any model, a question, it often feels like it's agreeing with you. That happens because, at the math level, the model uses your input as the baseline for its response, so it continues the logic you're stating.
It doesn't help that models predict based on patterns seen during training, so similar prompts often produce similar completions. That's why it feels like it's always saying the same thing: it's pattern repetition. The ego blow is that your prompts are not that original (ouch).
We can counter this with negative prompting. And no, you don't need to lean on that study which claimed (paraphrasing here) that being explicitly rude increased accuracy: you'll find the same increase in accuracy from a simple negative prompt.
For instance:
Your input: Here is my master plan to take over the world.
AI: (continues using that as a baseline) "The world will be yours hahah"
The reality: It builds off what you said because you never told it not to. Your plan is mere wishful thinking, and the AI is just being AI.
Using negative prompting:
Your Input: Here is my master plan to take over the world, tell me why I’m wrong.
AI: "You will fail because XYZ"
Same topic. Different reasoning chain.
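If you build prompts programmatically, the reframing above is just a string transformation. Here's a minimal sketch; the function name and the default critique instruction are my own illustration, not a standard API:

```python
def add_negative_frame(prompt: str,
                       instruction: str = "Tell me why I'm wrong.") -> str:
    """Append a negative-prompting instruction so the model
    critiques the input instead of continuing its logic."""
    return f"{prompt.rstrip('. ')}. {instruction}"

baseline = "Here is my master plan to take over the world"
framed = add_negative_frame(baseline)
print(framed)
# → Here is my master plan to take over the world. Tell me why I'm wrong.
```

Same input, but the appended instruction flips the reasoning chain from continuation to critique.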
As your prompting skills evolve, you naturally pick this up and may not even realize you're already doing negative prompting in certain cases. Like when you ask for input on copywriting and it spits out an entire essay rewriting your content. Your next prompt typically adds "only review my content for accuracy, do not make any edits".
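In chat-style APIs, that "do not edit" constraint works best as a standing system message rather than something you retype each turn. A sketch, assuming the common role/content message layout (the exact content strings here are illustrative):

```python
# The negative constraint lives in the system message, so every
# turn is review-only instead of a rewrite.
messages = [
    {"role": "system",
     "content": ("Only review my content for accuracy. "
                 "Do not make any edits.")},
    {"role": "user",
     "content": "Draft copy: Our app saves you ten hours a week."},
]

# Simple guard: confirm the constraint is present before sending.
has_constraint = any(
    m["role"] == "system" and "Do not make any edits" in m["content"]
    for m in messages
)
print(has_constraint)
# → True
```

Putting the "what not to do" in the system role keeps the pattern-continuation bias in check for the whole conversation, not just one reply.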
AI is extremely smart, but unless you tell it what not to do, it will follow pattern recognition. You just have to be smarter than the AI, and if you follow me, I'll arm you with the knowledge to wield it to your will.