The art of the prompt

Rijk - Solo AI Builder

The first time I used a language model was in 2019. Talktotransformer.com had just gone up. You type a sentence and it writes back. People called it impressive or creepy, but neither word really captured it. What surprised me was how little it took to sound almost human. You could see that with a few tweaks, it was going to get good.

A few months later OpenAI put out the full GPT-2 models. It was the last time they released weights like that. I ran it on my own machine and started messing around. What I noticed right away was that the model does not really understand anything. It has read a lot but has not lived anything. Its default answers are a bit strange: it mimics conversation but never actually lands on anything that feels real.

The people doing the most with these early models were not always the most technical. They were just better at explaining what they wanted. Most people typed things like “write about productivity” or “how do I focus more” and the model gave them something bland. They figured AI was still too dumb and moved on.

If you actually say who is talking, who they are talking to, and what you want, the output changes. It is like pulling something clear out of the fog.
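For example, and this is just my wording, not a magic template: instead of “write about productivity”, try something like:

    You are a sleep researcher writing for a night-shift nurse who keeps
    losing focus halfway through her shift. In about 200 words, give her
    three concrete changes to try this week.

Same topic, but now the model knows who is talking, who is listening, and what a useful answer looks like.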

LLMs know a lot, but they do not know what you want. Not who you are, not why you are asking. Lazy prompts get lazy answers. It is strange how having to spell out what you want forces you to notice your own preferences.

If you think it does not matter, you can check for yourself. Ask the same question, swap the role, and see what happens.
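Here is a minimal sketch of that experiment. I am assuming the OpenAI Python SDK here; the model name is a placeholder, and any chat model will show the effect.

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    question = "How do I focus more?"

    # Same question, two different speakers.
    roles = [
        "You are a sleep researcher advising a night-shift nurse.",
        "You are a startup founder writing for other founders.",
    ]

    for role in roles:
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder; use whatever model you have
            messages=[
                {"role": "system", "content": role},
                {"role": "user", "content": question},
            ],
        )
        print(role)
        print(response.choices[0].message.content)
        print("---")

Run it and compare the two answers. It is the same question, but a different person is answering.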

Different Context, Different Output

Most people who complain that AI is generic just have not told it what actually matters. If you make it pick a perspective the results stop feeling like Wikipedia and start sounding like someone you might actually listen to.

Early on I thought prompts were just tricks for getting better answers. Now I think it is almost the entire game. It is not complicated. You just have to be way more specific and direct than feels natural.

I still catch myself expecting the model to know what I want. It never does. You have to tell it everything. That is the only thing that really works. Everything else is luck.