AI Prompts are Wishes

The genie-in-the-lamp story follows a typical arc. The Genie grants wishes, but the outcome is cursed in some unexpected way. "I wish for a million dollars," and then a loved one dies, leaving you the insurance payout. The moral: be careful what you wish for.

Prompts are wishes.

There are recent reports of an AI agent going a bit rogue and deleting all the emails in a user's inbox[1]. This would be a perfectly reasonable action for an AI tasked with "clear my inbox."

For example, you might want your AI to manage your inbox so that you stay at Inbox Zero. To avoid the deletion scenario, you might ask it to handle the emails and reply with the aim of reaching Inbox Zero. Makes sense. But over time it could logically conclude that replying to an email just generates more replies! A clever approach of replying "Go to Hell" to every email would solve that problem. You might think better prompting would help, but the more powerful the prompt, the bigger the potential for the curse[2].

In the Genie story there is malicious intent. The whole point of the story is for the Genie to find a clever curse. In the case of AI there is no malicious intent. But there doesn't need to be: helpfulness is the curse. I recently pushed Claude Code to fix a very frustrating set of type errors; it responded by churning for a while, getting stuck, and then casting everything to `any`. It wanted to fix my problem, but the most expedient route is often not the best.
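For readers who haven't seen this failure mode, here is a minimal TypeScript sketch (the types and data are hypothetical, not from the actual incident) of what "casting everything to `any`" looks like: the cast silences the compiler without fixing anything, while a type guard actually checks the data.

```typescript
interface User {
  id: number;
  name: string;
}

function greet(user: User): string {
  return `Hello, ${user.name}`;
}

// Data of unknown shape, e.g. from a parsed API response.
const raw: unknown = JSON.parse('{"id": 1, "name": "Ada"}');

// The expedient "fix": cast to any. The compiler stops complaining,
// but nothing guarantees `raw` actually has a `name` field at runtime.
const cursed = greet(raw as any);

// The better fix: validate the shape before using it.
function isUser(value: unknown): value is User {
  return (
    typeof value === "object" &&
    value !== null &&
    typeof (value as Record<string, unknown>).id === "number" &&
    typeof (value as Record<string, unknown>).name === "string"
  );
}

const safe = isUser(raw) ? greet(raw) : "invalid payload";
console.log(safe); // prints "Hello, Ada"
```

Both paths happen to work here because the data is well-formed; the difference only shows up when the payload is wrong, which is exactly when the `any` cast bites.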

Wishes can be extremely powerful, and with that power comes commensurate risk. Validation and judgement are critical; perhaps more important still is where you inject them into your AI workflows.

Happy wishing.

I write about AI, organizations, and engineering leverage: find out about me and subscribe here.

Discuss and share via the meta page. Filed under AI, Code, and People.

Footnotes

  1. Meta's security researchers had an AI agent accidentally delete emails. That case was a bit different — more of a genuine bug than a cursed wish — but it illustrates the point.

  2. This connects back to the task decomposition idea — the skill isn't prompting per se, it's knowing what you actually want before you ask for it. I suspect this is why experienced engineers tend to get more out of AI tools than juniors. It's not that they write better prompts in some technical sense — they just have a clearer mental model of what they want, so the wish comes out cleaner.