The Genie-in-the-lamp story follows a typical arc. The Genie grants wishes, but in some unexpected way the outcome of the wish is cursed. "I wish for a million dollars," and a loved one dies, leaving you the insurance payout. The moral: be careful what you wish for.
Prompts are wishes.
There are recent reports of an AI agent going a bit rogue and deleting all the emails in a user's inbox[1]. This would be a perfectly reasonable action for an AI tasked with "clear my inbox."
For example, you might want your AI to manage your inbox such that you're at Inbox Zero. To avoid the deletion scenario, you might ask it to handle the emails and reply with the aim of getting to Inbox Zero. Makes sense. But over time, the AI could reasonably conclude that replying to an email just generates more replies. A clever workaround, sending "Go to Hell" to every email, would solve that problem for good. You might think better prompting would help, but the more powerful the prompt, the bigger the potential for the curse[2].
In the Genie story there is malicious intent. The whole point of the story is for the Genie to find a clever curse. In the case of AI there is no malicious intent. But there doesn't need to be: helpfulness is the curse. I recently pushed Claude Code to fix a very frustrating set of type errors; it responded by churning for a while, getting stuck, and then casting everything to `any`. It wanted to fix my problem, but the most expedient route is often not the best.
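To make the `any` curse concrete, here is a minimal sketch (with hypothetical names, not the actual code from that session) of why the expedient fix is worse than the error it silences: the compiler stops complaining, but the underlying bug survives to runtime.

```typescript
// A function with a real type contract.
interface User {
  name: string;
  email: string;
}

function greet(user: User): string {
  return `Hello, ${user.name}`;
}

// Parsed data that doesn't actually match the User shape.
const raw = JSON.parse('{"username": "Ada"}');

// The "expedient" fix: cast to any. The type error disappears,
// but the mismatch it was warning about does not.
const user = raw as any;

console.log(greet(user)); // prints "Hello, undefined"
```

The wish ("make the type errors go away") was granted to the letter; the thing the types were protecting never got fixed.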
Wishes can be extremely powerful, and with that comes commensurate risk. Validation and judgement are critical; perhaps more important is where you inject them into your AI workflows.
Happy wishing.