OpenAI has a goblin problem.
Instructions designed to guide the behavior of the company’s latest model as it writes code have been revealed to include a line, repeated several times, that specifically forbids the unprompted mention of a variety of mythical and real creatures.
“Never talk about goblins, gremlins, raccoons, trolls, ogres, pigeons, or other animals or creatures unless absolutely and unambiguously relevant to the user’s question,” read the instructions in the Codex CLI, a command-line tool for using AI to generate code.
It is not known why OpenAI felt compelled to give Codex this instruction, or why its models might want to talk about goblins or pigeons in the first place. The company did not immediately respond to a request for comment.
OpenAI’s newest model, GPT-5.5, was released earlier this month with enhanced coding capabilities. The company is locked in a fierce race with rivals, especially Anthropic, to build the most capable AI, and coding has emerged as a key battleground.
In response to a post on X that highlighted the lines, some users claimed that OpenAI’s models frequently bring up goblins and other creatures when used with OpenClaw, a tool that allows AI to take control of a computer and the programs running on it to do useful things for users.
“I was wondering why my claw suddenly became a goblin with codex 5.5,” one user wrote on X.
“Been using it lately and it can’t stop talking about bugs like ‘gremlins’ and ‘goblins,’ it’s fun,” wrote another.
The discovery quickly took on a life of its own, inspiring memes about AI-powered goblins lurking in data centers, and Codex plugins that put the model into “goblin” mode.
AI models like GPT-5.5 are trained to predict the next word, or piece of code, that should follow a given prompt. These models have become so good at this that they seem to show real intelligence. But their probabilistic nature means they can sometimes behave in surprising ways. A model can be especially prone to such misbehavior when used with wrapper tools like OpenClaw that inject a lot of extra text into its prompts, such as facts stored in long-term memory.
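The probabilistic behavior described above can be sketched in a few lines of Python. The token choices and probabilities below are entirely invented for illustration; a real model scores tens of thousands of tokens with a neural network, but the sampling step works the same way:

```python
import random

# Toy next-token distribution for the prompt "the bug was caused by a ...".
# All tokens and probabilities here are made up for illustration.
NEXT_TOKEN_PROBS = {
    "race condition": 0.55,
    "null pointer": 0.30,
    "typo": 0.10,
    "goblin": 0.05,  # unlikely, but never zero
}

def sample_next_token(probs, rng):
    """Draw one continuation at random, weighted by its probability."""
    tokens = list(probs)
    weights = [probs[t] for t in tokens]
    return rng.choices(tokens, weights=weights, k=1)[0]

rng = random.Random(0)
samples = [sample_next_token(NEXT_TOKEN_PROBS, rng) for _ in range(1000)]

# Over many samples, the rare token still surfaces occasionally,
# which is why low-probability outputs like "goblin" show up now and then.
print(samples.count("goblin"), "goblins out of", len(samples), "samples")
```

Because sampling is weighted rather than deterministic, the most likely continuation dominates, but the rare one is never fully suppressed, which is one reason model makers resort to blunt instructions like the goblin ban.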
OpenAI acquired OpenClaw in February, shortly after the tool became popular among AI enthusiasts. OpenClaw can use any AI model to automate everyday tasks like answering emails or buying things on the web. Users can choose from a variety of models to power their assistant, and that choice shapes its behavior and responses.
OpenAI staff seemed to acknowledge the reason for the ban. In response to a post detailing OpenClaw’s behavior, Nik Pash, who works on Codex, wrote: “This is actually one of the reasons.”
Even Sam Altman, OpenAI’s CEO, joined in on the memes, posting a screenshot of a ChatGPT notification. It read: “Start training GPT-6, you can have a whole bunch. Extra monsters.”




