Using hidden rules like this to shape the output of an AI system isn’t unusual, though. For example, OpenAI’s image-generating AI, DALL-E, sometimes injects hidden instructions into users’ prompts to counter the racial and gender disparities in its training data. If a user requests an image of a doctor, say, without specifying a gender, DALL-E will pick one at random rather than defaulting to the male-dominated imagery it was trained on.
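The kind of prompt augmentation described above can be sketched in a few lines. This is purely an illustrative toy, not OpenAI’s actual implementation: the attribute lists, the keyword check, and the `augment_prompt` function are all invented here for demonstration.

```python
import random

# Hypothetical demographic descriptors to sample from -- illustrative only.
GENDERS = ["male", "female"]
ETHNICITIES = ["Asian", "Black", "Hispanic", "white"]

def augment_prompt(prompt: str) -> str:
    """If the user's prompt doesn't already specify a gender, append a
    randomly chosen demographic descriptor before sending it to the model."""
    gender_terms = ("man", "woman", "male", "female")
    if any(term in prompt.lower() for term in gender_terms):
        return prompt  # user was explicit; leave the prompt untouched
    descriptor = f"{random.choice(ETHNICITIES)} {random.choice(GENDERS)}"
    return f"{prompt}, depicted as a {descriptor} person"

# "a portrait of a doctor" gets a random descriptor appended;
# "a female doctor" passes through unchanged.
```

Because the descriptor is sampled at random on each call, repeated identical prompts yield a spread of depictions instead of the training data’s default.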










