Why do AIs keep creating nightmarish images of strange characters?
Some artificial intelligences can generate realistic images from a simple text prompt. These tools have been used to illustrate magazine covers and win art contests, but they can also create very strange results. Nightmarish images of strange creatures keep popping up, sometimes called digital cryptids, named after animals that cryptozoologists, but not mainstream scientists, believe may exist somewhere. The phenomenon has made national headlines and sparked rumors on social media, so what’s going on?
What images are generated?
A Twitter user asked an AI model called DALL-E mini, since renamed Craiyon, to generate images for the word “crungus”. They were surprised by the consistent theme that emerged: image after image of a surly, hairy, goat-like man.
Then came the pictures of Loab, a woman with black hair, red cheeks, and missing or disfigured eyes. In a series of images generated by an artist, Loab evolved and appeared in ever more disturbing scenarios, yet remained recognisable.
Are these characters discovered, invented or copied?
Some people on social media have jokingly suggested that the AI is simply revealing the existence of Crungus and Loab, and the consistency of the images is proof that they are real beings.
Mhairi Aitken of the Alan Turing Institute in London says nothing could be further from the truth. “Rather than something scary, what this actually shows are some of the limitations of AI image generator models,” she says. “Scary demon theories will likely continue to spread via social media and fuel the public imagination about the future of AI, while the real explanations might be a little duller.”
The origins of these images lie in the vast amounts of text, photographs and other human-created data that AIs ingest during training, Aitken says.
Where does Crungus come from?
Comedian Guy Kelly, who generated the original Crungus images, told New Scientist that he was simply trying to come up with made-up words that the AI could somehow construct a clear picture of.
“I’d seen people try existing things in the bot — ‘three dogs riding a seagull,’ etc. — but I didn’t recall seeing anyone using plausible-sounding gibberish,” he says. “I thought it would be fun to plug a nonsense word into the AI bot to see if something that looked like a concrete thing in my head gave consistent results. I had no idea what a Crungus would look like, just that it sounded a bit ‘goblinny’.”
While the AI’s influences in the creation of Crungus probably number in the hundreds or thousands, there are a few things we can point to as likely culprits. There are a range of games involving a character named Crungus, and mentions of the word on Urban Dictionary from 2018 describe a monster that does “gross” things. The word is also not dissimilar to Krampus – a creature said to punish naughty children at Christmas in some parts of Europe – and the appearance of the two creatures is similar as well.
Mark Lee of the University of Birmingham, UK, says Crungus is just a composite of data Craiyon has seen. “I think you could say it produces original images,” he says. “But they are based on previous examples. It may just be a blended image drawn from multiple sources. And it looks very scary, right?”
Where is Loab from?
Loab is a slightly different, but equally fictional, beast. The artist Supercomposite, who generated Loab and asked to remain anonymous, told New Scientist that Loab was the result of time spent sifting through the outputs of an unnamed AI in search of original results.
“That says a lot about the accidents that happen inside these neural networks, which are kind of black boxes,” they say. “It’s all based on the images that people have created and how people have decided to collect and curate the training data set. So while it may seem like a ghost in the machine, it really just reflects our collective cultural output.”
Loab was created with a “negatively weighted prompt”, which, unlike a normal prompt, is an instruction to the AI to create an image conceptually as far away from the input as possible. The outcome of these negative prompts can be unpredictable.
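The mechanics behind negative weighting can be illustrated with the guidance step used by many diffusion-based image generators. This is a simplified, hypothetical sketch in NumPy — not the API of any real model, and not necessarily how Supercomposite’s unnamed AI works — assuming classifier-free guidance, where a negative weight flips the steering direction away from the prompt:

```python
import numpy as np

# Illustrative sketch (assumed mechanism, not a real model's API):
# many diffusion models steer each denoising step with
#     guided = uncond + w * (cond - uncond)
# where `cond` and `uncond` are the model's predictions with and
# without the text prompt, and `w` is the guidance weight.
# A negative `w` pushes the output *away* from the prompt's concept.

def guide(uncond: np.ndarray, cond: np.ndarray, w: float) -> np.ndarray:
    """Blend unconditional and prompt-conditioned predictions."""
    return uncond + w * (cond - uncond)

# Stand-in values for demonstration only.
uncond = np.zeros(3)              # prediction with no prompt
cond = np.ones(3)                 # prediction conditioned on the prompt

towards = guide(uncond, cond, 7.5)   # positive weight: towards the prompt
away = guide(uncond, cond, -7.5)     # negative weight: away from the prompt

print(towards)
print(away)
```

Because “away from the prompt” is defined only relative to whatever direction the model happens to associate with the input, the result of a negative weight is far less predictable than that of a positive one — which fits the unpredictability the article describes.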
Supercomposite asked the AI to create the opposite of “Brando”, which resulted in a logo with the text “DIGITA PNTICS”. They then requested the opposite of that and received a series of images of Loab.
“Text prompts generally lead to a very broad set of outputs and greater flexibility,” says Aitken. “It may be that when a negative prompt is used, the resulting images are more constrained. So one theory is that negative prompts might be more likely to repeat certain images or aspects of them, and that may explain why Loab seems so persistent.”
What does this say about public understanding of AI?
Although we rely on AIs daily for everything from unlocking our phones with our faces to talking with voice assistants like Alexa to protecting our bank accounts from fraud, even the researchers who develop them don’t fully understand how AIs work. This is because AIs learn to do things without us knowing how they do them. We only see an input and an output; the rest is hidden. This can lead to misunderstandings, says Aitken.
“AI is discussed as if it’s somehow magical or mysterious,” she says. “This is probably the first of many examples that could well give rise to conspiracy theories or myths about characters living in cyberspace. It’s really important that we correct these misunderstandings and misconceptions about AI so that people understand that these are just computer programs, which only do what they are programmed to do, and that what they produce is the result of human ingenuity and imagination.”
“What’s scary, I think, is really that these urban legends are being born,” says Lee. “And then kids and other people take these things seriously. As scientists, we have to be very careful to say: ‘Look, this is all that’s really going on, and it’s not supernatural.’”