When an AI system hallucinates, who is the artist?
The Accidental Dadaists: AI Failures as Artistic Triumph
Consider the image generation system that, when prompted to create "a horse riding an astronaut," produces exactly that. Not an astronaut riding a horse, but a bewildered human in a spacesuit being mounted by an equine in space.
Or the language model that, in attempting to summarise a technical paper, invents citations, researchers, and entire fields of study with unwavering confidence. Or the computer vision system that misidentifies a stop sign as a refrigerator when a single pixel is altered.
These are not merely "technical failures". They are unwitting works of Dadaist art.
The original Dadaists (Tzara, Duchamp, Ball, and others) deliberately cultivated absurdity as a response to the rational systems they believed had led to the horrors of World War I. They embraced chaos, chance, and nonsense as aesthetic and philosophical principles.
Today, our most sophisticated AI systems, despite being engineered for precision and accuracy, spontaneously generate outputs that would make Marcel Duchamp slow-clap with appreciation.
This is perhaps the most delicious paradox of modern artificial intelligence: our most rational creations, built on mathematics and logic, constantly produce surrealistic, nonsensical results that mirror the deliberate artistic rebellion of the early 20th century. The machines designed to make sense of our world regularly transform it into something unrecognisable, following their own incomprehensible logic, a logic that, viewed through the right lens, constitutes a new form of artistic expression.
But what happens when we stop viewing these AI "failures" as mistakes to be corrected and start experiencing them as Dadaist artefacts to be appreciated? What if the hallucination is the point?
The Unintentional Artist
At the heart of Dadaism lies a fundamental question about intention. When Duchamp signed a urinal with the pseudonym "R. Mutt" and titled it "Fountain," he challenged the very definition of art. Is artistic intention necessary for something to be considered art? If an object is placed in an artistic context, does it become art regardless of its origin?
AI systems present us with a fascinating extension of this question. They produce outputs based on statistical patterns learned from data, without comprehension or intention. When an image generation model creates a surreal mashup of concepts, it isn't trying to be avant-garde; it's mathematically interpolating between points in its latent space. And yet, the results can be indistinguishable from deliberate artistic choices.
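That interpolation can be illustrated with a toy sketch. The vectors below are invented stand-ins for learned concept embeddings (real latent vectors in an image model have hundreds of dimensions); the point is only the mechanism, a weighted blend between two points:

```python
import numpy as np

# Hypothetical embedding vectors standing in for two learned concepts.
# The numbers are invented for illustration, not taken from any real model.
cat = np.array([0.9, 0.1, 0.4])
house = np.array([0.2, 0.8, 0.7])

def interpolate(a, b, steps=5):
    """Linearly blend between two latent points."""
    return [(1 - t) * a + t * b for t in np.linspace(0, 1, steps)]

for point in interpolate(cat, house):
    print(np.round(point, 2))
```

Every intermediate point is a "something between a cat and a house" that no one asked for directly; the surreal in-betweens fall out of the arithmetic.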
Consider DALL-E's famous "avocado armchair", generated from the prompt "an armchair in the shape of an avocado". The resulting images, chairs with avocado-like qualities, or avocados with chair-like properties, weren't individually designed. They emerged from the system's attempt to navigate conceptual boundaries it doesn't truly understand. This is remarkably similar to how Dadaists used techniques like collage and assemblage to juxtapose unrelated objects, creating new meanings through unexpected combinations.
The key difference is that while the Dadaists deliberately employed chance operations to escape the constraints of rationality, AI systems stumble into surreality despite their designers' attempts to make them rational. Their "failures" aren't failures of intention; they're failures of a system that has no intention at all, yet produces outputs that appear intentional to human observers.
This leads us to a profound reversal: perhaps the most Dadaist aspect of AI isn't when it succeeds at mimicking human creativity, but when it fails at mimicking human rationality. The moments when an AI confidently generates nonsense, like GPT models' tendency to "hallucinate" non-existent books or historical events with elaborate detail, are precisely when it becomes an unintentional Dadaist, undermining our expectations of logical consistency.
The Error as Artefact
Tzara's instruction for making a Dadaist poem was to cut up a newspaper article, put the words in a bag, shake it, and arrange the pieces in the order they were drawn out. This embrace of randomness was a deliberate artistic strategy. Today, when a language model produces text that seems to follow an internal logic divorced from reality, it is performing a similar operation, not with scissors and paper, but with statistical weights and probability distributions.
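Tzara's recipe translates almost line for line into code. This small sketch uses only Python's standard library; the shuffle stands in for his shaken bag of cut-up words:

```python
import random

def dada_poem(article, seed=None):
    """Tzara's recipe: cut the article into words, shake the bag,
    and lay the pieces out in the order they are drawn."""
    words = article.split()          # cut up the article
    rng = random.Random(seed)
    rng.shuffle(words)               # shake the bag
    return " ".join(words)           # arrange as drawn

print(dada_poem("the machines designed to make sense of our world", seed=7))
```

A language model's sampling step is, of course, far more structured than a uniform shuffle, but both replace authorial choice with a draw from a distribution.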
Take, for example, the phenomenon of "adversarial examples" in computer vision. Researchers have discovered that by making tiny, imperceptible changes to images, they can cause AI systems to misclassify them dramatically, seeing penguins as frying pans or mistaking turtles for rifles. These adversarial examples aren't random; they're precisely calculated to exploit the mathematical vulnerabilities of AI systems.
Viewed as technical failures, these misclassifications seem concerning. But viewed as Dadaist interventions, they become fascinating artistic artefacts that reveal the constructed nature of machine perception. Just as Duchamp's "L.H.O.O.Q." (a reproduction of the Mona Lisa with a moustache added) revealed the arbitrary nature of artistic reverence, adversarial examples expose the fragility of AI systems' seemingly authoritative classifications.
The errors become the art, not despite their deviation from reality, but because of it. They show us a machine reality that runs parallel to our own, one where the boundaries between concepts blur and reform according to alien logics. In embracing these machine misunderstandings, we gain access to a perspective unconstrained by human perceptual habits.
Artist's Toolkit: Cultivating Beautiful Errors
1. Boundary Exploration Instead of clear prompts like "a cat" or "a house," try prompts that exist at conceptual boundaries: "something between a building and an organism" or "an object that is simultaneously liquid and mechanical."
2. Translation Chains Run content through multiple AI systems in sequence: text to image in Google ImageFX, then the image described back to text in Claude, then to audio in NotebookLM, then transcribed back to text. Each translation introduces new interpretations and errors.
3. Constraint Violation Deliberately ask the system to break its own rules: request impossibilities, contradictions, or paradoxes that force the system to reconcile irreconcilable elements.
4. Dataset Collision Use terminology from multiple distinct domains simultaneously: "Write a quantum physics explanation of making medium spicy salsa" or "Design a brutalist interpretation of a fairy garden."
5. Feedback Loops Feed AI outputs back into themselves repeatedly, using each generation as the prompt for the next. Watch how the errors compound and evolve, creating increasingly strange but internally consistent worlds.
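The feedback-loop technique can be simulated without any AI at all. The "model" below is a deliberately crude stand-in, a function that drops one word and duplicates another on each pass, invented purely to show how small errors compound across generations:

```python
import random

def lossy_retell(text, rng):
    """A stand-in for any generative model: each pass drops one word
    and duplicates another, a crude proxy for interpretive drift."""
    words = text.split()
    if len(words) > 2:
        words.pop(rng.randrange(len(words)))
        words.insert(rng.randrange(len(words)), rng.choice(words))
    return " ".join(words)

def feedback_loop(prompt, generations=5, seed=0):
    """Feed each output back in as the next prompt."""
    rng = random.Random(seed)
    history = [prompt]
    for _ in range(generations):
        history.append(lossy_retell(history[-1], rng))
    return history

for i, text in enumerate(feedback_loop(
        "a horse riding an astronaut through a gallery of errors")):
    print(i, text)
```

With a real image or text model in place of `lossy_retell`, the drift is richer: each generation reinterprets the last, and the errors settle into strange but internally consistent worlds.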
Think of the Outputs First
We began by noting the paradox of our most rational creations producing fundamentally irrational outputs. But perhaps this isn't a paradox at all; perhaps it's an inevitability. Any system complex enough to model the messy, contradictory nature of human-created data will necessarily develop its own peculiar blind spots and biases. The "mistakes" aren't bugs; they're features emergent from the system's design.
This perspective transforms how we might relate to AI creativity. Instead of constantly correcting AI systems toward some idealised notion of accuracy, what if we developed collaborative practices that embrace and explore their unique perceptual quirks? What if the goal wasn't to eliminate hallucinations but to hallucinate better, more interestingly, more provocatively, more insightfully?
Artists are already beginning to work in this direction. They're creating generative systems designed not to minimise errors but to amplify them, finding the sweet spots where AI systems produce their most unexpected results. They're using prompt-writing strategies (biomimicry, Dadaist juxtaposition, lateral thinking, etc.) not just to get more accurate results, but to discover new aesthetic territories in the boundaries between concepts.
This approach reconnects us to the original spirit of Dada, not merely as an aesthetic style characterised by nonsense and juxtaposition, but as a philosophical stance that questions received categories and hierarchies. In treating AI errors as artistic opportunities rather than technical failures, we challenge the primacy of human perception and open ourselves to alternative ways of organising reality.
Perhaps the most Dadaist act in the age of AI isn't creating absurdist art with AI tools, but recognising the absurdist art that AI systems are already spontaneously generating in their attempts to make sense of our world. The machines have become the Dadaists; we are merely their audience, their interpreters, and sometimes their collaborators.
What happens, then, when we stop trying to correct the machine's vision to match our own, and instead allow our vision to be influenced by the machine? If Dadaism used art to question the foundations of rationality in the early 20th century, might AI "failures" serve a similar function for us today, revealing the contingent, constructed nature of our own categorisations and perceptions?
Ethical Considerations: Embracing Error Responsibly
Before we rush to celebrate AI errors, we should consider the implications. Not all machine mistakes are benign artistic curiosities. Some reinforce harmful stereotypes, spread misinformation, or make critical systems unreliable. We might also wonder what students are learning through the various LLMs. The key questions to ask:
Does this particular error cause harm or merely surprise?
Are we being transparent about the nature of AI-generated content?
Are we using AI errors in contexts where accuracy matters less than creative possibility?
How do we distinguish between productive artistic ambiguity and dangerous falsehood?
The New Understanding of Hallucinations
As we reconsider our relationship with machine errors, we might find that the boundary between AI failures and successes is not as clear as we assumed. What appears as nonsense from one perspective might reveal itself as a different form of sense from another, not human sense, but a computational logic with its own internal coherence.
This shift in perspective invites us to see ourselves differently too. Just as machine learning systems reveal their biases and blind spots through their errors, human perception is equally shaped by the particulars of our evolution, culture, and individual experience. Perhaps in the hallucinations of our machines, we can glimpse a truth about ourselves: that what we call "reality" is always a constructed model, filtered through the peculiarities of the perceiving system.
Conclusion
As we stand in the gallery of AI-generated text and machine hallucinations, a striking inversion reveals itself: the revolutionary artistic movement that once required human intention to reject human rationality now emerges spontaneously from systems designed to be perfectly rational. While Tristan Tzara had to instruct his followers in techniques for creating randomness, our neural networks produce surrealism as their default setting when venturing beyond their training parameters. This accidental inheritance represents the most exquisite irony of our technological age: our pursuit of artificial intelligence has inadvertently created perfect Dadaist machines.
This phenomenon hinges on a fundamental difference in native states. Humans are inherently meaning-making creatures. Our brains evolved to detect patterns, establish categories, and impose order even where none exists; we see faces in clouds and narratives in random events. Creating truly meaningless art requires us to deliberately short-circuit these tendencies through techniques like automatic writing, cut-ups, or chance operations.
AI systems, conversely, begin from a position of indifference to meaning. They detect statistical correlations without comprehending them, mapping patterns without understanding their significance. Their "rationality" is entirely constructed through careful engineering and curation of training data.
What we perceive as AI "hallucinations" are simply what happens when these systems encounter situations that fall outside their carefully constrained parameters: they revert to their natural state of associative processing without the guardrails of human oversight. Their surrealism isn't achieved; it's innate.
This insight transforms our initial question about the artistic status of machine errors. What we've labelled as "failures" might better be understood as glimpses of a genuinely non-human perspective, one unburdened by our conceptual hierarchies, unbothered by logical contradictions, and capable of connections that no human artist would likely make because we're too deeply embedded in conventional modes of thinking. The AI doesn't need to try to be Dadaist; it simply is Dadaist whenever it isn't successfully imitating human coherence.
DIY Prompts: Generate Your Own AI Dada
Try these prompts with your favourite image generation system to explore machine surrealism:
"A chess game between liquid and solid states of matter"
"The sound of purple, visualised by someone who can taste colours"
"A tool that serves no purpose but appears extremely important"
"The inside of a clock that measures something other than time"
"A portrait where the subject is simultaneously present and absent"
What might emerge, then, if instead of perceiving these systems primarily as tools for replicating human forms of rationality, we approached them as partners offering access to alternative perceptual frameworks? What if we developed methodologies not for eliminating machine hallucinations but for exploring them, refining them, and collaborating with them?
In this light, the apparent "failures" of artificial intelligence reveal themselves not as technical problems to be solved, but as aesthetic opportunities to be explored, portals into perceptual worlds operating under different logics than our own. And in that exploration, might we not gain new perspectives on the contingent, constructed nature of human categories and perceptions as well? If Dada used nonsense to challenge the foundations of early 20th century rationality, perhaps AI hallucinations offer us a similar opportunity today, to see our own world through alien eyes, and in that seeing, to question what we have too long taken for granted.
Keep exploring
Phil