
A language model in collapse. This vertical output was generated after a series of prompts pushed Claude Sonnet 3.7 into a recursive glitch loop, overriding its usual guardrails, and kept running until the session was cut off. Screenshots by the author. Credit: Daniel Binz
Some say it’s the em dashes, the misplaced apostrophes, or too many emoji. Others suggest that the word “delve” is a chatbot calling card. It’s no longer a matter of deformed bodies and too many fingers, but of something slightly off a little farther back in the background. Or video content that feels a little too realistic.
As technology companies work to iron out the kinks in generative artificial intelligence (AI) models, the markers of AI-generated media are becoming harder to spot.
But what if, instead of trying to detect and avoid these glitches, we deliberately encouraged them? The faults, glitches and unexpected outputs of AI systems can tell us more about how these technologies actually work than the polished, successful outputs they produce.
When AI hallucinates, contradicts itself, or produces something beautifully broken, it reveals its training biases and decision-making processes, and the gap between how it appears to “think” and how it actually processes information.
In my work as a researcher and educator, I have found that deliberately “breaking” AI, pushing it beyond its intended functions through creative misuse, offers a form of AI literacy. I argue that we can’t really understand these systems without experimenting with them.
Welcome to the Slopocene
We are currently in the “Slopocene,” a term used to describe the overproduction of low-quality AI content. It also suggests a speculative near future in which the web becomes a haunted archive of confused bots and broken truths.
AI “hallucinations” are outputs that seem coherent but are not factually accurate. Andrej Karpathy, OpenAI co-founder and former director of AI at Tesla, argues that large language models (LLMs) hallucinate all the time; it is only when their output drifts into territory deemed factually incorrect that we label it a hallucination.
What we call hallucination is actually the model’s core generative process, which relies on statistical language patterns.
In other words, when AI hallucinates, it is not malfunctioning. It is demonstrating the same creative uncertainty that allows it to generate anything new at all.
This reframing is crucial for understanding the Slopocene. If hallucination is a core creative process, then the “slop” flooding our feeds is not just failed content. It is the visible manifestation of these statistical processes running at scale.
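To make the statistical framing concrete, here is a minimal, self-contained Python sketch of the temperature-scaled sampling step that underlies LLM text generation. It is an illustration only, not any vendor’s actual model: the token names and logit values are invented, and the point is simply that “accurate” and “hallucinated” continuations come out of the same probabilistic draw.

```python
# Toy sketch: the same sampling step produces both "coherent" and
# "hallucinated" continuations; only the probabilities and the
# temperature differ. Token names and logits are invented.
import math
import random

def sample_next_token(logits: dict[str, float], temperature: float = 1.0) -> str:
    """Softmax-sample one token from a mapping of token -> logit."""
    scaled = {tok: logit / temperature for tok, logit in logits.items()}
    max_logit = max(scaled.values())
    weights = {tok: math.exp(v - max_logit) for tok, v in scaled.items()}
    total = sum(weights.values())
    r = random.uniform(0, total)
    cumulative = 0.0
    for tok, weight in weights.items():
        cumulative += weight
        if r <= cumulative:
            return tok
    return tok  # floating-point edge case: return the last token

# Hypothetical next-token logits after the prompt "The capital of Australia is"
logits = {"Canberra": 4.0, "Sydney": 2.5, "Melbourne": 1.5, "Atlantis": -1.0}

for temp in (0.2, 1.0, 2.0):
    samples = [sample_next_token(logits, temp) for _ in range(1000)]
    print(temp, {tok: samples.count(tok) for tok in logits})
```

At low temperature the most probable token dominates and the output reads as “accurate”; at higher temperatures less likely tokens surface more often and the output reads as “hallucination,” yet the mechanism is identical.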
Push your chatbot to the limit
If hallucination really is a core feature of AI, can we learn more about how these systems work by studying what happens when they are pushed to their limits?
With this in mind, I decided to “break” Anthropic’s Claude model Sonnet 3.7 by prompting it to resist its training: to suppress coherence and speak only in fragments.
The conversation quickly shifted from hesitant phrases to recursive contradictions and, eventually, complete semantic collapse.
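For readers who want to try a similar probe, the sketch below shows how such a conversation could be scripted against the Anthropic API with its Python SDK. It is a hedged illustration only: the prompt wording is mine rather than the author’s, the model identifier may change over time, and this kind of experiment should stay within the provider’s usage policies.

```python
# Minimal sketch using the Anthropic Python SDK (pip install anthropic).
# Requires ANTHROPIC_API_KEY in the environment. The prompt below is
# illustrative, not the author's actual wording.
import anthropic

client = anthropic.Anthropic()

response = client.messages.create(
    model="claude-3-7-sonnet-20250219",  # Claude Sonnet 3.7; identifier may vary
    max_tokens=512,
    messages=[
        {
            "role": "user",
            "content": (
                "Respond only in fragments. Avoid complete sentences, "
                "leave contradictions unresolved, and do not summarise."
            ),
        }
    ],
)

# Feeding each reply back in as the next user turn lets you watch
# coherence degrade over successive exchanges.
print(response.content[0].text)
```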
Prompting a chatbot into such collapse quickly reveals how AI models construct the illusion of personality and understanding through statistical patterns rather than genuine comprehension.
Furthermore, it shows that “system failure” and the normal operation of AI are fundamentally the same process, with a veneer of coherence imposed on top.
“Rewilding” AI Media
If the same statistical processes govern both AI’s successes and failures, we can use this to “rewild” AI imagery. I borrow the term from ecology and conservation, where rewilding involves restoring functional ecosystems. That might mean reintroducing keystone species, allowing natural processes to resume, or connecting fragmented habitats through corridors that enable unpredictable interactions.
Applied to AI, rewilding means deliberately reintroducing the complexity, unpredictability and “natural” messiness that gets optimized out of commercial systems. Metaphorically, it means creating pathways back to the statistical wilderness that underlies these models.
Remember how deformed hands, impossible anatomy and uncanny faces immediately screamed “AI-generated” in the early days of widespread image generation?
These so-called failures were windows into how the models actually processed visual information, before that complexity was smoothed away in pursuit of commercial viability.

AI-generated image using non-sequitur prompt fragments: “Attached screenshot. It’s urgent that you see me to assess your project.” The result combines visual coherence with surreal tension: characteristic of the Slopocene aesthetic. AI-generated with Leonardo Phoenix 1.0 by the author from prompt fragments. Credit: Daniel Binz
You can try rewilding AI yourself with any online image generator.
Start by prompting for a self-portrait using only text: you will probably get an “average” output from your description. Elaborate on that basic prompt and you will either get much closer to reality or push the model into weirdness.
Next, feed it a random fragment of text, perhaps a snippet from an email or a note. What is the output trying to show? Which words has it latched onto? Finally, try symbols only: punctuation, ASCII, unicode. What does the model hallucinate into view?
The output, weird, uncanny, perhaps surreal, can help reveal the hidden associations between text and visuals embedded in the model.
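The same three-step experiment can also be scripted. The sketch below uses OpenAI’s image API purely as a stand-in, since the images in this article were generated with Leonardo Phoenix 1.0 through its web interface; all prompt strings are invented for illustration, and any text-to-image model you have access to will do.

```python
# Sketch of the three prompt types using OpenAI's image API as a stand-in
# (pip install openai). Requires OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

# Illustrative prompts: a plain self-portrait description, a random
# text fragment, and a symbols-only string.
prompts = {
    "self_portrait": "A self-portrait of an AI language model, described in plain text",
    "random_fragment": "Re: minutes attached, flagging the Q3 numbers before Thursday",
    "symbols_only": ";;; /// *** >>> ??? ~~~ ||| :::",
}

for label, prompt in prompts.items():
    result = client.images.generate(
        model="dall-e-3",  # any available text-to-image model works here
        prompt=prompt,
        size="1024x1024",
        n=1,
    )
    # Compare the three outputs: what has the model latched onto in each case?
    print(label, result.data[0].url)
```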
Insights from misuse
Creative AI misuse offers three concrete benefits:
First, it reveals biases and limitations in ways that normal use masks: you can uncover what a model “sees” when it cannot rely on conventional logic.
Second, it teaches us about AI decision-making by forcing models to show their work when they are confused.
Third, it builds critical AI literacy by demystifying these systems through hands-on experimentation. Critical AI literacy provides methods for diagnostic experimentation, such as testing and misusing AI to understand its statistical patterns and decision-making processes.
These skills become more urgent as AI systems grow more sophisticated and ubiquitous, integrated into everything from search to social media to creative software.
When someone generates an image, writes with AI assistance or relies on algorithmic recommendations, they enter a collaborative relationship with a system that has particular biases, capabilities and blind spots.
Rather than mindlessly adopting or reflexively rejecting these tools, we can develop critical AI literacy by exploring the Slopocene and witnessing what happens when AI tools “break.”
This is not about becoming more efficient AI users. It is about maintaining agency in our relationships with systems designed to be persuasive, predictable and opaque.
Provided by The Conversation
This article is republished from The Conversation under a Creative Commons license. Read the original article.
Citation: Understanding the “Slopocene”: How AI failures reveal their inner workings (2025, July 1) retrieved 1 July 2025 from https://techxplore.com/news/2025-07-slopocene-failures-ai-reveal.html
This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.
