Holiday special - creativity, AI, and slop
Published by Vlasta on December 20th.
One of my college professors liked to frame originality in a way that stayed with me long after graduation. Paraphrased, his point was this: if a master’s thesis merely retraces the path of another, it fails; if it meaningfully combines two existing theses, it earns a degree; and if it successfully weaves together three, it deserves special recognition.
Creativity
The lesson was about how small the difference is between being a copycat and making a valuable contribution. Originality and creativity are not born from isolation, but from the skillful integration of what already exists into something that carries a new structure, purpose, or insight. Sometimes, adding a tiny new part is just enough.
Nearly all creative work builds on what came before, and genuinely new elements tend to emerge slowly rather than all at once. A neat illustration appears in the movie Back to the Future (I hope you still know that one, guys): when Marty McFly plays "Johnny B. Goode" in the 1950s, the audience responds positively because the song still fits within a musical vocabulary they can recognize. But as he begins to layer in elements borrowed from much later hard rock and metal (distortion, aggressive rhythm, exaggerated performance) the crowd grows confused. The music has crossed a threshold where too much of it is unfamiliar at once, and without a shared context, it stops registering as music at all.

Marty McFly in Back to the Future
Popular works often succeed precisely because they recombine familiar elements in measured ways. Harry Potter draws on a long lineage of earlier material: British boarding school stories, classical myth, medieval alchemy, fairy tales, and modern fantasy. Yet it blends them into a form that feels new without being alien. Even foundational religious texts follow a similar pattern: the Bible incorporates and reworks older myths, laws, and narratives from earlier Near Eastern traditions, reshaping them into a new theological framework rather than inventing everything from scratch. At the opposite extreme lie figures who deliberately pushed so far ahead (or sideways - time will tell) of prevailing expectations that recognition itself became the challenge. Andy Warhol's work, for example, was initially dismissed by many (and still is by some) as trivial or not art at all, precisely because it broke too sharply with established ideas of authorship, craft, and meaning. Or if you want a more recent example, just two words: Italian brainrot - and let's not mention it ever again.
Seen this way, creativity does not need to be treated as a mystical spark reserved exclusively for humans. It can be understood more pragmatically as a skill: the ability to balance order and chaos. Order represents the shared understanding of how things are "supposed" to look or sound - the conventions, styles, and structures that make a work legible at all. Chaos enters as deviation: a new rhythm, an unexpected color or image effect, a rule bent or broken. Creative judgment lies in introducing just enough novelty to be interesting without dissolving the underlying order that allows the work to be recognized and understood in the first place.
How current AI image generation works
Most modern AI image generators are based on diffusion models. In simplified terms, learning data for these systems is created by taking existing images and gradually adding noise until the original image is almost completely destroyed. The model is then trained to reverse this process: starting from noise and step by step predicting how to remove it so that a coherent image emerges. When generating a new image, the model begins with random noise and iteratively refines it, guided by a text prompt and statistical patterns it has learned from vast numbers of images.
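The noise-then-denoise loop can be sketched in a toy one-dimensional form. This is an illustration of the idea only: the "denoiser" below is a hand-written stand-in that pulls samples toward a single known value, whereas a real diffusion model uses a trained neural network, thousands of dimensions, and a carefully scheduled noise process.

```python
import numpy as np

rng = np.random.default_rng(0)
target = 4.0   # stands in for "the training data"
T = 50         # number of diffusion steps

# Forward process: gradually add noise until the signal is destroyed.
def add_noise(x, steps, rng):
    for _ in range(steps):
        x = x + rng.normal(0.0, 1.0)
    return x

noisy = add_noise(target, T, rng)  # original value is drowned in noise

# Reverse process: start from pure noise and iteratively "denoise".
# The toy denoiser estimates the noise as the error relative to the
# data and removes a fraction of it at each step.
def generate(rng):
    x = rng.normal(0.0, 5.0)          # start from random noise
    for _ in range(T):
        predicted_noise = x - target  # a trained network does this part
        x = x - 0.1 * predicted_noise
    return x

sample = generate(rng)  # ends up close to the data distribution
```

Even in this stripped-down form, the shape of the process is visible: destruction is easy and exact, while generation is a long sequence of small corrective steps guided only by what the "model" has absorbed about the data.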
What is remarkable is not just that this approach works quite well, but that it works at all. There is no explicit understanding of objects, scenes, or meaning. There is only an accumulation of correlations learned through training. Yet out of this process emerges imagery that appears structured, intentional, and often aesthetically convincing. From a purely engineering perspective, the success of diffusion models remains somewhat surprising: a method built on noise reduction ends up producing images that people readily interpret as meaningful representations of the world.
Limitations of the current approach
At the same time, the constraints of this method are fundamental rather than incidental. An image diffusion model has never interacted with the world it depicts. It has no bodily experience, no sense of physical causality, and no direct understanding of space, weight, or function. Everything it "knows" about reality was learned only by looking at images.
The way it works (removing noise) also affects how the model can change or refine its outputs. Once the diffusion process is underway, making large, structural revisions is difficult and often impossible. The model cannot easily decide to move an object to a different part of the image, rethink the composition, or reframe the scene from a new perspective in the way a human artist can. Instead of revisiting earlier decisions, it tends to smooth and adjust what is already there, reinforcing existing structure rather than reorganizing it. The result is a system that excels at local refinement and stylistic coherence, but struggles with deliberate, high-level rethinking.
AI slop
Beyond the technical limitations already discussed, current image generators also lack the ability to supply meaningful intent or context on their own. Any purpose an image has must be provided externally by a human, and when that effort is absent or minimal, the result is what is commonly labeled AI slop. In this sense, slop is not defined by the use of AI, but by the absence of human judgment: images generated without a clear reason for existing, without an audience in mind, and without responsibility for what they communicate. A typical example is the mass production of generic "beautiful" images (fantasy portraits, dramatic landscapes, hyper-polished illustrations) posted or reused without explanation, selection, or curation. Individually, such images may be competent or even impressive, but collectively they form a flood of interchangeable visuals that carry no intent.
AI-generated images are not slop when a human brings a clear intention and message to the process. For someone who knows what they want to communicate but lacks the manual skill, time, or consistency to draw or render it themselves, an image generator can function as a legitimate expressive tool. A writer visualizing a fictional world, a designer exploring concepts, or an educator illustrating an abstract idea may use AI to externalize what already exists in their mind. Here, the images are not ends in themselves but vehicles for meaning, guided by the deliberate choices of a human. Even so, it is good practice to acknowledge the use of AI.
Using AI image generators well, however, is far from trivial. Producing a coherent and satisfactory set of images often requires hours of experimentation, many iterations, careful prompt refinement, selective discarding of outputs, and sometimes manual editing or post-processing. The user must learn how the system behaves, where it tends to drift, and how to steer it back toward the intended result. In that sense, effective use of image generators may itself become a new kind of skill, a kind of "managerial" skill - knowing how to direct the AI and curate the results, deciding which images are worth making at all.
Conclusion
We appear to be living through a technological shift comparable in scale to the Industrial Revolution. Then, as now, new tools provoked fear, resistance, and justified anxiety about lost livelihoods and disrupted professions. Many forms of manual labor were displaced, yet society as a whole eventually became more productive, with machines taking over tasks that were repetitive, dangerous, or inefficient. There is no guarantee that the transition we are entering will be painless, but history suggests that outright refusal rarely leads to good outcomes. One can cautiously hope that this transformation, too, will find a stable equilibrium.
At this moment, the more constructive response seems to be neither rejection nor blind embrace, but engagement. Rather than dismissing AI by pointing out its failures, it is more useful to understand how it works, where it excels, and where it breaks down. One should learn to use it thoughtfully, explore its limits, and resist the temptation to rely on it completely, especially while the technology is still in its infancy. Such an approach lets it remain a tool rather than a crutch.
It has become fashionable to ridicule AI-generated content, particularly when it produces obvious errors or awkward results. Yet the more interesting fact is that, much of the time, it does not. The ease with which AI can generate outputs that are difficult to distinguish from human-made work should prompt less laughter and more reflection. It reveals how thin the surface signals of "humanness" often are, and how little is sometimes required to create the appearance of being human (this reminds me of a section in The Hitchhiker's Guide to the Galaxy, in which the mice wanted to replace Arthur's brain with a simple mechanical one - Douglas Adams definitely was an artist who walked very close to the chaotic side of creativity, in a good way).

Those pesky mice
And yes, this blog post too was created with the help of an AI. That way, it only took me two and a half hours to write instead of the otherwise expected 10+. I hope it is more readable this way too.
Recent comments
Even though you used AI (because you're not too fluent in English) to write this article, you did a great job talking about the craftsmanship of monumental artworks and entertainment as well as the age of AI.
I personally use AI for helping me with Linux and coding in general; I use duck.ai for general use questions, and ChatGPT to write code.
AI is a useful tool, except where people abuse its intended purpose, especially the AI-generated videos on TikTok, Instagram and Facebook of cat tales and Jesus-shrimp and crying military babies with cringe-looking shoes.
AI, like many things in life, has its pros and cons.
It's up to each person to decide how to use it in a responsible way.
BooTeresa1006
on December 21st
5
I consider myself anti-AI, but in the sense that I want to use it as little as possible, and not support content made with virtually no human thought. I’m fully aware that AI can be used for good, and I’ll never say never to the idea that I might one day have a legitimate use for it, but most of the uses of AI that I’ve seen just feel like a huge waste of power.
Frost
about 15 hours ago
0
Wait is the team behind this pro ai???? huh???