Hooroo Jackson

The Preparatory Century is Over

From Generative Animation: Volume 4 of The New Machine Cinema

From McCay to Machine

Before it became an industry, before it crystallized into departments and pipelines and armies of specialized workers, animation emerged as a strange new fact about images: that the dead could be made to move. The phenakistoscope, the zoetrope, the praxinoscope were parlor toys proving a principle before anyone fully understood its implications. 

It has been said that animation’s early history was almost illogical. It was too specialized, too sophisticated; its boom arrived at the very moment of the camera’s own emergence.

By the first decade of the twentieth century, pioneers such as J. Stuart Blackton and Émile Cohl were at work; Cohl’s Fantasmagorie (1908) showed primitive lines moving by their own logic, unbound from photographed reality. These were rough and unstable works, and that is precisely what matters about them historically.

People tend to take the most industrialized, capital-heavy, institutionally stabilized version of animation, that of, say, Disney, Pixar, or the large Japanese studios, and mistake it for the definition of the form itself. This is the same mistake people make when they confuse Hollywood with cinema. Hollywood is one industrial formation of cinema. It is not cinema’s ontology. So too with animation.

If we want to understand where AI belongs in animation history, we have to clear away that confusion first and return to the ontology of animation itself. We have to return to the frontier, where a continuity appears. The history of animation is the history of the form continually streamlining itself, reconfiguring its labor, simplifying its procedures, and changing the relationship between artist, image, and motion. Every major stage of the evolution of animation has involved a restructuring of that relationship. Artificial intelligence is simply the latest, most accelerated expression of this evolution already in progress.

McCay and the Founding Paradox

Winsor McCay is the correct starting point for animation as art.

With Gertie the Dinosaur (1914), McCay stood before a vaudeville audience and drew a dinosaur on stage in real time; the projected creature then obeyed his commands, to their delight. Gertie has temperament, rhythm, responsiveness, appearing to possess an interior life. 

McCay produced that illusion essentially alone, drawing thousands of images by hand, on rice paper, over backgrounds he redrew for each frame. The labor was so staggering that Gertie was as much an endurance feat as an artistic statement. Who would actually sit down and draw a cartoon frame by frame before cartoons even existed? McCay proved the medium through sheer impossibility. 

That paradox sits at the foundation of the entire medium. Animation offers the greatest imaginative freedom and demands the greatest physical labor per second of screen time. A live-action filmmaker records the physical world around him. An animator must build every frame of the world by hand. The freedom is total and the cost is total; this is the contradiction the rest of animation history keeps trying to solve.

After this, the studio system took McCay’s paradox and scaled it. 

The Fleischer brothers introduced the rotoscope in 1915, tracing over live-action footage to produce more fluid motion on screen. This was the first major instance of what would become animation’s recurring negotiation: importing mechanical assistance to reduce the labor of pure hand-drawing at the cost of imaginative autonomy. The rotoscope tethered animation to live-action reference, exchanging one set of constraints for another. There at its outset, animation was already finding ways to streamline itself.

But it was Walt Disney who transformed the methodology into an industrial system. By dividing production among key animators, in-betweeners, ink-and-paint departments, background artists, camera operators, and specialists of every kind, Disney made feature-length animation economically viable at scale. Snow White and the Seven Dwarfs (1937) locked in a public idea that would haunt the medium for nearly a century: animation must equal a visible and painstaking human exertion. Its cost and its production must be front and center, or at least run in parallel with the work itself. The discussion becomes not only about Snow White, but about its innovations, its development, its use of the multiplane camera, its rotoscoping.

This equation, the myth of a genius industrialist building an empire, has governed animation’s moral imagination ever since.

Once the Disney model triumphed, people began to confuse the pipeline with the ontology. Thousands of drawings. Hundreds of workers. Years of refinement. The image came to be treated as a monument to toil. This was historically understandable but it was never philosophically correct.

The cost of the studio solution was authorship. The animator who drew Snow White’s face was not the animator who drew the dwarfs, who was not the artist who painted the forest, who was not the technician who photographed the cels. The director coordinated, but the hand, the very unit of animation, was now collective. To be an animator increasingly meant to be one hand among many, contributing to a vision no single person could fully own. Factory work had been elevated to art, and in the process the artform itself became identified with the factory.

Animation pioneer McCay himself famously said as much at an industry gathering he was invited to in the late 1920s: “Animation should be an art, that is how I conceived it. But as I see what you fellows have done with it is make it into a trade, not an art, but a trade. Bad luck.”

His bitterness might have set the tone for the subsequent century as a warning call. Rather than sit within that binary, we have come to expand the definition of the arts to accommodate its largest, industrial craft. 

The implications would only compound. Industrial animation required enormous capital. Capital required commercial returns. Commercial returns narrowed the imaginative range the medium’s freedom theoretically permitted, tying financial obligation only to what would sell. The production problem was solved by creating a commerce problem that was equally constraining.

The artform of industry thereby becomes whatever can be worked around these financial considerations. There, you are no longer getting art, but a certain artform of capitalism itself. Outside definitional struggles, this becomes rewarding to study in its own right: the window is so narrow, the evolutionary constraints so crushing, that what emerges will innately be wonders or monstrosities. This continues to this day in cultural obsessions over box office totals. The film itself becomes almost incidental.

Warner Bros. and Tex Avery offered one of the earliest serious alternatives to Disney realism, forming slapstick cartoons as we know them. Where Disney pursued ever-greater fidelity to physical motion, Avery embraced animation’s right to violate the physical world entirely. Characters stretched beyond anatomy, ignored gravity, destroyed the frame, reassembled themselves. Facial expressions were exaggerated, sound effects were ridiculous. It was all cheaper than Disney’s realism, but the cheapness became a philosophy of its own. 

Animation should do what live-action cannot.

UPA pushed this further. With films like Gerald McBoing-Boing, the medium was stripped to flat color, graphic abstraction, angular design, and minimal movement. This too was partly economic necessity. But in animation history, economic necessity repeatedly becomes aesthetic doctrine; it is almost standard practice. When you cannot afford the dominant model, you theorize your way out of it. The result became the aforementioned wonder emerging from constraint, forming an entirely new proposition about what animation is for.

This pattern repeats continually across the entire medium. A production bottleneck appears. Someone finds a shortcut. The shortcut becomes an aesthetic. The aesthetic becomes a philosophy. Then history retroactively acts as though the philosophy came first. In the brutal process of history, the artist is never granted a continuum of process. 

Ralph Bakshi’s films showed the same tension from another direction. Heavy rotoscoping functioned as both crutch and style, an unstable attempt to push animation toward adult material and greater narrative scale without access to the old industrial means. Bakshi revealed that animation’s cultural ceiling was imposed by the industry’s commercial structure, not by any limitation of the form itself. Don Bluth’s break from Disney represented the last major effort to preserve classical hand-drawn prestige outside the Disney machine. His wager was that labor itself could still be felt by the audience as quality. He was right about the quality and wrong about the economics. By the 1990s, theatrical hand-drawn animation at that level had become structurally untenable for almost anyone. Soon it would become untenable even for Disney.

This represents the real continuity: animation keeps trying to preserve imaginative freedom while lowering the production burden required to achieve it. The continuum begins to emerge. 

Television made the issue brutal. Hanna-Barbera demanded volume: episode after episode, delivered on schedule, at a fraction of theatrical budgets. The response led to a production grammar built around reduction. Characters moved only their mouths while their bodies remained static. Walk cycles repeated. Backgrounds looped. In-betweens vanished. Labor was shaved wherever it could be shaved.

More people saw this stripped-down form than ever saw Disney’s full-animation ideal, to the point where one could watch fully fledged animation masterpieces and find that they felt wrong. 

The cheapest version became the most culturally dominant and in doing so, television animation exposed something essential: the medium could survive massive reductions in labor and still remain animation. 

Every stock cycle, every held cel, every repeated background, every cut corner was a small act of labor reduction. The logical endpoint of that trajectory is not hard to see. The form has always been moving toward doing more with less. AI is simply the point where “less” becomes the most radical, optimized, and logical iteration of the artform.

By the 1970s and 1980s, American studios were offshoring portions of animation labor to South Korea, Taiwan, the Philippines, and elsewhere. The artistic decisions stayed in Los Angeles while the physical work moved to wherever it was cheapest. This break was perhaps more radical than any evolution in technology: the work itself became distant and dispersed.

Anime emerged as one of the strongest historic proofs that the Disney model never defined animation in the first place. Beginning with Osamu Tezuka and Astro Boy, Japanese animation developed under intense material constraints.

Limited frame rates, held cels, dramatic camera moves over static images, emphasis on composition, mood, timing, and impact over continuous motion: this was a real reorganization of the medium. 

This destroyed, at least in principle, the Western assumption that more labor equals better animation. Scarcity has continually mutated the aesthetic language of animation, shattering any notion of nostalgic purity of form. We would have to go back to some five-year window in the 1920s or 1930s when the artform went full bore, preceding its streamlining; yet that is far from what any present-day purist is calling for.

Continually, new aesthetic laws emerge because new economic conditions demand them. Under one system, the animator is a choreographer of motion. Under another, the animator is equally an illustrator, editor, composer of stillness, architect of emphasis.

That is why anime is central to this essay. It proves that animation was never identical with one dominant labor model. The Disney idea that animation’s highest form is maximum fluidity sustained by maximum manpower was only one branch during one brief window. 

Computer animation should already have shattered the labor myth; instead it merely adopted it and rewrote the medium to include itself. It is important to note that every argument against AI animation was made against computer animation first.

Ivan Sutherland’s Sketchpad (1963) and the early experiments at Bell Labs demonstrated that images could be generated and manipulated mathematically. Peter Foldès’s Hunger (1974) used early keyframe interpolation; Lucasfilm’s The Adventures of André and Wally B. (1984) showed that a computer could simulate motion blur and lighting. 

When Pixar released Toy Story in 1995, the entire CG pipeline replaced cels with polygons, ink with shaders, and hand-painted backgrounds with procedural textures. For the first time a feature-length animated film required no physical drawings or physical models.

The cel disappeared. The drawing disappeared. The hand no longer produced the frame directly. Instead, the animator manipulated rigs, models, lighting setups, simulations, and rendering systems. 

This should have produced a philosophical crisis. But that is not what happened. The culture quietly transferred the romance of the hand to the workstation. The theology remained intact; only its iconography changed. Certainly the arguments against it in favor of hand-drawn purity were logical, but the logic was not accommodated. 

The “handcrafted” aura survived because the labor survived. Hundreds of workers still spent years inside a specialized pipeline. The suffering was still there, just relocated into software. So the public and the profession accepted CGI as legitimate animation. This is revealing. It proves that what was actually being defended was never the hand in any literal sense. It was the labor floor. As long as the work remained arduous, expensive, and socially restricted, the medium felt morally safe. 

But more than that, the arguments were never decided through ideological battles; they were decided in the marketplace, and the ideology was then retroactively reshaped to justify the outcome as inevitable.

Stop-motion clarifies the same point from another angle. A puppet moved through increments is no less animated than a drawing, a cel, or a digital rig. Different material, same underlying proposition: life is imposed on the inert. And yet the medium that was born to free cinema from the tyranny of the physical set and the live actor has, at every stage, imposed its own set of material assumptions about what counts as legitimate. Animation has never been so at odds with its own core principles.

Spider-Man: Into the Spider-Verse and its sequel are among the most important animated films of the last decade not only because they are beautiful, but because they accidentally prove too much. These are CGI films designed to feel hand-drawn. Ink lines, halftone dots, variable frame rates, graphic stylization, printed-comic texture. With these films, style has been separated from process.

The result was widely celebrated as a creative breakthrough, but it was also, inadvertently, the strongest possible argument for AI animation.

Spider-Verse proved that style is separable from method. If a CGI pipeline can deliver a hand-drawn feel, then the production method is already irrelevant to the viewer’s encounter with the film. 

The audience experiences animation through the image, the rhythm, the affect, the directorial result. The worker remains invested in the method because labor conditions, expertise, and professional identity hang on it, but the image itself has already broken free.

Once that happens, a historical line snaps. A hand-drawn look can emerge from a pipeline that never touched a pencil. A stop-motion feel can be generated without a single puppet. A retro anime rhythm can be synthesized without passing through the original industrial conditions that birthed it. Style has become variable.

A Very Long Carriage Ride took that principle to its logical conclusion. The same film existed simultaneously in two distinct animation ontologies, classic 2D and stop-motion, generated from the same AI production pipeline. At that point, visual language can no longer be what makes the film fundamentally itself. What remains underneath is directorial vision, narrative architecture, and the film’s core identity across forms.

This is one of the most important developments in animation history. Not because a new style appeared, but because style itself ceased to be ontologically fixed.

Every major shift in animation history follows this exact pattern: a new technology reduces one form of labor. The freed capacity is redirected toward either aesthetic ambition or economic efficiency. The community then absorbs the shift by arguing that the remaining labor is what “really” matters.

The rotoscope reduced the labor of inventing naturalistic motion.

The studio system reduced the burden of individual authorship by distributing work across many hands.

Television reduced the labor of full animation.

Outsourcing reduced labor costs by moving work offshore.

CGI reduced the labor of physical drawing.

AI is the first major animation system whose central promise is to collapse the floor itself.

That is why the shock is so intense.

For the first time in animation history, a single person can direct a feature-length animated film without drawing a single frame, sculpting a single puppet, or rigging a single digital model. The labor that remains is purely directorial: vision, selection, sequencing, narrative architecture. Only the mind is left.

Contemporary critics misunderstand the historical situation. They imagine AI animation as a bad Disney, a bad Pixar, a bad anime, a bad stop-motion. They judge it by the settled standards of mature systems. But early proofs never flatter the systems they replace. Frontier works are unstable by nature. They exist to establish the new floor.

AI strikes the medium so sharply because it threatens the sacredness of that labor story, the whole mythology in which the quality of the work is measured by the visible cost of its making. But sacredness and ontology have nothing to do with each other. The hand is one historical means of producing animated motion, the means that happened to dominate for a century because no alternative existed. Animation history shows the opposite of what the labor mythologists claim: the medium has always sought ways to produce greater worlds, greater movement, greater scale, and greater flexibility with less resistance between conception and realization. Every major stage was a fight against bottlenecks.

What AI does to animation is what generative synthesis does to the image: the twenty-four drawings per second are no longer drawn at all. They are dreamed. The machine synthesizes entire sequences from a space that has never known a physical frame. This is the first animation technology that is medium-agnostic, capable of producing any visual language from the same process. It contains hand-drawn animation, and stop-motion, and CGI, and styles that have no name yet, simultaneously.

If one insists on an analogue, the nearest is the frontier era: Blackton, Cohl, McCay, the first decades when the medium was still proving what it could be. But even that analogy is incomplete. AI did not spend ten years slowly stumbling toward a basic proof. It compressed about twenty-five years of animation history into the development span of a single year. As I have posited before, AI is not reaching for an endpoint that matches traditional animation; that bar has more or less been crossed. The endpoint is to move beyond the traditional model entirely.

Here, my own animated features matter as foundational proofs.

DreadClub: Vampire’s Verdict directly answered whether AI animation could begin with coherence, stylistic identity, and emotional shape rather than stopping at a hello-world demonstration. The point was that it arrived already behaving like an authored film.

A Very Long Carriage Ride asked a more dangerous question: can one film exist in multiple valid animation ontologies at once? That was not just a stylistic experiment proving that AI could meet the stop-motion methodology at its standard practice; more than that, it showed that style has detached from identity. 

My Boyfriend is a Superhero!? pushed the parametric logic further. Beyond proving that AI could make a Pixar-style movie, the issue was no longer only style, but character ontology itself. If style can vary, what else can vary while the film remains the film? The protagonist? The social reading of a role? The identity structure of the narrative? Once animation becomes parameterized, it opens entirely new grammars.

Each production forced the grammar to mutate again, which is how animation has always advanced, at every stage of its history, under every set of tools it has ever used. We did not come out of the gate making animated films and stop there. This would be a limited reading of the technology and the circumstance we found ourselves in. We came out of the gate making animated films with new ontologies. 

It was important not to spend all available resources trying to perfectly imitate the traditional model, because a new model must generate a new grammar. Otherwise it is not a new model at all, only a nostalgic one.

The labor argument against AI mistakes one historical means for the medium itself.

The hand was a means.

The cel was a means.

The puppet was a means.

The rig was a means.

The pipeline was a means.

None of these were animation’s essence. They were successive arrangements through which animation temporarily organized the relationship between imagination and motion. The medium kept moving and streamlining. That is the real continuity.

Winsor McCay drew thousands of images to bring Gertie to life. I directed thousands of images to bring my films to life. McCay’s hand held the pencil. Mine held the machine. The question is not whether those are identical acts, the question is whether the difference disqualifies one from animation history. The history answers no.

More than that, it answers something stronger.

Animation was always moving toward this horizon: greater plasticity, greater speed, greater mutability, greater authorship, less resistance between intention and motion. The hand-drawn era, the studio era, the television era, the anime era, the CGI era, none of them were final resting places. They were stages in a much longer process of the medium trying to solve its own founding paradox.

McCay had total imaginative freedom and an almost impossible production burden. The studio system reduced the burden by fracturing authorship. Television reduced the burden by reducing motion. Anime reduced the burden by reorganizing the image. CGI reduced the burden by virtualizing the frame. AI reduces the burden by treating motion itself as generative rather than manually built.

Seen this way, AI is not outside animation history, nor after it, nor against it. It is the latest stage in animation’s longest tendency: the effort to preserve imaginative freedom while stripping away the resistance that made that freedom historically unbearable.

The preparatory century is over.

Animation arrives upon the next stage.