Welles’ Law and the Ogden Nash Dilemma

by David Benjamin 

“What finally saved the movies was the introduction of narrative.”

— Arthur Knight, The Liveliest Art

 

MADISON, Wis.— Most contemporary writers crib from the movies. I do so voluminously and shamelessly. I grew up watching movies. I was the first kid in my grade to shlep downtown and go to the pictures all by myself. My first solo flick, at age eleven, was Peter Sellers in The Mouse That Roared.

Years later, reluctantly and inexpertly, I started covering the technology beat. I even ghostwrote a book about inventions. This might explain why I associate the recent death of movie maker William Friedkin with Moore’s Law. 

Moore’s Law? Okay, it all started in 1965, when Gordon Moore, who went on to co-found the semiconductor giant Intel, noticed that the transistor count in an integrated circuit doubles about every two years. Since then, Moore’s Law has explained the leapfrog acceleration of many technologies—like RAM and flash memory, sensor electronics and how many pixels can dance on the display of a digital Nikon.

What’s this got to do with movies? Well, if you’ve taken a film course, you know that cinematic method has always trailed its technology. Think of Singin’ in the Rain, a movie whose story unfolds at the moment that “talkies” killed the silent-film era and Hollywood scrambled to reinvent an entire industry—overnight.

Motion pictures date back at least to 1861, when Coleman Sellers spun images in his Kinematoscope. Twenty-seven years later, Tom Edison’s assistant, William Kennedy Laurie Dickson, combined Edison’s phonograph with George Eastman’s invention of celluloid film and devised sprocket holes—those perforations along the edge of a roll of film—without which film projection would have been impossible.

Every film era has heralded technical advances—some as minor as Vaseline on the lens to make Mary Pickford appear softer, younger, more tactile—and some as revolutionary as Technicolor, Cinemascope and handheld-camera stabilizers. 

Some techstorms have flopped. Every overhyped effort to introduce 3-D films has failed, simply because folks don’t like to wear cardboard glasses at the movies. 

Some have changed everything. The psychedelic climax of Kubrick’s 2001: A Space Odyssey (1968) proved what visual effects could do. Since then, every filmmaker has had to pack digital pyrotechnics in his or her toolbox.

No one dared to claim movies as “art” until 1915, when D.W. Griffith released The Birth of a Nation, an epic twelve-reel paean to the Ku Klux Klan. Despite his racist, revanchist messaging, Griffith opened possibilities in technique, technology and narrative that his fellow filmmakers had yet to imagine.

Still, it was long after Griffith and Charlie Chaplin that film became a subject you could study in college. When movies took on the aura of art—alongside opera, drama and Renoir—the starry-eyed director ceased to see his vocation as merely to amuse and astonish the gum-chewing masses. Now, he could express himself!

There might be a distinct moment when movies became cinema and ascended to academic worthiness. When that was, I don’t know. But if I were to guess, it would fall between the 1941 release of Orson Welles’ Citizen Kane and the seminal appearance of James Agee’s film column in The Nation, in 1942. 

In Kane, Welles devised techniques that still mystify students of filmcraft. Agee, a powerful voice in American letters, dignified the scrutiny of film as art. Over the next thirty years, movie technology made epic advances while film criticism became a literary cottage industry. Filmmakers plunged into a period of experimentation, idiosyncrasy and ferment that glorified them—in the French New Wave formulation—as “auteurs.”

Film grew vastly in revenue, prestige and range, at a rate suggestive of Moore’s Law. While studios continued pumping out romances, Westerns, mysteries, costume spectacles and Doris Day, avant-garde filmmakers undertook experiments that defied Hollywood convention, tested effects never seen before and introduced a pantheon of offbeat movie stars. While defying the censorship of the 1934 Hays Code, a generation of maverick directors delighted, scandalized and confused moviegoers. Some of our movie houses turned into “art houses,” often without popcorn. We sang along with American Graffiti. We wondered what the hell Blue Velvet was supposed to be getting at.

William Friedkin was part of the revolution, but he emerged, I think, as it was losing steam—a supposition that comes back around to Gordon Moore. Lately, many high-tech analysts are saying that Moore’s Law has run its course. There is no perpetual-motion machine. Pure originality in every category of knowledge is both ephemeral and finite. Citizen Kane launched filmmaking into an era of rampant creativity, much of which exploited new technologies. But Orson Welles could have—he might have—predicted that the creative tempest he inspired would eventually face the same fate as Moore’s Law. It could not last.  

Welles, whose ego was as splendid as his talent, might say, “There’s nothing new under the sun, except what I make of it.” Call this Welles’ Law. 

Sooner or later, an art form—more accurately, a craft—runs out of novel variations. No craftsman, however, is fazed by this seeming limitation. A potter spins and fires a thousand amphorae, all the same shape, but each unique in color, texture, glaze, imagery, detail and inner glow. Variation is the lifeblood of art.

Which brings me back around to Friedkin, a director in whom I was not much interested before his obituary. In the ’70s, Friedkin drew praise for originality without being original. His métier mimicked the noir movies of the Depression. The French Connection was a feverish variation on a theme of Howard Hawks. With The Exorcist, Friedkin was acclaimed for rescuing the horror genre from B-movie status, a distinction that overlooks F.W. Murnau, who filmed his chilling masterpiece, Nosferatu, in 1922. It was only later that horror flicks went tacky and had to be “saved” by the likes of Friedkin.

If any aspect of Friedkin’s oeuvre was fresh and new, it was his ability to create moving images that are visually riveting and unsightly at the same time. Critics described his work as gritty, grubby, sleazy, grisly, visceral and “cinema vérité”. Film students—now compelled to gird their loins and view, re-view, then compose incisive commentary on Linda Blair’s spinning head or the homophobic squalor of Cruising (1980)—face what I call the Ogden Nash Dilemma. Nash once wrote, “O Duty/ Why hast thou not the visage of a sweetie or a cutie? … ” 

You gotta watch, to get your grade and to belong in the class discussion, but it’s not much fun. 

Technology and technique make filmcraft possible. Every new director has studied method and innovation in Scorsese, Spielberg, Buñuel, Truffaut, Fellini and all the other cool “auteurs.” Each wannabe director possesses more tools and tricks of the trade than all the filmmakers who’ve gone before. But adding even more tricks seems less and less likely to make better movies.

In my experience, most re-watchable films reflect the pragmatism of Welles’ Law. Fool around if you will, but don’t leave the audience behind. You can try awfully hard to do something nobody’s ever done before, but this is the media age. We’ve seen every gimmick. We’re fighting FX Fatigue. You can’t surprise us.

Or can you?

At the turn of the 20th century, when Georges Méliès cobbled together celluloid versions of Cinderella and Bluebeard, his technology was primitive. But he brought stories to life in a novel form. Now, as Moore’s Law slows and technology often proves more distraction than discovery, we can fall safely back on the material, regardless of method. We can trust the ingenuity of the storyteller, who has always been able, somehow, to take the familiar and twist it into surprise, to vary variations previously varied and “catch the conscience of the king.”