A familiar construct for students of European history is what is known as “The Long Nineteenth Century,” a period bookended by the French Revolution and the start of the Great War. The Great War. That is what it used to be called, before it was diminished by its rechristening as World War I, to distinguish it from the even more horrific conflict that was to follow just two decades hence. It is the latter that in retrospect tends to overshadow the former. Some are even tempted to characterize one as simply a continuation of the other, but that is an oversimplification. There was in fact far more than semantics to that designation of “Great War,” and historians are correct to flag it as a definitive turning point, for by the time it was over Europe’s cherished notions of civilization—for better and for worse—lay in ruins, and her soil hosted not only the scars of vast, abandoned trenches, but the bones of millions who had held dear, in their heads and their hearts, the myths those notions turned out to be.
The war ended with a stopwatch of sorts. The Armistice that went into effect on November 11, 1918, at 11 a.m. Paris time marked the end of hostilities, a synchronized moment of collective European consciousness that, it is said, all who experienced it would recall for as long as they lived. Of course, something like 22 million souls—military and civilian—could not share that moment: they were the dead. Nearly three thousand died that very morning, as fighting continued right up to the final moments when the clock ran out.
What happened next? There is a tendency to fast forward because we know how it ends: the imperfect Peace of Versailles, the impotent League of Nations, economic depression, the rise of fascism and Nazism, American isolationism, Hitler invades Poland. In the process, so much is lost. Instead, Daniel Schönpflug artfully slows the pace with his well-written, highly original strain of microhistory, A World on Edge: The End of the Great War and the Dawn of a New Age. The author, an internationally recognized scholar and adjunct professor of history at the Free University of Berlin, blends the careful analytical skills of a historian with a talented pen to turn out one of the finest works in this genre to date.
First, he presses the pause button. That pause—the Armistice—is just a fragment of time, albeit one of great significance. But it is what follows that most concerns Schönpflug, who has a great drama to convey and does so through the voices of an eclectic array of characters from various walks of life across multiple geographies. When the action resumes, alternating and occasionally overlapping vignettes chronicle the postwar years from the unique, often unexpected vantage points of just over two dozen individuals—some very well known, others less so—who were to leave an imprint of larger or smaller consequence upon the changed world they walked upon.
There is Harry S Truman, who regrets that the military glory he aspired to as a boy has eluded him, yet is confident he has acquitted himself well, and cannot wait to return home to marry his sweetheart Bess and—ironically—vows he will never fire another shot as long as he lives. Former pacifist and deeply religious Medal of Honor recipient Sergeant Alvin York receives a hero’s welcome Truman could only dream of, but eschews offers of money and fame to return to his backwoods home in Tennessee, where he finds purpose by leveraging his celebrity to bring roads and schools to his community. Another heroic figure is Sergeant Henry Johnson, of the famed 369th Infantry known as the “Harlem Hellfighters,” who incurred no fewer than twenty-one combat injuries fending off the enemy while keeping a fellow soldier from capture, but because of his skin color returns to an America where he remains a second-class citizen, one who does not receive the Medal of Honor he deserves until its posthumous award by President Barack Obama nearly a century later. James Reese Europe, the regimental band leader of the “Harlem Hellfighters,” who has been credited with introducing jazz to Europe, also returns home to an ugly twist of fate.
And there’s Käthe Kollwitz, an artist who lost a son in the war and finds herself in the uncertain environment of a defeated Germany engulfed in street battles between Reds and reactionaries, both flanks squeezing the center of a nascent democracy struggling to assert itself in the wake of the Kaiser’s abdication. One of the key members of that tenuous center is Matthias Erzberger, perhaps the most hated man in the country, who had the ill luck to be chosen as the official who formally accedes to the Armistice’s humiliating terms on Germany’s behalf, and as a result wears a target on his back for the rest of his life. At the same time, the former Kaiser’s son, Crown Prince Wilhelm von Preussen, is a largely forgotten figure who waits in exile for a call to destiny that never comes. Meanwhile in Paris, Marshal Ferdinand Foch lobbies for Germany to pay an even harsher price, as journalist Louise Weiss charts a new course for women in publishing and longs to be reunited with her lover, Milan Štefánik, an advocate for Czechoslovak sovereignty.
Others championing independence elsewhere include Nguyễn Tất Thành (later Hồ Chí Minh), polishing plates and politics while working as a dishwasher in Paris; Mohandas Gandhi, who barely survives the Spanish flu and now struggles to hold his followers to a regimen of nonviolent resistance in the face of increasingly violent British repression; T.E. Lawrence, increasingly disillusioned by the failure of the victorious allies to live up to promises of Arab self-determination; and, Terence MacSwiney, who is willing to starve himself to death in the cause of Irish nationhood. No such lofty goals motivate assassin Soghomon Tehlirian, a survivor of the Armenian genocide, who only seeks revenge on the Turks; nor future Auschwitz commandant Rudolf Höss, who emerges from the war an eager and merciless recruit for right-wing paramilitary forces.
There are many more voices, including several from the realms of art, literature, and music such as George Grosz, Virginia Woolf, and Arnold Schönberg. The importance of the postwar evolution of the arts is underscored in quotations and illustrations that head up each chapter. Perhaps the most haunting is Paul Nash’s 1918 oil-on-canvas of a scarred landscape entitled—with a hint of either optimism or sarcasm—We Are Making a New World. All the stories the voices convey are derived from their respective letters, diaries, and memoirs; only in the “Epilogue” does the reader learn that some of those accounts are clearly fabricated.
Many of my favorite characters in A World on Edge are ones I had never heard of before, such as Moina Michael, who was so inspired by the sacrifice of those who perished in the Great War that she singlehandedly led a campaign to memorialize the fallen, with the poppy as her chosen emblem, an enduring symbol to this very day. But I found no story more gripping than that of Marina Yurlova, a fourteen-year-old Cossack girl who became a child soldier in the Russian army, was so badly wounded she was hospitalized for a year, then entered combat once more during the ensuing civil war and was wounded again, this time by the Bolsheviks. Upon recovery, Yurlova embarked upon a precarious journey on foot through Siberia that lasted a month before she was able to flee Russia for Japan and eventually settle in the United States, where despite her injuries she became a dancer of some distinction.
I am a little embarrassed to admit that I received an advance reader’s edition (ARC) of A World on Edge as part of an early reviewer’s program way back in November 2018, but then let it linger in my to-be-read (TBR) pile until I finally got around to it near the end of June 2020. I loved the book but did not take any notes for later reference. So, by the time I sat down to review it in January 2021, given the size of the cast and the complexity of their stories, I felt there was no way I could do justice to the author and his work without re-reading it—so I did, over just a couple of days! And that is the true beauty of this book: for all its many characters, competing storylines, and multilevel, deeply profound messaging, the grand saga remains a fast-paced, exciting read. Schönpflug’s technique of employing bit players to recount an epic tale succeeds so masterfully that the reader is hardly aware of what has been happening until the final pages are being turned. This is history, of course, this is indeed nonfiction, yet the result invites favorable comparison to great literature, to a collection of short stories by Ernest Hemingway, or to a novel by André Brink. If European history is an interest, A World on Edge is not only a recommended read, but a required one.
Women are conspicuously absent from most Civil War chronicles. With a few notable exceptions—Clara Barton, Harriet Tubman, Mary Todd Lincoln—female figures largely appear in the literature as bit players, if they make an appearance at all. Author Karen Abbott offers a welcome redress of this neglect with Liar Temptress Soldier Spy: Four Women Undercover in the Civil War, an exciting and extremely well-written, if deeply flawed, account of women who made a significant contribution to the war effort, north and south.
The concept is sound enough. Abbott focuses on four very different women and relates their respective stories in alternating chapters. There is Belle Boyd, a teenage seductress with a lethal temper who serves as rebel spy and courier; Emma Edmonds, who puts on trousers to masquerade as Frank Thompson and joins the Union army; Rose O’Neal Greenhow, an attractive widow who romances northern politicians to obtain intel for the south; and, Elizabeth Van Lew, a prominent Richmond abolitionist who maintains a sophisticated espionage ring that infiltrates the inner circles of the Confederate government. Each of these is worthy of book-length treatment, but weaving their exploits together is an effective technique that makes for a readable and compelling narrative.
I had never heard of Karen Abbott—the pen name of Abbott Kahler—a journalist and highly acclaimed best-selling author dubbed the “pioneer of sizzle history” by USA Today. She is certainly a gifted writer, and unlike the prose of all too many works of history, hers is fast-moving and engaging. I was swept along by her colorful recounting of the 1861 Battle of Bull Run, with flourishes such as: “Union troops fumbled backward and the Confederates rammed forward, a brutal and uneven dance, with soldiers felled like rotting trees.” I got so carried away I almost made it through the following passage without stumbling:
Some Northern soldiers claimed that every angle, every viewpoint, offered a fresh horror. The rebels slashed throats from ear to ear. They sliced off heads and dropkicked them across the field. They carved off noses and ears and testicles and kept them as souvenirs. They propped the limp bodies of wounded soldiers against trees and practiced aiming for the heart. They wrested muskets and swords from the clenched hands of corpses. They plunged bayonets deep into the backsides of the maimed and the dead. They burned the bodies, collecting “Yankee shin-bones” to whittle into drumsticks, and skulls to use as steins. [p34]
Almost. But I have a master’s degree in history and have spent a lifetime studying the American Civil War, and I have never heard this account of such barbarism at Bull Run. So I paused and flipped to Abbott’s notes for the corresponding page at the back of the book, where with a whiff of insouciance she admits: “Throughout the war both the North and the South exaggerated the atrocities committed by the enemy, and it’s difficult to determine which incidents were real and which were apocryphal.” [p442] Which is another way of saying that her account is highly sensationalized, if not outright fabrication.
To my mind, Abbott commits an unpardonable sin here. A little research reveals that there were in fact a handful of allegations of brutality in the course of the battle, including the mutilation of corpses, but much of it was anecdotal. There were several episodes of Confederate savagery later in the war, principally inflicted upon black soldiers in blue uniforms, but that is another story. How many readers of a popular history would take her at her word, without question, about what transpired at Bull Run? How many, when confronted with stories of testicles taken as souvenirs, would think to consult her citations? Lively paragraphs like this may certainly make for “sizzle”—but where’s the history? Historical novels have their place—The Killer Angels, by Michael Shaara, and Gore Vidal’s Lincoln, are among my favorites—but that is not the same thing as history, which must abide by a strict allegiance to fact-based reporting, informed analysis, and documentation. This author, apparently, feels little loyalty to such constraints.
I read on, but with far more skepticism. Abbott’s style is seductive, so it’s easy to keep going. But sins do continue to accumulate. I have a passing familiarity with three of the four main characters, but fact-checking remained essential. Certainly the best known and most consequential was Van Lew, a heroic figure who aided the escape of prisoners of war and provided key intelligence to Union forces in the field. Greenhow is often cited as her counterpart working for the southern cause. Belle Boyd, on the other hand, has become a creature of legend who turns up more frequently in fiction or film than in history texts. I had never heard of Emma Edmonds, but I came to find her story the most fascinating of them all.
It seems that the more documented the subject—Van Lew, for example—the closer Abbott’s portrait comes to reliable biography. Beyond that, the imaginative seems to intrude, indeed dominate. The astonishing tale of Emma Edmonds has her not only impersonating a male Union soldier, but also variously posing as an Irish peddler and, in blackface, as a contraband, engaged in thrilling espionage missions behind enemy lines! It rang of the stuff of Thomas Berger’s Little Big Man. I was suitably sucked in, but also wary. And rightly so: Abbott’s version of Emma Edmonds’ life is based almost entirely on Edmonds’ own memoir, with little that corroborates it, but the author doesn’t bother to reveal that in the narrative. That Edmonds pretended to be a man in order to enlist seems plausible; her spy missions may be only fantasy. We simply don’t know; a true historian would help us draw conclusions. Abbott seems content to let it play out as so much drama to tickle her audience.
But worst of all is the moment when the time comes to reveal the fate of luckless Confederate spy Greenhow, who drowns when her lifeboat capsizes with Union vessels bearing down on the steamer she abandoned. It is here that the superlative talent of Abbott’s pen collides with her concomitant disloyalty to scholarship:
She was sideways, upside down, somersaulting inside the wet darkness. She screamed noiselessly, the water rushing in. She tried to hold her breath—thirty seconds, sixty, ninety—before her mouth gave way and water filled it again. Tiny streams of bubbles escaped from her nostrils. A burning scythed through her chest. That bag of gold yanked like a noose around her neck. Her hair unspooled and leeched to her skin, twining around her neck. She tried to aim her arms up and her legs down, to push and pull, but every direction seemed the same. No moonlight skimmed along the surface, showing her the way; there was no light at all. [p389]
Entertaining, right? Outstanding writing, correct? Solid history—of course not! Imagining Greenhow’s final agonizing moments of life with a literary flourish may very well enrich the pages of a work of fiction, but it is nothing less than an outrage in a work of history.
This book was a fun read. Were it a novel I would likely give it high marks. But that is not how it is packaged. Emma Edmonds pretended to be a man to save the Union. Karen Abbott pretends to be a historian to sell books. Both make for great stories. But don’t confuse either with reliable history.
When I visited New York’s Metropolitan Museum of Art some years ago, the object I found most stunning was the “Monteleone Chariot,” a sixth-century Etruscan bronze chariot inlaid with ivory. I stood staring at it, transfixed, long enough for my wife to shuffle her feet impatiently. Still I lingered, dwelling on every detail, especially the panels depicting episodes from the life of the Homeric hero Achilles. By that time, I had read The Iliad more than once, and had long been immersed in studies of ancient Greece. How was it then, I wondered, that I could speak knowledgeably about Solon and Pisistratus, yet know so little about the Etruscans who crafted that chariot in the same century those notables walked the earth?
Long before anyone had heard of the Romans, city-states of Etruria dominated the Italian peninsula—and, along with Carthage and a handful of Greek poleis—the central Mediterranean, as well. Later, Rome would absorb, crush or colonize all of them. In the case of the Etruscans, it was to be a little of each. And somehow, somewhat incongruously, over the millennia Etruscan civilization—or at least what the living, breathing Etruscans would have recognized as such—has been lost to us. But not lost in the way we usually think of “lost civilizations,” like Teotihuacan, for instance, or the Indus Valley, where what remains are ruins of a vanished culture that disappeared from living memory, an undeciphered script, and even the uncertain ethnicity of its inhabitants. The Etruscans, on the other hand, were never forgotten, their alphabet can be read although their language largely defies translation, and their DNA lingers in at least some present-day Italians. Yet, by all accounts they are nevertheless lost, and tantalizingly so.
Such a conundrum breeds frustration, of course: Romans supplanted the Etruscans but hardly exterminated them. Moreover, unlike other civilizations deemed “lost to history,” the Etruscans appear in ancient texts going as far back as Hesiod. There are also hundreds of excavated tombs, rich with decorative art and grave goods, the latter top-heavy with Greek imports they clearly treasured. So how can we know so much about the Etruscans and at the same time so little? Fortunately, Lucy Shipley, who holds a PhD in Etruscan archaeology, comes to a rescue of sorts with her well-written, delightful contribution to the scholarship, entitled simply The Etruscans, a volume in the digest-sized Lost Civilization series published by Reaktion Books.
Most Etruscan studies are dominated by discussions of the ancient sources and—most prominently—the tombs, which are nothing short of magnificent. But where does that lead us? Herodotus references the Etruscans, as does Livy. But are the sources reliable? Rather dubious, as it turns out. Herodotus may be a dependable chronicler of the Hellenes, but anyone who has read his comically misguided account of Egyptian life and culture is aware how far he can stray from reality. And Roman authors such as Livy routinely trumpeted a decidedly negative perspective, most evident in disdainful memories of the unwelcome semi-legendary Etruscan kings said to have ruled Rome until the overthrow of “Tarquin the Proud” in 509 BCE.
Then there are the tombs. Attempts to extrapolate what ancient life was like from the art that decorates the tombs of the dead—awe-inspiring as it may be—can present a distorted picture (pun fully intended!) that ignores all but the wealthiest elite slice of the population. Much as Egyptology’s one-time obsession with pyramids and pharaonic king lists tended to obscure the no less interesting lives of the non-royal—such as those of the workers who collected daily beer rations and left graffiti within the walls of the pyramids they constructed—the emphasis on tombs that is standard to Etruscan studies reveals little of the lives of the vast majority of ordinary folks who peopled their world.
Shipley neatly sidesteps these traditional traps by refusing to be constrained by them. Instead, she relies on her training as an archaeologist to ask questions: what do we know about the Etruscans, and how do we know it? And, perhaps more critically: what don’t we know, and why don’t we know it? In the process, she brings a surprisingly fresh look to an enigmatic people in a highly readable narrative suitable to both academic and popular audiences. Arranged thematically rather than chronologically, the book selects a specific artifact or site for each chapter to serve as a visual trigger for the discussion. Because Shipley is so talented with a pen, it is worth pausing to let her explain her methodology in her own words:
Why focus on the archaeology? Because it is the very materiality, the physicality, the toughness and durability of things and the way they insidiously slip and slide into every corner of our lives that makes them so compelling … We are continually making and remaking ourselves, with the help of things. I would argue that the past is no different in this respect. It’s through things that we can get at the people who made, used and ultimately discarded them—their projects of self-production are as wrapped up in stuff as our own. And always, wrapped up in these things, are fundamental questions about how we choose to be in the world, questions that structure our actions and reactions, questions that change and challenge how we think and what we feel. Questions and objects—the two mainstays of human experience. [p19-20]
Shipley’s approach succeeds masterfully. Because many of these objects—critical artifacts for the archaeologist but often also spectacular works of art for the casual observer—are rendered in full color in this striking edition, the reader is instantly hooked: effortlessly chasing the author’s captivating prose down a host of intriguing rabbit holes in pursuit of answers to the questions she has mated with these objects. Along the way, she showcases the latest scholarship with a concise treatment of a broad range of topics informed by the kind of multi-disciplinary research that defines twenty-first century historical inquiry.
This includes DNA studies of both cattle and human populations in an attempt to resolve the long debate over Etruscan origins. While Herodotus and legions of other ancient and modern detectives have long pointed to legendary migrations from Anatolia, it turns out that the Etruscans are likely autochthonous, speaking a pre-Indo-European language that may be related to the one spoken by Ötzi, the mummified iceman, thousands of years ago. Shipley also takes the time to explain how it is that we can read enough of the Etruscan alphabet to decipher proper names while remaining otherwise frustrated in efforts at meaningful translation. Much that we identify as Roman was borrowed from Etruria, but as Rome assimilated the Etruscans over the centuries, their language was left behind. Later, Etruscan literature—like all too much of the classical world—fell victim to the zeal of early Christians in campaigns to purge any remnants of paganism. Most offensive in this regard were writings that described the practices of the “haruspex,” a specialist who sought to divine the future by examining the livers of sacrificial animals, an Etruscan ritual later integrated into Roman religious practice. Texts of the haruspices appear prominently in the “hit lists” drawn up by the Christian thinkers Tertullian and Arnobius.
My favorite chapter is entitled “Super Rich, Invisible Poor,” which highlights the inevitable distortion that results from the attention paid to the exquisite art and grave goods of the wealthy elite at the expense of the sizeable majority of the inhabitants of a dozen city-states comprised of numerous towns, villages, and some larger cities with populations thought to number in the tens of thousands. To be fair, though, this has hardly been deliberate: there remains a stark scarcity of the teeming masses, so to speak, in the archaeological record. While it may smack of cliché, the famous aphorism “Absence of evidence is not evidence of absence” should be triple underscored here! The Met’s Monteleone Chariot, originally part of an elaborate chariot burial, makes an appearance in this chapter, but perhaps far more fascinating is a look at the great complex of workshops at a site called Poggio Civitate, more than a hundred miles from Monteleone, where skilled craftspeople labored to produce a whole range of goods in the same century that chariot was fashioned. But what of those workers? There seemed to be no trace of them. You can clearly detect the author’s delight as she describes recent excavations that uncovered remains of a settlement that likely housed them. Shipley returns again and again to her stated objective of connecting the material culture to the living Etruscans who were once integral to it.
Another chapter worthy of superlatives is “Sex, Lives and Etruscans.” While it is tempting to impose modern notions of feminism on earlier peoples, Etruscan women do seem to have led lives of far greater independence than their classical contemporaries in Greece and Rome. And there are also compelling hints at an openness in sexuality—including wife-sharing—that horrified ancient observers who nevertheless thrilled in recounting licentious tales of wicked Etruscan behavior! Shipley describes tomb art that depicts overt sex acts with multiple partners, while letting the reader ponder whether legendary accounts of Etruscan profligacy are given to hyperbole or not.
In addition to beautiful illustrations and an engaging narrative, this volume also features a useful map, a chronology, recommended reading, and plenty of notes. It is rare that any author can so effectively tackle a topic so wide-ranging in such a compact format, so Shipley deserves special recognition for turning out such an outstanding work. The Etruscans rightly belongs on the shelf of anyone eager to learn more about a people who certainly made a vital contribution to the history of western civilization.
In Hearts in Atlantis, Stephen King channels the fabled lost continent as a metaphor for the glorious promise of the sixties that vanished so utterly that nary a trace remains. Atlantis sank, King declares bitterly in his fiction. He has a point. If you want to chart the actual moments those collective hopes and dreams were swamped by currents of reaction and finally submerged in the merciless wake of a new brand of unforgiving conservatism, you absolutely must turn to Reaganland: America’s Right Turn 1976-1980, Rick Perlstein’s brilliant, epic political history of an era too often overlooked, one that surely resonates in the America of 2020 more than perhaps any era before or since. But be warned: you may need forearms even bigger than those of the sign-spinning guy in the Progressive commercial to handle this dense, massive 914-page tome, which is nevertheless so readable and engaging that your wrists will tire before your interest flags.
Reaganland is a big book because it is actually several overlapping books. It is first and foremost the history of the United States at an existential crossroads. At the same time, it is a close account of the ill-fated presidency of Jimmy Carter. And, too, it is something of a “making of the president 1980.” This is truly ambitious stuff, and that Perlstein largely succeeds in pulling it off should earn him wide and lasting accolades both as a historian and an observer of the American experience.
Reaganland is the final volume in a series launched nearly two decades ago by Perlstein, a progressive historian, that chronicles the rise of the right in modern American politics. Before the Storm focused on Goldwater’s ascent upon the banner of far-right conservatism. This was followed by Nixonland, which profiled a president who thrived on division and earned the author outsize critical acclaim; and, The Invisible Bridge, which revealed how Ronald Reagan—stridently unapologetic for the Vietnam debacle, for Nixon’s crimes, and for angry white reaction to Civil Rights—brought notions once the creature of the extreme right into the mainstream, and began to pave the road that would take him to the White House. Reaganland is written in the same captivating, breathless style Perlstein made famous in his earlier works, but he has clearly honed his craft: the narrative is more measured, less frenetic, and is crowned with a strong concluding chapter—something conspicuously absent in The Invisible Bridge.
The grand—and sometimes allied—causes of the Sixties were Civil Rights and opposition to the Vietnam War, but concomitant social and political revolutions spawned a myriad of others that included antipoverty efforts for the underprivileged, environmental activism, equal treatment for homosexuals and other marginalized groups such as Native Americans and Chicano farm workers, constitutional reform, consumer safety, and most especially equality for women, of which the right to terminate a pregnancy was only one component. The common themes were inclusion, equality, and cultural secularism. The antiwar movement came not only to dominate but virtually to overshadow all else, yet at the same time it served as a unifying factor that stitched together a kind of counterculture coat of many colors to oppose an often stubbornly unyielding status quo. When the war wound down, that fabric frayed. Those who once marched together now marched apart.
This fragmentation was not generally adversarial; groups once in alliance simply went their own ways, organically seeking to advance the causes dear to them. And there was much optimism. Vietnam was history. Civil Rights had made such strides, even if there remained so much unfinished business. Much of what had been counterculture appeared to have entered the mainstream. It seemed like so much was possible. At Woodstock, Grace Slick had declared that “It’s a new dawn,” and the equality and opportunity that assurance heralded actually seemed within reach. Yet, there were unseen, menacing clouds forming just beneath the horizon.
Few suspected that forces of reaction quietly gathering strength would one day unite to destroy the progress towards a more just society that seemed to lie just ahead. Perlstein’s genius in Reaganland lies in his meticulous identification of each of these disparate forces, revealing their respective origin stories and relating how they came to maximize strength in a collective embrace. The Equal Rights Amendment, riding a wave of massive bipartisan public support, was but three states away from ratification when a bizarre woman named Phyllis Schlafly seemingly crawled out of the woodwork to mobilize legions of conservative women to oppose it. Gay people were on their way to greater social acceptance via local ordinances that one by one went down to defeat after former beauty queen and orange juice hawker Anita Bryant mounted what turned into a nationwide campaign of resistance. The landmark Roe v. Wade case that guaranteed a woman’s right to choose sparked the birth of a passionate right-to-life movement that soon became the central creature of the emerging Christian evangelical “Moral Majority,” which found easy alliance with those condemning gays and women’s lib. Most critically—in a key component that was to have lasting implications, as Perlstein deftly underscores—the Christian right also pioneered a political doctrine of “co-belligerency” that encouraged groups otherwise not aligned to make common cause against shared “enemies.” Sure, Catholics, Mormons, and Jews were destined to burn in a fiery hell one day, reasoned evangelical Protestants, but in the meantime they could be enlisted as partners in a crusade to combat abortion, homosexuality, and other miscellaneous signposts of moral decay besetting the nation.
That all this moral outrage could turn into a formidable political dynamic seems to have been largely unanticipated. But, as Perlstein reminds us, maybe it should not have been so surprising: candidate Jimmy Carter, himself deeply religious and well ahead in the 1976 race for the White House, saw a precipitous fifteen-point drop in the polls after an interview in Playboy where he admitted that he sometimes lusted in his heart. Perhaps the sun wasn’t quite ready to come up for that new dawn after all.
Of course, the left did not help matters, often ideologically unyielding in its demand to have it all rather than settle for some, as well as blind to unintended consequences. Nothing was to alienate white members of the national coalition to advance civil rights for African Americans more than busing, a flawed shortcut that ignored the greater imperative for federal aid to fund and rebuild decaying inner-city schools, de facto segregated by income inequality. Efforts to advance what was seen as a far too radical federal universal job guarantee ended up energizing opposition that denied victory to other avenues of reform. And there’s much more. Perlstein recounts the success of Ralph Nader’s crusade for automobile safety, which exposed carmakers for deliberately skimping on relatively inexpensive design modifications that could have saved countless lives in order to turn out even greater profits. Auto manufacturers were finally brought to heel. Consumer advocacy became a thing, with widespread public support and frequent industry acquiescence. But even Nader—not unaware of consequences, unintended or otherwise—advised caution when a protégé pressed a campaign to ban TV ads for sugary cereals that targeted children, predicting with some prescience that “if you take on the advertisers you will end up with so many regulators with their bones bleached in the desert.” [p245] Captains of industry Perlstein terms “Boardroom Jacobins” were stirred to collective action by what was perceived as regulatory overreach, and big business soon joined hands to beat all such efforts back.
Meanwhile, subsequent to Nixon’s fall and Ford’s defeat to Carter in 1976, pundits—not for the last time—prematurely foretold the extinction of the Republican Party, leaving stalwart policy wonks on the right seemingly adrift, clinging to their opposition to the pending SALT II arms agreement and the Panama Canal Treaty, furiously wielding oars of obstruction yet still lacking a reliable vessel to stem the tide. Bitterly opposed to the prevailing wisdom that counseled moderation to ensure not only relevance but survival, they chafed at accommodation with the Ford-Kissinger-Rockefeller wing of the party that preached détente abroad and compromise at home. They looked around for a new champion … and once again found Ronald Reagan!
The former Bedtime for Bonzo co-star and corporate shill had launched his political career railing against communists concealed in every cupboard, as well as shrewdly exploiting populist rage at long-haired antiwar demonstrators. As governor of California he directed an especially violent crackdown known as “Bloody Thursday” on non-violent protesters at UC Berkeley’s People’s Park that resulted in one death and hundreds of injuries after overzealous police fired tear gas and shotguns loaded with buckshot at the crowd. In a comment that eerily presaged Trump’s “very fine people on both sides” remark, Reagan declared that “Once the dogs of war have been unleashed, you must expect … that people … will make mistakes on both sides.” But a year later he was even less apologetic, proclaiming that “If it takes a bloodbath, let’s get it over with.” This was their candidate, who—remarkably one would think—had nearly snatched the nomination away from Ford in ’76, and then went on to cheer party unity while campaigning for Ford with even less enthusiasm than Bernie Sanders exhibited for Hillary Clinton in 2016. Many hold Reagan at least partially responsible for Ford’s loss in the general election.
But his neglect of Ford left Reagan neatly positioned as the front-runner for 1980. As conservatives dug in, others of the party faithful recoiled in horror, fearing a repeat of the drubbing at the polls they took in 1964 with Barry “extremism in defense of liberty is no vice” Goldwater at the top of the ticket. And Reagan did seem extreme, perhaps more so than Goldwater. The sounds of sabers rattling nearly drowned out his words every time he mentioned the U.S.S.R. And he said lots of truly crazy things, both publicly and privately, once even wondering aloud over dinner with columnist Jack Germond whether “Ford had staged fake assassination attempts to win sympathy for his renomination.” Germond later recalled that “He was always a man with a very loose hold on the real world around him.” [p617] Germond had a good point: Reagan once asserted that “Fascism was really the basis for the New Deal,” touted the valuable recycling potential of nuclear waste, and insisted that “trees cause more pollution than automobiles do”—prompting some joker at a rally to decorate a tree with a sign that said “Chop me down before I kill again.”
But Reagan had a real talent for dog whistles, launching his campaign with a speech praising “states’ rights” at a county fair near Philadelphia, Mississippi, where three civil rights workers were murdered in 1964. He once boasted he “would have voted against the Civil Rights Act of 1964,” claimed “Jefferson Davis is a hero of mine,” and bemoaned the Voting Rights Act as “humiliating to the South.” A whiff of racism also clung to his disdain for Medicaid recipients as “a faceless mass, waiting for handouts,” and his recycling ad nauseam of his dubious anecdote of a “Chicago welfare queen” with twelve Social Security cards who bilked the government out of $150,000. Unreconstructed whites ate this red meat up. Nixon’s “southern strategy” reached new heights under Reagan.
But a white southerner who was not a racist was actually the president of the United States. Despite the book’s title, the central protagonist of Reaganland is Jimmy Carter, a man who arrived at the Oval Office buoyed by public confidence rarely seen in the modern era—and then spent four years on a rollercoaster of support that plummeted far more often than it climbed. At one point his approval rating was a staggering 77% … at another 28%—only four points above where Nixon’s stood when he resigned in disgrace. These days, as the nonagenarian Carter has established himself as the most impressive ex-president since John Quincy Adams, we tend to forget what a truly bad president he was. Not that he didn’t have good intentions, only that—like Woodrow Wilson six decades before him—he was unusually adept at using them to pave his way to hell. A technocrat with an arrogant certitude that he had all the answers, he arrived inside the Beltway with little idea of how the world worked, a family in tow that seemed right out of central casting for a Beverly Hillbillies sequel. He often gravely lectured the public on what was really wrong with the country—and then seemed to lay blame upon Americans for outsize expectations. And he dithered, tacking this way and that, alienating both sides of the aisle in a feeble attempt to stand above the fray.
In fairness, he had a lot to deal with. Carter inherited a nation more socio-economically shaken up than any since the 1930s. In 1969, the United States had proudly put a man on the moon. Only a few short years later, a country weaned on American exceptionalism saw factories shuttered, runaway inflation, surging crime, cities on the verge of bankruptcy, and long lines just to gas up your car at an ever-skyrocketing cost. And that was before a nuclear power plant melted down, Iranians took fifty-two Americans hostage, and Soviet tanks rolled into Afghanistan. All this was further complicated by a new wave of media hype that saw the birth of the “bothersiderism” that gives equal weight to scandals legitimate or spurious—an unfortunate ingredient that remains baked into current reporting.
Perhaps the most impressive part of Reaganland is Perlstein’s superlative rendering of what America was like in the mid-70s. Stephen King’s horror is often so effective at least in part because of the fads, fast food, and pop music he uses as so many props in his novels. If that stuff is real, perhaps ghosts or killer cars could be real, as well. Likewise, Perlstein brings a gritty authenticity home by stepping beyond politics and policy to enrich the narrative with headlines of serial killers and plane crashes, of assassination and mass suicide, adroitly resurrecting the almost numbing sense of anxiety that informed the times. De Niro’s Taxi Driver rides again, and the reader winces through every page.
Carter certainly had his hands full, especially as the hostage crisis dragged on, but it hardly ranked up there with Truman’s Berlin Airlift or JFK’s Cuban missiles. There were indeed crises, but Carter seemed to manufacture even more—and to get in his own way most of the time. And his attempts to reassure consistently backfired, fueling even more national uncertainty. All this offered a perfect storm of opportunity for right-wing elements who discovered co-belligerency was not only a tactic but a way of life. Against all advice and all odds, Reagan—retaining his “very loose hold on the real world around him”—saw no contradiction bringing his brand of conservatism to join forces with those maligning gays, opposing abortion, stonewalling the ERA, and boosting the Christian right. Corporate CEOs—Perlstein’s “Boardroom Jacobins”—already on the defensive, were more than ready to finance it. Carter, flailing, played right into their hands. Already the most right-of-center Democratic president of the twentieth century, he too shared that weird vision of the erosion of American morality. And Perlstein reminds us that the debacle of financial deregulation usually traced back to Reagan actually began on Carter’s watch, the seeds sown for the wage stagnation, growth of income inequality, and endless cycles of recession that have been de rigueur in American life ever since. Carter failed to make a good closing argument for why he should be re-elected, and the unthinkable occurred: Ronald Reagan became president of the United States. The result was that the middle-class dream that seemed so much in jeopardy under Carter was permanently crushed once Reagan’s regime of tax cuts, deregulation, and the supply-side approach George H.W. Bush rightly branded as “voodoo economics” became standard operating policy. Progressive reform sputtered and stalled.
The little engine that FDR had ignited to manifest a social and economic miracle for America crashed and burned forever on the vanguard of Reaganomics.
Some readers might be intimidated by the size of Reaganland, but it’s a long book because it tells a long story, and it contains lots of moving parts. Perlstein succeeds magnificently because he demonstrates how all those parts fit together, replete with the nuance and complexity critical to historical analysis. Is it perfect? Of course not. I’m a political junkie, but there were certain segments on policy and legislative wrangling that seemed interminable. And if Perlstein mentioned “Boardroom Jacobins” just one more time, I might have screamed. But these are quibbles. This is without doubt the author’s finest book, and I highly recommend it, as both an invaluable reference work and a cover-to-cover read.
In Hearts in Atlantis, Stephen King imagines the sixties as bookended by JFK’s 1963 assassination and John Lennon’s murder in 1980. Perlstein seems to follow that same school of thought, for the final page of Reaganland also wraps up with Lennon’s untimely death. In an afterword to his work of fiction, King muses: “Although it is difficult to believe, the sixties are not fictional; they actually happened.” If you are more partial to nonfiction and want the real story of how the sixties ended, of how Atlantis sank, you must read Reaganland.
[Note: this review goes to press just a few days before the most consequential presidential election in modern American history. This book and this review are reminders that elections do matter.]
Nearly two decades have passed since Charles Freeman published The Closing of the Western Mind: The Rise of Faith and the Fall of Reason, a brilliant if controversial examination of the intellectual totalitarianism of Christianity that dated to the dawn of its dominance of Rome and the successor states that followed the fragmentation of the empire in the West. Freeman argues persuasively that the early Christian church vehemently and often brutally repudiated the centuries-old classical tradition of philosophical enquiry and ultimately drove it to extinction with a singular intolerance of competing ideas crushed under the weight of a monolithic faith. Not only were pagan religions prohibited, but there would be virtually no provision for any dissent from official Christian doctrine, such that those who advanced even the most minor challenges to interpretation were branded heretics and sent into exile or put to death. That tragic state was to define medieval Europe for more than a millennium.
Now the renowned classical historian has returned with a follow-up epic, The Awakening: A History of the Western Mind AD 500-1700, recently published in the UK (and slated for U.S. release, possibly with a different title) which recounts the slow—some might brand it glacial—evolution of Western thought that restored legitimacy to independent examination and analysis, that eventually led to a celebration, albeit a cautious one, of reason over blind faith. In the process, Freeman reminds us that quality, engaging narrative history has not gone extinct, while demonstrating that it is possible to produce a work that is so well-written it is readable by a general audience while meeting the rigorous standards of scholarship demanded by academia. That this is no small achievement will be evident to anyone who—as I do—reads both popular and scholarly history and is struck by the stultifying prose that often typifies the academic. In contrast, here Freeman takes a skillful pen to reveal people, events and occasionally obscure concepts, much of which may be unfamiliar to those who are not well versed in the medieval period.
The fall of Rome remains a subject of debate for historians. While traditional notions of sudden collapse given to pillaging Vandals leaping over city walls and fora engulfed in flames have long been revised, competing visions of a more gradual transition that better reflect the scholarship sometimes distort the historiography to minimize both the fall and what was actually lost. And what was lost was indeed dramatic and incalculable. If, to take just one example, sanitation can be said to be a mark of civilization, the Roman aqueducts and complex network of sewers that fell into disuse and disrepair meant that fresh water was no longer reliable, and sewage that bred pestilence was to be the norm for fifteen centuries to follow. It was not until the late nineteenth century that sanitation in Europe even approached Roman standards. So, whatever the timeline—rapid or gradual—there was indeed a marked collapse. Causes are far more elusive. Gibbon’s largely discredited casting of Christianity as the villain that brought the empire down tends to raise hackles in those who suspect someone like Freeman of attempting to point those fingers once more. Yet Freeman has nothing to say about why Rome fell, only about what followed. The loss of the pursuit of reason was to be as devastating for the intellectual health of the post-Roman world in the West as sanitation was to prove for its physical health. And here Freeman does squarely take aim at the institutional Christian church as the proximate cause for the subsequent consequences for Western thought. This is well-underscored in the bleak assessment that follows in one of the final chapters in The Closing of the Western Mind:
Christian thought that emerged in the early centuries often gave irrationality the status of a universal “truth” to the exclusion of those truths to be found through reason. So the uneducated was preferred to the educated and the miracle to the operation of natural laws … This reversal of traditional values became embedded in the Christian tradition … Intellectual self-confidence and curiosity, which lay at the heart of the Greek achievement, were recast as the dreaded sin of pride. Faith and obedience to the institutional authority of the church were more highly rated than the use of reasoned thought. The inevitable result was intellectual stagnation … [p322]
Awakening picks up where Closing leaves off as the author charts the “Reopening of the Western Mind” (this was the working title of his draft!) but the new work is marked by far greater optimism. Rather than dwell on what has been lost, Freeman puts focus not only upon the recovery of concepts long forgotten but also upon how rediscovery eventually sparked new, original thought, as the spiritual and later increasingly secular world danced warily around one another—with a burning heretic all too often staked between them in Europe’s fraught intellectual ballroom. Because the timeline is so long—encompassing twelve centuries—the author sidesteps what could have been a dull chronological recounting of this slow progression, narrowing his lens to select people, events and ideas that collectively marked milestones along the way, organized into thematic chapters that broaden the scope. This approach thus transcends what might have been otherwise parochial to brilliantly convey the panoramic.
There are many superlative chapters in Awakening, including the very first one, entitled “The Saving of the Texts 500-750.” Freeman seems to delight in detecting the bits and pieces of the classical universe that managed to survive not only vigorous attempts by early Christians to erase pagan thought but the unintended ravages of deterioration that is every archivist’s nightmare. Ironically, the sacking of cities in ancient Mesopotamia begat conflagrations that baked inscribed clay tablets, preserving them for millennia. No such luck for the Mediterranean world, where papyrus scrolls, the favored medium for texts, fell to war, natural disasters, deliberate destruction, as well as to entropy—a familiar byproduct of the second law of thermodynamics—which was not kind in prevailing environmental conditions. We are happily still discovering papyri preserved by the dry conditions in parts of Egypt—the oldest dating back to 2500 BCE—but it seems that the European climate doomed papyrus to a scant two hundred years before it was no more.
Absent printing presses or digital scans, texts were preserved by painstakingly copying them by hand, typically onto vellum, a kind of parchment made from animal skins with a long shelf life, most frequently in monasteries by monks for whom literacy was deemed essential. But what to save? The two giants of ancient Greek philosophy, Plato and Aristotle, were preserved, but the latter far more grudgingly. Fledgling concepts of empiricism in Aristotle made the medieval mind uncomfortable. Plato, on the other hand, who pioneered notions of imaginary higher powers and perfect forms, could be (albeit somewhat awkwardly) adapted to the prevailing faith in the Trinity, and thus elements of Plato were syncretized into Christian orthodoxy. Of course, as we celebrate what was saved it is difficult not to likewise mourn what was lost to us forever. Fortunately, the Arab world put a much higher premium on the preservation of classical texts—an especially eclectic collection that included not only metaphysics but geography, medicine and mathematics. When centuries later—as Freeman highlights in Awakening—these works reached Europe, they were to be instrumental as tinder to the embers that were to spark first a revival and then a revolution in science and discovery.
My favorite chapter in Awakening is “Abelard and the Battle for Reason,” which chronicles the extraordinary story of the scholastic philosopher Peter Abelard (1079-1142)—who flirted with the secular and attempted to connect rationalism with theology—told against the flamboyant backdrop of Abelard’s tragic love affair with Héloïse, a tale that yet remains the stuff of popular culture. In a fit of pique, Héloïse’s uncle was to have Abelard castrated. The church attempted something similar, metaphorically, with Abelard’s teachings, which led to an order of excommunication (later lifted), but despite official condemnation Abelard left a dramatic mark on European thought that long lingered.
There is too much material in a volume this thick to cover competently in a review, but the reader will find much of it well worth the time. Of course, some will be drawn to certain chapters more than others. Art historians will no doubt be taken with the one entitled “The Flowering of the Florentine Renaissance,” which for me hearkened back to the best elements of Kenneth Clark’s Civilisation, showcasing not only the evolution of European architecture but the author’s own adulation for both the art and the engineering feat demonstrated by Brunelleschi’s dome, the extraordinary fifteenth century adornment that crowns the Florence Cathedral. Of course, Freeman does temper his praise for such achievements with juxtaposition to what once had been, as in a later chapter that recounts the process of relocating an ancient Egyptian obelisk weighing 331 tons that had been placed on the Vatican Hill by the Emperor Caligula—a feat seen as remarkable at the time. In a footnote, Freeman reminds us that: “One might talk of sixteenth-century technological miracles, but the obelisk had been successfully erected by the Egyptians, taken down by the Romans, brought by sea to Rome and then re-erected there—all the while remaining intact!” [p492n]
If I were to find a fault with Awakening, it is that it does not, in my opinion, go far enough to emphasize the impact of the Columbian Experience on the reopening of the Western mind. There is a terrific chapter devoted to the topic, “Encountering the Peoples of the ‘Newe Founde Worldes,’” which explores how the discovery of the Americas and its exotic inhabitants compelled the European mind to examine other human societies whose existence had never before even been contemplated. While that is a valid avenue for analysis, it yet hardly takes into account just how earth-shattering 1492 turned out to be—arguably the most consequential milestone for human civilization (and the biosphere!) since the first cities appeared in Sumer—in a myriad of ways, not least the exchange of flora and fauna (and microbes) that accompanied it. But this significance was perhaps greatest for Europe, which had been a backwater, long eclipsed by China and the Arab Middle East. It was the Columbian Experience that reoriented the center of the world, so to speak, from the Mediterranean to the Atlantic, which was exploited to the fullest by the Europeans who prowled those seas and first bridged the continents. It is difficult to imagine the subsequent accomplishments—intellectual and otherwise—had Columbus not landed at San Salvador. But this remains just a quibble that does not detract from Freeman’s overall accomplishment.
Full disclosure: Charles Freeman and I began a long correspondence via email following my review of Closing. I was honored when he selected me as one of his readers for his drafts of Awakening, which he shared with me in 2018, but at the same time I approached this responsibility with some trepidation: given Freeman’s credentials and reputation, what if I found the work to be sub-standard? What if it was simply not a good book? How would I address that? As it was, these worries turned out to be misplaced. It is a magnificent book and I am grateful to have read much of it as a work in progress, and then again after publication. I did submit several pages of critical commentary to assist the author, to the best of my limited abilities, in honing a better final product, and to that end I am proud to see my name appear in the “Acknowledgments.”
I do not usually talk about formats in book reviews, since the content is typically neither enhanced nor diminished by its presentation in either a leather-bound tome or a mass-market paperback or the digital ink of an e-book, but as a bibliophile I cannot help but offer high praise to this beautiful, illustrated edition of Awakening published by Head of Zeus, even accented by a ribbon marker. It has been some time since I have come across a volume this attractive without paying a premium for special editions from Folio Society or Easton Press, and in this case the exquisite art that supplements the text transcends the ornamental to enrich the narrative.
Interest in the medieval world has perhaps waned over time. But that is, of course, a mistake. How we got from point A to point B is an important story, even if it has never been told before as well as Freeman has told it in Awakening. And it is not an easy story to tell. As the author acknowledges in a concluding chapter: “Bringing together the many different elements that led to the ‘awakening of the western mind’ is a challenge. It is important to stress just how bereft Europe was, economically and culturally, after the fall of the Roman empire compared to what it had been before.” [p735]
Those of us given to dystopian fiction, concerned with the fragility of republics and civilization, and wondering aloud in the midst of a global pandemic and the rise of authoritarianism what our descendants might recall of us if it all fell to collapse tomorrow cannot help but be intrigued by how our ancestors coped—for better or for worse—after Rome was no more. If you want to learn more about that, there might be no better covers to crack than Freeman’s The Awakening. I highly recommend it.
NOTE: My review of Freeman’s earlier work appears here:
“When people ask me if I went to film school, I tell them, ‘No, I went to films,’” Quentin Tarantino famously quipped. While I’m no iconic director, I too “went to films,” in a manner of speaking. I was raised by my grandmother in the 1960s—with a little help from a 19” console TV in the living room and seven channels delivered via rooftop antenna. When cartoons, soaps, or prime time westerns and sitcoms like Bonanza and Bewitched weren’t broadcasting, all the remaining airtime was filled with movies. All kinds of movies: drama, screwball comedies, war movies, gangster movies, horror movies, sci-fi, musicals, love stories, murder mysteries—you name the genre, it ran. And ran. And ran. For untold hours and days and weeks and years.
Grandma—rest in peace—loved movies. Just loved them. All kinds of movies. But she didn’t have much of a discerning eye: for her, The Treasure of the Sierra Madre was no better or worse than Bedtime for Bonzo. At first, I didn’t know any better either, and whether I was four or fourteen I watched whatever was on, whenever she was watching. But I took a keen interest. The immersion paid dividends. My tastes evolved. One day I began calling them films instead of movies, and even turned into something of a “film geek,” arguing against the odds that Casablanca is a better picture than Citizen Kane, promoting Kubrick’s Paths of Glory over 2001, and shamelessly confessing to screening Tarantino’s Kill Bill I and II back-to-back more than a dozen times. In other words, I take films pretty seriously. So, when I noticed that Seduction: Sex, Lies, and Stardom in Howard Hughes’s Hollywood was up for grabs in an early reviewer program, I jumped at the opportunity. I was not to be disappointed.
In an extremely well-written and engaging narrative, film critic and journalist Karina Longworth has managed to turn out a remarkable history of Old Hollywood, in the guise of a kind of biography of Howard Hughes. In films, a “MacGuffin” is something insignificant or irrelevant in itself that serves as a device to trigger the plot. Examples include the “Letters of Transit” in Casablanca, the statuette in The Maltese Falcon, and the briefcase in Tarantino’s Pulp Fiction. Howard Hughes himself is the MacGuffin of sorts in Seduction, which is far less about him than his female victims and the peculiar nature of the studio system that enabled predators like Hughes and others who dominated the motion picture industry.
Howard Hughes was once one of the most famous men in America, known for his wealth and genius, a larger-than-life legend noted both for his exploits as aviator and flamboyance as a film producer given to extravagance and star-making. But by the time I was growing up, all that was in the distant past, and Hughes was little more than a specter in supermarket tabloids, an eccentric billionaire turned recluse. It was later said that he spent most days alone, sitting naked in a hotel room watching movies. Long unseen by the public, at his death he was nearly unrecognizable, skeletal and covered in bedsores. Director Martin Scorsese resurrected him for the big screen in his epic biopic The Aviator, headlined by Leonardo DiCaprio and a star-studded cast, which showcased Hughes as a heroic and brilliant iconoclast who in turn took on would-be censors, the Hollywood studio system, the aviation industry and anyone who might stand in the way of his quest for glory—all while courting a series of famed beauties. Just barely in frame was the mental instability, the emerging obsessive-compulsive disorder that later brought him down.
Longworth finds Hughes a much smaller and more despicable man, an amoral narcissist and manipulator who was seemingly incapable of empathy for other human beings. (Yes, there is indeed a palpable resemblance to a certain president!) While Hughes carefully crafted an image of a titan who dominated the twin arenas of flight and film, in Longworth’s portrayal he seems to crash more planes than he lands, and churns out more bombs than blockbusters. In the public eye, he was a great celebrity, but off-screen he comes off as an unctuous villain, a charlatan whose real skill set was self-promotion empowered by vast sums of money and a network of hangers-on. The author gives him his due by denying him top billing as the star of the show, instead turning her scrutiny to those in his orbit, the women in supporting roles whom he in turn dominated, exploited and discarded. You can almost hear refrains of Carly Simon’s You’re So Vain interposed in the narrative, taunting the ghost of Hughes with the chorus: “You probably think this song is about you”—which by the way would make a great soundtrack if there’s ever a screen adaptation of the book.
If not Hughes, the real main character is Old Hollywood itself, and with a skillful pen, Longworth turns out a solid history—a decidedly feminist history—of the place and time that is nothing less than superlative. The author recreates for us the early days before the tinsel, when a sleepy little “dry” town no one had ever heard of almost overnight became the celluloid capital of the country. Pretty girls from all over America would flock there on a pilgrimage to fame; most disappointed, many despairing, more than a few dead. Nearly all were preyed upon by a legion of the contemptible, used and abused with a thin tissue of lies and promises that anchored them not only to the geography but to the predominantly male movers and shakers who ran the studio system that in turn dominated everything else. This is a feminist history precisely because Longworth focuses on these women—more specifically ten women involved with Hughes—and through them brilliantly captures Hollywood’s golden age as manifested in both the glamorous and the tawdry.
Howard Hughes was not the only predator in Tinseltown, of course, but arguably its most depraved. If Hollywood power-brokers overpromised fame to hosts of young women just to bed them, for Hughes sex was not even always the principal motivation. It went way beyond that, often to twisted ends perhaps unclear even to Hughes himself. He indeed took many lovers, but those he didn’t sleep with were not exempt from his peculiar brand of exploitation. What really got Howard Hughes off was exerting power over women, controlling them, owning them. He virtually enslaved some of these women, stripping them of their individual freedom of will for months or even years with vague hints at eventual stardom, abetted by assorted handlers appointed to spy on them and report back to him. Even the era of “Me Too” lacks the appropriate vocabulary to describe his level of “creepy!”
One of the women he apparently did not take to bed was Jane Russell. Hughes cast the beautiful, voluptuous nineteen-year-old in The Outlaw, a film that took forever to produce and release, largely due to his fetishistic obsession with Russell’s breasts—and the way these spilled out of her dress in a promotional poster that provoked the ire of censors. Longworth’s treatment of the way Russell unflappably endured her long association with Hughes—despite his relentless domination over her life and career—is just one of the many delightful highlights in Seduction.
The Outlaw, incidentally, was one of the movies I recall watching with Grandma back in the day. Her notions of Hollywood had everything to do with the glamorous and the glorious, of handsome leading men and lovely leading ladies up on the silver screen. I can’t help wondering what she might think if she learned how those ladies were tormented by Hughes and other moguls of the time. I wish I could tell her about it, about this book. Alas, that’s not possible, but I can urge anyone interested in this era to read Seduction. If authors of film history could win an Academy Award, Longworth would have an Oscar on her mantle to mark this outstanding achievement.
Imagine there’s a virus sweeping across the land claiming untold lives, the agent of the disease poorly understood, the population in terror of an unseen enemy that rages mercilessly through entire communities, leaving in its wake an exponential toll of victims. As this review goes to press amid an alarming spike in new Coronavirus cases, Americans don’t need to stretch their collective imagination very far to envisage that. But now look back nearly two and a half centuries and consider an even worse scenario: a war is on for the existential survival of our fledgling nation, a struggle compromised by mass attrition in the Continental Army due to another kind of virus, and the epidemic it spawns is characterized by symptoms and outcomes that are nothing less than nightmarish by any standard, then or now. The culprit then was smallpox, one of the most dreaded diseases in human history.
This nearly forgotten chapter in America’s past left a deep impact on the course of the Revolution that has been long overshadowed by outsize events in the War of Independence and the birth of the Republic. This neglect has been masterfully redressed by Pox Americana: The Great Smallpox Epidemic of 1775-82, a brilliantly conceived and extremely well-written account by Pulitzer Prize-winning historian Elizabeth A. Fenn. One of the advantages of having a fine personal library in your home is the delight of going to a random shelf and plucking off an edition that almost perfectly suits your current interests, a volume that has been sitting there unread for years or even decades, just waiting for your fingertips to locate it. Such was the case with my signed first edition of Pox Americana, a used bookstore find that turned out to be a serendipitous companion to my self-quarantine for Coronavirus, the great pandemic of our times.
As horrific as COVID-19 has been for us—as of this morning we are up to one hundred thirty-four thousand deaths and three million cases in the United States, a significant portion of the more than half million dead and nearly twelve million cases worldwide—smallpox, known as “Variola,” was far, far worse. In fact, almost unimaginably worse. Not only was it more than three times as contagious as Coronavirus, but rather than a mortality rate in the low single digits, as with COVID (the verdict’s not yet in), Variola on average claimed an astonishing thirty percent of its victims, who often suffered horribly in the course of the illness and into their death throes, while survivors were frequently left disfigured by extensive scarring, and many were left blind. Smallpox has a long history that dates back to at least the third century BCE, as evidenced in Egyptian mummies. There were reportedly still fifteen million cases a year as late as 1967. In between it claimed untold hundreds of millions of lives—some three hundred million in the twentieth century alone—until its ultimate eradication in 1980. There is perhaps some tragic irony that we are beset by Coronavirus on the fortieth anniversary of that milestone …
I typically eschew long excerpts for reviews, but Variola was so horrifying and Fenn writes so well that I believe it would be a disservice to do other than let her describe it here:
Headache, backache, fever, vomiting, and general malaise all are among the initial signs of infection. The headache can be splitting; the backache, excruciating … The fever usually abates after the first day or two … But … relief is fleeting. By the fourth day … the fever creeps upward again, and the first smallpox sores appear in the mouth, throat, and nasal passages … The rash now moves quickly. Over a twenty-four-hour period, it extends itself from the mucous membranes to the surface of the skin. On some, it turns inward, hemorrhaging subcutaneously. These victims die early, bleeding from the gums, eyes, nose, and other orifices. In most cases, however, the rash turns outward, covering the victim in raised pustules that concentrate in precisely the places where they will cause the most physical pain and psychological anguish: The soles of the feet, the palms of the hands, the face, forearms, neck, and back are focal points of the eruption … If the pustules remain discrete—if they do not run together—the prognosis is good. But if they converge upon one another in a single oozing mass, it is not. This is called confluent smallpox … For some, as the rash progresses in the mouth and throat, drinking becomes difficult, and dehydration follows. Often, an odor peculiar to smallpox develops … Patients at this stage of the disease can be hard to recognize. If damage to the eyes occurs, it begins now … Scabs start to form after two weeks of suffering … In confluent or semiconfluent cases of the disease, scabbing can encrust most of the body, making any movement excruciating … [One observation of such afflicted Native Americans noted that] “They lye on their hard matts, the poxe breaking and mattering, and runing one into another, their skin cleaving … to the matts they lye on; when they turne them, a whole side will flea of[f] at once.” … Death, when it occurs, usually comes after ten to sixteen days of suffering.
Thereafter, the risk drops significantly … and unsightly scars replace scabs and pustules … the usual course of the disease—from initial infection to the loss of all scabs—runs a little over a month. Patients remain contagious until the last scab falls off … Most survivors bear … numerous scars, and some are blinded. But despite the consequences, those who live through the illness can count themselves fortunate. Immune for life, they need never fear smallpox again. [p16-20]
Smallpox was an unfortunate component of the siege of Boston by the British in 1775, but—as Fenn explains—it was far worse for Bostonians than for the Redcoats besieging them. This was because smallpox was a fact of life in eighteenth-century Europe—a series of outbreaks left about four hundred thousand people dead every year, and about a third of the survivors were blinded. As awful as that may seem, it meant that the vast majority of British soldiers had been exposed to the virus and were thus immune. Not so the colonists, who not only had experienced fewer outbreaks but frequently lived in more rural settings at a greater distance from one another, which slowed exposure, leaving far fewer who could count on immunity to spare them. Nothing fuels the spread of a pestilence better than a crowded, bottlenecked urban environment—such as Boston in 1775—except perhaps great encampments of susceptible men from disparate geographies suddenly crammed together, as was characteristic of the nascent Continental Army. To make matters worse, there was some credible evidence that the Brits at times engaged in a kind of embryonic biological warfare by deliberately sending known infected individuals back to the Colonial lines. All of this conspired to form a perfect storm for disaster.
Our late eighteenth-century forebears had a couple of things going for them that we lack today. First of all, while it is true that, as with COVID, there was no cure for smallpox, there were ways to mitigate the spread and the severity that were far more effective than our masks and social distancing—or misguided calls to ingest hydroxychloroquine, for that matter. Their otherwise primitive medical toolkit did contain inoculation, an ancient technique that had only become known to the west in relatively recent times. Now, it is important to emphasize that inoculation—also known as “variolation”—is not comparable to vaccination, which did not come along until closer to the end of the century. Not for the squeamish, variolation instead involved deliberately inserting the live smallpox virus from scabs or pustules into superficial incisions in a healthy subject’s arm. The result was an actual case of smallpox, but generally a much milder one than if contracted from another infected person. Once recovered, the survivor would walk away with permanent immunity. The downside was that some did not survive, and all remained contagious for the full course of the disease. This meant that the inoculated also had to be quarantined, no easy task in an army camp, for example.
The other thing they had going for them back then was a competent leader who took epidemics and how to contain them quite seriously—none other than George Washington himself. Washington was not president at the time, of course, but he was the commander of the Continental Army, and perhaps the most prominent man in the rebellious colonies. Like many of history’s notable figures, Washington was gifted not only with qualities such as courage, intelligence, and good sense, but also with luck. In this case, Washington’s good fortune was to contract—and survive—smallpox as a young man, granting him immunity. But it was likewise the good fortune of the emerging new nation to have Washington in command. Initially reluctant to advance inoculation—not because he doubted the science but rather because he feared it might accelerate the spread of smallpox—he soon concluded that only a systematic program of variolation could save the army, and the Revolution! Washington’s other gifts—for organization and discipline—set in motion mass inoculations and enforced isolation of those affected. Absent this effort, the War of Independence—ever a long shot—might well not have succeeded.
Fenn argues convincingly that the course of the war was significantly affected by Variola in several arenas, most prominently in its savaging of Continental forces during the disastrous invasion of Quebec, which culminated in Benedict Arnold’s battered forces being driven back to Fort Ticonderoga. And in the southern theater, enslaved blacks flocked to British lines, drawn by enticements to freedom, only to fall victim en masse to smallpox, and then tragically find themselves largely abandoned to suffering and death as the Brits retreated. There is a good deal more of this stuff, and many students of the American Revolution will find themselves wondering—as I did—why this fascinating perspective is so conspicuously absent from most treatments of the era.
Remarkably, despite the bounty of material, the Revolution occupies only the first third of the book, leaving far more to explore as the virus travels to the west and southwest, and then on to Mexico, as well as to the Pacific northwest. As Fenn reminds us again and again, smallpox comes from where smallpox has been, and she painstakingly tracks hypothetical routes of the epidemic. Tragic bystanders in its path were frequently Native Americans, who typically manifested more severe symptoms and experienced greater rates of mortality. It has been estimated that perhaps ninety percent of the pre-contact indigenous inhabitants of the Americas were exterminated by exposure to European diseases for which they had no immunity, and smallpox was one of the great vehicles of that annihilation. Variola proved especially lethal as a “virgin soil” epidemic, and Native Americans not unexpectedly suffered far greater casualties than other populations, dying on such a wide scale that entire tribes simply disappeared from history.
No review can properly capture all the ground that Fenn covers in this outstanding book, nor praise her achievement adequately. It is especially rare when a historian combines a highly original thesis with exhaustive research, keen analysis, and exceptional talent with a pen to deliver a magnificent work such as Pox Americana. And perhaps never has there been a moment when this book could find a greater relevance to readers than to Americans in 2020.
If you have studied evolution inside or outside of the classroom, you have no doubt encountered the figure of Jean-Baptiste Lamarck and the discredited notion of the inheritance of acquired characteristics attributed to him, known as “Lamarckism.” This has most famously been represented in the example of giraffes straining to reach leaves on ever-higher branches, which results in the development of longer necks over succeeding generations. Never mind that Lamarck did not originate this concept—and while he echoed it, it remained only a tiny part of the greater body of his work—it has unfortunately clung to his legacy ever since. This is most regrettable, because Lamarck—who died three decades before Charles Darwin shook the spiritual and scientific world with his 1859 publication of On the Origin of Species—was a true pioneer in the field of evolutionary biology who recognized that there were forces at work that put organisms on an ineluctable road to greater complexity. It was Darwin who identified the central mechanism as “natural selection,” and Lamarck was not only denied credit for his contributions to the field, but maligned and ridiculed.
But even if he did not invent the idea, what if Lamarck was right all along to believe, at least in part, that acquired characteristics can be passed along transgenerationally after all—perhaps not on the kind of macro scale manifested by giraffe necks, but in other more subtle yet no less critical components to the principles of evolution? That is the subject of Lamarck’s Revenge: How Epigenetics is Revolutionizing Our Understanding of Evolution’s Past and Present, by the noted paleontologist Peter Ward. The book’s cover naturally showcases a series of illustrated giraffes with ever-lengthening necks! Ward is an enthusiast for the relatively new, still developing—and controversial—science of epigenetics, which advances the hypothesis that certain circumstances can trigger markers that can be transmitted from parent to child by changing the gene expression without altering the primary structure of the DNA itself. Let’s imagine a Holocaust survivor, for instance: can the trauma of Auschwitz cut so deep that the devastating psychological impact of that horrific experience will be passed on to his children, and his children’s children?
This is heady stuff, of course. We should pause for the uninitiated and explain the nature of Darwinian natural selection—the key mechanism of the Theory of Evolution—in its simplest terms. The key to survival for all organisms is adaptation. Random mutations occur over time, and if one of those mutations leaves an organism better adapted to its environment, that organism is more likely to survive and reproduce, and thus pass along its genes to its offspring. Over time, through “gradualism,” this can lead to the rise of new species. Complexity breeds complexity, and that is the road traveled by all organisms, a road that has led from the simplest unicellular prokaryotes—such as the 3.5-billion-year-old photosynthetic cyanobacteria—to modern Homo sapiens sapiens. This is, of course, a very, very long game; so long in fact that Darwin—who lived in a time when the age of the earth was vastly underestimated—fretted that there was not enough time for evolution as he envisioned it to occur. Advances in geology later determined that the earth was about 4.5 billion years old, which solved that problem, but still left other aspects of evolution unexplained by gradualism alone. The brilliant Stephen Jay Gould (along with Niles Eldredge) came along in 1972 and proposed that rather than gradualism, most evolution more likely occurred through what he called “punctuated equilibrium,” often triggered by a catastrophic change in the environment. Debate has raged ever since, but it may well be that evolution is guided by both gradualism and punctuated equilibrium. But could there still be other forces at work?
Transgenerational epigenetic inheritance represents another such force and is at the cutting edge of research in evolutionary biology today. But has the hypothesis of epigenetics been demonstrated to be truly plausible? The answer is—maybe. There do seem to be studies that support transgenerational epigenetic inheritance, most famously—as detailed in Lamarck’s Revenge—what has been dubbed the “Dutch Hunger Winter Syndrome,” in which children born during a famine were smaller than those born before it and carried a later, greater risk of glucose intolerance, conditions then passed down to successive generations. On the other hand, the evidence for epigenetics has not been as firmly established as some proponents, such as Ward, might have us believe.
Lamarck’s Revenge is a very well-written and accessible scientific account of epigenetics for a popular audience, and while I have read enough evolutionary science to follow Ward’s arguments with some competence, I remain a layperson who can hardly endorse or counter his claims. The body of the narrative consists of Ward’s repeated examples of what he identifies as holes in traditional evolutionary biology that can only be explained by epigenetics. Is he right? I simply lack the expertise to say. I should note that I received this book as part of an “Early Reviewers” program, so I felt a responsibility to read it cover-to-cover, although my interest lapsed as the book moved beyond my depth in the realm of evolutionary biology.
I should note that this is all breaking news, and as we appraise it we should be mindful of how those on the fringes of evangelicalism, categorically opposed to the science of human evolution, will cling to any debate over mechanisms in natural selection to proclaim it all a sham sponsored by Satan—who has littered the earth with fossils to deceive us—to challenge the truth of the “Garden of Eden” related in the Book of Genesis. Once dubbed “Creationists,” they have since rebranded themselves in association with the pseudoscience of so-called “Intelligent Design,” which somehow remains part of the curriculum at select accredited universities. Science is self-correcting. These folks are not, so don’t ever let yourself be distracted by their fictional supernatural narrative. Evolution—whether through gradualism and/or punctuated equilibrium and/or epigenetics—remains central to both modern biology and modern medicine, and that is not the least bit controversial among scientific professionals. But if you want to find out more about the implications of epigenetics for human evolution, then I recommend that you pick up Lamarck’s Revenge and challenge yourself to learn more!
Note: While you are at it, if you want to learn more about 3.5-billion-year-old photosynthetic cyanobacteria, I highly recommend this:
Here was buried Thomas Jefferson, Author of the Declaration of American Independence, of the Statute of Virginia for religious freedom & Father of the University of Virginia.
Thomas Jefferson wrote those very words and sketched out the obelisk they would be carved upon. For those who have studied him, that he not only composed his own epitaph but designed his own grave marker was—as we would say in contemporary parlance—just “so Jefferson.” His long life was marked by a catalog of achievements; these were intended to represent his proudest accomplishments. Much remarked upon is the conspicuous absence of his unhappy tenure as third President of the United States. Less noted is the omission of his time as Governor of Virginia during the Revolution, marred by his humiliating flight from Monticello just minutes ahead of British cavalry. Of the three that did make the final cut, his role as author of the Declaration has been much examined. The Virginia statute—seen as the critical antecedent to First Amendment guarantees of religious liberty—gets less press, but only because it is subsumed in a wider discussion of the Bill of Rights. But who really talks about Jefferson’s role as founder of the University of Virginia?
That is the ostensible focus of Thomas Jefferson’s Education, by Alan Taylor, perhaps the foremost living historian of the early Republic. But in this extremely well-written and insightful analysis, Taylor casts a much wider net that ensnares a tangle of competing themes that not only traces the sometimes-fumbling transition of Virginia from colony to state, but speaks to underlying vulnerabilities in economic and political philosophy that were to extend well beyond its borders to the southern portion of the new nation. Some of these elements were to have consequences that echoed down to the Civil War; indeed, still echo to the present day.
Students of the American Civil War are often struck by the paradox of Virginia. How was it possible that this colony—so central to the Revolution and the founding of the Republic, the most populous and prominent, a place that boasted notable thinkers like Jefferson, Madison and Marshall, that indeed was home to four of the first five presidents of the new United States—could find itself on the eve of secession such a regressive backwater, soon doomed to serve as the capital of the Confederacy? It turns out that the sweet waters of the Commonwealth were increasingly poisoned by the institution of human chattel slavery, once decried by its greatest intellects, then declared indispensable, finally deemed righteous. This tragedy has been well-documented in Susan Dunn’s superlative Dominion of Memories: Jefferson, Madison & the Decline of Virginia, as well as Alan Taylor’s own Pulitzer Prize-winning work, The Internal Enemy: Slavery and War in Virginia, 1772-1832. What came to be euphemistically termed the “peculiar institution” polluted everything in its orbit, often invisibly except to the trained eye of the historian. This included, of course, higher education.
If the raison d’être of the Old Dominion was to protect and promote the interests of the wealthy planter elite that sat atop the pyramid of a slave society, then how important was it really for the scions of Virginia gentlemen to be educated beyond the rudimentary levels required to manage a plantation and move in polite society? And after all, wasn’t the “honor” of the up-and-coming young “masters” of far greater consequence than the aptitude to discourse in matters of rhetoric, logic or ethics? In Thomas Jefferson’s Education, Taylor takes us back to the nearly forgotten era of a colonial Virginia when the capital was located in “Tidewater” Williamsburg and rowdy students—wealthy, spoiled sons of the planter aristocracy with an inflated sense of honor—clashed with professors at the prestigious College of William & Mary who dared to attempt to impose discipline upon their bad behavior. A few short years later, Williamsburg was in shambles, a near ghost town, badly mauled by the British during the Revolution, the capital relocated north to “Piedmont” Richmond, William & Mary in steep decline. Thomas Jefferson’s determination over more than two decades to replace it with a secular institution devoted to the liberal arts that welcomed all white men, regardless of economic status, is the subject of this book. How he realized his dream with the foundation of the University of Virginia in the very sunset of his life, as well as the spectacular failure of that institution to turn out as he envisioned it, is the wickedly ironic element in the title of Thomas Jefferson’s Education.
The author is at his best when he reveals the unintended consequences of history. In his landmark study, American Revolutions: A Continental History, 1750-1804, Taylor underscores how American Independence—rightly heralded elsewhere as the dawn of representative democracy for the modern West—was at the same time to prove catastrophic for Native Americans and African Americans, whose fate would likely have been far more favorable had the colonies remained wedded to a British Crown that drew a line for westward expansion at the Appalachians, and later came to abolish slavery throughout the empire. Likewise, there is the example of how the efforts of Jefferson and Madison—lauded for shaking off the vestiges of feudalism for the new nation by putting an end to institutions of primogeniture and entail that had formerly kept estates intact—expanded the rights of white Virginians while dooming countless numbers of the enslaved to be sold to distant geographies and forever separated from their families.
In Thomas Jefferson’s Education, the disestablishment of religion is the focal point for another unintended consequence. For Jefferson, an established church was anathema, and stripping the Anglican Church of its preferred status was central to his “Statute of Virginia for Religious Freedom,” whose principles were later enshrined in the First Amendment. But it turns out that religion and education were intertwined in colonial Virginia’s most prominent institution of higher learning, Williamsburg’s College of William & Mary, funded by the House of Burgesses, where professors were typically ordained Anglican clergymen. Moreover, tracts of land known as “glebes,” formerly distributed by the colonial government for Anglican (later Episcopal) church rectors to farm or rent, came under assault by evangelical churches allied with secular forces after the Revolution, in a movement that eventually resulted in confiscation. This put many local parishes—once critical sponsors of both education and poor relief—into a death spiral that begat still more unintended consequences, some of which still resonate in the present-day politics and culture of the American south. As Taylor notes:
The move against church establishment decisively shifted public finance for Virginia. Prior to the revolution, the parish tax had been the greatest single tax levied on Virginians; its elimination cut the local tax burden by two thirds. Poor relief suffered as the new County overseers spent less per capita than had the old vestries. After 1790, per capita taxes, paid by free men in Virginia, were only a third of those in Massachusetts. Compared to northern states, Virginia favored individual autonomy over community obligation. Jefferson had hoped that Virginians would reinvest their tax savings from disestablishment by funding the public system of education for white children. Instead county elites decided to keep the money in their pockets and pose as champions of individual liberty. [p57-58]
For Jefferson, a creature of the Enlightenment, the sins of medievalism inherent to institutionalized religion were glaringly apparent, yet he was blinded to the positive contributions it could provide for the community. Jefferson also frequently perceived his own good intentions in the eyes of others who simply did not share them because they were either selfish or indifferent. Jefferson seemed to genuinely believe that an emphasis on individual liberty would in itself foster the public good, when in reality—then and now—many take such liberty as the license to simply advance their own interests. For all his brilliance, Jefferson was too often naïve when it came to the character of his countrymen.
Once near-universally revered, the legacy of Thomas Jefferson often triggers ambivalence for a modern audience and poses a singular challenge for historical analysis. A central Founder, Jefferson penned the bold claim in the Declaration “that all men are created equal” that defined both the struggle with Britain and the notion of “liberty” that not only came to characterize the Republic that eventually emerged, but echoed with a deafening resonance through the French Revolution—and far beyond, to legions of the oppressed yearning for the universal equality that Jefferson had asserted was their due. At the same time, over the course of his lifetime Jefferson owned hundreds of human beings as chattel property. One of the enslaved—almost certainly the half-sister of his late wife—served as his concubine and bore him several offspring who were also enslaved.
The once popular view that imagined that Jefferson did not intend to include African Americans in his definition of “all men” has been clearly refuted by historians. And Jefferson, like many of his elite peers of the Founding generation—Madison, Monroe, and Henry—decried the immorality of slavery as an institution while consenting to its persistence, to their own profit. Most came to find grounds to justify it, but not Jefferson: the younger Jefferson cautiously advocated for abolition, while the older Jefferson made excuses for why it could not be achieved in his lifetime—made manifest in his much-quoted “wolf by the ear” remark—but he never stopped believing it a profound wrong. As Joseph Ellis underscored in his superb study, American Sphinx, Jefferson frequently held more than one competing and contradictory view in his head simultaneously and was somehow immune to the cognitive dissonance such paradox might provoke in others.
It is what makes Jefferson such a fascinating study, not only because he was such a consequential figure for his time, but because the Republic then and now remains a creature of habitually irreconcilable contradictions remarkably emblematic of this man, one of its creators, who has carved out a symbolism that varies considerably from one audience to another. Jefferson, more than any of the other Founders, was responsible for the enduring national schizophrenia that pits federalism against localism, a central economic engine against entrepreneurialism, and the well-being of a community against personal liberties that would let you do as you please. Other elements have been, if not resolved, forced to the background, such as the industrial vs. the agricultural, and the military vs. the militia. Of course, slavery has been abolished, civil rights tentatively obtained, but the shadow of inequality stubbornly lingers, forced once more to the forefront by the murder of George Floyd; I myself participated in a “Black Lives Matter” protest on the day before this review was completed.
Perhaps much overlooked in the discussion but no less essential is the role of education in a democratic republic. Here too, Jefferson had much to offer and much to pass down to us, even if most of us have forgotten that it was his soft-spoken voice that pronounced it indispensable for the proper governance of both the state of Virginia and the new nation. That his ambition extended only to universal education for white males, excluding blacks and women, naturally strikes us as shortsighted, even repugnant, but should not erase the fact that even this was a radical notion in its time. Rather than disparage Jefferson, who died nearly two centuries ago, we should perhaps condemn the inequality in education that persists in America today, where a tradition of community schools funded by property taxes meant that my experience growing up in a white, middle-class suburb in Fairfield, CT translated into an educational experience vastly superior to that of the people of color who attended the ancient crumbling edifices in the decaying urban environment of Bridgeport, less than three miles from my home. How can we talk about “Black Lives Matter” without talking about that?
The granite obelisk that marked Jefferson’s final resting place was chipped away at by souvenir hunters until it was relocated in order to preserve it. A joint resolution of Congress funded the replacement, erected in 1883, that visitors now encounter at Monticello. The original obelisk now incongruously sits in a quadrangle at the University of Missouri, perhaps as far removed from Jefferson’s grave as today’s diverse, co-ed institution of UVA at Charlottesville is from both the university he founded and the one he envisioned. We have to wonder if Jefferson would be more surprised to learn that African Americans are enrolled at UVA—or that in 2020 they comprise less than seven percent of the undergraduate population? And what would he make of the white supremacists who rallied at Charlottesville in 2017, and those who stood against them? I suspect a resurrected Jefferson would be no less enigmatic than the one who walked the earth so long ago.
Alan Taylor has written a number of outstanding works—I’ve read five of them—and he has twice won the Pulitzer Prize for History. He is also, incidentally, the Thomas Jefferson Memorial Foundation Professor of History at the University of Virginia, so Thomas Jefferson’s Education is not only an exceptional contribution to the historiography but no doubt a project dear to his heart. While I continue to admire Jefferson even as I acknowledge his many flaws, I cannot help wondering how Taylor—who has so carefully scrutinized him—personally feels about Thomas Jefferson. I recall that in the afterword to his magnificent historical novel, Burr, Gore Vidal admits: “All in all, I think rather more highly of Jefferson than Burr does …” If someone puts Alan Taylor on the spot, I suppose that could be as good an answer as any …
Note: I have reviewed other works by Alan Taylor here:
Nolite te bastardes carborundorum could very well be the Latin phrase most familiar to a majority of Americans. Roughly translated as “Don’t let the bastards grind you down,” it has been emblazoned on tee shirts and coffee mugs, trotted out as bumper sticker and email signature, and—most prominently—has become an iconic feminist rallying cry for women. That this famous slogan is not really Latin or any language at all, but instead a kind of schoolkid’s “mock Latin,” speaks to the colossal cultural impact of the novel where it first made its appearance in 1985, The Handmaid’s Tale, by Margaret Atwood, as well as the media then spawned, including the 1990 film featuring Natasha Richardson, and the acclaimed series still streaming on Hulu. Consult any random critic’s list of the finest examples in the literary sub-genre “dystopian novels,” and you will likely find The Handmaid’s Tale in the top five, along with such other classic masterpieces as Orwell’s 1984, Huxley’s Brave New World and Bradbury’s Fahrenheit 451, which is no small achievement for Atwood.
For anyone who has not been locked in a box for decades, The Handmaid’s Tale relates the chilling story of the not-too-distant-future nation of “Gilead,” a remnant of a fractured United States that has become a totalitarian theonomy demanding absolute obedience to divine law, especially the harsh strictures of the Old Testament. A crisis in fertility has led to elite couples relying on semi-enslaved “handmaids” who serve as surrogates to be impregnated and carry babies to term for them, a practice that includes a bizarre ritual in which the handmaid lies in the embrace of the barren wife while being penetrated by the “Commander.” The protagonist is known as “Offred”—or “Of Fred,” the name of this Commander—but once upon a time, before the overthrow of the U.S., she was an independent woman, a wife, a mother. It is Offred who one day happens upon Nolite te bastardes carborundorum scratched upon the wooden floor of her closet, presumably by the anonymous handmaid who preceded her.
Brilliantly structured as a kind of literary echo of Geoffrey Chaucer’s The Canterbury Tales, employing Biblical imagery—the eponymous “handmaid” based upon the Old Testament account of Rachel and her handmaid Bilhah—and magnificently imagining a horrific near-future of a male-dominated society where all women are garbed in color-coded clothing to reflect their strictly assigned subservient roles, Atwood’s narrative achieves the almost impossible feat of imbuing what might otherwise smack of the fantastic with the highly persuasive badge of the authentic.
The 1990 film adaptation—which also starred Robert Duvall as the Commander and Faye Dunaway as his infertile wife Serena Joy—was largely faithful to the novel, while further fleshing out the character of Offred. But it has been the Hulu series, updated to reflect a near-contemporary pre-Gilead America replete with cell phones and technology—and soon to beget (pun fully intended!) a fourth season—which both embellished and enriched Atwood’s creation for a new generation and a far wider audience. And it has enjoyed broad resonance, at least partially due to its debut in early 2017, just months after the presidential election. The coalition of right-wing evangelicals, white supremacists, and neofascists that has coalesced around the Republican Party in the Age of Trump has not only brought new relevance to The Handmaid’s Tale, but has also seen its scarlet handmaid’s cloaks adopted by many women as the de rigueur uniform of protest in the era of “Me Too.” Meanwhile, the series—which is distinguished by an outstanding cast of fine ensemble actors, headlined by Elisabeth Moss as Offred—has proved enduringly terrifying for three full seasons, while largely maintaining its authenticity.
Re-enter Margaret Atwood with The Testaments: The Sequel to The Handmaid’s Tale, released thirty-four years after the original novel. As a fan of both the book and the series, I looked forward to reading it, though my anticipation was tempered by a degree of trepidation based upon my time-honored conviction that sequels are ill-advised and should generally be avoided. (If Godfather II was the rare exception in film, Thomas Berger’s The Return of Little Big Man certainly proved the rule for literature!) Complicating matters, Atwood penned a sequel not to her own novel, but rather to the Hulu series, which brought back memories of Michael Crichton’s awkward The Lost World, written as a follow-up to Spielberg’s Jurassic Park movie rather than his own book.
My fears were not misplaced.
The action in The Testaments takes place both in Gilead and in Atwood’s native Canada, which remains a bastion of freedom and democracy for those who can escape north. The timeframe is roughly fifteen years after the conclusion of Hulu’s Season Three. The narrative is told from the alternating perspectives of three separate protagonists, one of whom is Aunt Lydia, the outsize brown-clad villain of book and film known for both efficiency and brutality in her role as a “trainer” of handmaids. Aunt Lydia turns out to have both a surprising pre-Gilead backstory and a secret life as an “Aunt,” although there are no hints of these in any previous works. Still, I found the Lydia portion of the book the most interesting, and perhaps the most plausible, in a storyline that often flirts with the farfetched.
In order to sidestep spoilers, I cannot say much about the identities of the other two main characters, who are each subject to surprise “reveals” in the narrative—except that I personally was less surprised than was clearly intended. Oh yes, I get it: the butler did it … but I still have hundreds of pages ahead of me. But that was not the worst of it.
The beauty of the original novel and the series has been their remarkably consistent authenticity, despite an extraordinary futuristic landscape. The test of all fiction—but most especially of science fiction, fantasy, and the dystopian—is: can you successfully suspend disbelief? For me, The Testaments fails this test again and again, most prominently when one of our “unrevealed” characters—an otherwise ordinary teenage girl—is put through something like a “light” version of La Femme Nikita training, and then in short order trades high school for a dangerous undercover mission without missing a beat! Moreover, her character is not well drawn, and the words put in her mouth ring counterfeit. It seems evident that the eighty-year-old Atwood does not know very many sixteen-year-old girls; culturally, this one acts and sounds as if she were raised thirty years ago and then catapulted decades into the future. Overall, the plot is contrived, the action inauthentic, the characters artificial.
This is certainly not vintage Atwood, although some may try to spin it that way. The Handmaid’s Tale was not a one-hit wonder: Atwood is a prolific, accomplished author, and I have read other works—including The Penelopiad and The Year of the Flood—that underscore her reputation as a literary master. But not this time. In my disappointment, I was reminded of my experience with Khaled Hosseini, whose The Kite Runner was a superlative novel that showcased a panoply of complex themes and nuanced characters that remained with me long after I closed the cover. That was followed by A Thousand Splendid Suns, which, though a bestseller, was markedly inferior to his earlier work, peopled with nearly one-dimensional caricatures assigned to be “good” or “evil” and navigating a plot that smacked more of soap opera than subtlety.
The Testaments too has proved a runaway bestseller, but it is the critical acclaim that I find most astonishing, even scoring the highly prestigious 2019 Booker Prize—though I can’t bear to think of it sitting on the same shelf alongside … say … Richard Flanagan’s The Narrow Road to the Deep North, which took the title in 2014. It is tough for me to review a novel so well received that I find so weak and inconsequential, especially when juxtaposed with the rest of the author’s catalog. I keep holding out hope that someone else might take notice that the emperor really isn’t wearing any clothes, but the bottom line is that lots of people loved this book; I did not.
On the other hand, a close friend countered that fiction, like music, is highly subjective. But I take some issue with that. Perhaps you personally might not have enjoyed Faulkner’s The Sound and the Fury, or Hemingway’s A Farewell to Arms, for that matter, but you cannot make the case that these are bad books. I would argue that The Testaments is a pretty bad book, and I would not recommend it. But here, it seems, I remain a lone voice in the literary wilderness.