In April 1962, President John F. Kennedy hosted a remarkable dinner for more than four dozen Nobel Prize winners and assorted other luminaries drawn from the top echelons of the arts and sciences. With his characteristic wit, JFK pronounced it “The most extraordinary collection of talent, of human knowledge, that has ever been gathered together at the White House with the possible exception of when Thomas Jefferson dined alone.” One of the least prominent guests that evening was the novelist William Styron, who attended with his wife Rose and recalled his surprise at the invitation. Styron was not yet the critically acclaimed, Pulitzer Prize-winning literary icon he would later become, but he was hardly an unknown figure, and it turns out that his most recent novel of American expatriates, Set This House on Fire, was the talk of the White House in the weeks leading up to the event. So he had the good fortune to dine not only with the President and First Lady, but with the likes of John Glenn, Linus Pauling, and Pearl Buck—and in the after-party forged a long-term intimate relationship with the Kennedy family.
My first Styron was The Confessions of Nat Turner, which I read as a teen. Its merits somewhat unfairly subsumed at the time by the controversy it sparked over race and remembrance, it remains a notable achievement, as well as a reminder that literature is not synonymous with history, nor should it be held to that account. I found Set This House on Fire largely forgettable, but as an undergrad I was utterly blown away by Lie Down in Darkness, his first novel and a true masterpiece that, while indisputably original, clearly evoked the Faulknerian southern gothic. I went on to read anything by the author I could get my hands on. Also a creature of controversy upon publication, Sophie’s Choice, winner of the National Book Award for Fiction in 1980, remains in my opinion one of the finest novels of the twentieth century.
I thought I had read all of Styron’s fiction, so it was with some surprise that I learned, from a friend who is both author and bibliophile, of the existence of A Tidewater Morning, a collection of three novellas I had somehow overlooked. I bought the book immediately, and packed it to take along for a pandemic retreat to a Vermont cabin in the woods, where I read it through in the course of the first day and a half of the getaway, parked in a comfortable chair on the porch sipping hot coffee in the morning and cold beer in late afternoon. Perhaps it was the fact that this was our first breakaway from months of quarantine isolation, or maybe it was the alcohol content of the IPA I was tossing down, but there was definitely a palpable emotional tug for me reading Styron again—works previously unknown to me no less—so many decades after my last encounter with his work, back when I was a much younger man than the one turning these pages. The effect was more pronounced, I suppose, because the semi-autobiographical stories in this collection look back to Styron’s own youth in the Virginia Tidewater in the 1930s and were written when he too was a much older man.
“Love Day,” the first tale of the collection, has him as a young Marine in April 1945, yet untested in combat, awaiting orders to join the invasion of Okinawa and wrestling with the ambivalence of chasing heroic destiny while privately entertaining “gut-heaving frights.” There’s much banter among the men awaiting their fate, but the story of real significance is told through flashbacks to an episode some years prior, when he was still a boy in the back seat of his father’s Oldsmobile, broken down on the side of the road. War is looming—the very war he is about to join—although it was far from certain then, but the catastrophe of an unprepared America overrun by barbaric Japanese invaders is the near-future imagined in the Saturday Evening Post piece the boy is reading in the back of the stalled car. Simmering tempers flare when he lends voice to the prediction. His mother, stoic in her leg brace, slowly dying of a cancer known to all but unacknowledged, had earlier furiously rebuked him for mouthing a racist epithet and now upbraids him again for characterizing the Japanese as “slimy butchers,” while belittling the notion of a forthcoming war. Unexpectedly, his father—a mild, highly educated man quietly raging at his own inability to effect a simple car repair—lashes out at his wife, branding her “idiotic” and “a fool” for her naïve idealism, then crumbles under the weight of his words to beg her forgiveness. It is a dramatic snapshot not only of a moment of a family in turmoil, but of a time and a place that has long faded from view. Only Styron’s talent with a pen could leave us with so much from what is, after all, only a few pages.
The third story is the title tale, “A Tidewater Morning,” which revisits the family to follow his mother’s final, agonizing days. It concludes with both the boy and his father experiencing twin if unrelated epiphanies. It’s a good read, but I found it a bit overwrought, lacking the subtlety characteristic of Styron’s prose.
Sandwiched between these two is my own favorite, “Shadrach,” the story of a 99-year-old former slave—sold away to an Alabama plantation in antebellum days—who shows up unexpectedly with the dying wish to be buried in the soil of the Dabney property where he was born. The problem is that the Dabney descendant currently living there is a struggling, dirt-poor fellow who could be a literary cousin of the Snopeses who populate Faulkner’s novels. The law prohibits interring a black man on his property, and he likewise lacks the means to bury him elsewhere. On the surface, “Shadrach” appears to be a simple story, but on closer scrutiny it reveals itself to be a very complex one, peopled with multidimensional characters and layered with vigorous doses of both comedy and tragedy.
I highly recommend Styron to those who have not yet read him. For the uninitiated (spoiler alert!), I will close this review with a worthy passage:
“Death ain’t nothin’ to be afraid about,” he blurted in a quick, choked voice … “Life is where you’ve got to be terrified!” he cried as the unplugged rage spilled forth. … “Where in the goddamned hell am I goin’ to get the money to put him in the ground? … I ain’t got thirty-five dollars! I ain’t got twenty-five dollars! I ain’t got five dollars!” … “And one other thing!” He stopped. Then suddenly his fury—or the harsher, wilder part of it—seemed to evaporate, sucked up into the moonlit night with its soft summery cricketing sounds and its scent of warm loam and honeysuckle. For an instant he looked shrunken, runtier than ever, so light and frail that he might blow away like a leaf, and he ran a nervous, trembling hand through his shock of tangled black hair. “I know, I know,” he said in a faint, unsteady voice edged with grief. “Poor old man, he couldn’t help it. He was a decent, pitiful old thing, probably never done anybody the slightest harm. I ain’t got a thing in the world against Shadrach. Poor old man.” …
“And anyway,” Trixie said, touching her husband’s hand, “he died on Dabney ground like he wanted to. Even if he’s got to be put away in a strange graveyard.”
“Well, he won’t know the difference,” said Mr. Dabney. “When you’re dead nobody knows the difference. Death ain’t much.” [p76-78]
Africa. My youth largely knew of it only through the distorted lens of racist cartoons peopled with bone-in-their-nose cannibals, B-grade movies showcasing explorers in pith helmets who somehow always managed to stumble into quicksand, and of course Tarzan. It was still even then sometimes referred to as the “Dark Continent,” something that was supposed to mean dangerous and mysterious but also translated, for most of us, into the kind of blackness that was synonymous with race and skin color.
My interest in Africa came via the somewhat circuitous route of my study of the Civil War. The central cause of that conflict was, of course, human chattel slavery, and nearly all the enslaved were descendants of lives stolen from Africa. So, for me, a closer scrutiny of the continent was the logical next step. One of the benefits of a fine personal library is that there are hundreds of volumes sitting on shelves waiting for me to find the moment to find them. Such was the case for Africa: A Biography of the Continent, by John Reader, which sat unattended but beckoning for some two decades until a random evening found a finger on the spine and then the cover was open and the book was in my lap. I did not turn back.
With a literary flourish rarely present in nonfiction, combined with the ambitious sweep of something like a novel of James Michener, Reader attempts nothing less than the epic as he boldly surveys the history of Africa from the tectonic activities that billions of years ago shaped the continent, to the evolution of the single human species that now populates the globe, to the rise and fall of empires, to colonialism and independence, and finally to the twin witness of the glorious and the horrific in the peaceful dismantling of South African apartheid and the Rwandan genocide. In nearly seven hundred pages of dense but highly readable text, the author succeeds magnificently, identifying the myriad differences in peoples and lifeways and environments while not neglecting the shared themes that much of the continent, then and now, holds in common.
Africa is the world’s second largest continent, and it hosts by far the largest number of sovereign nations: with the addition of South Sudan in 2011—twelve years after Reader’s book was published—there are now fifty-four, as well as a couple of disputed territories. But nearly all of these states are artificial constructs that are relics of European colonialism, lines on maps once penciled in by elite overlords in distant drawing rooms in places like London, Paris, Berlin, and Brussels, and those maps were heavily influenced by earlier incursions by the Spanish, Portuguese, and Dutch. The poverty, instability, and often dreadful standards of living that afflict much of Africa are vestiges of these artificial borders, which mostly ignored prior states, tribes, clans, languages, religions, identities, lifeways. When their colonial masters, who had long raped the land for its resources and the people for their self-esteem, withdrew in the whirlwind decolonization era of 1956-1976—some at the strike of the pen, others at the point of the sword—the exploiters left little of value for nation-building to the exploited beyond the mockery of those boundaries. Whatever of the ancestral had been lost in the process had been lost irrevocably. That is one of Reader’s themes. But there is so much more.
The focus is, as it should be, on sub-Saharan Africa; the continent’s northern portion is an extension of the Mediterranean world, marked by the storied legacies of ancient Greeks, Carthaginians, Romans, and the later Arab conquest. And Egypt, then and now, belongs more properly to the Middle East. But most of Africa’s vast geography stretches south of that, along the coasts and deep into the interior. Reader delivers “Big History” at its best, and sub-Saharan Africa offers up an immense arena for the drama that entails—from the fossil beds that begat Homo habilis in Tanzania’s Olduvai Gorge, to the South African diamond mines that spawned enormous wealth for a few on the backs of the suffering of a multitude, to today’s Maasai Mara game reserve in Kenya, which we learn is not, as we might suppose, a remnant of some ancient pristine habitat, but rather a breeding ground for the deadly sleeping sickness carried by the tsetse fly, which turned once productive land into a place unsuitable for human habitation.
Perhaps the most remarkable theme in Reader’s book is population sustainability and migration. While Africa is the second largest of earth’s continents, it remains vastly underpopulated relative to its size. Given the harsh environment, limited resources, and prevalence of devastating disease, there is strong evidence that it has likely always been this way. Slave-trading was, of course, an example of a kind of forced migration, but more typically Africa’s history has long been characterized by a voluntary movement of peoples away from the continent, to the Middle East, to Europe, to all the rest of the world. Migration has always been—and remains today—subject to the dual factors of “push” and “pull,” but the push factor has dominated. That is perhaps the best explanation for what drove the migrations of archaic and anatomically modern humans out of Africa to populate the rest of the globe. The recently identified 210,000-year-old Homo sapiens skull in a cave in Greece reminds us that this has been going on a very long time. Homo erectus skulls found in Dmanisi, Georgia, that date to 1.8 million years ago underscore just how long!
Slavery is, not unexpectedly, also a major theme for Reader, largely because of the impact of the Atlantic slave trade on Africa and how it forever transformed the lifeways of the people directly and indirectly affected by its pernicious hold—culturally, politically and economically. The slavery that was a fact of life on the continent before the arrival of European traders closely resembled its ancient roots; certainly race and skin color had nothing to do with it. As noted, I came to study Africa via the Civil War and antebellum slavery. To this day, a favored logical fallacy advanced by “Lost Cause” apologists for the Confederate slave republic asks rhetorically “But their own people sold them as slaves, didn’t they?” As if this contention—if it were indeed true—would somehow expiate or at least attenuate the sin of enslaving human beings. But is it true? Hardly. Captors of slaves taken in raids or in war by one tribe or one ethnicity would hardly have considered them “their own people,” any more than the Vikings who for centuries took Slavs to feed the hungry slave markets of the Arab world would have considered them “their own people.” The very notion reflects how outsiders might view Africa, not how Africans viewed themselves. It is a painful reminder that such ideas endure in the mindset of the deeply entrenched racism that still defines modern America—a racism derived from African chattel slavery to begin with.
The Atlantic slave trade left a mark on every African who was touched by it as buyer, seller or unfortunate victim. The insatiable thirst for cheap labor to work sugar (and later cotton) plantations in the Americas overnight turned human beings into Africa’s most valuable export. Traditions were trampled. An ever-increasing demand put pressure on delivering supply at any cost. Since Europeans tended to perish in Africa’s hostile environment of climate and disease, a whole new class of “middle-men” came to prominence. Slavery, which dominated trade relations, corrupted all it encountered and left scars from its legacy upon the continent that have yet to fully heal.
This review barely scratches the surface of the range of material Reader covers in this impressive work. It’s a big book, but there is not a wasted page or paragraph, and it neglects neither the diversity of the land and its peoples nor what they hold in common. Are there flaws? The included maps are terrible, but for that the publisher should be faulted rather than the author. To compensate, I hung a map of modern Africa on the door of my study and kept a historical atlas as companion to the narrative. Other than that quibble, the author’s achievement is superlative. Rarely have I read something of this size and scope and walked away so impressed, both with how much I learned as well as the learning process itself. If you have any interest in Africa, this book is an essential read. Don’t miss it.
Some years ago, I had the pleasure to stay in a historic cabin on a property in Spotsylvania that still hosts extant Civil War trenches. Those who imagine great armies clad in blue and grey massed against each other with pennants aloft on open fields would not be wrong for the first years of the struggle, but those trenches better reflect the reality of the war as it ground to its slow, bloody conclusion in its final year. Those last months contained some of the greatest drama and most intense suffering of the entire conflict, yet often receive far less attention than deserved. A welcome redress to this neglect is Hymns of the Republic: The Story of the Final Year of the American Civil War, by journalist and historian S.C. Gwynne, that neatly marries literature to history and resurrects for us the kind of stirring narratives that once dominated the field.
Looking back, for all too many Civil War buffs it might seem that a certain Fourth of July in 1863—when in the east a battered Lee retreated from Gettysburg on the same day that Vicksburg fell in the west—marked the beginning of the end for the Confederacy. But experts know that assessment is overdrawn. Certainly, the south had sustained severe body blows on both fronts, but the war yet remained undecided. Like the colonists four score and seven years prior to that day, these rebels did not need to “win” the war, only to avoid losing it. As it was, a full ninety-two weeks—nearly two years—lay ahead until Appomattox, some six hundred forty-six days of bloodshed and uncertainty for both sides, most of what truly mattered compressed into the last twelve months of the war. And, tragically, those trenches played a starring role.
Hymns of the Republic opens in March 1864, when Ulysses Grant—architect of the fall of Vicksburg that was by far the more significant victory on that Independence Day 1863—was brought east and given command of all Union armies. In the three years since Fort Sumter, the war had not gone well in the east, largely as the result of a series of less-than-competent northern generals who had squandered opportunities and been repeatedly driven to defeat or denied outright victory by the wily tactician, Robert E. Lee. The seat of the Confederacy at Richmond—only a tantalizing ninety-five miles from Washington—lay unmolested, while European powers toyed with the notion of granting the rebel government recognition. The strategic narrative in the west was largely reversed, marked by a series of dramatic Union victories crafted by skilled generals, crowned by Grant’s brilliant campaign that saw Vicksburg fall and the Confederacy virtually cut in half. But all eyes had been on the east, to Lincoln’s great frustration. Now events in the west were largely settled, and Lincoln brought Grant east, confident that he had finally found the general who would defeat Lee and end the war. But while Lincoln’s instincts proved sound in the long term, misplaced optimism for an early close to the conflict soon evaporated. More than a year of blood and tears lay ahead.
The battle tactics themselves are a familiar story—Grant Takes Command was the exact title of a Bruce Catton classic—but Gwynne updates the narrative with the benefit of the latest scholarship, looking beyond not only the stereotypes of Grant and Lee but also the very dynamics of more traditional treatments focused solely upon battles and leaders. Most prominently, he resurrects the African Americans who, until somewhat recently, were conspicuously absent from much Civil War history, buried beneath layers of propaganda spun by unreconstructed Confederates who fashioned an alternate history of the war—the “Lost Cause” myth—that for too long dominated Civil War studies and still stubbornly persists both in right-wing politics and in the curricula of some southern school systems to this day. In the process, Gwynne restores African Americans as central players in a struggle from which the history books have long erased them.
Erased. Remarkably, most Americans rarely thought of blacks at all in the context of the war until the film Glory (1989) and Ken Burns’ docuseries The Civil War (1990) came along. And there are still books—Joseph Wheelan’s Their Last Full Measure: The Final Days of the Civil War, published in 2015, springs to mind—that demote these key actors to bit parts. Yet, without enslaved African Americans there would never have been a Civil War. The centrality of slavery to secession has been just as incontrovertibly established by the scholarly consensus as it has been vehemently resisted by Lost Cause proponents who would strike out that uncomfortable reference and replace it with the euphemistic “States’ Rights,” neatly obscuring the fact that southern states seceded to champion and perpetuate the right to own dark-complected human beings as chattel property. Social media is replete with concocted fantasies of legions of “Black Confederates,” but the reality is that about a half million African Americans fled to Union lines, and so many enlisted to make war on their former masters that by the end of the war fully ten percent of the Union army was composed of United States Colored Troops (USCT). Blacks knew what the war was about, and ultimately proved a force to be reckoned with that drove Union victory, even as a deeply racist north often proved less than grateful for their service.
Borrowing a page from the latest scholarship, Gwynne points to the prominence of African Americans throughout the war, but especially in its final months—marked both by remarkable heroism and a trail of tragedy. His story of the final year of the conflict commences with the massacre at Fort Pillow in April 1864 of hundreds of surrendering federal troops—the bulk of whom were uniformed blacks—by Confederates under the command of Nathan Bedford Forrest. The author gives Forrest a bit of a pass here—while the general was himself not on the field, he later bragged about the carnage—but Gwynne rightly puts focus on the long-term consequences, which were manifold.
The Civil War was that rare conflict in history not marred by wide-scale atrocities—except towards African Americans. Lee’s allegedly “gallant” forces in the Gettysburg campaign kidnapped blacks they encountered to send south into slavery, and while Fort Pillow might have been the most significant open slaughter of black soldiers by southerners, it was hardly the exception. Confederates were enraged to see blacks garbed in uniform and sporting a rifle, and thus they were frequently murdered once disarmed rather than taken prisoner like their white counterparts. Something like a replay of Fort Pillow occurred at the Battle of the Crater during the siege of Petersburg, although the circumstances were more ambiguous, as the blacks gunned down in what rebels termed a “turkey shoot” were not begging for their lives as at Pillow. This was not far removed from official policy, of course: the Confederate government threatened to execute or sell into slavery captured black soldiers, and refused to consider them for prisoner exchange. This was a critical factor in the breakdown of the parole and exchange processes that had served as guiding principles throughout much of the war. The result, on both sides, was the horror of overcrowding and deplorable conditions in places like Georgia’s Andersonville and Camp Douglas in Chicago.
Meanwhile, Grant was hardly disappointed with the collapse of prisoner exchange. To his mind, anything that denied the south men or materiel would hasten the end of the war, which was his single-minded pursuit. Grant has long been subjected to calumnies that branded him “Grant the Butcher” because he seemed to throw lives away in hopeless attempts to dislodge a heavily fortified enemy. The most infamous example of this was Cold Harbor, which saw massive Union casualties. But Lee’s tactical victory there—it was to be his last of the war—further depleted his rapidly diminishing supply of men and arms, which simply could not be replaced. Grant had a strategic vision that set him apart from the rest. That Lee pushed on as the odds shrank for any outcome other than ultimate defeat came to beget what Gwynne terms “the Lee paradox: the more the Confederates prolonged the war, the more the Confederacy was destroyed.” [p252] And that destruction was no unintended consequence, but a deliberate component of Grant’s grand strategy to prevent food, munitions, pack animals, and slave labor from supporting the enemy’s war effort. Gwynne finds fault with Sherman’s generalship, but his “march to the sea” certainly achieved what had been intended. And with a northern public divided between those who would make peace with the rebels and those impatient with both Grant and Lincoln for an elusive victory, it was Sherman who delivered Atlanta and ensured the reelection of the president, something much in doubt even in Lincoln’s own mind.
There is far more contained within the covers of this fine work than any review could properly summarize. Much to his credit, the author does not neglect those often marginalized by history, devoting a well-deserved chapter to Clara Barton entitled “Battlefield Angel.” And the very last paragraph of the final chapter settles upon Juneteenth, when—far removed from the now quiet battlefields—the last of the enslaved finally learned they were free. Thus, the narrative ends as it began, with African Americans in the central role in the struggle too often denied them in other accounts. For those well-read in the most recent scholarship, there is little new in Hymns of the Republic, but the general audience will find much to surprise them, if only because a good deal of this material has long been overlooked. Perhaps Gwynne’s greatest achievement is in distilling a grand story from the latest historiography and presenting it as the kind of exciting read Civil War literature is meant to be. I highly recommend it.
A familiar construct for students of European history is what is known as “The Long Nineteenth Century,” a period bookended by the French Revolution and the start of the Great War. The Great War. That is what it used to be called, before it was diminished by its rechristening as World War I, to distinguish it from the even more horrific conflict that was to follow just two decades hence. It is the latter that in retrospect tends to overshadow the former. Some are even tempted to characterize one as simply a continuation of the other, but that is an oversimplification. There was in fact far more than semantics to that designation of “Great War,” and historians are correct to flag it as a definitive turning point, for by the time it was over Europe’s cherished notions of civilization—for better and for worse—lay in ruins, and her soil hosted not only the scars of vast, abandoned trenches, but the bones of millions who had held dear, in their heads and in their hearts, the myths those notions turned out to be.
The war ended with a stopwatch of sorts. The Armistice that went into effect on November 11, 1918 at 11AM Paris time marked the end of hostilities, a synchronized moment of collective European consciousness it is said all who experienced would recall for as long as they lived. Of course, something like 22 million souls—military and civilian—could not share that moment: they were the dead. Nearly three thousand died that very morning, as fighting continued right up to the final moments when the clock ran out.
What happened next? There is a tendency to fast forward because we know how it ends: the imperfect Peace of Versailles, the impotent League of Nations, economic depression, the rise of fascism and Nazism, American isolationism, Hitler invades Poland. In the process, so much is lost. Instead, Daniel Schönpflug artfully slows the pace with his well-written, highly original strain of microhistory, A World on Edge: The End of the Great War and the Dawn of a New Age. The author, an internationally recognized scholar and adjunct professor of history at the Free University of Berlin, blends the careful analytical skills of a historian with a talented pen to turn out one of the finest works in this genre to date.
First, he presses the pause button. That pause—the Armistice—is just a fragment of time, albeit one of great significance. But it is what follows that most concerns Schönpflug, who has a great drama to convey and does so through the voices of an eclectic array of characters from various walks of life across multiple geographies. When the action resumes, alternating and occasionally overlapping vignettes chronicle the postwar years from the unique, often unexpected vantage points of just over two dozen individuals—some very well known, others less so—who were to leave an imprint of larger or smaller consequence upon the changed world they walked upon.
There is Harry S Truman, who regrets that the military glory he aspired to as a boy has eluded him, yet is confident he has acquitted himself well, and cannot wait to return home to marry his sweetheart Bess and—ironically—vows he will never fire another shot as long as he lives. Former pacifist and deeply religious Medal of Honor recipient Sergeant Alvin York receives a hero’s welcome Truman could only dream of, but eschews offers of money and fame to return to his backwoods home in Tennessee, where he finds purpose by leveraging his celebrity to bring roads and schools to his community. Another heroic figure is Sergeant Henry Johnson, of the famed 369th Infantry known as the “Harlem Hellfighters,” who incurred no fewer than twenty-one combat injuries fending off the enemy while keeping a fellow soldier from capture, but because of his skin color returns to an America where he remains a second-class citizen who does not receive the Medal of Honor he deserves until its posthumous award by President Barack Obama nearly a century later. James Reese Europe, the regimental band leader of the “Harlem Hellfighters,” who has been credited with introducing jazz to Europe, also returns home to an ugly twist of fate.
And there’s Käthe Kollwitz, an artist who lost a son in the war and finds herself in the uncertain environment of a defeated Germany engulfed in street battles between Reds and reactionaries, both flanks squeezing the center of a nascent democracy struggling to assert itself in the wake of the Kaiser’s abdication. One of the key members of that tenuous center is Matthias Erzberger, perhaps the most hated man in the country, who had the ill luck to be chosen as the official who formally accedes to Germany’s humiliating terms for Armistice, and as a result wears a target on his back for the rest of his life. At the same time, the former Kaiser’s son, Crown Prince Wilhelm von Preussen, is largely a forgotten figure who waits in exile for a call to destiny that never comes. Meanwhile in Paris, Marshal Ferdinand Foch lobbies for Germany to pay an even harsher price, as journalist Louise Weiss charts a new course for women in publishing and longs to be reunited with her lover, Milan Štefánik, an advocate for Czechoslovak sovereignty.
Others championing independence elsewhere include Nguyễn Tất Thành (later Hồ Chí Minh), polishing plates and politics while working as a dishwasher in Paris; Mohandas Gandhi, who barely survives the Spanish flu and now struggles to hold his followers to a regimen of nonviolent resistance in the face of increasingly violent British repression; T.E. Lawrence, increasingly disillusioned by the failure of the victorious allies to live up to promises of Arab self-determination; and, Terence MacSwiney, who is willing to starve himself to death in the cause of Irish nationhood. No such lofty goals motivate assassin Soghomon Tehlirian, a survivor of the Armenian genocide, who only seeks revenge on the Turks; nor future Auschwitz commandant Rudolf Höss, who emerges from the war an eager and merciless recruit for right-wing paramilitary forces.
There are many more voices, including several from the realms of art, literature, and music such as George Grosz, Virginia Woolf, and Arnold Schönberg. The importance of the postwar evolution of the arts is underscored in quotations and illustrations that head up each chapter. Perhaps the most haunting is Paul Nash’s 1918 oil-on-canvas of a scarred landscape entitled—with a hint of either optimism or sarcasm—We Are Making a New World. All the stories the voices convey are derived from their respective letters, diaries, and memoirs; only in the “Epilogue” does the reader learn that some of those accounts are clearly fabricated.
Many of my favorite characters in A World on Edge are ones that I had never heard of before, such as Moina Michael, who was so inspired by the sacrifice of those who perished in the Great War that she singlehandedly led a campaign to memorialize the dead with the poppy as her chosen emblem for the fallen, an enduring symbol to this very day. But I found no story more gripping than that of Marina Yurlova, a fourteen-year-old Cossack girl who became a child soldier in the Russian army, was so badly wounded she was hospitalized for a year, then entered combat once more during the ensuing civil war and was wounded again, this time by the Bolsheviks. Upon recovery, Yurlova embarked upon a precarious journey on foot through Siberia that lasted a month before she was able to flee Russia for Japan and eventually settle in the United States, where despite her injuries she became a dancer of some distinction.
I am a little embarrassed to admit that I received an advance reader’s edition (ARC) of A World on Edge as part of an early reviewer’s program way back in November 2018, but then let it linger in my to-be-read (TBR) pile until I finally got around to it near the end of June 2020. I loved the book but did not take any notes for later reference. So, by the time I sat down to review it in January 2021, given the size of the cast and the complexity of their stories, I felt there was no way I could do justice to the author and his work without re-reading it—so I did, over just a couple of days! And that is the true beauty of this book: for all its many characters, competing storylines, and what turns out to be multilevel, deeply profound messaging, grand saga though it is, it remains a fast-paced, exciting read. Schönpflug’s technique of employing bit players to recount an epic tale succeeds so masterfully that the reader is hardly aware of what has been happening until the final pages are being turned. This is history, of course, this is indeed nonfiction, yet the result invites a favorable comparison to great literature, to a collection of short stories by Ernest Hemingway, or to a novel by André Brink. If European history is an interest, A World on Edge is not only a recommended read, but a required one.
Women are conspicuously absent in most Civil War chronicles. With a few notable exceptions—Clara Barton, Harriet Tubman, Mary Todd Lincoln—female figures largely appear in the literature as bit players, if they make an appearance at all. Author Karen Abbott offers a welcome redress of this neglect with Liar Temptress Soldier Spy: Four Women Undercover in the Civil War, an exciting and extremely well-written, if deeply flawed, account of some ladies who made a significant contribution to the war effort, north and south.
The concept is sound enough. Abbott focuses on four very different women and relates their respective stories in alternating chapters. There is Belle Boyd, a teenage seductress with a lethal temper who serves as rebel spy and courier; Emma Edmonds, who puts on trousers to masquerade as Frank Thompson and joins the Union army; Rose O’Neal Greenhow, an attractive widow who romances northern politicians to obtain intel for the south; and, Elizabeth Van Lew, a prominent Richmond abolitionist who maintains a sophisticated espionage ring that infiltrates the inner circles of the Confederate government. Each of these is worthy of book-length treatment, but weaving their exploits together is an effective technique that makes for a readable and compelling narrative.
I had never heard of Karen Abbott—the pen name for Abbott Kahler—a journalist and highly acclaimed best-selling author dubbed the “pioneer of sizzle history” by USA Today. She is certainly a gifted writer, and unlike the prose of all too many works of history, hers is fast-moving and engaging. I was swept along by her colorful recounting of the 1861 Battle of Bull Run, with flourishes such as: “Union troops fumbled backward and the Confederates rammed forward, a brutal and uneven dance, with soldiers felled like rotting trees.” I got so carried away I almost made it through the following passage without stumbling:
Some Northern soldiers claimed that every angle, every viewpoint, offered a fresh horror. The rebels slashed throats from ear to ear. They sliced off heads and dropkicked them across the field. They carved off noses and ears and testicles and kept them as souvenirs. They propped the limp bodies of wounded soldiers against trees and practiced aiming for the heart. They wrested muskets and swords from the clenched hands of corpses. They plunged bayonets deep into the backsides of the maimed and the dead. They burned the bodies, collecting “Yankee shin-bones” to whittle into drumsticks, and skulls to use as steins. [p34]
Almost. But I have a master’s degree in history and have spent a lifetime studying the American Civil War, and I have never heard this account of such barbarism at Bull Run. So I paused and flipped to Abbott’s notes for the corresponding page at the back of the book, where with a whiff of insouciance she admits that: “Throughout the war both the North and the South exaggerated the atrocities committed by the enemy, and it’s difficult to determine which incidents were real and which were apocryphal.” [p442] Which is another way of saying that her account is highly sensationalized, if not outright fabrication.
To my mind, Abbott commits an unpardonable sin here. A little research reveals that there were in fact a handful of allegations of brutality in the course of the battle, including the mutilation of corpses, but much of it was anecdotal. There were several episodes of Confederate savagery later in the war, principally inflicted upon black soldiers in blue uniforms, but that is another story. How many readers of a popular history would take her at her word, without question, about what transpired at Bull Run? How many, when confronted with stories of testicles taken as souvenirs, would think to consult her citations? Lively paragraphs like this may certainly make for “sizzle”—but where’s the history? Historical novels have their place—The Killer Angels, by Michael Shaara, and Gore Vidal’s Lincoln, are among my favorites—but that is not the same thing as history, which must abide by a strict allegiance to fact-based reporting, informed analysis, and documentation. Apparently, this author demonstrates little loyalty to such constraints.
I read on, but with far more skepticism. Abbott’s style is seductive, so it’s easy to keep going. But sins do continue to accumulate. I have a passing familiarity with three of the four main characters, but fact-checking remained essential. Certainly the best known and most consequential was Van Lew, a heroic figure who aided the escape of prisoners of war and provided key intelligence to Union forces in the field. Greenhow is often cited as her counterpart working for the southern cause. Belle Boyd, on the other hand, has become a creature of legend who turns up more frequently in fiction or film than in history texts. I had never heard of Emma Edmonds, but I came to find her story the most fascinating of them all.
It seems that the more documented the subject—such as Van Lew, for example—the closer Abbott’s portrait comes to reliable biography. Beyond that, the imaginative seems to intrude, indeed dominate. The astonishing tale of Emma Edmonds has her not only impersonating a male Union soldier, but also variously posing as an Irish peddler and in blackface disguised as a contraband, engaged in thrilling espionage missions behind enemy lines! It rang of the stuff that Thomas Berger’s Little Big Man was made of. I was suitably sucked in, but also wary. And rightly so: Abbott’s version of Emma Edmonds’ life is based almost entirely on Edmonds’ own memoir, with little that corroborates it, but the author doesn’t bother to reveal that in the narrative. That Edmonds pretended to be a man in order to enlist seems plausible; her spy missions, perhaps, are only fantasy. We simply don’t know; a true historian would help us draw conclusions. Abbott seems content to let it play out as so much drama to tickle her audience.
But the worst of it comes when the time arrives to reveal the fate of luckless Confederate spy Greenhow, who drowns when her lifeboat capsizes with Union vessels bearing down on the steamer she abandoned. It is the moment where the superlative talent of Abbott’s pen collides with her concomitant disloyalty to scholarship:
She was sideways, upside down, somersaulting inside the wet darkness. She screamed noiselessly, the water rushing in. She tried to hold her breath—thirty seconds, sixty, ninety—before her mouth gave way and water filled it again. Tiny streams of bubbles escaped from her nostrils. A burning scythed through her chest. That bag of gold yanked like a noose around her neck. Her hair unspooled and leeched to her skin, twining around her neck. She tried to aim her arms up and her legs down, to push and pull, but every direction seemed the same. No moonlight skimmed along the surface, showing her the way; there was no light at all. [p389]
Entertaining, right? Outstanding writing, correct? Solid history—of course not! Imagining Greenhow’s final agonizing moments of life with a literary flourish may very well enrich the pages of a work of fiction, but it is nothing less than an outrage to a work of history.
This book was a fun read. Were it a novel I would likely give it high marks. But that is not how it is packaged. Emma Edmonds pretended to be a man to save the Union. Karen Abbott pretends to be a historian to sell books. Both make for great stories. But don’t confuse either with reliable history.
When I visited New York’s Metropolitan Museum of Art some years ago, the object I found most stunning was the “Monteleone Chariot,” a sixth century Etruscan bronze chariot inlaid with ivory. I stood staring at it, transfixed, long enough for my wife to shuffle her feet impatiently. Still I lingered, dwelling on every detail, especially the panels depicting episodes from the life of Homeric hero Achilles. By that time, I had read The Iliad more than once, and had long been immersed in studies of ancient Greece. How was it then, I wondered, that I could speak knowledgeably about Solon and Pisistratus, but yet know so little about the Etruscans who crafted that chariot in the same century those notables walked the earth?
Long before anyone had heard of the Romans, city-states of Etruria dominated the Italian peninsula—and, along with Carthage and a handful of Greek poleis—the central Mediterranean, as well. Later, Rome would absorb, crush or colonize all of them. In the case of the Etruscans, it was to be a little of each. And somehow, somewhat incongruously, over the millennia Etruscan civilization—or at least what the living, breathing Etruscans would have recognized as such—has been lost to us. But not lost in the way we usually think of “lost civilizations,” like Teotihuacan, for instance, or the Indus Valley, where what remains are ruins of a vanished culture that disappeared from living memory, an undeciphered script, and even the uncertain ethnicity of its inhabitants. The Etruscans, on the other hand, were never forgotten, their alphabet can be read although their language largely defies translation, and their DNA lingers in at least some present-day Italians. Yet, by all accounts they are nevertheless lost, and tantalizingly so.
Such a conundrum breeds frustration, of course: Romans supplanted the Etruscans but hardly exterminated them. Moreover, unlike other civilizations deemed “lost to history,” the Etruscans appear in ancient texts going as far back as Hesiod. There are also hundreds of excavated tombs, rich with decorative art and grave goods, the latter top-heavy with Greek imports they clearly treasured. So how can we know so much about the Etruscans and at the same time so little? Fortunately, Lucy Shipley, who holds a PhD in Etruscan archaeology, comes to a rescue of sorts with her well-written, delightful contribution to the scholarship, entitled simply The Etruscans, a volume in the digest-sized Lost Civilizations series published by Reaktion Books.
Most Etruscan studies are dominated by discussions of the ancient sources and—most prominently—the tombs, which are nothing short of magnificent. But where does that lead us? Herodotus references the Etruscans, as does Livy. But are the sources reliable? Rather dubious, as it turns out. Herodotus may be a dependable chronicler of the Hellenes, but anyone who has read his comically misguided account of Egyptian life and culture is aware how far he can stray from reality. And Roman authors such as Livy routinely trumpeted a decidedly negative perspective, most evident in disdainful memories of the unwelcome semi-legendary Etruscan kings who are said to have ruled Rome until the overthrow of “Tarquin the Proud” in 509 BCE.
Then there are the tombs. Attempts to extrapolate what ancient life was like from the art that decorates the tombs of the dead—awe-inspiring as it may be—can present a distorted picture (pun fully intended!) that ignores all but the wealthiest elite slice of the population. Much like Egyptology’s one-time obsession with pyramids and pharaohs tended to obscure the no less interesting lives of the non-royal—such as those of the workers who collected daily beer rations and left graffiti within the walls of the pyramids they constructed—the emphasis on tombs that is standard to Etruscan studies reveals little of the lives of the vast majority of ordinary folks who peopled their world.
Shipley neatly sidesteps these traditional traps by refusing to be constrained by them. Instead, she relies on her training as an archaeologist to ask questions: what do we know about the Etruscans and how do we know it? And, perhaps more critically: what don’t we know and why don’t we know it? In the process, she brings a surprisingly fresh look to an enigmatic people in a highly readable narrative suitable to both academic and popular audiences. In a book arranged thematically rather than chronologically, the author selects a specific artifact or site for each chapter to serve as a visual trigger for the discussion. Because Shipley is so talented with a pen, it is worth pausing to let her explain her methodology in her own words:
Why focus on the archaeology? Because it is the very materiality, the physicality, the toughness and durability of things and the way they insidiously slip and slide into every corner of our lives that makes them so compelling … We are continually making and remaking ourselves, with the help of things. I would argue that the past is no different in this respect. It’s through things that we can get at the people who made, used and ultimately discarded them—their projects of self-production are as wrapped up in stuff as our own. And always, wrapped up in these things, are fundamental questions about how we choose to be in the world, questions that structure our actions and reactions, questions that change and challenge how we think and what we feel. Questions and objects—the two mainstays of human experience. [p19-20]
Shipley’s approach succeeds masterfully. Because many of these objects—critical artifacts for the archaeologist but often also spectacular works of art for the casual observer—are rendered in full color in this striking edition, the reader is instantly hooked: effortlessly chasing the author’s captivating prose down a host of intriguing rabbit holes in pursuit of answers to the questions she has mated with these objects. Along the way, she showcases the latest scholarship with a concise treatment of a broad range of topics informed by the kind of multi-disciplinary research that defines twenty-first century historical inquiry.
This includes DNA studies of both cattle and human populations in an attempt to resolve the long debate over Etruscan origins. While Herodotus and legions of other ancient and modern detectives have long pointed to legendary migrations from Anatolia, it turns out that the Etruscans are likely autochthonous, speaking a pre-Indo-European language that may be related to the one spoken by Ötzi, the mummified iceman, thousands of years ago. Shipley also takes the time to explain how it is that we can read enough of the Etruscan alphabet to decipher proper names while remaining otherwise frustrated in efforts aimed at meaningful translation. Much that we identify as Roman was borrowed from Etruria, but as Rome assimilated the Etruscans over the centuries, their language was left behind. Later, Etruscan literature—like all too much of the classical world—fell victim to the zeal of early Christians in campaigns to purge any remnants of paganism. Most offensive in this regard were writings that described the practices of the “haruspex,” a specialist who sought to divine the future by examining the livers of sacrificial animals, an Etruscan ritual later integrated into Roman religious practices. Texts of haruspices appear prominently in the “hit lists” drawn up by Christian thinkers Tertullian and Arnobius.
My favorite chapter is entitled “Super Rich, Invisible Poor,” which highlights the inevitable distortion that results from the attention paid to the exquisite art and grave goods of the wealthy elite at the expense of the sizeable majority of the inhabitants of a dozen city-states composed of numerous towns, villages and some larger cities with populations thought to number in the tens of thousands. Although, to be fair, this has hardly been deliberate: there remains a stark scarcity in the archaeological record of the teeming masses, so to speak. While it may smack of cliché, the famous aphorism “Absence of evidence is not evidence of absence” should be triple underscored here! The Met’s Monteleone Chariot, originally part of an elaborate chariot burial, makes an appearance in this chapter, but perhaps far more fascinating is a look at the great complex of workshops at a site called Poggio Civitate, more than a hundred miles from Monteleone, where skilled craftspeople labored to produce a whole range of goods in the same century that chariot was fashioned. But what of those workers? There seemed to be no trace of them. You can clearly detect the author’s delight as she describes recent excavations that uncovered remains of a settlement that likely housed them. Shipley returns again and again to her stated objective of connecting the material culture to the living Etruscans who were once integral to it.
Another chapter worthy of superlatives is “Sex, Lives and Etruscans.” While it is tempting to impose modern notions of feminism on earlier peoples, Etruscan women do seem to have led lives of far greater independence than their classical contemporaries in Greece and Rome. There are also compelling hints at an openness in sexuality—including wife-sharing—that horrified ancient observers who nevertheless thrilled in recounting licentious tales of wicked Etruscan behavior! Shipley describes tomb art that depicts overt sex acts with multiple partners, while letting the reader ponder whether legendary accounts of Etruscan profligacy are given to hyperbole or not.
In addition to beautiful illustrations and an engaging narrative, this volume also features a useful map, a chronology, recommended reading, and plenty of notes. It is rare that any author can so effectively tackle a topic so wide-ranging in such a compact format, so Shipley deserves special recognition for turning out such an outstanding work. The Etruscans rightly belongs on the shelf of anyone eager to learn more about a people who certainly made a vital contribution to the history of western civilization.
In Hearts in Atlantis, Stephen King channels the fabled lost continent as metaphor for the glorious promise of the sixties that vanished so utterly that nary a trace remains. Atlantis sank, King declares bitterly in his fiction. He has a point. If you want to chart the actual moments those collective hopes and dreams were swamped by currents of reaction and finally submerged in the merciless wake of a new brand of unforgiving conservatism, you absolutely must turn to Reaganland: America’s Right Turn 1976-1980, Rick Perlstein’s brilliant, epic political history of an era too often overlooked that surely echoes in the America of 2020 with far greater resonance than perhaps any era before or since. But be warned: you may need forearms even bigger than those of the sign-spinning guy in the Progressive commercial to handle this dense, massive 914-page tome that is nevertheless so readable and engaging that your wrists will tire before your interest flags.
Reaganland is a big book because it is actually several overlapping books. It is first and foremost the history of the United States at an existential crossroads. At the same time, it is a close account of the ill-fated presidency of Jimmy Carter. And, too, it is something of a “making of the president 1980.” This is truly ambitious stuff, and that Perlstein largely succeeds in pulling it off should earn him wide and lasting accolades both as a historian and an observer of the American experience.
Reaganland is the final volume in a series launched nearly two decades ago by Perlstein, a progressive historian, that chronicles the rise of the right in modern American politics. Before the Storm focused on Goldwater’s ascent upon the banner of far-right conservatism. This was followed by Nixonland, which profiled a president who thrived on division and earned the author outsize critical acclaim; and, The Invisible Bridge, which revealed how Ronald Reagan—stridently unapologetic for the Vietnam debacle, for Nixon’s crimes, and for angry white reaction to Civil Rights—brought notions once the creature of the extreme right into the mainstream, and began to pave the road that would take him to the White House. Reaganland is written in the same captivating, breathless style Perlstein made famous in his earlier works, but he has clearly honed his craft: the narrative is more measured, less frenetic, and is crowned with a strong concluding chapter—something conspicuously absent in The Invisible Bridge.
The grand—and sometimes allied—causes of the Sixties were Civil Rights and opposition to the Vietnam War, but concomitant social and political revolutions spawned a myriad of others that included antipoverty efforts for the underprivileged, environmental activism, equal treatment for homosexuals and other marginalized groups such as Native Americans and Chicano farm workers, constitutional reform, consumer safety, and most especially equality for women, of which the right to terminate a pregnancy was only one component. The common themes were inclusion, equality, and cultural secularism. The antiwar movement came not only to dominate but virtually to overshadow all else, yet at the same time it served as a unifying factor that stitched together a kind of counterculture coat of many colors to oppose an often stubbornly unyielding status quo. When the war wound down, that fabric frayed. Those who once marched together now marched apart.
This fragmentation was not generally adversarial; groups once in alliance simply went their own ways, organically seeking to advance the causes dear to them. And there was much optimism. Vietnam was history. Civil Rights had made such strides, even if there remained so much unfinished business. Much of what had been counterculture appeared to have entered the mainstream. It seemed like so much was possible. At Woodstock, Grace Slick had declared that “It’s a new dawn,” and the equality and opportunity that assurance heralded actually seemed within reach. Yet, there were unseen, menacing clouds forming just beyond the horizon.
Few suspected that forces of reaction quietly gathering strength would one day unite to destroy the progress towards a more just society that seemed to lie just ahead. Perlstein’s genius in Reaganland lies in his meticulous identification of each of these disparate forces, revealing their respective origin stories and relating how they came to maximize strength in a collective embrace. The Equal Rights Amendment, riding on a wave of massive bipartisan public support, was but three states away from ratification when a bizarre woman named Phyllis Schlafly seemingly crawled out of the woodwork to mobilize legions of conservative women to oppose it. Gay people were on their way to greater social acceptance via local ordinances which one by one went down to defeat after former beauty queen and orange juice hawker Anita Bryant mounted what turned into a nationwide campaign of resistance. The landmark Roe v. Wade case that guaranteed a woman’s right to choose sparked the birth of a passionate right-to-life movement that soon became the central creature of the emerging Christian evangelical “Moral Majority,” which found easy alliance with those condemning gays and women’s lib. Most critically—in a key component that was to have lasting implications, as Perlstein deftly underscores—the Christian right also pioneered a political doctrine of “co-belligerency” that encouraged groups otherwise not aligned to make common cause against shared “enemies.” Sure, Catholics, Mormons and Jews were destined to burn in a fiery hell one day, reasoned evangelical Protestants, but in the meantime they could be enlisted as partners in a crusade to combat abortion, homosexuality and other miscellaneous signposts of moral decay besetting the nation.
That all this moral outrage could turn into a formidable political dynamic seems to have been largely unanticipated. But, as Perlstein reminds us, maybe it should not have been so surprising: candidate Jimmy Carter, himself deeply religious and well ahead in the 1976 race for the White House, saw a precipitous fifteen-point drop in the polls after an interview in Playboy where he admitted that he sometimes lusted in his heart. Perhaps the sun wasn’t quite ready to come up for that new dawn after all.
Of course, the left did not help matters, often ideologically unyielding in its demand to have it all rather than settle for some, as well as blind to unintended consequences. Nothing was to alienate white members of the national coalition to advance civil rights for African Americans more than busing, a flawed shortcut that ignored the greater imperative for federal aid to fund and rebuild decaying inner-city schools, de facto segregated by income inequality. Efforts to advance what was seen as a far too radical federal universal job guarantee ended up energizing opposition that denied victory to other avenues of reform. And there’s much more. Perlstein recounts the success of Ralph Nader’s crusade for automobile safety, which exposed carmakers for deliberately skimping on relatively inexpensive design modifications that could have saved countless lives in order to turn out even greater profits. Auto manufacturers were finally brought to heel. Consumer advocacy became a thing, with widespread public support and frequent industry acquiescence. But even Nader—not unaware of consequences, unintended or otherwise—advised caution when a protégé pressed a campaign to ban TV ads for sugary cereals that targeted children, predicting with some prescience that “if you take on the advertisers you will end up with so many regulators with their bones bleached in the desert.” [p245] Captains of industry Perlstein terms “Boardroom Jacobins” were stirred to collective action by what was perceived as regulatory overreach, and big business soon joined hands to beat all such efforts back.
Meanwhile, subsequent to Nixon’s fall and Ford’s loss to Carter in 1976, pundits—not for the last time—prematurely foretold the extinction of the Republican Party, leaving stalwart policy wonks on the right seemingly adrift, clinging to their opposition to the pending SALT II arms agreement and the Panama Canal Treaty, furiously wielding oars of obstruction but still lacking a reliable vessel to stem the tide. Bitterly opposed to the prevailing wisdom that counseled moderation to ensure not only relevance but survival, they chafed at accommodation with the Ford-Kissinger-Rockefeller wing of the party that preached détente abroad and compromise at home. They looked around for a new champion … and once again found Ronald Reagan!
The former Bedtime for Bonzo co-star and corporate shill had launched his political career railing against communists concealed in every cupboard, as well as shrewdly exploiting populist rage at long-haired antiwar demonstrators. As governor of California he directed an especially violent crackdown known as “Bloody Thursday” on non-violent protesters at UC Berkeley’s People’s Park that resulted in one death and hundreds of injuries after overzealous police fired tear gas and shotguns loaded with buckshot at the crowd. In a comment that eerily presaged Trump’s “very fine people on both sides” remark, Reagan declared that “Once the dogs of war have been unleashed, you must expect … that people … will make mistakes on both sides.” But a year later he was even less apologetic, proclaiming that “If it takes a bloodbath, let’s get it over with.” This was their candidate, who—remarkably one would think—had nearly snatched the nomination away from Ford in ’76, and then went on to cheer party unity while campaigning for Ford with even less enthusiasm than Bernie Sanders exhibited for Hillary Clinton in 2016. Many hold Reagan at least partially responsible for Ford’s loss in the general election.
But Reagan’s neglect of Ford left him neatly positioned as the front-runner for 1980. As conservatives dug in, others of the party faithful recoiled in horror, fearing a repeat of the drubbing at the polls they took in 1964 with Barry “extremism in defense of liberty is no vice” Goldwater at the top of the ticket. And Reagan did seem extreme, perhaps more so than Goldwater. The sounds of sabers rattling nearly drowned out his words every time he mentioned the U.S.S.R. And he said lots of truly crazy things, both publicly and privately, once even wondering aloud over dinner with columnist Jack Germond whether “Ford had staged fake assassination attempts to win sympathy for his renomination.” Germond later recalled that “He was always a man with a very loose hold on the real world around him.” [p617] Germond had a good point: Reagan once asserted that “Fascism was really the basis for the New Deal,” touted the valuable recycling potential of nuclear waste, and insisted that “trees cause more pollution than automobiles do”—prompting some joker at a rally to decorate a tree with a sign that said “Chop me down before I kill again.”
But Reagan had a real talent with dog whistles, launching his campaign with a speech praising “states’ rights” at a county fair near Philadelphia, Mississippi, where three civil rights workers were murdered in 1964. He once boasted he “would have voted against the Civil Rights Act of 1964,” claimed “Jefferson Davis is a hero of mine,” and bemoaned the Voting Rights Act as “humiliating to the South.” A whiff of racism also clung to his disdain for Medicaid recipients as “a faceless mass, waiting for handouts,” and his recycling ad nauseam of his dubious anecdote of a “Chicago welfare queen” with twelve social security cards who bilked the government out of $150,000. Unreconstructed whites ate this red meat up. Nixon’s “southern strategy” reached new heights under Reagan.
But a white southerner who was not a racist was actually the president of the United States. Despite the book’s title, the central protagonist of Reaganland is Jimmy Carter, a man who arrived at the Oval Office buoyed by public confidence rarely seen in the modern era—and then spent four years on a rollercoaster of support that plummeted far more often than it climbed. At one point his approval rating was a staggering 77% … at another 28%—only four points above where Nixon’s stood when he resigned in disgrace. These days, as the nonagenarian Carter has established himself as the most impressive ex-president since John Quincy Adams, we tend to forget what a truly bad president he was. Not that he didn’t have good intentions, only that—like Woodrow Wilson six decades before him—he was unusually adept at using them to pave his way to hell. A technocrat with an arrogant certitude that he had all the answers, he arrived on the Beltway with little idea of how the world worked, a family in tow that seemed right out of central casting for a Beverly Hillbillies sequel. He often gravely lectured the public on what was really wrong with the country—and then seemed to lay blame upon Americans for outsize expectations. And he dithered, tacking this way and that, alienating both sides of the aisle in a feeble attempt to seem to stand above the fray.
In fairness, he had a lot to deal with. Carter inherited a nation more socio-economically shaken than any since the 1930s. In 1969, the United States had proudly put a man on the moon. Only a few short years later, a country weaned on American exceptionalism saw factories shuttered, runaway inflation, surging crime, cities on the verge of bankruptcy, and long lines just to gas up your car at an ever-skyrocketing cost. And that was before a nuclear power plant melted down, Iranians took fifty-two Americans hostage, and Soviet tanks rolled into Afghanistan. All this was further complicated by a new wave of media hype that saw the birth of the “both-siderism” that gives equal weight to scandals legitimate or spurious—an unfortunate ingredient that remains baked into reporting to this day.
Perhaps the most impressive part of Reaganland is Perlstein’s superlative rendering of what America was like in the mid-70s. Stephen King’s horror is often so effective at least in part due to the fads, fast food, and pop music he uses as so many props in his novels. If that stuff is real, perhaps ghosts or killer cars could be real as well. Likewise, Perlstein brings a gritty authenticity home by stepping beyond politics and policy to enrich the narrative with headlines of serial killers and plane crashes, of assassination and mass suicide, adroitly resurrecting the almost numbing sense of anxiety that informed the times. De Niro’s Taxi Driver rides again, and the reader winces through every page.
Carter certainly had his hands full, especially as the hostage crisis dragged on, but it hardly ranked up there with Truman’s Berlin Airlift or JFK’s Cuban missiles. There were indeed crises, but Carter seemed to manufacture even more—and to get in his own way most of the time. And his attempts to reassure consistently backfired, fueling even more national uncertainty. All this offered a perfect storm of opportunity for right-wing elements who discovered co-belligerency was not only a tactic but a way of life. Against all advice and all odds, Reagan—retaining his “very loose hold on the real world around him”—saw no contradiction in bringing his brand of conservatism to join forces with those maligning gays, opposing abortion, stonewalling the ERA, and boosting the Christian right. Corporate CEOs—Perlstein’s “Boardroom Jacobins”—already on the defensive, were more than ready to finance it. Carter, flailing, played right into their hands. Already the most right-of-center Democratic president of the twentieth century, he too shared that weird vision of the erosion of American morality. And Perlstein reminds us that the debacle of financial deregulation usually traced back to Reagan actually began on Carter’s watch, the seeds sown for the wage stagnation, growth of income inequality, and endless cycles of recession that have been de rigueur in American life ever since. Carter failed to make a good closing argument for why he should be re-elected, and the unthinkable occurred: Ronald Reagan became president of the United States. The result was that the middle-class dream that seemed so much in jeopardy under Carter was permanently crushed once Reagan’s regime of tax cuts, deregulation, and the supply-side approach George H.W. Bush rightly branded as “voodoo economics” became standard operating policy. Progressive reform sputtered and stalled.
The little engine that FDR had ignited to manifest a social and economic miracle for America crashed and burned forever on the vanguard of Reaganomics.
Some readers might be intimidated by the size of Reaganland, but it’s a long book because it tells a long story, and it contains lots of moving parts. Perlstein succeeds magnificently because he demonstrates how all those parts fit together, replete with the nuance and complexity critical to historical analysis. Is it perfect? Of course not. I’m a political junkie, but there were certain segments on policy and legislative wrangling that seemed interminable. And if Perlstein mentioned “Boardroom Jacobins” just one more time, I might have screamed. But these are quibbles. This is without doubt the author’s finest book, and I highly recommend it, as both an invaluable reference work and a cover-to-cover read.
In Hearts in Atlantis, Stephen King imagines the sixties as bookended by JFK’s 1963 assassination and John Lennon’s murder in 1980. Perlstein seems to follow that same school of thought, for the final page of Reaganland also wraps up with Lennon’s untimely death. In an afterword to his work of fiction, King muses: “Although it is difficult to believe, the sixties are not fictional; they actually happened.” If you are more partial to nonfiction and want the real story of how the sixties ended, of how Atlantis sank, you must read Reaganland.
[Note: this review goes to press just a few days before the most consequential presidential election in modern American history. This book and this review are reminders that elections do matter.]
Nearly two decades have passed since Charles Freeman published The Closing of the Western Mind: The Rise of Faith and the Fall of Reason, a brilliant if controversial examination of the intellectual totalitarianism of Christianity that dated to the dawn of its dominance of Rome and the successor states that followed the fragmentation of the empire in the West. Freeman argues persuasively that the early Christian church vehemently and often brutally repudiated the centuries-old classical tradition of philosophical enquiry and ultimately drove it to extinction with a singular intolerance of competing ideas crushed under the weight of a monolithic faith. Not only were pagan religions prohibited, but there would be virtually no provision for any dissent from official Christian doctrine, such that those who advanced even the most minor challenges to interpretation were branded heretics and sent to exile or put to death. That tragic state was to define medieval Europe for more than a millennium.
Now the renowned classical historian has returned with a follow-up epic, The Awakening: A History of the Western Mind AD 500-1700, recently published in the UK (and slated for U.S. release, possibly with a different title) which recounts the slow—some might brand it glacial—evolution of Western thought that restored legitimacy to independent examination and analysis, that eventually led to a celebration, albeit a cautious one, of reason over blind faith. In the process, Freeman reminds us that quality, engaging narrative history has not gone extinct, while demonstrating that it is possible to produce a work that is so well-written it is readable by a general audience while meeting the rigorous standards of scholarship demanded by academia. That this is no small achievement will be evident to anyone who—as I do—reads both popular and scholarly history and is struck by the stultifying prose that often typifies the academic. In contrast, here Freeman takes a skillful pen to reveal people, events and occasionally obscure concepts, many of which may be unfamiliar to those who are not well versed in the medieval period.
The fall of Rome remains a subject of debate for historians. While traditional notions of sudden collapse given to pillaging Vandals leaping over city walls and fora engulfed in flames have long been revised, competing visions of a more gradual transition that better reflect the scholarship sometimes distort the historiography to minimize both the fall and what was actually lost. And what was lost was indeed dramatic and incalculable. If, to take just one example, sanitation can be said to be a mark of civilization, the Roman aqueducts and complex network of sewers that fell into disuse and disrepair meant that fresh water was no longer reliable, and sewage that bred pestilence was to be the norm for fifteen centuries to follow. It was not until the late nineteenth century that sanitation in Europe even approached Roman standards. So, whatever the timeline—rapid or gradual—there was indeed a marked collapse. Causes are far more elusive. Gibbon’s largely discredited casting of Christianity as the villain that brought the empire down tends to raise hackles in those who suspect someone like Freeman of attempting to point that finger once more. But Freeman has nothing to say about why Rome fell, only about what followed. The loss of the pursuit of reason was to be as devastating for the intellectual health of the post-Roman world in the West as sanitation was to prove for its physical health. And here Freeman does squarely take aim at the institutional Christian church as the proximate cause for the subsequent consequences for Western thought. This is well underscored in the bleak assessment that follows in one of the final chapters of The Closing of the Western Mind:
Christian thought that emerged in the early centuries often gave irrationality the status of a universal “truth” to the exclusion of those truths to be found through reason. So the uneducated was preferred to the educated and the miracle to the operation of natural laws … This reversal of traditional values became embedded in the Christian tradition … Intellectual self-confidence and curiosity, which lay at the heart of the Greek achievement, were recast as the dreaded sin of pride. Faith and obedience to the institutional authority of the church were more highly rated than the use of reasoned thought. The inevitable result was intellectual stagnation … [p322]
Awakening picks up where Closing leaves off as the author charts the “Reopening of the Western Mind” (this was the working title of his draft!) but the new work is marked by far greater optimism. Rather than dwell on what has been lost, Freeman puts focus not only upon the recovery of concepts long forgotten but how rediscovery eventually sparked new, original thought, as the spiritual and later increasingly secular world danced warily around one another—with a burning heretic all too often staked between them on Europe’s fraught intellectual ballroom. Because the timeline is so long—encompassing twelve centuries—the author sidesteps what could have been a dull chronological recounting of this slow progression, instead narrowing his lens upon select people, events and ideas that collectively marked milestones along the way, organized into thematic chapters that broaden the scope. This approach transcends what might otherwise have been parochial to brilliantly convey the panoramic.
There are many superlative chapters in Awakening, including the very first one, entitled “The Saving of the Texts 500-750.” Freeman seems to delight in detecting the bits and pieces of the classical universe that managed to survive not only vigorous attempts by early Christians to erase pagan thought but the unintended ravages of deterioration that are every archivist’s nightmare. Ironically, the sacking of cities in ancient Mesopotamia begat conflagrations that baked inscribed clay tablets, preserving them for millennia. No such luck for the Mediterranean world, where papyrus scrolls, the favored medium for texts, fell to war, natural disasters, deliberate destruction, as well as to entropy—a familiar byproduct of the second law of thermodynamics—which was not kind under prevailing environmental conditions. We are happily still discovering papyri preserved by the dry conditions in parts of Egypt—the oldest dating back to 2500 BCE—but it seems that the European climate doomed papyrus to a scant two hundred years before it was no more.
Absent printing presses or digital scans, texts were preserved by painstakingly copying them by hand, typically onto vellum, a kind of parchment made from animal skins with a long shelf life, most frequently in monasteries by monks for whom literacy was deemed essential. But what to save? The two giants of ancient Greek philosophy, Plato and Aristotle, were preserved, but the latter far more grudgingly. Fledgling concepts of empiricism in Aristotle made the medieval mind uncomfortable. Plato, on the other hand, who pioneered notions of imaginary higher powers and perfect forms, could be (albeit somewhat awkwardly) adapted to the prevailing faith in the Trinity, and thus elements of Plato were syncretized into Christian orthodoxy. Of course, as we celebrate what was saved it is difficult not to likewise mourn what was lost to us forever. Fortunately, the Arab world put a much higher premium on the preservation of classical texts—an especially eclectic collection that included not only metaphysics but geography, medicine and mathematics. When centuries later—as Freeman highlights in Awakening—these works reached Europe, they were to be instrumental as tinder to the embers that were to spark first a revival and then a revolution in science and discovery.
My favorite chapter in Awakening is “Abelard and the Battle for Reason,” which chronicles the extraordinary story of the scholastic philosopher Peter Abelard (1079-1142)—who flirted with the secular and attempted to connect rationalism with theology—told against the flamboyant backdrop of Abelard’s tragic love affair with Héloïse, a tale that yet remains the stuff of popular culture. In a fit of pique, Héloïse’s uncle was to have Abelard castrated. The church attempted something similar, metaphorically, with Abelard’s teachings, which led to an order of excommunication (later lifted), but despite official condemnation Abelard left a dramatic mark on European thought that long lingered.
There is too much material in a volume this thick to cover competently in a review, but the reader will find much of it well worth the time. Of course, some will be drawn to certain chapters more than others. Art historians will no doubt be taken with the one entitled “The Flowering of the Florentine Renaissance,” which for me hearkened back to the best elements of Kenneth Clark’s Civilisation, showcasing not only the evolution of European architecture but the author’s own adulation for both the art and the engineering feat demonstrated by Brunelleschi’s dome, the extraordinary fifteenth century adornment that crowns the Florence Cathedral. Yet Freeman does temper his praise for such achievements with juxtaposition to what once had been, as in a later chapter that recounts the sixteenth-century process of relocating an ancient Egyptian obelisk weighing 331 tons that the Emperor Caligula had placed on the Vatican Hill—an operation seen as remarkable at the time. In a footnote, Freeman reminds us that: “One might talk of sixteenth-century technological miracles, but the obelisk had been successfully erected by the Egyptians, taken down by the Romans, brought by sea to Rome and then re-erected there—all the while remaining intact!” [p492n]
If I were to find fault with Awakening, it is that it does not, in my opinion, go far enough to emphasize the impact of the Columbian Experience on the reopening of the Western mind. There is a terrific chapter devoted to the topic, “Encountering the Peoples of the ‘Newe Founde Worldes,’” which explores how the discovery of the Americas and their exotic inhabitants compelled the European mind to examine other human societies whose existence had never before even been contemplated. While that is a valid avenue for analysis, it yet hardly takes into account just how earth-shattering 1492 turned out to be—arguably the most consequential milestone for human civilization (and the biosphere!) since the first cities appeared in Sumer—in a myriad of ways, not least the exchange of flora and fauna (and microbes) that accompanied it. But this significance was perhaps greatest for Europe, which had been a backwater, long eclipsed by China and the Arab Middle East. It was the Columbian Experience that reoriented the center of the world, so to speak, from the Mediterranean to the Atlantic, which was exploited to the fullest by the Europeans who prowled those seas and first bridged the continents. It is difficult to imagine the subsequent accomplishments—intellectual and otherwise—had Columbus not landed at San Salvador. But this remains just a quibble that does not detract from Freeman’s overall accomplishment.
Full disclosure: Charles Freeman and I began a long correspondence via email following my review of Closing. I was honored when he selected me as one of his readers for his drafts of Awakening, which he shared with me in 2018, but at the same time I approached this responsibility with some trepidation: given Freeman’s credentials and reputation, what if I found the work to be sub-standard? What if it was simply not a good book? How would I address that? As it was, these worries turned out to be misplaced. It is a magnificent book and I am grateful to have read much of it as a work in progress, and then again after publication. I did submit several pages of critical commentary to assist the author, to the best of my limited abilities, in honing a better final product, and to that end I am proud to see my name appear in the “Acknowledgments.”
I do not usually talk about formats in book reviews, since the content is typically neither enhanced nor diminished by its presentation in either a leather-bound tome or a mass-market paperback or the digital ink of an e-book, but as a bibliophile I cannot help but offer high praise to this beautiful, illustrated edition of Awakening published by Head of Zeus, even accented by a ribbon marker. It has been some time since I have come across a volume this attractive without paying a premium for special editions from Folio Society or Easton Press, and in this case the exquisite art that supplements the text transcends the ornamental to enrich the narrative.
Interest in the medieval world has perhaps waned over time. But that is, of course, a mistake. How we got from point A to point B is an important story, even if it has never been told before as well as Freeman has told it in Awakening. And it is not an easy story to tell. As the author acknowledges in a concluding chapter: “Bringing together the many different elements that led to the ‘awakening of the western mind’ is a challenge. It is important to stress just how bereft Europe was, economically and culturally, after the fall of the Roman empire compared to what it had been before.” [p735]
Those of us given to dystopian fiction, concerned with the fragility of republics and civilization, and wondering aloud in the midst of a global pandemic and the rise of authoritarianism what our descendants might recall of us if it all fell to collapse tomorrow cannot help but be intrigued by how our ancestors coped—for better or for worse—after Rome was no more. If you want to learn more about that, there might be no better covers to crack than Freeman’s The Awakening. I highly recommend it.
NOTE: My review of Freeman’s earlier work appears here:
“When people ask me if I went to film school, I tell them, ‘No, I went to films,’” Quentin Tarantino famously quipped. While I’m no iconic director, I too “went to films,” in a manner of speaking. I was raised by my grandmother in the 1960s—with a little help from a 19” console TV in the living room and seven channels delivered via rooftop antenna. When cartoons, soaps, or prime time westerns and sitcoms like Bonanza and Bewitched weren’t broadcasting, all the remaining airtime was filled with movies. All kinds of movies: drama, screwball comedies, war movies, gangster movies, horror movies, sci-fi, musicals, love stories, murder mysteries—you name the genre, it ran. And ran. And ran. For untold hours and days and weeks and years.
Grandma—rest in peace—loved movies. Just loved them. All kinds of movies. But she didn’t have much of a discerning eye: for her, The Treasure of the Sierra Madre was no better or worse than Bedtime for Bonzo. At first, I didn’t know any better either, and whether I was four or fourteen I watched whatever was on, whenever she was watching. But I took a keen interest. The immersion paid dividends. My tastes evolved. One day I began calling them films instead of movies, and even turned into something of a “film geek,” arguing against the odds that Casablanca is a better picture than Citizen Kane, promoting Kubrick’s Paths of Glory over 2001, and shamelessly confessing to screening Tarantino’s Kill Bill I and II back-to-back more than a dozen times. In other words, I take films pretty seriously. So, when I noticed that Seduction: Sex, Lies, and Stardom in Howard Hughes’s Hollywood was up for grabs in an early reviewer program, I jumped at the opportunity. I was not to be disappointed.
In an extremely well-written and engaging narrative, film critic and journalist Karina Longworth has managed to turn out a remarkable history of Old Hollywood, in the guise of a kind of biography of Howard Hughes. In films, a “MacGuffin” is something insignificant or irrelevant in itself that serves as a device to trigger the plot. Examples include the “Letters of Transit” in Casablanca, the statuette in The Maltese Falcon, and the briefcase in Tarantino’s Pulp Fiction. Howard Hughes himself is a MacGuffin of sorts in Seduction, which is far less about him than his female victims and the peculiar nature of the studio system that enabled predators like Hughes and others who dominated the motion picture industry.
Howard Hughes was once one of the most famous men in America, known for his wealth and genius, a larger-than-life legend noted both for his exploits as aviator and flamboyance as a film producer given to extravagance and star-making. But by the time I was growing up, all that was in the distant past, and Hughes was little more than a specter in supermarket tabloids, an eccentric billionaire turned recluse. It was later said that he spent most days alone, sitting naked in a hotel room watching movies. Long unseen by the public, at his death he was nearly unrecognizable, skeletal and covered in bedsores. Director Martin Scorsese resurrected him for the big screen in his epic biopic “The Aviator,” headlined by Leonardo DiCaprio and a star-studded cast, which showcased Hughes as a heroic and brilliant iconoclast who in turn took on would-be censors, the Hollywood studio system, the aviation industry and anyone who might stand in the way of his quest for glory—all while courting a series of famed beauties. Just barely in frame was the mental instability, the emerging Obsessive-Compulsive Disorder that later brought him down.
Longworth finds Hughes a much smaller and more despicable man, an amoral narcissist and manipulator who was seemingly incapable of empathy for other human beings. (Yes, there is indeed a palpable resemblance to a certain president!) While Hughes carefully crafted an image of a titan who dominated the twin arenas of flight and film, in Longworth’s portrayal he seems to crash more planes than he lands, and churns out more bombs than blockbusters. In the public eye, he was a great celebrity, but off-screen he comes off as an unctuous villain, a charlatan whose real skill set was self-promotion empowered by vast sums of money and a network of hangers-on. The author gives him his due by denying him top billing as the star of the show, rather giving scrutiny to those in his orbit, the females in supporting roles whom he in turn dominated, exploited and discarded. You can almost hear refrains of Carly Simon’s You’re So Vain interposed in the narrative, taunting the ghost of Hughes with the chorus: “You probably think this song is about you”—which by the way would make a great soundtrack if there’s ever a screen adaptation of the book.
If not Hughes, the real main character is Old Hollywood itself, and with a skillful pen, Longworth turns out a solid history—a decidedly feminist history—of the place and time that is nothing less than superlative. The author recreates for us the early days before the tinsel, when a sleepy little “dry” town no one had ever heard of almost overnight became the celluloid capital of the country. Pretty girls from all over America would flock there on a pilgrimage to fame; most disappointed, many despairing, more than a few dead. Nearly all were preyed upon by a legion of the contemptible, used and abused with a thin tissue of lies and promises that anchored them not only to the geography but to the predominantly male movers and shakers who ran the studio system that dominated everything else. This is a feminist history precisely because Longworth focuses on these women—more specifically ten women involved with Hughes—and through them brilliantly captures Hollywood’s golden age as manifested in both the glamorous and the tawdry.
Howard Hughes was not the only predator in Tinseltown, of course, but arguably its most depraved. If Hollywood power-brokers overpromised fame to hosts of young women just to bed them, for Hughes sex was not even always the principal motivation. It went way beyond that, often to twisted ends perhaps unclear even to Hughes himself. He indeed took many lovers, but those he didn’t sleep with were not exempt from his peculiar brand of exploitation. What really got Howard Hughes off was exerting power over women, controlling them, owning them. He virtually enslaved some of these women, stripping them of their individual freedom of will for months or even years with vague hints at eventual stardom, abetted by assorted handlers appointed to spy on them and report back to him. Even the era of “Me Too” lacks the appropriate vocabulary to describe his level of “creepy!”
One of the women he apparently did not take to bed was Jane Russell. Hughes cast the beautiful, voluptuous nineteen-year-old in The Outlaw, a film that took forever to produce and release largely due to his fetishistic obsession with Russell’s breasts—and the way these spilled out of her dress in a promotional poster that provoked the ire of censors. Longworth’s treatment of the way Russell unflappably endured her long association with Hughes—despite his relentless domination over her life and career—is just one of the many delightful highlights in Seduction.
The Outlaw, incidentally, was one of the movies I recall watching with Grandma back in the day. Her notions of Hollywood had everything to do with the glamorous and the glorious, of handsome leading men and lovely leading ladies up on the silver screen. I can’t help wondering what she might think if she learned how those ladies were tormented by Hughes and other moguls of the time. I wish I could tell her about it, about this book. Alas, that’s not possible, but I can urge anyone interested in this era to read Seduction. If authors of film history could win an Academy Award, Longworth would have an Oscar on her mantle to mark this outstanding achievement.
Imagine there’s a virus sweeping across the land claiming untold victims, the agent of the disease poorly understood, the population in terror of an unseen enemy that rages mercilessly through entire communities, leaving in its wake an exponential toll of victims. As this review goes to press amid an alarming spike in new Coronavirus cases, Americans don’t need to stretch their collective imagination very far to envisage that at all. But now look back nearly two and a half centuries and consider an even worse case scenario: a war is on for the existential survival of our fledgling nation, a struggle compromised by mass attrition in the Continental Army due to another kind of virus, and the epidemic it spawns is characterized by symptoms and outcomes that are nothing less than nightmarish by any standard, then or now. For the culprit then was smallpox, one of the most dread diseases in human history.
This nearly forgotten chapter in America’s past left a deep impact on the course of the Revolution that has been long overshadowed by outsize events in the War of Independence and the birth of the Republic. This neglect has been masterfully redressed by Pox Americana: The Great Smallpox Epidemic of 1775-82, a brilliantly conceived and extremely well-written account by Pulitzer Prize-winning historian Elizabeth A. Fenn. One of the advantages of having a fine personal library in your home is the delight of going to a random shelf and plucking off an edition that almost perfectly suits your current interests, a volume that has been sitting there unread for years or even decades, just waiting for your fingertips to locate it. Such was the case with my signed first edition of Pox Americana, a used bookstore find that turned out to be a serendipitous companion to my self-quarantine for Coronavirus, the great pandemic of our times.
As horrific as COVID-19 has been for us—as of this morning we are up to one hundred thirty-four thousand deaths and three million cases in the United States, a significant portion of the more than half million dead and nearly twelve million cases worldwide—smallpox, known as “Variola,” was far, far worse. In fact, almost unimaginably worse. Not only was it more than three times more contagious than Coronavirus, but rather than a mortality rate that ranges in the low single digits with COVID (the verdict’s not yet in), Variola on average claimed an astonishing thirty percent of its victims, who often suffered horribly in the course of the illness and into their death throes, while survivors were frequently left disfigured by extensive scarring, and many were left blind. Smallpox has a long history that dates back to at least the third century BCE, as evidenced in Egyptian mummies. There were reportedly still fifteen million cases a year as late as 1967. In between it claimed untold hundreds of millions of lives over the years—some three hundred million in the twentieth century alone—until its ultimate eradication in 1980. There is perhaps some tragic irony that we are beset by Coronavirus on the fortieth anniversary of that milestone …
I typically eschew long excerpts for reviews, but Variola was so horrifying and Fenn writes so well that I believe it would be a disservice to do other than let her describe it here:
Headache, backache, fever, vomiting, and general malaise all are among the initial signs of infection. The headache can be splitting; the backache, excruciating … The fever usually abates after the first day or two … But … relief is fleeting. By the fourth day … the fever creeps upward again, and the first smallpox sores appear in the mouth, throat, and nasal passages … The rash now moves quickly. Over a twenty-four-hour period, it extends itself from the mucous membranes to the surface of the skin. On some, it turns inward, hemorrhaging subcutaneously. These victims die early, bleeding from the gums, eyes, nose, and other orifices. In most cases, however, the rash turns outward, covering the victim in raised pustules that concentrate in precisely the places where they will cause the most physical pain and psychological anguish: The soles of the feet, the palms of the hands, the face, forearms, neck, and back are focal points of the eruption … If the pustules remain discrete—if they do not run together—the prognosis is good. But if they converge upon one another in a single oozing mass, it is not. This is called confluent smallpox … For some, as the rash progresses in the mouth and throat, drinking becomes difficult, and dehydration follows. Often, an odor peculiar to smallpox develops … Patients at this stage of the disease can be hard to recognize. If damage to the eyes occurs, it begins now … Scabs start to form after two weeks of suffering … In confluent or semiconfluent cases of the disease, scabbing can encrust most of the body, making any movement excruciating … [One observation of such afflicted Native Americans noted that] “They lye on their hard matts, the poxe breaking and mattering, and runing one into another, their skin cleaving … to the matts they lye on; when they turne them, a whole side will flea of[f] at once.” … Death, when it occurs, usually comes after ten to sixteen days of suffering.
Thereafter, the risk drops significantly … and unsightly scars replace scabs and pustules … the usual course of the disease—from initial infection to the loss of all scabs—runs a little over a month. Patients remain contagious until the last scab falls off … Most survivors bear … numerous scars, and some are blinded. But despite the consequences, those who live through the illness can count themselves fortunate. Immune for life, they need never fear smallpox again. [p16-20]
Smallpox was an unfortunate component of the siege of Boston by the British in 1775, but—as Fenn explains—it was far worse for Bostonians than for the Redcoats besieging them. This was because smallpox was a fact of life in eighteenth-century Europe—a series of outbreaks left about four hundred thousand people dead every year, and about a third of the survivors blinded. As awful as that may seem, it meant that the vast majority of British soldiers had been exposed to the virus and were thus immune. Not so for the colonists, who not only had experienced fewer outbreaks but frequently lived in more rural settings at a greater distance from one another, which slowed exposure, leaving far fewer who could count on immunity to spare them. Nothing fuels the spread of a pestilence better than a crowded, bottlenecked urban environment—such as Boston in 1775—except perhaps great encampments of susceptible men from disparate geographies suddenly crammed together, as was characteristic of the nascent Continental Army. To make matters worse, there was credible evidence that the Brits at times engaged in a kind of embryonic biological warfare by deliberately sending known infected individuals back to the Colonial lines. All of this conspired to form a perfect storm for disaster.
Our late eighteenth-century forebears had a couple of things going for them that we lack today. First of all, while it was true that, as with COVID, there was no cure for smallpox, there were ways to mitigate the spread and the severity that were far more effective than our masks and social distancing—or misguided calls to ingest hydroxychloroquine, for that matter. Their otherwise primitive medical toolkit did contain inoculation, an ancient technique that had become known to the West only in relatively recent times. Now, it is important to emphasize that inoculation—also known as “variolation”—is not comparable to vaccination, which did not come along until closer to the end of the century. Not for the squeamish, variolation instead involved deliberately inserting live smallpox virus from scabs or pustules into superficial incisions in a healthy subject’s arm. The result was an actual case of smallpox, but generally a much milder one than if contracted from another infected person. Once recovered, the survivor walked away with permanent immunity. The downside was that some did not survive, and all remained contagious for the full course of the disease. This meant that the inoculated also had to be quarantined, no easy task in an army camp, for example.
The other thing they had going for them back then was a competent leader who took epidemics and how to contain them quite seriously—none other than George Washington himself. Washington was not president at the time, of course, but he was the commander of the Continental Army, and perhaps the most prominent man in the rebellious colonies. Like many of history’s notable figures, Washington was gifted not only with qualities such as courage, intelligence, and good sense, but also with luck. In this case, Washington’s good fortune was to contract—and survive—smallpox as a young man, granting him immunity. But it was likewise the good fortune of the emerging new nation to have Washington in command. Initially reluctant to advance inoculation—not because he doubted the science but rather because he feared it might accelerate the spread of smallpox—he soon concluded that only a systematic program of variolation could save the army, and the Revolution! Washington’s other gifts—for organization and discipline—set in motion mass inoculations and enforced isolation of those infected. Absent this effort, the War of Independence—ever a long shot—might well not have succeeded.
Fenn argues convincingly that the course of the war was significantly affected by Variola in several arenas, most prominently in its savaging of Continental forces during the disastrous invasion of Quebec, which culminated in Benedict Arnold’s battered forces being driven back to Fort Ticonderoga. And in the southern theater, enslaved blacks flocked to British lines, drawn by enticements to freedom, only to fall victim en masse to smallpox, and then tragically find themselves largely abandoned to suffering and death as the Brits retreated. There is a good deal more of this stuff, and many students of the American Revolution will find themselves wondering—as I did—why this fascinating perspective is so conspicuously absent from most treatments of the era.
Remarkably, despite the bounty of material, emphasis on the Revolution only occupies the first third of the book, leaving far more to explore as the virus travels to the west and southwest, and then on to Mexico, as well as to the Pacific Northwest. As Fenn reminds us again and again, smallpox comes from where smallpox has been, and she painstakingly tracks hypothetical routes of the epidemic. Tragic bystanders in its path were frequently Native Americans, who typically manifested more severe symptoms and experienced greater rates of mortality. It has been estimated that perhaps ninety percent of pre-contact indigenous inhabitants of the Americas were exterminated by exposure to European diseases for which they had no immunity, and smallpox was one of the great vehicles of that annihilation. Variola proved to be especially lethal as a “virgin soil” epidemic, and Native Americans not unexpectedly suffered far greater casualties than other populations, resulting in death on such a wide scale that entire tribes simply disappeared from history.
No review can properly capture all the ground that Fenn covers in this outstanding book, nor praise her achievement adequately. It is especially rare when a historian combines a highly original thesis with exhaustive research, keen analysis, and exceptional talent with a pen to deliver a magnificent work such as Pox Americana. And perhaps never has there been a moment when this book could find a greater relevance to readers than to Americans in 2020.
If you have studied evolution inside or outside of the classroom, you have no doubt encountered the figure of Jean-Baptiste Lamarck and the discredited notion of the inheritance of acquired characteristics attributed to him, known as “Lamarckism.” This has most famously been represented in the example of giraffes straining to reach fruit on ever-higher branches, which results in the development of longer necks over succeeding generations. Never mind that Lamarck did not originate this concept—and though he echoed it, it remained only a tiny part of the greater body of his work—he was nevertheless doomed to have it cling to his legacy ever since. This is most regrettable, because Lamarck—who died three decades before Charles Darwin shook the spiritual and scientific world with his 1859 publication of On the Origin of Species—was actually a true pioneer in the field of evolutionary biology, one who recognized that there were forces at work putting organisms on an ineluctable road to greater complexity. It was Darwin who identified this force as “natural selection,” and Lamarck was not only denied credit for his contributions to the field but otherwise maligned and ridiculed.
But even if he did not invent the idea, what if Lamarck was right all along to believe, at least in part, that acquired characteristics can be passed along transgenerationally after all—perhaps not on the kind of macro scale manifested by giraffe necks, but in other more subtle yet no less critical components to the principles of evolution? That is the subject of Lamarck’s Revenge: How Epigenetics is Revolutionizing Our Understanding of Evolution’s Past and Present, by the noted paleontologist Peter Ward. The book’s cover naturally showcases a series of illustrated giraffes with ever-lengthening necks! Ward is an enthusiast for the relatively new, still developing—and controversial—science of epigenetics, which advances the hypothesis that certain circumstances can trigger markers that can be transmitted from parent to child by changing the gene expression without altering the primary structure of the DNA itself. Let’s imagine a Holocaust survivor, for instance: can the trauma of Auschwitz cut so deep that the devastating psychological impact of that horrific experience will be passed on to his children, and his children’s children?
This is heady stuff, of course. We should pause for the uninitiated and explain the nature of Darwinian natural selection—the key mechanism of the Theory of Evolution—in its simplest terms. The key to survival for all organisms is adaptation. Random mutations occur over time, and if one of those mutations leaves an organism better adapted to its environment, that organism is more likely to survive and reproduce, passing its genes along to its offspring. Over time, through “gradualism,” this can lead to the rise of new species. Complexity breeds complexity, and that is the road traveled by all organisms, one that has led from the simplest unicellular prokaryotes—such as the 3.5-billion-year-old photosynthetic cyanobacteria—to modern Homo sapiens sapiens. This is, of course, a very, very long game; so long in fact that Darwin—who lived in a time when the age of the earth was vastly underestimated—fretted that there was not enough time for evolution as he envisioned it to occur. Advances in geology later determined that the earth is about 4.5 billion years old, which solved that problem but still left other aspects of evolution unexplained by gradualism alone. The brilliant Stephen Jay Gould (along with Niles Eldredge) came along in 1972 and proposed that rather than through gradualism alone, most evolution more likely occurred through what they called “punctuated equilibrium,” often triggered by a catastrophic change in the environment. Debate has raged ever since, but it may well be that evolution is guided by forces of both gradualism and punctuated equilibrium. But could there still be other forces at work?
Transgenerational epigenetic inheritance represents another such force and sits at the cutting edge of research in evolutionary biology today. But has the hypothesis of epigenetics been demonstrated to be truly plausible? The answer is—maybe. In other words, there do seem to be studies that support transgenerational epigenetic inheritance, most famously—as detailed in Lamarck’s Revenge—in what has been dubbed the “Dutch Hunger Winter Syndrome,” in which children born during a famine were smaller than those born before it, and carried a later, greater risk of glucose intolerance, conditions then passed down to successive generations. On the other hand, the evidence for epigenetics has not been as firmly established as some proponents, such as Ward, might have us believe.
Lamarck’s Revenge is a very well-written and accessible scientific account of epigenetics for a popular audience, and while I have read enough evolutionary science to follow Ward’s arguments with some competence, I remain a layperson who can hardly endorse or counter his claims. The body of the narrative consists of Ward’s repeated examples of what he identifies as holes in traditional evolutionary biology that can only be explained by epigenetics. Is he right? I simply lack the expertise to say. I should note that I received this book as part of an “Early Reviewers” program, so I felt a responsibility to read it cover-to-cover, although my interest lapsed as it moved beyond my depth in the realm of evolutionary biology.
I should note that this is all breaking news, and as we appraise it we should be mindful of how those on the fringes of evangelicalism, categorically opposed to the science of human evolution, will cling to any debate over mechanisms in natural selection to proclaim it all a sham sponsored by Satan—who has littered the earth with fossils to deceive us—to challenge the truth of the “Garden of Eden” related in the Book of Genesis. Once dubbed “Creationists,” they have since rebranded themselves in association with the pseudoscience of so-called “Intelligent Design,” which somehow remains part of the curriculum at select accredited universities. Science is self-correcting. These folks are not, so don’t ever let yourself be distracted by their fictional supernatural narrative. Evolution—whether through gradualism and/or punctuated equilibrium and/or epigenetics—remains central to both modern biology and modern medicine, and that is not the least bit controversial among scientific professionals. But if you want to find out more about the implications of epigenetics for human evolution, then I recommend that you pick up Lamarck’s Revenge and challenge yourself to learn more!
Note: While you are at it, if you want to learn more about 3.5-billion-year-old photosynthetic cyanobacteria, I highly recommend this:
Here was buried Thomas Jefferson, Author of the Declaration of American Independence, of the Statute of Virginia for religious freedom & Father of the University of Virginia.
Thomas Jefferson wrote those very words and sketched out the obelisk they would be carved upon. For those who have studied him, that he not only composed his own epitaph but designed his own grave marker was—as we would say in contemporary parlance—just “so Jefferson.” His long life was marked by a catalog of achievements; the three he chose were intended to represent his proudest accomplishments. Much remarked upon is the conspicuous absence of his unhappy tenure as third President of the United States. Less noted is the omission of his time as Governor of Virginia during the Revolution, marred by his humiliating flight from Monticello just minutes ahead of British cavalry. Of the three that did make the final cut, his role as author of the Declaration has been much examined. The Virginia statute—seen as the critical antecedent to First Amendment guarantees of religious liberty—gets less press, but only because it is subsumed in the wider discussion of the Bill of Rights. But who really talks about Jefferson’s role as founder of the University of Virginia?
That is the ostensible focus of Thomas Jefferson’s Education, by Alan Taylor, perhaps the foremost living historian of the early Republic. But in this extremely well-written and insightful analysis, Taylor casts a much wider net that ensnares a tangle of competing themes that not only traces the sometimes-fumbling transition of Virginia from colony to state, but speaks to underlying vulnerabilities in economic and political philosophy that were to extend well beyond its borders to the southern portion of the new nation. Some of these elements were to have consequences that echoed down to the Civil War; indeed, still echo to the present day.
Students of the American Civil War are often struck by the paradox of Virginia. How was it possible that this colony—so central to the Revolution and the founding of the Republic, the most populous and prominent, a place that boasted notable thinkers like Jefferson, Madison and Marshall, that indeed was home to four of the first five presidents of the new United States—could find itself on the eve of secession such a regressive backwater, soon doomed to serve as the capital of the Confederacy? It turns out that the sweet waters of the Commonwealth were increasingly poisoned by the institution of human chattel slavery, once decried by its greatest intellects, then declared indispensable, finally deemed righteous. This tragedy has been well-documented in Susan Dunn’s superlative Dominion of Memories: Jefferson, Madison & the Decline of Virginia, as well as Alan Taylor’s own Pulitzer Prize-winning work, The Internal Enemy: Slavery and the War in Virginia 1772-1832. What came to be euphemistically termed the “peculiar institution” polluted everything in its orbit, often invisibly except to the trained eye of the historian. This included, of course, higher education.
If the raison d’être of the Old Dominion was to protect and promote the interests of the wealthy planter elite that sat atop the pyramid of a slave society, then really how important was it for the scions of Virginia gentlemen to be educated beyond the rudimentary levels required to manage a plantation and move in polite society? And after all, wasn’t the “honor” of the up-and-coming young “masters” of far greater consequence than the aptitude to discourse in matters of rhetoric, logic or ethics? In Thomas Jefferson’s Education, Taylor takes us back to the nearly forgotten era of a colonial Virginia when the capital was located in “Tidewater” Williamsburg and rowdy students—wealthy, spoiled sons of the planter aristocracy with an inflated sense of honor—clashed with professors at the prestigious College of William & Mary who dared to attempt to impose discipline upon their bad behavior. A few short years later, Williamsburg was in shambles, a near ghost town, badly mauled by the British during the Revolution, the capital relocated north to “Piedmont” Richmond, William & Mary in steep decline. Thomas Jefferson’s determination over more than two decades to replace it with a secular institution devoted to the liberal arts that welcomed all white men, regardless of economic status, is the subject of this book. How he realized his dream with the foundation of the University of Virginia in the very sunset of his life, as well as the spectacular failure of that institution to turn out as he envisioned it, is the wickedly ironic element in the title of Thomas Jefferson’s Education.
The author is at his best when he reveals the unintended consequences of history. In his landmark study, American Revolutions: A Continental History, 1750-1804, Taylor underscores how American Independence—rightly heralded elsewhere as the dawn of representative democracy for the modern West—was at the same time to prove catastrophic for Native Americans and African Americans, whose fate would likely have been far more favorable had the colonies remained wedded to a British Crown that drew a line for westward expansion at the Appalachians, and later came to abolish slavery throughout the empire. Likewise, there is the example of how the efforts of Jefferson and Madison—lauded for shaking off the vestiges of feudalism for the new nation by putting an end to institutions of primogeniture and entail that had formerly kept estates intact—expanded the rights of white Virginians while dooming countless numbers of the enslaved to be sold to distant geographies and forever separated from their families.
In Thomas Jefferson’s Education, the disestablishment of religion is the focal point for another unintended consequence. For Jefferson, an established church was anathema, and stripping the Anglican Church of its preferred status was central to his “Statute of Virginia for Religious Freedom” that was later enshrined in the First Amendment. But it turns out that religion and education were intertwined in colonial Virginia’s most prominent institution of higher learning, Williamsburg’s College of William & Mary, funded by the House of Burgesses, where professors were typically ordained Anglican clergymen. Moreover, tracts of land known as “glebes” that were formerly distributed by the colonial government for Anglican (later Episcopal) church rectors to farm or rent came under assault by evangelical churches allied with secular forces after the Revolution, in a movement that eventually was to result in confiscation. This put many local parishes—once critical sponsors of both education and poor relief—into a death spiral that begat still more unintended consequences, which in some ways still resonate in the present-day politics and culture of the American South. As Taylor notes:
The move against church establishment decisively shifted public finance for Virginia. Prior to the revolution, the parish tax had been the greatest single tax levied on Virginians; its elimination cut the local tax burden by two thirds. Poor relief suffered as the new County overseers spent less per capita than had the old vestries. After 1790, per capita taxes, paid by free men in Virginia, were only a third of those in Massachusetts. Compared to northern states, Virginia favored individual autonomy over community obligation. Jefferson had hoped that Virginians would reinvest their tax savings from disestablishment by funding the public system of education for white children. Instead county elites decided to keep the money in their pockets and pose as champions of individual liberty. [p57-58]
For Jefferson, a creature of the Enlightenment, the sins of medievalism inherent in institutionalized religion were glaringly apparent, yet he was blind to the positive contributions it could provide for the community. Jefferson also frequently projected his own good intentions onto others who simply did not share them, whether out of selfishness or indifference. He seemed to genuinely believe that an emphasis on individual liberty would in itself foster the public good, when in reality—then and now—many take such liberty as license to simply advance their own interests. For all his brilliance, Jefferson was too often naïve when it came to the character of his countrymen.
Once near-universally revered, the legacy of Thomas Jefferson often triggers ambivalence for a modern audience and poses a singular challenge for historical analysis. A central Founder, Jefferson’s bold claim in the Declaration “that all men are created equal” defined both the struggle with Britain and the notion of “liberty” that not only came to characterize the Republic that eventually emerged, but gave echo with a deafening resonance to the French Revolution—and far beyond to legions of the oppressed yearning for the universal equality that Jefferson had asserted was their due. At the same time, over the course of his lifetime Jefferson owned hundreds of human beings as chattel property. One of the enslaved—almost certainly the half-sister of Jefferson’s late wife—served as his concubine and bore him several offspring who were also enslaved.
The once popular view that imagined that Jefferson did not intend to include African Americans in his definition of “all men” has been clearly refuted by historians. And Jefferson, like many of his elite peers of the Founding generation—Madison, Monroe, and Henry—decried the immorality of slavery as an institution while consenting to its persistence, to their own profit. Most came to find grounds to justify it, but not Jefferson: the younger Jefferson cautiously advocated for abolition, while the older Jefferson made excuses for why it could not be achieved in his lifetime—made manifest in his much-quoted “wolf by the ear” remark—but he never stopped believing it an existential wrong. As Joseph Ellis underscored in his superb study, American Sphinx, Jefferson frequently held more than one competing and contradictory view in his head simultaneously and was somehow immune to the cognitive dissonance such paradox might provoke in others.
It is what makes Jefferson such a fascinating study, not only because he was such a consequential figure for his time, but because the Republic then and now remains a creature of habitually irreconcilable contradictions remarkably emblematic of this man, one of its creators, who has carved out a symbolism that varies considerably from one audience to another. Jefferson, more than any of the other Founders, was responsible for the enduring national schizophrenia that pits federalism against localism, a central economic engine against entrepreneurialism, and the well-being of a community against personal liberties that would let you do as you please. Other elements have been, if not resolved, forced to the background, such as the industrial vs. the agricultural, and the military vs. the militia. Of course, slavery has been abolished, civil rights tentatively obtained, but the shadow of inequality stubbornly lingers, forced once more to the forefront by the murder of George Floyd; I myself participated in a “Black Lives Matter” protest on the day before this review was completed.
Perhaps much overlooked in the discussion but no less essential is the role of education in a democratic republic. Here too, Jefferson had much to offer and much to pass down to us, even if most of us have forgotten that it was his soft-spoken voice that pronounced it indispensable for the proper governance of both the state of Virginia and the new nation. That his ambition extended only to universal education for white males, excluding blacks and women, naturally strikes us as shortsighted, even repugnant, but that should not erase the fact that even this was a radical notion in its time. Rather than disparage Jefferson, who died nearly two centuries ago, we should perhaps condemn the inequality in education that persists in America today, where a tradition of community schools funded by property taxes meant that my experience growing up in a white, middle-class suburb in Fairfield, CT translated into an educational experience vastly superior to that of the people of color who attended the ancient, crumbling edifices in the decaying urban environment of Bridgeport, less than three miles from my home. How can we talk about “Black Lives Matter” without talking about that?
The granite obelisk that marked Jefferson’s final resting place was chipped away at by souvenir hunters until it was relocated in order to preserve it. A joint resolution of Congress funded the replacement, erected in 1883, that visitors now encounter at Monticello. The original obelisk now incongruously sits in a quadrangle at the University of Missouri, perhaps as far removed from Jefferson’s grave as today’s diverse, co-ed institution of UVA at Charlottesville is from both the university he founded and the one he envisioned. We have to wonder whether Jefferson would be more surprised to learn that African Americans are enrolled at UVA—or that in 2020 they comprise less than seven percent of the undergraduate population. And what would he make of the white supremacists who rallied at Charlottesville in 2017 and those who stood against them? I suspect a resurrected Jefferson would be no less enigmatic than the one who walked the earth so long ago.
Alan Taylor has written a number of outstanding works—I’ve read five of them—and he has twice won the Pulitzer Prize for History. He is also, incidentally, the Thomas Jefferson Memorial Foundation Professor of History at the University of Virginia, so Thomas Jefferson’s Education is not only an exceptional contribution to the historiography but no doubt a project dear to his heart. While I continue to admire Jefferson even as I acknowledge his many flaws, I cannot help wondering how Taylor—who has so carefully scrutinized him—personally feels about Thomas Jefferson. I recall that in the afterword to his magnificent historical novel, Burr, Gore Vidal admits: “All in all, I think rather more highly of Jefferson than Burr does …” If someone puts Alan Taylor on the spot, I suppose that could be as good an answer as any …
Note: I have reviewed other works by Alan Taylor here:
Nolite te bastardes carborundorum could very well be the Latin phrase most familiar to a majority of Americans. Roughly translated as “Don’t let the bastards grind you down,” it has been emblazoned on tee shirts and coffee mugs, trotted out as bumper sticker and email signature, and—most prominently—has become an iconic feminist rallying cry for women. That this famous slogan is not really Latin or any language at all, but instead a kind of schoolkid’s “mock Latin,” speaks to the colossal cultural impact of the novel where it first made its appearance in 1985, The Handmaid’s Tale, by Margaret Atwood, as well as the media then spawned, including the 1990 film featuring Natasha Richardson, and the acclaimed series still streaming on Hulu. Consult any random critic’s list of the finest examples in the literary sub-genre “dystopian novels,” and you will likely find The Handmaid’s Tale in the top five, along with such other classic masterpieces as Orwell’s 1984, Huxley’s Brave New World and Bradbury’s Fahrenheit 451, which is no small achievement for Atwood.
For anyone who has not been locked in a box for decades, The Handmaid’s Tale relates the chilling story of the not-too-distant-future nation of “Gilead,” a remnant of a fractured United States that has become a totalitarian theonomy that demands absolute obedience to divine law, especially the harsh strictures of the Old Testament. A crisis in fertility has led to elite couples relying on semi-enslaved “handmaids” who serve as surrogates to be impregnated and carry babies to term for them, which includes a bizarre ritual where the handmaid lies in the embrace of the barren wife while being penetrated by the “Commander.” The protagonist is known as “Offred”—or “Of Fred,” the name of this Commander—but once upon a time, before the overthrow of the U.S., she was an independent woman, a wife, a mother. It is Offred who one day happens upon Nolite te bastardes carborundorum scratched upon the wooden floor of her closet, presumably by the anonymous handmaid who preceded her.
Brilliantly structured as a kind of literary echo of Geoffrey Chaucer’s The Canterbury Tales, employing Biblical imagery—the eponymous “handmaid” based upon the Old Testament account of Rachel and her handmaid Bilhah—and magnificently imagining a horrific near-future of a male-dominated society where all women are garbed in color-coded clothing to reflect their strictly assigned subservient roles, Atwood’s narrative achieves the almost impossible feat of imbuing what might otherwise smack of the fantastic with the highly persuasive badge of the authentic.
The 1990 film adaptation—which also starred Robert Duvall as the Commander and Faye Dunaway as his infertile wife Serena Joy—was largely faithful to the novel, while further fleshing out the character of Offred. But it has been the Hulu series, updated to reflect a near-contemporary pre-Gilead America replete with cell phones and technology—and soon to beget (pun fully intended!) a fourth season—which both embellished and enriched Atwood’s creation for a new generation and a far wider audience. And it has enjoyed broad resonance, at least partially due to its debut in early 2017, just months after the presidential election. The coalition of right-wing evangelicals, white supremacists, and neofascists that has come to coalesce around the Republican Party in the Age of Trump has not only brought new relevance to The Handmaid’s Tale, but has also seen its scarlet handmaid’s cloaks adopted by many women as the de rigueur uniform of protest in the era of “Me Too.” Meanwhile, the series—distinguished by an outstanding ensemble cast, headlined by Elisabeth Moss as Offred—has proved enduringly terrifying for three full seasons, while largely maintaining its authenticity.
Re-enter Margaret Atwood with The Testaments: The Sequel to The Handmaid’s Tale, released thirty-four years after the original novel. As a fan of both the book and the series, I looked forward to reading it, though my anticipation was tempered by a degree of trepidation based upon my time-honored conviction that sequels are ill-advised and should generally be avoided. (If Godfather II was the rare exception in film, Thomas Berger’s The Return of Little Big Man certainly proved the rule for literature!) Complicating matters, Atwood penned a sequel not to her own novel, but rather to the Hulu series, which brought back memories of Michael Crichton’s awkward The Lost World, written as a follow-up to Spielberg’s Jurassic Park movie rather than his own book.
My fears were not misplaced.
The action in The Testaments takes place in both Gilead and in Atwood’s native Canada, which remains a bastion of freedom and democracy for those who can escape north. The timeframe is roughly fifteen years after the conclusion of Hulu’s Season Three. The narrative is told from the alternating perspectives of three separate protagonists, one of whom is Aunt Lydia, the outsize brown-clad villain of book and film known for both efficiency and brutality in her role as a “trainer” of handmaids. Aunt Lydia turns out to have both a surprising pre-Gilead backstory as well as a secret life as an “Aunt,” although there are no hints of these in any previous works. Still, I found the Lydia portion of the book most interesting, and perhaps the more plausible in a storyline that often flirts with the farfetched.
In order to sidestep spoilers, I cannot say much about the identities of the other two main characters, who are each subject to surprise “reveals” in the narrative—except that I personally was less surprised than was clearly intended. Oh yes, I get it: the butler did it … but I still have hundreds of pages ahead of me. But that was not the worst of it.
The beauty of the original novel and the series has been their remarkably consistent authenticity, despite an extraordinary futuristic landscape. The test of all fiction—but most especially of science fiction, fantasy, and the dystopian—is: can you successfully suspend disbelief? For me, The Testaments fails this test again and again, most prominently when one of our “unrevealed” characters—an otherwise ordinary teenage girl—is put through something like a “light” version of La Femme Nikita training, and then in short order trades high school for a dangerous undercover mission without missing a beat! Moreover, her character is not well-drawn, and the words put in her mouth ring counterfeit. It seems evident that the eighty-year-old Atwood does not know very many sixteen-year-old girls, and culturally this one acts and sounds like she was raised thirty years ago and then catapulted decades into the future. Overall, the plot is contrived, the action inauthentic, the characters artificial.
This is certainly not vintage Atwood, although some may try to spin it that way. The Handmaid’s Tale was not a one-hit wonder: Atwood is a prolific, accomplished author and I have read other works—including The Penelopiad and The Year of the Flood—that underscore her reputation as a literary master. But not this time. In my disappointment, I was reminded of my experience with Khaled Hosseini, whose The Kite Runner was a superlative novel that showcased a panoply of complex themes and nuanced characters that remained with me long after I closed the cover. That was followed by A Thousand Splendid Suns, which though a bestseller was dramatically substandard to his earlier work, peopled with nearly one-dimensional caricatures assigned to be “good” or “evil” navigating a plot that smacked more of soap-opera than subtlety.
The Testaments too has proved a runaway bestseller, but it is the critical acclaim that I find most astonishing, even scoring the highly prestigious 2019 Booker Prize—though I can’t bear to think of it sitting on the same shelf alongside … say … Richard Flanagan’s The Narrow Road to the Deep North, which took the title in 2014. It is tough for me to review a novel so well-received that I find so weak and inconsequential, especially when juxtaposed with the rest of the author’s catalog. I keep holding out hope that someone else might take notice that the emperor really isn’t wearing any clothes, but the bottom line is that lots of people loved this book; I did not.
On the other hand, a close friend countered that fiction, like music, is highly subjective. But I take some issue with that. Perhaps you personally might not have enjoyed Faulkner’s The Sound and the Fury, or Hemingway’s A Farewell to Arms, for that matter, but you cannot make the case that these are bad books. I would argue that The Testaments is a pretty bad book, and I would not recommend it. But here, it seems, I remain a lone voice in the literary wilderness.
DISCLAIMER: The review that follows and the book that is its subject each include a fact-based timeline, political polemic, and inflammatory language, some or all of which may be highly offensive to certain individuals, especially those who identify with the MAGA movement or abjure critical thinking. If you or someone you care about fits that description, is highly sensitive, or is unable to handle views that contradict your political narrative, you are urged to stop reading now and put this review aside. Those who proceed further do so at their own risk, and this reviewer will hold himself blameless for any fits of rage, dangerous increases in blood pressure, or Rumpelstiltskin-like attempts to stomp the ground so hard that the reader sinks into a chasm, that may result from continuing beyond this point …
President Trump is facing a test to his presidency unlike any faced by a modern American leader. It’s not just that the special counsel looms large. Or that the country is bitterly divided over Mr. Trump’s leadership. Or even that his party might well lose the House to an opposition hellbent on his downfall. The dilemma—which he does not fully grasp—is that many of the senior officials in his own administration are working diligently from within to frustrate parts of his agenda and his worst inclinations. I would know. I am one of them.
That is the opening excerpt from an Op-Ed entitled “I Am Part of the Resistance Inside the Trump Administration” published in the New York Times on September 5, 2018, along with this note from the editors: “The Times is taking the rare step of publishing an anonymous Op-Ed essay. We have done so at the request of the author, a senior official in the Trump administration whose identity is known to us and whose job would be jeopardized by its disclosure.”
The Op-Ed was written on the eve of the mid-term elections, before the release of the Mueller report, the murder of Khashoggi, the shutdown of the Trump Foundation for what was described as “a shocking pattern of illegality,” the expulsion of most remaining adults-in-the-room including Mattis and Kelly and Rosenstein, the “perfect call” with Volodymyr Zelensky that led to impeachment—which was just one shocking by-product of an erratic foreign policy of appeasement to Putin, ongoing saber-rattling with the Ayatollah and kissy-face with Kim Jong-un, the granting of dispensation to Mohammed bin Salman, and the green-lighting of Erdoğan to take out our Kurdish allies in Syria, not to mention the continuing crisis at home of kids in cages, and the ousting of any civil servant who dared contradict the President with a fact-based narrative. And there was so very much more that it is truly a blur. In September 2019, Trump doctored a map with a Sharpie and flashed it on television to prove he was right all along about the path of Hurricane Dorian. In October 2019, the President of the United States actually expressed interest in constructing an electrified moat filled with alligators along the Mexican border and shooting migrants in the legs to slow them down! Who even remembers that now?
Shortly after the moat full of alligators rose to a brief crest in the 24-hour cable news cycle and then sank beneath the weight of the tide of whatever was next that no one can really recall anymore, while we collectively held our breaths for the next wave of … well, who knows what? … A Warning, by Anonymous—the same “senior Trump administration official” who was the author of that NYT editorial—was published. A Warning set a record for preorders and made the bestseller list, and while the staggering revelations by a senior insider that it contains would no doubt have thrust any other administration into a tailspin so severe that it could never have recovered, this book—much like the misadventures it chronicles—is essentially as forgotten by an overwhelmed amnesiac public as the moat full of alligators. The notion that “nothing matters” has become such a cliché precisely because—as the subsequent impeachment acquittal underscored—when it comes to Trump, nothing truly does matter anymore. Or really ever has.
The thesis of A Warning—which picks up where the author’s editorial left off—is that 1) all hyperbole on left-leaning media aside, President Trump really is as he appears to the non-brainwashed observer: an unhinged, irrational, narcissistic, incompetent clown who left to his own devices would no doubt steer the clown car with all of us aboard right into the abyss; and 2) if not for the valiant efforts of the author and his or her furtive cohorts, working ceaselessly behind the scenes to curtail Trump’s most dangerous instincts, we would likely already be acquainted with said abyss. “Anonymous” claims that he/she is generally supportive of the administration’s conservative right-wing agenda, but fears what the President’s unbalanced behavior could bring. While Trump rambles on paranoiacally about the so-called “Deep State” plotting to undermine him, the author of A Warning rejects the notion of said “Deep State” while emphasizing what he/she terms the “Steady State,” an unidentified alliance at the top tier of “glorified government babysitters” who quietly strive to “keep the wheels from coming off the White House wagon.”
But apparently the axle nuts are getting looser every day, and those wheels are about to let go, as underscored in the very first chapter, aptly entitled “Collapse of the Steady State,” where the author admits that:
I was wrong about the “quiet resistance” inside the Trump Administration. Unelected bureaucrats and cabinet appointees were never going to steer Donald Trump in the right direction in the long run, or refine his malignant management style. He is who he is. Americans should not take comfort in knowing whether there are so-called adults in the room. We are not bulwarks against the president and shouldn’t be counted upon to keep him in check. That is not our job. That is the job of the voters …
If the original editorial was an attempt to reassure us that while the President was often indeed as mindlessly dangerous as a runaway bull amok in the national china shop, there was yet a significant presence of others sane and rational to rein him in before too much of value was irreparably wrecked, A Warning goes much further, urging a broad coalition to defeat him in 2020, especially targeting those in the right lane who otherwise cheer the lower taxes, frantic deregulation, and the ascent of ultraconservative Supreme Court justices that have been a byproduct of Trumpism. But does such a cohort actually exist?
For Trump and a polarized America in 2020, there are essentially four audiences to play to: 1) Donald Trump represents an existential threat to our values of freedom and democracy in our sacred Republic; 2) Donald Trump is a savior for America sent by the almighty God to restore our sacrosanct traditional values and lock up anyone who would even think about having an abortion; 3) Donald Trump is an absolutely offensive buffoon—of course—but the economy has been supercharged so why don’t they just let him do his job?; and, 4) Donald Trump is the same as Joe Biden, and if Bernie Sanders was President we’d all have free college and healthcare and everything else and if you don’t agree you should just die. A Warning makes a compelling argument, but I don’t see it changing anyone’s mind. Either the Emperor is wearing those new clothes or he isn’t.
Each chapter of A Warning is headed by a quotation from a former president—Madison, Washington, Jefferson, Kennedy, Reagan, etc.—that speaks to an aspect of government or the character of its leadership. What then follows are accounts of Trump’s resistance to expertise, paranoid ramblings, irrational behavior, and “malignant management style” that clearly stand as counterpoints to these ideals. At one point, the author reveals that: “Behind closed doors his own top officials deride him as an ‘idiot’ and a ‘moron’ with the understanding of a ‘fifth or sixth grader.’” [p63] This excerpt describing briefings with the President is a bit longish but perhaps most illustrative:
Early on, briefers were told not to send lengthy documents. Trump wouldn’t read them. Nor should they bring summaries to the Oval Office. If they must bring paper, then PowerPoint was preferred because he is a visual learner. Okay, that’s fine, many thought to themselves, leaders like to absorb information in different ways. Then officials were told that PowerPoint decks needed to be slimmed down. The president couldn’t digest too many slides. He needed more images to keep his interest—and fewer words. Then they were told to cut back the overall message (on complicated issues such as military readiness or the federal budget) to just three main points. Eh, that was still too much … Forget the three points. Come in with one main point and repeat it—over and over again, even if the president inevitably goes off on tangents—until he gets it. Just keep steering the subject back to it. One point. Just that one point. Because you cannot focus the commander-in-chief’s attention on more than one goddamned thing over the course of the meeting, okay? [p29-30]
This is just one of many persuasive arguments that the President is unfit for office, but again: whom is it likely to persuade?
A couple of things struck me about this book that have little to do with its message. First of all, it is not well-written. Not at all. It may be that it was deliberately dumbed-down to target a less educated audience, but I don’t think so. More likely, the author simply isn’t a very talented writer. A Warning has a conversational style, and my guess is that it was dictated and transcribed by someone who is not generally comfortable with a pen.
Second, the author attempts to use history to make his/her point—beyond quotes from presidents, there are also numerous references in the narrative that reach back to ancient Greece and Rome. But the effort is clumsy, at best, and at worst just completely off the mark. At one point, when tracing the origins of the GOP, the author identifies it with “states’ rights,” which while a core value of the modern Republican Party was a hundred fifty years ago closely associated with rival Democrats. [p95] (In fact, one could argue that today’s “Party of Lincoln” has little in common with Lincoln at all.) Elsewhere, there is an awkward tussle with fact-based history as the author struggles to mine democracy in ancient Greece for workable analogies with today’s politics. Athenian demagogue Cleon is cast as a cloak-wearing precursor to Trump “… who will sound familiar to readers … [as he] … inherited money from his father and leveraged it to launch a career in politics.” The famous episode from Thucydides that has Cleon calling for the slaughter of the Mytilenean rebels is posited as an alleged signpost to the decline and fall of Athenian democracy. The later massacre of the Melians is also referenced, as is the execution of Socrates, along with a wild claim that “the latter was an exclamation point on the death of Athenian democracy …” [p183-86] All this is not only completely out of context but downright silly, and—as any historian of ancient Greece would point out—the radical democracy of Athens actually thrived for decades after the death of Socrates in 399 BCE, and even persisted well beyond the subjugation of the polis by Philip II in 338 BCE.
But that the author is both a bad writer and a lousy historian to my mind just adds to his/her authenticity as a “senior Trump administration official.” After all, we know that the cabinet is composed of second- and third-rate individuals, and the quality—especially as we have made the shift to “acting” secretaries that don’t require Senate approval—has seen a pronounced downward trajectory. Of course, the author’s lack of talent hardly diminishes the tale that is told.
The reason A Warning lacks shock-value to some degree is that we have heard much or all of this before, from multiple sources, some more respected than others. While it might be easy to dismiss such schlocky work as Michael Wolff’s Fire and Fury: Inside the Trump White House, the much-celebrated exposé of the administration that was frequently as long on bombshells as it was short on substantiation, it is far more difficult to ignore the chilling accounts from award-winning journalist Bob Woodward, whose 2018 book Fear: Trump in the White House identifies then-Secretary of Defense James Mattis as the source of the “fifth or sixth grader” quote. Woodward also reports then-Chief of Staff John Kelly describing the President as “unhinged”—exclaiming: “He’s an idiot. It’s pointless to try to convince him of anything. He’s gone off the rails. We’re in Crazytown.” Far more worrisome than such anecdotes is Woodward’s revelation that then-Chief Economic Adviser Gary Cohn—alarmed that Trump was about to sign a document ending a key trade agreement with South Korea that also dove-tailed with a security arrangement that would alert us to North Korean nuclear adventurism—simply stole the document off the President’s desk! And the President never missed it …
Much of this material has been substantiated by insiders, and there is certainly plenty of evidence to suggest Trump is utterly incapable of serving as Chief Executive. But would anything convince his loyal acolytes of this? Apparently not, which is why A Warning both preached to the chorus and otherwise fell on deaf ears. In February 2020, fifty-two Republican Senators voted to acquit Trump in his impeachment trial—and you can bet that most or all of these “august” legislators know exactly what Donald Trump is really like behind closed doors.
As this review goes to press, we are in the midst of a global pandemic that has hit the United States far harder than it should have, largely due to the ongoing incompetence of the President, who is, unsurprisingly, the very worst person to be in charge during what is surely the greatest threat to the nation since Pearl Harbor, perhaps since Fort Sumter. We need a Lincoln or an FDR or a JFK at the helm, and what we have is Basil Fawlty … although even that is unfair: Basil would have recognized that he was in over his head and sought help from Polly, who would have enlisted Manuel’s assistance, and we would at least have a chance. Trump, being Trump, believes he has all the answers; and thousands more succumb to the virus as the days go by …
So, who is the author of A Warning? Who exactly is “Anonymous?” There has been some speculation, but if I had to assign authorship, I would put my money on Kellyanne Conway. One clue that narrows it down a bit is that the tone in the narrative hints at a female voice rather than a male one, although I could be mishearing that. More persuasive is the style, which sounds an awful lot like Kellyanne in conversation, albeit spouting utterances diametrically opposed to those outrageous defenses of the President she concocts for the media. Perhaps most compelling is the fact that Kellyanne has uncharacteristically outlasted most members of the administration, especially striking in light of the fact that her husband, attorney George Conway, is a loud and prominent critic of the President who has long called for his removal from office. That Kellyanne has managed to somehow keep her job despite this suggests that she has something on Trump that guarantees her tenure, and makes me think she more than anyone inside that circus tent wants us to hear this warning of why the ringmaster must be denied four more years …
“From the end of World War II until 1980, virtually no American soldiers were killed in action while serving in the greater Middle East. Since 1980, virtually no American soldiers have been killed anywhere else. What caused that shift?”
That stark question appears as a blurb on the back cover of my edition of America’s War for the Greater Middle East: A Military History, Andrew J. Bacevich’s ambitious, brilliantly conceived if flawed chronicle which seeks to both answer that question and place it in its appropriate context. It is, of course, quite the tall order: how is it that a geography ever on the periphery of an American foreign policy that for decades could best be described as benign neglect came to not only dominate our national attention but be identified as central to our strategic interests? And how is that as this review goes to press—nearly four years after the publication of Bacevich’s book—America’s longest war in its history endures beyond its eighteenth year … in Afghanistan of all places?!
The short answer, I would posit, is oil. Bacevich is older than I am, and I wasn’t yet driving in 1969 when, as he notes, he dropped three bucks to fill up the tank of his new Mustang at 29.9 cents a gallon. But I was on the road just a few years later, and I recall sitting in long lines at the pump for fuel priced nearly ten times that, as well as the random guy who threatened to shoot a certain long-haired teenager for trying to cut line, and that same teen later learning how to siphon gas from parked cars. It was a time.
That tumultuous time stemmed, of course, from the 1973 oil embargo imposed on the United States by the Arab members of OPEC (Organization of the Petroleum Exporting Countries) in retaliation for its support of Israel during the Yom Kippur War. Because he has styled his book “A Military History,” the author does not dwell on the gasoline shortage that so shook American self-confidence in the early 1970s, nor on the related and still unresolved Israeli-Palestinian conflict that remains as central to the theme of Middle East unrest as slavery was to the American Civil War. Instead, after a brief “Prologue,” Bacevich rapidly shifts focus to the Iran hostage crisis and the 1980 debacle that was Operation Eagle Claw, the aborted mission to rescue those hostages that resulted in those first American casualties referenced in that jacket blurb. The author’s decision not to accord oil and Israel their fundamental significance in far greater detail proves to be a weakness that tends to undermine an otherwise well-researched and well-written narrative history.
That author certainly has both the credentials and the skills worthy of the task before him. Andrew Bacevich is a career army officer, veteran of the Vietnam and Persian Gulf wars, who retired with the rank of colonel. He is also a noted historian and award-winning author, someone who has described himself as a “Catholic conservative,” but defies traditional labels of parties and politics. He is a pronounced critic of American military interventionism, George W. Bush’s advocacy for so-called “preventive wars,” and especially of the U.S. invasion of Iraq. In a kind of tragic irony, his own son, an army officer, was killed in combat in Iraq. I have read two of his previous books: Breach of Trust: How Americans Failed Their Soldiers and Their Country, and The Limits of Power: The End of American Exceptionalism, both magnificent treatises that reflect Bacevich’s ideological opposition to spending American lives needlessly in endless wars. But treatises don’t always translate well into narrative history—in fact these are and should be entirely separate channels—and Bacevich’s tendency to blur those boundaries here comes to weaken America’s War for the Greater Middle East.
The author points to repeated epic fails in Middle East policy that take us down all the wrong roads, while experts in and out of government shake their heads in bewilderment, yet one administration after another nevertheless presses on stubbornly. Bacevich is at his best when he underscores a series of unintended consequences on a road paved with occasional good intentions that not only exacerbate bad decision-making but cement unnecessary obligations to fickle, illusory allies that then put up almost insurmountable roadblocks to disentanglement. Two salient and substantial examples are: the poorly-conceived U.S. support for rebels opposed to the Russian-friendly regime in Afghanistan that was to spark Soviet intervention in 1979; and, subsequent U.S. backing for the Islamic fundamentalist Mujahideen that was to later spawn Al-Qaeda.
There is much more to come—more perhaps intended and incompetent rather than unintended—and much of that is either utterly unknown or long forgotten for most Americans, including the 1983 suicide bombing of the Marine compound in Beirut that killed 241 but somehow failed to tarnish the “Teflon” presidency of Ronald Reagan, who retreated while euphemistically “redeploying.” From the vantage point of Washington, the greater enemy remained the Ayatollah, and all efforts were made to enable the brutal despot Saddam Hussein in his opportunistic war upon Iran, a decision that was to fuel Middle East instability for decades and lead to two future US conflicts with our former ally. And Reagan was still President and still all-Teflon in 1988 when the US, through either negligence or spite, shot down Iran Air Flight 655, a commercial airliner with 290 souls aboard, over the Persian Gulf. George H. W. Bush led a coalition to liberate Kuwait from our erstwhile ally Iraq, but then left a wounded, isolated and still dangerous Saddam to plague our future. But, of course, it was under George W. Bush that the tragedy that was 9-11 was hijacked and turned into a fanciful “War on Terror” that ultimately was to embolden Islamic fundamentalism, served as a pretext for an illegal invasion of Iraq that strengthened Iran and utterly destabilized the region, and later bred ISIL to terrorize multiple corridors of the Middle East. You can indeed draw almost a straight line from the Afghan Mujahideen of 1979 to ISIL suicide bombers today.
Bacevich is masterful with a pen, and his history is so well-written that there are literally no dry spots. The problem I found was with the tone, which while legitimately critical of American missteps is often needlessly arrogant, eye-rolling, even snarky—all of which detracts from the primary message, which is indeed spot-on. My politics often align closely with those of MSNBC host Rachel Maddow, but I simply cannot watch her show: I find her breathless exhalations and intimations of “How-could-anyone-be-so-stupid?” and “We-told-you-so” coupled with lip-curling grimaces intolerable. Bacevich is not that bad here by any means, but there is certainly a whiff of it that puts me off. Moreover, while he makes a cogent case for why just about every policy we put in place was wrong-headed, I would have much welcomed the author’s alternative recipes. Bacevich is a brilliant man: I truly wanted to know what he would have done differently if he was sitting behind the Resolute Desk instead of Carter or Reagan or Bush or any of the others.
Bacevich does deserve much credit for his far more panoramic view of what he rightly calls the “Greater Middle East,” as he widens the lens to focus upon the often neglected yet certainly related periphery of the Balkans and the Muslim population in the former Yugoslavia subjected to ethnic cleansing. Few mention Eastern Europe in the same breath as the Middle East, but for some five hundred years much of that geography was integral to the same Ottoman Empire that ruled over present-day Syria and Iraq. There is a common history that cannot be ignored. But just as I was disappointed elsewhere that Bacevich failed to highlight the background noise of the Israeli-Palestinian conflict that truly informs every conversation about Middle East affairs, in this case little was made of the bond between post-Soviet Russia and Slavs of “Greater Serbia,” which not only deeply influenced the Balkan Civil Wars but soured emerging US-Russian relations in its aftermath and resounded across the Islamic landscape. Likewise, the narrative swerves to take a peek at “Black Hawk Down” in Mogadishu, but the long history of ties between East Africa and Arabia remains unexplored.
America’s War for the Greater Middle East is divided into three parts: the first takes the reader to the conclusion of the Persian Gulf War (which Bacevich brands as the “Second Gulf War”), and the second wraps up on the eve of 9-11. But it is the last part, dominated by the Iraq War, that strikes a markedly different, more somber tone, perhaps coincident with Bacevich’s own deeply personal loss, perhaps not. Alas, none of the sections are large enough to bear the weight of the material.
Rarely would I lobby for any book to be longer, but in this case the 370 pages in my edition—plus the copious notes and excellent maps—are simply not enough. The topic not only deserves but demands more. This book should either be three times longer or, better still, should be a three-volume series. A more comprehensive historical background—including the echo of the greater Ottoman heritage and the Russo-British grapple for Central Asia—of this entire milieu is requisite for getting a grasp upon how we got here. The Israeli-Palestinian conflict demands more focus. As does the Shia-Sunni division. And the relationships between Arab and non-Arab states, as well as the ties that transcend the regional to extend to Africa and Europe and beyond. There is no hope of a better grasp of all that has gone wrong with American entanglement in the Middle East without all of that and much more.
Given all these reservations, the reader of this review might be surprised that I nevertheless recommend this book. Warts and all, there is no other work out there that connects the dots of America’s involvement in the Middle East as well as it does, even as it cries for more depth, for more complexity. I would likely be less critical of this book if my admiration for Bacevich was less pronounced and my expectations for his work was not so high. Even if America’s War for the Greater Middle East falls short, it deserves to be on your reading list.
Can you imagine a President of the United States who blatantly ignores its conventions, ridicules its established order and appeals beyond these directly to the electorate, pledging to elevate the interests of the average citizen over those of the elite, whom he brands as corrupt, while scorning the courts, financial institutions, and any who stand in his way, polarizing the nation while he yet shamelessly exploits a partisan press and rewards his supporters with government jobs and favors? No, it’s not who you think, but it does at least partially explain why the current occupant of the White House often appears with a portrait of Andrew Jackson as a backdrop, a painting that he directed be displayed prominently in the Oval Office.
Jackson once loomed large in our collective cultural memory, but I suspect that memory is now a bit fuzzy for most Americans, who when pressed might at best tentatively identify him as the grim-looking fellow on the face of the twenty-dollar bill. Of course, Jackson has hardly been forgotten by historians, who have long recognized his centrality as the most consequential president of the antebellum era, although their assessments of him have seen a marked rise and fall over time. Once lionized as a giant in the emergence of a more democratic polity and a more egalitarian nation, a critical reexamination in the more recent historiography has revealed substantial “warts,” not only underscored by his leading role in “The Indian Removal Act” of 1830 that led to the deaths of thousands of Cherokees in the so-called “Trail of Tears,” but also in the ill-effects of the long echo of his “spoils system,” the dangerous naivety of his economic strategies including the “Bank War” that led to the Panic of 1837, as well as other forceful if misguided policies that some have argued set irrevocable forces in motion that later resulted in Civil War.
Andrew Jackson has been the subject of hundreds of biographies and related works. A prominent chapter has frequently been devoted to the Bank War, long framed as a flamboyant clash of wills between Jackson, who loathed banks, and the shrewd if hapless Nicholas Biddle, president of the Second Bank of the United States. A famous game of cat and mouse prevailed, as the standard tale has been told, with Jackson ultimately victorious, the bank abolished and Biddle sent packing in surprising and ignominious defeat.
It is such a familiar story, and one that has received so much attention in the literature, that it might seem unlikely that anything new could be said of it. There is, then, something of real genius in the astute reexamination showcased in the recently published monograph, The Bank War and the Partisan Press: Newspapers, Financial Institutions, and the Post Office in Jacksonian America, by Stephen W. Campbell. In this brilliant if not always easily accessible book, Campbell—a historian and lecturer at Cal Poly Pomona—challenges the orthodox narrative that puts Jackson and Biddle front and center, widening the lens to encompass the nuance and complexity that informs a long overlooked and far more intricate, multilayered confluence of people and events on both sides. The Bank War was indeed a great drama, but it turns out that there were many more essential players than Jackson and Biddle, and much more at stake than simply re-chartering the bank. As the subtitle suggests, Campbell notes that integral to the Bank War were common threads that ran between post offices, branch banks, and newspapers in what was indeed such a tangled weave that much went unnoticed or disregarded by historians prone to focus on the larger tapestry.
Today we might bemoan certain cable news propaganda vehicles that eschew reporting in favor of distorting, yet even at its worst this phenomenon bears little resemblance to the partisan press of Jackson’s day, when there was little expectation of any kind of objectivity. In fact, valuable contracts for printing government documents were doled out to the politically simpatico, who were expected to promote the official line. Meanwhile, the Second Bank of the United States, through its branches, had powerful financial incentives at hand to entice its allies in the press to champion its point of view.
Then there was the post office, which to us perhaps smacks of the anachronistic and irrelevant. Yet, its importance to early nineteenth century Americans cannot be overstated, since it effectively served as the sole vehicle for personal, business, and official communication. But it was not only first-class mail that passed through post offices, but also newspapers, so branches could—and did—act as a kind of local valve for what sort of media could be passed across the counter. It was after all Jackson’s Postmaster General, former newspaper editor Amos Kendall, who famously permitted southern postmasters to refuse to distribute abolitionist tracts, another spark that was to fan antebellum sectional flames. Odd as it may seem now, Postmaster General was the single most valuable cabinet office in that era because of the vast patronage it controlled. Through its direct and indirect influence over the press, the White House clearly stacked the deck against poor Biddle, who despite vast resources could not hope to compete in the arena of what today we might term “messaging.”
While little of this material is in itself new or groundbreaking, Campbell deserves much credit for being the first to astutely connect all the dots of these seemingly unrelated elements to the Bank War. But he goes further, articulately probing the economic realities of American life in the 1830s and deftly fitting the financial institutions of the day into the larger picture. The way banks and the economy functioned then would be almost unrecognizable to modern students of finance. Campbell peels back the fascinating if arcane layers of antebellum banking that other historians of the period have long neglected.
For the world of academia, The Bank War and the Partisan Press is a magnificent achievement, but alas much of it may remain unknown to the wider public because it is not always easily accessible to the general reader. This is not Campbell’s fault: he is after all quite skillful with a pen. But this was originally a thesis expanded into a book, so the strictures of academic writing sometimes weigh heavily on the account. Also problematic, perhaps, is that the text is somewhat rigidly compartmentalized, so that each sub-topic is exhaustively explored by chapter, rather than more seamlessly woven into the narrative. These are mere quibbles to a scholarly audience and hardly detract from the finished product, but I would like to see Campbell revisit this theme one day in another title designed to reach more readers of popular history. In the meantime, if you are a student of Jacksonian America, this is an essential read that receives my highest recommendation.
Did you know that the single greatest president in America’s first half-century was James Monroe? Even more than that, did you know that the most significant Founder of the fledgling Republic was James Monroe? That Monroe’s long-overlooked accomplishments and contributions dwarfed those of Washington, Jefferson and Madison and all the rest? That Monroe was a towering figure in both establishing and leading the new nation? I didn’t either, but that is the boast of The Last Founding Father: James Monroe and a Nation’s Call to Greatness, by Harlow Giles Unger.
Should you suspect that I am unfairly exaggerating the author’s bold claim, look no further than page two of the “Prologue” to learn that while Washington may have won American independence, his legacy was little more than a “fragile little nation” and his “… three successors—John Adams, Thomas Jefferson, and James Madison—were mere caretaker presidents who left the nation bankrupt, its people deeply divided, its borders under attack, its capital city in ashes.” It was, apparently, left to the heroic, brilliant, and larger-than-life character of James Monroe to step in and make America great, as summarized by Unger:
Monroe’s presidency made poor men rich, turned political allies into friends, and united a divided people … Political parties dissolved and disappeared. Americans of all political persuasions rallied around him under a single “Star Spangled Banner.” He created an era never seen before or since in American history … that propelled the nation and its people to greatness.
That’s from page three. I might have closed the cover after that burst of hyperbole, which better channels the ending of a Disney movie than a historian’s measured analysis. But then I checked the dust jacket bio to find that Unger is “A former Distinguished Visiting Fellow in American History at George Washington’s Mount Vernon … a veteran journalist, broadcaster, educator and historian … the author of sixteen books, including four other biographies of America’s Founding Fathers.” Perhaps I was misjudging him? So, I read on …
Spoiler alert: it does not get any better.
Presidential biography is a favorite of mine, and I have read more than a couple of dozen. For the uninitiated, the genre tends to diverge along three paths: the laudatory, the condemnatory and the analytical. While closer to the first category, The Last Founding Father really fits into none of these classifications. In fact, one might argue that it is less biography than hagiography, for the author is so consumed with awe by his subject that the latter is simply incapable of transgression in any arena. When I was a child, I could do no wrong in my grandmother’s eyes. If I did go astray, she would redefine right and wrong to suit the circumstances, so I always landed on the positive side of the equation. Unger offers similar dispensation for Monroe throughout this work.
Unger’s inflated reverence for Monroe should not diminish his subject’s importance to the early Republic, only compel us to examine the man and his legacy with a more critical eye. The list of “Founding Fathers”—a term only coined by Woodrow Wilson in 1916—is somewhat arbitrary, and Monroe does not even always make the cut. The essential seven that historians generally agree upon are: John Adams, Benjamin Franklin, Alexander Hamilton, John Jay, Thomas Jefferson, James Madison, and George Washington. Other lists are broader, and many also include Monroe, who was after all not only the fifth President of the United States, but also U.S. Senator, Ambassador to France and England, Secretary of State, and Secretary of War—at one point even holding the latter two cabinet positions simultaneously. Monroe’s tenure in the White House has famously been dubbed the “Era of Good Feelings,” but only school kids—and Unger, apparently—believe that this is because suddenly faction disappeared, and both rival politics and personalities gave way to a mythical fellowship. In fact, historians have long recognized that this period was characterized by the one-party rule of the Democratic-Republican Party that dominated after the disintegration of the Federalist Party, which had flirted with treason and been discredited in its opposition to the War of 1812. But Monroe’s Democratic-Republicans represented far more of a coalition of loose factions than the powerful central force that the party had been under the stewardship of Jefferson and Madison before him. The fissures unacknowledged by Unger were present all along, later made manifest in the Second Party System of Clay and Jackson.
Most studies of Monroe reveal a man of great personal courage with stalwart dedication to principle and service to his country. Few—Unger is the exception—credit him with the kind of intellectual brilliance seen in peers like Jefferson, Madison and Hamilton. Like Hamilton—who indeed once challenged him to a duel—Monroe seems to have possessed an outsize ego and a prickly sense of honor that was easily slighted when not accorded the praise and recognition he felt certain he rightly deserved, such as sole credit for the Louisiana Purchase! Nearly a decade earlier than that milestone, Monroe had served as ambassador to France but was later recalled by Washington, who found him too easily flattered and otherwise lacking in the traits essential to upholding American diplomatic interests. Monroe was stung by this, but in his long future in government service he was in turn to have fallings-out with both Jefferson and his old friend Madison, unable to tolerate differences of opinion and bristling at perceived snubs whenever he was not elevated to the prominence he felt due him. Like Jefferson’s, Madison’s presidency proved to be a disappointing chapter in a life marked by great achievements. But while the War of 1812 was hardly Madison’s finest hour, and Monroe indeed played a pivotal role during the existential crisis of the burning of Washington and its aftermath, Madison was hardly the bewildered, sniveling coward Unger portrays in his account, so incapacitated by events that Monroe had to heroically swoop in to serve as acting president and single-handedly rescue the Republic.
The many flaws in this biography are unfortunate, because Unger writes very well and citations are abundant, lending the book the style and form of a solid history. On a closer look, however, the reader will find that the excerpts from primary sources that populate the narrative are often focused on superficial topics, such as food served at events, room furnishings, or styles of dress. And Unger seems to sport a weirdly singular crush on Monroe’s wife, Elizabeth, whom he describes as “beautiful” more than a dozen times in the text—and that before I gave up counting! Attractive or not, as First Lady she seems to have come off as cold and imperious, with aristocratic airs that she no doubt accumulated during her times abroad with her husband, when they often lived in a grand style that was well beyond their means. Oddly, far more paragraphs are devoted to descriptions of Elizabeth’s clothing and social activities, and her many debilitating illnesses, real or imagined, than to Monroe’s eight years in the White House.
A greater complaint is that for a book published as recently as 2009, conspicuous in their absence are the less privileged people who walked the earth in Monroe’s time, Native Americans and most especially the enslaved African Americans kept as chattel property by elite Virginia planters like Monroe—as well as Jefferson, Madison and Washington—something that manifestly flies in the face of recent historiographical trends. Although Monroe owned hundreds of human beings over the course of his lifetime, the reader would hardly know it from turning the pages of The Last Founding Father, where the enslaved are mentioned in passing if mentioned at all, such as: “Although Monroe had to sell some slaves to rescue [his brother] Joseph from bankruptcy, he held to the belief that brotherly ties were indissoluble …” [p207] Long before the more famous Nat Turner Revolt, there was Gabriel’s Rebellion, and Monroe was Governor of Virginia when it was repressed and twenty-five blacks were hanged in retribution. The slightly more than two pages given to this episode lack critical analysis but credit Monroe with promptly calling out the militia to put down the uprising [p140-142]. Such a cursory treatment of the inherent contradictions between the institution of chattel slavery and the ideals of the new Republic is an inexcusable blemish on any work of a twenty-first century historian. Since there is much in the literature about the incongruity of Monroe the plantation master—much like Jefferson—at times decrying while yet sustaining the peculiar institution, we can only conclude that Unger deliberately passed over this material lest it cast some aspersion upon the adoring portrait that this volume advances.
It pains me to write a bad review of any book. After all, the author typically labors mightily to generate the product, while I can read it—or not—in my leisure. But I am passionate about both historical studies and the rigors of scholarship, which should apply even more scrupulously to someone such as Harlow Giles Unger, who not only possesses appropriate credentials but has written widely in the field, and thus owes the student of history far more than this, which after all does no real service to the reader—nor to James Monroe by overstating his achievements while failing to contextualize his role as a key figure in the early Republic with the nuance and complexity that his legacy deserves.
Astronaut William Anders began: “For all the people on Earth the crew of Apollo 8 has a message we would like to send you:
In the beginning God created the heaven and the earth.
And the earth was without form, and void; and darkness was upon the face of the deep.
And the Spirit of God moved upon the face of the waters. And God said, Let there be light: and there was light.
And God saw the light, that it was good: and God divided the light from the darkness.”
On Christmas Eve fifty-one years ago, millions in the United States and around the globe—including this then-eleven-year-old boy—gathered breathlessly around their TVs to watch the first live broadcast from space, an extraordinary transmission beamed back to earth from an American spacecraft in orbit around the moon, more than two hundred thousand miles away. The largest television audience to that date was treated to remarkable photographs of the forbidding moonscape, but far more awe-inspiring and humbling were the images they viewed of their very own living planet, appearing so tiny and so remote from such a great distance. The three astronauts closed out the broadcast by reading passages from the biblical book of Genesis. Lunar Module Pilot Bill Anders was followed by Command Module Pilot Jim Lovell, and then Commander Frank Borman, who added: “And from the crew of Apollo 8, we close with good night, good luck, a Merry Christmas, and God bless all of you – all of you on the good Earth.”
While this episode remains a heartwarming moment that celebrates both the universality of the human endeavor as well as the singularity of this accomplishment, it should not obscure the reality of what was really happening on that blue planet viewed from afar, of the wars and famines and cruelty and disasters that did not take a pause while space travelers read aloud from an ancient book that itself once bore witness to its own share of wars and famines and cruelty and disasters. Nor should it fail to remind us that these representatives of the earth blasted off from a badly fractured landscape at home.
The claim that America on this Christmas Eve of 2019 has never been this divided is at once refuted by a glance back to 1968, replete with acts of terror, campus unrest, cities in flames, mass demonstrations, political assassinations, and violence in the streets—the perfect storm of the increasingly unpopular war in Vietnam and the revolution of rising expectations among long-disenfranchised blacks frustrated by the pace of change. If there was a kind of unifying force that remained to serve as some sort of glue amid the chaos and dissonance of a splintered national polity it had to be the space program and its race for the moon. The actual moon landing was not until the following year, but 1968 closed with the remarkable Apollo 8 mission, the first manned spacecraft to orbit the moon, made more dramatic by that live Christmas Eve audio-video transmission from space that included those readings from Genesis, and later forever enshrined in our collective consciousness by the iconic photo “Earthrise” that depicts the earth rising over the moon’s horizon, snapped by astronaut Bill Anders, that is said to have inspired the environmental movement.
Martin W. Sandler revisits this existential moment that briefly comforted a troubled nation with the oversize and lavishly illustrated Apollo 8: The Mission That Changed Everything, directed at a young adult (YA) audience but suitable for all. I have read and reviewed Sandler before. The author has a talent for clear, concise writing that while targeting a younger readership does not dumb down the topic, an otherwise frequent tarnish on this genre of nonfiction. I obtained this book as part of an Early Reviewer program and my copy was an Advance Reader’s Copy (ARC) with black and white images, but the published edition is full-color and worth the purchase if only for the magnificent color photographs, though these are nicely enhanced by a well-written narrative that encompasses the totality of this highly significant space mission and its ramifications back home. The only caution I would add is that I have detected glaring historical errors in some of Sandler’s other works. I did not stumble upon any here, but then I am hardly an expert on the space program. Thus, the reader should trust but verify!
Some—at the time and since—have objected to the astronauts’ choice of verses from Genesis, as if there were an attempt to impose religion from the beyond, or to celebrate the Judeo-Christian experience at the expense of others. We should not be so hard on them; they were simply seeking some kind of universal message to inspire us all. That they may have failed to please everyone may only underscore how diverse we are even as we transcend the myth of race to acknowledge that we all share the very same DNA, the same hopes and dreams and fears and needs and especially the desire to love and be loved. Astronaut Bill Anders himself returned from space as an atheist, awed by his place in the vast universe. I am not a religious person: I celebrate Christmas as a time for peace and love and Santa Claus. But I can still, like the astronauts on Apollo 8 fifty-one years ago, wish my readers a good night, good luck, and a Merry Christmas to all of you on the good Earth.
Some years ago, I had the pleasure of reading Sebastian Faulks’s masterpiece Birdsong, which motivated me to pick up a couple of his other novels for later consumption, including Engleby. One day, I randomly plucked it off the shelf and turned to the first page. Honestly, it was not easy to put down. Also, to be even more honest, there were times that I really wanted to.
As a reviewer, it sounds somewhat awkward or even unseemly to resort to a term like “creepy” to describe a novel, but that would most accurately describe the subtle if sustained punch in the gut I experienced while reading this one, propelled by a growing revulsion for the central character. As the narrative unfolds, that character—the eponymous Mike Engleby—is a working-class Brit on scholarship to “an ancient” university in the early 1970s. He comes across as a bit of an oddball, but for those of us who lived through this era that was hardly unusual nor especially undesirable, given that to be an iconoclast in those days was often seen as a virtue. But the reader cannot help but experience an emerging disquiet as Engleby develops an infatuation that veers to obsession that then turns more ominously to the outright stalking of his bright and beautiful classmate Jennifer Arkland. Along the way, there are flashbacks to the bitter poverty of Engleby’s youth, the regular beatings by his father, the quotidian brutality of his life at public school where he is condemned to the unfortunate nickname “Toilet” and subjected to an ongoing torment that stretches the limits of endurance to cruelty—the cumulative effect of which, it becomes clear, shapes him into a bully, a thief, a drug dealer, an opportunist. Flash forward again and Jennifer has disappeared, never found, presumed murdered.
Did Engleby murder her? Could he be a serial killer? Is he mere weirdo or sociopath? That’s for you to find out: I don’t believe in folding spoilers into reviews. But the narrative is laced with plenty of clues, scattered within an interior monologue that invites an uncertain sympathy for a protagonist who at best provokes the uneasy, at worst the repellent. Yet, it is the genius of the author to tempt the reader to veer from repugnance to empathy, against all odds, even if this shift may prove temporary. And the reader, like it or not, is ensnared in an uncomfortable fascination with this very same well-crafted interior monologue, a kind of labyrinth pregnant with Engleby’s barely suppressed anxiety, which he overcompensates for with delusions of grandeur and a disdainful arrogance for all others in his orbit—except perhaps, that is, for Jennifer Arkland. And then that anxiety grows contagious as the reader begins to question the reliability of the narrator! Are the things revealed by Engleby’s inner thoughts real or imagined? Is Faulks himself, acting as both wizard and jester, simply mocking us from behind the curtain?
The last time I found myself as deeply unsettled by a work of fiction, it was Perfume, by Patrick Süskind, the unlikely tale of an eighteenth-century serial killer, but that novel was tempered with a pronounced sense of the ironic if not especially comedic. Not so with this one: there’s nothing even a little bit funny about Engleby. For his part, Faulks proves himself a true artist of the written word, his pen taking full command of his character and his audience alike. I recommend it, even if it may keep you up at night.
The most consequential figure of what historians dub Europe’s “long nineteenth century” (1789-1914)—from the start of the French Revolution to the outbreak of World War I—came to virtually define the first part of that era while setting forces into motion that shaped all that was to follow. Over the course of a single decade, Napoleon Bonaparte controlled not only much of the territory on the continent, but the entirety of its destiny. When he fell from power, the peace that was crafted in his wake largely held for a full century. The Europe that was obliterated by the catastrophe of the Great War that followed was the Europe both made and unmade by Napoleon. And even well beyond that, in the nearly two centuries since he walked the earth, no other individual—not Bismarck, not Stalin, not Churchill, not even Hitler—has emerged in the West, for ill or for good, to rival his significance or challenge his legacy. Yet for most, these days Napoleon is, if not exactly a forgotten character, a much overlooked one, a rarely referenced ghost of a distant past whose specter though perhaps unnoticed nevertheless still haunts the twenty-first century capitals of Paris, London, Rome, Berlin, Warsaw and Moscow.
An outstanding remedy to our collective negligence is Napoleon: A Life, by Adam Zamoyski, a noted historian and author with a long resume who masterfully resurrects the outsize character that was the living man and places him in the context of his times. At nearly seven hundred pages, at first glance this hefty tome might seem intimidating, but Zamoyski writes so well that there are few sluggish spots in a fast-moving, highly accessible narrative that will likely take its place in the historiography as the definitive single-volume biography. And this is surely the treatment his subject deserves.
There could perhaps not have been a more unlikely individual to command the world stage and change the course of history than Napoleon Bonaparte, born to a family of minor Italian nobility of quite modest means on Corsica in 1769, somewhat ironically in the same year that the Republic of Genoa ceded the island to France. It may be a minor point, but it certainly adds to that irony that the future Emperor of France apparently always spoke French with an atrocious accent, which—knowing the conceit of those native to the language—could only have rankled those in his orbit, both friend and foe. Yet, this is just one of the many, many contradictions that cling to Napoleon’s person. As a child, he was sent to a religious school in France, and later attended a military academy, which led to his commission as a second lieutenant in the artillery.
It was the outbreak of the French Revolution a few short years later that catapulted him onto the world stage in a bizarre trajectory that saw him first as a fervent Corsican nationalist seeking the island’s independence from France, then a pro-republican pamphleteer allied with Robespierre, and then artillery commander at the Siege of Toulon, where he first demonstrated his military genius. He was wounded but survived to be promoted to brigadier general at the age of twenty-four and later placed in command in Italy, where he led the army to victory in virtually every battle, while taking time out to crush a Royalist rebellion in Paris. He also survived his association with Robespierre. Proving himself as gifted in the partisan arena as he was on the battlefield, he adroitly navigated the dangerous and ever-shifting political ground of revolutionary France to engineer a coup and make himself dictator, euphemistically styled First Consul of what was now a republic in name only. He was just thirty years old. Within five years, he was Emperor of France in a retooled monarchy that both resembled and served as counterpoint to the ancien régime that revolution had swept away.
The rare general with talents equally exceptional in the tactical and the strategic, Napoleon managed both on and off the battlefield to defeat a succession of great power coalitions aligned against him until he commanded much of Europe directly or through his proxies, while crippling British trade through his “continental system” that controlled key ports. Like Alexander two millennia before him, Napoleon was brilliant, courageous, opportunistic and lucky—all the ingredients necessary for unparalleled triumph on such a grand scale. Unlike Alexander, he outlived his conquests to try to remake his realm, in his case by spreading liberal reforms, stamping out feudalism, promoting meritocracy and codifying laws. But he also lived to fall from power and to fall hard. At the risk of stretching the metaphor, the ancient Greeks invented the term hubris to describe the tragedy in the excessive pride personified by men just such as Napoleon. Whereas Alexander looked to Achilles and the Olympic pantheon, Napoleon looked only to his own “star,” which he fully relied upon to guarantee his success in every endeavor. And one day that star dimmed. He famously overreached with the ill-conceived invasion of Russia that turned to debacle, but it was more than that. For all his genius, he ruled the French Empire like a medieval lord—or a crime boss—placing on the thrones of puppet states that served him members of his extended family or his cronies, most of whom lacked competence or even loyalty. His dramatic rise was met with an equally dramatic fall, and he ended his days in exile on a remote island in the South Atlantic, slowly succumbing to what was likely stomach cancer at the age of fifty-one.
Of course, you could learn all of this from the prevailing literature—there are literally thousands of books that chronicle Napoleon—but Zamoyski’s rare achievement is to capture the essential nature of his subject, something that too often eludes biographers. The Napoleon he conjures for us is a basket of contradictions: at once kind, despotic, magnanimous, ruthless, noble, petty, confident, insecure, charismatic, and socially awkward. Zamoyski does not stoop to play psychoanalyst, but the Napoleon that emerges from the narrative often smacks of a narcissist and depressive who frequently rode waves of highs and lows. If nothing else, he was certainly a very peculiar man who was repellent to some just as others were somehow drawn to him irresistibly, a paradox perhaps captured best in this passage recounting the recollections of those who knew him as a young man:
He was out of his depth, not so much socially as in terms of simple human communication: he showed a curious lack of empathy which meant that he did not know what to say to people, and therefore either said nothing or something inappropriate. His gracelessness, unkempt appearance, and poor French … did not help … He could sit through a comedy … and remain impassive while the whole house laughed, and then laugh raucously at odd moments … [He once told] a tasteless joke about one of his men having his testicles shot off at Toulon, and laughing uproariously while all around sat horrified. Yet there was something about his manner that some found unaccountably attractive. [p92]
Zamoyski does not pass judgment on Napoleon, but deftly brings color, form and substance to his sketches of him so that the reader is rewarded with a genuine sense of familiarity with the living man, an accomplishment that cannot be overstated. If there is a flaw, it is that the work is skimpy on the historical backdrop, on the prequel to Napoleon; those not already well-schooled in the milieu of late eighteenth century Europe may be at a disadvantage. But this is perhaps a quibble, for to supply that context competently would have further swelled the size of the book and risked an unwieldy text. On the other hand, there is a welcome supply of many fine maps, as well as copious notes.
Napoleon’s ambition left thousands of dead in his wake, and he left his mark far beyond the Europe he transformed. Modern Egyptology was born out of Napoleon’s military campaign in Egypt; the famous “Rosetta Stone” was among the spoils of war, although it ultimately ended up in British rather than French hands. Napoleon was the force behind the Louisiana Purchase, which effectively doubled the size of the nascent United States. It was the impressment of American seamen during the Napoleonic Wars that was a leading casus belli in the War of 1812, and it was British exhaustion at the conclusion of that conflict that spared the young republic a harsher price for peace. Look closely and you will find Napoleon’s fingerprints nearly everywhere—and you will see them in far greater detail if you treat yourself to Zamoyski’s magnificent biography, which surely does justice to his legacy.
[CORRECTION: the podcast version of this review misidentifies the location of Napoleon’s death as on an island in the Pacific rather than in the South Atlantic, which has been corrected in the written text above.]
While browsing a bookstore sometime in 1982, I picked up a thick hardcover entitled The Years of Lyndon Johnson: The Path to Power, by Robert A. Caro. I had never heard of Caro, but the jacket flap told of his winning the 1975 Pulitzer Prize for biography for his very first book, The Power Broker: Robert Moses and the Fall of New York. I had never heard of Moses either, but in those days, before smartphones and Google could help me dig a little deeper, that accolade alone spoke directly to the author’s reputation. I did—and still do—like to browse bookstores and to read books about American presidents. The twenty bucks I shelled out to buy that book was probably most of the cash I had in my wallet that afternoon, something else that was and remains characteristic of me to this day: given a choice between buying lunch or a new book, I will almost always choose the latter. I mean, I can wait until dinner …
That volume of The Path to Power is 768 pages of small print, not including notes and back matter, of mostly dense material, but Caro’s voice is so commanding that I found myself both absorbed and obsessed. For those who have not read him, it is difficult to describe Caro’s style, which exists somewhere at the confluence of incisive reporting and towering epic, a kind of literary salad that blends the best of Edward R. Murrow and Robert Penn Warren—seasoned with a dash or two of Thucydides—that the reader is driven to devour.
There are great presidential biographers out there—think Robert Remini, David McCullough, Joseph Ellis, Jon Meacham—yet Caro is in a league all his own. And unlike the others, he has not been prolific, devoting the decades since the publication of The Path to Power to just three books, all part of his The Years of Lyndon Johnson saga, one of which—Master of the Senate—is a landmark synthesis of history and biography and politics that won him a second Pulitzer Prize in 2003. Another ten years passed before the release of The Passage of Power, which only just follows LBJ into his first months in the White House. Now an octogenarian still doggedly at work on what is to be the final book in the series, Caro has broken precedent by releasing a slim volume that is a study of the author rather than his subjects.
This latest book, Working: Researching, Interviewing, Writing, is less a memoir than a profile of what Caro has set out to do and how he has approached the process, as neatly summarized by the subtitle. Surprisingly, Caro is not a historian, but instead started off as a journalist who won the respect of an old-fashioned hardboiled editor when his diligence in the field turned up info vital to a storyline. The editor, who had barely acknowledged him before, advised: “Turn every page. Never assume anything. Turn every goddamned page.” That has been his mantra ever since.
Caro is fascinated by power and those who wield it, and especially by the ways power can be obtained and exercised outside of ordinary channels. For instance, his first subject— “master builder” Robert Moses—was never elected to any office, yet at one point simultaneously held twelve official titles and used his accumulated authority to preside over the utter and lasting reshaping of New York City and its suburbs. In his research on LBJ, by turning “every page,” Caro encountered an obscure reference that led him to learn that Lyndon Johnson’s political rise and personal wealth were closely linked to a long-secret relationship with the principals of Brown & Root, a construction company that built roads and dams and was later enriched by government contracts sent their way by Johnson; in turn, their largesse was to overflow LBJ’s campaign coffers. The rest is—quite literally—history.
A silent partner in Caro’s award-winning achievements has long been his wife Ina, who has quietly devoted her life to aiding his research and managing the household so that he could concentrate entirely on his book projects. In Working, Caro reveals that Ina once sold their home—without telling him—in order to ensure their financial solvency. Another time, when he announced they were moving to the Texas Hill Country for three years to continue his research on LBJ, Ina cracked: “Why can’t you do a biography of Napoleon?” But she went along, without complaint. And Caro makes it clear that Ina was no mere admin or assistant: she often sat across from him at long library tables and turned over half of those “goddamned pages” herself.
By my own calculation, I have read nearly three thousand pages of Robert Caro in his four volumes on Lyndon Johnson. I eagerly and impatiently await the final book. I did not know what to expect from Working, which is closer to memoir than autobiography but truly defies categorization. Most great writers are incapable of talking about themselves without something like bitterness or bravado. Hemingway certainly couldn’t do it. Steinbeck—think Travels with Charley—was better at it, but he tended to conflate fiction and nonfiction along the way. Caro would have none of that. His work has always had a singular focus that has been about the unvarnished facts, about the warts and all, about the inconvenient truths that swirl about the lives of his subjects, and he delivers no more and certainly nothing less when he turns the lens on himself.
Working would be a party favor if written by anyone but Robert Caro. But because he is a magnificent writer gifted with extraordinary insight, it is a kind of a minor masterpiece packaged in an undersized edition that is an easy read of less than two hundred pages. If there is a fault, it is the odd inclusion of an interview with The Paris Review from 2016 that is not only superfluous but distracting; I would urge skipping it. But that’s a quibble. Even if you have never heard of Robert Caro yet are fascinated with history and how solid research serves as the foundation to analysis, interpretation and an ever-evolving historiography, you should read this. If you have read Caro’s other books, of course, then you must read this one!
In August 1831, in Virginia’s Southampton County, a literate, highly intelligent if eccentric enslaved man—consumed with such an outsize religious fervor that he was nicknamed “The Prophet” by those in his orbit—led what was to become the largest slave uprising in American history. Nat Turner’s Rebellion turned out to be a brief but bloody affair that resulted in the largely indiscriminate slaughter of dozens of whites—men, women, children, even infants—before it was put down. The failed revolt itself was and remains far less important than its repercussions and the dramatic echoes that still resounded decades later during the secession crisis. Rarely would any historian of the American Civil War cite Nat Turner as a direct cause of the conflict—after all, the rebellion took place three decades prior to Fort Sumter—but it is almost always part of the conversation. Turner’s uprising not only reinforced but validated a deep-simmering paranoia of southern whites—who, like ancient Spartans vastly outnumbered by helots, often found themselves a minority amid a larger enslaved population—and spawned a host of reactionary legislation in Virginia and throughout much of the south that outlawed teaching blacks to read and write, and prohibited religious gatherings without a white minister present. And while for those below the Mason-Dixon it underscored the perils of their peculiar institution, at a time when abolitionism was in its infancy it also served to remind at least some of their northern brethren that the morally questionable practice of owning other human beings was part of the fabric of southern life. Indeed, one could argue that the true dawn of what we conceive of as the antebellum era began with Nat Turner.
For such a pivotal event in the nation’s past, the historiography has been somewhat scant. There is the controversial “confession” that Turner dictated to lawyer Thomas Ruffin Gray in the days between his capture, trial and hanging, which some take at face value and others dispute. But in the intervening years, surprisingly few scholars have carefully scrutinized the rebellion and its legacy, which remains far better known to a wider audience from William Styron’s Pulitzer Prize-winning novel The Confessions of Nat Turner than from the analytical authority of credentialed historians.
A welcome remedy can be found in The Land Shall be Deluged in Blood: A New History of the Nat Turner Revolt, a brilliant if uneven treatment of the uprising and its aftermath by Patrick H. Breen, first published in 2016, that likely will serve as the academic gold standard for some time to come. While giving a respectful nod to the existing historiography—which has tended to breed competing narratives that pronounce Turner hero or villain or madman—Breen, an Associate Professor of History at Providence College, instead went all in by conducting an impressive amount of highly original research that locates the revolt within the greater sphere of the changing nature of the institution of slavery in southeastern Virginia in the early 1830s, which as a labor mechanism was in fact in a slow but pronounced decline. Nat Turner and his uprising certainly did not occur in a vacuum, but prior to Breen’s keen analysis, the rebellion was generally interpreted out of its critical context, which thus distorted conclusions that often pronounced it an anomaly nurtured by a passionate if deranged figure. For the modern historian, of course, this is not all that shocking, since the uncomfortable dynamics found in the relationships of the enslaved with wider communities of whites and other blacks (both free and enslaved) have until recent times typically been afforded only superficial attention or been entirely overlooked. It is nevertheless surprising—given the notoriety of the Turner revolt—that until Breen there was such a lack of scholarly focus in this arena.
The book has eight chapters but there are three clear divisions that follow a distinct if sometimes awkward chronology. The first part traces the start and course of the rebellion and presents the full cast of characters of conspirators and victims. The second is devoted to subsequent events, including both the extrajudicial murder by whites of blacks swept up in the initial hysteria spawned by the revolt, as well as the carefully orchestrated trials and executions of many of the participants. The final and shortest section concerns the fate of Nat Turner himself, who evaded capture for two months—long after many of his accomplices had been tried and hanged.
The general reader may find the first part slow-going. The story of the revolt should be an exciting read, especially given the passion of prophecy that consumed Turner and the violence that it begat with its slaughter of innocents by an unlikely band of recruits whose motives were ambiguous. Instead, the prose at times is so dispassionate that the drama slips away. In my opinion, this is less Breen’s fault—he is, after all, a talented writer—than the stultifying structure of academic writing that burdens the field, the unfortunate reason why most best-selling works of history are not written by historians. But I would encourage the discouraged to press on, because the effort is intellectually rewarding; the author has deftly stripped away myth and legend to separate fact from the surmise and invention pregnant in other accounts. If there can be such a thing as a definitive study of the Nat Turner rebellion, Breen has delivered it.
It is clear from the character of the narrative that follows that Breen’s true passion lies in the aftermath of the revolt, where he serves as revisionist to what has long been taken for granted as settled history. This is as it should be, because it was the repercussions of the rebellion and the way it was remembered (north and south) in the thirty years leading up to secession that was always of far greater importance to history than the uprising itself. And it is unfortunately this echo—much of which has been unsubstantiated—which has tainted later scholarship. The central notion that prevailed, which Breen challenges, is that the reaction to Nat Turner was a widespread bloodbath of African Americans by unruly mobs who suspected that all blacks were complicit, or who were simply driven by revenge. The other, also disputed by Breen, is that whatever trust might have once existed between white masters and the enslaved had forever evaporated, the former ever in fear that the latter were secretly plotting a repeat of the Turner episode. Finally, Breen takes issue with the view of many historians that the authorial voice in Turner’s “confession” is unreliable because it was dictated to a white man who was guided by his own agenda when he published it.
Breen refutes the first by lending scrutiny to the empirical evidence in the extant records of the enslaved population. A little general background for the uninitiated here: the enslaved were treated as taxable chattel property in the antebellum era, so meticulous records were kept and a good deal of that survives. Many slave-owners insured their human “property,” often through insurance companies based in the north. If an enslaved person was convicted of a capital crime, the state compensated the slave-owner for the executed offender. Breen, as a good historian, simply reviewed the records to determine if prevailing views of the rebellion’s aftermath were accurate or exaggerated. What he learned was that there was indeed much hyperbole in reports of widespread massacres of African Americans. Yes, certain individuals and militias did commit atrocities by murdering blacks, and sometimes torturing them first. But the numbers were vastly overstated. And local officials quickly put a stop to this, motivated perhaps far less by ethical concerns than by an effort to protect valuable “property”—whose owners would not otherwise be duly compensated—from the extrajudicial depredations of the mob. Breen should be commended for his careful research—which demonstrates that long-accepted reports of mass murder are simply unsupported by the records—yet it seems astonishing that those who came before him failed to follow the same road of due diligence that he traveled. This should underscore to all budding historians out there that there remains lots of solid history work ahead, even and especially in otherwise familiar areas like this one where what turns out to be a flawed analysis has long been taken for granted as the scholarly consensus.
This business of assigning value to chattel human property is uncomfortable stuff for modern students of this era, but as those who have read The Price for Their Pound of Flesh, Daina Ramey Berry’s outstanding treatment of the topic, will know, it is absolutely essential to understanding how slavery operated in the antebellum south. The Land Shall be Deluged in Blood steps beyond the specifics of Nat Turner to offer a wider perspective in this vein, as well. The enslaved were often subject to the arbitrary sanctions of their masters, but those accused of capital crimes were technically granted a kind of due process of law. Breen points out that special courts of “Oyer and Terminer” that lacked juries—the same kind that convicted and hanged those accused of witchcraft in Salem—were ordained in Virginia to judge such cases. Initially enacted to expedite the trial process of the enslaved, the courts—captained by five magistrates who were typically wealthy slave-owners, and which duly supplied defense attorneys to the accused—came to have the opposite effect, convicting only about a third of those brought before them. [p108] Much of the reason for these results seems to be connected to an effort to limit the cost to the state of compensation for those sent to the gallows for their crimes.
It turns out that these same courts also had a tempering effect on the trials of those accused of taking part in the rebellion. But this time, it wasn’t only about the money. Breen argues convincingly that the elite magistrates who controlled the trial process also created and marketed to the wider community a reassuring narrative that the uprising was a small affair involving only a small number of the misguided. In the end, eighteen were executed, more than a dozen were transported and there were even some acquittals. Thus, state liability was limited, and the peculiar institution was protected.
That reassurance seems to have been effective: freedom of movement for the enslaved subsequent to the revolt was not as constrained as some have maintained, as evidenced by the fact that Nat Turner was discovered in hiding and betrayed by other enslaved individuals who were hardly prohibited from wandering alone after dark. By the time Nat Turner was captured and executed, the rebellion was almost already history. As to the veracity of Turner’s “confessions” to Gray, Breen makes a compelling argument in support of Turner’s words as recorded, but that will likely remain both controversial and open to interpretation. So too will the person of Nat Turner. The horror of human chattel slavery might urge us to cheer Nat and his accomplices in their revolt, while the murder of babies in the course of events can’t help but give us pause. Likewise, we might harshly judge those white slave-owners who dared to judge them. But, of course, that is not the strict business of historians, who must sift through the nuance and complexity of people and events to get to the bottom of what really happened, warts and all.
I first learned of The Land Shall be Deluged in Blood when I sat enthralled by Breen’s presentation of his research at the Civil War Institute (CWI) 2019 Summer Conference at Gettysburg College, and I purchased a copy at the college bookstore. While I have some quibbles with the style and arrangement of the book, especially with the strict adherence to chronology that in part weakens the narrative flow, the author has made an invaluable contribution to the historiography with what is surely the authoritative account of the Nat Turner Rebellion. This is and should be required reading for all students of the antebellum era.
NOTE: My review of The Price for Their Pound of Flesh is here:
There’s an abiding irony to the fact that the United Nations, formed in the wake of a catastrophic global war to keep the peace, instead gave sanction to the first and most significant multinational armed conflict since World War II, not even five full years after Japan’s capitulation. It never would have happened had Stalin not ordered Soviet delegates to boycott the relevant Security Council session in protest over the seating of Chiang Kai-shek’s government-in-exile on Taiwan instead of Mao’s de facto People’s Republic of China. It might never have happened if United States President Truman had not been under enormous political pressure due to a hysterical campaign of right-wing outrage known as “Who Lost China,” born out of Mao’s surprise victory in 1949, the same year that the Cold War grew much hotter when the Soviets successfully tested an atomic bomb, and fears of global communist domination magnified. It probably never would have found the support of so many other nations if the memories of appeasement to Hitler were not still so fresh and compelling.
“It”—of course—was the Korean War, which played out on a wide swath of East Asian geography and remains unresolved to this very day. Historically, the Korean peninsula hosted at various times both competing kingdoms and a unitary state but was always dominated by its more powerful neighbors: China, Russia and Japan. In 1910, Japan annexed Korea, and an especially brutal occupation ensued. Following the Japanese defeat, the peninsula was divided at the 38th parallel into two zones administered in the north by the Soviet Union and in the south by the United States. Cold War politics enabled the creation of two separate states in the two zones, mutually hostile to one another. In June 1950, the Soviet-backed communist regime in the north invaded the pro-western capitalist state in the south, which spawned a UN resolution to intervene and launched the Korean War. At first South Korea fared poorly, but an American-led multinational coalition eventually pushed communist forces back across the 38th parallel. The fateful decision was then made by the Truman Administration to pursue the enemy and expand full-scale combat operations into North Korea. This brought China into the war and a long bloody struggle to stalemate ensued. Like a weird Twilight Zone loop, more than sixty-six years later a state of war still exists on the peninsula, and Kim Jong-un—the erratic supreme leader of a now nuclear-armed North Korea who regularly taunts the United States—is the grandson of supreme leader Kim Il-sung, whose invasion of the south sparked the conflict!
The origins, history and consequences of the Korean War make for a fascinating story that—especially given both its scope and its dramatic contemporary echo—has received far less attention in the literature than it deserves. Unfortunately, Michael Pembroke’s recent attempt, Korea: Where the American Century Began, contributes almost nothing worthwhile to the historiography. This is a shame, because Pembroke—a self-styled historian who currently serves as a judge of the Supreme Court of New South Wales, Australia—is a talented writer who seems to have conducted significant research for this work. Alas, he squanders it all on what turns out to be little more than a lengthy philippic that serves as a multilayered condemnation of the United States.
As the subtitle suggests, Pembroke’s bitter polemic is directed not only at US intervention in Korea, but at the subsequent muscular but misguided American foreign policy that has begotten a series of often pointless wars at a terrible cost in blood and treasure not only for the United States but also for the allies and adversaries in her orbit. Many—including this reviewer—might be in rough agreement with a good portion of that assessment. But the author sacrifices all credibility with a narrative that repeatedly acts as apologist for Mao, Kim Il-sung and even Stalin! For Pembroke, Truman takes on the outsize stature of a bloodthirsty monster who is not satisfied with the hundreds of thousands he vaporized at Hiroshima and Nagasaki, but is willing and even eager to sacrifice millions more in order to achieve his nefarious goal of global domination. Stalin and Mao, on the other hand, simply had their reasons, and were often misunderstood. Left unexplained is why, invested with that motivation and given that the United States in that era had overwhelming strategic nuclear and conventional superiority, Truman and his successors chose not to deploy that capability to pave a dramatic sanguinary road to hegemony.
To my mind, America’s war in Korea was a calamitous misstep, further exacerbated by the escalation that ensued with the crossing of the 38th parallel after achieving the initial objective of driving communist forces from the south. And one could make a good argument that none of the seemingly endless conflicts the United States has engaged in since that time was worth the life of a single American serviceman or woman. Yet, it is a hideous distortion to unfavorably juxtapose America—warts and all—with the endemic mass murder of Stalin’s Soviet Union. History, as I have often noted, is a matter of complexity and nuance, a perspective that seems utterly alien to Michael Pembroke in a book that is neither a history nor an analysis but simply an almost breathless diatribe that reduces characters to caricature and events to a bizarre comic book style of exposing villainy—but in this case all the villains happen to be American.
Because I received this book as part of an early reviewer’s program, I felt an obligation to plod through it to the very last page. In other circumstances, I would have abandoned it far, far earlier. As a reviewer, rarely would I suggest that a work has absolutely no value to a reader, but here I will make an exception: the best-case scenario for this book is for it to go out of print.
The best book I ever read about Theodore Roosevelt was actually about a river, with T.R. in a supporting role. By lending focus to just a single episode in the colorful drama of his remarkable life in The River of Doubt, Candice Millard’s insight and gifted prose delivered a superlative study of the existential Roosevelt that has often eluded biographers, while recounting the little-known challenge of his sunset years that nearly broke him.
Millard brings a similar technique to her third and most recent effort, Hero of the Empire: The Boer War, a Daring Escape and the Making of Winston Churchill. With pen dipped in the inkwells of careful scholarship as well as great storytelling, the author adroitly marries history and literature to deliver an unexpectedly original and fascinating tale that reads like something from Robert Louis Stevenson. If there are similarities to her earlier work, there is also a twist, with the storied figures in nearly inverse circumstances. Rather than the late-in-life challenge that nearly does the central character in, this is the chronicle of a young man’s extraordinary adventure that was to launch his long celebrity.
Not that Churchill was ever really anonymous. But first: is it even possible to imagine a young Churchill? Think of the man and what comes to mind is the steely but beefy, even rotund British leader who was already all of sixty-five years old when he became Prime Minister at the onset of World War II, after many decades both in and out of power. (And he was to live yet another two decades after Hitler’s defeat, again both in and out of power!) But the Churchill of Hero of the Empire is a slight fellow in his early twenties with an outsize ego and seemingly boundless ambition who talks too much and annoys most of those in his orbit. Yet, even then, he was hardly unknown, born into the upper echelons of the aristocracy, scion of a famous father who committed a kind of political suicide before his own early death, and the celebrated and sometimes notorious American beauty Jennie Randolph, a brilliant iconoclast legendary for her many lovers. Before the action unfolds in Hero of the Empire, the twenty-four-year-old Winston had already traveled much of the world, had a brief career as an army officer, served as war correspondent, published two books, and made an unsuccessful run for Parliament.
Anticipating what would become known as the Second Boer War and determined to be in the thick of the fray, in 1899 Churchill obtained credentials as a journalist and set off for Cape Town, then on to Ladysmith amid fierce hostilities. Journalist or not, when his train came under Boer attack, he took the lead and mounted a heroic defense that, although it ultimately ended with his capture, is credited with saving countless lives of those aboard, most of whom were in uniform. His time as prisoner of war and his bold escape form the central focus of the narrative.
Telling this story as well as Millard does might well be achievement enough, but this book succeeds far beyond that because the author brings a singular authenticity not only to her portrait of Churchill, but also to the wider canvas of the milieu that was England, the British empire, and the Boer republics at the turn of the century. This is especially impressive because Millard comes to her craft not as a trained historian but with a master’s degree in literature, although there is no lack of citations to underscore the meticulous research that is the foundation of her work.
Millard’s account of Churchill’s escape from prison in Pretoria is no less than thrilling, tracing his footsteps as he wandered alone in unknown territory, stowed away on freight trains, and even concealed himself for a time in the bowels of a mine. Eventually he made it to safety, hundreds of miles away at what was then Portuguese East Africa. The British public followed Churchill’s exploits with great excitement, and at war’s end he returned home to wide acclaim. His next attempt at Parliament met with success; his long career in politics and public service had begun.
What would any Churchill book be without the anecdotes born of his eccentricities? Hero of the Empire has its share, especially as it recounts his captivity, where he demonstrated that regardless of his circumstances he was and ever would be a creature of the elite. So it was that as a P.O.W. Churchill nevertheless regularly indulged in fine wines, traced troop movements on wall-size maps, and was only missed after his audacious escape because the local barber he had hired refused to be turned away by fellow prisoners when the time came for his regularly scheduled haircut!
Churchill has fallen out of favor with large portions of our modern audience. His racism, his imperialism, his misogyny, are all somewhat cringeworthy nearly one hundred fifty years after his birth. And it is not all political correctness: many of his views were well out of step with others more enlightened in his own era. At the same time, warts and all, Churchill was indeed a great man. It is impossible to imagine England under the siege of the Nazi war machine without Churchill cheering the Brits on, collaborating with FDR, demanding the sacrifice of the nation, and sounding his clarion call to “Never, never, never give in.” The character, the determination, the heroism, the steadfastness of that iconic figure is already manifest in the form of that spindly young overconfident fellow brought back to life for us once more in the pages of this fine book. There are indeed too few characters like Winston Churchill to animate our history, and far too few writers like Candice Millard to deliver such readable accounts of past times.
From the start of the Civil War, enslaved African Americans sensed the opportunity for freedom as Union forces seized territory at the outer margins of seceded states. Initially, there was the odd phenomenon of officers in blue uniforms turning over escapees to their slave masters. But all that changed in 1861 at Fort Monroe, at the southern tip of the Virginia Peninsula, when the famously chameleonlike General Benjamin Butler refused to return the three enslaved men who fled to his lines. Butler himself, at least at this stage of his life, could not have cared less about blacks, slave or free, but reasoning that the Fugitive Slave Act no longer applied to the seceded states, and observing that every enslaved person serving as support behind Confederate lines freed up a white soldier to fire upon Union ranks, Butler ruled that such escapees be treated as “contrabands” of war and confiscated. Contraband was an unfortunate term that equated the enslaved with property instead of people, but it nevertheless stuck—and so too did Butler’s policy, which only a few months later was enshrined by Congress in the Confiscation Act of 1861.
What began as a trickle to Butler’s fort turned into a veritable flood that eventually was to bring something like a half-million formerly enslaved people to seek shelter with the Union army over the next four years. About one-fifth of these would later serve, often heroically, as soldiers in the United States Colored Troops (USCT), but what about the other roughly four hundred thousand? What became of them? If their fate never occurred to you before, it is because the story of this huge, largely anonymous population has remained conspicuous in its absence in much of the vast historiography of the Civil War—at least until Amy Murrell Taylor’s brilliant, groundbreaking recent book, Embattled Freedom: Journeys through the Civil War’s Slave Refugee Camps.
Fleeing to Union lines was only possible if the army was in your vicinity, which put this option out of reach for much of the south’s enslaved population. That approximately one-seventh of the Confederacy’s enslaved population of 3.5 million fled to the surmised safety of Union lines when this limited opportunity knocked gives the lie to the notion that the “peculiar institution” was benign and that the majority of the enslaved were satisfied with their lot—a sadly resurgent fiction promoted by “Lost Cause” apologists that has again found an unfortunate home within contemporary political discourse. These 500,000 men, women and children—and yes, Taylor learned, there were indeed significant numbers of children—were of course not “contrabands” but refugees, as that term was understood both then and now. And they fled, usually in great peril, with little more than the rags on their backs, to what may have been a promise of freedom but also an unknown future fraught with difficulty.
What would become of them? It turns out that rather than a single shared outcome there was a variety of experiences that depended upon geography, the fortunes of war, and the arbitrary rule of local commanders. Neither the Union army nor the civilian north was prepared for the phenomenon of hundreds of thousands of black refugees, and the result was often not favorable to those who were the most vulnerable. At the dawn of the war, abolitionists still comprised only a tiny minority in the United States. Most of the north remained deeply racist, and those championing “free soil” generally had little concern for the welfare of African Americans on either side of the Mason-Dixon line. This reality informed policy, which even when well-intentioned tended to be patronizing, and was in fact frequently ignored. Embattled Freedom describes how orders were issued mandating both payments and provisions for refugees, who if physically capable were expected to provide the kind of support to the army as paid laborers that they might otherwise have given to the Confederate effort as slaves. But in practice, they were rarely paid, their wages euphemistically diverted to the “general welfare,” or simply stolen by dishonest opportunists. And military necessity trumped all: there was a war on, blood was being shed, and the existential future of the nation was at stake. Refugees would ever remain a lower priority, at the mercy of the corrupt or the indifferent. Rarely consulted, they had decisions made for them that often proved less than ideal. The author treats us to a number of examples of this, but perhaps the most ironic is the campaign by well-meaning missionaries to equip refugee shelters with windows, when their occupants assiduously eschewed these for the sake of privacy and security.
Then there was the case of the Emancipation Proclamation, which freed the enslaved in Confederate-controlled territory but paradoxically did not apply to areas controlled by the Union army. Only a rather obscure directive that would cashier any soldier returning a person to slavery served as an unlikely safety net for refugees. More significantly, there was the border state of Kentucky, which when it opted not to join the Confederacy became the largest slave state in the Union, something that endured until the Thirteenth Amendment was ratified, well beyond the end of the war. Refugee camps in Kentucky were ringed by slaveowners; wandering outside of camp could result in capture and enslavement that was nearly impossible for a black person to dispute in a state where slavery was both legal and widespread.
Elsewhere, refugees lived at constant risk in what can only be described as uncertain sanctuaries. Camps evolved into “freedman’s villages”—replete with churches, schools, stores and tidy public squares—that sprang up at the edges of Confederate territory occupied by Union troops, but long-term security was tenuous, dependent entirely on these garrisons. If the army was redeployed, refugees were suddenly thrust into great danger and forced to flee once more lest they be captured and returned to slavery by roving bands of locals. It is well documented that Confederates habitually executed USCT troops wounded or seeking surrender. Less familiar, perhaps, was the devastation visited upon these undefended villages by rebels and their partisan allies enraged at the formerly enslaved living in freedom in their midst. Hunger often accompanied the refugee, even in the best of circumstances; a camp or village razed and burned could portend starvation.
The end of the war and abolition seemed to suggest a new beginning, but optimism was short-lived. Lincoln’s untimely death sent Andrew Johnson to the White House. The new president was deeply hostile to African Americans, and ensuing years saw pardons issued to former CSA political and military elites, property returned to once dislodged slave masters, and refugees terrorized and murdered, ultimately driven off the lands that once hosted thriving freedman’s villages. Where can you see a freedman’s village today? You can’t: they were all plowed under, sometimes along with the bones of occupants less than willing to be displaced.
Embattled Freedom is an especially valuable resource because it contains not only a panoramic view of the refugee experience but an expertly narrowed lens that zooms in upon a handful of individuals whom Taylor’s careful research has redeemed from obscurity. Especially fascinating is the saga of Edward and Emma Whitehurst, an enslaved couple who had managed over time to stockpile a surprisingly large savings through Edward’s side work, in a unique arrangement with his owner. Fleeing slavery, the entrepreneurial Whitehursts turned their nest egg into a highly successful and profitable store at a refugee camp in Virginia—only to one day lose it all to retreating Union forces desperate for supplies. There is also the inspiring story of Eliza Bogan of Helena, Arkansas, who as a refugee leaves the harsh existence of picking cotton behind only to endure one obstacle after another in her pursuit of life as a free woman in uncertain circumstances. There are other stories, as well. These personal studies not only enrich a well-written narrative, but ever engage the reader well beyond the typical scholarly work.
A week after I finished reading Embattled Freedom, I sat in the audience during Amy Taylor’s presentation at the Civil War Institute Summer Conference 2019 at Gettysburg College, which highlighted both her passion and her scholarship. During the Q&A, I asked what surprised her most during her research. Hard-pressed to answer, she finally settled on the number of children that turned up in the refugee population. I would suggest that as a topic for her next book. In the meantime, drop everything and read Embattled Freedom. You will not regret it.
A few years ago, I had the honor of being selected for a key role on a team engaged in scanning, transcribing and digitizing a trove of recently rediscovered letters, diaries and narratives of the Massachusetts 31st Infantry, which turned up more than a century after they were compiled by their regimental historian but left unpublished. In a lifetime of studying the American Civil War, soldiers’ letters were hardly new to me, of course, but I found myself surprisingly emotional as I became one of the very first in so many decades to get a glimpse at the sometimes-hidden hearts of these long-dead souls. And there was something else: rather than the random excerpt, often highlighted for its dramatic impact, that makes a familiar appearance in the pages of history books, these materials represent continuous strands of communication by nearly two dozen individuals, some of which stretched over a three-year period. The stories they tell run the gamut from the mundane to the comedic to the horrific, but collectively the nature and the personalities of the storytellers emerge to reveal an authenticity in their experience too frequently lost in grand narratives about the war. A careful read of a man’s letters home over several years often unexpectedly exposes truths that are omitted or deliberately distorted by the correspondent.
This overarching point is subtly but expertly made again and again in historian Peter S. Carmichael’s magnificent work, The War for the Common Soldier: How Men Thought, Fought and Survived in Civil War Armies, certainly one of the most significant recent contributions to the historiography. As primary sources, surviving letters from the front are critical and invaluable, but even more critical may be interpretation, which can be misled by taking these at face value, or plucking them out of context, or being seduced by the words of a man who wants his wife or mother—or especially himself—to believe that he is courageous or confident or committed to his cause when only some or none of those may be true.
In a dense but highly readable account that brings a surprisingly fresh perspective to a frequently overlooked aspect of Civil War studies, Carmichael defies the generalizations of soldiers north and south that tend to predominate in the literature, reminding the reader that oversimplification distorts the reality on the ground. Some 2.75 million men fought on both sides in the Civil War. These were living, breathing human beings, not simply the statistical figures fed into databases to produce the broad generalities pervasive in many narratives. At the same time, he does not fail to identify the commonalities in the rank and file that exist in multiple arenas, but his skillful approach to this end is guided by the nuance and complexity that is the mark of a great historian.
Carmichael’s well-written chronicle explores almost all aspects of a soldier’s life in camp, on the march and in battle, but that nuance is made most manifest in the chapter entitled “Desertion and Military Justice.” The accepted wisdom has long argued that bounty jumpers constituted the majority of those shot for desertion over the course of the war, and perhaps with some justification. But while the numbers underscore that there were plenty who likely fit that profile, Carmichael’s research demonstrates that such a broad brush obscures a reality that saw men on both sides leaving the lines and returning, frequently more than once, and typically with little or no penalty. This was especially common among Confederates, who usually fled not out of cowardice or convenience but rather to aid starving families back home desperate for survival. And there was, in many cases, a fine line between AWOL and desertion. It is surprising how often luck or simply the vagaries of enforcement separated men made to sit on their own coffins with eyes bandaged while the firing squad formed up from those docked a month’s pay instead. It does seem that Lincoln’s moral compass was more finely oriented to the circumstances of the soldier missing from his company—even if this caused friction among the Union brass—than was the case on the other side, for the reality was that by percentage far more men clad in gray were put to death than those in blue, and some of these were mass executions before the lines. What is clear is that on both sides, the common soldier—even the veteran accustomed to the gore and slaughter of battle—was deeply disturbed when compelled to witness the cold-blooded murder of a fellow soldier, even if he thought the man got his just deserts.
A review such as this cannot possibly touch upon all of the themes Carmichael surveys in this outstanding study, but I was especially drawn to his treatment of the phenomenon of malingering, which instantly found a familiar face in Cpl. Joshua W. Hawkes, one of my men from the 31st, who bragged in letters to his mother about his health while he served away from the cannon fire as part of the occupation army in New Orleans, even taking swipes at those pretending to be ill to avoid duty. Yet later, on the very eve of combat, he fell victim first to “diarrhoea” and then to a bewildering set of ever-shifting complaints that kept him confined to a hospital bed for months until he was eventually discharged for disability. I read this man’s letters in isolation, of course, but Carmichael’s impressive research not only demonstrates that this soldier’s manufactured symptoms put him in the company of thousands of other “shirkers,” but also underscores how difficult it was for doctors equipped with the primitive diagnostic tools of mid-nineteenth century medicine to distinguish the truly afflicted from those talented at feigning illness to avoid combat or earn a discharge. As a result, some men who genuinely suffered were sent back under enemy fire, while others who were quite healthy succeeded in dodging it.
Some years after my project with the 31st, I was given access to a private collection of unpublished letters from George W. Gould, a Massachusetts private killed at the bloody battle of Cold Harbor in 1864. I transcribed his correspondence and created a website for public access to honor him, and I visit his grave in Paxton MA several times a year. When I placed a flag on his grave to commemorate Memorial Day 2019, I found myself in somber reflection on not only the sacrifice of Private Gould, but also the vast territory covered in The War for the Common Soldier, because although his name appears nowhere in the narrative this book is surely about George W. Gould and every man who marched alongside him, as well as every man who marched against him with musket held high. Pvt. George W. Gould and Cpl. Joshua W. Hawkes are just two of the millions who either gasped their last breaths on Civil War battlefields or drank beer at memorials in the decades that followed. If you want to understand that terrible war, you should indeed visit battlefields and explore the latest historiography, but you should also pause to read Carmichael’s superlative work. The truth is that you will never comprehend the Civil War until you come to understand the Civil War soldier. Some books should be required reading. This is one of them.
[REVIEW ADDENDUM: Some years back, I had the great honor of being selected for a key role on a team engaged in scanning, transcribing and digitizing a trove of recently rediscovered letters, diaries and narratives of the Massachusetts 31st Infantry—a regiment that first served with Benjamin Butler as an occupying force in New Orleans, and later as part of the Red River campaign under Nathaniel Banks—which turned up in the archives of the Lyman & Merrie Wood Museum of Springfield History more than a century after they were compiled by their regimental historian but left unpublished due to his untimely death. These materials can be accessed at: https://31massinf.wordpress.com
I found Carmichael’s treatment of malingerers especially fascinating, because it related to my own work with the Massachusetts 31st and Cpl. Joshua W. Hawkes, who in letters to his mother made dozens of references to his generally good health during the first portion of his service, when he thrived as part of the occupying force under Benjamin Butler in New Orleans. In one missive from the autumn of 1862 [letter 10/18/62], he even bragged about how quickly he recovered from the “ague” while taking a swipe at those who pretended to be ill, noting that while he was “back to duty now there is so much playing off sick I do not wish any such name.” Ironically, then, in April 1863, on the eve of what would have been his first foray into combat [letter 04/17/63], Hawkes was beset with “diarrhoea” [SIC], which eventually led to his return to New Orleans, this time to the St. James Hospital, where a bewildering set of ever-shifting complaints kept him confined—but not incapable of eating fairly well, such as “an egg in the morning, a piece of toasted bread each meal and a little claret wine,” [letter 6/4/63] and occasionally exploring the city when granted a pass—until he eventually succeeded in gaining a discharge for disability in July 1863. In one of his more histrionic letters to mother, he proclaims:
“I am perhaps disposed to magnify my ails, but when I have seen men brought in here who had been forced to march with diarrhoea [SIC] … coming here too weak to walk and living but a week or two, then I have thought it was not best to beg to be sent away to the exposures of an army on active duty in the field. They can call me a coward, a shirk, what they choose, but I think it a duty to take care of my health not only for myself but on my mother’s account, what do you think of this logic?” [letter 06/04/63]
Apparently, this “logic” served Hawkes well, since he was sent home without ever coming under enemy fire and lived on until 1890!
Some years after my project with the 31st, I was given access to a private collection of the unpublished letters of Pvt. George W. Gould, who was killed at the bloody battle of Cold Harbor in 1864. He has come to serve as my “adopted” Civil War soldier, so by honoring him I likewise honor all of those who have made the ultimate sacrifice. I scanned and transcribed his letters and created a website to honor him, which can be accessed at: https://resurrectinglostvoices.com
I have attached this addendum not because these particular soldiers who fell or survived have a greater or lesser import than any of the other hundreds of thousands who served in the American Civil War, but rather to add meaningful context, and to underscore the essential point of Carmichael’s wonderful book, which is that you must read far more deeply into what these men had to say in their letters home if you really want to try to understand the war at all.]
Tasmanian author Richard Flanagan has written seven novels, one of which—Gould’s Book of Fish—I would rank among the very finest of twenty-first century literature to date. I primarily read books of history, biography and science these days, but I do stray to the realm of fiction from time to time. When I happen upon a writer whose literary output not only consistently transcends the best published fiction of its day, but is so iconic that it comes to define its own genre—Cormac McCarthy and Haruki Murakami also come to mind—I latch on to that novelist and set out to read their full body of work. Wanting marks my completion of all of Flanagan’s novels, and it turns out that I saved one of the very best for the very last.
There is irony here, because I had long resisted this novel, based upon its off-putting description on Flanagan’s Wikipedia page—“Wanting tells two parallel stories: about the novelist Charles Dickens in England, and Mathinna, an Aboriginal orphan adopted by Sir John Franklin, the colonial governor of Van Diemen’s Land, and his wife, Lady Jane Franklin”—which struck me as a formula for fictional disaster! It turns out that I could not have been more wrong.
While several of Flanagan’s novels include characters from history, it would not be accurate to tag these as historical fiction, the way that category is generally understood. But then, the author’s work often defies classification. Flanagan is all about redefining genres—or creating new ones. Think Gabriel Garcia Marquez, John Irving, André Brink: Richard Flanagan truly belongs in that league.
The real Sir John Franklin did indeed serve as Lieutenant Governor of Van Diemen’s Land (today’s Tasmania), but he is better remembered as the Arctic explorer who met a tragic end in 1847 in a disastrous attempt to chart the Northwest Passage, when his ships became icebound, resulting in his death as well as that of his entire crew. The legend of the lost expedition he commanded, and the true fate of his crew, have been the subject of much speculation right down to the present day, and Franklin has often been lionized for his heroism. But the John Franklin of Wanting is not only less heroic, but a grotesque, self-absorbed, disturbing individual. Franklin and his equally narcissistic wife, Lady Jane—desperate for a child of her own—ignore prevailing taboos to adopt Mathinna, also a historic figure, one of the few full-blooded aborigines still remaining on the island after a sustained reign of terror by colonial settlers and a succession of epidemics had reduced their numbers to near extinction. What at first glance smacks of altruism masks more questionable desires by each of the Franklins—their brand of “wanting”—that Mathinna comes to fulfill, or fails to fulfill. The tragedy of Mathinna is brilliantly revealed through the nuance and complexity of a masterfully written narrative that subtly draws the reader in to expose a series of horrors hidden among the mundane, ever chilling yet never stooping to the gratuitous.
As if these characters and themes were not sufficiently complicated for any work of fiction, the novel contains an equally compelling parallel tale, told in alternating chapters, of author Charles Dickens in London, some ten thousand miles away. The connection of the Franklins to Dickens was a visit by Lady Jane to the famed novelist, seeking his support. In the years after her husband was lost to the Arctic, Lady Jane devoted her life both to memorializing him and sponsoring expeditions to locate him, in the feeble hope that he survived. Then evidence emerged that Franklin was in fact dead, hinting that in their last gasps he and the crew resorted to cannibalism to survive. Franklin’s widow will have none of it, and she enlists the aid of England’s most celebrated figure to defend Franklin’s honor against such horrid innuendo. Dickens, a Victorian rags-to-riches miracle who is both brilliant and wildly successful while yet morose and dissatisfied, haunted by the death of a favored child and locked in a loveless marriage, is plagued by his own sort of “wanting.” The intersection of his deepening well of unrequited discontent and Lady Jane’s determination to restore her husband’s reputation serves as the linchpin of the novel, spawning new purpose in Dickens even as Lady Jane basks in anticipation of the martyred explorer’s vindication. Dickens is far more intelligent and far more accomplished than either of the hapless Franklins, but despite his genius and outsize public persona he shares a similar unmistakable shallowness in his nature. In Flanagan’s Wanting, Dickens struggles to exist outside of the characters in his novels, and then takes it upon himself to produce, direct and cast himself in a stage role that permits him to stand before an audience as the heroic, romantic figure he longed to be.
Fiction reviews should largely avoid spoilers, so I will leave it here, but history buffs will certainly google the main characters to learn what really happened. It won’t be giving much away to note that six years after Wanting was published in 2008, the wreck of HMS Erebus—one of Franklin’s ships—was discovered, and two years after that his second ship, HMS Terror, was found, said to be in pristine condition. Even prior to that, evidence had substantiated that cannibalism was in fact part of the crew’s final days, contradicting both Lady Jane and the ardent defense mounted by Dickens. I will withhold the fate of poor Mathinna, other than to note that her gripping story—in the novel and in real life—will likely shadow the reader long after the last page of this book is turned.
I believe that every fiction review should include a snippet of the author’s own pen for those unfamiliar with their style and talent. This bit concerns a minor character—if any of Flanagan’s characters can be said to be minor ones—an aging actress in Dickens’ London:
On the night she had received the news of Louisa’s death, leaving her the only surviving member of her family, Mrs Ternan had stifled her weeping with a pillow so her daughters would not hear her heart breaking and would never suspect what she now knew: that every death of those you love is the death also of so many shared memories and understanding, of a now irretrievable part of your own life; that every death is another irrevocable step in your own dying, and it ends not with the ovation of a full house, but the creak and crack and dust of the empty theatre. [p90]
That powerful excerpt is just a tiny sample of Flanagan’s superlative prose. Wanting ranks amongst his finest novels, a list that in addition to Gould’s Book of Fish should also include Death of a River Guide and The Narrow Road to the Deep North, although there is not a bad one in the catalog. For the uninitiated who would like to experience Flanagan’s art, Wanting is a great place to start. Perhaps you may find yourself, like this reviewer, going on to read them all.
As a reader, some of my most serendipitous finds have been plucked off the shelves of used bookshops. Such was the case some years ago with Cradle of Life: The Discovery of Earth’s Earliest Fossils, by J. William Schopf, a fascinating account of how the author in 1965 was the first to discover Precambrian microfossils of prokaryotic life in stromatolitic sediments in Australia’s Apex chert dated to 3.5 billion years ago, the oldest confirmed evidence for life on earth at the time. My 2017 review of Cradle of Life—nearly twenty years after it was first published—sparked an email exchange with Bill Schopf that later led to his sending me a signed edition of his most recent book, Life in Deep Time: Darwin’s “Missing” Fossil Record. He did not ask me to read and review it, but naturally I did.
In this work, Schopf—an unusually modest man of outsize accomplishment—typically credits good fortune rather than his own estimable talents, often emphasizing the centrality of teamwork in the pursuit of sound science, as well as frequently paying tribute to the notion that each discovery and its discoverers are after all “standing on the shoulders of the giants” that preceded them. A young grad student when he first got into the game, the author, now seventy-seven, remains the most significant living survivor of those paleobiologists who devoted decades to the effort to identify and substantiate traces of the most ancient forms of life on the planet. He feels the clock ticking, and thus is strongly motivated by a desire to leave a record of the journey that led to such consequential discoveries now that most of his peers have passed on.
The result is Life in Deep Time, a curious book—actually something of a blend of three different kinds of books—that succeeds more often than not in its efforts, even if at times it can be an uphill climb for the general reader. It is first and foremost a memoir that dwells for a surprisingly long time on the author’s youth and upbringing, which can be awkward at times because of his decision to employ a third-person limited literary technique in the narrative, so that it is “Bill wondered about …” rather than “I wondered about …” Early on, the reader might grow a bit impatient as Bill negotiates high school, often under the disapproving glare of his father, an admirable man who nevertheless sets impossibly high standards for his son and is quite difficult to please. Yet, even then Schopf is ever the optimist, always grateful for that which goes his way, and treating that which does not as a valuable learning experience. Rather than being scarred by the travails of enduring a demanding parent, he seems to sit in awe of a father who sets challenges that are always another chalk-mark higher than Bill can grasp. Such circumstances might leave another child a substance abuser or a ne’er-do-well, but they simply inspired Bill Schopf to be the best of the best, fully absent an uncontainable ego or an axe to grind.
Beyond memoir, the second focal point of the book recounts Schopf’s scientific achievements, while paying tribute to those he worked with, many of whom are little known or entirely unknown outside of the paleobiology community. Science, the author repeatedly underscores, is a team effort. While the ever-modest Schopf does not dodge the recognition he clearly deserves for his key contributions to the field, he makes certain that credit gets appropriately shared among mentors and colleagues and even assistants.
Schopf’s work has spawned controversy that sometimes spilled over into the public arena. In the first case, there was pushback on his remarkable find of those 3.5-billion-year-old microfossils. Peer-reviewed science upheld his claim, although a prominent rival paleobiologist continued to dispute it. In the second, Schopf was brought in by NASA in 1996 to evaluate the extraordinary if premature announcement that life had been identified in a Martian meteorite, which was trumpeted by scientists, politicians and the media. Schopf was skeptical, and subsequent careful research proved him correct. The author’s well-written examination of these controversies is both coherent and enlightening, although blemished a bit by the continued use of that third-person limited literary technique, which feels especially awkward as he answers his critics through the narrative.
Schopf’s greatest triumph was certainly his discovery of those ancient fossils in Australia’s Apex chert, detailed in Cradle of Life and revisited in Life in Deep Time. Modern science has established that the earth is a little more than 4.5 billion years old, but in the mid-nineteenth century, when Charles Darwin devised his theory of evolution, no one could be sure what the true age of the planet was, although most scientists knew it was far older than the six thousand years that theologians claimed. In his groundbreaking 1859 treatise, On the Origin of Species, Darwin estimated that the erosion of England’s Sussex Weald must have taken some 300 million years, but he was taken to task on this by the famed Lord Kelvin, who publicly insisted that the earth could not possibly be older than 100 million years. Whatever the actual number, Darwin was deeply troubled because the process of natural selection that he envisioned would require far longer than that for higher life forms to evolve. In the century that followed Darwin, growing scientific sophistication established the true age of the earth with ever greater specificity, but identifying the planet’s earliest life forms proved far more elusive. This is because traces of these unicellular organisms lacking a membrane-bound nucleus—the prokaryotes that include Archaea and Bacteria—can be maddeningly difficult to identify, and are easily confused with inorganic remains of strikingly similar appearance. A famous false positive in this arena set paleobiology back for many decades. As a result, even as late as 1965, Schopf’s find of 3.5-billion-year-old microfossils of prokaryotic life proved controversial, although it eventually gained full acceptance by the scientific community.
The science behind all this is remarkably complex, and that is the third focus of Life in Deep Time, a welcome addition for those comfortable with textbooks on paleobiology, but often inaccessible to the general reader. I am trained in history rather than science, so I found some challenging moments in Cradle of Life that had me re-reading a paragraph or two, but much of it was indeed comprehensible to me as a non-scientist. That is not always the case with the final section of Life in Deep Time, which casually includes sentences such as this one:
“By this time, Bill had gained sufficient knowledge of the chemistry of kerogen, the coaly carbonaceous matter of which ancient microscopic fossils are composed, that he imagined that if the dominating polycyclic aromatic ring structures of the fossil kerogen were irradiated with an appropriate wavelength of laser light, they too would fluoresce and produce the images he sought.” [p186]
Material like this is certainly not impenetrable for an educated reader, but long discourses in this vein can lose a wider audience not schooled in paleobiology. Perhaps this content, although critical to scientists reading the book, might have been better placed in the appendix so as not to lose the flow of an otherwise engaging narrative.
While portions of Life in Deep Time may be difficult to navigate for the general reader, I would nevertheless recommend it. Bill Schopf is a remarkable man, a great scientist and a fine writer. The various threads of the tale he relates here add up to a storied saga of the evidence-based search for the earliest life on the planet, as well as that of the distinguished if often otherwise anonymous men and women who were responsible for marking one of the greatest milestones in recent scientific history. The voice of Bill Schopf is a humble yet commanding one: it deserves to be heard.
Apparently, Sigmund Freud spent the final year of his long and productive life as a refugee from the Nazi menace, in a house in London that is now a museum to his legacy. On the great exile’s preserved desk still sit a good number of statuettes from ancient cultures that he collected, including on one corner a carved stone baboon—known as the “Baboon of Thoth”—symbolic of that ancient Egyptian deity identified with both writing and wisdom. “Freud’s housekeeper recalled that he often stroked the smooth head of the stone baboon, like a favourite pet.” [p13] This anecdote serves as an introduction to Egypt, by Christina Riggs, a 2017 addition to the wonderful Lost Civilizations series that also features volumes devoted to the Etruscans, the Persians, and the Goths.
I was so taken by one of these—The Indus, by Andrew Robinson—that I put the others on a birthday list later fulfilled by my wonderful wife, so I now own the remainder of the set, each one destined to sit in queue in my ever-lengthening TBR until its time arrives. Egypt came up first. But it turns out that Riggs’ book stands apart from the others because it is not at all a history of Egyptian civilization, but rather a studied essay on the numerous ways that ancient Egypt came to be understood by subsequent cultures, its historical record manipulated and frequently distorted to support forced interpretations that suited its various interpreters. The toolkit deployed to construct these sometimes elaborate visions, which flattered the later civilizations far more than they accurately represented the ancient one that inspired them, included Egypt’s monumental architecture, its tomb painting, its mummified dead, its hieroglyphs, even abstract and unfounded notions of race and superiority—as well as, of course, objets d’art like the “Baboon of Thoth.”
Riggs, whose background is in art and archaeology, writes well and presents a series of articulate arguments to support her examination of all the ways Egypt has echoed down through the ages. It is often overlooked that to the first-century Roman tourists who scribbled graffiti on tombs in the Nile valley, the pyramids of Giza were more ancient by half a millennium than those long-dead Romans are to us today! So, it is a very long echo indeed. Alas, for all of Riggs’ talent, I myself made a poor audience for her narrative. I opened the cover yearning to learn more about Egypt, not more about how we recall it. I might not have made the mistake had I noticed at the outset how her title—which is absent the definite article—differs from the others in the series. There is The Indus, The Barbarians, The Etruscans. Riggs’ edition is simply Egypt. That should have been a clue! But that is, as we say on the street, “my bad,” not the author’s. Despite this, I did find enough to hold my interest, to finish the book, and to recommend it—but only to those with a far greater interest in art history and interpretation than I possess.
A small island called “Bermeja” in the Gulf of Mexico that was first charted in 1539 was—after an extensive search of the coordinates—found to be a “phantom” that never actually existed at that latitude, or anywhere else for that matter. It turns out that this kind of thing is not unusual: countless phantom islands, some the stuff of great legend, have appeared on charts dating back well beyond the so-called “Age of Discovery” to the very earliest maps of antiquity. What is unusual about Bermeja is that its nonexistence was only determined in 2009, after it had shown up on maps for almost five hundred years!
The reader first encounters Bermeja in the “Introduction” to The Phantom Atlas: The Greatest Myths, Lies and Blunders on Maps, by Edward Brooke-Hitching, a delightful, beautifully illustrated volume that is marked by both the eclectic and the eccentric. But the island that never was also later gets its due in its own chapter, along with a wonderful, detailed map of its alleged location. This is just one of nearly sixty such chapters that explore the mythical and the fantastical, ranging from the famous and near-famous—such as the Lost Continent of Atlantis and the Kingdom of Prester John—to the utterly obscure, like Bermeja, and the near-obscure, like the island of Wak-Wak. While the latter, also known as Waq-Waq in some accounts, apparently existed only in the imagination of the author of one of the tales in One Thousand and One Nights, it nevertheless made it into the charts courtesy of Muhammad al-Idrisi, a respected twelfth-century Arab cartographer.
But The Phantom Atlas is not just about islands. There are mythical lands, like El Dorado and the Lost City of the Kalahari; cartographic blunders, such as mapping California and Korea as islands; even persistent wrong-headed notions like the Flat Earth. There is also a highly entertaining chapter devoted to the outlandish beings that populate the 1493 “Nuremberg Chronicle Map,” featuring such wild and weird creatures as the “six-handed man,” hairy women known as “Gorgades,” the four-eyed Ethiopian “Nistyi,” and the dog-headed “Cynocephali.” That at least some audiences once entertained the notion that such inhabitants thrived in various corners of the globe is a reminder that the exotic characters invented by Jonathan Swift for Gulliver’s Travels were not so outrageous after all.
One of the longer and most fascinating chapters, entitled “Earthly Paradise,” relates the many attempts to fix the Biblical Garden of Eden to a physical, mapped location. The author places that into the context of a wider concept that extends far beyond the People of the Book to a universal longing that he suggests is neatly conjured up with the Welsh word “Hiraeth,” which he loosely defines as “an overwhelming feeling of grief and longing for one’s people and land of the past, a kind of amplified spiritual homesickness for a place one has never been to.” [p92] It is charming prose like that which marks Brooke-Hitching as a talented writer and distinguishes this volume from so many other atlases that are often simply a collection of maps mated with text to serve as a kind of obligatory device to fill out the pages. In happy contrast, there are enchanting stories attached to these maps, and the author is a master raconteur. But the maps and other illustrations, nearly all in full color, clearly steal the show in The Phantom Atlas.
Because I obtained this book as part of an Early Reviewers program, I felt an obligation to read it cover-to-cover, but that is hardly necessary. A better strategy is to simply pick up the book and let it open to any page at random, then feast your eyes on the maps and pause to read the narrative—if you can take your eyes off the maps! From al-Idrisi’s 1154 map of Wak-Wak, to Ortelius’s 1598 map of the Tartar Kingdom, to a 1939 map of Antarctica featuring Morrell’s Island—which of course does not really exist—you are guaranteed to never grow bored with the visual content or the chronicles.
There are, it should be noted, a couple of drawbacks in arrangement and design, but these are to be laid at the feet of the publisher, not the author. First of all, the book is organized alphabetically—from the Strait of Anian to the Phantom Lands of the Zeno—rather than grouped thematically, which would have no doubt made for a more sensible editorial alternative. Most critically, while the volume is somewhat oversize, the pages are hardly large enough to do the maps full justice, even with the best reading glasses. Perhaps the cost was prohibitive but given the quality of the art this well-deserves treatment in a much grander coffee table size edition. Still, despite these quibbles, fans of both cartography and the mysteries of history will find themselves drawn to this fine book.
The phantom island of Bermeja, featured in an 1846 map.
When I was growing up in the 1960s, the Civil War was often dubbed a struggle of “brother against brother,” uttered with a smack of wonderment at how a nation united by so many commonalities could have come apart like that only one short century prior, taking more than six hundred thousand lives in the process. Then, as the centrality of slavery came to be properly emphasized, both historiography and sentiment shifted. Certainly, there were plenty of families divided by war—perhaps most famously Mary Lincoln’s, whose brothers fought for the Confederacy—but the real division turned out to be geographic and defined more by the South’s “peculiar institution” than habit or climate. Alexis de Tocqueville’s oft-cited anecdotal 1835 comments, in Democracy in America, that sharply characterized the vast cultural gulf that lay between free and slave states on opposite sides of the Ohio River, turned out to reflect a true demarcation that saw two different visions of America evolve within a single nation. Slavery defined the south, even if most southerners were not slaveowners, so that long before secession the south had indeed become another country.
That such conclusions can also be overdrawn was brilliantly demonstrated by historian Edward Ayers in his magnificent 2003 work, In the Presence of Mine Enemies: War in the Heart of America, 1859-1863, which surveys the “Great Valley” that stretches north of the Mason-Dixon line to encompass Franklin County, Pennsylvania, and south of it to include Augusta County, Virginia. Slavery was indeed part of the fabric of life in the lower valley in cities like Staunton, Virginia, yet on the eve of the war its citizens still had much more in common than not with denizens of the upper valley in cities like Chambersburg, Pennsylvania, which was free soil. Relationships that went well beyond trade flourished in a porous border of communities that shared a largely common identity. Much of Augusta County was old Whig and staunchly Unionist; when the secession crisis was upon them, most fiercely resisted calls to leave the Union. But when Virginia joined the Confederacy, those loyalties quickly shifted. Franklin County had little sympathy for what it viewed as the treason of its southern brethren. Men from both sides eagerly—or not so eagerly, depending upon the man—grabbed muskets and rushed off to the killing fields in the name of honor and duty or simply obligation. The war truly tore the Great Valley asunder, and before it was through both sides were littered with death and destruction utterly unimaginable just a few years earlier.
In his latest work, The Thin Light of Freedom: The Civil War and Emancipation in the Heart of America, Ayers picks up where he left off, taking the saga from the critical turning points of the war that characterized the summer of 1863 to Appomattox and its aftermath, and beyond that through Reconstruction and what was to be its tragic legacy for African Americans. In chapters often bracketed by an italicized overview that puts events in the valley in context with the wider perspective of the war, Ayers narrows the lens to focus upon key individuals emblematic of the struggle on the ground. It is in these human stories that it becomes clear that the noise of cannon fire, calls to glory, and the plaintive cries of the wounded and the dying coming from the valley was actually something of a small-scale version of the greater thunder that echoed across the national landscape in a terrible, bloody conflict that claimed so very many lives before the guns fell silent.
Virginia’s Shenandoah Valley had long been the breadbasket of the Confederacy, and its citizens still proudly recalled when Stonewall Jackson made a mockery of three Union armies in its environs in his brilliant Valley Campaign of 1862. But Jackson was dead now, a victim of friendly fire at Chancellorsville, and Federal forces threatened both the farms and the rails that delivered their precious products to grey-clad stomachs. One of the chief motives that took Lee—sans his most famed lieutenant—to Gettysburg was an attempt to divert Union forces to take the pressure off valley farmers and protect cherished crops. Despite his failure there, the valley did win a brief respite, and—to Lincoln’s great chagrin—Lee managed an orderly retreat with a wounded yet still formidable army that was to persist in the field for nearly two full years. All the while, the men in Lee’s army forswore the kind of destruction and plunder that they found so abhorrent in the ravages—both real and imagined—visited upon the Shenandoah by the Union, which southerners universally branded as uncivilized. The exception was to be the African American, formerly enslaved or just of a matching color, whom the Army of Northern Virginia gleefully rounded up and sent south to become chattel. Their version of civilization remained unrattled by such acts of cruelty.
The point has been made that the “total war” of the twentieth century was presaged by the acts of Union forces upon civilians in the Civil War, but that is manifestly overdrawn. Even at its height, as Sherman marched to the sea and Sheridan despoiled the Shenandoah, Grant’s strategic imperative designed to deny the Confederacy foodstuffs and matériel hardly resulted in the slaughter of innocents seen in 1914 and beyond. At the same time, for those who lived through it, it seemed a line had been crossed from an earlier age, even if historians might argue that same line had already been crossed by the British some four-score years prior. There was palpable pain on both sides, even if the south suffered more as the war ground on to its conclusion. Federal forces indeed quite ruthlessly put farms and factories out of business in the Shenandoah. Earlier restraint eventually gave way, and Confederates mercilessly and without regret retaliated by burning Chambersburg, Pennsylvania in 1864.
Virginia was, of course, the central battleground of the Civil War in the eastern theater, so she had more stories to tell. Many of these stories come from the valley, and some truly tear at the heartstrings:
On [one Virginia] farm, a Union officer ordered a fine mare bridled and led away. When the mare’s colt followed its mother, the farm woman begged them not to take an animal so young that it could be of no use to an army. The officer agreed the animal was useless and simply commanded one of his men to shoot the colt. The woman wept over its body. People remembered these stories for generations. [p240]
But the tear in the reader’s eye for the dead colt and the sobbing woman is quickly washed away and replaced with horror as Ayers recounts another telling episode:
[In Saltville, in 1864,] Confederates, enraged after discovering that they were fighting against black men, killed the wounded African-American soldiers left behind after the failed Union attack. [Diaries of those at the scene reported] … Confederate soldiers … “shooting every wounded negro they could find” [and] that scouts “went all over the field and … sung the death knell of many a poor negro who was unfortunate enough not to be killed yesterday. Our men took no negro prisoners. Great numbers of them were killed yesterday and today…” [General John Breckinridge arrived and] “ordered that the massacre should be stopped. He rode away and—the shooting went on. The men could not be restrained.” The murder continued for six more days, culminating with guerrillas forcing their way into a makeshift hospital at Emory & Henry College and shooting men, black and white, in their beds … A Richmond newspaper printed a tally that showed telling numbers: 150 black Union soldiers had been killed and only 6 wounded, while 106 white soldiers had been killed and 80 wounded. The ratios testified that dozens of wounded African-Americans had been killed …The Richmond paper celebrated the rare Confederate victory over all the “niggers” and Federal troops. [p242-43]
In extremely well-written accounts like these that marry a passionate narrative to solid history aimed at both the scholarly and popular audience, Ayers artfully brings the heartbreaking realities of war in the valley on both sides to our modern doorstep, forbidding us to look away, and compelling us to pick up and cradle the truth of what really transpired.
Of course, as the postwar “Lost Cause” myth took hold, we know now that stories like the dead colt would not only frequently be repeated, but magnified and romanticized, while the slaughter of wounded blacks in Saltville would deliberately be erased. Since most histories of the conflict end at Appomattox or shortly thereafter, readers are denied the painful epilogue of how that came to be so. Here Ayers bucks that trend and keeps going all the way to 1902.
A potent strain in the most recent historiography argues convincingly that while the north claimed military victory, the south ultimately won the Civil War. A week after Lee’s surrender, Lincoln was dead, replaced by Tennessean Andrew Johnson, who welcomed the ex-Confederates who seized political power as the states that had seceded were restored to the Union, while demonstrating little regard for the millions of formerly enslaved African Americans cast adrift in a hostile and economically devastated southern landscape. Despite the efforts of the “Radical Republicans” who controlled Congress to seek justice for blacks through Reconstruction, Johnson dominated events, and blacks found themselves terrorized and murdered by former Confederate elites who would not tolerate steps towards fairness and equality. With emancipation, the former “three-fifths” rule that defined representation was no more, and with millions of blacks now counted as actual persons, newly readmitted states actually gained more political power than they had possessed in the antebellum years. Institutionalized terror kept African Americans from the ballot box and transformed their status into that of second-class citizen, which was hardly challenged in the century to follow.
If the valley was a kind of microcosm of the Civil War in America, by extending his narrative Ayers superbly demonstrates that so too was its unfortunate aftermath for African Americans. The Thin Light of Freedom is an outstanding work on multiple levels, not least in its success in conjuring empathy for all of the victims on both sides, and guiding us to a greater appreciation of how and why the many unresolved elements of that long ago conflict continue to resonate, often uncomfortably, for the United States in the twenty-first century.
It was late and I was on my way home, rock ’n’ roll blasting on the car radio. It was the one-week anniversary of our very first apartment together as a couple, so there was a kind of glow around the day. Then the music cut off abruptly and the news broke: John Lennon had been shot. John Lennon was dead. When the tunes resumed, it was all Beatles and Lennon solo stuff. One of the songs was, of course, Imagine. Tears streamed down my face. It was December 8, 1980.
Imagine had been recorded and released in 1971, but as the year 1980 closed out that already felt like fifty years ago. The Vietnam War and Nixon were long gone. The sense of radicalism, of tumult—as well as innovative creative expression in music and the arts—had slipped away, its wake littered with the detritus of cocaine, schlocky pop music, and a kind of national ennui. Most men, including myself, didn’t wear their hair shoulder-length anymore. Almost exactly a month before Lennon’s murder, Ronald Reagan was elected President, leaving many of us far more shaken than stirred.
John Lennon had recently reemerged after a long hiatus from the studio and public life. He was just forty, but he looked much older than that. Double Fantasy—his first album in five years, featuring songs by John and Yoko—was released just three weeks before his death. I personally found it weak and disappointing. But I bought it just days after it hit the record stores—of course—it was music from John Lennon! Lennon had been my favorite Beatle, as well as a kind of personal hero: a peace activist, an iconoclast, a man who found himself trapped by the money and fame and lifestyle that others salivated for, a man willing to throw it all away (well, perhaps not all the money) for the love of his life, avant-garde artist Yoko Ono, even if many of us were puzzled by his obsession with her. It turned out that the Beatles would ever remain greater than the sum of their parts, their collective output far outshining the solo work of each member, including Lennon; but perhaps his best solo effort was the album Imagine, which featured that eponymous song of hope that remains a soft-rock national anthem. John’s murder sent Double Fantasy skyrocketing up the charts, if not to critical acclaim, but Imagine is the real legacy of John Lennon.
Thirty-eight Christmases after Lennon’s assassination, the stark white cover of the beautiful outsize volume Imagine John Yoko emerged beneath festive wrapping paper, a gift from my wife. Compiled by Yoko, but with author credits to John and Yoko Lennon, this gorgeous coffee table edition boasts extensive interviews, black and white photography, liner notes, illustrations, and ephemera, crafted to tell the “definitive inside story” of the making of the Imagine album and film of the same name at their English country mansion estate, Tittenhurst Park.
The spotlight is not only upon John and Yoko, but also on a generous cast of characters, including co-producer Phil Spector, then-giants of the music scene such as George Harrison, Nicky Hopkins and Mike Pinder, as well as lesser-known figures, plus all sorts of production assistants and the often uncredited folks who each play a significant if not always acknowledged role in the final cut of a masterpiece like Imagine. Interview excerpts are not dated; some are contemporary to production, while others look back from decades later. Sadly, like Lennon, many have passed on, including Harrison and Hopkins; King Curtis, who sat in on saxophone, was murdered in late summer of that same year. Ironically, Phil Spector and drummer Jim Gordon—of Derek and the Dominos fame—are both in prison serving life sentences for murder. Almost all the rest who are still alive have faded into obscurity. But thumbing through this magnificent book, for a moment it is the early part of 1971 again: John Lennon is just thirty, madly and obsessively in love with the older Yoko Ono, who just as madly and obsessively reciprocates. John has left the Beatles behind, his long collaboration and once-close friendship with Paul McCartney on the rocks, but there is a palpable sense of great promise in what the future holds for John and Yoko.
The very next day after I began perusing Imagine John Yoko—and before it turned into a cover-to-cover read for me—I dug out my old vinyl copy of Imagine and gave it a spin. I had not listened to it in many years and I had forgotten what a truly great album it is. The title track tends to get all the attention, but to my mind Gimme Some Truth is the best song on the record. Other iconic tunes include Crippled Inside, Jealous Guy and I Don’t Want to be a Soldier. Some might argue that none of it lives up to Strawberry Fields Forever or Happiness is a Warm Gun, but there’s little doubt that the collection of songs on Imagine is outstanding and certainly Lennon’s best post-Beatles work. It was re-listening to the album after all this time that led me to carefully read, rather than skim, the entire book. Along the way, I also screened the Blu-ray DVD that contains the full-length “rockumentary” film Imagine, replete with innovative music videos from the Imagine album as well as selections from Yoko’s Fly album, plus a companion “making-of-Imagine” film entitled Gimme Some Truth. Icing on the cake includes cameos from Andy Warhol, Fred Astaire, Dick Cavett and Jack Palance. I highly recommend these audio-visual companions to the book to help make it come to life in all its brilliance once more.
The highlight of the book and the film is John in the “White Room” at Tittenhurst recording Imagine, singing and playing on the all-white Steinway grand piano that he gave to Yoko for her birthday that year, while Yoko slowly opens a series of white shutters to let light stream in. At the end, Yoko is seated beside John at the piano, and they exchange looks that reflect such a degree of genuine mutual love and affection and admiration that this single moment serves to validate the entire project. The combined experience of immersing myself in the book, the album and the films made me come to better appreciate not only the superlative achievement of Imagine, but also the integral role that Yoko played as artist and inspiration throughout. Like much of the public, back in the day I found it difficult to grasp John’s utter infatuation with Yoko, but the testimony of so many in this book underscores Yoko’s essential part in the creation of this masterpiece. At the same time, listening to her vocals on portions of the Imagine film has yet to convince me that she has talent as a singer. Still, Yoko was clearly full partner to Imagine, not some assistant. It would never have been if not for her presence in John’s life.
One of my favorite bits in the book and in the Gimme Some Truth film features Claudio, a Vietnam vet suffering from PTSD, who was found to have been living for some days in the woods at Tittenhurst. Claudio had become convinced that John was communicating with him through his lyrics. Disheveled and confused, he is brought before John, who tells him that “I’m just a guy who writes songs,” and patiently explains to an obviously crestfallen Claudio that the lyrics have nothing to do with him. There is a brief pause, and then John, with much empathy, asks: “Are you hungry?” John then brings him in and feeds him at his table. Claudio was both disturbed and obsessed with John Lennon, and the recounting of this episode made me wonder how things might have turned out differently if John had managed to similarly engage someone else who was disturbed and obsessed with him—Mark David Chapman—before it was too late.
On the final pages of Imagine John Yoko, they each speak to us. There’s an excerpt from an interview with John saying of himself and Yoko that “We’d like to be remembered as the Romeo and Juliet of the 1970s.” When asked if he had a picture of “When I’m 64,” John replied:
“I hope we’re a nice old couple living off the coast of Ireland or something like that—looking at our scrapbook of madness. My ultimate goal is for Yoko and I to be happy and try and make other people happy through our happiness. I’d like everyone to remember us with a smile . . . The whole of life is a preparation for death. I’m not worried about dying. When we go, we’d like to leave behind a better place.” [p298]
Those days of turning scrapbook pages were, sadly, not to be. As a fan, as a reviewer, I would urge you to buy this book and to read it, but it is not for me but rather for Yoko to deliver the coda, of course:
“It was such an incredible loss when I think about it . . . See, most people think, ‘Well, he’s a rocker and just kind of rough, maybe,’ but no. At home he was a very gentle person and extremely concerned about me but also concerned about the world too. I still miss him, especially now because the world is not quite right and everybody seems to be suffering. And if he was here it would have been different, I think. I think that in many ways John was a simple Liverpool man right to the end. He was a chameleon, a bit of a chauvinist, but so human. In our fourteen years together he never stopped trying to improve himself from within. We were best friends. To me, he is still alive. Death alone doesn’t extinguish a flame and a spirit like John.” [p298]