Featured

Review of: On Tyranny (Graphic Edition): Twenty Lessons from the Twentieth Century, by Timothy Snyder, Illustrated by Nora Krug

My favorite moment of my favorite film is the “La Marseillaise” scene in Casablanca. Freedom fighter Victor Laszlo, portrayed by Paul Henreid, abruptly breaks off a conversation with café owner Rick Blaine, played by Humphrey Bogart, when he overhears German soldiers in the bar singing a patriotic military song, and, with dramatic purpose that underscores his own outrage, intervenes to have the orchestra take up the French national anthem instead. The bandleader hesitates; Bogie nods his assent. At first the Germans persist, but soon nearly all the patrons at Rick’s join in, including, perhaps most memorably, the young French woman Yvonne, shown earlier consorting with a German soldier, now stridently vocalizing each syllable of “La Marseillaise” with tears streaming down her face, until the volume and force of the anthem drown out the Germans and they surrender to the circumstances. As the music fades, Yvonne cries out reflexively, “Vive la France!” I have screened Casablanca more than two dozen times, but the “La Marseillaise” scene grips me anew each time; I feel chills, and tears well up in my eyes. It is just a movie, of course, but the symbolism is stark and powerful nonetheless, a poignant metaphor of how ordinary people can—even with tiny measures—resist fascism.

The “La Marseillaise” scene flashed through my mind as I turned the pages of On Tyranny: Twenty Lessons from the Twentieth Century, by distinguished Yale professor and historian Timothy Snyder, originally published in 2017 and later reissued in this splendid Graphic Edition (2021), beautifully illustrated by Nora Krug. But while Victor Laszlo and Yvonne are fictional celluloid heroes of a staged drama, Snyder looks back to actual individuals who confronted horrific circumstances as state fascism and rising totalitarianism convulsed Europe in the twentieth century, and connects the dots to the unsettling strength of emergent strains of neofascism that threaten to consume increasingly brittle democratic institutions in the West.

But identifying elements of fascism is not always easy, since, while no less menacing, these typically take forms far more subtle than swastikas sewn to a shirt. In 2014, Hillary Clinton was roundly pilloried for casting Vladimir Putin’s naked aggression in Ukraine in the same realm as Adolf Hitler’s adventurism in the Sudetenland and the Austrian Anschluss. I lack Clinton’s stature, certainly, but I was similarly rebuked in my own circles on the eve of the 2016 presidential election when I drew lines from Trump’s MAGA to Hitler’s Nazis.

But Timothy Snyder has proved a reliable guide in these matters, most prominently in The Road to Unfreedom: Russia, Europe, America [2018], a magnificent work of unassailable scholarship that clearly established that such analogies are hardly hyperbolic—and that proved prescient enough to anticipate Putin’s malignant strain of revanchism, which later saw Russian tanks rolling into Ukraine in a full-scale invasion, an act of unprovoked aggression not seen in Europe since World War II. The latter made the cable news, but equally sinister stuff has been floating just beneath the radar for years, undetected by untrained eyes.

Snyder argues that Putin has carefully and cleverly sculpted a rebranded neofascism for the new millennium, and that his fingerprints are everywhere: in efforts to fracture the NATO alliance, in championing Brexit to weaken the European Union, in vitriolic campaigns against so-called “immigrant invasions,” in others promoting antifeminism, homophobia, and hypermasculinity—and especially in election interference in the United States. Snyder posits that Putin helped fashion the fictional candidate “Donald Trump, successful businessman,” who was then marketed to the American people. Paul Manafort was the last advisor to the last pro-Russian president of Ukraine before he became the first campaign manager to Donald Trump. That Trump is indeed Putin’s puppet is a secret hiding in plain sight.

By the courageous acts of some, the ineptitude of Trump himself, and a certain amount of luck, America weathered his four-year tenure, if only—as January 6th reminds us—just barely. But our democracy is unlikely to survive a second go-around, which is why in this election year recognizing and confronting fascism, in efforts both small and large, is so vital to the future of our fragile Republic. For those paying attention, the United States in the early 2020s has begun to feel disturbingly like Germany in the 1930s. But how does the average American distinguish reality from propaganda, skillfully broadcast from both Moscow and Mar-a-Lago? Or, even more challenging, detect signs of fascism in MAGA, however blatant these might seem to members of the intelligentsia like Snyder?

This is an obstacle too often underestimated. Donald Trump has bragged about his support among the poorly educated, a too-true if uncomfortable reflection of a vote-casting cohort we overlook at our own peril. The problem with The Road to Unfreedom, for all its superlative craftsmanship, is that it is directed at intellectuals and the politically sophisticated, reducing both its reach and its appeal to a wider and arguably more significant audience. That is exactly what makes On Tyranny—especially in this standout graphic edition—such a critical and far more accessible weapon in our arsenal against fascism. Moreover, a younger demographic, weaned on graphic novels and harboring a certain contempt for political institutions, is more likely to find enlightenment, perhaps even epiphany, between the covers of this slender volume.

Timothy Snyder

In marked contrast to The Road to Unfreedom, Snyder’s On Tyranny is a brief and easy read. The entire volume could be consumed in a single sitting, although I deliberately stretched it out over several days in order to soak in the messaging. The Twenty Lessons from the Twentieth Century of the subtitle are rendered as twenty chapters that look to past and present to predict the grim future that lies ahead absent the active intervention he assigns to all of us, collectively. In his words and the accompanying illustrations, the echoes from some ninety years past shriek loudly into our current political maelstrom. It may otherwise take a keen ear to catch that tune, but Snyder makes certain those sounds are unmistakable.

The first chapter, with its lesson “Do not obey in advance,” speaks most consequentially to just how the complacency of an obedient population enables the oppressor. The Nazis were pleasantly surprised at how effortlessly Austrians ceded their own sovereignty in the Anschluss, and colluded in persecuting the Jews among them. Tyrants don’t always have to seize control; sometimes it is handed to them. Hitler himself first gained political power through elections, as did the communists in Czechoslovakia in 1946. But what cemented the absolute rule that followed was anticipatory obedience, which Snyder pronounces the true political tragedy: the conformity of a docile population that facilitates absolute rule until it can no longer be reversed—be that Hitler’s Reich, Soviet-style communism, or some other less flamboyantly ornamented authoritarian regime. In the end, totalitarianism, however packaged, is always a terrifyingly similar creature.

But its disguise can be quite compelling. One way to unmask it is to “Believe in truth” (Lesson Ten) and to defend that truth unfailingly against the “alternative facts” foisted upon you by those who work to blur the boundaries of reality with questionable notions that confirm a preferred narrative. Snyder lectures: “To abandon facts is to abandon freedom. If nothing is true, then no one can criticize power, because there is no basis upon which to do so.” This is uncomfortably familiar territory these days, recognizable in everything from unscientific attacks on vaccines and climate science, to a whitewashing of the insurrection, to the “big lie” of election denial, along with a prevailing whiff of vague yet menacing conspiracy hovering about every discussion. What if Big Pharma is forcing dangerous vaccines into our bloodstreams? What if climate scientists are covertly colluding to advance a green agenda? What if Nancy Pelosi engineered the assault on Congress? What if Biden is not the legitimate president? The power and reach of social media dwarf the capacity of legitimate news to keep up, as the unsophisticated and the paranoid alike are almost effortlessly swept into a maze of rabbit holes designed to distort and discredit empirical evidence in order to market faith in skillfully devised misinformation.

Illustrated by Nora Krug (p60-61)

Snyder correctly identifies the process as “… open hostility to verifiable reality, which takes the form of presenting inventions and lies as if they were facts.” Donald Trump, of course, is the master of this mechanism. The author reports Trump averaging six lies daily in 2017, and twenty-seven a day by 2020; The Washington Post more specifically quantified that as an astonishing 30,573 false or misleading claims over four years. Such is how a “fictional counterworld” is constructed, one of “magical thinking” that is “the open embrace of contradiction.” The result breeds chaos and uncertainty, and finally a fear of disorder that can only be addressed by the “strong man,” the tyrant-in-waiting with all the answers, eager to come to the rescue with feigned benevolence, declaring “I alone can fix it.” Snyder turns to history to remind us that this is nothing new, that the house that MAGA built is chillingly similar to the ones fascists of the past called home.

There are eighteen more lessons, all of them valuable, but my own favorite is the final one, which makes an entire chapter of just two sentences: “Be as courageous as you can. If none of us is prepared to die for freedom, then all of us will die under tyranny.” For me, those lines brought to mind Hans and Sophie Scholl, idealistic young German siblings guillotined by Hitler’s regime for handing out pamphlets associated with the doomed anti-Nazi “White Rose” movement. There have been many other such martyrs to freedom over time, but even more who survived and lived to see the day that their own tyrants were tumbled and human dignity restored. We can only do what we can. We can only be as courageous as we can be.

In the closing scenes of Casablanca, Bogie risks his life against long odds to urge Ilsa, played by Ingrid Bergman, the love of his life as well as the wife of Victor Laszlo, to step onto a plane poised for departure to be at Victor’s side in his ongoing crusade against fascism. Rick famously tells her: “Ilsa, I’m no good at being noble, but it doesn’t take much to see that the problems of three little people don’t amount to a hill of beans in this crazy world.” Here Rick, Ilsa, and Victor are being as courageous as they can be.

Buy On Tyranny. Read it more than once. Share it with your friends and family. These are perilous times. Fascists walk in our midst wearing red caps. Be as courageous as you can. And while you’re at it, hum a few bars of “La Marseillaise.”

 

THE TWENTY LESSONS

  • Do not obey in advance.
  • Defend institutions.
  • Beware the one-party state.
  • Take responsibility for the face of the world.
  • Remember professional ethics.
  • Be wary of paramilitaries.
  • Be reflective if you must be armed.
  • Stand out.
  • Be kind to our language.
  • Believe in truth.
  • Investigate.
  • Make eye contact and small talk.
  • Practice corporeal politics.
  • Establish a private life.
  • Contribute to good causes.
  • Learn from peers in other countries.
  • Listen for dangerous words.
  • Be calm when the unthinkable arrives.
  • Be a patriot.
  • Be as courageous as you can.

Review of: The Road to Unfreedom: Russia, Europe, America, by Timothy Snyder

“Trump’s false or misleading claims total 30,573 over 4 years,” The Washington Post, January 24, 2021. https://www.washingtonpost.com/politics/2021/01/24/trumps-false-or-misleading-claims-total-30573-over-four-years/

Review of: At the Heart of the White Rose: Letters and Diaries of Hans and Sophie Scholl, edited by Inge Jens

 


Review of: The Saddest Words: William Faulkner’s Civil War, by Michael Gorra

For every Southern boy fourteen years old, not once but whenever he wants it, there is the instant when it’s still not yet two o’clock on that July afternoon in 1863, the brigades are in position behind the rail fence, the guns are laid and ready in the woods and the furled flags are already loosened to break out and Pickett himself with his long oiled ringlets and his hat in one hand probably and his sword in the other looking up the hill waiting for Longstreet to give the word and it’s all in the balance …

No, that is not a snippet plucked from a Shelby Foote anecdote delivered in mellifluous voice and signature cadence on the Ken Burns docuseries The Civil War, but a passage from William Faulkner’s 1948 novel, Intruder in the Dust. Foote, a writer and raconteur who masqueraded as a historian, is celebrated as much in some circles for his three-volume narrative history of the Civil War as he is by a much wider audience for his extensive on-camera commentary on the docuseries, which articulates the southern perspective in thinly disguised “Lost Cause” soundbites that deftly excise slavery from any conversation about the war. Faulkner was no historian either, nor did he pretend to be, but he certainly understood that slavery was the central cause of the war as well as of its tragic aftermath for the denizens of the south, black and white alike. Even if he had difficulty saying so out loud, we hear it loud and clear through the carefully crafted characters in the drama and poetry that decorate the prose of his magnificent fiction. Slavery and its Jim Crow offspring poisoned the south, and the toxin was no less potent in Faulkner’s day than it was on that July afternoon in 1863.

Faulkner & Foote

The excerpt above references the moment just prior to the doomed Confederate assault known as Pickett’s Charge on the third and final day of the Battle of Gettysburg, seen by many then and now as the turning point of the Civil War, although careful students of the conflict would tell you that another far more consequential Union victory, the fall of Vicksburg, which cut the Confederacy in half, took place just one day later, more than a thousand miles distant in Faulkner’s native Mississippi. Faulkner aficionados glean that too, not least because its significance is subtly underlined in the short story “Ambuscade” (1934) that later serves as the opening installment of The Unvanquished (1938), when Colonel Sartoris’ young son Bayard and his enslaved companion Ringo eavesdrop on the colonel’s revelation that Vicksburg is gone just as the family’s silver, packed in a trunk, is shuttled out to be buried in the orchard.

William Faulkner

But the point here, for the purposes of Faulkner’s fiction—as well as the real-life tragedy of the south that still prevails today, well beyond the sesquicentennial of the Civil War—is that the “what-if” of the war’s outcome persistently echoes across far too much of the southern landscape in 2024: if not as loudly as it did in 1865 or in 1962, the year of Faulkner’s death, it yet remains all too perceptible socio-economically and politically. Nothing ever spoke to that phenomenon better than a more famous Faulkner quotation, found in another novel, Requiem for a Nun (1951): “The past is never dead. It’s not even past.” And possibly nothing proves its endurance better than the fact that this election year has seen at once bans against teaching about slavery and race in some southern states and, more remarkably, pro-secession candidates vying for office in Texas, perhaps grown men still fantasizing about that instant when it’s still not yet two o’clock on that July afternoon in 1863.

The past and present together underscore the relevance of The Saddest Words: William Faulkner’s Civil War, a brilliant and extremely well-written blend of history, biography, literary criticism, and travel writing by Michael Gorra, professor of literature at Smith College. After years of reading, studying, and teaching Faulkner, Gorra decided to take it to the next level, and he set out to visit the various geographies where Faulkner walked the earth, battlefields where southern blood was shed, and the likely environs of the fictional characters—Compsons, Sutpens, Snopes, and a host of memorable African Americans—that inhabited the author’s imaginary Yoknapatawpha County. Had he stopped there, the end result might have been another academic biography peppered with literary analysis. But instead, Gorra—who correctly identifies the Civil War and its repercussions as existential to Faulkner’s literary themes—assigned himself a rigorous self-study of the war and its wider implications. In the process, the author discovered what today’s historians have long recognized, that there was and remains more than one war: the actual war as it occurred, with all of its ramifications, and the way the war is remembered, especially in the south. There are multiple versions of the latter, both conflicting and overlapping, informed at once by truth and by imagination.

The most twisted and most stubborn of these is known as the “Myth of the Lost Cause,” which has the south waging a righteous if hopeless quest for liberty against a rapacious north intent on domination. Eventually, the heroic south is overwhelmed by sheer numbers of men and materiel, and goes down to an honorable defeat, only to fall victim to northern plunderers in the Reconstruction days that follow. Here it is solely a white man’s war, brother against brother, forged by incompatible forces arrayed in opposition: states’ rights vs. federalism, agriculture vs. industry, rural vs. urban, free trade vs. tariffs. In this version, slavery is almost beside the point, and blacks are essentially expunged from history. African Americans appear in cameo roles when they show up at all, as harmless servants in the south’s peculiar institution, which is presented as something benign, even benevolent, that would have simply faded away on its own had Lincoln not launched what is still known in some circles as the “War of Northern Aggression.” More recently, blacks make an awkward reappearance in some odd strands of the Lost Cause, now recast as legions of imaginary uniformed “Black Confederates” who eagerly stand guard with their masters to defend southern sovereignty. Otherwise, blacks disappear almost without a trace. Gone are the millions held in chattel slavery, the half million who self-emancipated by fleeing to the Union lines, the nearly two hundred thousand who fought for the Union in the United States Colored Troops (USCT)—and most thoroughly erased are the many, many thousands of camp slaves who accompanied Confederate armies throughout the war, including Lee’s infantry at Gettysburg, a fact likely unknown to that fourteen-year-old dreaming of southern victory.

The Lost Cause is a vile lie, but like all effective lies it is infused with elements of truth. Of course, slavery was not the only cause of the Civil War—just ninety-five percent of it! The key is to focus on the other five percent, and that effort was so successful that this fictitious story became America’s story. So successful that the United States became the only nation in the world to host hundreds of monuments to traitors and rebels across its landscape, many of which still preside over public squares today. So successful that it was integrated into the historiography that dominated American education for a century to follow. Ingredients of that distorted curriculum even touched me, growing up in New England in the 1960s, dramatically reinforced on our family’s console TV as the networks commonly replayed Gone with the Wind, that histrionic paean to the Lost Cause: an endless loop of the hapless enslaved Prissy incongruously shrieking “De Yankees is comin!” in terror rather than celebration. I was a Connecticut boy, raised in a state that sacrificed thousands of lives in the cause of Union, but I pretended to be a Confederate soldier when I played war, so deeply sympathetic was I to the southern cause. There was only one black child in my elementary school, so I did not find it odd that blacks made few appearances in my textbooks.

But I was a voracious reader, even as a young teen, and books shaped me. I read deeply in American history, fell in love with the Civil War era, and began to discover that what I had learned in school was not only superficial but woefully incomplete and conspicuously misleading. I also read a good deal of fiction in those days, across multiple genres. I think I was fifteen when I discovered Faulkner, and the first novel I read is one of his most challenging to follow or comprehend, The Sound and the Fury (1929). The first section consists of nearly sixty pages propelled solely by a vehicle manufactured from disjointed bits of the stream of consciousness of Benjy Compson, a severely intellectually disabled adult—tagged as an “idiot” in Faulkner’s day—who experiences time as directionless in an interior monologue that speeds along a twisting road of sharp turns from 1928 to 1912 to 1902 and swerves back again repeatedly, with no signs or guard rails to assist the reader, a marvelous journey motif in nonlinear time instead of distance. I found myself reading and re-reading paragraphs and pages, again and again, often lost but relishing the long, strange trip, a dictionary habitually at my elbow as I struggled against an onslaught of vocabulary both unfamiliar and intimidating. I loved every minute of it! And, in that early 1970s acid-infused era, Faulkner’s style here, verging on the phantasmagoric, seemed the perfect companion to the likes of Jefferson Airplane and Pink Floyd.

In my teens, I did not connect Faulkner to the Civil War, but literature and history were then, and today remain, my two great passions. I would read many more Faulkner novels and short stories in the years to come, and my fascination with the Civil War was a part of my motivation, decades later, to return to school to obtain a master’s degree in history. By then the connection between Faulknerian themes and the tortured legacy of the war was apparent.

But it was not until I read The Saddest Words that I came to understand how inextricable that link truly was. Faulkner (and Foote for that matter) grew up indoctrinated in a version of the Lost Cause more virulent than the one that touched my northern classroom, a memory of the war and Reconstruction so far removed from reality that it amounted to a greater fiction than any of Faulkner’s novels—a fairy tale mandatory to explain to later generations why the weird world they inhabited existed as it did, lest they be crushed by cognitive dissonance. But, as Gorra detects in his superb analysis, it is Faulkner’s characters who speak to truth, even if the living, breathing William Faulkner could not articulate those contradictions. The violence, the rape, the incest, the guilt, the despair that are part and parcel of the body of Faulkner’s works are a kind of subliminal confession that the author was well aware of the actual horror that disfigures southern life, the horror real life pretends away. His white protagonists voice this. His black characters—who speak in dialect now judged offensive—bear authentic witness in what is left unsaid.

In The Sound and the Fury and its cousin, the even more daunting Absalom, Absalom! (1936), Faulkner dwells upon “miscegenation,” a term anachronistic today that once served in the white south as an epithet for race-mixing. Gorra notes that the neologism itself only dates to 1863 (and I recall it still wielded as a cudgel in Dixiecrat rhetoric in the 1960s), although it certainly reflected a fear deeply rooted in the antebellum era. But what could never be uttered aloud in the south was that the kind of race-mixing deemed revolting was strictly limited to that which might occur consensually between a white woman and a black man. The reality was that the institution of slavery itself sponsored a vast mixing of the races, but that was primarily the product of white men of the planter aristocracy coupling with black girls and black women held as chattel property, and it was almost always nonconsensual.

They preached dread of “racial amalgamation” while essentially engineering it; the enslaved population on any given plantation frequently included the children of those who owned and worked them. There were contemporary observations at Monticello that among the enslaved were light-skinned blacks with red hair and freckles who bore more than a passing resemblance to Jefferson. Gorra cites the familiar observation from southern diarist Mary Chesnut: “The mulattos one sees in every family … resemble the white children. Any lady is ready to tell you who is the father of all the mulatto children in everybody’s household but her own. Those, she seems to think, drop from the clouds.” Of course, we now know that none of this is strictly hypothetical: genome-wide analysis reveals that the DNA of African Americans contains on average about twenty-five percent European ancestry. We can make an educated guess that there are few traces of consent in those numbers.

Blacks and whites were always together in those days, although in clearly defined roles. Gorra refers to an episode in The Sound and the Fury when Harvard-bound Quentin Compson has cause to reflect on race as he sits next to a black man on a bus in 1910, something common in the north then but taboo in Jim Crow Mississippi. But that was not always the case; segregation was invented in the north. At one time, free Boston blacks, subject to discrimination on rail travel, while hardly envying their enslaved brethren, marveled that southern railroads did not separate the races. (Massachusetts finally desegregated railcars in the 1840s.) It was not until the 1880s that segregation became characteristic of southern life, and that came about only through the failure of Reconstruction and the rise of “Redemption.” No longer enslaved, blacks were terrorized and murdered as former Confederate officers and officials returned to power and cowed the southern black population into second-class status, stripped of rights granted by the Fourteenth and Fifteenth Amendments, while the rest of the nation collectively averted its eyes. This is, by the way, not ancient history; I was seven years old when Congress passed the Civil Rights Act in 1964: “The past is never dead. It’s not even past.”

Of course, “separate but equal” always translated into separate and unequal, but southern whites and blacks could never really be separate—and that’s the rub! Faulkner saw that through the eyes of his white characters, who lived in terror of incest and race-mixing only to turn around and see a world inescapably outlined by their implications. (It is likely that the enslaved Sally Hemings, who bore Jefferson several children, was the half-sister of his late wife, Martha.)

It is said Foote and Faulkner met, and even developed a sort of friendship. Foote was a stubborn defender of southern culture. Deep down, many of Faulkner’s characters seem to hate the south. I can’t help but wonder if the two ever talked about that. The Sound and the Fury’s Quentin Compson was certainly consumed by it, by the purity of southern women, by the conundrum of race, by a devotion to honor, so much so that he discovers that he cannot leave the ill-fated south behind him even at Harvard, more than a thousand miles from Mississippi, and in his anguish he takes his own life. But first he conjures a memory of something his father once said to him:

every man is the arbiter of his own virtues but let no man prescribe for another mans well-being and i temporary and he was the saddest word of all there is nothing else in the world its not despair until time its not even time until it was

Michael Gorra

Gorra’s synthesis of Faulkner’s fiction, Civil War memory, and the echo of systemic racism that yet stains America is nothing short of superlative. That he achieves this while probing sometimes arcane avenues of literature, history, and historiography—while ever maintaining the reader’s interest—is especially impressive. If I were to find fault, it is only that towards the end of the volume the author seems to drift away from the connective tissue of his thesis and wander off into what is clearly his first love, a detailed literary analysis of Faulkner’s prose. But that is a quibble. And truth be told, I now feel inspired to turn to my own shelves and once more dig deeply into my Faulkner collection. In this arena, I must confess that Gorra has truly humbled me: I have read The Sound and the Fury no less than three times, but his commentary makes clear that I still did not entirely understand what Faulkner was trying to say. I suppose I must go back and get to know Benjy again, one more time!

 

 


Review of: The Russo-Ukrainian War: The Return of History, by Serhii Plokhy

On February 24, 2022, Russian tanks rolled into Ukraine, an act of unprovoked aggression not seen in Europe since World War II that summoned up ominous historical parallels. Memories of Munich resurfaced, as well as the price paid for inaction. The West heard terrifying and unmistakable echoes in the rumble of armored vehicles and boots on the ground, and this time responded rapidly and unhesitatingly, both to condemn Russia and to stand steadfastly with Ukraine. Post-Trump—the former president seemed to have a kind of boyhood crush on Russia’s strongman Vladimir Putin—the United States, led now by the Biden Administration, acted decisively to partner in near-unanimity with the European Union and a newly re-emboldened NATO to provide political, economic, and especially military aid to beleaguered Ukrainians.

The world watched in horror as Russian missiles took aim at civilian targets. But there was also widespread admiration for Ukraine’s President Volodymyr Zelenskyy, who defied offers to assist his flight to a safe haven abroad by reportedly declaring that: “The fight is here: I need ammunition, not a ride.” But while most Ukrainians were indeed grateful for the outpouring of critical support from abroad, there was also background noise fraught with frustration: Russia had actually been making war on Ukraine since 2014, even if much of the planet never seemed to notice it.

Since, at least until very recently, most Americans could not easily locate Ukraine on a map, it is perhaps less than surprising that few were aware of the active Russian belligerency in Ukraine for the eight years prior to the full-scale invasion that made cable news headlines. Many still do not know what the current war is really about. That vast sea of the uninformed is the best audience for The Russo-Ukrainian War: The Return of History [2023] by award-winning Harvard professor and historian Serhii Plokhy.

Map courtesy Nations Online Project

The conflict in Ukraine has spawned two competing narratives, and although only one is fact-based, the other—advanced by Putin and his neofascist allies in Europe and the United States—has gained dangerous currency of late. In the fantasy “world according to Putin,” Ukraine is styled as a “near abroad” component integral to Russia, with a shared heritage and culture that makes it inseparable from the Russian state. At the same time, Ukraine has brought invasion upon itself by seeking to ally itself with Russia’s enemies. And, somehow concomitantly, Ukraine is also a rogue state run by Nazis—never mind that Zelenskyy himself is of Jewish heritage—that obligates Moscow’s intervention in order to protect the Ukrainian and Russian populations under threat. That none of this is true, and that much of it is neither logical nor even rational, makes no difference. Putin and his puppets simply keep repeating it, because, as we have known since Goebbels’ time, a lie repeated often enough becomes the truth.

The truth is more complicated, and so of course far more difficult to rebut. It is always challenging for nuance to compete with talking points, especially when the latter are reinforced in well-orchestrated efforts peddled by a sophisticated state-run propaganda machine with international reach. Ukraine and Russia, as well as Belarus, do indeed share a cultural heritage that can be traced back to the ninth-century Kyivan Rus’ state, but then a similar claim can be made about France and Germany and their roots in the Carolingian Empire a bit farther to the west—with the same lack of relevance to their respective rights to sovereignty in the modern day. And Russian origins actually belong to fourteenth-century Muscovy, not Kyiv. In its long history, Ukraine has been incorporated into Tsarist Russia and its successor state, the Soviet Union, but its vast parcels were also at various times controlled by Mongols, by the Polish–Lithuanian Commonwealth, by Austria, and even by a Turkish khanate. Yet Ukraine always stubbornly clung to its distinct identity, even when—like Poland under partition—it was not a sovereign nation, and even as the struggle to achieve statehood ever persisted. That is quite a story in itself, and no one tells it better than Plokhy himself in his erudite text, The Gates of Europe: A History of Ukraine [2015, rev. 2021], a dense, well-researched deep dive into the past that at once fully establishes Ukraine’s right to exist while expertly placing it into the context of Europe’s past and present. Alas, it leans to the academic in tone and thus poses a challenge to a more general audience.

Fortunately, The Russo-Ukrainian War is far more readable and accessible, without sacrificing the impressive scholarship that marks the foundation of all Plokhy’s work. And thankfully the course of Ukraine’s recent past—the focus here—is far less convoluted than in prior centuries. While, contrary to Putin’s claim, Ukraine is not an inextricable element of the Russian state, their modern histories have certainly been intertwined. But that changed in the post-Soviet era, and the author traces the paths of each in the decades since Ukraine’s independence and Russia’s drift under Putin’s rule from a fledgling democracy to neofascist authoritarianism.

Ukraine became a sovereign state in 1991 upon the dissolution of the USSR, along with a number of former Soviet republics in Eastern Europe, the Caucasus, and Central Asia. Overnight, Ukraine became the second largest European nation (after Russia) and found itself hosting the world’s third largest nuclear arsenal on its territory. As part of an agreement dubbed the “Trilateral Statement,” Ukraine transferred its nuclear weapons to Russia for destruction in exchange for security assurances from Russia, Britain, and the United States. This crucial moment is too often overlooked in debates over aid to Ukraine. Not only has Russia plainly violated this agreement that the United States remains obligated to uphold, but there surely could have been no Russian invasion had Ukraine hung on to those nukes.

Ukraine suffered mightily in its decades as a Soviet republic—most notably during Stalin’s infamous man-made famine known as the “Holodomor” (1932-33), which killed millions of Ukrainians—but 1991 and its aftermath saw a peaceful divorce, with both nations going their separate ways. Each suffered from economic dislocation, corruption, and political instability at this new dawn, but despite shortcomings throughout this transition, Ukrainians looked to the West, saw greater integration with Europe as central to their future, and embraced democracy, if sometimes imperfectly.

Meanwhile, Russia stumbled. Some of this can be laid to missed opportunities by the West for more significant economic aid and firmer support for emerging democratic institutions when Russia needed it most, but much of it was organic, as well. Vladimir Putin, then a little-known former KGB operative, stepped into a leadership role. With slow, calculated, and somewhat astonishing proficiency, he gradually dismantled democracy while generally preserving its outward forms, cementing his control in an increasingly authoritarian state—one which most recently seems to be barreling towards a kind of Stalinist totalitarianism. Along the way, Putin crafted an ideological framework for his vision of a new Russia, born again as a “great power,” by borrowing heavily from 1930s-era fascism, resurrected and transformed for the millennium.

Interestingly, while I was reading The Russo-Ukrainian War, I also read The Road to Unfreedom [2018], Timothy Snyder’s brilliant study of how neofascism has gripped the West and Putin’s pivotal role in its course: interfering in US elections, sponsoring Trump’s candidacy, seeking to destabilize NATO, encouraging Brexit in the UK—and an aggressive revanchist effort to annex Ukraine to an emergent twenty-first century Russian Empire. Snyder both confirms the general outline of Plokhy’s narrative and zooms out to put a wider lens on the dangerous implications of these cleverly choreographed, diabolical maneuvers that go well beyond the borders of Ukraine to threaten the very future of Western democracy. As such, Putin may imagine himself as a kind of latter-day Peter the Great, and sometimes act as Stalin, but the historical figure he most closely imitates is Adolf Hitler.

Like Hitler, Putin first sought to achieve his objectives without war. For Ukraine, that meant bribery, disinformation, election interference, and other tactics. And Putin nearly succeeded with former president Viktor Yanukovych—who attempted to effect a sharp turn away from the West while placing Ukraine firmly into Russia’s orbit—until he was toppled from power and fled to Moscow in 2014. A furious Putin replayed Hitler’s moves in the Sudetenland and in the Austrian Anschluss: puppet separatists agitated for independence and launched civil war in Ukraine’s east, and Crimea was annexed by Russia following a mock referendum. The war in Ukraine had begun.

The Obama Administration, in concert with the West, responded with economic sanctions that proved tepid at best, and then went on with business as usual. Ukrainians fought courageously in the east to defend what remained of their territory against Russian aggression. Meanwhile, Donald Trump moved into the Oval Office, voicing overt hostility towards NATO while projecting a startling brand of camaraderie with Vladimir Putin. Snyder wryly observes in The Road to Unfreedom that the last advisor to the last pro-Russian president of Ukraine, Viktor Yanukovych, was none other than Paul Manafort, who then became the campaign manager to candidate Donald Trump. You can’t make this stuff up.

If Snyder sometimes leans to the polemic, Plokhy strictly sticks to history, even if the two authors’ perspectives essentially run parallel. The Russo-Ukrainian War is most of all a well-written, competent history of those two nations and of their collisions on and off the battlefield that spawned a full-scale war—one that did not need to occur except to further Putin’s neofascist nationalist ambitions. If I can find fault, it is only that in his sympathy for the Ukrainian cause, Plokhy is sometimes too forgiving of its key players. In the current conflict, Ukraine is most certainly in the right, but that is not to say that it can do no wrong. Still, especially as I can locate much of the same material in Snyder’s work, I cannot point to any inaccuracies. The author knows his subject, demonstrates rigorous research, and can cite his sources, which means there are plenty of notes for those who want to delve deeper. I should add that this edition also boasts great maps that are quite helpful for those less familiar with the geography. Plokhy is an accomplished scholar, but an advanced degree is not necessary to comprehend the contents. Anyone can come to this book and walk away with a wealth of knowledge that will cut through the smokescreen of propaganda broadcast not only on Russian TV, but in certain corners of the American media.

This review goes to press on the heels of Putin’s almost-certain assassination of his most prominent political opponent, Alexei Navalny, in an Arctic gulag where he had been confined under harsh conditions for championing democracy and standing against the war in Ukraine. It also arrives just days before the second anniversary of the Russian invasion, as Ukrainian forces abandon the city of Avdiivka and struggle to hold on elsewhere while American aid withers under pressure from Trump’s MAGA allies in the House of Representatives, who went on recess in a deliberate tactic to sidestep a vote on aid to Ukraine already approved by the Senate. Trump himself, the likely Republican nominee for president this year, recently underscored his longstanding enmity towards NATO by publicly declaring—in a “Bizarro World” inverse of the mutual defense guaranteed in Article 5—that he would invite Russia to attack any member nation behind on its dues, a chilling glimpse of what another Trump term in the White House would mean for the security of both Europe and America. Trump once again lives up to his alarming caricature in Timothy Snyder’s The Road to Unfreedom: the fictional character “Donald Trump, successful businessman” that was manufactured by Putin and then marketed to the American public. And just a week before Navalny’s murder, former FOX News host Tucker Carlson conducted a softball “interview” with Putin that gifted him a platform to assert Russia’s right to Ukraine and even cast blame on Poland for Hitler’s invasion in 1939. We have truly come full circle, and it is indeed the return of history.

These are grim moments for Ukraine. But also for America, for the West, for the free world. With all the propaganda, the misinformation, the often fake news hysteria of social media, the average American voter may not know what to believe about Ukraine. For a dose of reality, I would urge them to read The Russo-Ukrainian War. And, given the stakes this November—not only for Ukraine’s sovereignty but for the very survival of American democracy—I would advise them to take great care when casting their ballot, because a vote for Putin’s candidate is a vote for Putin, and perhaps the end of the West as we know it.

 

Link to my review of:  The Gates of Europe: A History of Ukraine, by Serhii Plokhy

Link to my review of:  The Road to Unfreedom: Russia, Europe, America, by Timothy Snyder

 

 

 

Featured

Review of: Ota Benga: The Pygmy in the Zoo, by Phillips Verner Bradford & Harvey Blume

In 1904, the notorious Apache warrior Geronimo, now in his mid-seventies, was a federal prisoner of war on loan to the St. Louis World’s Fair, which belongs to our nation’s uncomfortable collective memory for its numerous ethnographic exhibits of so-called “primitive” humans, which included, in addition to Native Americans, the Tlingit, indigenous to Alaska, and the Igorot, an aboriginal population from the Philippines who were billed as “headhunters,” as well as Congolese pygmies. Geronimo developed a rapport with one of the latter, an amiable nineteen-year-old Mbuti tribesman named Ota Benga, imaginatively advertised as a “cannibal,” who stood four feet eleven inches tall and whose smile showcased teeth ceremonially sharpened to fine points. The old medicine man presented him with an arrowhead as a gift; they were, of course, all in this kind of zoo together.

But only two years later that very metaphor materialized for Ota, who after a brief stay at New York’s American Museum of Natural History found himself on display in the Monkey House at the Bronx Zoo, where he hung his hammock, wore a loincloth and carried a bow and arrow, and wandered the zoo grounds accompanied by an orangutan he had grown attached to—a captivating if unpaid attraction for amused onlookers. Just after the turn of the century, fresh from the imperialist adventure that was the Spanish-American War, which had compelled Filipinos to trade one colonial power for another, “civilized” Americans delighted in the spectacle of gawking at “savages” in various contrived natural habitats—especially, it turned out, in, of all places, New York City!

The hapless Ota’s surprising story, from his birth in central Africa through his unlikely travels across the United States, is the subject of Ota Benga: The Pygmy in the Zoo [1992], an entertaining if occasionally uneven account by dual authors Phillips Verner Bradford and Harvey Blume. It is also, in fact, a dual biography, as Ota shares much space in the narrative with Samuel Phillips Verner—the grandfather of one of the authors—an eccentric missionary who visited the Congo on a “specimen-gathering mission” for the Fair, and “collected” Ota Benga as one of those “specimens.” There are grander themes to parse as well, which these authors may not have been fully up to. These run the gamut from the oppression that reigned in the Jim Crow south to the cruelty that characterized the Congo, and—especially—to this particular moment in time, when an America now equipped with automobiles and electricity and even manned flight could yet shamelessly put human beings on display, at once to juxtapose with and to champion their alleged superiors shouldering their “white man’s burden.”

Bradford, an engineer who was inspired to write a biography of his colorful grandfather, recognized that Ota Benga was the hook that would attract readers, and set out to do the research. Blume was brought in to polish the manuscript. Neither was a trained historian, which perhaps makes the finished product more readable, if less reliable; more on that later.

This storied grandfather, the aforementioned Samuel Phillips Verner, was born in post-Civil War South Carolina to a former slaveholding family and grew up furnished with the deep-seated racism typical of his class and his time. Verner emerges here as an intense academic prodigy who lingers upon troubling moral quandaries of right and wrong, while suffering from alternating episodes of mental illness—he once insisted he was the Hapsburg Emperor—and religious fervor. Throughout, he takes comfort in Daniel Defoe’s novel Robinson Crusoe, as well as the real-life adventures of Henry Morton Stanley and David Livingstone in distant, exotic Africa. The sum total of all this was to coalesce in Verner’s calling as a missionary to what was then commonly referred to as the “Dark Continent.” It is in Africa that he demonstrates his intelligence, his charm, his many capabilities, and his propensity for both earning enemies and cementing friendships. He also wrestles with the inherent prejudices he carries from the deep south, which come to be challenged by the realities of the human experience. And, as in his boyhood, there are disturbing moral dilemmas to resolve. But what becomes increasingly clear as the pages are turned is that Verner is first and foremost a narcissist, and resolutions for any paradox of morality are always obtained by what suits Verner’s own circumstances most comfortably and most conveniently.

By his own account, Verner’s time in the Congo consisted of remarkable exploits that saw him establish rapport with various native peoples, including pygmies, as well as form an unlikely kind of alliance with a dangerous, otherwise unapproachable tribal king, and a near-fatal episode when he impaled his leg on a poisoned stake set for an animal trap. Along the way, he distinguishes himself by his courage, quick-thinking, and ingenuity—like a character out of Defoe, perhaps. Did it all really happen? Bradford reports Verner’s saga as history, although it is based almost entirely on his grandfather’s own recollections. As such, the reader cannot help but question the reliability of a fellow who once believed himself to be the Hapsburg Emperor!

African pygmies, much like the Khoisan peoples, have an ancient indigenous lineage that is genetically divergent from all other human populations. They may or may not be descendants of paleolithic hunter-gatherers of the central African rainforest. In Ota Benga’s time, the Mbuti, nomadic hunters, ranged within the artificially drawn borders of the Congo Free State, a vast territory that was for a time the personal fiefdom of Belgium’s King Leopold II, a land infamous for the widespread atrocities committed by Leopold’s private army, the dreaded Force Publique, which enforced strict rubber collection quotas through extreme methods of murder and mutilation. A human hand had to be turned in for every bullet issued, to prove these were not wasted, so baskets of hands—including children’s hands—became symbolic of Leopold’s “Free State,” a realm of horrors that inspired Joseph Conrad’s Heart of Darkness. For those who have read Adam Hochschild’s magnificent work, King Leopold’s Ghost, there is nothing new here, but two of its protagonists, black missionary William Sheppard and Irish activist Roger Casement, who campaigned against Leopold’s reign of terror, turn up in this book, as well. Verner, it seems, was surprisingly unmoved by the carnage about him.

Verner contracted malaria. That illness, his leg injury, and the overall dissatisfaction of mission officials with his performance conspired to send him back to America, where he became famous for his reported feats and, based upon his background, won the assignment of procuring pygmies for The Louisiana Purchase Exposition (also known as the St. Louis World’s Fair). So, eight years later he returned on expedition, with the blessings of King Leopold himself, and in an accidental encounter with the Baschilele tribe stumbled upon Ota Benga at a slave market. Apparently Ota, away from his camp on a solo elephant hunt, as pygmies were wont to do, had returned to find piles of corpses, including his wife and children—victims of the Force Publique. He and other survivors were sold into slavery. Verner could not believe his good fortune: he purchased Ota for “a pound of salt and a bolt of cloth.” [p103] He later recruited other more willing volunteers, and set sail for home. The Thirteenth Amendment forbidding chattel slavery was ratified nearly four decades prior, but that proved not to be a barrier to Verner’s transport of Ota to the United States.

That is just the beginning of this fascinating story! There is much more to come, which makes this book, although flawed on some levels, well worth the read. Those who have studied the American Civil War and the antebellum south are familiar with the nuanced relationships that can develop between the enslaved and those who hold them as property. A bond developed between Verner and Ota that was even more complicated than that. Verner may have purchased Ota and dutifully turned him over to the World’s Fair, but he later freely returned him to Africa. Yet, after a time, Ota, widowed once more after losing a second wife to snakebite, found himself with little to hold him there and a taste for the excitement he had found in America. Thus, he made an enthusiastic return to the US with Verner. But things were not destined to go well for either of them.

Verner had visions of grandeur that did not translate into either the wealth or recognition he sought. He seemed to genuinely care about Ota’s welfare, but that fell to neglect as his own fortunes dwindled, and Ota wound up in that degrading display at the Bronx Zoo. He was not there very long. His rescue came from unlikely quarters: African American clergymen, chafing at their own second-class status, were rightly appalled at the humiliating spectacle of Ota at the zoo, which they likewise perceived as advancing Darwinism, an abomination for their Christian faith. Ota went first to an orphanage in the Bronx, and later to Lynchburg, Virginia, where a kindly patron arranged to have his sharpened teeth capped, fitted him out in suitable clothing, sent him to school, and found him work at a tobacco factory, where he was known as Otto Bingo. But Ota, who in his heyday with Verner had been a celebrity of sorts on travels that had once even taken him to Mardi Gras in New Orleans, found himself lonely and alienated. One day in 1916, he pried the caps off his teeth and shot himself. He was about thirty-three years old.

In the end, I longed for more information about Ota and less about Verner. This volume, while enhanced by both wonderful photographs and a thick appendix of press clippings from the day, is conspicuously devoid of endnotes—which would be useful for the reader anxious to separate fact from fiction in Verner’s likely embellishments. Still, despite its limitations, I enjoyed this book and would recommend it.

My Review of: King Leopold’s Ghost: A Story of Greed, Terrorism and Heroism in Colonial Africa, by Adam Hochschild

Ota Benga is referenced, with much relevance, in Angela Saini’s fine work, reviewed here:  Superior: The Return of Race Science

Featured

Review of: The Children of Athena: Greek Intellectuals in the Age of Rome: 150 BC-400 AD, by Charles Freeman

In 399 BCE, Socrates was condemned to death, a tragic punctuation mark to the celebrated fifth century that had Athens and Sparta and the multitude of other poleis witness first the repulse of the mighty Persian Empire, then the flourish that was the Age of Pericles, and finally the carnage of the Peloponnesian War that for nearly three decades battered Greek civilization and culminated in Athenian defeat. In that same era, hardly anyone had heard of Rome, humiliated shortly thereafter when sacked by Gauls in 390 BCE. A mere century and a half later, the Greeks were themselves subjects of a Rome that had become master of the Mediterranean. But in victory or defeat, sovereign or not, the pulse never failed to beat in the poleis—and beyond. The life of Socrates, likely embellished, was told most famously by Plato, who founded his Academy in Athens in 387 BCE. Plato’s pupil Aristotle later established his own school, the Lyceum, and served as tutor to Alexander the Great, who in his vast conquests spread Hellenism across the east. By the time that Egypt, the last parcel of territory once claimed by Alexander, fell to Roman rule in 31 BCE, Greek thought prevailed more than a thousand miles from Attica and the Peloponnesus, and it was to dominate Roman intellectual life for centuries to come. As the Roman poet Horace once observed: “Captive Greece took captive her savage conqueror and brought the arts to rustic Latium.”

That story receives superlative treatment in The Children of Athena: Greek Intellectuals in the Age of Rome: 150 BC-400 AD [2023], a fascinating and engaging work that is the latest to spring from the extremely talented pen of acclaimed classicist Charles Freeman. In a departure from the thick tomes and deep dives into intellectual history that have made his reputation, such as The Closing of the Western Mind1 [2003], and its sequel of sorts, The Reopening of the Western Mind2 [2020], this delightful survey sacrifices none of the scholarship Freeman is known for while expanding his appeal to both an academic and a popular audience. Even better, the volume is structured such that it can just as suitably be approached as a random perusal of out-of-sequence episodes as a cover-to-cover read.

Books of history often have a slow build, but not this one. The reader is instantly hooked by the “Prologue,” which features an adaptation of The Banquet, a hilarious satirical work by Lucian of Samosata, a second-century CE Hellenized Syrian who wrote in Greek, in which representatives of virtually every school of philosophy attend a wedding feast that degenerates from debate and dispute to debauchery—and even a full-scale brawl! Attendees include Stoics, Platonists, Peripatetics, Epicureans, Cynics, and various hangers-on. The point, of course, for the purpose of Freeman’s work, is both the considerable diversity manifested in Greek thought and how prevalent that thought proved to be in the immensity of an empire that stretched from Mauretania to Armenia.

To animate this compelling cultural history, Freeman has chosen a select group of representative figures. Those grounded in the classics will recognize most if perhaps not all of them, which only serves to underscore the sheer number of Greeks who took leading roles in Roman life over the many hundreds of years from Greece’s fall to Roman conquest in the second century BCE to the fall of Rome in the west in the fifth century CE. There are philosophers, of course, such as Epictetus and Plotinus, but there is also the historian Polybius, the biographer Plutarch, the geographer Strabo, the traveler Pausanias, the astronomer Ptolemy, the surgeon Galen, and a dozen others. The chapter for each consists of a biographical sketch with an exploration of the figure’s significance, as well as the imprint their legacies left upon later Western Civilization. Included too are a number of interludes that explore wider themes to better place these individuals in the context of their times.

Rome’s was a martial society not known for organic cultural achievements, at least not until much later in the course of its history. Greek art and epic, already deeply influential on the Etruscans whom Rome supplanted in their geography, came to fill that vacuum. The syncretism that gradually integrated Greek mythology into equivalent Roman gods and goddesses, with appropriate name changes, similarly saw Greek culture increasingly borrowed and incorporated over time, even as this process met with sometimes fierce resistance from conservative Roman elites. Philosophy proved especially unwelcome at first, as perhaps best highlighted in Plutarch’s report of an Athenian delegation to Rome in 155 BCE, when a certain Carneades, a philosopher associated with antidogmatic skepticism, argued convincingly before an audience in favor of one proposition the first day, only to return the next and masterfully rebut his own position—to the horror of Cato the Elder! But such attitudes were not to prevail: Greek philosophy came to dominate Roman intellectual life, even as Christianity gained traction (and some of Freeman’s Greeks are in fact Christian), until it was repressed by the Church in the last decades prior to the fall of Rome. That dominance was especially facilitated by the Pax Romana that characterized the first two centuries of empire, a period of relative peace and stability that allowed ideas, including spirited philosophical debate, to spread freely across long distances. The Emperor Marcus Aurelius (reigned 161 to 180 CE) was himself a Stoic philosopher!

Freeman’s book demonstrates the vitality of Greek thought in Roman life not merely through the various schools of philosophy, but even more importantly in the realms of science, medicine, and scholarship. Long ago, in my own studies of ancient Greece, I read both Polybius (c.200–c.118 BCE) and Plutarch (46-119 CE) while carelessly overlooking the implication that these were Greeks who lived and wrote under Rome. Plutarch himself even became a Roman citizen. It is a telling reminder that Greeks remained a critical influence upon Western Civilization—long after their city-states ceased to be anything other than place names on Roman maps.

I once ran across a claim of Christian triumphalism in the literature arguing that the rise of Christianity was enabled by a paganism that had so run its course that it had doomed itself to obsolescence, leaving a gaping spiritual hole that begged for a new, more fulfilling religious experience for the masses. It’s a nice fairy tale for the faithful, but it lacks support in the scholarship. Even as the “catastrophic” notion of the demise of polytheism (associated with Gibbon) has given way to the more realistic “long and slow” view among historians, it is often surprising to discover how vibrant paganism remained, well into late antiquity. And the best evidence for that is the flourishing of Greek philosophy, and the paganism associated with it, in the Roman world—both of which finally fell victim to the totalitarianism of the early Christian Church, which at first discouraged and later prohibited anything that strayed from established doctrine. With that in mind, The Children of Athena serves as a kind of prequel to Freeman’s magnificent The Closing of the Western Mind, which chronicled the course of events that came to crush independent inquiry in the Western world for a millennium to follow.

There is possibly no more chilling metaphor for this than in one of the final chapters of The Children of Athena that is given to Hypatia (c.350-415 CE), a Neoplatonist philosopher and mathematician who lived in Alexandria, Egypt in the twilight of the empire. Hypatia, the rare female of her times who was a philosophical and scientific thinker, fell afoul of a local bishop and was murdered by a Christian mob that stripped her naked and scraped her to death with shards of roof tiles. And so the Western mind indeed did close.

For the record, I have come to know Charles Freeman over the years, and we correspond via email from time to time. I read portions of drafts of The Children of Athena as it was coming together, and offered my ideas, for whatever those might be worth, to help polish the narrative. As such, I was honored to see my name appear in the book’s “Acknowledgements.” But I am not a paid reviewer, and I would never praise a title that did not warrant it, regardless of my connection to the author. I genuinely enjoyed it, and would highly recommend it.

This is, in fact, one of those works that is difficult to fault, despite my glaring critical eye. Freeman’s depth in the field is on display and impressive, as is his ability to articulate a wide range of sometimes arcane concepts in a comprehensible fashion. I suppose if I were to find a flaw, it would be the lack of much-needed back matter. Readers may bemoan the absence of a “cast of characters” to catalog the names of the major and minor individuals who appear in the text, a key to philosophical schools and unfamiliar terminology, as well as maps of ancient cities and towns. Still, that is a minor quibble best taken up with the publisher rather than the author, and it hardly diminishes the overall achievement of this book, which does include copious notes and a fine concluding chapter that, for my part, left me motivated to go back to my own shelves and read more about the men and women who people The Children of Athena.

 

1 A link to my review of: The Closing of the Western Mind: The Rise of Faith and the Fall of Reason, by Charles Freeman

2 A link to my review of: The Reopening of the Western Mind: The Resurgence of Intellectual Life from the End of Antiquity to the Dawn of the Enlightenment, by Charles Freeman

 

Featured

Review of: The Road to Unfreedom: Russia, Europe, America, by Timothy Snyder

When Hillary Clinton compared Russian President Vladimir Putin to Adolf Hitler after Russia occupied Ukraine’s Crimean Peninsula in early March 2014, I thought she had gone too far—and so did many of her critics. But Clinton’s analogy turned out to be far more prescient than hyperbolic. When, just weeks later, Putin boldly annexed Crimea following a mock referendum, and then sponsored puppet separatists to launch civil war in the Donbas in Ukraine’s east, she proved that her memory of European history—of Hitler’s annexation of the Sudetenland and the Austrian Anschluss—had correctly detected echoes in Putin’s recourse to fake ballots and real bullets.

Like most Americans, I believed at the time that the economic sanctions the Obama Administration imposed upon Russia represented a sound, measured policy that sidestepped unnecessary overreaction, rather than what was in retrospect clearly a tepid, ineffective response, especially given that shortly thereafter Russian proxies shot down a commercial airliner over Ukraine, killing nearly three hundred innocents. The moment was hardly as blatant as Munich in 1938, but the lack of meaningful repercussions certainly emboldened Putin on the path to the full-scale invasion of Ukraine that came in 2022—an act of unprovoked aggression not seen in Europe since World War II. It has hardly gone as planned, of course, but then it is not over yet, either.

Along the way, some have suggested that Putin’s fantasies of himself as a kind of latter-day Peter the Great have instead degenerated into a Putin-as-Stalin motif, but that strikes me as somewhat inelegant. Rather, while Clinton was pilloried for flashing that “Hitler card” back when, she was indeed on to something. There is far more than mimicry in the Russian president’s seizure of a neighbor’s territory and denial of its very sovereignty, most significantly in the pretext and justification for his acts. Because when you deconstruct Putin, you find him not only glancing backward over his shoulder at der Führer, but working with quiet determination over the last two decades to reinvent that brand of fascism for the twenty-first century.

That Putin turns out to be the driving force behind the neofascism that has had the Western world increasingly in its sway since the turn of the millennium is just one of the insights to emerge in The Road to Unfreedom: Russia, Europe, America [2018], the at-turns brilliant and chilling work by Yale professor and acclaimed historian Timothy Snyder, which traces the roots and more recent rise of the forces of the undemocratic right, along with their sometimes Byzantine web of connections that stretches to Viktor Orban’s Hungary, Britain’s Brexit, and Trump’s MAGA—and all too many strands lead back to the Kremlin, some crisscrossing Ukraine on the way. In a narrative that is engaging and well-written, if often somewhat complex, Snyder channels the philosophical, psychological, and ideological to reveal the dangerous resurrection of principles that were fundamental to 1930s fascism, retooled and even transmogrified to suit new generations, new audiences. What’s especially striking about Snyder’s analysis is that this book, published in 2018, anticipates so much of what was to follow.

Fascism, born in Mussolini’s Italy almost a century ago, takes on many forms that have been catalogued by a number of scholars and writers, including—famously—Umberto Eco. Laurence W. Britt[1] compiled perhaps the most comprehensive list of its known characteristics, although the specific expression can vary widely. Central to all is ultranationalism, typically coupled with a yearning for a mythical, bygone era of greatness that has been lost to liberal decadence. Mussolini looked to the glory of ancient Rome; Hitler to the more recent past of Imperial Germany. Contemporary neofascism is no different. Putin mourns the collapse of the Soviet Union and its larger sphere of influence that encompassed Eastern Europe. In the United States, it simmers beneath the ultrapatriotic flag-waving surface of Trump’s “Make America Great Again” (MAGA) movement as a dog-whistle that fondly looks back on a “whiter” America when blacks were more complacent and “brown” immigrants were not threatening our borders. (Trump himself recently and unrepentantly paraphrased Hitler with talk of immigrants “poisoning the blood” of America.) Racism is always a part of the equation. Hitler’s hatred of the Jews stood out starkly, but antisemitism ever lurks. In the U.S., it is masked in the thinly veiled contempt spewed upon billionaire George Soros, who acts as a convenient placeholder for “liberal Jews.”

But targets of racism are not alone: they share space with a crowded field of “enemies” who threaten the harmony of the state and serve as scapegoats for society’s alleged ailments, including communists, foreigners, lawbreakers, intellectual elites, nonconformist artists, members of the media, organized labor, minorities, feminists, and homosexuals. There’s always a list of grievances and national ills, real and imagined, for which the latter can be held responsible—and serve as a unifying force that must be opposed by those who seek to restore the nation’s greatness. Religion is often an ally in combating sin. There is an obsession with national security expressed by rampant militarism that on the domestic front translates to a hyperbolic law-and-order fixation on crime and punishment. Individuals and institutions alike are demonized. Since fascists have no respect for human rights, opponents are dehumanized, transformed into “the other,” deserving of persecution for both their actions and their ideas. Violence and the threat of violence are ever present or looming. Among institutions, democracy itself is the foremost adversary, and an early casualty to authoritarianism. The fascist leader becomes the self-appointed savior: only he and his corrupt cronies can solve the disorders of the state, but only if he is granted the absolute authority to do so. Elections become a sham: if you lose, just declare victory anyway. Keep lying until the lie becomes the truth.

The Road to Unfreedom identifies the commonality of these elements in right-wing parties across Europe and in the United States. It turns out to be pretty shocking how closely these movements resemble one another—and how similar they are to the fascism once associated with Mussolini, Franco, and Hitler! But the real epiphany is not only the role Putin has played in inspiring and encouraging today’s brand of neofascism, but how frequently the contemporary manifestations originated with Putin himself. Snyder chronicles how Putin managed to dismantle democracy in Russia while maintaining its outward forms, and how that has served as a blueprint of sleight-of-hand authoritarianism for his imitators abroad. (Donald Trump is just one of them.) But, more critically still, he details how Putin resurrected and reinvented fascism for the new century by returning to the philosophy and ideology of fascists of the past while embracing and encouraging the neofascist thinkers of the present.

A large piece of The Road to Unfreedom is given to events in Ukraine, to Putin’s focused attempt to recover for Russia what for him is the central component of what he calls the “near abroad,” the now independent successor states once incorporated into the USSR. For those who have read Serhii Plokhy’s landmark chronicle, The Gates of Europe[2], or his more recent book, The Russo-Ukrainian War, there is nothing new here. But, significantly, Snyder deftly locates Putin’s brand of revanchism in the fascist-friendly political philosophy of the right that thrived before he was born, and which has been reshaped, with Putin’s patronage, for our own times. He identifies Ivan Ilyin (1883-1954), a White Russian émigré who admired both Mussolini and Hitler, as a major influence on Putin. Ilyin was a key proponent for the socio-political “Eurasianism” that Putin holds dear, an antidemocratic imperialism that claims for Russia a distinct civilization that transcends geography and ethnicity to command a vast territory ruled by the Russian state. Perhaps today’s most prominent Eurasianist is Aleksandr Dugin, said to be close to the Kremlin. The point is that political philosophy serves as underpinning to Putin’s opportunism. It is not simply about seizing territory. There is a long-term plan.

Snyder traces the roots of “the road to unfreedom” to the naivety of a West swelling with triumphalism in the wake of the fall of the Soviet Union, blinded by what he brands the “politics of inevitability”: individuals and ideas were seen as obsolete, supplanted instead by an unyielding optimism in the conviction that the marriage of capitalism and democracy guaranteed ineluctable progress to a favorable future. The opposite of the “politics of inevitability,” Snyder argues, is “the politics of eternity,” which offers instead a cyclical tale of victimhood inflicted upon the state by age-old threats and enemies that ever reappear and must be vanquished. In the politics of eternity, only one man—and misogyny dictates that it must be a man—can save the nation, and because at root it is decidedly antidemocratic there can be no thought of succession. The “dear leader” is the only hope. The politics of eternity governs Putin’s Russia. It also, most certainly, governs Donald Trump’s MAGA vision for the United States.

Snyder is unforgiving towards Trump in The Road to Unfreedom, but hardly unfair, although he goes further than many dare in positioning Trump in Putin’s orbit. In a famous 2016 debate exchange, Hillary Clinton accused Trump of being Putin’s puppet, and the Trump that emerges here is not unlike a more malevolent (if less bright) incarnation of Pinocchio fashioned with the fingers of a Geppetto-like Putin. The reader may be forgiven for an eyeroll or two when Snyder posits that it was Putin who crafted the fictional character “Donald Trump successful businessman” who was then marketed to the American public as a political candidate. But that hardly seems an exaggeration when you learn that it was actually Putin who first floated the canard of Obama’s forged birth certificate, the banner of Trump’s political rise. And policies that opposed NATO, decried the EU, championed Brexit, and demonized Islam, the LGBTQ community, and immigrants—all central to the MAGA machine—almost perfectly aligned, and still align, with Putin propaganda. To channel The Godfather, it turns out that it was not Barzini all along, but Vladimir Putin.

And again, so many of the “roads to unfreedom” lead through Ukraine. Snyder reminds us that the last advisor to the last pro-Russian president of Ukraine, Viktor Yanukovych, who fled to Russia when he was deposed, was none other than Paul Manafort, who then became the campaign manager to fictional candidate “Donald Trump successful businessman.” It was then, Snyder goes on, that Manafort “oversaw the import of Russian-style political fiction … It was also on Manafort’s watch that Trump publicly requested that Russia find and release Hillary Clinton’s emails. Manafort had to resign as Trump’s campaign manager after it emerged that he had been paid $12.7 million in off-the-books cash by Yanukovych … In 2018, Manafort was convicted of eight counts of federal crimes and pled guilty to two more, conspiracy and obstruction of justice, in a deal made with federal prosecutors.” [p236]

It is remarkable that Snyder’s book, published in 2018, anticipates so much of what is to come, and not only the Russian tanks that rolled into Ukraine. The so-called “Mueller Report” that investigated Russian interference in the 2016 election may not have found a smoking gun with Putin fingering the grip, but it pointedly declined to exonerate Donald Trump of obstruction, a finding that was then mischaracterized by Attorney General William Barr. Throughout Trump’s presidency and beyond, Putin has remained his loudest public advocate. And then there was Trump’s “perfect phone call” that attempted to extort Ukrainian President Volodymyr Zelensky for political purposes—the subject of Trump’s first impeachment trial. Also, on December 23, 2020, just weeks before the end of his presidency and the insurrection he sponsored in a crude attempt to extend his tenure, Trump issued Manafort a full pardon. As this review goes to press, just shortly after the third anniversary of that insurrection, Trump is in the news every day exploiting neofascist themes, threatening dictatorship, declaring the last election stolen, and running for president once more as Vladimir Putin cheers him on from the platform of Russian state TV, while Trump returns the favor at every opportunity. Perhaps the greatest gift comes via his allies in Congress, who are blocking desperately needed American military aid to Ukraine.

There is much more to give us all pause. One common feature of fascism is a celebration of hypermasculinity that also hosts a distinct antifeminism and asserts traditional roles for men and women in society. As Putin’s authoritarian grip on Russia tightened, so too did scorn for feminists and for those who identified as LGBTQ, as he proclaimed a focus on “traditional family values,” supported by the Russian Orthodox Church, a reliable ally for his one-man rule as well as his war on Ukraine. Fascists poke fun at the soft, decadent underbelly of effeminate liberalism, hurling the expletive “cuck” at the male who does not live up to their patriarchal ideal of the man’s man. Today, sadly, that curse is no less likely to be heard on the avenues of Atlanta than it is on the streets of Moscow. Putin, who reportedly enjoys the sympathetic coverage he has come to expect from FOX News, likely chuckled to himself watching a 2021 episode when former FOX host Tucker Carlson mocked the “maternity flight suits” of pregnant women serving in the armed forces, lecturing that our military had become soft and feminine, in contrast to those of our adversaries that were tough and masculine. Of course, Putin is likely not laughing as hard these days as young Ukrainian women, some former fashion models, are at the front gunning down Russian soldiers daily.

Donald Trump is known to frequently employ projection as a defense mechanism. When Hillary dubbed him a Putin puppet, he shot back with “You’re the puppet!” When in a speech to mark the anniversary of the January 6th insurrection President Biden branded Trump a “threat to democracy,” Trump countered that it was Biden instead who was the threat to democracy. Putin is an expert at this craft, although naturally he is more articulate and his phrasing more elegant than Trump’s. Snyder notes that Putin is the master of what he calls “schizo-fascism,” in which fascists rebrand their enemies as fascists, as when Putin styles his invasion of Ukraine as an effort to combat resurgent Nazis—despite the fact that Ukrainian president Zelensky is Jewish. Just lie, and then keep recycling the same lie. Rinse and repeat.

It’s hard to find a real flaw in The Road to Unfreedom, other than that some of it strays into the arcane and may challenge the attention span of the popular audience that would most benefit from reading it. There is, however, terrain left unexplored. Putin gets his due as the brilliant villain he turned out to be, but the author overlooks how his rise could have been forestalled by a post-Soviet Russia given to prosperity and stability. The West, basking in the glow of Snyder’s “politics of inevitability,” failed to act consequentially when it could have, in that narrow window between Gorbachev and Putin. I would have liked to see Snyder probe those missed opportunities for economic aid and support for fledgling democratic institutions in Russia, a topic adeptly analyzed in Peter Conradi’s Who Lost Russia?[3] Also, unlike Conradi, Snyder is unsympathetic to Russian fears stoked by NATO expansion and US withdrawal from the Antiballistic Missile Treaty. These anxieties were certainly opportunistically exaggerated and exploited by Putin, but they nevertheless were and would remain legitimate concerns—even to a democratic Russia. But these are quibbles. In this election year, with the very future of our fragile Republic at stake, read The Road to Unfreedom. It may be too late for Russia, but our vehicle of democracy, if a bit clunky, is still roadworthy, and there’s still time to save America. Let’s step on the gas.

 

[1] Laurence W. Britt, “Fascism Anyone?” Free Inquiry, Vol. 22, No. 2, July 15, 2003

[2] Link to my review of  The Gates of Europe: A History of Ukraine, by Serhii Plokhy

[3] Link to my review of  Who Lost Russia?: How the World Entered a New Cold War, by Peter Conradi

Featured

Review of: Charlie Chaplin vs. America: When Art, Sex, and Politics Collided, by Scott Eyman

Decades before J. Edgar Hoover’s FBI connived for a pretext to deport a wealthy British ex-pat suspected of communist connections who also happened to be an influential, world-famous artist—that person was John Lennon and that attempt ultimately failed—a much younger Hoover and his then-cronies mounted a similar but far more effective crusade against an individual who in his time was even more consequential. That man was British-born Charlie Chaplin, Hollywood’s first truly global phenomenon, who, while not deported, was nonetheless driven into permanent exile.

Author and film historian Scott Eyman sets out to tell that story and more in Charlie Chaplin vs. America: When Art, Sex, and Politics Collided [2023], an informative and entertaining if uneven portrait of a celebrated figure as devoted to his art as he was indifferent to the enemies he spawned along the way. Seizing upon Chaplin’s clash with the authorities over his politics as the focal point of the narrative, the author seeks to distinguish this work from numerous previous chroniclers of its prominent subject, with mixed results.

Chaplin, both a genius and a giant in the days when cinema was in its infancy, left an indelible mark upon the nascent motion picture industry. In the process, he attracted both critical acclaim and legions of adoring fans, as well as, with equal fervor, the scorn of moralists and the disfavor of those who viewed his brand of social consciousness as a threat to the American way of life. Like the later John Lennon, he was an outsize talent who eschewed convention, dared to take unpopular positions, flaunted a somewhat sybaritic lifestyle, accumulated enormous wealth, and was a legend in his own time—the very ingredients that stoked in alternate audiences parallel passions of adulation and abhorrence.

Anticommunism runs deep in the United States, from the “Red Scare” of the 1920’s to the House Un-American Activities Committee (HUAC)—a creature of the Depression era that was reborn with a fury in the postwar period—and the related excesses of McCarthyism, a nearly continuous stream of panic and paranoia that characterized American culture for decades, ostensibly aiming to identify enemies foreign and domestic but instead extra-constitutionally branding certain political thought as a crime. In the process, thousands of Americans and foreign nationals suffered persecution, ostracism, and even imprisonment. A weaker dynamic by Lennon’s time, it nevertheless remained a force to be reckoned with for those so victimized. Something of an anachronism, anticommunism still echoes into today’s politics. One of the faults in Eyman’s treatment is his failure to place Chaplin’s harassment for his alleged political sympathies into this wider context for the reader unfamiliar with its deeper historical roots.

Neither Chaplin nor Lennon was a member of the Communist Party, but that was almost beside the point for authorities who deemed each unwelcome, if only for their respective advocacy of greater social and economic equality, which was seen as sympathetic to communist ideology. And there was a whiff of perceived disloyalty in their demeanors. Lennon was ardently anti-war. Chaplin, who styled himself a “peace monger,” was regarded as especially suspect because he never sought US citizenship; instead he railed against nationalism as a root cause of war, and imagined himself as a kind of citizen of the world. Alas, Chaplin had the bad fortune to find himself targeted in tumultuous times characterized by a populace both less sophisticated and more docile than in Lennon’s day. And he paid for it. Of course, as Eyman’s book underscores, objections to Chaplin’s way of life proved far more damning to him than his actual politics.

Charles Spencer Chaplin was born in London in 1889 into abject poverty much like a character out of Dickens. Orphaned by circumstances if not literally, in his childhood he endured the dehumanizing struggle of the workhouse and for a time lived alone on the streets. His older brother Sydney rescued him and helped foster his budding stage presence in sketch comedy that eventually took him across the pond, first to vaudeville and later to Los Angeles, where he made his silent film debut. A master of the art of physical comedy, it was there that he invented the trademark character that would define his career and later cement his celebrity—the “Little Tramp,” a good-natured, childlike, sometime vagrant costumed in baggy pants, oversize shoes, and a tiny mustache—that would evolve into an endearing screen icon endowed with a blend of humor and pathos. The Tramp would go on to serve as the central protagonist of some of the first dramatic comedies on film.

Chaplin was a wildly popular overnight sensation, which by 1915 made him, at only twenty-six years old, one of the highest paid individuals in the world. Just four years later, he joined forces with other leading lights to form United Artists, a revolutionary film distribution company that permitted him to fund and maintain complete creative control over his own productions. UA served as the critical vehicle that enabled Chaplin to write, produce, direct, star in, and even compose the music for a series of films that would make him a legend, including The Kid (1921), The Gold Rush (1925), City Lights (1931), and Modern Times (1936), all featuring the Tramp character. Favoring the subtle artistry of silent films over the “talkies” that came to dominate motion pictures, Chaplin stubbornly continued to produce silent (or mostly silent) movies long after that format had been largely abandoned by others. Eventually he moved to talkies with The Great Dictator (1940), a political satire that starred Chaplin in a dual role as a Tramp-like character and a farcical persona based upon Hitler. Later, he abandoned the Tramp in Monsieur Verdoux (1947), and comedy altogether in the semi-autobiographical Limelight (1952).

In his personal life, Chaplin was a bundle of contradictions. An extremely wealthy but socially conscious man, he was capable of great generosity towards those he favored, but like many who grew up in extreme poverty he could be stingy, as well. It was that childhood abandoned to the streets, according to Eyman and other biographers, that informed every aspect of the mature Charlie Chaplin, in his screen persona as well as his private life. A dominating perfectionist on the set who could be maddening for cast and crew alike with his demands for multiple re-takes of the same scene day after day, and productions that could go on for months or even a year, off camera he was moved by the inequalities of unbridled capitalism, and the plight of the disadvantaged. He advocated for greater economic equity and against the rise of fascism both on screen and off, which made him seem suspect to the powers that be, a suspicion further exacerbated by clashes with puritanical movie censors who regarded him as openly flirting with immoral themes. That he was friendly with known communists and campaigned to open a second front to benefit the USSR in the wartime alliance against Hitler only heightened those suspicions and led to accusations that Chaplin himself was a communist or “fellow traveler.” He once received a subpoena to appear before HUAC, but he was never actually called to testify.

Physically tiny but handsome and charming, Chaplin was something of a womanizer who was frequently unfaithful and had a string of liaisons, sometimes with leading ladies, as well as a total of four marriages. He favored young women: his first two wives were each only sixteen years old when he married them, his fourth wife was eighteen and he was fifty-four when they wed. Other than his infidelities, he seems to have been kind and considerate to his various partners, and he often remained friends with, and sometimes a financial benefactor to, former lovers. His third marriage to actress Paulette Goddard ended in divorce, but she then starred in his next film, and they got along amicably for years to come. The notable exception to that rule was his affair circa 1941-42 with the unstable and vindictive Joan Berry, which led to a career-damaging paternity suit for Chaplin. But he was a devoted and faithful husband to his final wife, Oona O’Neill, with whom he fathered eight children; they stayed together for thirty-four years until his death in Switzerland in 1977 at the age of eighty-eight. Still, the romantic scandals that dogged him—especially the poisonous courtroom drama that played out publicly in his disputes with Joan Berry—tarnished his reputation and bred a whole coterie of enemies in and out of Hollywood willing to work against him when the FBI set its sights on him as an undesirable alien.

As it was, while he championed social justice, Chaplin himself was remarkably apolitical. As the very archetype of the rags-to-riches story, he cited the inherent incongruity of accusing a man who made his fortune via American capitalism of being a communist. But already castigated for his alleged moral turpitude by the doyens of “respectable” society, as the Cold War dawned and the Soviet Union turned from former ally into existential threat, he was widely denounced by detractors and calls grew for him to be deported. Chaplin’s greatest weakness turned out to be his own naivety. When he left for London for the world premiere of Limelight, his re-entry permit was revoked. The grounds for this action were quite tenuous; had he contested it, it is likely he would have been readmitted. But he was so embittered by this affront that he remained in exile from the United States for the rest of his life, returning only once very briefly in 1972 to accept an honorary Academy Award for “the incalculable effect he has had in making motion pictures the art form of this century.”

The problem with Charlie Chaplin vs. America is that despite what may have been his original intentions, it is not completely clear what kind of book Scott Eyman ultimately sent to press after he rested his pen. The subtitle presumes it to be a chronicle of Chaplin’s duel with the established order to avoid banishment, but that theme hardly dominates the plot. On the other hand, there is a wealth of material that hints at what could have been. Much print is devoted to Charlie’s childhood struggles on the streets of London, as well his grand success with the production of The Kid, but there is little connective tissue of points in between, leaving the formative Chaplin conspicuous by his absence. So it cannot be termed an authoritative biography. Likewise, there are sometimes lengthy excursions to focus on a particular movie or a specific film technique, while others are glossed over or ignored. So it cannot be a critical study of Chaplin’s filmmaking. It is as if Eyman bit off far more than he wanted to chew and ended up uncertain what should be spit out, with some chunks of the account too fat and others too lean.

The result is a narrative that is sometimes choppy, with a tendency at times to clumsily slip in and out of chronology, and a penchant in places to fall into extended digressions—including an awkward multipage interview excerpt with a Chaplin associate that might better have been relegated to the back matter of notes or appendix. Still, warts and all, the book never grows dull. The reader may be left unsatisfied, but ever remains engaged. And, to his credit, Eyman succeeds superbly in capturing Chaplin’s personality, by far the most significant challenge for any biographer. That is in itself a notable achievement, especially with a subject as nuanced as this one.

J. Edgar Hoover died in 1972, some twenty years after Chaplin’s re-entry permit was denied. John Lennon was ordered out of the country in 1973, but a New York judge reversed the deportation order in 1975. Earlier that same year, HUAC was formally terminated. After some quiet years, Lennon was making a musical comeback when he was murdered by a deranged fan in 1980. He was only forty years old. By then Chaplin had been dead for three years, at a ripe old age, but his creative juices had never really flowed the same way in exile. He made two additional films abroad, but neither lived up to his earlier triumphs.

Modern Times (1936)

Charlie Chaplin may be the most famous movie star in history whose films most Americans alive today have never seen, largely because even in my 1960s boyhood, when Chaplin still walked the earth, and when most broadcasts outside of prime time were devoted to old movies, silent films were already long passé, and most of his greatest films were silents. With that in mind, along with reading Eyman’s book I screened several Chaplin films: The Kid, City Lights, Modern Times, and The Great Dictator. I am neither film critic nor film historian, but I consider myself something of a film geek, and I confess that I was blown away by Chaplin’s brilliance in both The Kid and City Lights. While I can understand and appreciate its message and its impact upon release, I found The Great Dictator dated, overly long, and less entertaining. But I would judge Modern Times as not only magnificent, but so extraordinarily timely with its themes of technological oppression, automation, corporate capitalism, threats to individuality, and loss of privacy that it belongs as much to 2023 as it did to 1936. If there is a fault to be found in any of these efforts, it is that Chaplin’s absolute creative control denied him the editorial input that was warranted on occasion. There are slapstick bits, for instance, that, while hilarious, go on interminably. Someone needed to yell “Cut!” Even a genius, as Chaplin indubitably was, needs an editor.

So too, in my opinion, does Scott Eyman. A talented and prolific writer who has authored numerous biographies of stars who once peopled Hollywood’s “Golden Age,” Eyman may owe to those prior accolades the fact that no one with a sharp red pen had the authority to carve out the potentially great book that lay within the sheaf of pages that went to print.

 

[Note: This ARC edition came to me through an early reviewers’ program.]

Featured

Review of: The Calculus of Violence: How Americans Fought the Civil War, by Aaron Sheehan-Dean

The term “bloody” is so frequently attached as qualifier to the American Civil War that we tend to accept it without question. But how bloody was it? According to some estimates, in excess of 620,000 soldiers died on both sides, perhaps another 50,000 civilians, and total casualties including those wounded and missing are said to exceed 1.5 million. We have been told that the trenches around Petersburg and Sherman’s “hard war” anticipated World War I, but Civil War casualties are dwarfed by the carnage of that later conflict, which just a half century on claimed some 20 million lives, almost evenly split between military and civilian, not including another 21 million wounded. China’s Taiping Rebellion (1850-64) left 20 million dead, as well. That is not to minimize the suffering and death that characterized what was indeed America’s bloodiest war, but rather to put it in its appropriate context. Which raises the question: why was it not bloodier still?

Acclaimed historian Aaron Sheehan-Dean ponders just that in his magnificent, ground-breaking work, The Calculus of Violence: How Americans Fought the Civil War [2018], an engaging and extremely well-written analysis of a long-overlooked topic hiding in plain sight. More than 60,000 books have been written about the American Civil War, and each time I crack the cover of another I cannot help but wonder if there yet exists anything new to say about it. In this case, Sheehan-Dean exceeds any hope for fresh perspective with something akin to epiphany! The very definition of war implies violence, of course, and historians have a fairly good sense of how much violence was contained in the totality of those four years of armed conflict, but what was it that set those decidedly finite parameters? Were there certain guardrails in place, and if so, why? To confront something so conspicuous yet so generally ignored in the literature can be startling—and highly rewarding.

In the ancient world, survival of the conquered on and off the battlefield was subject to whims of kings or commanders, and the outcome was typically grim. The Assyrians were known to be especially sanguinary, the Greeks less so, but despite his disapproving tone we know from Thucydides’ account of the siege of Melos—which ended with the Athenians putting all the men to the sword—that massacre was far more often the rule than the exception. In the contemporary world, there are a whole host of international agreements specifically structured to protect noncombatants, but look only to the streets of Ukraine or across the landscape of the Middle East to observe how meaningless these turn out to be for those casually and euphemistically dismissed as “collateral damage.” The rest of Europe was appalled when German zeppelins bombed London in 1915, but whatever may be solemnly sworn to on parchment, such tactics are today nothing less than standard operating procedure. Yet, sandwiched between these ancient and modern extremes was an era in the West when rules of engagement among warring nations were less fuzzy and more generally respected. This was the milieu that hosted the American Civil War.

Throughout history, levels of violence in war have often been arbitrary, but by the nineteenth century restraint in warfare had come to be governed by custom and convention. At the outbreak of the Civil War, what we understand today as international law was nonexistent. The behavior of belligerents was instead governed by unwritten codes that had evolved over centuries of European conflicts, codes that looked to rules of engagement on the battlefield, secured the lives of prisoners of war, and made clear distinctions between soldiers and non-combatants. Those operating on behalf of the enemy out of uniform were treated as spies or saboteurs and subject to execution. Civilians were not to be targeted. This is not to say that abuses never occurred, nor that hapless inhabitants caught up in the path of invading armies did not suffer, but these customs of war were commonly observed by those engaged in hostilities.

The United States regarded the seceded states as in rebellion and refused to recognize the Confederate States of America as a rival nation, although it was nevertheless compelled to treat it as such in certain situations, as during a truce or in a prisoner exchange. This was similar to the dynamic during the Revolutionary War, when the British came to treat captured Continentals as prisoners of war, rather than as rebels subject to hanging. Still, circumstances sometimes made for some awkward posturing by the Lincoln administration, such as when it imposed a naval blockade of southern ports, since it is impossible to blockade your own country. Foreign recognition, especially by European powers such as Great Britain and France, was a cherished hope of the Confederacy. While it was ultimately not to be, the dream never really died, which, the author emphasizes, motivated the CSA to act within the confines of established European traditions of war in order to assert its legitimacy as a member of the family of nations. For its part, the United States not only abided by these same customs but was careful to do nothing that might strain those boundaries and provoke foreign sympathy for the Confederacy that might lead to recognition, a stumble certain to jeopardize the cause of Union. The matter was further complicated by the curious reality of adversaries each governed by representative democracies, with public opinion and the support of the electorate vital to their respective conduct of the war.

It was a mutual respect for these customs of war that defined the state of affairs as belligerency commenced, but unanticipated factors threatened to upset that uneasy balance almost from the outset. The first was when Jefferson Davis quietly sanctioned guerrilla warfare by failing to discourage it, over the objection of regular army soldiers such as Robert E. Lee. Historically, there was a distinction between officially organized “partisans” and gangs of guerrillas, but here the lines were very blurred. Bands of raiders responsible for so much bloodshed in the pre-war struggles in places like Kansas were embraced across the southern geography as worthy irregulars of the Confederate cause. These loosely organized marauders operated out of uniform to harass, sabotage, and pick off Union ranks. There was uncertainty as to what to do with these “bushwhackers” when captured. Many were executed, as the customs of war would dictate. This was branded as murder by the Confederacy, even as its protest was muted. But not every irregular captured by the north was put to death.

The second was the status of the human property that the Confederate cause held so dear. Despite loud cries to the contrary in the postwar period that still echo into today’s politics, the southern states seceded principally in order to champion human chattel slavery in their “proud slave Republic,” with hopes of one day expanding it beyond their borders. Historians distinguish between societies with slaves and slave societies; the CSA was a slave society. The labor of enslaved African Americans was a critical piece of the southern war effort that freed up a larger percentage of white southerners to fight. And we now know that tens of thousands of “camp slaves” routinely accompanied Confederate forces on campaign, providing the kind of essential support for an army in the field that was performed by the typical soldier in blue on the other side. When Union armies moved onto southern soil, escaped members of the local enslaved population sought refuge behind their lines. Initially, southern masters demanded their return in accordance with the Fugitive Slave Act, and some northern officers complied. Others refused. Lincoln dithered at first. The northern cause was preservation of the Union; emancipation would not become a war aim until some time later. There was, too, a need for a delicate balancing act to avoid alienating the coalition of border slave states still loyal to the United States. Still, it was clearly in the north’s interest to deprive Richmond of what after all amounted to a human component of the enemy’s materiel. With that in mind, the decision was made not to return “contrabands,” which enraged the south as an act it deemed dishonorable.

But anger turned to outrage in the third instance, when Lincoln’s Emancipation Proclamation not only declared free all the enslaved in the rebellious states but also called for recruiting black men into uniform. For the south, this was a violation of civilizational norms, furiously denounced and accompanied by threats to return to slavery or even execute captured black troops, and to severely punish their white officers. Lincoln countered with his own warnings of reprisals, which led to an official standoff. But it was different in the field. Confederates frequently murdered black soldiers seeking surrender. There were well-publicized massacres of large numbers of United States Colored Troops (USCT) at places like Fort Pillow and the Battle of the Crater, but such atrocities on a smaller scale were more common over the course of the war than was once acknowledged. Still, as in the case of southern guerrillas who fell into the hands of Union forces, not all suffered this terrible fate; despite their uncertain status, members of both groups also ended up in prisoner of war camps. Likewise, after Fort Pillow, some African Americans, swearing revenge, summarily executed captured Confederates, but not every black soldier resorted to such measures. In the end, restraint ruled the day more often than we would expect.

The author has a lot to say about restraint, which is key to his thesis that under the circumstances we might have expected the Civil War to be far more brutal than it turned out to be. One salient aspect that generally escapes consideration is the conspicuous absence of slave insurrections during the war years: a noteworthy example of self-restraint by the enslaved population. The plantation elite long lived in terror of uprisings such as the 1831 Nat Turner revolt that saw the slaughter of whites by their chattel property, but these incidents were not only exceedingly rare in the antebellum period but, despite increased vulnerabilities on the southern home front, never occurred during the Civil War. More than 500,000 enslaved individuals fled to northern lines as refugees during the Civil War; very few resorted to acts of violence against their former masters. By the end of the war, USCT made up about ten percent of the Union army, where the overwhelming majority served with bravery and distinction as part of a regular uniformed fighting force. Given the inhumanity that was part and parcel of the African American experience in chattel slavery, it is indeed remarkable that episodes of retaliation against those who held them in cruel bondage were not more prevalent.

George Caleb Bingham’s depiction of the execution of General Order No. 11

Overall, civilians fared far better in the Civil War than in most ancient or modern wars. Much has been made of northern so-called “hard war” policies that the south viewed as barbaric, but under scrutiny it seems that “Lost Cause” hyperbole has distorted historical memory. The infamous 1863 Union Army directive General Order No. 11, which banished 20,000 residents of four counties in western Missouri, is frequently cited as an especially heinous act. But this was a direct response to the slaughter of about 150 men and boys at Lawrence, Kansas at the hands of Confederate guerrillas led by William Quantrill and, as the author underscores, this tactic of mass relocation likely reduced the number of vigilante reprisals that might otherwise have occurred. Likewise, Sherman’s march has long been characterized as unduly harsh, although the truth is that few noncombatants were killed along the way. On the other hand, Sheehan-Dean is clear that even when not targeted by bullets, civilians suffered through lack of access to food, shelter, and medical care when caught in the path of armies, and the majority of this suffering took place on southern soil.

As for the behavior of the regular armies on both sides, the author notes that the customs of war were generally respected, and neither combatants nor civilians were subjected to the kind of unrestrained brutality that was visited upon Native Americans with little hesitation. This brings to mind British horror when Germans turned machine guns on them on World War I battlefields, although the Brits themselves had slaughtered some 1,500 African Ndebele warriors in 1893 with similar firepower. There were supposed to be rules for how “civilized” white men waged war; these rules did not apply to those deemed the “other.” Of course, that is likely how some Confederates reconciled the murder of black troops seeking surrender. But it is also, as the author reminds us, how Union officers justified executing captured guerrillas, another group of “outsiders.” Despite this, episodes of restraint on both sides were far more common than we might expect. As Sheehan-Dean eloquently argues:

The wartime calculus created by the Civil War’s participants sanctioned episodes of grim destruction and instances where the inertia of violence weakened … Moments of charity occurred wherever Union commanders and Confederate commanders or Southern politicians negotiated surrenders—of forts, armies, and towns—without violence. They happened when soldiers surrendered on battlefields and became prisoners of war. They even happened when officers used threats of retaliation to demand an end to unjust practices. In most cases, a retaliatory order de-escalated the situation. The most pivotal moment of de-escalation was the decision by enslaved people to pursue freedom rather than revenge … [p355]

At the outset of the Civil War, the closest thing to a manual of conduct for war was a wordy treatise based upon European history and tradition by Henry Halleck, who later became Lincoln’s General-in-Chief. But, as chronicled in some detail in The Calculus of Violence, formal rules of warfare were officially established on the Union side through the Herculean efforts of German-born Francis Lieber, who based what came to be known as the Lieber Code upon a “just war” theory. This first modern codification of the laws of war has had a lasting legacy, deeply influencing the later Hague Conventions and Geneva Conventions that established the existing tenets of international law and determined what acts of war can be considered tantamount to a war crime. Of course, there’s no shortage of irony to the awful truth that civilian populations often fared far better in Lieber’s day than they have in the days since. Structured, codified, hallowed international law has done little to mitigate the harsh reality found in the mass murder of populations who happen to get in the way of belligerents.

No review, no matter how detailed, could possibly do justice to the breadth and depth of ideas explored in this book, but it is a testament to the author’s brilliance as a thinker and talents as a writer that a tome so weighty with concepts of political philosophy and legal theory never once turns into a slog. In truth, I could not put it down! Moreover, it is an important work of Civil War history that manages to cut its own indelible groove in the historiography. And, finally, by highlighting how restraint was pivotal to checking the potential for even greater bloodletting, Sheehan-Dean has managed to achieve something perhaps once deemed impossible by casting a glow of unexpected optimism around the tragedy that was the American Civil War. It is no wonder that The Calculus of Violence has been selected as the book of the Civil War Institute’s Summer Conference 2024. I not only highly recommend this superb work, but I would urge it as an essential read for any student of Civil War history.

More information on the Civil War Institute’s Summer Conference 2024 can be found here: Civil War Institute at Gettysburg Summer Conference 2024


Featured

Review of: President Garfield: From Radical to Unifier, by C.W. Goodyear

I first encountered James A. Garfield in the course of my boyhood enthusiasm for philately with a colorful six-cent mint specimen, part of the 1922 series of definitive stamps dominated by images of American presidents. There was Garfield, an immense head in profile sporting a massive beard, encased in a protective mount on a decorative album page. I admired the stamp, even if I paid little mind to the figure it portrayed, just one of a number of undistinguished bearded or bewhiskered faces in the series. Except for the orange pigment of his portrait, he was otherwise colorless.

As I grew older, American history became a passion and presidential biographies a favored genre, but Garfield eluded me. Nor did I pursue him. I did occasionally stumble upon General Garfield in Civil War studies. And I was vaguely familiar with the fact that like Lincoln he was both born in a log cabin and murdered by an assassin, but he was in office for only a matter of months. I could recite from memory every American president in chronological order, and tell you something about each—but not very much about Garfield.

So it was that I came to President Garfield: From Radical to Unifier [2023], a detailed, skillfully executed, well-written, if uneven full-length biography by historian C.W. Goodyear. The Garfield that emerges in this treatment is capable, intellectual, modest, steadfast, and genial—but also dull … almost painfully dull. So much so that it is only the author’s talent with a pen that keeps the reader engaged. But even Goodyear’s literary skills—and these are manifest—threaten to be inadequate to the task of maintaining interest in his subject after a while.

That Garfield comes off so lackluster is strikingly incongruous to his actual life story, which at least partly seems plucked from a Dickens tale. Born in 1831 into a hardscrabble existence in the Ohio backwoods that grew harsher still when he lost his father at a very young age, he was raised by his strong-willed, religious mother who favored him over his siblings and encouraged his brilliant mind. He grew up tall, powerfully built, and handsome, with an unusually large head that was much remarked upon by observers in his lifetime. Like Lincoln, he was a voracious reader and autodidact. After a short-lived stint prodding mules as a canal towpath driver in his teens, his mother helped secure for him an avenue to formal education at a seminary, where he met his future wife Lucretia, whom he called “Crete.” Employed variously as a teacher, carpenter, preacher, and janitor, he worked his way first into Ohio’s Hiram College and then Williams College in Massachusetts, later returning in triumph to Hiram as its principal. He then entered politics as a member of the Ohio state legislature, until the outbreak of the Civil War found him with a colonel’s commission, fired by a passion for abolition to oppose the slave power. He demonstrated courage and acumen on the battlefield, and was promoted first to brigadier general, and then—after service in campaigns at Shiloh and Chickamauga—to major general. He left the army in 1863 and embarked on a career as a Republican congressman that lasted almost two decades, until he won election as president of the United States. In the meantime, he also found time to practice law and publish a mathematical proof of the Pythagorean theorem. With a life like that, how is it that the living Garfield seems so lifeless?
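That proof, incidentally, is genuine: published in 1876, while Garfield sat in Congress, it rests on computing the area of a trapezoid two different ways. For the mathematically curious, a sketch of the argument follows (the lettering of the sides is my own, not Garfield’s):

```latex
% Garfield's trapezoid proof (1876): two congruent right triangles with
% legs a, b and hypotenuse c, plus the isosceles right triangle formed
% between their hypotenuses, tile a trapezoid with parallel sides a and b
% and height a + b. Equating the two ways of computing its area:
\begin{align*}
\underbrace{\tfrac{1}{2}(a+b)(a+b)}_{\text{trapezoid}}
  &= \underbrace{\tfrac{1}{2}ab + \tfrac{1}{2}ab}_{\text{two right triangles}}
   + \underbrace{\tfrac{1}{2}c^{2}}_{\text{isosceles triangle}} \\
\tfrac{1}{2}\left(a^{2} + 2ab + b^{2}\right) &= ab + \tfrac{1}{2}c^{2} \\
a^{2} + b^{2} &= c^{2}
\end{align*}
```

Expanding the left side and cancelling the $ab$ terms leaves the familiar theorem, a derivation elegant enough that it remains a classroom staple.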

Part of it is that in this account he seems nearly devoid of emotion. He makes few close friends. His relationship with Crete is conspicuous for its absence of genuine affection, and their early marriage is marked by long separations that are agonizing for her but in Garfield provoke little but indifference; he eventually admits he does not really love her. A fleeting affair and the sudden loss of a cherished child finally bring them together, but in the throes of emotional turmoil he strikes one as more calculating than crushed. If there is one constant to his temperament, it is a desire to navigate a middle path in every arena, ever chasing compromise, while quietly trying to have it both ways. In his first years of marriage, he demonstrates a determination to be husband and father without actually being physically present in either role. Likewise, this trait marks a tendency to moderate his convictions by convenience. The prewar period finds him a fervent proponent of abolition, but willing to temper that when it menaces harmony in his circles. Later, he is just as passionate for African American civil rights—that is, until that proves inconvenient to consensus.

The book’s subtitle, From Radical to Unifier, more specifically speaks to Garfield’s shift from one of the “Radical Republicans” who advanced black equality and clashed with Andrew Johnson, to a congressman who could work across interparty enmity to achieve balance amid ongoing factional feuding. But “from radical to unifier” can also be taken as a larger metaphor for a trajectory that smacks less of an evolution than of a tightly wound tension, one that ever attempted to have its cake and eat it too. And since it is impossible to simultaneously be both “radical” and “unifier,” there is a hint that Garfield was always more bureaucrat than believer. But was he? Truthfully, it is difficult to know what to make of him much of the time. And it is not clear whether the blame for that should be laid upon his biographer or upon a subject so enigmatic he defies analysis.

Garfield indeed proves elusive; he hardly could have achieved so much success without an engine of ambition, but that drive remained mostly out of sight. As a major general at the midpoint of the Civil War, he stridently resisted calls to shed his uniform for Congress, yet finally went to Washington. Almost two decades later, he stood equally adamant against efforts to recruit him as nominee for president, but ultimately ran and won the White House. Was he really so self-effacing, or simply expert at disguising his intentions? And what of his integrity? Garfield was implicated in the infamous Crédit Mobilier scandal, but it did not stick. In an era marked by rampant political corruption, Garfield was no crook, but neither was he an innocent, trading certain favors for rewards when it suited him. Was he honest? Here we are reminded of what Jake Gittes, Jack Nicholson’s character in the film Chinatown, replied when asked that about a detective on the case: “As far as it goes. He has to swim in the same water we all do.”

For me, presidential biographies shine the brightest when they employ the central protagonist to serve as a focal point for relating the grander narrative of the historical period that hosted them. Think Jon Meacham, in Thomas Jefferson: The Art of Power. Or Robert Caro, in The Years of Lyndon Johnson: Master of the Senate. Or, in perhaps its most extreme manifestation, A Self-Made Man: The Political Life of Abraham Lincoln, by Sidney Blumenthal, where Lincoln himself is at times reduced to a series of bit parts. What those magnificent biographies have in common is their ability to brilliantly interpret not simply the lives that are spotlighted but also the landscape that each trod upon in the days they walked the earth. Unfortunately, this element is curiously absent in much of Goodyear’s President Garfield.

Garfield’s life was mostly centered upon the tumultuous times of Civil War and Reconstruction, but those who came to this volume with little familiarity with the era would learn almost nothing of it from Goodyear except how events or individuals touched Garfield directly. The war hardly exists beyond Garfield’s service in it. So too what follows in the struggle for equality for the formerly enslaved against the fierce resistance of Andrew Johnson, culminating in the battle over impeachment. Remarkably, Ulysses S. Grant, arguably second only to Lincoln as the most significant figure of the Civil War and its aftermath, makes only brief appearances, and then merely as a vague creature of Garfield’s disdain. And there is just a rough sketch of the disputed election that makes Rutherford B. Hayes president and brings an end to Reconstruction. Goodyear’s Garfield is actually the opposite of Blumenthal’s Lincoln: this time it is all Garfield and history is relegated to the cameo.

And then suddenly, unexpectedly, Goodyear rescues the narrative and the reader—and even poor Garfield—with a dramatic shift that stuns an unsuspecting audience and not only succeeds, but succeeds splendidly! It seems as if we have finally reached the moment the author has been eagerly anticipating. Garfield has little more than fifteen months to live, but no matter: this now is clearly the book Goodyear had long set out to write. Part of the reader’s reward for sticking it out is the deep dive into history denied in prior chapters.

Only fifteen years had passed since Appomattox, and the two-party system was in flux, reinventing itself for another era. The Democrats—the party of secession—were slowly clawing their way back to relevance, but Republicans remained the dominant national political force, often by waving the “bloody shirt.” Since the failed attempt to remove Johnson, the party had cooled in its commitment to civil rights, a reflection of a public that had grown weary of the plight of freedmen and longed for reconciliation. Fostering economic growth was the prime directive for Republicans, but so too was jealously guarding their power and privilege, as well as the entrenched spoils system that power had begotten.

Party members had few policy differences, but nevertheless fell into fierce factions that characterized what came to be a deadlocked Republican National Convention in 1880. The “Stalwarts” were led by flamboyant kingmaker Roscoe Conkling, who had long been locked in a bitter personal and political rivalry with James G. Blaine of the “Half-Breeds,” who sought the nomination for president. Garfield and Blaine were on friendly terms, and had worked closely together in the House when Blaine had been Speaker, although Garfield was identified with neither faction. Conkling and the Stalwarts were Grant loyalists, and dreamed instead of his return to the White House. There were also reformers who coalesced around former Senator John Sherman. Garfield delivered the nominating speech for Sherman, but then—after thirty-five ballots failed to select a candidate—he himself ended up as the consensus “Dark Horse” improbably (and reluctantly) drafted as the Republican Party nominee! The ticket was rounded out with Chester A. Arthur, a Conkling crony, for vice president. Goodyear’s treatment of the drawn-out convention crisis and Garfield’s unlikely selection is truly superlative!

So too is the author’s coverage of Garfield’s brief presidency, as well as the theatrical foreshadowing of his death, as he was stalked by the unhinged jobseeker Charles J. Guiteau. Garfield prevailed in a close election against the Democrat, former Union General Winfield Scott Hancock. Once in office, Garfield refused to go along with Conkling’s picks for financially lucrative appointments, which sparked an extended stand-off that surprisingly climaxed with the Senate resignations of Conkling and his close ally Thomas “Easy Boss” Platt, an outcome that asserted Garfield’s executive prerogative, struck a blow for reform, and upended Conkling’s legendary control over spoils. Meanwhile, homeless conman Guiteau, who imagined himself somehow personally responsible for Garfield’s election, grew enraged at his failure to be named to the Paris consulship, which he fantasized was his due, and plotted instead to kill the president. Guiteau proved both insane and incompetent; his bullets, fired at point-blank range, missed Garfield’s spine and all major organs.

Odds are that Garfield might have recovered, but the exploratory insertion of unwashed fingers into the site of the wound—more than once, by multiple doctors—likely introduced the aggressive infection that was to leave him in the lingering, excruciating pain he bore heroically until he succumbed seventy-nine days later. The reader fully experiences his suffering. It seems that Joseph Lister’s antiseptic methods, adopted across much of Europe, were scoffed at by the American medical community, which ridiculed the notion of invisible germs. For weeks, doctors continued to probe in an attempt to locate the bullet lodged within. In a fascinating subplot, a young Alexander Graham Bell elbowed his way in with a promising new invention that, although unsuccessful in this case, became the prototype for the first metal detector. The nation grew fixated on daily updates to the president’s condition until the moment he was gone. He had been president for little more than six months, nearly half of it spent incapacitated, dying of his injuries. The tragedy of Garfield is mitigated somewhat by the saga of his successor: President Arthur astonished everyone when an unlikely letter stirred his conscience to abandon Conkling and embark on a reformist crusade.

While faults can be found, ultimately the author redeems himself and his work. The best does lie in the final third of the volume, which because of content and style is far more fast-paced and satisfying than that which precedes it, but the earlier material nevertheless sustains the entirety. Yes, those readers less acquainted than this reviewer with the Civil War and Reconstruction will at times have a tougher hill to climb placing Garfield’s life in appropriate context, but the careful study and trenchant analysis of the forces in play in Republican politics leading up to the 1880 nomination, as well as the emphasis placed on the significance of a brief presidency too often overlooked, without doubt distinguishes Goodyear as a fine writer, researcher, and historian. President Garfield represents an important contribution to the historiography, and likely will be seen as the definitive biography for some time to come. As stamp values plummeted, I long ago liquidated my collection for pennies on the dollar, so I no longer own that six-cent Garfield, but now, thanks to Goodyear, I can boast a deeper understanding of the man’s life and his legacy.

Note: This edition came to me through an early reviewers’ program.

Note: I reviewed the Blumenthal Lincoln biography here: Review of: A Self-Made Man: The Political Life of Abraham Lincoln Vol. I, 1809–1849, by Sidney Blumenthal

Featured

Review of: Scars on the Land: An Environmental History of Slavery in the American South, by David Silkenat

For several days we traversed a region, which had been deserted by the occupants—being no longer worth culture—and immense thickets of young red cedars, now occupied the fields, in digging of which, thousands of wretched slaves had worn out their lives in the service of merciless masters … It had originally been highly fertile and productive, and had it been properly treated, would doubtlessly have continued to yield abundant and prolific crops; but the gentlemen who became the early proprietors of this fine region, supplied themselves with slaves from Africa, cleared large plantations of many thousands of acres—cultivated tobacco—and became suddenly wealthy … they valued their lands less than their slaves, exhausted the kindly soil by unremitting crops of tobacco, declined in their circumstances, and finally grew poor, upon the very fields that had formerly made their possessors rich; abandoned one portion after another, as not worth planting any longer, and, pinched by necessity, at last sold their slaves to Georgian planters, to procure a subsistence … and when all was gone, took refuge in the wilds of Kentucky, again to act the same melancholy drama, leaving their native land to desolation and poverty … Virginia has become poor by the folly and wickedness of slavery, and dearly has she paid for the anguish and sufferings she has inflicted upon our injured, degraded, and fallen race.1

Those are the recollections of Charles Ball, an enslaved man in his mid-twenties from Maryland who was sold away from his wife and child and—wearing an iron collar shackled to a coffle with other unfortunates—was driven on foot to his new owner in Georgia in 1805. As he was marched through Virginia, the perspicacious Ball observed not only the ruin of what had once been fertile lands, but the practices that had brought these to devastation. Ball serves as a prominent witness in the extraordinary, ground-breaking work, Scars on the Land: An Environmental History of Slavery in the American South [2022], by David Silkenat, professor of history at the University of Edinburgh, which probes one more critical yet largely ignored component of Civil War studies.

Excerpts like this one from Ball’s memoir—an invaluable primary source written many years later once he had won his freedom—also well articulate the triple themes that combine to form the thesis of Silkenat’s book: southern planters perceived land as a disposable resource and had little regard for it beyond its potential for short-term profitability; slave labor directed on a colossal scale across the wider geography dramatically and permanently altered every environment it touched; and the masses of the enslaved were far better attuned and adapted to their respective ecosystems, which they frequently turned to for privacy, nourishment, survival—and even escape. And there is, too, a darker ingredient that clings to all of these themes: the almost unimaginable cruelty that defined the lives of the enslaved.

The men who force-marched Ball’s coffle as if they were cattle no doubt viewed him with contempt, yet though held as chattel, the African American Charles Ball was more familiar with the past, present, and likely future of the ground he trod upon than most of his white oppressors. For the enslaved, frequently condemned to a lifetime of hard labor in unforgiving environments, often enduring conditions little better than those afforded to livestock, this sophisticated intimacy with their natural surroundings could prove to be the only alternative to a cruel death in otherwise harsh elements. And, sometimes, it could—always at great risk—also translate into liberty.

Those who claimed ownership over their darker-complected fellow human beings were not entirely ignorant of the precarious balance of nature in the land they exploited, but they paid that little heed. Land was, after all, not only cheap but appeared to be limitless. As the Indigenous fell victim in greater numbers to European diseases, as militias drove the survivors deeper into the wilderness, as the British loss in the American Revolution removed the final barriers to westward expansion, the Chesapeake elite counted their wealth not in acreage but in human chattel. Deforestation was widespread, fostering erosion. First tobacco and later wheat sapped nutrients and strained the soil’s capacity to sustain bountiful yields over time. Well-known practices such as crop rotation, rigorously applied in the north, were largely scorned by the planter aristocracy. The land, as Ball had discerned, was rapidly used up.

Already in Jefferson’s time, “breeding” the enslaved for sale to the lower south was growing far more profitable than agriculture in the upper south. And demand increased exponentially with the introduction of the cotton gin and the subsequent boom in cotton production, as well as the end of the African slave trade that was to follow. Human beings became the most reliable “cash crop.” Charles Ball’s transport south was part of a trickle that grew into the mass forced migration later dubbed the “Slave Trail of Tears,” which stretched from Maryland to Louisiana and saw the involuntary relocation of about a million enslaved souls in the five decades prior to the Civil War. Many, like Ball, were forced to cope with new environments unlike anything they had previously experienced. What did not change, apparently, was the utter disregard for these various environments by their new owners.

For those who imagined the enslaved as limited to working cotton or sugar plantations, Silkenat’s book will be something of an eye-opener. In a region of the United States that, with only some exceptions, stubbornly remained pre-industrial, large forces of slave labor were enlisted to tame—and put to ruin—a wide variety of landscapes through extensive overexploitation that included forestry, mining, levee-building, and turpentine extraction, usually in extremely perilous conditions.

The enslaved already had to cope with an oppressive collection of unhealthy circumstances that included exposure to extreme heat, exhaustion, insects, a range of diseases including chronic ringworm, inadequate clothing, and an insufficient diet—as well as unsanitary living conditions that left them unable even to wash their hands except on infrequent occasions. All this was further exacerbated by the demands inherent in certain kinds of more specialized work.

Enslaved “dippers” extracted turpentine from pine trees, which left their “hands and clothing … smeared with the gum, which was almost impossible to remove. Dippers accumulated layers of dried sap and dirt on their skin and clothes, an accumulation that they could only effectively remove in November when the harvest ended. They also suffered from the toxic cumulative effect of inhaling turpentine fumes, which left them dizzy and their throats raw.” [p70] Mining for gold was an especially dangerous endeavor that had the additional hazard of using “mercury to cause gold to amalgamate … leaving concentrated amounts of the toxin in the spoil piles and mountain streams. Mercury mixed with the sulfuric acid created when deep earth soils came into contact with oxygen poisoned the watershed … Enslaved miners suffered from mercury poisoning, both from working with the liquid form with their bare hands and from inhaling fumes during distillation. Such exposure had both short- and long-term consequences, including skin irritation, numbness in the hands and feet, kidney problems, memory loss, and impaired speech, hearing, and sight.” [p24] There were dangers too for lumberjacks and levee-builders. Strangely perhaps, despite the increased risks, many of the enslaved preferred working the mines and forests because of opportunities for limited periods of autonomy in wilder locales that would be impossible in plantation life.

In the end, mining and deforestation left the land useless for anything else. Levees, originally constructed to forestall flooding in order to enable rice agriculture, ended up increasing flooding, a problem that today’s New Orleans inherited from the antebellum era. All these pursuits tended to lay waste to their respective ecosystems, leaving just the “scars on the land” of the book’s title, but of course they also left lasting physical and psychological scars upon a workforce recruited against their will.

What was common to each and every milieu was the mutual abuse of the earth as well as of those coerced to work it. Ball mused that the degree of cruelty towards those who toiled the land seemed roughly proportional to the degree to which the land itself was ravaged. Indeed, cruelty abounds: the inhumanity that actually defines the otherwise euphemistically rendered “peculiar institution” stands stark throughout the narrative, supported by a wide range of accounts of those too often condemned to lives beset by a quotidian catalog of horrors as chattel property in a system marked by nearly inconceivable brutality.

Beatings and whippings were standard fare. Runaways, even those who intended to absent themselves only temporarily, were treated with singular harshness. Sallie Smith, a fourteen-year-old girl who went truant in the woods to avoid repeated abuse, was apprehended and “brutally tortured: suspended by ropes in a smoke house so that her toes barely touched the ground and then rolled across the plantation inside a nail-studded barrel, leaving her scarred and bruised.” [p78]

Slaveowners also commonly employed savage hunting dogs or bloodhounds that were specially trained to track runaways, which sometimes led to the maiming or even death of the enslaved:

“One enraged slave owner ‘hunted and caught’ a fugitive ‘with bloodhounds, and allowed the dogs to kill him. Then he cut his body up and fed the fragments to the hounds.’ Most slave owners sought to capture their runaway slaves alive; but unleashed bloodhounds could inflict serious wounds in minutes … Some masters saw the violence done by dogs as part of the punishment due to rebellious slaves. Over the course of ten weeks in 1845, Louisiana planter Bennet Barrow noted in his diary three occasions when bloodhounds attacked runaway slaves. First, they caught a runaway named Ginny Jerry, who sought refuge in the branches before the ‘negro hunters … made the dogs pull him out of the tree, Bit him very badly’ … Second, a few weeks later, while pursuing another truant, Barrow ‘came across Williams runaway,’ who found himself cornered by bloodhounds, and the ‘Dogs nearly et his legs off—near killing him.’  Finally, an unnamed third runaway managed to elude the hounds for half a mile before the ‘dogs soon tore him naked.’ When he returned to the plantation, Barrow ‘made the dogs give him another overhauling’ in front of the assembled enslaved community as a deterrent. Although Barrow may have taken unusual pleasure in watching dogs attack runaway slaves, his diary reveals that slave owners used dogs to track fugitives and torture them.” [p52-53]

That such practices were treated as unremarkable by white contemporaries finds a later echo in the routine bureaucracy of atrocities that the Nazis inflicted on Jews sent to forced labor camps. For his part, Silkenat reports episodes like these dispassionately, in what appears to be a deliberate effort on the author’s part to sidestep sensationalism. This technique is effective: hyperbolic editorializing is unnecessary—the horror speaks for itself—and those well-read in the field are aware that such barbarity was hardly uncommon. Moreover, it serves as a robust rebuke to today’s “Lost Cause” enthusiasts who would cast slavery as benign or even benevolent, as well as to those promoting recent disturbing trends to reshape school curricula to minimize and even sugarcoat the awful realities that history reveals. (Sidenote to Florida’s Board of Education: exactly which skills did Sallie Smith in her nail-studded barrel, or those disfigured by ferocious dogs, develop that later could be used for their “personal benefit?”)

I first encountered the author and his book quite by accident. I was attending the Civil War Institute (CWI) 2023 Summer Conference at Gettysburg College2, and David Silkenat was one of the scheduled speakers for a particular presentation—“Slavery and the Environment in the American South”—that I nearly skipped because I worried it might be dull. As it turned out, I could not have been more wrong. I sat in rapt attention during the talk, then purchased the book immediately afterward.

Silkenat’s lecture took an especially compelling turn when he spoke at length about maroon communities of runaways who sought sanctuary in isolated locations too hostile to permit recapture, even by slave hunters with vicious dogs. One popular refuge was the swamp, especially unwholesome yet out of reach of the lash, and another instance of the author underscoring that the enslaved, by virtue of necessity, grew capable of living off the land—every kind of land, no matter how harsh—with a kind of adaptation beyond the reach of their white oppressors. Swamps tended to be inhospitable, given to fetid water populated with invisible pathogens, masses of biting and stinging insects, poisonous snakes, alligators, and even creatures such as panthers and bears that had disappeared elsewhere. But for the desperate it meant freedom.

A number of maroon communities appeared in secluded geographies, populated by escapees mostly on the margins of settled areas, with inhabitants eking out a living by hunting and gathering as well as small-scale farming, supplemented by surreptitious trading with the outside world. The largest was in the Great Dismal Swamp in Virginia and North Carolina, where thousands managed to thrive over multiple generations.

But not all flourished. In Scars on the Land, Silkenat repeats Ball’s tragic tale of coming upon a naked and dirty fugitive named Paul, an African survivor of the Middle Passage who had fled a beating to the swamp. On his neck he wore a heavy iron collar fastened with bells to help discourage escape. Ball clandestinely assisted him as best he could, but could not remove the collar. When he returned a week later to offer additional assistance, his nostrils traced a rancid smell to the hapless Paul, a suicide, hanging by his neck from a tree, crows pecking at his eyes.3 [p124]

Scars on the Land is directed at a scholarly audience, yet it is so well written that any student of the Civil War and African American history will find it both accessible and engaging. But more importantly, in a genre that now boasts an inventory of more than 60,000 works, it is no small distinction to pronounce Silkenat’s book a significant contribution to the historiography that should be required reading for everyone with an interest in the field.

 

1Charles Ball. Slavery in the United States: A Narrative of the Life and Adventures of Charles Ball, a Black Man, Who Lived Forty Years in Maryland, South Carolina and Georgia, as a Slave Under Various Masters, and was One Year in the Navy with Commodore Barney, During the Late War. (NY: John S. Taylor, 1837)  Slavery in the United States

2 For more about the CWI Summer Conference at Gettysburg College see: Civil War Institute at Gettysburg Summer Conference 2024

3The illustration of Paul hanging from a tree appears alongside Ball’s narrative in this publication:  Nathaniel Southard, ed. The American Anti-Slavery Almanac for 1838, Vol I, Nr 3, The American Anti-Slavery Society, (Boston: Isaac Knapp, 1838), 13, The American Anti-Slavery Almanac for 1838

Note: I reviewed this book about a well-known maroon community here: Review of: The Battle of Negro Fort: The Rise and Fall of a Fugitive Slave Community, by Matthew J. Clavin

Featured

Review of: Harpers Ferry Under Fire: A Border Town in the American Civil War, by Dennis E. Frye

Most people only know of Harpers Ferry as the town in present-day West Virginia where John Brown, a zealous if mercurial abolitionist, set out to launch an ill-fated slave insurrection by seizing the national armory located there, an attempt which was completely crushed, sending John Brown to the gallows and his body “a-mouldering” in the grave shortly thereafter. Those more familiar with the antebellum era are aware that many historians consider that event to be the opening salvo of the Civil War, as hyper-paranoid southern planters—who no longer, as in Jefferson’s day, bemoaned the burden and the guilt of their “peculiar institution,” but instead championed human chattel slavery as the most perfect system ever ordained by the Almighty—imagined the mostly anti-slavery north as a hostile belligerency intent on depriving them of their property rights and on actively inciting the enslaved to murder them in their sleep. Brown was hanged seventeen months prior to the assault on Fort Sumter, but some have suggested that first cannonball was loosed at his ghost.

Those in the know will also point out that the man in overall command when they took Brown down was Colonel Robert E. Lee, and that his aide-de-camp was J. E. B. Stuart. And perhaps to underscore the outrageous twists of fate history is known to fashion for us, they might add that present for Brown’s later execution were Thomas J. (later “Stonewall”) Jackson, John Wilkes Booth, and even Edmund Ruffin, the notable Fire-eater who was among the first to fire actual rather than metaphorical shots at Sumter in 1861. You can’t make this stuff up.

But it turns out that John Brown’s Raid in 1859 represents only a small portion of the Civil War history that clings to Harpers Ferry, perhaps the most quintessential border town of the day, which changed hands no fewer than eight times between 1861 and 1865. Both sides took turns destroying the successively rebuilt Baltimore & Ohio bridge—the only railroad bridge connecting northern and southern states across the Potomac. Harpers Ferry was integral to Lee’s invasion of Maryland that ended at Antietam, and had a supporting role at the outskirts of the Gettysburg campaign, as well as in Jubal Early’s aborted march on Washington. There’s much more, and perhaps the finest source for immersion in the big picture is Harpers Ferry Under Fire: A Border Town in the American Civil War [2012], by the award-winning retired National Park Service historian Dennis E. Frye, who spent some three decades of his career at Harpers Ferry National Historical Park. Frye is a talented writer, the narrative is fascinating, and this volume is further enhanced by lavish illustrations, period photographs, and maps. Even better, while the book is clearly aimed at a popular audience, it rigorously adheres to strict standards of scholarship in presentation, interpretation, and analysis.

West Virginia has the distinction of being the only state to secede from another state, as its Unionist sympathies took issue with Virginia’s secession from the United States. But it had been a long time coming. The hardscrabble farmers in the west had little in common with the wealthy elite slaveholding planter aristocracy that dominated the state’s government. This is not to say those to the west of Richmond were any less racist than the rest of the south, or much of the antislavery north for that matter; it was a nation then firmly based upon principles of white supremacy. For Virginia and its southern allies, the conflict hinged on their perceived right to spread slavery to the vast territories seized from Mexico in recent years. For the north, it was about free soil for white men and for Union. West Virginia went with Union. But back then, when John Brown took his crusade to free the enslaved to Harpers Ferry, it was still part of Virginia, and while some residents might have feared for the worst, most Americans could not have dreamed of the scale of bloodletting that was just around the corner, nor that the cause of emancipation—John Brown’s cause—would one day also become inextricably entwined with the preservation of the Union.

Harpers Ferry is most notable for its dramatic topography, which has nothing to do with its armory and arsenal—the object of Brown’s raid—but everything to do with its persistent pain at the very edge of the Civil War. Strategically situated at the confluence of the Potomac and Shenandoah rivers, where today the states of Maryland, Virginia, and West Virginia meet, the town proper is surrounded on three sides by the high ground at Bolivar Heights to the west, Loudoun Heights to the south, and Maryland Heights to the east that defines its geography and the challenges facing both attackers and defenders. It is immediately clear to even the most amateur tactician that the town is indefensible without control of the heights.

I was drawn to Harpers Ferry Under Fire by design. I had already registered for the Civil War Institute (CWI) 2023 Summer Conference at Gettysburg College, and selected Harpers Ferry National Historic Park as one of my battlefield tours. While I have visited Antietam and Gettysburg on multiple occasions, somehow I had never made it to Harpers Ferry. These CWI conference tours are typically quite competitive, so I was pleased when I learned that I had won a seat on the bus. And not only that—the tour guide was to be none other than Dennis Frye himself! I have met Dennis before, at other Civil War events, including a weekend at Chambersburg some years ago with the late, legendary Ed Bearss. Like Ed, Dennis is very sharp, with an encyclopedic knowledge of people and events. I assigned myself his book as homework.

The original itinerary was scheduled to include a morning tour of the town, designated as the Harpers Ferry Historic District—which hosts John Brown’s Fort as well as many restored nineteenth-century buildings that have been converted into museums—and an afternoon tour focused on the battles and the heights. Inclement weather threatened, so Dennis mixed it up and had us visit the heights first. In retrospect, this turned out to be the better approach anyway, because when you stand on the heights and look down upon the town proper below, you understand instantly the strategic implications from a military standpoint. Later, walking the streets of the hamlet and looking up at those heights, you can fully imagine the terror of the citizens there during the war years, completely at the mercy of whichever side controlled that higher ground.

The most famous example of that came during Lee’s Antietam campaign, when he sought to protect his supply line by splitting his forces and sending Stonewall Jackson to seize Harpers Ferry. Jackson’s victory there proved brilliant and decisive, a devastating federal capitulation that turned more than twelve thousand Union troops over to the rebels—the largest surrender of United States military personnel until the Battle of Bataan eighty years afterward! This event is covered in depth in Harpers Ferry Under Fire, but given Dennis Frye’s passion for history, the story proved a great deal more compelling to a group of fellow Civil War aficionados gathered on Bolivar Heights, with spectacular views of the Potomac River and the Cumberland Gap before us, while Dennis rocked on his heels, pumped his arms in the air, and let his voice boom with the drama and excitement of those events so very long past. While Dennis lectured, gesturing wildly, I think all of us, if only for an instant, were transported back to 1862, gazing down from the heights at the tiny town below through the eyes of a common soldier, garbed in blue or gray. The remainder of the day’s tour, including John Brown’s Fort and the town’s environs, was a superlative experience, but it was that stirring moment on Bolivar Heights that will remain with me for many years to come.

It is worth pausing for a moment to consider the Fort, where John Brown’s raid ended in disaster, ten of his men killed, including two of his sons, and the badly wounded Brown captured, along with a handful of survivors. The original structure, which served as the Armory’s fire engine and guard house, was later dismantled, moved out of state and rebuilt, then dismantled again and eventually re-erected not far from the location where Brown and his men sought refuge that day, before it was stormed by U.S. Marines. It is open to the public. Walking around and within it today, there is an omnipresent eerie feeling. Whatever Brown’s personal flaws—and those were manifold—he went to Harpers Ferry on a sort of holy quest and was martyred for it. The final words he scribbled down in his prison cell—”I, John Brown, am now quite certain that the crimes of this guilty land will never be purged away but with blood”—rang in my ears as I trod upon that sacred ground.

If you are a Civil War buff, you must visit Harpers Ferry. Frye himself is retired, but if you can somehow arrange to get a tour of the park with this man, jump on the chance. Failing that, read Harpers Ferry Under Fire, for it will enhance your understanding of what occurred there, and through the text the authoritative voice of Dennis Frye will speak to you.

A link to Harpers Ferry National Park is here: Harpers Ferry National Park

More on the CWI Summer Conference is here: Civil War Institute at Gettysburg Summer Conference 2024

NOTE: Except for the cover art, all photos featured here were taken by Stan Prager

Featured

Review of: The Making of the African Queen: Or How I went to Africa With Bogart, Bacall and Huston and almost lost my mind, by Katharine Hepburn

One of my favorite small venues for an intimate, unique concert experience is The Kate—short for The Katharine Hepburn Cultural Arts Center—in Old Saybrook, Connecticut, a 285-seat theater with outstanding acoustics that hosts multi-genre entertainment in a historic building dating back to 1911 that once served as both theater and Town Hall. In 2013, my wife and I had the great pleasure of seeing Jefferson Airplane alum Marty Balin rock out at The Kate. More recently, we swayed in our seats to the cool Delta blues of Tab Benoit. On each occasion, prior to the show, we explored the photographs and memorabilia on display in the Katharine Hepburn Museum on the lower level, dedicated to the life and achievements of an iconic individual who was certainly one of the greatest actors of her generation.

Hepburn was a little girl when she first stayed at her affluent family’s summer home in the tony Fenwick section of Old Saybrook, just a year after the opening of the then newly constructed Town Hall that today bears her name. She later dubbed the area “paradise,” returning frequently over the course of her long life and eventually retiring to her mansion in Fenwick overlooking the water, where she spent her final years until her death at 96 in 2003. The newly restored performing arts center named in her honor opened six years later, with the blessings of the Hepburn family and her estate.

One of the eye-catching attractions in the museum is an exhibit behind glass showcasing Hepburn’s performance with co-star Humphrey Bogart in the celebrated 1951 film The African Queen, featuring a copy of her whimsically entitled 1987 memoir, The Making of the African Queen: Or How I went to Africa With Bogart, Bacall and Huston and almost lost my mind. I turned to my wife and asked her to add this book to my Christmas list.

Now, full disclosure: I am a huge Bogie fan (my wife less so!). I recently read and reviewed the thick biography Bogart, by A.M. Sperber & Eric Lax, and in the process screened twenty of his films in roughly chronological order. My wife sat in on some of these, including The African Queen, certainly her favorite of the bunch. If I had to pick five of the finest Bogie films of all time, that would certainly make the list. Often denied the recognition that was his due, he won his sole Oscar for his role here. A magnificent performer, in this case Bogart benefited not only from his repeat collaboration with the immensely talented director John Huston, but also by starring opposite the inimitable Kate Hepburn.

For those who are unfamiliar with the film (what planet are you from?), The African Queen, based on the C. S. Forester novel of the same name, is the story of the unlikely alliance and later romantic relationship between the staid, puritanical British missionary and “spinster” (a term suitable to the times) Rose Sayer (Hepburn) and the gin-soaked Canadian Charlie Allnut (Bogart), skipper of the riverboat African Queen, set in German East Africa (present-day Tanzania) at the outbreak of World War I. After aggression by German forces leaves Rose stranded, she is taken onboard by Allnut. In a classic journey motif that brilliantly courts elements of drama, adventure, comedy, and romance, the film follows this mismatched duo as they conspire to arm the African Queen with explosives and pilot it on a mission to torpedo a German gunboat. Those who watch the movie for the first time will be especially struck by the superlative performances of both Bogie and Hepburn, two middle-aged stars who not only complement one another beautifully but turn out an unexpected on-screen chemistry that has the audience emotionally involved, rooting for their romance and their cause. It is a tribute to their mutual talents that the two successfully communicated palpable on-screen passion to audiences of the time who must have been struck by the stark disparity between the movie posters depicting Bogie as a muscular he-man and Hepburn as a kind of Rita Hayworth twin—something neither the scrawny Bogart nor the aging Hepburn live up to in the Technicolor print. But even more so because those same 1951 audiences were well acquainted with the real-life 51-year-old Bogart’s marriage to the beautiful 27-year-old starlet Lauren (real name Betty) Bacall, born of an on-set romance when she was just 19.

Katharine Hepburn had a long career in Hollywood marked by dramatic ebbs and flows. While she was nominated for an Academy Award twelve times and set a record by winning the Best Actress Oscar four times, more than once her star power waned, and at one point she was even widely considered “box office poison.” Her offscreen persona was both unconventional and eccentric. She defied contemporary expectations of how a woman and a movie star should behave: shunning celebrity, sparring with the press, expressing unpopular political opinions, wearing trousers at a time when that was unacceptable for ladies, fiercely guarding her privacy, and stubbornly clinging to an independent lifestyle. She was pilloried as boyish, and accused of lesbianism at a time when that was a vicious expletive, but she evolved into a twentieth-century cultural icon. Divorced at a young age, she once dated Howard Hughes, but spent nearly three decades in a relationship with the married, alcoholic Spencer Tracy, with whom she costarred in nine films. Rumors of liaisons with other women still linger. Perhaps no other female figure cut a groove in Hollywood as deep as Kate Hepburn did.

Hepburn’s book, The Making of the African Queen, showed up under the tree last Christmas morning—the original hardcover first edition, for that matter—and I basically inhaled it over the next couple of days. It’s an easy read. Hepburn gets the byline, but it’s clear pretty early on that the “narrative” is actually composed of excerpts from interviews she sat for, strung together to give the appearance of a book-length chronicle. But no matter. Those familiar with Kate’s distinctive voice and the cadence of her signature Transatlantic accent will start to hear her pronouncing each syllable of the text in their heads as they go along. That quality is comforting. But the book is nevertheless plagued by features that should make you crazy: it’s anecdotal, it’s uneven, it’s conversational, it’s meandering, and maddeningly it reveals only what Hepburn is willing to share. In short, if this were any other book about any other subject related by any other person, you would grow not only annoyed but fully exasperated. But somehow, unexpectedly, it turns out to be nothing less than a delight!

If The African Queen is a cinema adventure, aspects of the film production were a real-life one. Unusual for its time, bulky Technicolor cameras were transported to on-location shoots in Uganda and the Congo, independent nations today but then still under colonial rule. The heat was oppressive, and danger seemed to lurk everywhere, but fears of lions and crocodiles were trumped by smaller but fiercer army ants and mosquitoes, a host of water-borne pathogens, as well as an existential horror of leeches. Tough guy Bogie was miserable from start to finish, but Hepburn reveled in the moment, savoring the exotic flora and fauna, and bursting with excitement. Still, almost everyone—including Kate—fell terribly ill at least some of the time with dysentery and a variety of other jungle maladies. At one point Hepburn was vomiting between takes into a bucket placed off-screen. The running joke was that the only two who never got sick were Bogie and director Huston, because they eschewed the local water and only drank Scotch!

Huston went to Africa hoping to “out-Hemingway” Hemingway in big game hunting, but his safari chasing herds of elephants turned into a lone antelope instead. He seemed to do better with Kate. The book does not openly admit to an affair, but the intimacy between them leaps off the page. Hepburn proves affable through every paragraph, although sometimes less than heroic. Readers will wince when, upon first arrival in Africa, she instantly flies into a fit of rage that has her evict a staff member from an assigned hotel room that to her mind rightly should belong to a VIP of her caliber! And while she is especially kind, almost to a fault, to every African recruited to serve her in various capacities, there is a patronizing tone in her recollections that can’t help but make us a bit uncomfortable today. Still, you cannot detect even a hint of racism. You get the feeling that she genuinely liked people of all stations of life, but could be unrepentantly condescending towards those who did not, like her, walk among the stars. Yet, warts and all—and these are certainly apparent—Kate comes off today, long after her passing, as likeable as she did to those who knew her in her times. And what times those must have been!

This book is pure entertainment, with the added bonus of forty-five wonderful behind-the-scenes photographs that readers may linger upon far longer than the pages of text. For those who loved the film as I do, the candid moments that are captured of Bogie, Hepburn, and Huston are precious relics of classic Hollywood that stir the heart and the soul. If you are a fan, carve out the time and read The Making of the African Queen. But more importantly, screen The African Queen again. Then you will truly know what I mean.

A link to The Kate: The Kate

A link to The African Queen on IMDB: IMDB: The African Queen

My review of the Bogart bio: Review of: Bogart, by A.M. Sperber & Eric Lax

NOTE:  My top five Bogie films: Casablanca, The Maltese Falcon, Treasure of the Sierra Madre, The African Queen, The Caine Mutiny—but there are so many, it’s difficult to choose…

Featured

Review of: The Battle of Negro Fort: The Rise and Fall of a Fugitive Slave Community, by Matthew J. Clavin

In yet another fortuitous connection to my attendance at the Civil War Institute (CWI) 2023 Summer Conference at Gettysburg College, I sat in on an enlightening presentation by the historian David Silkenat1 on the environmental history of slavery in the American south that turned to a discussion of the frequently overlooked phenomenon of communities in secluded geographies that were populated by runaways who fled enslavement. These so-called “maroon communities” appeared mostly on the margins of settled areas across the upper and lower south, sometimes in tandem with the Indigenous, with inhabitants eking out a living by hunting and gathering as well as small-scale farming, supplemented by limited and surreptitious trading with the outside world. The origin of some of these maroon societies can be traced back to the Revolutionary War and the War of 1812, when the British offered freedom to the enslaved if they were willing to serve in the military. Many jumped at the chance. On both occasions, when hostilities concluded, those who were unable or unwilling to withdraw with British forces went into hiding to avoid recapture and a return to slavery. One such refuge in the Spanish Floridas became known as the “Negro Fort.”

My next out of state trip subsequent to Gettysburg brought me to a small town in southern Vermont that one lazy afternoon found me exploring a used bookstore—housed in, of all places, a yurt2—where I stumbled upon The Battle of Negro Fort: The Rise and Fall of a Fugitive Slave Community [2019], by Matthew J. Clavin. Silkenat’s fascinating talk about maroons rang in my head as I bought the book, and I started reading it that very day.

Southern planters often held competing, contradictory notions in their heads simultaneously, while sidestepping the cognitive dissonance that practice should have provoked. On the one hand, they deluded themselves that their enslaved “property” were content in their condition of servitude. At the same time, they held them inferior in every sense and thought them nearly helpless, unable to successfully function independently. Slaveowners also dismissed the idea that African Americans could possibly make good soldiers, even though they did manage to fight on both sides during the Revolution. On the other hand, whites nursed a deep visceral fear of slave uprisings by armed blacks, who, despite their apparent contentment and incompetence, might somehow team up and murder them in their sleep.

This heavy load of contradictions got hoisted menacingly above them to cast an ever-lengthening shadow when numbers of escaped slaves recruited into service by the British in what was then the Spanish colony of East Florida during the War of 1812 opted to remain behind after the Treaty of Ghent in a military fortification on Prospect Bluff overlooking the Apalachicola River heavily stocked with cannon and munitions and bolstered with support from allied Native Americans. These were not handfuls of fugitives out of reach in an unknown, inaccessible swamp somewhere, like most maroon settlements; this was a prominent, fully equipped, self-sustaining, armed camp, which even had the temerity to continue to fly the Union Jack—the so-called “Negro Fort.” This was an invitation to fellow runaways. This was not only a challenge to the white man’s “peculiar institution,” this was a thumbing of the nose to the entire planter mentality. This was an unacceptable threat. They could not bear it; they would not bear it.

In The Battle of Negro Fort, Clavin, Professor of History at University of Houston, deftly explores not only the origin of this community and its eventual annihilation through the machinations of then General Andrew Jackson, quietly countenanced by the federal government, but places the fort and its destruction in its appropriate context by opening a wider lens upon the entire era. This was a surprisingly significant moment in American history that for too long fell victim to superficial treatments that overlooked the significance of the multiplicity of forces in play, a neglect much more recently remedied by Pulitzer Prize winning scholar Alan Taylor, whose body of work points not only to the far greater complexities attached to the War of 1812 that have usually remained unacknowledged, but also identifies the broader consequences that rose out of the series of conflicts Taylor collectively terms the “Wars of the 1810s.” Taylor’s brilliant American Republics3 specifically cites actions against the Negro Fort, and connects that to a series of events that included the First Seminole War, sparked by attempts to recapture runaway blacks living among Native Americans, and finally to Spain’s relinquishing of the Floridas to the United States. While never losing focus on the fort itself, Clavin too walks skillfully in this larger arena that hosts war, diplomacy, indigenous tribes pitted against each other, related maroon communities, as well as overriding issues of enslavement and the predominance of white supremacy.

The Battle of Negro Fort is very well-written, but it takes on an academic tone that makes it more accessible to a scholarly than a popular audience. But it is hardly dull, so those comfortable in the realm of historical studies will be undeterred. And it is, after all, a stirring tale that leads to a dramatic and tragic end. Just as the Venetians blew up the Parthenon in 1687 by scoring a hit on the gunpowder the Turks had stored there, a gunboat’s cannonball struck the powder magazine located in the center of the fort, which exploded spectacularly and obliterated the structure. Scores (or hundreds, depending upon the source) were killed, the leaders who survived executed, and those who failed to make their escape returned to slavery.

The author’s thesis underscores that the chief motive for the assault on the Negro Fort by Jackson’s agents in 1816 was to advance white supremacy rather than as part of a greater strategy to dominate the Floridas, which strikes me as perhaps somewhat overstated. Still, Clavin cites later antebellum abolitionists who reference the Negro Fort with specificity in this regard, so he may very well have a point. In any case, this contribution to the historiography proves a worthy addition to the literature, and adding it to your reading list will significantly enhance your understanding of this less well-known period of early American history.

1Note: David Silkenat is the author of Scars on the Land: An Environmental History of Slavery in the American South.

2The used bookstore in the yurt is West End Used Books in Wilmington, VT

3Note: I reviewed the referenced Alan Taylor book here:  Review of: American Republics: A Continental History of the United States, 1783-1850, by Alan Taylor

Featured

Review of: Saving Yellowstone: Exploration and Preservation in Reconstruction America, by Megan Kate Nelson

As a child, one cartoon that habitually had me glued to our black and white TV set was the Yogi Bear Show, which spun a recurring comedic yarn starring that eponymous suave if mischievous anthropomorphic bear and his best bud Boo-Boo who routinely sparred with a ranger as they poached picnic baskets. It was set in Jellystone Park, a thinly veiled animated rendering of Yellowstone National Park. As I grew older, I wondered what it would be like to check out the natural wonders of the real Yellowstone, but many decades later it yet remains an unfulfilled checkbox on a long bucket list. Other than passing views of documentaries that splashed spectacular images of waterfalls, geysers, and herds of bison across my 4K screen, I rarely gave the park a second thought.

So it was while attending the Civil War Institute (CWI) 2023 Summer Conference at Gettysburg College that I learned with no little surprise that there was to be a scheduled segment on Yellowstone. I was puzzled; beyond the scenic imagery recalled from episodes of Nat Geo, what little I knew about Yellowstone was that it was established as our first national park in 1872—seven years after Lee’s surrender! What could this possibly have to do with the Civil War?

Fortunately, I got a clue at the conference’s opening night ice cream social when I was by chance introduced to Megan Kate Nelson, author of Saving Yellowstone: Exploration and Preservation in Reconstruction America [2022], who was slated to give that very presentation. As we chatted, Megan Kate—almost nonchalantly—made the bold statement that without the Civil War there never could have been a Yellowstone Park. Agnostic but intrigued, I sat in the audience a couple of days later for her talk, which turned out to be both engaging and persuasive. I purchased her book along with a stack of others at the conference, and it turned out to be my first read when I got home.

History is too frequently rendered in a vacuum, isolated from the competing forces that shape it, which not only ignores key context but in the process distorts interpretation. In reality, though not always immediately apparent, every historical experience is to some degree or another the consequence of its relationship to a variety of less-than-obvious factors, such as climate, the environment, the prevalence of various flora and fauna (as well as pathogens), resources, trade networks, and sometimes the movements of peoples hundreds or even thousands of miles away. It is so rewarding to stumble upon a historian who not only identifies these kinds of wider forces in play but capitalizes upon their existence to turn out a stunning work of scholarship. In Saving Yellowstone, Megan Kate Nelson brilliantly locates a confluence of events, ideas, and individuals that characterize a unique moment in American history.

The Civil War was over. The fate of the disputed territories—the ill-gotten gains of the Mexican War that sparked secession when the south’s slave power was, by Lincoln’s election, stymied in its resolve to spread its so-called “peculiar institution” westward—had been settled: the Union had been preserved, slavery had been outlawed, and these would remain federal lands preserved for white free-soil settlement. This translated into immense opportunities for postwar Americans who pushed west towards what seemed like a limitless horizon of vast if barely explored open spaces, chasing opportunities in land or commerce or perhaps even a fortune in precious metals buried in the ground. Those in the way would be displaced: if not invisible, the Native Americans who had occupied these places for centuries were irrelevant, stubborn obstacles that could be either bought off or relocated or exterminated. Lakota Sioux chief Tȟatȟáŋka Íyotake, also known as Sitting Bull, would have something to say about that.

Ulysses S. Grant, the general who had humbled Lee at Appomattox, was now the President of the United States, and remained committed to a Reconstruction that was on shaky ground largely due to the disastrous administration of his predecessor, Andrew Johnson, who had allowed elites of the former Confederacy to regain political power and trample upon the newly won rights of the formerly enslaved. The emerging reality was looking much like the south had lost the war but somehow won the peace, as rebels were returned to elective office while African Americans were routinely terrorized and murdered. Postwar demilitarization left a shrunken force of uniforms stretched very thin, who could either protect blacks from racist violence or white settlers encroaching on Native lands—but could not do both.

Meanwhile, the landscape was being transformed by towns that seemed to spring up everywhere, many connected by the telegraph and within the orbit of transcontinental railroads that would perhaps one day include the Northern Pacific Railway, a kind of vanity project of millionaire financier Jay Cooke that nearly destroyed him. All of this sparked frenetic activities that centered upon exploration, bringing trailblazers and surveyors and scientists and artists and photographers west to determine exactly what was there and what use could be made of it. One of these men was geologist Ferdinand Hayden, who led a handpicked team on a federally funded geological survey to the wilderness of the Yellowstone Basin in 1871 and charted a course that led, only one short year later, to its designation as America’s first national park.

More than six hundred thousand years ago, a massive supervolcano erupted and begat the Yellowstone Caldera and its underlying magma body, which produces the extremely high temperatures that power the hydrothermal features the park is well known for, including hot springs, mudpots, fumaroles, and more than three hundred geysers! Reports of phenomena like these preceded Hayden’s expedition, but most were chalked up to tall tales. Hayden sought to map the expanse and to separate truth from fantasy. Unlike the white men on a quest of discovery, of course, neighboring Native Americans found nothing new about Yellowstone; they had inhabited the region into the deep mists of time.

The best crafted biographies employ a central protagonist to not only tell their story but also to immerse the reader in a grand narrative that reveals not only the subject but the age in which they walked the earth. Nelson’s technique here, deftly executed, is to likewise write a kind of biography of Yellowstone that lets it serve as the central protagonist amid a much larger cast in a rich chronicle of this unique historical moment. A moment for the United States, no longer debased by the burden of human chattel slavery, that on the one hand had it celebrating ambitious achievements on an expanding frontier that boasted not only thriving towns and cities and industry and invention but even the remarkable triumph of posterity over profit by creating a national park and setting it aside for the benefit of all Americans. But not, on the other hand, actually for all Americans. Not for Native Americans, certainly, who at the point of the bayonet were driven away, into decades of decline. And not for African Americans, who in the national reconciliation of whites found themselves essentially erased from history and forced to live under the shadow of Jim Crow for a full century hence. Later, when the “West was Won” so to speak, both blacks and Native Americans could very well visit Yellowstone Park as tourists, but never on the same terms as their white counterparts.

Saving Yellowstone is solid history as well as a terrific adventure tale, attractive to both popular and scholarly audiences. There are times, especially early on in the narrative, when it can be slow-going, and the quantity of characters that people the storyline can be dizzying, but as the author lays the groundwork the momentum picks up. You can sense that Nelson, as a careful historian, is perhaps sometimes holding back so that the drama does not outpace her citations. But it is, after all, a grand theme, and such details only enrich it. This is the rare book that will keep you thinking long after you have turned the last page. Oh, and for Civil War enthusiasts, I should add: it turns out that Megan Kate was absolutely correct—for both better and for worse, without the Civil War there indeed never could have been a Yellowstone Park!

Featured

Review of: Harriet Tubman: The Road to Freedom, by Catherine Clinton

In 2016, Jack Lew, President Obama’s treasury secretary, announced a redesign of the twenty-dollar bill that would feature on its front a likeness of Harriet Tubman, arguably the most significant African American female of the Civil War era, while displacing Andrew Jackson, who not only owned slaves but championed the institution of human chattel slavery, and was likewise a driving force behind the Indian Removal Act, one of the most shameful episodes in our national saga. Immediate controversy ensued, which was ratcheted up when Donald Trump stepped into the White House. Many have correctly styled Trump as having almost no sense of history, but he did seem to have had a kind of boyhood crush on Jackson, who like Trump did whatever he liked with little regard for the consequences to others, especially the weak and powerless. Trump relocated a portrait of Jackson to a position of prominence in the Oval Office, and his new treasury secretary postponed the currency redesign, almost certainly an echo of Trump’s campaign grievance that putting Tubman on the bill was nothing but “pure political correctness.” Meanwhile, resistance on the left grew, as well, as many pointed to the disrespect of putting the face of one who was formerly enslaved on legal tender that is a direct descendant of that once used to buy and sell human beings. Then there is the paradox in the redesign that puts Tubman on the obverse while maintaining Jackson’s presence on the reverse side of the bill, perhaps reflecting with a dose of disturbing irony the glaring sense of polarization that manifests the national character these days, much as it did in Tubman’s time. Still, the Biden Administration has pledged to accelerate the pace of issuing the new currency, but there remains no sign that anyone will be buying groceries with Tubman twenties anytime soon. We can only imagine what Harriet Tubman, who was illiterate and lived most of her life in poverty, would make of all this!

A dozen years before all this hoopla over who will adorn the paper money, acclaimed historian Catherine Clinton published Harriet Tubman: The Road to Freedom [2004], a well-written, engaging study that turns out to be one of a string of books on Tubman to hit the press at nearly the same time—the others are by Kate Larson and Jean Humez—which collectively represented the first scholarly biographies of her life in more than six decades. (There have since been additional contributions to the historiography.) Surprisingly, Tubman proves a tough subject to chronicle: a truly larger-than-life heroic figure who can be credited with verifiable exploits to free the enslaved both before and during the Civil War—admirers nicknamed her “Moses” for her role in spiriting fugitives to freedom, and she was later dubbed “General” by John Brown—her achievements have also long been distorted by myth and embellishment, something nourished early on by a subjective biography of sorts by her friend Sarah Bradford that is said to play loose with facts and events. Then there is the challenge in fashioning an accurate account of someone who spent much of her consequential years living in the shadows, both by the circumstance of anonymity imposed by her condition of enslavement, as well as the deliberate effort to wear a mask of invisibility by one operating outside the law where the penalty for detection would be a return to slavery or, much more likely, death. For the historian, that translates into a delicate—and precarious—balancing act.

Clinton’s approach is to recreate Tubman’s life as close to the colorful adventure it certainly was, without falling victim to sensationalism. She relies on scholarship to sketch the skeletal framework for Tubman’s life, then turns to a variety of sources and reports to put flesh upon it, sharing with the reader when she resorts to surmise to shade aspects of the complexion.  In this effort, she largely succeeds.

Born Araminta Ross in Maryland in perhaps 1822—like many of the enslaved she could only guess at her date of birth—Tubman survived an especially brutal upbringing in bondage that witnessed family members sold, a series of vicious beatings and whippings, and a severe head injury incurred in adolescence when a heavy metal weight tossed by an overseer at another struck her instead, which left her with a lingering dizziness, headaches, seizures, and what was likely chronic hypersomnia, a neurological disorder of excessive sleepiness. It also spawned vivid dreams and visions that reinforced religious convictions that God was communicating with her. By then, she was no stranger to physical abuse. Tubman was first hired out as a nursemaid when she herself was only about five years old, responsible for rocking a baby while it slept. If the baby woke and cried she was lashed as punishment. She recalled once being whipped five times before breakfast. She was left scarred for life. Tubman’s experiences serve as a strong rebuke to those deluded by “Lost Cause” narratives that would cast antebellum slavery as a benign institution.

Despite her harsh treatment at the hands of various enslavers, Tubman proved strong and resilient. Rather than break her, the cruelties she endured galvanized her, sustained by a religious devotion infused with Old Testament promises of deliverance. Still enslaved, she married John Tubman, a free black man, and changed her first name to Harriet shortly thereafter. When she fled to freedom in Philadelphia a few short years later, he did not accompany her. Tubman’s journey out of slavery was enabled by the so-called “Underground Railroad,” a route of safehouses hosted by sympathetic abolitionists and their allies.

For most runaways, that would be the end of the story, but for Tubman it proved just the beginning. Committed to liberating her family and friends, Tubman covertly made more than a dozen missions back to Maryland over a period of eight years and ultimately rescued some seventy individuals, while also confiding escape methods to dozens of others who successfully absconded. In the process, as Clinton points out, she leapfrogged from the role of a “conductor” on the Underground Railroad to that of an “abductor.” Now known to many as Moses, she was a master of disguise and subterfuge; the illiterate Tubman once famously pretended to read a newspaper in order to avoid detection. To those who knew her, she seemed to be utterly fearless. She carried a pistol, not only to defend herself against slavecatchers if needed, but also to threaten the fainthearted fugitive who entertained notions of turning back. She never lost a passenger.

At the same time, Harriet actively campaigned for abolition, which brought her into the orbit of John Brown, who dubbed her “General Tubman.” Unlike other antislavery allies, she concurred with his advocacy for armed insurrection, and she proved a valuable resource for him with her detailed knowledge of support networks in border states. Brown’s raid on Harper’s Ferry was, of course, a failure, and Brown was hanged, but her admiration for the man never diminished. With the onset of the Civil War, Tubman volunteered to help “contrabands” living in makeshift refugee camps, and also served as a nurse before immersing herself in intelligence-gathering activities. Most spectacularly, Tubman led an expedition of United States Colored Troops (USCT) on the remarkable 1863 Combahee River Raid in South Carolina that freed 750 of the formerly enslaved—then recruited more than 100 of them to enlist to fight for the Union. She is thus credited as the first woman to lead American forces in combat! She was even involved with Colonel Robert Gould Shaw in his preparations for the assault on Fort Wagner, later dramatized in the film Glory. When the war ended, Tubman went on to lobby for women’s suffrage, and died in her nineties in 1913—the end of a life that was given to legend because so very much of it more closely resembled imagined epic than authentic experience.

In this biography, Clinton the historian wrestles against the myth, yet sometimes seems seduced by it. She reports claims of the numbers of the enslaved Tubman liberated that seem exaggerated, and references enormous sums slaveowners offered as reward for her capture that defy documented evidence. There are also a couple of egregious factual errors that any student of the Civil War would stumble upon with mouth agape: she places the battle of Shiloh in Virginia rather than Tennessee, and declares Delaware a free state, which would have been a surprise to the small but enduring population of the enslaved who lived there. For these blunders, I am inclined to give Clinton the benefit of the doubt; she is an esteemed scholar who likely relied on a lousy editor. Perhaps these mistakes have been corrected in later editions.

In June 2023, shortly after I read this volume, I had the pleasure to sit in on Catherine Clinton’s lecture on the life of Harriet Tubman at the Civil War Institute (CWI) Summer Conference at Gettysburg College. Unlike all too many academics, Clinton is hardly dull on stage, and her presentation was as lively and colorful as her subject certainly must have been in the days when she walked the earth. During a tangent that drifted to the currency controversy, she noted that one of the more superficial objections to the rebranding of the twenty was that there are no existing images of Tubman smiling, something Clinton—grinning mischievously—reminded the audience should hardly be surprising since Harriet once dealt with a toothache while smuggling human beings out of bondage by knocking her own tooth out with her pistol, an episode recounted in the book, as well. Harriet Tubman’s life was an extraordinary one. If you want to learn more, pick up Clinton’s book.

I reviewed an earlier book by Clinton here: Review of: Tara Revisited: Women, War & The Plantation Legend, by Catherine Clinton

Featured

Review of: The Failed Promise: Reconstruction, Frederick Douglass, and the Impeachment of Andrew Johnson, by Robert S. Levine

As President of the United States, he is ranked at or near the bottom by most historians, a dramatic contrast to the man whose untimely death elevated him to that office, who is consistently ranked at or near the top. When some today bemoan the paradox of a south that lost the Civil War but yet seemed in so many ways to have won the peace, his name is often cited as principal cause. While he cannot solely be held to blame, he bears an outsize responsibility for the mass rehabilitation of those who once fomented secession and led a rebellion against the United States, a process that saw men who once championed treason regain substantial political power—and put that authority forcefully to bear to make certain that the rights and privileges granted to formerly enslaved African Americans in the 14th and 15th amendments would not be realized. He was Andrew Johnson.

As the foremost black abolitionist, as well as a vigorous advocate for freedom and civil rights for African Americans before, during, and after the Civil War, he is almost universally acclaimed as the greatest figure of the day in that long struggle. Born enslaved, often hungry and clad in rags, he was once hired out to a so-called “slave-breaker” who frequently whipped him savagely. But, like Abraham Lincoln, he proved himself a remarkable autodidact who not only taught himself to read but managed to obtain a solid education that was to shape a clearly sophisticated intellect. He escaped to freedom, and distinguished himself as orator, author, and activist. Lincoln welcomed him at the White House. He lived long enough to see much of the dreams of his youth realized, as well as many of his hopes for the future dashed. He was Frederick Douglass.

At first glance, it seemed a bit odd and even unsettling to find these two men juxtaposed in The Failed Promise: Reconstruction, Frederick Douglass, and the Impeachment of Andrew Johnson [2021], but it was that very peculiarity that drew me to this kind of dual biography by Robert S. Levine, a scholar of African American literature who has long focused on the writings of Frederick Douglass. But back to that first glance: it seemed to me that the more elegant contrast would have been of Johnson and Ulysses S. Grant, since the latter was the true heir to Lincoln’s (apparent) moderate stances on reconciliation with the south that also promoted the well-being of the formerly enslaved—which at times put Grant uncomfortably at odds with both Johnson and his eventual opponents who controlled Congress, the Radical Republicans, who were hell-bent on punishing states once in rebellion while insisting upon nothing less than a social revolution that mandated equality for blacks in every arena. Meanwhile, while Johnson was president of the United States in 1865, Douglass himself had neither basic civil rights nor the right to vote in the next election.

Still, with gifted prose, a fast-paced narrative, and a talent for analysis that one-ups a number of credentialed historians of this era, Levine sets out to demonstrate that Johnson’s real rival in his tumultuous tenure was neither Grant nor a recalcitrant Congress, but rather Douglass who—much like Martin Luther King a full century later—unshakably occupied the moral high ground. In this, he mostly succeeds.

The outline of his story of Johnson is a mostly familiar one, yet punctuated by some keen insights into the man overlooked in other studies. Johnson, who (also like Lincoln) grew from poverty to prominence, was a Democrat who served as governor of Tennessee and later as a member of Congress. A staunch Unionist, he was the only sitting senator from a seceding state who did not resign his seat. Lincoln made him Military Governor of Tennessee soon after it was reoccupied, and in 1864 he replaced Hannibal Hamlin as Lincoln’s running mate on the Republican Party’s rechristened “National Union” ticket in an election Lincoln felt certain he would lose. Johnson showed up drunk on inauguration day—sparking an unresolved controversy over whether the cause was recreation or self-medication—which tarnished his reputation in some quarters. Still, there were some among the Radical Republicans who wished that Johnson was the president and not Lincoln. Johnson, a former slaveowner who had first emancipated his own human property and later Tennessee’s entire enslaved population, had an abiding hatred for the plantation elites who had long scorned men of humble beginnings like himself, and a deep anger towards those who had severed the bonds of union with the United States. He seemed to many in Congress like the better agent to wreak revenge upon the conquered south for the hundreds of thousands of lives lost to war than the conciliatory Lincoln, who was willing to welcome seceded states back into the fold if a mere ten percent of their male population took loyalty oaths to the union.

The inauguration with an inebriated Johnson in attendance took place on March 4, 1865. On April 9, Lee surrendered at Appomattox. On April 15, Lincoln was dead and Johnson was president. Quietly—very quietly indeed—some Radical Republicans rejoiced. Lincoln had led them through the war, but now Johnson would be the better man to make the kind of unforgiving peace they had in mind. Moreover, Johnson—who had styled himself as “Moses” to African Americans in Tennessee as he preemptively (and illegally) freed them statewide in 1864—seemed like the ideal candidate to lead their crusade to foster a new reality for the defeated south that would crush the Confederates while promoting civil equality for their formerly chattel property. In all this, they were to be proved mistaken.

Meanwhile, Douglass brooded—and entertained hopes for Johnson not unlike those of his white allies in Congress. While there’s no evidence that he celebrated Lincoln’s untimely demise, Levine brilliantly reveals that Douglass’s appraisal of Lincoln evolved over time, that his own idolatry for the president was a creature of his later reflections, long after the fact, when he came to fully appreciate in retrospect not only what Lincoln had truly achieved but how deeply the promise of Reconstruction was irrevocably derailed by his successor. In their time, they had forged a strong relationship and even a bond of sorts, but Douglass consistently had doubts about Lincoln’s real commitment to the cause of African American freedom and civil liberties. Douglass took seriously Lincoln’s onetime declaration that “If I could save the Union without freeing any slave I would do it,” and he was suitably horrified by what that implied. Like some in Congress, Douglass was deluded by the fantasy of what Johnson’s accession might mean for the road ahead. This serves both as a strong caution and timely reminder to all of us in the field that it is critical to evaluate not only what was said or written by any individual in the past, but when it was said or written.

The author’s analysis of Johnson proves fascinating. Levine maintains that Johnson’s contempt for the elites who once disdained him was genuine, but that this was counterbalanced by his secret longing for their acceptance. And he reveled in freeing and enabling the enslaved, but only paternalistically and only ever on his own terms. If he could not be Moses, he would be Pharaoh. Levine also argues that whatever his flaws—and they were manifold—Johnson’s vision of his role as president in Reconstruction mirrored Lincoln’s. Lincoln believed that Reconstruction must flow primarily from the executive branch, not the legislative, and he intended to direct it as such. Lincoln’s specific plans died with him, but Johnson had his own ideas. This suggests that it is just as likely there would have been a clash between Lincoln and the Congress had he lived, although knowing what we know of Lincoln we might speculate at more positive results.

Levine breaks no new ground in his coverage of the failed impeachment, which the narrative treats without the kind of scrutiny found, for instance, in Impeached: The Trial of President Andrew Johnson and the Fight for Lincoln’s Legacy, by David O. Stewart. But there is the advantage of added nuance in this account because it is enriched by the presence of Douglass as spectator and sometime commentator. And here is Levine’s real achievement: it is through Douglass’s eyes that we can vividly see the righteous cause of emancipation won, obtained at least partially with the blood of United States Colored Troops (USCT); a constitutional amendment passed forever prohibiting human chattel slavery; and subsequent amendments guaranteeing civil rights, equality, and the right to vote for African Americans. And through those same eyes we witness the disillusion and disgust as the accidental president turns against everything Douglass holds dear. Those elite slaveholders who led rebellion, championing a proud slave republic, have their political rights restored and later show up as governors and members of Congress. The promise of Reconstruction is derailed, replaced by “Redemption” as unreconstructed ex-Confederates recapture the statehouses, black codes are enacted, and African Americans and their white allies are terrorized and murdered. Constitutional amendments turn moot. The formerly enslaved, once considered three-fifths of a person, are now counted as full citizens but, despite the 15th Amendment, denied the vote at the point of a gun, so representation for the former slave states that engineered the war effectively increases after rejoining the union.
That union has been restored with the sacrifice of more than six hundred thousand lives, and while slavery is abolished, Douglass grows old watching white men on both sides of the Mason-Dixon reconcile and embrace a “Lost Cause” ideology that enshrines repression and leads to the erasure of African Americans from Civil War history.

That Levine is a professor of literature rather than of history is perhaps why the story he relates has a more emotional impact upon the reader than it might have if rendered by those with roots in our own discipline. The scholarship is by no means lacking, as evidenced by the ample citations in the thick section of notes at the end of the volume, but thankfully he eschews the dry, academic tone that tends to dominate the history field. This is a work equally attractive to a popular or scholarly audience, something that should be both celebrated and emulated. As an added bonus, he includes as an appendix Douglass’s 1867 speech, “Sources of Danger to the Republic,” which argues for constitutional reforms that nicely echo down to our own times. Among other things, Douglass boldly calls for eliminating the position of vice president to avoid accidental presidencies (such as that of Andrew Johnson!) and for curbing executive authority. It is well worth the read and unfortunately not easy to access elsewhere except through a paywall. The Failed Promise is an apt title: the optimism at the dawn of Reconstruction holds so much appeal because we know all too well the tragedy of its outcome. To get a sense of how it began, as well as how it went so wrong, I recommend this book.

 

Here’s a link to a rare free online transcript of Frederick Douglass’s 1867 speech: “Sources of Danger to the Republic”

I reviewed Stewart’s book here: Impeached: The Trial of President Andrew Johnson and the Fight for Lincoln’s Legacy, by David O. Stewart

Review of: Love & Duty: Confederate Widows and the Emotional Politics of Loss, by Angela Esco Elder

In 2008, one hundred forty-three years after Appomattox, ninety-three-year-old Maudie Hopkins of Arkansas passed away, most likely the final surviving widow of a Confederate Civil War soldier. Like her literary counterpart Lucy Marsden, the star of Allan Gurganus’s delightful novel Oldest Living Confederate Widow Tells All, Maudie married an elderly veteran decades after the war ended and benefited from his pension. The widows are gone, but their legacy is still celebrated by the United Daughters of the Confederacy (UDC), a neo-Confederate organization composed of female descendants of rebel soldiers that promotes the “Myth of the Lost Cause” and has long been associated with white supremacy.

But for students of the Civil War curious about the various fates of southern women whose husbands were among the many thousands who lost their lives at places like Shiloh and Chancellorsville, or in some random hospital tent, there is little to learn from the second-hand tales of either the fictional Lucy Marsden or the real-life Maudie Hopkins—and even less from the pseudohistorical fantasies peddled by the UDC. For that, fortunately, there is the outstanding recent work by historian Angela Esco Elder, Love & Duty: Confederate Widows and the Emotional Politics of Loss (2022), a well-written and surprisingly gripping narrative that brings a fresh perspective to a mostly overlooked corner of Civil War studies.

Something like 620,000 soldiers died during four years of Civil War, far more from disease than from bullets or bayonets, but for most of the estimated 200,000 left widowed on both sides, the specific cause was less significant than the shock, pain, and lingering tragedy of loss. This they shared, north and south alike. But for a variety of reasons, the aftermath for the southern widow was substantially more complicated, often more desperate, and for many bred a suffering not only persistent but perhaps chronic. Southern women not only lost a husband; they also lost a war and a way of life.

Building upon Drew Gilpin Faust’s magnificent and groundbreaking study, This Republic of Suffering: Death and the American Civil War, Elder nevertheless carves her own unique corridor to a critical realm of the past too often treated superficially or not at all. In the process, she engages the reader on a journey peopled with long-dead characters who spring to life at the stroke of her pen, enriched with anecdote while anchored to solid scholarship. I have read deeply in Civil War literature. While this topic interested me, I approached the book with some trepidation: after all, this is a theme that in the wrong hands could be a dull slog. Instead, it turned out to be a page-turner! Thus the author has managed to attain a rare achievement in our field: she has written a book equally attractive to both a scholarly and a popular audience.

In war, the southern woman endured a reality far more difficult than her northern counterpart. For one thing, the war was either on—or potentially on—her doorstep. Other than brief, failed rebel incursions on the north, the American Civil War was fought almost entirely on the territory of the seceded states that formed the Confederacy. A certain menace ever loomed over that landscape. With imports choked off, there were critical shortages of goods, not simply luxuries but everyday items all households counted on. This was exacerbated by inflation that dramatically increased the cost of living.

And then there were the enslaved. Today’s “Lost Cause” proponents would insist that the war had nothing to do with what the south once styled as its “peculiar institution,” but the scholarly consensus has long established that slavery was the central cause of the war. For those on the home front, that translated into a variety of complex realities. While most southerners did not themselves own human property, communities lived in fear of violent uprisings, even imagined ones. For that segment whose households included the enslaved, there was the matter of managing a population held in bondage, whether large or small, with the men away at war. And most of the men were indeed away. In such a slave-based society, labor to support the infrastructure was performed by the enslaved, freeing up a much larger proportion of military-age males to go off to war.

For women, all this was further complicated by a culture that disdained manual labor for white men or women, and placed women on a romantic pedestal where they also functioned primarily as property of sorts: of their husbands or fathers. As if all of that were not challenging enough, when the war was over the south lay in ruins: its economy shattered, its chattel slaves freed, its outlook utterly bleak. Even bleaker was the reality of the Confederate widow.

Elder succeeds in Love & Duty where others might have failed in that she launches the story with a magnetic snapshot of what life was like for southern women in the antebellum era in a culture of hyperbolic chivalry and courtship rituals and idealistic images of how a proper young lady should look and behave. Like James Cameron in the first part of the film Titanic, she vividly demonstrates what life was like prior to the metaphorical shipwreck, showcasing the experiences of a cast of characters far more fascinating than any contrived in fiction.

Among these, perhaps the most unforgettable is Octavia “Tivie” Bryant, a southern belle in the grand style of the literature whom we encounter when she is only fourteen, courted by the twenty-six-year-old plantation owner Winston Stephens. Tivie’s father objects and they are parted, but the romance never cools, and a few years later there is a kind of fairy-tale wedding. But life intervenes. In 1864, a sniper’s bullet takes Winston and abruptly turns twenty-two-year-old Tivie into a widow. She is inconsolable. She lives on with a grief she can never reconcile until she finally passes on in 1908, more than forty years later.

But those who have read Catherine Clinton’s brilliant Tara Revisited: Women, War & The Plantation Legend are well aware that most southern women could not boast lives like Scarlett O’Hara’s—or Tivie’s, for that matter. Clinton underscored that while there were indeed women like Scarlett from families of extreme wealth who lived on large plantations with many slaves and busied themselves with social dalliances, her demographic comprised the tiniest minority of antebellum southern women. In fact, plantation life typically meant hard work and much responsibility even for affluent women. And for others—wives and daughters of very modest means with no slaves, deprived of husbands and fathers away at war while they struggled to survive—it could be brutally demanding, before the war and even more so during it. Many of the widows that Elder profiles represent this cohort, providing the reader with a colorful panorama of what life was really like for those far from the front as the Confederate cause was gradually but inexorably crushed on battlefields west and east.

Elder’s account is especially effective because she exposes the reader to the full range of Confederate widows and how they coped with their grief (or lack thereof), responses that of course ran the gamut of human experience. In this deeply patriarchal society, a man who lost his spouse was expected to wear a black armband and mourn for a matter of months. A woman, on the other hand, was to dress only in black and remain in mourning a full two and a half years. Not all complied. We might have empathy for poor Tivie, who wallowed in her agony for decades, but on the other hand, her station in life permitted her an extended period of grieving, for better or worse. Others lacked that option. Many struggled just to survive. Some lost or had to give up their children. Some turned to prostitution. Some turned to remarriage.

In war or peace, not every woman is devastated by the death of her spouse, especially if he was lecherous, adulterous, or abusive. Nineteenth-century women, north and south, were essentially the property of first their fathers and then their husbands. In reality, this was even more true for southern women, subject to the sometimes-twisted romantic idealism of their culture. Some deaths were welcomed, albeit quietly. Elder relates that:

In 1849, one wife petitioned the North Carolina courts for a divorce after her husband continuously drank heavily, beat her, locked her out of the house overnight, slept with an enslaved woman, and in one instance, forced his wife to watch them have sex. The chief justice did not grant the absolute dissolution of the marriage, believing there was reasonable hope for the couple’s reconciliation. Another wife, in Virginia, would flee to the swamps when her husband drank. If he caught her in the kitchen seeking protection from the weather, he attacked “with his fists and with sticks” …  And within the patriarchy, men maintained the right to correct their wives … Alvin Preslar beat his wife so brutally that she fled with two of her children toward her father’s house, dying before she reached it. Three hundred people petitioned against Preslar’s sentence to hang, arguing his actions were not intentional but rather “the result of a drunken frolic.” [p29]

A woman who managed to survive a marriage to such a man likely would not mourn like Tivie if a bullet—or measles—took him from her far from home.

Some of the best portions of this book focus upon specific individuals. One of my favorites is the spotlight on Emilie Todd Helm, a younger sister of Mary Todd Lincoln, whose husband General Benjamin Hardin Helm was killed at Chickamauga. Emilie sought to return home from Georgia to the border state of Kentucky, but was denied entry because she refused to take a loyalty oath to the Union. Her brother-in-law President Lincoln himself intervened and an exception was made. Elder reports that:

When Emilie approached the White House in 1863, she was “a pathetic little figure in her trailing black crepe.” Her trials had transformed the beautiful woman into a “sad-faced girl with pallid cheeks, tragic eyes, and tight, unsmiling lips.” Reunited with Abe and Mary, Emilie wrote, “we were all too grief-stricken at first for speech…. We could only embrace each other in silence and tears.” Certainly, the war had not been easy on the Lincolns either. The Todd sisters had lost two brothers, Mary had lost a son, and Emilie’s loss of Benjamin gave them much to grieve over together. “I never saw Lincoln more moved,” recalled Senator David Davis, “than when he heard of the death of his young brother-in-law, Helm, only thirty-two-years-old, at Chickamauga.”… ‘Davis,’ said he, ‘I feel as David of old did when he was told of the death of Absalom. Would to God that I had died for thee, oh, Absalom, my son, my son?’” … Emilie and Mary found comfort in each other’s company, but their political differences divided them. [p91]

When Emilie, still loyal to the Confederacy, later petitioned to sell her cotton crop despite wartime strictures to the contrary, Lincoln refused her.

No review can appropriately assess the extent of Elder’s achievements in Love & Duty, but there is much worthy of praise. Are there shortcomings? In the end, I was left wanting more. I would have liked Elder to better connect the experiences of actual widows with the UDC myths that later subsumed them. I also wanted more on the experiences of the enslaved who lived in the shadows of these white widows. Finally, I thought there were too many direct references to Drew Gilpin Faust in the narrative. Yes, the author admires Faust and yes, Faust’s scholarship is extraordinary, but Elder’s work is a significant contribution to the historiography on its own: let’s let Faust live on in the endnotes, as is appropriate. But … these are quibbles. This is a fine work, and if you are invested in Civil War studies it belongs on your bookshelf—and in your lap, turning each page!

 

I reviewed the Drew Gilpin Faust book here: Review of: This Republic of Suffering: Death and the American Civil War, by Drew Gilpin Faust

I reviewed the Catherine Clinton book here:  Review of: Tara Revisited: Women, War & The Plantation Legend, by Catherine Clinton

 

Review of: Searching for Black Confederates: The Civil War’s Most Persistent Myth, by Kevin M. Levin

In March 1865, just weeks before the fall of Richmond that was to be the last act ahead of Appomattox, curious onlookers gathered in that city’s Capitol Square to take in a sight not only never before seen but hardly ever even imagined: black men drilling in gray uniforms—a final desperate gasp by a Confederacy truly on life support. None were to ever see combat. Elsewhere, it is likely that a good number of the ragged white men marching with Robert E. Lee’s shrunken Army of Northern Virginia were aware of this recent development. News travels fast in the ranks, and after all it was pressure from General Lee himself that finally won over adamant resistance at the top to enlist black troops. We can suppose that many of Lee’s soldiers—who had over four years seen much blood and treasure spent to guard the principle that the most appropriate condition for African Americans was in human bondage—were quite surprised by this strange turn of events. But more than one hundred fifty years later, the ghosts of those same men would be astonished to learn that today’s “Lost Cause” celebrants of the Confederacy insist that legions of “Black Confederates” had marched alongside them throughout the struggle.

In Searching for Black Confederates: The Civil War’s Most Persistent Myth (2019), historian Kevin M. Levin brings thorough research and outstanding analytical skills to an engaging and very well-written study of how an entirely fictional, ahistorical notion not only found life, but also the oxygen to gain traction and somehow spawn an increasingly large if misguided audience. For those committed to history, Levin’s effort arrived not a moment too soon, as so many legitimate Civil War groups—on and off social networking—have come under assault by “Lost Cause” adherents who have weaponized debate with fantastical claims that lack evidence in the scholarship but are cleverly packaged and aggressively peddled to the uninformed. The aim is to sanitize history in an attempt to defend the Confederacy, shift the cause of secession from slavery to states’ rights, refashion their brand of slavery as benevolent, and reveal purported long suppressed “facts” allegedly erased by today’s “woke” mob eager to cast the south’s doomed quest to defend their liberty from northern aggression in a negative light. In this process, the concept of “Black Confederates” has turned into their most prominent and powerful meme, winning converts of not only the uninitiated but sometimes, unexpectedly, of those who should know better.

What has been dubbed the “Myth of the Lost Cause” was born of the smoldering ashes of the Confederacy. The south had been defeated; slavery not only outlawed but widely discredited. Many of the elite southern politicians who back in 1861 had proclaimed the Confederate States of America a “proud slave republic” after fostering secession because Lincoln’s Republicans would block their peculiar institution from the territories, now rewrote history to erase slavery as their chief grievance. Attention was instead refocused on “states’ rights,” which in prior decades had mostly served as euphemism for the right to own human beings as property. Still, the scholarly consensus has established that slavery was indeed the central cause of the war. As Gary Gallagher, one of today’s foremost Civil War historians, has urged: pay attention to what they said at the dawn of the war, not what they said when it was over. Of course, for those who promote the Lost Cause, it is just the opposite.

There are multiple prongs to the Lost Cause strategy. One holds slavery as a generally benign practice with deep roots reaching back to biblical times, along with a whiff of the popular antebellum trope that juxtaposed the enslaved with beleaguered New England mill workers, maintaining that the former lived better, more secure lives as property—and that they were content, even pleased, with their station in life. This theme was later exploited with much fanfare in the fiction and film of Gone with the Wind, with such memorable episodes as the enslaved Prissy screeching in terror that “De Yankees is comin!”—a cry that in real life would far more likely have been in celebration than distress.

But, as Levin reveals through careful research, the myth of black men in uniform fighting to defend the Confederacy did not emerge until the 1970s, as the actual treatment of African Americans—in slavery, in Jim Crow, as second-class citizens—became widely known to a much larger audience. This motivated Lost Cause proponents to not only further distance the southern cause from slavery, but to invent the idea that blacks actually laid down their lives to preserve it. In the internet age, this most conspicuously translated into memes featuring out-of-context photographs of black men clutching muskets and garbed in gray … the “Black Confederates” who bravely served to defend Dixie against marauding Yankees.

All of this seems counterintuitive, which is why it is remarkable that the belief not only caught on but has grown in popularity. In fact, some half million of the enslaved fled to Union lines over the course of the war. Two hundred thousand black men formed the ranks of the United States Colored Troops (USCT); ultimately a full ten percent of the Union Army was composed of African Americans. If captured, blacks were returned to slavery or—all too frequently—murdered as they attempted to surrender, as at Fort Pillow, the Battle of the Crater, and elsewhere. The idea that African Americans would willingly fight for the Confederacy seems not only unlikely, but insane.

So what about those photographs of blacks in rebel uniforms? What is their provenance? To find out, Levin begins by exploring what life was like for white Confederates. In the process, he builds upon Colin Woodward’s brilliant 2014 study, Marching Masters: Slavery, Race, and the Confederate Army During the Civil War. Woodward challenged the popular assumption that while most rebels fought for southern independence, they remained largely agnostic about the politics of slavery, especially since only a minority were slaveowners themselves. Disputing this premise, Woodward argued that the peculiar institution was never some kind of abstract notion to the soldier in the ranks, since tens of thousands of blacks accompanied Confederate armies as “camp slaves” throughout the course of the war! (Many Civil War buffs are shocked to learn that Lee brought as many as six to ten thousand camp slaves with him on the Gettysburg campaign—this while indiscriminately scooping up any blacks encountered along the way, both fugitive and free.)

Levin skips the ideological debate at the heart of Woodward’s thesis while bringing focus to the omnipresence of the enslaved, whose role was entirely non-military, devoted instead to performing every kind of labor that would be part of the duties of soldiers on the other side: digging entrenchments, tending to sanitation, serving as teamsters and cooks, and so on. Many were subject to impressment by the Confederate government to support the war effort, while others were the personal property of officers or enlisted men, body servants who accompanied their masters to the front. According to Levin, it turns out that some of the famous photographs of so-called Black Confederates were of these enslaved servants whom their owners dressed up for dramatic effect in the studio, decked out in a matching uniform with musket and sword—before even marching off to war. Once in camp, of course, these men would no longer be in costume: they were slaves, not soldiers.

After the war, legends persisted of loyal camp slaves who risked their lives under fire to tend to a wounded master or brought their bodies home for burial. While likely based upon actual events, the number of such occurrences was certainly overstated in Lost Cause lore that portrayed the enslaved as not only content to be chattel but even eager to assist those who held them as property. Also, as Reconstruction fell to Redemption, blacks in states of the former Confederacy who sought to enjoy rights guaranteed to them by the Fourteenth and Fifteenth Amendments were routinely terrorized and frequently murdered. For African Americans who faced potentially hostile circumstances, championing their roles as loyal camp slaves, real or imagined, translated into a survival mechanism. Meanwhile, whites who desperately wanted to remember that which was contrived or exaggerated zealously hawked such tales, later came to embrace them, and then finally enshrined them as incontrovertible truth, celebrated for decades hence at reunions where former camp slaves dutifully made appearances to act the part.

Still later, there was an intersection of such celebrity with financial reward, when southern states began to offer pensions for veterans and some provision was made for the most meritorious camp slaves. But, at the end of the day, these men remained slaves, not soldiers. Nevertheless, more than a full century later, many of these pensioners were transformed into Black Confederates. And some of them populate the memes of a now resurgent Lost Cause, often inextricably entwined with today’s right-wing politics.

It is certainly likely that handfuls of camp slaves may have, on rare occasions, taken up a weapon alongside their masters and fired at soldiers in blue charging their positions. Such reports exist, even if these cannot always be corroborated. In the scheme of things, these numbers are certainly minuscule. And, of course, in every conflict there are collaborators. But the idea that African Americans served as organized, uniformed forces fighting for the south not only lacks evidence but rationality.

Yet, how can we really know for certain? For that, we turn to a point Levin makes repeatedly in the narrative: there are simply no contemporaneous accounts of such a thing. It has elsewhere been estimated that soldiers in the Civil War, north and south, collectively wrote several million letters. Tens of thousands of these survive, and touch on just about every imaginable topic. Not one refers to Black Confederate troops in the field.

On the other hand, quite a few letters home reference the sometimes-brutal discipline inflicted upon disobedient camp slaves. In one, a Georgia Lieutenant informed his wife that he whipped his enslaved servant Joe “about four hundred lashes … I tore his back and legs all to pieces. I was mad enough to kill him.” Another officer actually did beat a recalcitrant slave to death [p26-27]. Such acts went unpunished, of course, and that they were so frankly and unremarkably reported in letters to loved ones speaks volumes about the routine cruelty of chattel slavery while also contradicting modern fantasies that black men would willingly fight for such an ignoble cause. The white ex-Confederates who later hailed the heroic and loyal camp slave no doubt willingly erased from memory the harsh beatings that could characterize camp life; the formerly enslaved who survived likely never forgot.

Searching for Black Confederates is as much about disproving their existence as it is about the reasons some insist against all evidence that they did. With feet placed firmly in the past as well as the present, Levin—who has both a talent for scholarship as well as a gifted pen—has written what is unquestionably the definitive treatment of this controversy, and along the way has made a significant contribution to the historiography. The next time somebody tries to sell you on “Black Confederates,” advise them to read this book first, and then get back to you!

 

I reviewed the Woodward book here: Review of: Marching Masters: Slavery, Race, and the Confederate Army During the Civil War, by Colin Edward Woodward

Review of: Wrestling With His Angel: The Political Life of Abraham Lincoln Vol. II, 1849-1856, and All the Powers of Earth: The Political Life of Abraham Lincoln Vol. III, 1856-1860, by Sidney Blumenthal

On November 6, 1860, Abraham Lincoln was elected the 16th president of the United States, although his name did not appear on the ballot in ten southern states. Just about six weeks later, South Carolina seceded. This information is communicated in only the final few of the more than six hundred pages contained in All the Powers of Earth: The Political Life of Abraham Lincoln Vol. III, 1856-1860, the ambitious third installment in Sidney Blumenthal’s projected five-volume series. But this book, just as the similarly thick ones that preceded it, is burdened neither by unnecessary paragraphs nor even a single gratuitous sentence. Still, most noteworthy, Abraham Lincoln—the ostensible subject—is conspicuous in his absence in vast portions of this intricately detailed and extremely well-written narrative that goes well beyond the boundaries of ordinary biography to deliver a much-needed re-evaluation of the tumultuous age that he sprang from in order to account for how it was that this unlikely figure came to dominate it. The surprising result is that through this unique approach, the reader will come to know and appreciate the nuance and complexity that was the man and his times like never before.

In the standard textbooks of my school days, Lincoln seems to come out of nowhere. A homespun prairie lawyer who served a single, unremarkable term in the House of Representatives, he is thrust into national prominence when he debates Stephen A. Douglas in his ultimately unsuccessful campaign for the U.S. Senate, then somehow rebounds just two years later by skipping past Congress and into the White House. Douglas, once one of the most well-known and consequential figures of his day, slips into historical obscurity. Meanwhile, long-simmering sectional disputes between white men on both sides roar to life with Lincoln’s election, sparking secession by a south convinced that its constitutional rights and privileges are under assault. Slavery looms just vaguely on the periphery. Civil War ensues, an outgunned Confederacy falls, Lincoln is assassinated, slavery is abolished, national reconciliation follows, and African Americans are even more thoroughly erased from history than Stephen Douglas.

Of course, the historiography has come a long way since then. While fringe “Lost Cause” adherents still speak of states’ rights, the scholarly consensus has unequivocally established human chattel slavery as the central cause of the conflict, as well as resurrected the essential role of African Americans—who comprised a full ten percent of the Union army—in putting down the rebellion. In recent decades, this has motivated historians to reexamine the prewar and postwar years through a more polished lens. That has enabled a more thorough exploration of an antebellum period too long cluttered with grievances of far less significance, such as the frictions of rural vs. urban, agriculture vs. industry, and tariffs vs. free trade. Such elements may indeed have exacerbated tensions, but without slavery there could have been no Civil War.

And yet … and yet with all the literature that has resulted from this more recent scholarship, much of it certainly superlative, students of the era cannot help but detect the shadows of missing bits and pieces, like the dark matter in the universe we know exists but struggle to identify. This is at least partially due to timelines that fail to properly chart root causes that far precede traditional antebellum chronologies that sometimes look back no further than the Mexican War—which at the same time serves as a bold underscore to the lack of agreement on even a consistent “start date” for the antebellum. Not surprisingly perhaps, this murkiness has also crept into the realm of Lincoln studies, to the disfavor of genres that should be complementary rather than competing.

In fact, the trajectory of Lincoln’s life and the antebellum era are inextricably conjoined, a reality that Sidney Blumenthal brilliantly captures with a revolutionary tactic that chronicles these as a single, intertwined narrative, beginning with A Self-Made Man: The Political Life of Abraham Lincoln Vol. I, 1809–1849 (which I reviewed elsewhere). It is evident that at Lincoln’s birth the slave south already effectively controlled the government, not only by way of a string of chief executives who also happened to be Virginia plantation dynasts, but—of even greater consequence—through outsize representation obtained via the Constitution’s “Three-Fifths Clause.” But even then, there were signs that the slave power—pregnant with an exaggerated sense of its own self-importance, a conviction of moral superiority, and a ruthless will to dominate—possessed an unquenchable appetite for enlarging its extraordinary political power to steer the ship of state, frequently enabled by northern men of southern sympathies then disparaged as “doughfaces.” Lincoln was eleven at the time of the Missouri Compromise, twenty-three during the Nullification Crisis so closely identified with John C. Calhoun, twenty-seven when the first elements of the “gag rule” in the House so ardently opposed by John Quincy Adams were instituted, and thirty-seven at the start of both the Mexican War and his sole term as an Illinois congressman, during which he questioned the legitimacy of that conflict. That same year, Stephen A. Douglas, also of Illinois, was elected U.S. Senator.

Through it all, the author proves as adept a historian of the United States as he is a biographer of Lincoln—who sometimes goes missing for a chapter or more, summoned only when the account calls for him to make an appearance. Some critics have voiced frustration at Lincoln’s absence for extended portions of what is, after all, his own biography, but they seem to be missing the point. As Blumenthal demonstrates in this and subsequent volumes, it is not only impossible to study Lincoln without surveying the age in which he walked the earth, but it turns out to be equally impossible to analyze the causes of the Civil War absent an analysis of Lincoln, because he was such a critical figure along the way.

Wrestling With His Angel: The Political Life of Abraham Lincoln Vol. II, 1849-1856, picks up where A Self-Made Man leaves off, and that in turn is followed by All the Powers of Earth. All form a single unbroken narrative of politics and power, something that happens to fit with my growing affinity for political biography, as exemplified by David O. Stewart’s George Washington: The Political Rise of America’s Founding Father, Jon Meacham’s Thomas Jefferson: The Art of Power, and Franklin D. Roosevelt: A Political Life, by Robert Dallek. Blumenthal, of course, takes this not only to a whole new level, but to an entirely new dimension.

For more recent times, the best of the best in this genre appears in works by historian Rick Perlstein (author of Nixonland and Reaganland), who also happens to be the guy who recommended Blumenthal to me. In the pages of Perlstein’s Reaganland, Jimmy Carter occupies center stage far more so than Ronald Reagan, since without Carter’s failed presidency there never could have been a President Reagan. Similarly, Blumenthal cedes a good deal of Lincoln’s spotlight to Stephen A. Douglas, Lincoln’s longtime rival and the most influential doughface of his time. Many have dubbed John C. Calhoun the true instigator of the process that led to Civil War a decade after his death. And while that reputation may not be undeserved, it might be overstated. Calhoun, a southerner who celebrated slavery, championed nullification, and normalized notions of secession, could indeed be credited with paving the road to disunion. But, as Blumenthal skillfully reveals, maniacally gripping the reins of the wagon that in a confluence of unintended consequences was to hurtle towards both secession and war was the undersized, racist, alcoholic, bombastic, narcissistic, ambitious, pro-slavery but pro-union northerner Stephen A. Douglas, the so-called “Little Giant.”

Like Calhoun, Douglas was self-serving and opportunistic, with a talent for constructing an ideological framework for issues that suited his purposes. But unlike Calhoun, Douglas was a northern man, never accepted nor entirely trusted by the southern elite he toadied to—even while he often served their interests—in the perennially unrequited hope that they would back his presidential ambitions. Such support never materialized.

It may not have been clear at the time, and the history books tend to overlook it, but Blumenthal demonstrates that it was the rivalry between Douglas and Lincoln that truly defined the struggles and outcomes of the age. It was Douglas who—undeterred by the failed efforts of Henry Clay—shepherded through the Compromise of 1850, which included the Fugitive Slave Act that was such anathema to the north. More significantly, the 1854 Kansas-Nebraska Act that repealed the Missouri Compromise was Douglas’s brainchild, and Douglas was to continue to champion his doctrine of “popular sovereignty” even after Taney’s ruling in Dred Scott invalidated it. It was Douglas’s fantasy that he alone could unite the states of north and south, even as the process of fragmentation was well underway, a course he himself surely if inadvertently set in motion. Douglas tried to be everyone’s man, and in the end he was to be no one’s. Throughout all of this, over many years, Blumenthal argues, Lincoln—out of elective office but hardly a bystander—followed Douglas. Lincoln’s election brought secession, but if a sole agent were to be named for fashioning the circumstances that ignited the Civil War, that discredit would surely go to Douglas, not Lincoln.

These two volumes combined well exceed a thousand pages, not including copious notes and back matter, so no review can appropriately capture it all except to say that collectively they represent a magnificent achievement, one that succeeds in treating the reader to what the living Lincoln was like while recreating the era that defined him. Indeed, including his first book, I have thus far read nearly sixteen hundred pages of Blumenthal’s Lincoln and my attention has never wavered. Only Robert Caro—with his Shakespearean multi-volume biography of Lyndon Johnson—has managed to keep my interest as long as Blumenthal. And I can’t wait for the next two in the series to hit the press! To date, more than fifteen thousand books have been published about Abraham Lincoln, so there are many to choose from. Still, these from Blumenthal are absolutely required reading.

 

I reviewed Blumenthal’s first volume, A Self-Made Man, here:  Review of: A Self-Made Man: The Political Life of Abraham Lincoln Vol. I, 1809–1849, by Sidney Blumenthal

I reviewed Rick Perlstein’s Reaganland here:  Review of: Reaganland: America’s Right Turn 1976-1980, by Rick Perlstein

Featured

Review of: The Patriarchs: The Origins of Inequality, by Angela Saini

“Down With the Patriarchy” is both a powerful rallying cry and a fashionable emblem broadcast in memes, coffee mugs, tee shirts—and even, paired with an expletive, sung aloud in a popular Taylor Swift anthem. But what exactly is the patriarchy? Is it, as feminists would have it, a reflection of an entrenched system of male domination that defines power relationships between men and women in arenas public and private? Or, as some on the right might style it, a “woke” whine of victimization that downplays the equality today’s women have achieved at home and at work? Regardless, is male dominance simply the natural order of things, born out of traditional gender roles in hunting and gathering, reaping and sowing, sword-wielding and childbearing? Or was it—and does it remain—an artificial institution imposed from above and stubbornly preserved? Do such patterns run deep into human history, or are they instead the relatively recent by-products of agriculture, of settled civilization, of states and empires? Did other lifeways once exist? And finally, perhaps most significantly, does it have to be this way?

A consideration of these and other related questions, both practical and existential, forms the basis for The Patriarchs: The Origins of Inequality, an extraordinary tour de force by Angela Saini marked by both a brilliant gift for analysis and an extremely talented pen. Saini, a science journalist and author of the groundbreaking, highly acclaimed Superior: The Return of Race Science, one-ups her own prior achievements by widening the lens on entrenched inequalities in human societies to look beyond race as a factor, a somewhat recent phenomenon in the greater scheme of things, to that of gender, which—at least on the face of it—seems far more ancient and deep-seated.

To that end, in The Patriarchs Saini takes the reader on a fascinating expedition to explore male-female relationships—then and now—ranging as far back as the nearly ten-thousand-year-old proto-city Çatalhöyük in present-day Turkey, where some have suggested that female deities were worshipped and matriarchy may have been the status quo, and flashing forward to the still ongoing protests in Iran, sparked by the death in custody of a 22-year-old woman detained for wearing an “improper” hijab. There are many stops in between, including the city-states of Classical Greece, which saw women controlled and even confined by their husbands in democratic Athens, yet celebrated for their strength and independence (of a sort) in the rigidly structured autocracy that defined the Spartan polis.

But most of the journey is contemporary and global in scope, from Seneca Falls, New York, where many Onondaga Native American women continue to enjoy a kind of gender equality that white American women could hardly imagine when they launched their bid for women’s rights in that locale in 1848, to the modern-day states of Kerala and Meghalaya in India, which still retain deeply-rooted traditions of the matrilineal and the matriarchal, respectively, in a nation where arranged marriages remain surprisingly common. And to Afghanistan, where the recently reinstalled Taliban regime prohibits the education of girls and mandates the wearing of a burqa in public, and Ethiopia, where in many parts of the country female genital mutilation is the rule, not the exception. There are even interviews with European women who grew up in the formerly socialist eastern bloc, some of whom look back wistfully to a time marked by better economic security and far greater opportunities for women, despite the repression that otherwise characterized the state.

I’m a big fan of Saini’s previous work, but still I cracked the cover of her latest book with some degree of trepidation. This is, after all, such a loaded topic that it could, if mishandled, too easily turn to polemic.  So I carefully sniffed around for manifesto-disguised-as-thesis, for axes cleverly cloaked from grinding, for cherry-picked data, and for broad brushes. (Metaphors gleefully mixed!) Thankfully, there was none of that. Instead, she approaches this effort throughout as a scientist, digging deep, asking questions, and reporting answers that sometimes are not to her liking. You have to respect that. My background is history, a study that emphasizes complexity and nuance, and mandates both careful research and an analytical evaluation of relevant data. Both science and history demand critical thinking skills. In The Patriarchs, Saini demonstrates that she walks with great competence in each of these disciplines.

A case in point is her discussion of Çatalhöyük, an astonishing neolithic site first excavated by English archaeologist James Mellaart in the late 1950s that revealed notable hallmarks of settled civilization uncommon for its era. Based on what he identified as figurines of female deities, such as the famous Seated Woman of Çatalhöyük that dates back to 6000 BCE, Mellaart claimed that a “Mother Goddess” culture prevailed. The notion that goddesses once dominated a distant past was dramatically boosted by Lithuanian archaeologist and anthropologist Marija Gimbutas, who wrote widely on this topic, and argued as well that a peaceful, matriarchal society was characteristic of the neolithic settlements of Old Europe prior to being overrun by Indo-European marauders from the north who imposed a warlike patriarchy upon the subjugated population.

I squirmed a bit in my seat as I read this, knowing that the respective conclusions of both Mellaart and Gimbutas have since been, based upon more rigorous studies, largely discredited as wildly overdrawn. But there was no need for such concerns, for in subsequent pages Saini herself points to current experts and the scholarly consensus to rebut at least some of the bolder assertions of these earlier archaeologists. It turns out that in both Çatalhöyük and Old Europe, while society was probably not hierarchical, it was likely more gender-neutral than matriarchal. It is clear that the author should be commended for her exhaustive research. While reading of Indo-European invaders—something Gimbutas got right—my thoughts instantly went to David Anthony’s magnificent study, The Horse, the Wheel, and Language: How Bronze-Age Riders from the Eurasian Steppes Shaped the Modern World, which I read some years back. When I thumbed ahead to the “Notes,” I was delighted to find a citation for the Anthony book!

It is soon clear that in her search for the origins of inequality, Saini’s goal is to ask more questions than to insist upon answers. Also increasingly evident is that even if it seems to have become more common in past centuries, patriarchy is not the norm. No, it doesn’t have to be this way. Perhaps matriarchy did not characterize Çatalhöyük—and we really can’t be certain—but there is evidence for matriarchal societies elsewhere; some still flourish to this day. History and events in the current millennium demonstrate that there are choices, and societies can—and we can—choose equality rather than a condition where one group is dominated by another based upon race, caste, or gender.

With all of the author’s questions and her search for answers, however, it is the journey that is most enjoyable. In such an expansive work of science, history, and philosophy, the narrative never bogs down. And while the scope is vast, it is only a couple of hundred pages. I actually found myself wanting more.

If there is one area where I would caution Saini, it is her treatment of ancient Greece. Yes, based upon the literature, Athenian women seem to have been stifled and Spartan women less inhibited, but of the hundreds of poleis that existed in the Classical period, we really only have relevant information for a few; surviving data is weighted heavily towards the elites of Athens and Sparta, and much of it is tarnished by editorializing on both sides that reflected the antipathy between these two bitter rivals. There is more to the story. Aspasia, the mistress of the Athenian statesman Pericles, was a powerful figure in her own right. Lysistrata, the splendid political satire created by the Athenian Aristophanes, smacks of a kind of ancient feminism as it has women on both sides of the Peloponnesian War denying sex to their men until a truce is called. This play could never have resonated if the female characters were wholly imagined. And while we can perhaps admire the status of a Spartan woman when juxtaposed with her Athenian counterpart, we must remember that their primary role in that rigid, militaristic society was to bear the sons of warriors.

But the station of a Spartan woman raises an interesting counterintuitive question that I had hoped Saini would explore. Why was it—and does it remain the case—that women seem to gain greater freedom in autocratic states than democratic ones? It is certainly anachronistic to style fifth-century Sparta as totalitarian, but the structure of the state seems to have far more in common with the twentieth-century Soviet Union and the People’s Republic of China, where despite repression women achieved far greater equality than they did in Athens or, at least until very recently, in Europe and the United States. And I really wanted a chapter on China, where the crippling horror of foot-binding for girls was not abolished until 1912, and still lingered in places until the communist takeover mid-century. Mao was responsible for the wanton slaughter of millions, yet women attained a greater equality under his brutal regime than they had for the thousands of years that preceded him.

While she touches upon it, I also looked for a wider discussion of how conservative women can sometimes come not only to represent the greatest obstacle to women’s rights but to advance rather than impede the patriarchy. As an American, there are many painful reminders of that here, where in decades past the antifeminist Phyllis Schlafly nearly single-handedly derailed passage of the Equal Rights Amendment. Most recently, it was a coalition of Republican and Christian evangelical women who led the crusade that eventually succeeded in curbing abortion rights. But then, as I wished for another hundred pages to go over all this, Saini summed up the incongruity succinctly in a discussion of female genital mutilation in Africa, citing the resistance to change by an Ethiopian girl who asserted: “If our mothers should refuse to continue cutting us, we will cut ourselves.” [p191]

In the end, Saini’s strategy was sound. The Patriarchs boasts a manageable size and the kind of readability that might be sacrificed in a bulkier treatise. The author doesn’t try to say it all: only what is most significant. Also, both the length and the presentation lend appeal to a popular audience, while the research and extensive notes will suit an academic one, as well. That is an especially rare accomplishment these days!

Whatever preconceived notions the reader might have, based upon the title and its implications, Saini demonstrates again and again that it’s not her intention to prove a point, but rather to make you think. Here she succeeds wonderfully. And you get the impression that it is her intellectual curiosity that guides her life. Born in London of ethnic Indian parents and now residing in New York City, she is a highly educated woman with brown skin, feet that can step comfortably into milieus west and east, and an insightful mind that fully embraces the possibilities of the modern world. Thus, Saini is in so many ways ideally suited to address issues of racism and sexism. She is still quite young, and this is her fourth book. I suspect there will be many more. In the meantime, read this one. It will be well worth your time.

 

Note: This review was based upon an Uncorrected Page Proof edition

Note: I reviewed Saini’s previous book here: Review of: Superior: The Return of Race Science, by Angela Saini

Featured

Review of: Yippie Girl: Exploits in Protest and Defeating the FBI, by Judy Gumbo

The typical American family of 1968 sitting back to watch the nightly news on their nineteen-inch televisions could be excused for sometimes gripping their armrests as events unfolded before them—for most in living color, but for plenty of others still on the familiar black-and-white sets rapidly going extinct. (I was eleven: we had a color TV!) The first seven months of that year were especially tumultuous.

There was January’s spectacular Tet Offensive across South Vietnam, which, while ultimately unsuccessful, stunned a nation still mostly deluded by assurances from Lyndon Johnson’s White House that the war was going according to plan. Then in February, the South Carolina highway patrol opened fire on unarmed black Civil Rights protestors on the state university campus, leaving three dead and more than two dozen injured in what was popularly called the “Orangeburg Massacre.” In March, a shaken LBJ announced in a live broadcast that he would not seek reelection. In April, Martin Luther King Jr. was assassinated, sparking riots in cities across the country. Only two days later, a fierce firefight erupted between the Oakland police and Black Panther Party members Eldridge Cleaver and “Lil’ Bobby” Hutton, which left two officers injured, Hutton dead, and Cleaver in custody; some reports maintain that seventeen-year-old Hutton was executed by police after he surrendered. Later that same month, hundreds of antiwar students occupied buildings on Columbia University’s campus until the New York City police violently broke up the demonstration, beating and arresting protesters. In May, Catholic activists known as the Catonsville Nine removed draft files from a Maryland draft board and set them ablaze in the parking lot. In June, Robert F. Kennedy was assassinated. In July, what became known as the “Glenville Shootout” saw black militants engaged in an extended gunfight with police in Cleveland, Ohio that left seven dead.

In August, just days before the streets outside the arena hosting the Democratic National Convention deteriorated into violent battles between police and demonstrators that later set the stage for the famous trial of the “Chicago Seven,” a group of Yippies—members of the Youth International Party that specialized in pranks and street theatre—were placed under arrest by the Chicago police while in the process of nominating a pig named “Pigasus” for president. In addition to Pigasus, those taken into custody included Yippie organizer Jerry Rubin, folk singer Phil Ochs, and activist Stew Albert. Present but not detained was Judy Gumbo, Stew’s girlfriend and a feminist activist in her own right.

Known for their playful anarchy, the Yippies were dismissed by many leaders of the New Left as “Groucho Marxists,” but for some reason the FBI, convinced they were violent insurrectionists intent on the overthrow of the United States government, became obsessed with the group, placing its members under intensive surveillance that lasted for years to come. A 1972 notation in Gumbo’s FBI files declared, without evidence, that she was “the most vicious, the most anti-American, the most anti-establishment, and the most dangerous to the internal security of the United States.” She was later to obtain copies of these files, which served as an enormously valuable diary of events of sorts for her (2022) memoir, Yippie Girl: Exploits in Protest and Defeating the FBI, a well-written if sometimes uneven account of her role in and around an organization at the vanguard of the potent political radicalism that swept the country in the late-sixties and early-seventies.

Born Judy Clavir in Toronto, Canada, she grew up a so-called “red diaper baby,” the child of rigidly ideological pro-Soviet communists. She married young and briefly to actor David Hemblen and then fled his unfaithfulness to start a new life in Berkeley, California in the fall of 1967, in the heyday of the emerging counterculture, and soon fell in with activists who ran in the same circles with new boyfriend Stew Albert. Albert’s best friends were Black Panther Eldridge Cleaver and Yippie founder Jerry Rubin. She squirmed when Cleaver referred to her as “Mrs. Stew,” insisting upon her own identity, until one day Eldridge playfully dubbed her “Gumbo”—since “gumbo goes with stew.” Ever after she was known as Judy Gumbo.

Gumbo took a job writing copy for a local newspaper, while becoming more deeply immersed in activism as a full-fledged member of the Yippies. As such, those in her immediate orbit were some of the most consequential members of the antiwar and Black Power movements, which sometimes overlapped, including Abbie and Anita Hoffman, Nancy Kurshan, Paul Krassner, Phil Ochs, William Kunstler, David Dellinger, Timothy Leary, Kathleen Cleaver, and Bobby Seale. She describes the often-immature jockeying for leadership that occurred between rivals Abbie Hoffman and Jerry Rubin, which also underscored her frustration in general with ostensibly enlightened left-wing radicals who nevertheless casually asserted male dominance in every arena—and fueled her increasingly strident brand of feminism. She personalized the Yippie exhortation to “rise up and abandon the creeping meatball”—which means to conquer fear by turning it into an act of defiance and deliberately doing exactly what you most fear—by leaving her insecurities behind, as well as her reliance on other people, to grow into an assertive, take-no-prisoners, independent feminist woman with no regrets. How she achieves this is the journey motif of her life and this memoir.

Gumbo’s behind-the-scenes anecdotes, culled from years of close contact with such a wide assortment of sixties notables, are the most valuable part of Yippie Girl. There is no doubt that her ability to consult her FBI files—even if these contained wild exaggerations about her character and her activities—refreshed her memories of those days, more than a half century past, which lends authenticity to the book as a kind of primary source for life among Yippies, Panthers, and fellow revolutionaries of the time. And she successfully puts you in the front seat, with her, as she takes you on a tour of significant moments in the movement and its immediate periphery in Berkeley, Chicago, and New York. Her style, if not elegant, is highly readable, which is an accomplishment for any author and one that merits mention in a review of their work.

The weakest part of the book is her unstated insistence on making herself the main character in every situation, which betrays an uncomfortable narcissism that the reader suspects had negative consequences in virtually all of her relationships with both allies and adversaries. Yes, it is her memoir. Yes, her significance in the movement deserves—and has to some degree been denied by history—the kind of notoriety accorded to what after all became household names like Abbie Hoffman and Jerry Rubin. But the reality is that she was never in a top leadership role. She was not arrested with Pigasus. She was not put on trial with the Chicago Seven. You can detect in the narrative that she wishes she was.

This aspect of her personality makes her a less sympathetic figure than she should be as a committed activist tirelessly promoting peace and equality while being unfairly hounded by the FBI. But she carries something else unpleasant around with her that is unnerving: an allegiance to her cause and herself that boasts a kind of ruthless naïveté that rejects correction when challenged either by reality or morality. She condemns Cleaver’s infidelity to his wife, but abandons Stew for a series of random affairs, most notably with a North Vietnamese diplomat who happens to be married. She personally eschews violence, but cheers the Capitol bombing by the Weathermen, domestic terrorists who splintered from Students for a Democratic Society (SDS).

To oppose the unjust U.S. intervention in Vietnam and decry the millions of lives lost across Southeast Asia was certainly an honorable cause, worthy of respect, then and now. But this red diaper baby never grew up: her vision of the just and righteous was distinguished by her admiration of oppressive, totalitarian regimes in the Soviet Union, North Korea, Cuba—and North Vietnam. Like too many in the antiwar movement, opposition to Washington’s involvement in the Vietnam War strangely morphed into a distorted veneration for Hanoi. There may indeed have been much to condemn about the America of that era in the realm of militarism, imperialism, and inequality, but that hardly justified—then or now—championing communist dictatorships on the other side known for regimes marked by repression and sometimes even terror.

Gumbo visited most of these repressive states that she supported, including North Vietnam. She reveals that while there she settled into the seat of a Russian antiaircraft machine gun much like the one Jane Fonda later sat in. Fonda, branded a traitor by the right, later lamented that move, and publicly admitted it. Gumbo will have none of it: “I have never regretted looking through those gun sights,” she proudly asserts [p203]. She still celebrates the reunification of Vietnam, while ignoring its aftermath. Her stubborn allegiance to ideology over humanity, and her utter inability to evolve as a person, further points to her inherent narcissism. She is never wrong. She is always right. Just ask her, she’ll tell you so.

Yippie Girl also lacks a greater context that would make it more accessible to a wider audience. The author assumes the reader is well aware of the climate of extremism that often characterized the United States in the sixties and seventies—like the litany of news events of the first half of 1968 that opened this review—when in fact for most Americans today those days likely seem like accounts from another planet in another dimension. I would have loved to see Gumbo write a bigger book that wasn’t just about her and her community. At the same time, if you are a junkie for American political life back in the day when today’s polarization seems tame by comparison, and youth activism ruled, I would recommend you read Gumbo’s book. I suspect that whether you end up liking or detesting her in the end, she will still crave the attention.

NOTE: This book was obtained as part of an Early Reviewers program

Featured

Review of: The Reopening of the Western Mind: The Resurgence of Intellectual Life from the End of Antiquity to the Dawn of the Enlightenment, by Charles Freeman

Nearly two decades have passed since Charles Freeman published The Closing of the Western Mind: The Rise of Faith and the Fall of Reason, a brilliant if controversial examination of the intellectual totalitarianism of Christianity that dated to the dawn of its dominance of Rome and the successor states that followed the fragmentation of the empire in the West.  Freeman argued persuasively that the early Christian church vehemently and often brutally rebuked the centuries-old classical tradition of philosophical enquiry and ultimately drove it to extinction with a singular intolerance of competing ideas crushed under the weight of a monolithic faith. Not only were pagan religions prohibited, but there would be virtually no provision for any dissent with official Christian doctrine, such that those who advanced even the most minor challenges to interpretation were branded heretics and sent to exile or put to death. That tragic state was to define medieval Europe for more than a millennium.

Now the renowned classical historian has returned with a follow-up epic, The Reopening of the Western Mind: The Resurgence of Intellectual Life from the End of Antiquity to the Dawn of the Enlightenment, (a revised and repolished version of The Awakening: A History of the Western Mind AD 500-1700, previously published in the UK), which recounts the slow—some might brand it glacial—evolution of Western thought that restored legitimacy to independent examination and analysis, and that eventually led to a celebration, albeit a cautious one, of reason over blind faith. In the process, Freeman reminds us that quality, engaging narrative history has not gone extinct, while demonstrating that it is possible to produce a work that is so well-written it is readable by a general audience while meeting the rigorous standards of scholarship demanded by academia. That this is no small achievement will be evident to anyone who—as I do—reads both popular and scholarly history and is struck by the stultifying prose that often typifies the academic. In contrast, here Freeman takes a skillful pen to reveal people, events, and occasionally obscure concepts, much of which may be unfamiliar to those who are not well versed in the medieval period.

Full disclosure: Charles Freeman and I began a long correspondence via email following my review of Closing. I was honored when he selected me as one of his readers for his drafts of Awakening, the earlier UK edition of this work, which he shared with me in 2018—but at the same time I approached this responsibility with some trepidation: given Freeman’s credentials and reputation, what if I found the work to be sub-standard? What if it was simply not a good book? How would I address that? As it was, these worries turned out to be misplaced. It is a magnificent book, and I am grateful to have read much of it as a work in progress, and then again after publication. I did submit several pages of critical commentary to help the author, to the best of my limited abilities, hone a better final product, and to that end I am proud to see my name appear in the “Acknowledgments.” But to be clear: I am an independent reviewer and did not receive compensation for this review.

The fall of Rome remains a subject of debate for historians. While traditional notions of sudden collapse given to pillaging Vandals leaping over city walls and fora engulfed in flames have long been revised, competing visions of a more gradual transition that better reflect the scholarship sometimes distort the historiography to minimize both the fall and what was actually lost. And what was lost was indeed dramatic and incalculable. If, to take just one example, sanitation can be said to be a mark of civilization, the Roman aqueducts and complex network of sewers that fell into disuse and disrepair meant that fresh water was no longer reliable, and sewage that bred pestilence was to be the norm for fifteen centuries to follow. It was not until the late nineteenth century that sanitation in Europe even approached Roman standards. So, whatever the timeline—rapid or gradual—there was indeed a marked collapse. Causes are far more elusive. Gibbon’s largely discredited casting of Christianity as the villain that brought the empire down tends to raise hackles among those who suspect someone like Freeman of attempting to point those fingers once more. But Freeman has nothing to say about why Rome fell, only about what followed. The loss of the pursuit of reason was to be as devastating for the intellectual health of the post-Roman world in the West as sanitation was to prove for its physical health. And here Freeman does squarely take aim at the institutional Christian church as the proximate cause of the subsequent consequences for Western thought. This is well underscored in the bleak assessment that follows in one of the final chapters of The Closing of the Western Mind:

Christian thought that emerged in the early centuries often gave irrationality the status of a universal “truth” to the exclusion of those truths to be found through reason. So the uneducated was preferred to the educated and the miracle to the operation of natural laws … This reversal of traditional values became embedded in the Christian tradi­tion … Intellectual self-confidence and curiosity, which lay at the heart of the Greek achievement, were recast as the dreaded sin of pride. Faith and obedience to the institutional authority of the church were more highly rated than the use of reasoned thought. The inevitable result was intellectual stagnation …

Reopening picks up where Closing leaves off, but the new work is marked by far greater optimism. Rather than dwell on what has been lost, Freeman puts the focus not only upon the recovery of concepts long forgotten but upon how rediscovery eventually sparked new, original thought, as the spiritual and the increasingly secular world danced warily around one another—with a burning heretic all too often staked between them on Europe’s fraught intellectual ballroom. Because the timeline is so long—encompassing twelve centuries—the author sidesteps what could have been a dull chronological recounting of this slow progression, narrowing his lens upon select people, events, and ideas that collectively marked milestones along the way, gathered into thematic chapters that broaden the scope. This approach transcends what might otherwise have been parochial to brilliantly convey the panoramic.

There are many superlative chapters in Reopening, including the very first one, entitled “The Saving of the Texts 500-750.” Freeman seems to delight in detecting the bits and pieces of the classical universe that managed to survive not only vigorous attempts by early Christians to erase pagan thought but also the unintended ravages of deterioration that are every archivist’s nightmare. Ironically, the sacking of cities in ancient Mesopotamia begat conflagrations that baked inscribed clay tablets, preserving them for millennia. No such luck for the Mediterranean world, where papyrus scrolls, the favored medium for texts, fell to war, natural disasters, and deliberate destruction, as well as to entropy—a familiar byproduct of the second law of thermodynamics—which was not kind in the prevailing environmental conditions. We are happily still discovering papyri preserved by the dry conditions in parts of Egypt—the oldest dating back to 2500 BCE—but it seems that the European climate doomed papyrus to a scant two hundred years before it was no more.

Absent printing presses or digital scans, texts were preserved by painstakingly copying them by hand, typically onto vellum, a kind of parchment made from animal skins with a long shelf life, most frequently in monasteries by monks for whom literacy was deemed essential. But what to save? The two giants of ancient Greek philosophy, Plato and Aristotle, were preserved, but the latter far more grudgingly. Fledgling concepts of empiricism in Aristotle made the medieval mind uncomfortable. Plato, on the other hand, who pioneered notions of imaginary higher powers and perfect forms, could be (albeit somewhat awkwardly) adapted to the prevailing faith in the Trinity, and thus elements of Plato were syncretized into Christian orthodoxy. Of course, as we celebrate what was saved it is difficult not to likewise mourn what was lost to us forever. Fortunately, the Arab world put a much higher premium on the preservation of classical texts—an especially eclectic collection that included not only metaphysics but geography, medicine, and mathematics. When centuries later—as Freeman highlights in Reopening —these works reached Europe, they were to be instrumental as tinder to the embers that were to spark first a revival and then a revolution in science and discovery.

My favorite chapter in Reopening is “Abelard and the Battle for Reason,” which chronicles the extraordinary story of the scholastic philosopher Peter Abelard (1079-1142)—who flirted with the secular and attempted to connect rationalism with theology—told against the flamboyant backdrop of Abelard’s tragic love affair with Héloïse, a tale that yet remains the stuff of popular culture. In a fit of pique, Héloïse’s uncle was to have Abelard castrated. The church attempted something similar, metaphorically, with Abelard’s teachings, which led to an order of excommunication (later lifted), but despite official condemnation Abelard left a dramatic mark on European thought that long lingered.

There is too much material in a volume this thick to cover competently in a review, but the reader will find much of it well worth the time. Of course, some will be drawn to certain chapters more than others. Art historians will no doubt be taken with the one entitled “The Flowering of the Florentine Renaissance,” which for me hearkened back to the best elements of Kenneth Clark’s Civilisation, showcasing not only the evolution of European architecture but the author’s own adulation for both the art and the engineering feat demonstrated by Brunelleschi’s dome, the extraordinary fifteenth century adornment that crowns the Florence Cathedral. Of course, Freeman does temper his praise for such achievements with juxtaposition to what once had been, as in a later chapter that recounts the sixteenth-century feat—seen as remarkable at the time—of relocating an ancient Egyptian obelisk weighing 331 tons that had been placed on the Vatican Hill by the Emperor Caligula. In a footnote, Freeman reminds us that: “One might talk of sixteenth-century technological miracles, but the obelisk had been successfully erected by the Egyptians, taken down by the Romans, brought by sea to Rome and then re-erected there—all the while remaining intact!”

If I were to find fault with Reopening, it is that it does not, in my opinion, go far enough to emphasize the impact of the Columbian Experience on the reopening of the Western mind. There is a terrific chapter devoted to the topic, which explores how the discovery of the Americas and their exotic inhabitants compelled the European mind to examine other human societies whose existence had never before even been contemplated. While that is a valid avenue for analysis, it hardly takes into account just how earth-shattering 1492 turned out to be—arguably the most consequential milestone for human civilization (and the biosphere!) since the first cities appeared in Sumer—in a myriad of ways, not least the exchange of flora and fauna (and microbes) that accompanied it. But this significance was perhaps greatest for Europe, which had been a backwater, long eclipsed by China and the Arab Middle East. It was the Columbian Experience that reoriented the center of the world, so to speak, from the Mediterranean to the Atlantic, which was exploited to the fullest by the Europeans who prowled those seas and first bridged the continents. It is difficult to imagine the subsequent accomplishments—intellectual and otherwise—had Columbus not landed at San Salvador. But this remains just a quibble that does not detract from Freeman’s overall accomplishment.

Interest in the medieval world has perhaps waned over time, but that is, of course, a mistake: how we got from point A to point B is an important story, even if it has never been told before as well as Freeman has told it in Reopening. And it is not an easy story to tell. As the author acknowledges in a concluding chapter: “Bringing together the many different elements that led to the ‘reopening of the western mind’ is a challenge. It is important to stress just how bereft Europe was, economically and culturally, after the fall of the Roman empire compared to what it had been before …”

Those of us given to dystopian fiction, concerned with the fragility of republics and civilization, and perhaps wondering aloud in the midst of an ongoing global pandemic and the rise of authoritarianism what our descendants might recall of us if it all fell to collapse tomorrow, cannot help but be intrigued by how our ancestors coped—for better or for worse—after Rome was no more. If you want to learn more about that, there might be no better covers to crack than Freeman’s The Reopening of the Western Mind. I highly recommend it.

NOTE: Portions of this review also appear in my review of The Awakening: A History of the Western Mind AD 500-1700, by Charles Freeman, previously published in the UK, here: Review of: The Awakening: A History of the Western Mind AD 500-1700, by Charles Freeman

NOTE: My review of The Closing of the Western Mind: The Rise of Faith and the Fall of Reason, by Charles Freeman, is here:  Review of: The Closing of the Western Mind: The Rise of Faith and the Fall of Reason by Charles Freeman

Featured

Review of: Cloud Cuckoo Land, by Anthony Doerr

Ὦ ξένε, ὅστις εἶ, ἄνοιξον, ἵνα μάθῃς ἃ θαυμάζεις

“Stranger, whoever you are, open this to learn what will amaze you.”

This passage, in ancient Greek and in translation, is the key to Cloud Cuckoo Land, a big, ambitious, complicated novel by Anthony Doerr, the latest from the author of the magnificent, Pulitzer Prize-winning All the Light We Cannot See (2014). Classicists will recognize “Cloud Cuckoo Land” as borrowed from The Birds, the 414 BCE comedy by the Athenian satirist Aristophanes, in which birds construct a city in the sky whose name later became synonymous with any kind of fanciful world. In this case, Cloud Cuckoo Land serves as the purported title of a long-lost ancient work by Antonius Diogenes, rediscovered as a damaged but partially translatable codex in 2019, that relates the tale of Aethon, a hapless shepherd who transforms into a donkey, then into a fish, then into a crow, in a quest to reach that utopian city in the clouds. It serves as well as the literary glue that binds together the narrative and the central protagonists of Doerr’s novel.

There is the octogenarian Zeno, self-taught in classical Greek, who has translated the fragmentary codex and adapted it into a play that is to be performed by fifth graders in the public library in Lakeport, Idaho, in 2020. Lurking in the vicinity is Seymour, an alienated teen with Asperger’s, flirting with eco-terrorism. And hundreds of years in the past, there is also the thirteen-year-old Anna, who has happened upon that same codex in Constantinople, on the eve of its fall to the Turks. Among the thousands of besiegers outside the city’s walls is Omeir, a harelipped youngster who with his team of oxen was conscripted to serve the Sultan in the cause of toppling the Byzantine capital. Finally, there is Konstance, fourteen years old, who has lived her entire life on the Argos, a twenty-second century spacecraft destined for a distant planet; she too comes to discover “Cloud Cuckoo Land.”

Alternating chapters, some short, others far longer, tell the stories of each protagonist, in real time or through flashbacks. For the long-lived Zeno, readers follow his hardscrabble youth, his struggle with his closeted homosexuality, his stint as a POW in the Korean War, and his long love affair with the language of the ancient Greeks. We observe how an uncertain and frequently bullied Seymour reacts to the destruction of wilderness and wildlife in his own geography. We watch the rebellious Anna abjure her work as a lowly seamstress to clandestinely translate the codex. We learn how the disfigured-at-birth Omeir is at first nearly left to die, then exiled along with his family because villagers believe he is a demon. We see Konstance, trapped in quarantine in what appears to be deep space, explore the old earth through an “atlas” in the ship’s library.

Cloud Cuckoo Land is by turns fascinating and captivating, but sometimes—unfortunately—also dull. There are not only the central protagonists to contend with, but also a number of secondary characters in each of their respective orbits, as well as the multiple timelines spanning centuries, so there is much to keep track of. I recall being so spellbound by All the Light We Cannot See that I read its entire 500-plus pages over a single weekend. This novel, much longer, did not hook me with a similar force. I found it a slow build: my enthusiasm tended to simmer rather than surge. Alas, I wanted to care about the characters far more than I did. Still, the second half of the novel is a much more exciting read than the first portion.

Science—in multiple disciplines—is often central to a Doerr novel. That was certainly the case in All the Light We Cannot See, as well as in his earlier work, About Grace. In Cloud Cuckoo Land, in contrast, science—while hardly absent—takes a backseat. The sci-fi in the Argos voyage is pretty cool, but hardly the stuff of Asimov or Heinlein. And Seymour’s science of climate catastrophe strikes as little more than an afterthought in the narrative.

Multiple individuals with lives on separate trajectories centuries apart, whose exploits resonate with larger and often overlapping themes, reminded me at first of another work with a cloud in its title: Cloud Atlas, by David Mitchell. But Cloud Cuckoo Land lacks the spectacular brilliance of that novel, which manages to take your very breath away. It also falls short of the depth and intricacy that powers Doerr’s All the Light We Cannot See. And yet … and yet … I ended up really enjoying the book, even shedding a tear or two in its final pages. So there’s that. In the final analysis, Doerr is a talented writer, and if this is not his finest work, it remains well worth the read.

I have reviewed other novels by Anthony Doerr here:

Review of: All the Light We Cannot See, by Anthony Doerr

Review of: About Grace, by Anthony Doerr

Featured

Review of: The Sumerians: Lost Civilizations, by Paul Collins

Reading “The Epic of Gilgamesh” in its entirety rekindled a long dormant interest in the Sumerians, the ancient Mesopotamian people that my school textbooks once boldly proclaimed as inventors not only of the written word, but of civilization itself! One of the pleasures of having a fine home library stocked with eclectic works is that there is frequently a volume near at hand to suit such inclinations, and in this case I turned to a relatively recent acquisition, The Sumerians, a fascinating and extremely well-written—if decidedly controversial—contribution to the Lost Civilizations series, by Paul Collins.

“The Epic of Gilgamesh” is, of course, the world’s oldest literary work: the earliest records of the five poems that form the heart of the epic were carved into Sumerian clay tablets that date back to 2100 BCE, and relate the exploits of the eponymous Gilgamesh, an actual historic king of the Mesopotamian city state Uruk circa 2750 BCE who later became the stuff of heroic legend. Most famously, a portion of the epic recounts a flood narrative nearly identical to the one reported in Genesis, making it the earliest reference to the Near East flood myth held in common by the later Abrahamic religions.

Uruk was just one of a number of remarkable city states—along with Eridu, Ur, and Kish—that formed urban and agricultural hubs between the Tigris and Euphrates rivers in what is today southern Iraq, between approximately 3500 and 2000 BCE, at a time when the Persian Gulf extended much further north, putting these cities very near the coast. Some archaeologists also placed “Ur of the Chaldees,” the city in the Hebrew Bible noted as the birthplace of the Israelite patriarch Abraham, in this vicinity, reinforcing the Biblical flood connection. A common culture that boasted the earliest system of writing, which recorded in cuneiform script a language isolate unrelated to others, advances in mathematics that utilized a sexagesimal system, and the invention of both the wheel and the plow came to be attributed to these mysterious non-Semitic people, dubbed the Sumerians.

But who were the Sumerians? They were completely unknown, notes the author, until archaeologists stumbled upon the ruins of their forgotten cities about 150 years ago. Collins, who is currently Curator for the Ancient Near East at the Ashmolean Museum*, University of Oxford, fittingly opens his work with the baked clay artifact known as a “prism” inscribed with the so-called Sumerian King List, circa 1800 BCE, currently housed in the Ashmolean Museum. The opening lines of the book are also the first lines of the Sumerian King List: “After the kingship descended from heaven, the kingship was in Eridu. In Eridu, Alulim became king; He ruled for 28,800 years.” Heady stuff.

“It is not history as we would understand it,” argues Collins, “but a combination of myth, legend and historical information.” This serves as a perfect metaphor for Collins’s thesis, which is that after a century and a half of archaeology and scholarship, we know less about the Sumerians—if such a structured, well-defined common culture ever even existed—and far more about the sometimes-spurious conclusions and even outright fictions that successive generations of academics and observers have attached to these ancient peoples.

Thus, Collins raises two separate if perhaps related issues that both independently and in tandem spark controversy. The first is the question of whether the Sumerians ever existed as a distinct culture, or whether—as the author suggests—scholars may have mistakenly woven a misleading tapestry out of scraps and threads in the archaeological record representing a variety of inhabitants within a shared geography, with material cultures that, while overlapping, were never of a single fabric. The second is how deeply woven into that same tapestry are distortions—some intended and others inadvertent—tailored to interpretations fraught with the biases of excavators and researchers determined to locate the Sumerians as uber-ancestors central to the myth of Western Civilization that tends to dominate the historiography. And, of course, if there is merit to the former, was it entirely the product of the latter, or were other factors involved?

I personally lack both the expertise and the qualifications to weigh in on the first matter, especially given that the author’s credentials include not only an association with Oxford’s School of Archaeology, but also the chairmanship of the British Institute for the Study of Iraq. Still, I will note in this regard that he makes many thought-provoking and salient points. As to the second, Collins is quite persuasive, and here great authority on the part of the reader is not nearly as requisite.

Nineteenth century explorers and archaeologists—as well as their early twentieth century successors—were often drawn to this Middle Eastern milieu in a quest for concordance between Biblical references and excavations, which bred distortions in outcomes and interpretation. At the same time, a conviction that race and civilization were inextricably linked—to be clear, the “white race” and “Western Civilization”—determined that what was perceived as “advanced” was ordained at the outset for association with “the West.” We know that the leading thinkers of the Renaissance rediscovered the Greeks and Romans as their cultural and intellectual forebears, with at least some measure of justification, but later far more tenuous links were drawn to ancient Egypt—and, of course, later still, to Babylon and Sumer. Misrepresentations, deliberate or not, were exacerbated by the fact that the standards of professionalism characteristic of today’s archaeology were either primitive or nonexistent.

None of this should be news to students of history who have observed how the latest historiography has frequently discredited interpretations long taken for granted—something I have witnessed firsthand as a dramatic work in progress in studies of the American Civil War in recent decades: notably, although slavery was central to the cause of secession and war, for more than a century African Americans were essentially erased from the textbooks and barely acknowledged other than at the very periphery of the conflict, in what was euphemistically constructed as a sectional struggle among white men, north and south. It was a lie, but a lie that sold very well for a very long time, and still clings to those invested in what has come to be called “Lost Cause” mythology.

Yet it is surprising, as Collins underscores, that what should long have been second-guessed about Sumer remains integral to far too much of what persists as current thinking. Whether the Sumerians are indeed a distinct culture or not, should those peoples more than five millennia removed from us continue to be artificially attached to what we pronounce Western Civilization? Probably not. And while we certainly recognize today that race is an artificial construct that relates zero information of importance about a people, ancient or modern, we can guess with some confidence that those indigenous to southern Iraq in 3500 BCE probably did not have the pale skin of a native of, say, Norway. We can rightfully assert that the people we call the Sumerians were responsible for extraordinary achievements that were later passed down to other cultures that followed, but an attempt to draw some kind of line from Sumer to Enlightenment-age Europe is shaky, at best.

As such, Collins’s book gives focus to what we have come to believe about the Sumerians, and why we should challenge that. I previously read (and reviewed) Egypt by Christina Riggs, another book in the Lost Civilizations series, which is preoccupied with how ancient Egypt has resonated for those who walked in its shadows, from Roman tourists to Napoleon’s troops to modern admirers, even if that vision little resembles its historic basis. Collins takes a similar tack but devotes far more attention to parsing out in greater detail exactly what is really known about the Sumerians and what we tend to collectively assume that we know. Of course, Sumer is far less familiar to a wider audience, and it lacks the romantic appeal of Egypt—there is no imagined exotic beauty like Cleopatra, only the blur of the distant god-king Gilgamesh—so the Sumerians come up far more rarely in conversation, and provoke far less passionate feelings, one way or the other.

The Sumerians is an accessible read for the non-specialist, and there are plenty of illustrations to enhance the text. Like other authors in the Lost Civilizations series, Collins deserves much credit for articulating sometimes arcane material in a manner that suits both a scholarly and a popular audience, which is by no means an easy achievement. If you are looking for an outstanding introduction to these ancient people that is neither too esoteric nor dumbed-down, I highly recommend this volume.

*NOTE: I recently learned that Paul Collins has apparently left the Ashmolean Museum as of the end of October 2022, and is now associated with the Middle East Department, British Museum.

The Sumerian King List Prism at the Ashmolean Museum is online here: The Sumerian King List Prism

More about “The Epic of Gilgamesh” can be found in my review here: Review of: Gilgamesh: A New English Version, by Stephen Mitchell

I reviewed other volumes in the Lost Civilizations series here:

Review of: The Indus: Lost Civilizations, by Andrew Robinson

Review of Egypt: Lost Civilizations, by Christina Riggs

Review of: The Etruscans: Lost Civilizations, by Lucy Shipley

Featured

Review of: The Passenger and Stella Maris, by Cormac McCarthy

Imagine if God—or Gary Larson—had an enormous mayonnaise jar at his disposal and stuffed it chock full of the collective consciousnesses of the greatest modern philosophers, psychoanalysts, neuroscientists, mathematicians, physicists, quantum theoreticians, and cosmologists … then lightly dusted it with a smattering of existential theologians, eschatologists, dream researchers, and violin makers, before tossing in a handful of race car drivers, criminals, salvage divers, and performers from an old-time circus sideshow … and next layered it with literary geniuses, heavy on William Faulkner and Ernest Hemingway with perhaps a dash of Haruki Murakami and just a smidge of Dashiell Hammett … before finally tossing in Socrates, or at least Plato’s version of Socrates, who takes Plato along with him because—love him or hate him—you just can’t peel Plato away from Socrates. Now imagine that giant jar somehow being given a shake or two before being randomly dumped into the multiverse, so that all the blended yet still unique components poured out into our universe as well as into multiple other hypothetical universes. If such a thing were possible, the contents that spilled forth might approximate The Passenger and Stella Maris, the pair of novels by Cormac McCarthy that has so stunned readers and critics alike that there is yet no consensus whether to pronounce these works garbage or magnificent—or, for that matter, magnificent garbage.

The eighty-nine-year-old McCarthy, perhaps America’s greatest living novelist, released these companion books in 2022 after a sixteen-year hiatus that followed publication of The Road, the 2006 postapocalyptic sensation that explored familiar Cormac McCarthy themes in a very different genre, employing literary techniques strikingly different from his previous works, and in the process finding a whole new audience. The same might be said, to some degree, of the novel that preceded it just a year earlier, No Country for Old Men, another clear break from his past and a radical departure for readers of, say, The Border Trilogy, and his magnum opus, Blood Meridian, which to my mind is not only a superlative work but truly one of the finest novels of the twentieth century.

Full disclosure: I have read all of Cormac McCarthy’s novels, as well as a play and a screenplay that he authored. To suggest that I am a fan would be a vast understatement. My very first McCarthy novel was The Crossing, randomly plucked from a grocery store magazine rack while on a family vacation. That was 2008. I inhaled the book and soon set out to read his full body of work. The Crossing is actually the middle volume in The Border Trilogy, preceded by All the Pretty Horses and followed by Cities of the Plain, which collectively form a near-Shakespearean epic of the American southwest and the Mexican borderlands in the mid-twentieth century, which yet retain a stark primitivism barely removed from the milieu of Blood Meridian, set a full century earlier. The author’s style, in these sagas and beyond, has at times been compared by critics with both Faulkner and Hemingway, favorably and unfavorably, but McCarthy’s voice is distinctive, and hardly derivative. There is indeed the rich vocabulary of a Faulkner or a Styron, which adds richness to the prose even as it challenges readers to sometimes seek out the dictionary app on their phones. There is also a magnificent use of the objective correlative, made famous by Hemingway and later in portions of the works of Gabriel Garcia Márquez, which evokes powerful emotions from inanimate objects. For McCarthy, this often manifests in the vast, seemingly otherworldly geography of the southwest. McCarthy also frequently makes use of Hemingway’s polysyndetic syntax, which adds emphasis to sentences through a series of conjunctions. Most noticeable for those new to Cormac McCarthy is his omission of most traditional punctuation, such as quotation marks, which often improves the flow of the narrative even as it sometimes lends a certain confusion to long dialogues between two characters that span several pages.

The Passenger opens with the prologue of a Christmas day suicide that must be quoted in the author’s own voice to underscore the beauty of his prose:

It had snowed lightly in the night and her frozen hair was gold and crystalline and her eyes were frozen cold and hard as stones. One of her yellow boots had fallen off and stood in the snow beneath her. The shape of her coat lay dusted in the snow where she’d dropped it and she wore only a white dress and she hung among the bare gray poles of the winter trees with her head bowed and her hands turned slightly outward like those of certain ecumenical statues whose attitude asks that their history be considered. That the deep foundation of the world be considered where it has its being in the sorrow of her creatures. The hunter knelt and stogged his rifle upright in the snow beside him … He looked up into those cold enameled eyes glinting blue in the weak winter light. She had tied her dress with a red sash so that she’d be found. Some bit of color in the scrupulous desolation. On this Christmas day.

With a poignancy reminiscent of the funeral of Peyton Loftis, also a suicide, in the opening of William Styron’s Lie Down in Darkness, the reader here encounters the woman we later learn is Alicia Western, one of the two central protagonists in The Passenger and its companion volume, who much like Peyton in Styron’s novel haunts the narrative with chilling flashbacks. Ten years have passed when, on the very next page, we meet her brother Bobby, a salvage diver exploring a submerged plane wreck who happens upon clues that could put his life in jeopardy among those seeking something missing from that plane. Bobby is a brilliant intellect who could have been a physicist, but instead spends his life chasing down whatever provokes his greatest psychological fears. In this case, the terror of being deep underwater has driven him to salvage work in the oceans. Bobby is also a rugged and resourceful man’s man, a kind of Llewelyn Moss from No Country for Old Men, but with a much higher I.Q. Finally, Bobby, now thirty-seven years old, has never recovered from the death of his younger sister, with whom he had a close, passionate—and possibly incestuous—relationship.

Also integral to the plot is their now deceased father, a physicist who was once a key player in the Manhattan Project that produced the first atomic bombs that obliterated Hiroshima and Nagasaki. Their first names—Alicia and Bobby—seem to be an ironic echo of the “Alice and Bob” characters that are used as placeholders in science experiments, especially in physics.  Their surname, Western, could be a kind of doomed metaphor for the tragedy of mass murder on a scale never before imagined that has betrayed the promise of western civilization in the twentieth century and in its aftermath.

A real sense of doom, and a mounting paranoia, grips the narrative in general and Bobby in particular, in what appears to be a kind of mystery/thriller that meanders about, sometimes uncertainly. The cast of characters is extremely colorful, from a Vietnam veteran whose only regret from the many lives he brutally spent while in-country is the elephants he exploded with rockets from his gunship just for fun, to a small-time swindler with a wallet full of credit cards that don’t belong to him, and a bombshell trans woman with a heart of gold. Some of these folks are like the sorts who turn up in John Steinbeck’s Tortilla Flat, but on steroids, and more likely to suffer an unpredictable death.

But it is Alicia who steals the show in flashback fragments that reveal a stunningly beautiful young woman whose own brilliance in mathematics, physics, and music overshadows even Bobby’s. She seems to be schizophrenic, plagued by extremely well-defined hallucinations of bedside visitors who could be incarnations of walk-ons from an old-time circus sideshow, right out of central casting. The most prominent is the “Thalidomide Kid”—replete with the flippers most commonly identified with that deformity—who engages her as interlocutor in long-winded, fascinating, and often disturbing dialogue that can run to several pages. Alicia has been on meds, and has checked herself into institutions, but in the end she becomes convinced both that her visitors are real and that she herself does not belong in this world. But is Alicia even human? There are passing hints that she could be an alien, or perhaps from another universe.

There’s much more, including an episode in which “The Kid,” Alicia’s hallucination (?), takes a long walk on the beach with Bobby. This is surprising, if only because McCarthy has long pilloried the magical realism that frequently populates the novels of García Márquez or Haruki Murakami. Perhaps “The Kid” is no hallucination after all? In any event, much like a Murakami novel—think 1Q84, for example—there are multiple plot lines in The Passenger that go nowhere, and the reader is left frustrated by the lack of resolution. And yet … and yet the characters are so memorable, and the quality of the writing so exceptional, that when the cover is finally closed, it is closed without an ounce of regret for the experience. And at the same time, the reader demands more.

The “more” turns out to be Stella Maris, the companion volume that is absolutely essential to a fuller understanding of the plot of The Passenger. Stella Maris is a mental institution that Alicia—then a twenty-year-old dropout from a doctoral program in mathematics—has checked herself into one final time, in the very last year of her life, and thus a full decade before the events recounted in The Passenger. She has no luggage, but carries forty thousand dollars in a plastic bag, which she attempts to give to a receptionist. Bobby, in those days a race car driver, lies in a coma as the result of a crash. He is not expected to recover, but Alicia refuses to remove him from life support. The novel is told entirely in transcript form, through Alicia’s psychiatric sessions with a certain Dr. Cohen, but it is every bit a Socratic dialogue of science and philosophy and the existential meaning of life—not only Alicia’s life, but all of our lives, collectively. And finally, there is the dark journey to the eschatological. Alicia—and I suppose by extension Cormac McCarthy—doesn’t put much stock in a traditional, Judeo-Christian god, which has to be a myth, of course. At the same time, she has left atheism behind: there has to be something, in her view, even if she cannot identify it. But most terrifying, Alicia has a certainty that there lies somewhere an undiluted force of evil, something she terms the “Archatron,” that we all resist, even if there is a futility to that resistance.

I consider myself an intelligent and well-informed individual, but reading The Passenger, and especially Stella Maris, was immeasurably humbling. I felt much as I did the first time I read Faulkner’s The Sound and the Fury, and even the second time I read Gould’s Book of Fish, by Richard Flanagan: as if there are minds so much greater than mine that I cannot hope to comprehend all they have to share, yet I can take full pleasure in immersing myself in their work. To borrow a line from Alicia, in her discussion of Oswald Spengler in Stella Maris, we might say also of Cormac McCarthy: “As with the general run of philosophers—if he is one—the most interesting thing was not his ideas but just the way his mind worked.”


Review of: The Gates of Europe: A History of Ukraine, by Serhii Plokhy

Still reeling from the pandemic, the world was rocked to its core on February 24, 2022, when Russian tanks rolled into Ukraine, an act of unprovoked aggression not seen in Europe since World War II, one that conjured up distressing historical parallels. If there were voices that previously denied the echo of Hitler’s Austrian Anschluss in Putin’s annexation of Crimea, as well as of German adventurism in the Sudetenland in Russian-sponsored separatism in the Donbas, there was no mistaking the similarity to the Nazi invasion of Poland in 1939. But it was Vladimir Putin’s challenge to the very legitimacy of Kyiv’s sovereignty—a shout-out to the Kremlin’s rising chorus of irredentism that declares Ukraine a wayward chunk of the “near abroad” that is rightly integral to Russia—that compels us to look much further back in history.

Putin’s claim, however dubious, raises a larger question: by what right can any nation claim self-determination? Is Ukraine really just a modern construct, an opportunistic product of the collapse of the USSR that, because it was historically a part of Russia, should be part of Russia once again? Or, perhaps counterintuitively, should western Russia instead be incorporated into Ukraine? Or—let’s stretch it a bit further—should much of modern Germany rightly belong to France? Or vice versa? From a contemporary vantage point, these are tantalizing musings that challenge notions of shifting boundaries, the formation of nation-states, fact-based if sometimes uncomfortable chronicles of history, the clash of ethnicities, and, most critically, actualities on the ground. Naturally, such speculation shifts abruptly from the purely academic to stark reality at the barrel of a gun, as the history of Europe has grimly demonstrated over centuries past.

To learn more, I turned to the recently updated edition of The Gates of Europe: A History of Ukraine, by historian and Harvard professor Serhii Plokhy, a dense, well-researched deep dive into the past that at once fully establishes Ukraine’s right to exist while expertly placing it in the context of Europe’s past and present. For those like myself largely unacquainted with the layers of complexity and overlapping hegemonies that have long dominated the region, it turns out that there is much to cover. At the same time, the wealth of material that strikes the reader as unfamiliar starkly and discouragingly underscores the western European bias of the classroom—which at least partially explains why even those Americans capable of locating Ukraine on a map prior to the invasion knew almost nothing of its history.

Survey courses in my high school covered Charlemagne’s 9th-century empire, which encompassed much of Europe to the west, including what is today France and Germany, but never mentioned Kievan Rus’—the cultural ancestor of modern Ukraine, Belarus, and Russia—which in the 10th and 11th centuries was the largest and by far the most powerful state on the continent, until it fragmented and then fell to Mongol invaders! To its east, the Grand Principality of Moscow, a 13th-century Rus’ vassal state of the Mongols, formed the core of the later Russian Empire. In the 16th and 17th centuries, the Polish–Lithuanian Commonwealth was in its heyday among the largest and most populous states on the continent, but both Poland and Lithuania were to fall to partition by Russia, Prussia, and Austria, and effectively ceased to exist for more than a century. Also missing from maps, of course, were Italy and Germany, which did not achieve statehood until the later 19th century. And the many nations of today’s southeastern Europe were then provinces of the Ottoman Empire. That is European history: complicated and nuanced, as history tends to be.

Plokhy’s erudite study restores Ukraine’s past from obscurity and reveals a people who, while enduring occupation and a series of partitions, never abandoned an aspiration to sovereignty that was not to be realized until the late 20th century. Once a dominant power, Ukraine was to be overrun by the Mongols, preyed upon for slave labor by the Crimean Khanate, and over the centuries sliced up into a variety of enclaves ruled by the Golden Horde, the Polish–Lithuanian Commonwealth, the Austrian Empire, the Tsardom of Russia, and finally the Soviet Union.

That long history was written in much blood and suffering inflicted by its various occupiers. In the last hundred years alone, that included Soviet campaigns of terror, ethnic cleansing, and deportations, as well as the catastrophic Great Famine of 1932–33—known as the “Holodomor”—a product of Stalin’s forced collectivization that led to the starvation deaths of nearly four million Ukrainians. Then there was World War II, which claimed another four million lives, including about a million Jews. The immediate postwar period was marked by more tumult and bloodshed. Stability and a somewhat better quality of life emerged under Nikita Khrushchev, who had himself lived in Ukraine for many years. It was Khrushchev who transferred title of the Crimea to Ukraine in 1954. The final years under Soviet domination saw the Chernobyl nuclear disaster.

The structure of the USSR was manifested in political units known as Soviet Socialist Republics, which asserted a fictional autonomy subject to central control. Somewhat ironically, as time passed this enabled and strengthened nationalism within each of the respective SSRs. Ukraine (like Belarus) even held its own United Nations seat, although its UN votes were rubber-stamped by Moscow. Still, this further reinforced a sense of statehood, which was realized in the unexpected dissolution of the Soviet Union and Ukraine’s independence in 1991. In the years that followed, as Ukraine aspired to closer ties with the West, that statehood increasingly came under attack by Putin, who spoke in earnest of a “Greater Russia” that by all rights included Ukraine. Election meddling became common, but with the spectacular fall of the Russian-backed president in 2014, Putin annexed Crimea and fomented rebellion that sought to create breakaway “republics” in the Donbas of eastern Ukraine. This only intensified Kyiv’s desire for integration with the European Union and for NATO membership.

A vast country of forest and steppe, marked by fertile plains crisscrossed by rivers, Ukraine has long served as a strategic gateway between east and west, as the book’s title emphasizes. Elements of western, central, and eastern Europe all in some way give definition to Ukrainian life and culture, and as such Ukraine remains inextricably as much a part of the west as of the east. While Russia has left a huge imprint upon the nation’s DNA, it hardly informs the entirety of its national character. The Russian language continues to be widely spoken, and at least prior to the invasion many Ukrainians had Russian sympathies—if never a desire for annexation! For Ukrainians, stateless for too long, their national identity ever remained unquestioned. Rather than threatening that identity, the Russian invasion has only bolstered it.

Today, Ukraine is the second largest nation in Europe by area, after Russia. Far too often overlooked by both statesmen and talking heads, Ukraine would also be the world’s third largest nuclear power—and would have little to fear from the tanks of its former overlord—had it not given up its stockpile of nukes in a deal brokered by the United States, an important reminder to those who question America’s obligation to defend Ukraine.

As this review goes to press, Russia’s war—which Putin euphemistically terms a “special military operation”—is going very poorly, and despite energy supply shortages and threats of nuclear brinkmanship, the West stands firmly with Ukraine, which in the course of the conflict has been subjected to horrific war crimes by Russian invaders. However, as the months pass, and both Europe and the United States endure the economic pain of inflation and rising fuel prices, as well as the ever-increasing odds of right-wing politicians gaining power on both sides of the Atlantic, it remains to be seen whether this alliance will hold steady. As battlefield defeats mount, and men and materiel run short, Putin seems to be running out the clock in anticipation of that outcome. We can only hope it does not come to that.

While I learned a great deal from The Gates of Europe, and would offer much acclaim for its scholarship, there are portions that can prove a slog for a nonacademic audience. Too much of the author’s chronicle reads like a textbook—pregnant with names and dates and events—and thus lacks the sweep of a grand thematic narrative that would inspire the reader, a narrative the Ukrainian people he treats so richly deserve. At the same time, that does not diminish Plokhy’s achievement in turning out what is certainly the authoritative history of Ukraine. With Ukraine’s right to exist under assault once more, this volume serves as a powerful defense—the weapon of history—against any who might challenge its sovereignty. If you believe, as I do, that facts must triumph over propaganda and polemic, then I highly recommend that you turn to Plokhy to best refute Putin.


My review of Plokhy’s more recent book is here: Review of: The Russo-Ukrainian War: The Return of History, by Serhii Plokhy



Review of: After the Apocalypse: America’s Role in the World Transformed, by Andrew Bacevich

I often suffer pangs of guilt when a volume received through an early reviewer program languishes on the shelf unread for an extended period. Such was the case with the “Advanced Reader’s Edition” of After the Apocalypse: America’s Role in the World Transformed, by Andrew Bacevich, that arrived in August 2021 and sat forsaken for an entire year until it finally fell off the top of my TBR (To-Be-Read) list and onto my lap. While hardly deliberate, my delay was no doubt neglectful. But sometimes neglect can foster unexpected opportunities for evaluation. More on that later.

First, a little about Andrew Bacevich. A West Point graduate and a platoon leader in Vietnam in 1970–71, he went on to an army career that spanned twenty-three years, including the Gulf War, retiring with the rank of colonel. (It is said his early retirement was due to being passed over for promotion after taking responsibility for an accidental explosion at a camp he commanded in Kuwait.) He later became an academic, Professor Emeritus of International Relations and History at Boston University, and one-time director of its Center for International Relations (1998–2005). He is now president and co-founder of the bipartisan think tank the Quincy Institute for Responsible Statecraft. Deeply influenced by the theologian and ethicist Reinhold Niebuhr, Bacevich was once tagged as a conservative Catholic historian, but he defies simple categorization, most often serving as an unlikely voice in the wilderness decrying America’s “endless wars.” He has been a vocal, longtime critic of George W. Bush’s doctrine of preventive war, most prominently manifested in the Iraq War, which he has rightly termed a “catastrophic failure.” He has also denounced the conceit of “American Exceptionalism,” and chillingly notes that reliance on an all-volunteer military force translates into the ongoing, almost anonymous sacrifice of our men and women for a nation that largely has no skin in the game. His own son, a young army lieutenant, was killed in Iraq in 2007. I have previously read three other Bacevich works. As I noted in a review of one of these, his résumé lends Bacevich either enormous credibility or an axe to grind, or perhaps both. Still, as a scholar and gifted writer, he tends to be well worth the read.

The “apocalypse” of the title refers to the chaos that engulfed 2020, spawned by the sum total of the “toxic and divisive” Trump presidency, the increasing death toll of the pandemic, an economy in free fall, mass demonstrations by Black Lives Matter proponents seeking long-denied social justice, and rapidly spreading wildfires that dramatically underscored the looming catastrophe of global climate change. [p.1-3] Bacevich takes this armload of calamities as a flashing red signal that the country is not only headed in the wrong direction, but likely off a kind of cliff if we do not immediately take stock and change course. He draws odd parallels with the 1940 collapse of the French army under the Nazi onslaught, which—echoing French historian Marc Bloch—he lays to “utter incompetence” and “a failure of leadership” at the very top. [p.xiv] This then serves as a head-scratching segue into a long-winded polemic on national security and foreign policy that recycles familiar Bacevich themes but offers little in the way of fresh analysis. This trajectory strikes as especially incongruent given that the specific litany of woes besetting the nation in his opening narrative has—rarely indeed for the United States—almost nothing to do with the military or foreign affairs.

If ever history were to manufacture an example of a failure of leadership, of course, it would be hard-pressed to come up with a better model than Donald Trump, who drowned out the noise of a series of mounting crises with a deafening roar of self-serving, hateful rhetoric directed at enemies real and imaginary, deliberately ignoring the threat of both coronavirus and climate change, while stoking racial tensions. Bacevich gives him his due, noting that his “ascent to the White House exposed gaping flaws in the American political system, his manifest contempt for the Constitution and the rule of law placing in jeopardy our democratic traditions.” [p.2] But while he hardly masks his contempt for Trump, Bacevich makes plain that there’s plenty of blame to go around for political elites in both parties, and he takes no prisoners, landing a series of blows on George W. Bush, Barack Obama, Hillary Clinton, Joe Biden, and a host of other members of the Washington establishment that he holds accountable for fostering and maintaining the global post-Cold War “American Empire” responsible for the “endless wars” that he has long condemned. He credits Trump for urging a retreat from alliances and engagements, but faults the selfish motives of an “America First” predicated on isolationism. Bacevich instead envisions a more positive role for the United States in the international arena—one with its sword permanently sheathed.

All this is heady stuff, and regardless of their politics many readers will find themselves nodding along as Bacevich makes his case, outlining the many wrongheaded policy endeavors championed by Republicans and Democrats alike for a wobbly superpower clinging to an outdated and increasingly irrelevant sense of national identity that fails to align with the global realities of the twenty-first century. But then, as Bacevich looks to the future for alternatives, as he seeks to map out on paper the next new world order, he stumbles, and stumbles badly—something only truly evident in retrospect, when his argument is viewed through the prism of the events that followed the release of After the Apocalypse in June 2021.

Bacevich has little to add here to his longstanding condemnation of the U.S. occupation of Afghanistan, which after two long decades of failed attempts at nation-building came to an end with our messy withdrawal in August 2021, just shortly after this book’s publication. President Biden was pilloried for the chaotic retreat, but while his administration could rightly be held to account for a failure to prepare for the worst, the elephant in that room in the Kabul airport where the ISIS-K suicide bomber blew himself up was certainly former president Trump, who brokered the deal to return Afghanistan to Taliban control. Biden, who plummeted in the polls due to outcomes he could do little to control, was disparaged much the same way Obama once was when he was held to blame for the subsequent turmoil in Iraq after effecting the withdrawal of U.S. forces agreed to by his predecessor, G.W. Bush. Once again, history rhymes.  But the more salient point for those of us who share, as I do, Bacevich’s anti-imperialism, is that getting out is ever more difficult than going in.

But Bacevich has a great deal to say in After the Apocalypse about NATO, an alliance rooted in a past-tense Cold War stand-off that he pronounces counterproductive and obsolete. Bacevich disputes the long-held mythology of the so-called “West,” an artificial “sentiment” that has the United States and European nations bound together with common values of liberty, human rights, and democracy. Like Trump—who likely would have acted upon this had he been reelected—Bacevich calls for an end to US involvement with NATO. The United States and Europe have embarked on “divergent paths,” he argues, and that is as it should be. The Cold War is over. Relations with Russia and China are frosty, but entanglement in an alliance like NATO only fosters acrimony and fails to appropriately adapt our nation to the realities of the new millennium.

It is an interesting if academic argument that was abruptly crushed under the weight of the treads of Russian tanks in the premeditated invasion of Ukraine on February 24, 2022. If some denied the echo of Hitler’s 1938 Austrian Anschluss in Putin’s 2014 annexation of Crimea, there was no mistaking the similarity of the unprovoked attacks on Kyiv and its sister cities to the Nazi war machine’s march on Poland in 1939. And yes, when Biden and French President Emmanuel Macron stood together to unite that so-called West against Russian belligerence, the memory of France’s 1940 defeat was hardly out of mind. All of a sudden, NATO became less a theoretical construct and more a safe haven against brutal militarism, wanton aggression, and unapologetic war crimes—streets littered with the bodies of civilians, many of them children, livestreamed on twenty-first-century social media. All of a sudden, NATO is pretty goddamned relevant.

In all this, you could rightly argue against the wrong turns made after the dissolution of the USSR, of the failure of the West to allocate appropriate economic support for the heirs of the former Soviet Union, of how a pattern of NATO expansion both isolated and antagonized Russia. But there remains no legitimate defense for Putin’s attempt to invade, besiege, and absorb a weaker neighbor—or at least a neighbor he perceived to be weaker, a misstep that could lead to his own undoing. Either way, the institution we call NATO turned out to be something to celebrate rather than deprecate. The fact that it is working exactly the way it was designed to work could turn out to be the real road map to the new world order that emerges in the aftermath of this crisis. We can only imagine the horrific alternatives had Trump won re-election: the U.S. out of NATO, Europe divided, Ukraine overrun and annexed, and perhaps even Putin feted at a White House dinner. So far, without firing a shot, NATO has not only saved Ukraine; arguably, it has saved the world as we know it, a world that extends well beyond whatever we might want to consider the “West.”

As much as I respect Bacevich and admire his scholarship, his informed appraisal of our current foreign policy realities has turned out to be entirely incorrect. Yes, the United States should rein in the American Empire.  Yes, we should turn away from imperialist tendencies. Yes, we should focus our defense budget solely on defense, not aggression, resisting the urge to try to remake the world in our own image for either altruism or advantage. But at the same time, we must be mindful—like other empires in the past—that retreat can create vacuums, and we must be ever vigilant of what kinds of powers may fill those vacuums.  Because we can grow and evolve into a better nation, a better people, but that evolution may not be contagious to our adversaries. Because getting out remains ever more difficult than going in.

Finally, a word about the use of the term “apocalypse,” a characterization that is bandied about a bit too frequently these days. 2020 was a pretty bad year, indeed, but it was hardly apocalyptic. Not even close. Despite the twin horrors of Trump and the pandemic, we have had other years that were far worse. Think 1814, when the British burned Washington and sent the president fleeing for his life. And 1862, with tens of thousands already lying dead on Civil War battlefields as the Union army suffered a series of reverses. And 1942, still in the throes of economic depression, with Germany and Japan lined up against us. And 1968, marked by riots and assassinations, when it truly seemed that the nation was unraveling from within. Going forward, climate change may certainly breed apocalypse. So might a cornered Putin, equipped with an arsenal of nuclear weapons and diminishing options as Russian forces in the field teeter on collapse. But 2020 is already in the rear-view mirror. It will no doubt leave a mark upon us, but as we move on, it spins ever faster into our past. At the same time, predicting the future, even when armed with the best data, is fraught with unanticipated obstacles, and grand strategies almost always lead to failure. It remains our duty to study our history while we engage with our present. Apocalyptic or not, it’s all we’ve got …

I have reviewed other Bacevich books here:

Review of: Breach of Trust: How Americans Failed Their Soldiers and Their Country, by Andrew J. Bacevich

Review of: America’s War for the Greater Middle East: A Military History, by Andrew J. Bacevich



Review of: 1957: The Year That Launched the American Future, by Eric Burns

On October 4, 1957, the Soviet Union sent geopolitical shock waves across the planet with the launch of Sputnik 1, the first artificial Earth satellite. Sputnik was only twenty-three inches in diameter, transmitted radio signals for a mere twenty-one days, and burned up on reentry just three months after first achieving orbit, but it changed everything. Not only were the dynamics of the Cold War permanently altered by what came to be dubbed the “Space Race,” but the success of Sputnik ushered in a dramatic new era of developments in science and technology. I was not quite six months old.

America was later to win that race to the moon, but despite its fearsome specter as a diabolical power bent on world domination, the USSR turned out to be a kind of vast Potemkin village that almost noiselessly went out of business at the close of 1991. The United States had pretty much lost interest in space travel by then, but that was just about the time that the next critical phase of the emerging digital age—widespread public access to personal computers and the internet—first wrought the enormous changes upon the landscape of American life that today might have Gen Z “zoomers” regarding 1957 as something like a date out of ancient times.

And now, as this review goes to press—in yet one more recycling of Mark Twain’s bon mot that history doesn’t repeat itself, but it often rhymes—NASA has temporarily scrubbed the much-anticipated launch of the lunar-bound Artemis I, but a real space race is again fiercely underway, although this time the rivals include not only Russia, but China and a whole host of billionaires, at least one of whom could potentially fit the template for a “James Bond” style villain. And while all this is going on, I recently registered for Medicare.

Sixty-five years later, there’s a lot to look back on. In 1957: The Year That Launched the American Future (2020), a fascinating, fast-paced chronicle composed of articulate, thought-provoking chapter-length essays, author and journalist Eric Burns reminds us of what a pivotal year that proved to be, not only in kindling that first contest to dominate space, but in multiple other social, political, and cultural arenas, much of it apparent only in retrospect.

That year, while Sputnik stoked alarms that nuclear-armed Russians would annihilate the United States with bombs dropped from outer space, tabloid journalism reached then-new levels of the outrageous in exploiting “The Mad Bomber of New York,” who turned out to be a pathetic little fellow whose series of explosives claimed not a single fatality. In another example of history’s unintended consequences, a congressional committee investigating illegal labor activities helped facilitate Jimmy Hoffa’s takeover of the Teamsters. A cloak of mystery was partially lifted from organized crime with a very public police raid at Apalachin that rounded up Mafia bosses by the score. The iconic ’57 Chevy ruled the road and cruised on newly constructed interstate highways that would revolutionize travel as well as wreak havoc on cityscapes. African Americans remained second-class citizens, but struggles for equality ignited a series of flashpoints. In September 1957, President Eisenhower federalized the Arkansas National Guard and sent Army troops to Little Rock to enforce desegregation. That same month, Congress passed the Civil Rights Act of 1957, watered-down yet still landmark legislation that paved the way for more substantial action ahead. Published that year were Jack Kerouac’s On the Road and Nevil Shute’s On the Beach. Michael Landon starred in I Was a Teenage Werewolf. Little Richard, who claimed to see Sputnik while performing in concert and took it as a message from God, abruptly walked off stage and abandoned rock music to preach the word of the Lord. But the nation’s number one hit was Elvis Presley’s All Shook Up; rock n’ roll was here to stay.

Burns’ commentary on all this and more is engaging and generally a delight to read, but 1957 is by no means a comprehensive history of that year. In fact, it is a stretch to term this book a history at all, except in the sense that the events it describes occurred in the past. It is rather a subjective collection of somewhat loosely linked commentaries that spotlight specific events and emerging trends that the author identifies as formative for the nation we would become in the decades that followed. As such, the book succeeds due to Burns’ keen sense of how both key episodes and more subtle cultural waves influenced a country in transition from the conventional, consensus-driven postwar years to the radicalized, tumultuous times that lay just ahead.

His insight is most apparent in his cogent analysis of how Civil Rights advanced not only through lunch-counter sit-ins and a reaction marked by violent repression, but through cultural shifts among white Americans—and how rock n’ roll had at least some role in this evolution of outlooks. At the same time, his conservative roots are exposed in his treatment of On the Road and the rise of the “Beat generation”; Burns genuinely seems as baffled by their emergence as he is amazed that anyone could praise Kerouac’s literary talents. But, to his credit, he recognizes the impact the novel had upon a national audience that could no longer confidently boast of a certainty in its destiny. And it is Burns’ talent with a pen that captivates a markedly different audience, some sixty-five years later.

In the end, the author leaves us yearning for more. After all, other than references that border on the parenthetical to Richard Nixon, Robert F. Kennedy, and Dag Hammarskjöld, there is almost no discussion of national politics or international relations, essential elements in any study of a nation at what the author insists is a critical juncture. Even more problematic, conspicuous by its absence is the chapter that should have been devoted to television. In 1950, 3.9 million TV sets were in less than ten percent of American homes. By 1957, that number had increased roughly tenfold to 38.9 million TVs in the homes of nearly eighty percent of the population! That year, I Love Lucy aired its final half-hour episode, but in addition to network news, families were glued to their black-and-white consoles watching Gunsmoke, Alfred Hitchcock, Lassie, You Bet Your Life, and Red Skelton. For the World War II generation, technology that brought motion pictures into their living rooms was something like miraculous. Nothing was more central to the daily life of the average American in 1957 than television, but Burns inexplicably ignores it.

Other than Sputnik, which clearly marked a turning point for science and exploration, it is a matter of some debate whether 1957 should be singled out as the demarcation of a new era. One could perhaps argue instead for the election of John F. Kennedy in 1960, or with even greater conviction, for the date of his assassination in 1963, as a true crossroads of sorts for the past and future United States. Still, if for no other reason than the conceit that this was my birth year, I am willing to embrace Burns’ thesis that 1957 represented a collective critical moment for us all. Either way, his book delivers an impressive tour of a time that seems increasingly distant with the passing of each and every day.

 

 

Featured

Review of: Bogart, by A.M. Sperber & Eric Lax

Early in 2022, I saw Casablanca on the big screen for the first time, the 80th anniversary of its premiere. Although over the years I have watched it in excess of two dozen times, this was a stunning, even mesmerizing experience for me, not least because I consider Casablanca the finest film of Old Hollywood—this over the objections of some of my film-geek friends who would lobby for Citizen Kane in its stead. Even so, most would concur with me that its star, Humphrey Bogart, was indeed the greatest actor of that era.

Attendance was sparse, diminished by a resurgence of COVID, but I sat transfixed in that nearly empty theater as Bogie’s distraught, drunken Rick Blaine famously raged that “Of all the gin joints in all the towns in all the world, she walks into mine!” He is, of course, lamenting his earlier unexpected encounter with old flame Ilsa Lund, splendidly portrayed with a sadness indelibly etched upon her beautiful countenance by Ingrid Bergman, who with Bogart led the credits of a magnificent ensemble cast that also included Paul Henreid, Claude Rains, Conrad Veidt, Sydney Greenstreet, and Peter Lorre. But Bogie remains the central object of that universe, with the plot and the players in orbit about him. There’s no doubt that without Bogart, there could never have been a Casablanca as we know it. Such a movie might have been made, but it could hardly have achieved greatness on this order of magnitude.

Bogie never actually uttered the signature line “Play it again, Sam,” so closely identified with the production (and later whimsically poached by Woody Allen for the title of his iconic 1972 comedy peppered with clips from Casablanca). And although the film won Academy Awards for Best Picture and Best Director, as well as in almost every other major category, Bogart was nominated but missed out on the Oscar, which instead went to Paul Lukas—does anyone still remember Paul Lukas?—for his role in Watch on the Rhine. This turns out to be a familiar story for Bogart, who struggled with a lifelong frustration at typecasting, miscasting, studio manipulation, lousy roles, inadequate compensation, missed opportunities, and repeated snubs—public recognition of his talent and star-quality came only late in life and even still frequently eluded him, as on that Oscar night. He didn’t really expect to win, but we can yet only wonder at what Bogart must have been thinking . . . He was already forty-four years old on that disappointing evening when the Academy passed him over. There was no way he could have known that most of his greatest performances would lie ahead; that after multiple failed marriages (one still unraveling that very night) a young starlet he had only just met would come to be the love of his life and mother of his children; and that he would at last achieve not only the rare brand of stardom reserved for just a tiny slice of the top tier in his profession, but would go on to become a legend in his own lifetime and well beyond it: the epitome of the cool, tough, cynical guy who wears a thin veneer of apathy over an incorruptible moral center, women swooning over him as he stares down villains, an unlikely hero that every real man would seek to emulate.

My appreciation of Casablanca and its star in this grand cinema setting was enhanced by the fact that I was at the time reading Bogart (1997), by A.M. Sperber & Eric Lax, which is certainly the definitive biography of his life. I was also engaged in a self-appointed effort to watch as many key Bogie films in roughly chronological order as I could while reading the bio, which eventually turned out to be a total of twenty movies, from his first big break in The Petrified Forest (1936) to The Harder They Fall (1956), his final role prior to his tragic, untimely death at fifty-seven from esophageal cancer.

Bogie’s story is told brilliantly in this unusual collaboration by two authors who had never actually met. Ann Sperber, who wrote a celebrated biography of journalist Edward R. Murrow, spent seven years researching Bogart’s life and conducted nearly two hundred interviews with those who knew him most intimately before her sudden death in 1994. Biographer Eric Lax stepped in and shaped her draft manuscript into a coherent finished product that reads seamlessly like a single voice. I frequently read biographies of American presidents not only to study the figure that is profiled, but because the very best ones serve double duty as chronicles of United States history, the respective president as the focal point. I looked to the Bogart book for something similar, in this case a study of Old Hollywood with Bogie in the starring role. I was not to be disappointed.

Humphrey DeForest Bogart was born on Christmas Day 1899 in New York City to wealth and privilege, with a father who was a cardiopulmonary surgeon and a mother who was a commercial illustrator. Both parents were distant and unaffectionate. They had an apartment on the Upper West Side and a vast estate on Canandaigua Lake in upstate New York, where Bogie began his lifelong love affair with boating. Indifferent to higher education, he eventually flunked out of boarding school and joined the navy. There seems nothing noteworthy about his early life.

His acting career began almost accidentally, and he spent several years on the stage before making his first full-length feature in 1930, Up the River, with his drinking buddy Spencer Tracy, who called him “Bogie.” He was already thirty years old. What followed were largely lackluster roles on both coasts, alternating between Broadway theaters and Hollywood studios. He was frequently broke, drank heavily, and his second marriage was crumbling. Then he won rave reviews as escaped murderer Duke Mantee in The Petrified Forest, playing opposite Leslie Howard on the stage. The studio bought the rights, but characteristically for Bogie, they did not want to cast him to reprise his role, looking instead for an established actor, with Edward G. Robinson at the top of the list. Then Howard, who had production rights, stepped in to demand Bogart get the part. The 1936 film adaptation of the play, which also featured a young Bette Davis, channeled Bogart’s dark and chillingly realistic portrayal of a psychopathic killer—in an era when gangsters like Dillinger and Pretty Boy Floyd dominated the headlines—and made Bogie a star.

But again he faced a series of let-downs. This was the era of the studio system, with actors used and abused by big shots like Jack Warner, who locked Bogart into a low-paid contract that tightly controlled his professional life, casting him repeatedly in virtually interchangeable gangster roles in a string of B-movies. It wasn’t until 1941, when he played Sam Spade in The Maltese Falcon—quintessential film noir as well as John Huston’s directorial debut—that Bogie joined the ranks of undisputed A-list stars and began the process of taking revenge on the studio system by commanding greater compensation and demanding greater control of his screen destiny. But in those days, despite his celebrity, that remained an uphill battle.

I began watching his films while reading the bio as a lark, but it turned out to be an essential assignment: you can’t read about Bogie without watching him. Many of the twenty that I screened I had seen before, some multiple times, but others were new to me. I was raised by my grandparents in the 1960s with a little help from a console TV in the living room and all of seven channels delivered via rooftop antenna. When cartoons, soaps, and prime time westerns and sitcoms weren’t broadcasting, the remaining airtime was devoted to movies. All kinds of movies, from the dreadful to the superlative and everything in-between, often on repeat. Much of it was classic Hollywood and Bogart made the rounds. One of my grandfather’s favorite flicks was The Treasure of the Sierra Madre, and I can recall as a boy watching it with him multiple times. In general, he was a lousy parent, but I am grateful for that gift; it remains among my top Bogie films. We tend to most often think of Bogart as Rick Blaine or Philip Marlowe, but it is as Fred C. Dobbs in The Treasure of the Sierra Madre and Charlie Allnutt in The African Queen and Captain Queeg in The Caine Mutiny that the full range of his talent is revealed.

It was hardly his finest role or his finest film, but it was while starring as Harry Morgan in To Have and Have Not (1944) that Bogie met and fell for his co-star, the gorgeous, statuesque, nineteen-year-old Lauren Bacall—twenty-five years younger than him—spawning one of Hollywood’s greatest on-screen, off-screen romances. They would be soulmates for the remainder of his life, and it was she who brought out the very best of him. Despite his tough guy screen persona, the real-life Bogie tended to be a brooding intellectual who played chess, was well-read, and had a deeply analytical mind. An expert sailor, he preferred boating on the open sea to carousing in bars, although he managed to do plenty of both. During crackdowns on alleged communist influence in Hollywood, Bogart and Bacall together took controversial and sometimes courageous stands against emerging blacklists and the House Un-American Activities Committee (HUAC). But he also had his flaws. He could be cheap. He could be a mean drunk. He sometimes wore a chip on his shoulder carved out of years of frustration at what was after all a very slow rise to the top of his profession.  But warts and all, far more of his peers loved him than not.

Bogart is a massive tome, and the first section is rather slow-going because Bogie’s early life was just so unremarkable. But it holds the reader’s interest because it is extremely well-written, and it goes on to succeed masterfully in spotlighting Bogart’s life against the rich fabric that forms the backdrop of that distant era of Old Hollywood before the curtains fell for all time.  If you are curious about either, I highly recommend this book. If you are too busy for that, at the very least carve out some hours of screen time and watch Bogie’s films. You will not regret the time spent. Although his name never gets dropped in the lyrics by Ray Davies for the familiar Kinks tune, if there were indeed Celluloid Heroes, the greatest among them was certainly Humphrey Bogart.

 

NOTE: These are Bogart films I screened while reading this book:

The Petrified Forest (1936)

Dead End (1937)

High Sierra (1941)

The Maltese Falcon (1941)

Across the Pacific (1942)

Casablanca (1942)

Passage to Marseille (1944)

To Have and Have Not (1944)

The Big Sleep (1946)

Dark Passage (1947)

Dead Reckoning (1947)

The Treasure of the Sierra Madre (1948)

Key Largo (1948)

In a Lonely Place (1950)

The African Queen (1951)

Beat the Devil (1953)

The Caine Mutiny (1954)

Sabrina (1954)

The Desperate Hours (1955)

The Harder They Fall (1956)

 

Featured

Review of: A Self-Made Man: The Political Life of Abraham Lincoln Vol. I, 1809–1849, by Sidney Blumenthal

Historians consistently rank him at the top, tied with Washington for first place or simply declared America’s greatest president. His tenure was almost precisely synchronous with the nation’s most critical existential threat: his very election sparked secession, first shots fired at Sumter a month after his inauguration, the cannon stilled at Appomattox a week before his murder.  There were still armies in the field, but he was gone, replaced by one of the most sinister men ever to take the oath of office, leaving generations of his countrymen to wonder what might have transpired with all the nation’s painful unfinished business had he survived, from the trampled hopes for equality for African Americans to the promise of a truly “New South” that never emerged.  A full century ago, decades after his death, he was reimagined as an enormous, seated marble man with the soulful gaze of fixed purpose, the central icon in his monument that provokes tears for so many visitors who stand in awe before him. When people think of Abraham Lincoln, that’s the image that usually springs to mind.

The seated figure rises to a height of nineteen feet; somebody calculated that if it stood up it would be some twenty-eight feet tall. The Lincoln that once walked the earth was not nearly that gargantuan, but he was nevertheless a giant in his time: physically, intellectually—and far too frequently overlooked—politically! He sometimes defies characterization because he was such a character, in so very many ways.

An autodidact gifted with a brilliant analytical mind, he was also a creature of great integrity loyal to a firm sense of a moral center that ever evolved when polished by new experiences and touched by unfamiliar ideas. A savvy politician, he understood how the world worked. He had unshakeable convictions, but he was tolerant of competing views. He had a pronounced sense of empathy for others, even and most especially his enemies. In company, he was a raconteur with a great sense of humor given to anecdotes often laced with self-deprecatory wit. (Lincoln, thought to be homely, when accused in debate of being two-faced, self-mockingly replied: “I leave it to my audience. If I had another face, do you think I’d wear this one?”) But despite his many admirable qualities, he was hardly flawless. He suffered with self-doubt, struggled with depression, stumbled through missteps, burned with ambition, and was capable of hosting a mean streak that loomed even as it was generally suppressed. More than anything else he had an outsize personality.

And Lincoln likewise left an outsize record of his life and times! So why has he generally posed such a challenge for biographers? Remarkably, some 15,000 books have been written about him—second, it is said, only to Jesus Christ—yet in this vast literature the essence of Lincoln again and again somehow seems out of reach to his chroniclers. We know what he did and how he did it all too well, but portraying what the living Lincoln must have been like has remained frustratingly elusive in all too many narratives. For instance, David Herbert Donald’s highly acclaimed bio—considered by many the best single-volume treatment of his life—is indeed impressive scholarship, yet leaves us with a Lincoln who is curiously dull and lifeless. Known for his uproarious banter, the guy who joked about being ugly for political advantage is glaringly absent in most works outside of Gore Vidal’s Lincoln, which superbly captures him but remains, alas, a novel and not a history.

All that changed with A Self-Made Man: The Political Life of Abraham Lincoln Vol. I, 1809–1849, by Sidney Blumenthal (2016), an epic, ambitious, magnificent contribution to the historiography that demonstrates not only that despite the thousands of pages written about him there still remains much to say about the man and his times, but even more significantly that it is possible to brilliantly recreate for readers what it must have been like to engage with the flesh and blood Lincoln. This is the first in a projected five-volume study (two subsequent volumes have been published to date) that—as the subtitle underscores—emphasizes the “political life” of Lincoln, another welcome contribution to a rapidly expanding genre focused upon politics and power, as showcased in such works as Jon Meacham’s Thomas Jefferson: The Art of Power, Robert Dallek’s Franklin D. Roosevelt: A Political Life, and George Washington: The Political Rise of America’s Founding Father, by David O. Stewart.

At first glance, this tactic might strike as surprising, since prior to his election as president in 1860 Lincoln could boast of little in the realm of public office beyond service in the Illinois state legislature and a single term in the US House of Representatives in the late 1840s. But, as Blumenthal’s deeply researched and well-written account reveals, politics defined Lincoln to his very core, inextricably manifested in his life and character from his youth onward, something too often disregarded by biographers of his early days. It turns out that Lincoln was every bit a political animal, and there is a trace of that in nearly every job he ever took, every personal relationship he ever formed, and every goal he ever chased.

This approach triggers a surprising epiphany for the student of Lincoln. It is as if an entirely new dimension of the man has been exposed for the first time that lends new meaning to words and actions previously treated superficially or—worse—misunderstood by other biographers. Early on, Blumenthal argues that Donald and others have frequently been misled by Lincoln’s politically crafted utterances that cast him as marked by passivity, too often taking him at his word when a careful eye on the circumstances demonstrates the exact opposite. In contrast, Lincoln, ever maneuvering, if quietly, could hardly be branded as passive [p9]. Given this perspective, the life and times of young Abe is transformed into something far richer and more colorful than the usual accounts of his law practice and domestic pursuits. In another context, I once snarkily exclaimed “God save us from The Prairie Years” because I found Lincoln’s formative period—and not just Sandburg’s version of it—so uninteresting and unrelated to his later rise. Blumenthal has proved me wrong, and that sentiment deeply misplaced.

But Blumenthal not only succeeds in fleshing out a far more nuanced portrait of Lincoln—an impressive accomplishment on its own—but in the process boldly sets out to do nothing less than scrupulously detail the political history of the United States in the antebellum years from the Jackson-Calhoun nullification crisis onward.  Ambitious is hardly an adequate descriptive for the elaborate narrative that results, a product of both prodigious research and a very talented pen. Scores of pages—indeed whole chapters—occur with literally no mention of Lincoln at all, a striking technique that is surprisingly successful; while Lincoln may appear conspicuous in his absence, he is nevertheless present, like the reader a studious observer of these tumultuous times even when he is not directly engaged, only making an appearance when the appropriate moment beckons.  As such, A Self-Made Man is every bit as much a book of history as it is biography, a key element of the author’s unstated thesis: that it is impossible to truly get to know Lincoln—especially the political Lincoln—except in the context and complexity of his times, a critical emphasis not afforded in other studies.

And there is much to chronicle in these times. Some of this material is well known, even if until recently subject to faulty analysis.  The conventional view of the widespread division that characterized the antebellum period centered on a sometimes-paranoid south on the defensive, jealous of its privileges, in fear of a north encroaching upon its rights. But in keeping with the latest historiography, Blumenthal deftly highlights how it was that, in contrast, the slave south—which already wielded a disproportionate share of national political power due to the Constitution’s three-fifths clause that inflated its representation—not only stifled debate on slavery but aggressively lobbied for its expansion. And just as a distinctly southern political ideology evolved its notion of the peculiar institution from the “wolf by the ear” necessary evil of Jefferson’s time to a vaunted hallmark of civilization that boasted benefit to master and servant, so too did it come to view the threat of separation less in dread than anticipation. The roots of all that an older Lincoln would witness severing the ancient “bonds of affection” of the then no longer united states were planted in these, his early years.

Other material is less familiar. Who knew how integral to Illinois politics—for a time—were the cunning Joseph Smith and his Mormon sect?  Or that Smith’s path was once entangled with the budding career of Stephen A. Douglas? Meanwhile, the author sheds new light on the long rivalry between Lincoln and Douglas, which had deep roots that went back to the 1830s, decades before their celebrated clash on the national stage brought Lincoln to a prominence that finally eclipsed Douglas’s star.

Blumenthal’s insight also adeptly connects the present to the past, affording a greater relevance for today’s reader.  He suggests that the causes of the financial crisis of 2008 were not all that dissimilar to those that drove the Panic of 1837, but rather than mortgage-backed securities and a housing bubble, it was the monetization of human beings as slave property that leveraged enormous fortunes that vanished overnight when an oversupply of cotton sent market prices plummeting, which triggered British banks to call in loans on American debtors—a cotton bubble that burst spectacularly [p158-59]. This point can hardly be overstated, since slavery was not only integral to the south’s economy, but by the eve of secession human property was to represent the largest single form of wealth in the nation, exceeding the combined value of all American railroads, banks, and factories. A cruel system that assigned values to men, women, and children like cattle had deep ramifications not only for masters who acted as “breeders” in the Chesapeake and markets in the deep south, but also for insurance companies in Hartford, textile mills in Lowell, and banks in London.

Although Blumenthal does not himself make this point, I could detect eerie if imperfect parallels to the elections of 2016 and 1844, with Lincoln seething as the perfect somehow became the enemy of the good. In that contest, Whig Henry Clay was up against Democrat James K. Polk. Both were slaveowners, but Clay opposed the expansion of slavery while Polk championed it. Antislavery purists in New York rejected Clay for the tiny Liberty Party, which by a slender margin tipped the election to Polk, who then boosted the slave power with Texas annexation, and served as principal architect of the Mexican War that added vast territories to the nation, setting forces in motion that later spawned secession and Civil War. Lincoln was often prescient, but of course he could not know all that was to follow when, a year after Clay’s defeat, he bitterly denounced the “moral absolutism” that led to the “unintended tragic consequences” of Polk’s elevation to the White House [p303]. To my mind, there was an echo of this in the 2016 disaster that saw Donald Trump prevail, a victory at least partially driven by those unwilling to support Hillary Clinton who—despite the stakes—threw away their votes on Jill Stein and Gary Johnson.

No review could properly summarize the wealth of the material contained here, nor overstate the quality of the presentation, which also suggests much promise for the volumes that follow. I must admit that at the outset I was reluctant to read yet another book about Lincoln, but A Self-Made Man was recommended to me by no less than historian Rick Perlstein (author of Nixonland), and like Perlstein, Blumenthal’s style is distinguished by animated prose bundled with a kind of uncontained energy that frequently delivers paragraphs given to an almost breathless exhale of ideas and people and events that expertly locates the reader at the very center of concepts and consequences. The result is something exceedingly rare for books of history or biography: a page-turner! Whether new to studies of Lincoln or a long-time devotee, this book should be required reading.

 

A review of one of Rick Perlstein’s books is here: Review of: Nixonland: The Rise of a President and the Fracturing of America, by Rick Perlstein

I reviewed the subsequent two volumes in Blumenthal’s Lincoln series here: Review of: Wrestling With His Angel: The Political Life of Abraham Lincoln Vol. II, 1849-1856, and All the Powers of Earth: The Political Life of Abraham Lincoln Vol. III, 1856-1860, by Sidney Blumenthal

 

Featured

Review of: Maladies of Empire: How Colonialism, Slavery, and War Transformed Medicine, by Jim Downs

As the COVID-19 pandemic swept the globe in 2020, it left in its wake the near-paralysis of many hospital systems, unprepared and unequipped for the waves of illness and death that suddenly overwhelmed capacities for treatment that were after all at best only palliative care, since for this deadly new virus there was neither a cure nor a clear route to prevention. Overnight, epidemiologists—scrambling for answers or even just clues—became the most critically significant members of the public health community, even if their informed voices were often shouted down by the shriller ones of media pundits and political hacks.

Meanwhile, data collection began in earnest and the number of data dashboards swelled. In the analytical process, the first stop was identifying the quality of the data and the disparities in how data was collected. Was it true, as some suggested, that a disproportionate number of African Americans were dying from COVID? At first, there was no way to know since some states were not collecting data broken down by this kind of specific demographic. Data collection eventually became more standardized, more precise, and more reliable, serving as a key ingredient to combat the spread of this highly contagious virus, as well as one of the elements that guided the development of vaccines.  Even so, dubious data and questionable studies too often took center stage both at political rallies and in the media circus that echoed a growing polarization that had one side denouncing masks, resisting vaccination, and touting sideshow magic bullets like Ivermectin. But talking heads and captive audiences aside, masks reduce infection, vaccines are effective, and dosing with Ivermectin is a scam. How do we know that? Data. Mostly due to data. Certainly, other key parts of the mix include scientists, medical professionals, case studies, and peer reviewed papers, but data—first collected and then analyzed—is the gold standard, not only for COVID but for all disease treatment and prevention.

But it wasn’t always that way.

In the beginning, there was no such thing as epidemiology. Disease causes and treatments were anecdotal, mystical, or speculative.  Much of the progress in science and medicine that was the legacy of the classical world had long been lost to the west. The dawn of modern epidemiology rose above a horizon constructed of data painstakingly collected and compiled and subsequently analyzed. In fact, certain aspects of the origins of epidemiology ran concurrently with the evolution of statistical analysis. In the early days, as the reader comes to learn in this brilliant and groundbreaking 2021 work by historian Jim Downs, Maladies of Empire: How Colonialism, Slavery, and War Transformed Medicine, the bulk of the initial data was derived from unlikely and unwilling participants who existed at the very margins: the enslaved, the imprisoned, the war-wounded, and the destitute condemned to the squalor of public hospitals. Their identities are mostly forgotten, or were never recorded in the first place, yet collectively the data harvested from them was to provide the skeletal framework for the foundation of modern medicine.

In a remarkable achievement that could hardly be more relevant today, the author cleverly locates Maladies of Empire at the intersection of history and medicine, where data collection from unexpected and all too frequently wretched subjects comes to form the very basis of epidemiology itself. It is these early stories that send shudders to a modern audience. Nearly everyone is familiar with the wrenching 1787 diagram of the lower deck of the slave ship Brookes, where more than four hundred fifty enslaved human beings were packed like sardines for a months-long voyage, which became an emblem for the British antislavery movement. But, as Downs points out, few are aware that the sketch can be traced to the work of British naval surgeon Dr. Thomas Trotter, one of the first to recognize that poor ventilation in crowded conditions results in a lack of oxygen that breeds disease and death. His observations also led to a better understanding of how to prevent scurvy, a frequent cause of higher mortality rates among the seaborne citrus-deprived. Trotter himself was appalled by the conditions he encountered on the Brookes, and testified to this before the House of Commons. But that was hardly the case for many of his peers, and certainly not for the owners of slave ships, who looked past the moral dilemmas of a Trotter while exceedingly grateful for his insights; after all, the goal was to keep larger quantities of their human cargo alive in order to turn greater profits. Dead slaves lack market value.

A little more than three decades prior to Trotter’s testimony, the critical need for ventilation was documented by another physician in the wake of the confinement of British soldiers in the infamous “Black Hole of Calcutta” during the revolution in Bengal, which resulted in the death by suffocation of the majority of the captives.  Downs makes the point that one of the unintended consequences of colonialism was that for early actors in the medical arena it served to vastly extend the theater of observation of the disease-afflicted to a virtually global stage that hosted the byproducts of colonialism: war, subjugated peoples, the slave trade, military hospitals and prisons. But it turns out that the starring roles belong less to the doctors and nurses that receive top billing in the history books than to the mostly uncredited bit players removed from the spotlight: the largely helpless and disadvantaged patients whose symptoms and outcomes were observed and cataloged, whose anonymous suffering translated into critical data that collectively advanced the emerging science of epidemiology.

Traditionally, history texts rarely showcased notable women, but one prominent exception was Florence Nightingale, frequently extolled for her role as a nurse during the Crimean War. But as underscored in Maladies of Empire, Nightingale’s real if often overlooked legacy was as a kind of disease statistician through her painstaking data collection and analysis—the very basis for epidemiology that was generally credited to white men rather than to “women working in makeshift hospitals.” [p111] But it was the poor outcomes for patients typically subjected to deplorable conditions in these makeshift military hospitals—which Nightingale assiduously observed and recorded—that drew attention to similarly appalling environments in civilian hospitals in England and the United States, which led to a studied analysis that eventually established systematic evidence for the causes, spread, and treatment of disease.

The conclusions these early epidemiologists reached were not always accurate. In fact, they were frequently wrong. But Downs emphasizes that what was significant was the development of the proper analytical framework. In those days prior to the revolutionary development of germ theory, notions on how to improve survival rates of the stricken put forward by Nightingale and others were controversial and often contradictory. Was the best course quarantine, a frequent resort? Or would improving sickbed conditions, as Nightingale advocated, lead to better outcomes? With the role of germs in contagion still unknown, evidence could be both inconclusive and inconsistent, and competing ideas could each be partly right. After all, regardless of how disease spread, cleaner and better ventilated facilities might lead to lower mortality rates. Nightingale stubbornly resisted germ theory, even as it was widely adopted, but after giving it her grudging acceptance she continued to promote more sanitary hospital conditions to improve survival rates. Still, epidemiologists faced difficult challenges with diseases that did not conform to familiar patterns, such as cholera, spread by a tainted water supply, and yellow fever, a mosquito-borne pathogen.

In the early days, as noted, European observers collected data from slave ships, and it never occurred to them that evidence drawn from black subjects might not be applicable to the white population. But epidemiology took a surprisingly different course in the United States, where race has long proved to be a defining element. Of the more than six hundred thousand who lost their lives during the American Civil War, about two-thirds were felled not by bullets but by disease. The United States Sanitary Commission (USSC) was established in an attempt to ameliorate these dreadful outcomes, but its achievements on one hand were undermined on the other by an obsession with race, even sending out to “. . . military doctors a questionnaire, ‘The Physiological Status of the Negro,’ whose questions were based on the belief that Black soldiers were innately different from white soldiers . . . The questionnaire also distinguished gradations of color among Black soldiers, asking doctors to compare how ‘pure Negroes’ differed from people of ‘mixed races’ and to describe ‘the effects of amalgamation on the vital endurance and vigor of the offspring.’” With its imprimatur of governmental authority, the USSC officially championed scientific racism, with profound and long-term social, political, and economic consequences for African Americans. [p134-35]

Some of these notions can be traced back to the antebellum musings of Alabama surgeon Josiah Nott—made famous after the war when he correctly connected mosquitoes to the etiology of yellow fever—who asserted that blacks and whites were members of separate species whose mixed-race offspring he deemed “hybrids” who were “physiologically inferior.” Nott believed that all three of these distinct “types” responded differently to disease. [p124-25] His was but one manifestation of the once widespread pseudoscience of polygenism that alleged black inferiority in order to justify first slavery and later second-class citizenship. Such ideas persisted for far too long, and although scientific racism still endures on the alt-right, it has been thoroughly discredited by actual scientists. It turns out that a larger percentage of African Americans have indeed died in the still ongoing COVID pandemic, but this has been shown to be due to socioeconomic status and lack of access to healthcare, not genetics.

Still, although deemed inferior, enslaved blacks also proved useful when convenient. The author argues that “… enslaved children were most likely used as the primary source of [smallpox] vaccine matter in the Civil War South,” despite the danger of infection in harvesting lymph from human subjects in order to vaccinate Confederate soldiers in the field. In yet one more reminder of the moral turpitude that defined the south’s “peculiar institution,” the subjects also included infants whose resulting scar or pit, Downs points out, “. . . would last a lifetime, indelibly marking a deliberate infection of war and bondage. Few, if any, knew that the scars and pit marks actually disclosed the infant’s first form of enslaved labor, an assignment that did not make it into the ledger books or the plantation records.” [p141-42]

Tragically, this episode was hardly an anomaly, and unethical medical practices involving blacks did not end with Appomattox. The infamous “Tuskegee Syphilis Study,” which observed but withheld treatment from the nearly four hundred black men recruited without informed consent, ran for forty years and was not terminated until 1972! One of the chief reasons for COVID vaccine hesitancy among African Americans has been identified as distrust of a medical community that historically has either victimized or marginalized them.

Maladies of Empire is a well-written, highly readable book suitable to a scholarly as well as popular audience, and clearly represents a magnificent contribution to the historiography. But it is hardly only for students of history. Instead, it rightly belongs on the shelf of every medical professional practicing today—especially epidemiologists!


Review of: Chemistry for Breakfast: The Amazing Science of Everyday Life, by Mai Thi Nguyen-Kim

Is your morning coffee moving? Is there a particle party going on in your kitchen? What makes for a great-tasting gourmet meal? Does artificial flavoring really make a difference? Why does mixing soap with water get your dishes clean? Why do some say that “sitting is the new smoking?” How come one beer gives you a strong buzz but your friend can drink a bottle of wine without slurring her words?  When it comes to love, is the “right chemistry” just a metaphor? And would you dump your partner because he won’t use fluoridated toothpaste?

All this and much more makes for the delightful conversation packed into Chemistry for Breakfast: The Amazing Science of Everyday Life, by Mai Thi Nguyen-Kim, a fun, fascinating, and fast-moving slender volume that could very well turn you into a fan of—of all things—chemistry! This cool and quirky book is just the latest effort by the author—a real-life German chemist who hosts a YouTube channel and has delivered a TED Talk—to combat what she playfully dubs “chemism”: the notion that chemistry is dull and best left to the devices of boring nerdy chem-geeks! One reason it works is that Nguyen-Kim is herself the antithesis of such stereotypes, coming off in both print and video as a hip, brilliant, and articulate young woman with a passion for science and for living in the moment.

I rarely pick up a science book, but when I do, I typically punch above my intellectual weight, challenging myself to reach beyond my facility with history and literature to dare to tangle with the intimidating realms of physics, biology, and the like. I often emerge somewhat bruised but with the benefit of new insights, as I did after my time with Sean Carroll’s The Particle at the End of the Universe and Bill Schopf’s Cradle of Life. So it was with a mix of eagerness and trepidation that I approached Chemistry for Breakfast.

But this proved to be a vastly different experience! Using her typical day as a backdrop—from her own body’s release of stress hormones when the alarm sounds to the way postprandial glasses of wine mess with the neurotransmitters of her guests—Nguyen-Kim demonstrates the omnipresence of chemistry in our very existence, and distills its complexity into bite-size concepts that are easy to process yet never dumbed-down. Apparently, there is a particle party going on in your kitchen every morning, with all kinds of atoms moving at different rates in the coffee you’re sipping, the mug in your hand, and the steam rising above it. It’s all about temperature and molecular bonds. In a chapter whimsically entitled “Death by Toothpaste,” we find out how chemicals bond to produce sodium fluoride, the stuff of toothpaste, and why that not only makes for a potent weapon against cavities, but also why the author’s best buddy might dump her boyfriend—because he thinks fluoride is poison! There’s much more to come—and it’s still only morning at Mai’s house …

As a reader, I found myself learning a lot about chemistry without studying chemistry, a remarkable achievement by the author, whose technique is effective precisely because it is so distinctive. She fields humorous anecdotes plucked from everyday existence with infectious wit, so the “lessons” prove entertaining without turning silly. I love to cook, so I especially welcomed her return to the kitchen in a later chapter. Alas, I found out that while I can pride myself on my culinary expertise, it all really comes down to the way ingredients react with one another in a mixing bowl and on the hot stove. Oh, and it turns out that despite the fearmongering in some quarters, most artificial flavors are no better or worse than natural ones. Yes, you should read the label—but you have to know what those ingredients are before you judge them healthy or not.

Throughout the narrative, Nguyen-Kim conveys an attractive brand of approachability that makes you want to sit down and have a beer with her—but unfortunately she can’t drink. Mai, born of Vietnamese parents, has inherited a gene mutation common among many people of East Asian descent that interferes with the way the body processes alcohol, so she becomes overly intoxicated after just a few sips of any strong drink. She explains in detail why her “broken” ALDH2 enzyme simply will not break down the acetaldehyde in the glass of wine that makes her guests a little tipsy but gives her nausea, a rapid heartbeat, and a “weird, lobster-red tinge” to her face. Mai’s issue with alcohol reminded me of recent studies revealing that some people of northern European ancestry always burn instead of tan at the beach because faulty genes block the creation of melanin in response to sun exposure. This underscores that while race is of course a myth that otherwise communicates nothing of importance about human beings, in the medical world genetics has the potential to serve as a powerful tool to explain and treat disease. As for Mai, given the overall health risks of alcohol consumption, she views her inability to drink as more of a blessing than a curse, and hopes to pass her broken gene on to her offspring!

The odds that I would ever deliberately set out to read a book about chemistry were never that favorable. That I would do so and then rave about the experience seemed even more unlikely. But here we are, along with my highest recommendations. Mai’s love of science is nothing less than contagious. If you read her work, I can promise that not only will you learn a lot, but you will really enjoy the learning process. And that too, I suppose, is chemistry!


[Note: I read an Advance Reader’s Copy of this book as part of an early reviewer’s program]


Review of: The Lost Founding Father: John Quincy Adams and the Transformation of American Politics, by William J. Cooper

Until Jimmy Carter came along, there really was no rival to John Quincy Adams (1767-1848) as best ex-president, although perhaps William Howard Taft earns honorable mention for his later service as Chief Justice of the Supreme Court. Carter—who at ninety-seven still walks among us as this review goes to press—has made his reputation as a humanitarian outside of government after what many view as a mostly failed single term in the White House. Adams, on the other hand, whose one term as the sixth President of the United States (1825-29) was likewise disappointing, managed to establish an outsize official legacy when he returned to serve his country as a member of the House of Representatives from 1831 until his dramatic collapse at his desk and subsequent death inside the Capitol Building in 1848. Freshman Congressman Abraham Lincoln would be a pallbearer.

Like several of the Founders whose own later presidential years were troubled, including his own father, John Quincy had a far more distinguished and successful career prior to his time as Chief Executive. But quite remarkably, unlike these other men—John Adams, Jefferson, Madison—who lingered in mostly quiet retirement for decades beyond their respective tenures, John Quincy Adams went on to equal or even surpass his accomplished pre-presidential service as diplomat, United States Senator, and Secretary of State, returning as a simple Congressman from Massachusetts who was to become a giant in antislavery advocacy. Adams remains the only former president elected to the House, and until George W. Bush in 2001, he was the only man who could claim his own father as a fellow president.

Notably, the single unsatisfactory terms that he and his father served in the White House turned out to be bookends to a significant era in American history: John Adams was the first to run for president in a contested election (Washington had essentially been unopposed); his son’s tenure ended along with the Early Republic, shattered by the ascent of Jacksonian democracy. But if the Early Republic was no more, it marked only the beginning of another chapter in the extraordinary life of John Quincy Adams. And yet, for a figure who carved such indelible grooves in our nation’s history, present at the creation and active well into the crises of the antebellum period that not long after his death would threaten to annihilate the American experiment, it remains somewhat astonishing how utterly unfamiliar he is to most citizens of the twenty-first century.

Prominent historian William J. Cooper seeks to remedy that with The Lost Founding Father: John Quincy Adams and the Transformation of American Politics (2017), an exhaustively researched, extremely well-written, if dense study that is likely to claim distinction as the definitive biography for some years to come. Cooper’s impressive work is old-fashioned narrative history at its best. John Quincy Adams is the main character, but his story is told amid the backdrop of the nation’s founding, its evolution as a young republic, and its descent to sectional crises over slavery, while many, at home and abroad, wondered at the likelihood of its survival. It is not only clever but entirely apt that in the book’s title the author dubs his subject the “Lost Founding Father.”

Some have called Benjamin Franklin the “grandfather of his country.” Likewise, John Quincy Adams could be said to be a sort of “grandson.” He not only witnessed the tumultuous era of the American Revolution and observed John Adams’ storied role as a principal Founder, but also accompanied his father on diplomatic missions to Europe while still a boy, and completed most of his early education there. Like Franklin, Jefferson, and his father, he spent many years abroad during periods of fast-moving events and dramatic developments on American soil that altered the nation and could prove jarring upon return. Unlike the others, his extended absence coincided with his formative years; John Quincy grew up not in New England but rather in France, the Netherlands, Russia, and Great Britain, and this came to deeply affect him.

A brooding intellectual with a brilliant mind who sought solitude over society, dedicated to principle above all else, including loyalty to party, the Adams that emerges in these pages was a socially awkward workaholic subject to depression, blessed with talents that ranged from the literary to languages to the deeply analytical, but lacking even the tiniest vestige of charisma. He strikes the reader as the least suitable person ever to aspire to or serve as president of the United States. A gifted writer, he began a diary when he was twelve years old and continued it almost without interruption until shortly before his death, frequently expressing dismay at his inability to keep up with his ambitious goals for daily entries that often ran to considerable length.

There is much in the man that resembles his father, also a principled intellect, whom he much admired even while he suffered a sense of inadequacy in his shadow. Both men were stubborn in their ideals and tended to alienate those who might otherwise be allies. While each could be self-righteous, John Adams was also ever firmly self-confident in a way that his son could never match. Of course, in his defense, the younger man not only felt obligated to live up to a figure who was a titan in the public arena, but he also lacked a wife cut from the same cloth as his mother, with whom he had a sometimes-troubled relationship.

Modern historians have made much of the historic partnership that existed, mostly behind the scenes, between John and Abigail Adams; in every way except eighteenth century mores she seems his equal. John Quincy, on the other hand, was wedded to Louisa Catherine, a sickly woman given to fainting spells and frequent migraines whose multiple miscarriages coupled with the loss of an infant daughter certainly triggered severe psychological trauma. A modern audience can’t help but wonder if her many maladies and histrionics were not psychosomatic. At any rate, John Quincy treated his wife and other females he encountered with the patronizing male chauvinism typical of his times, so it is dubious that if he instead found an Abigail Adams at his side, he could have flourished in her orbit the way his father did.

Although Secretary of State John Quincy Adams was largely the force that drove the landmark “Monroe Doctrine” and other foreign policy achievements of the Monroe Administration, most who know of Adams tend to know of him only peripherally, through his legendary political confrontation with the far more celebrated Andrew Jackson. That conflict was forged in the election of 1824. The Federalist Party, scorned for threats of New England secession during the War of 1812, was essentially out of business. James Monroe was wrapping up his second term in what historians have called the “Era of Good Feelings,” which ostensibly reflected a sense of national unity controlled by a single party, the Democratic-Republicans, but there were fissures, factions, local interests, and emerging coalitions beneath the surface. In the most contested election to date in the nation’s history, John Quincy, Andrew Jackson, Henry Clay, and William Crawford were chief contenders for the highest office. While Jackson received a plurality, none received a majority of the electoral votes, so as specified in the Constitution the race was sent to the House for decision. Crawford had suffered a devastating stroke and was thus out of consideration. Adams and Clay tended to clash, but both were aligned on many national issues, and Jackson was rightly seen as a dangerous demagogue. Clay threw his support to Adams, who became president. Jackson was furious, all the more so when Adams later named Clay Secretary of State—then seen as a sure steppingstone to the presidency—and he branded the deal a “Corrupt Bargain.” As it turned out, while Adams prevailed, his presidency was marked by frustration, his ambitious domestic goals stymied by Congress. In a run for reelection, he was dealt a humiliating defeat by Jackson, who headed the new Democratic Party. The politics of John Quincy Adams and the Early Republic went extinct.

While evaluating these two elections, it’s worth pausing here to emphasize John Quincy’s longtime objection to the nefarious if often overlooked impact of the three-fifths clause in the Constitution, which granted southern slaveholding states outsize political clout by counting an enslaved individual as three-fifths of a person for the purpose of representation. This was to prove significant, since the slave south claimed a disproportionate share of national political power when it came to advancing legislation or, for that matter, electing a president. He focused on this issue while Secretary of State during the debate that swirled around the Missouri Compromise of 1820, concluding that:

The bargain in the Constitution between freedom and slavery had conveyed to the South far too much political influence, its base the notorious three-fifths clause, which immorally increased southern power in the nation … the past two decades had witnessed a southern domination that had ravaged the Union … he emphasized what he saw as the moral viciousness of that founding accord. It contradicted the fundamental justification of the American Revolution by subjecting slaves to oppression while privileging their masters with about a double representation.  [p174]

This was years before he was himself to fall victim to the infamous clause. As underscored by historian Alan Taylor in his recent work, American Republics (2021), the disputed election of 1824 would have been far less disputed without the three-fifths clause, since in that case Adams would have led Andrew Jackson in the Electoral College 83 to 77 votes, instead of putting Jackson in the lead 99 to 84. When Jackson prevailed in the next election in 1828, it was the south that cemented his victory. The days of Virginia planters in the White House may have passed, but the slave south clearly dominated national politics and often served as antebellum kingmaker for the White House.

In any case, Adams’ dreams of vindicating his father’s single term were dashed.  A lesser man would have gone off into the exile of retirement, but Adams was to come back—and come back stronger than ever as a political figure to be reckoned with, distinguished by his fierce antislavery activism. His abhorrence of human bondage ran deep, and long preceded his return to Congress. And because he kept such a detailed journal, we have insight into his most personal convictions.

Musing once more about the Missouri Compromise, he confided to his diary his belief that a war over slavery was surely on the horizon that would ultimately result in its elimination:  “If slavery be the destined sword in the hand of the destroying angel which is to sever the ties of this Union … the same sword will cut in sunder the bonds of slavery itself.” [p173] He also wrote of his conversations with the fellow cabinet secretary he most admired at the time, South Carolina’s John C. Calhoun, who clearly articulated the doctrine of white supremacy that defined the south. To Adams’ disappointment, Calhoun told him that southerners did not believe the Declaration’s guarantees of universal rights applied to blacks, and “Calhoun maintained that racial slavery guaranteed equality among whites because it placed all of them above blacks.” [p175]

These diary entries from 1820 came to foreshadow the more crisis-driven politics in the decades hence when Adams—his unhappy presidency long behind him—was the leading figure in Congress who stood against the south’s “peculiar institution” and southern domination of national politics. These were, of course, far more fraught times. He opposed both Texas annexation and the Mexican War, which he correctly viewed as a conflict designed to extend slavery. But he most famously led the opposition against the 1836 resolution known as the “gag rule” that prohibited House debate on petitions to abolish slavery, which incensed the north and spawned greater polarization. Adams was eventually successful, and the gag rule was repealed, but not until 1844.

It has long been my goal to read at least one biography of each American president, and I came to Cooper’s book with that objective in mind. I found my time with it a deeply satisfying experience, although I suspect that because it is so dense with detail it will find less appeal among a more popular audience. Still, if you want to learn about this too often overlooked critical figure and at the same time gain a greater understanding of an important era in American history, I would highly recommend that you turn to The Lost Founding Father.

——————————————

Note: I reviewed the referenced Alan Taylor work here: Review of: American Republics: A Continental History of the United States, 1783-1850, by Alan Taylor



Review of: Marching Masters: Slavery, Race, and the Confederate Army During the Civil War, by Colin Edward Woodward

Early in the war … a Union squad closed in on a single ragged Confederate, and he obviously didn’t own any slaves. He couldn’t have much interest in the Constitution or anything else. “What are you fighting for, anyhow?” they asked him. And he said: “I’m fighting because you’re down here.” Which is a pretty satisfactory answer.

That excerpt is from Ken Burns’ epic The Civil War (1990) docuseries, Episode 1, “The Cause.” It was delivered by the avuncular Shelby Foote in his soft, reassuring—some might say mellifluous—cadence, the inflection decorated with a pronounced but gentle southern accent. As professor of history James M. Lundberg complains, Foote, author of a popular Civil War trilogy who was himself not a historian, “nearly negates Burns’ careful 15-minute portrait of slavery’s role in the coming of the war with a 15-second” anecdote.  Elsewhere, Foote rebukes the scholarly consensus that slavery was the central cause for secession and the conflict it spawned that would take well over 600,000 American lives.

While all but die-hard “Lost Cause” myth fanatics have relegated Foote’s ill-conceived dismissal of the centrality of slavery to the dustbin of history, the notion that southern soldiers fought solely for home and hearth has long persisted, even among historians. And on the face of it, it seems as if it should be true. After all, secession was the work of a narrow slice of the antebellum south, the slave-owning planter class, which comprised less than two percent of the population but dominated the political elite, in fury that Lincoln’s election by “Free-Soil” Republicans would likely deny their demands to transplant their “peculiar institution” to the new territories acquired in the Mexican War. More critically, three-quarters of southerners owned no slaves at all, and nearly ninety percent of the remainder owned twenty or fewer. Most whites lived at the margins as yeoman farmers, although their skin color ensured a status markedly above that of blacks, free or enslaved. The Confederate army closely reflected that society: most rebel soldiers were not slaveowners. So slavery could not have been important to them … or could it?

The first to challenge the assumption that Civil War soldiers, north or south, were political agnostics was James M. McPherson in What They Fought For 1861-1865 (1995). Based on extensive research on letters written home from the front, McPherson argued that most of those in uniform were far more ideological than previously acknowledged. In a magnificent contribution to the historiography, Colin Edward Woodward goes much further in Marching Masters: Slavery, Race, and the Confederate Army During the Civil War (2014), presenting compelling evidence that not only were most gray-clad combatants well-informed about the issues at stake, but a prime motivating force for a majority was to preserve the institution of human chattel bondage and the white supremacy that defined the Confederacy.

Like McPherson, Woodward mines the wealth of still extant letters from those at the front to make his case in a deeply researched and well-written narrative, revealing that the average rebel was surprisingly well-versed in the greater issues manifested in the debates that launched an independent Confederacy and justified the blood and treasure being spent to sustain it. And just as in secession, the central focus was upon preserving a society that had its foundation in chattel slavery and white supremacy. Some letters were penned by those who left enslaved human beings—many or just a few—back at home with their families when they marched off to fight, while most were written by poor dirt farmers who had no human property nor the immediate prospect of obtaining any.

But what is truly astonishing, as Woodward exposes in the narrative, is not only how frequently slavery and the appropriate status of African Americans are referenced in such correspondence, but how remarkably similar the language is, whether the soldier is the son of a wealthy planter or a yeoman farmer barely scraping by. In nearly every case, the righteousness of their cause is defined again and again not by the euphemism of “states’ rights” that became the rallying cry of the “Lost Cause” after the war, but by the sanctity of the institution of human bondage. More than once, letters resound with a disturbing yet familiar refrain asserting that the most fitting condition for blacks is as human property, something seen as mutually beneficial to the master as well as to the enslaved.

If the spectacle of those without slaves risking life and limb to sustain slavery—with both musket in hand and zealous declarations in letters home—provokes a kind of cognitive dissonance to modern ears, we need only be reminded of our own contemporaries in doublewides who might mount the most passionate defense of Wall Street banks. Have-nots in America often aspire to what is beyond their reach, for themselves or for their children. For poor southern whites of the time, in and out of the Confederate army, that turned out to be slave property.

One of the greatest sins of postwar reconciliation and the tenacity of the “Lost Cause” was the erasure of African Americans from history. In the myth-making that followed Appomattox, with human bondage extinct and its practice widely reviled, the Civil War was transformed into a sectional war of white brother against white brother, and blacks were relegated to roles as bit players. The centrality of slavery was excised from the record. In the literature, blacks were generally recalled as benign servants loyal to their masters, like the terrified Prissy in Gone with the Wind screeching “De Yankees is comin!” in distress rather than the celebration more likely characteristic of that moment in real time. That a half million of the enslaved fled to freedom in Union lines was lost to memory. Also forgotten was the fact that by the end of the war, fully ten percent of the Union Army was comprised of black soldiers in the United States Colored Troops (USCT)—and these men played a significant role in the south’s defeat. Never mentioned was that Confederate soldiers routinely executed black men in blue uniforms who were wounded or attempting to surrender, not only in well-known encounters like those at Fort Pillow and the Battle of the Crater, but frequently and anonymously. As Woodward reminds us, this brand of murder was often unofficial, rarely acknowledged, and almost never condemned. Only recently have these aspects of Civil War history received the attention that is their due.

And yet, more remarkably, Marching Masters reveals that perhaps the deepest and most enduring erasure of African Americans was of the huge cohort that accompanied the Confederate army on its various campaigns throughout the war. Thousands and thousands of them. “Lost Cause” zealots have imagined great corps of “Black Confederates” who served as fighters fending off Yankee marauders, but if that is fantasy—and it certainly is—the massive numbers of blacks who served as laborers alongside white infantry were not only real but represented a significant reason why smaller forces of Confederates held out as well as they did against their often numerically superior northern opponents. We have long known that a greater percentage of southerners were able to join the military than their northern counterparts because slave labor at home in agriculture and industry freed up men to wield saber and musket, but Woodward uncovers the long-overlooked legions of the enslaved who travelled with the rebels performing the kind of labor that (mostly) fell on white enlisted men in northern armies.

A segment of these were also personal servants to the sons of planters, which sometimes provoked jealousy among the ranks. Certain letters home plead for just such a servile companion, sometimes arguing that the enslaved person would be less likely to flee to Union lines if he were a cook in an army camp instead! And there were indeed occasional tender if somewhat perversely paternalistic bonds between the homesick soldier and the enslaved, some of which found wistful expression in letters, some manifested in relationships with servants in the encampments. Many soldiers had deep attachments to the enslaved who had nurtured them as children in the bosom of their families; some of that affection was sincerely reciprocated. Woodward makes it clear that while certain generalities can be drawn, every individual—soldier or chattel—was a human being capable of a wide range of actions and emotions, from the cruel to the heartwarming. For better or for worse, all were creatures of their times and their circumstances. But, at the end of the day, white soldiers had something like free will; enslaved African Americans were subject to the will of others, sometimes for the better but more often for the worse.

And then there was impressment. One of the major issues relatively unexplored in the literature is the resistance of white soldiers in the Confederate army to performing menial labor—the same tasks routinely done by white soldiers in the Union army, who grumbled as all those in the ranks in every army were wont to do while nevertheless following orders. But southern boys were different. Nurtured in a society firmly grounded in white supremacy, with chattel property doomed to the most onerous toil, rebels not only typically looked down upon hard work but—as comes out in their letters—equated it with “slavery.” To cope with this and an overall shortage of manpower, legislation was passed in 1863 mandating impressment of the enslaved along with a commitment of compensation to owners. The measure was not well received, but it was enacted nonetheless, and thousands more blacks were sent to camps to do the work soldiers were not willing to do.

The numbers were staggering. When Lee invaded Pennsylvania, his army included 6,000 enslaved blacks—which added an additional ten percent to the 60,000 infantry troops he led to Gettysburg! This of course does not include the runaways and free blacks his forces seized and enslaved after he crossed the state line. The point of all this, of course, is that slavery was not some ideological abstraction for the average rebel soldier in the ranks, something that characterized the home front, whether your own family were owners of chattel property or not. Instead, the enslaved were with you in the field every day, not figuratively but in the flesh. With this in mind, any denial that slavery served as a critical motivation for Confederate troops rings decidedly off-key.

While slavery was the central cause of the war, it was certainly not the only cause. There were other tensions that included agriculture vs. industry, rural vs. urban, states’ rights vs. central government, tariffs, etc. But as historians have long concluded, none of these factors on its own could ever have led to the Civil War. Likewise, southern soldiers fought for a variety of reasons. While plenty were volunteers, many were also drafted into the war effort. Like soldiers from ancient times to the present day, they fought because they were ordered to, because of their personal honor, because they did not want to appear cowardly in the eyes of their companions. And because much of the war was decided on southern soil, they also fought for their homeland, to defend their families, to preserve their independence. So Shelby Foote might have had a point. But what was that independence based upon? It was fully and openly based upon creating and sustaining a proud slave republic, as all the rhetoric in the lead-up to secession loudly underscored.

Marching Masters argues convincingly that the long-held belief that southern soldiers were indifferent to or unacquainted with the principles that guided the Confederate States of America is in itself a kind of myth that encourages us to not only forgive those who fought for a reprehensible cause but to put them on a kind of heroic pedestal. Many fought valiantly, many lost their lives, and many were indeed heroes, but we must not overlook the cause that defined that sacrifice.  In this, we must recall the speech delivered by the formerly enslaved Frederick Douglass on “Remembering the Civil War” with his plea against moral equivalency that is as relevant today as it was when he delivered it on Decoration Day in 1878: “There was a right side and a wrong side in the late war, which no sentiment ought to cause us to forget, and while today we should have malice toward none, and charity toward all, it is no part of our duty to confound right with wrong, or loyalty with treason.”

For all of the more than 60,000 books on the Civil War, there still remains a great deal to explore and much that has long been cloaked in myth for us to unravel. It is the duty not only of historians but of all citizens of our nation—a nation that was truly reborn in that tragic, bloody conflict—to set aside popular if erroneous notions of what led to that war, as well as what motivated its long-dead combatants to take up arms against one another. To that end, Woodward’s Marching Masters is a book that is not only highly recommended but is most certainly required reading.

 

———————————

Transcript of The Civil War (1990) docuseries, Episode 1, “The Cause:” https://subslikescript.com/series/The_Civil_War-98769/season-1/episode-1-The_Cause

Comments by James M. Lundberg: https://www.theatlantic.com/national/archive/2011/06/civil-war-sentimentalism/240082/

Speech by Frederick Douglass, “Remembering the Civil War,” delivered on Decoration Day 1878:   https://www.americanyawp.com/reader/reconstruction/frederick-douglass-on-remembering-the-civil-war-1877/

Featured

Review of: Sarah’s Long Walk: The Free Blacks of Boston and How Their Struggle for Equality Changed America, by Stephen Kendrick & Paul Kendrick

Several years ago, I published an article in a scholarly journal entitled “Strange Bedfellows: Nativism, Know-Nothings, African-Americans & School Desegregation in Antebellum Massachusetts,” that spotlighted the odd confluence of anti-Irish nativism and the struggle to desegregate Boston schools. The Know-Nothings—a populist, nativist coalition that contained elements that would later be folded into the emerging Republican Party—made a surprising sweep in the 1854 Massachusetts elections, fueled primarily by anti-Irish sentiment, as well as a pent-up popular rage against the elite status quo that had long dominated state politics. Suddenly, the governor, all forty senators, and all but three house representatives were Know-Nothings!

Perhaps more startling was that during their brief tenure, the Know-Nothing legislature enacted a host of progressive reforms, creating laws to protect workingmen, ending imprisonment for debt, strengthening women’s rights in property and marriage, and—most significantly—passing landmark legislation in 1855 that “prohibited the exclusion [from public schools] of children for either racial or religious reasons,” which effectively made Massachusetts the first state in the country to ban segregation in schools! Featured in the debate prior to passage of the desegregation bill is a quote from the record that is to today’s ears perhaps at once comic and cringeworthy, as one proponent of the new law sincerely voiced his regret “that Negroes living on the outskirts . . . were forced to go a long distance to [the segregated] Smith School. . . while . . . the ‘dirtiest Irish,’ were allowed to step from their houses into the nearest school.”

My article focused on Massachusetts politics and the bizarre incongruity of nativists unexpectedly delivering the long sought-after prize of desegregated schools to the African American community. It is also the story of the nearly forgotten black abolitionist and integrationist William Cooper Nell, a mild if charismatic figure who united disparate forces of blacks and whites in a long, stubborn, determined campaign to end Boston school segregation. But there are many other important stories of people and events that led to that moment which, due to space constraints, could not receive adequate treatment in my effort.

Arguably the most significant one, which my article references but does not dwell upon, centers upon a little black girl named Sarah Roberts. Her father, Benjamin R. Roberts, sued for equal protection rights under the state constitution because his daughter was barred from attending a school near her residence and was instead compelled to make a long walk to the rundown and crowded Smith School. He was represented by Robert Morris, one of the first African American attorneys in the United States, and Charles Sumner, who would later serve as United States Senator. In April 1850, in Roberts v. The City of Boston, the Massachusetts Supreme Judicial Court ruled against him, declaring that each locality could decide for itself whether to have or end segregation. This ruling was to serve as an unfortunate precedent for the ignominious “separate but equal” doctrine of Plessy v. Ferguson some decades hence, and was also an obstacle Thurgood Marshall had to surmount when he successfully argued to have the Supreme Court strike down school segregation across the nation in 1954’s breakthrough Brown v. Board of Education case—just a little more than a century after the disappointing ruling in the Roberts case.

Father and son Stephen Kendrick and Paul Kendrick teamed up to tell the Roberts story and a good deal more in Sarah’s Long Walk: The Free Blacks of Boston and How Their Struggle for Equality Changed America, an extremely well-written, comprehensive, if occasionally slow-moving chronicle that recovers for the reader the vibrant, long overlooked black community that once peopled Boston in the years before the Civil War. In the process, the authors reveal how it was that while the state of Massachusetts offered the best overall quality of life in the nation for free blacks, it was also the home to the same stark, virulent racism characteristic of much of the north in the antebellum era, a deep-seated prejudice that manifested itself not only in segregated schools but also in a strict separation in other arenas such as transportation and theaters.

Doctrines of abolition were widely despised, north and south, and while abolitionists remained a minority in Massachusetts, as well, it was perhaps the only state in the country where antislavery ideology achieved widespread legitimacy. But true history is all nuance, and those who might rail passionately against the inherent evil in holding humans as chattel property did not necessarily also advance notions of racial equality. That was indeed far less common. Moreover, it is too rarely underscored that the majority of northern “Freesoilers” who were later to become the most critical component of the Republican Party vehemently opposed the spread of slavery to the new territories acquired in the Mexican War while concomitantly despising blacks, free or enslaved.

At the same time, there was hardly unanimity in the free black community when it came to integration; some blacks welcomed separation. Still, as Sarah’s Long Walk relates, there were a number of significant African American leaders like Robert Morris and William Cooper Nell who, with their white abolitionist allies, played the long game and pursued compelling, nonviolent mechanisms to achieve both integration and equality, many of which presaged the tactics of Martin Luther King and other Civil Rights figures a full century later. For instance, rather than lose hope after the Roberts court decision, Nell doubled down on his efforts, this time with a new strategy—a taxpayer’s boycott of Boston which saw prominent blacks move out of the city to suburbs that featured integrated schools, depriving Boston of tax revenue.

The Kendricks open the narrative with a discussion of Thurgood Marshall’s efforts to overturn the Roberts precedent in Brown v. Board of Education, and then trace that back to the flesh and blood Boston inhabitants who made Roberts v. The City of Boston possible, revealing the free blacks who have too long been lost to history. Readers not familiar with this material will come across much that will surprise them between the covers of this fine book. The most glaring revelation might be how thoroughly, in the decades after Reconstruction, blacks were erased from our history, north and south. Until recently, how many growing up in Massachusetts knew anything at all about the thriving free black community in Boston, or similar ones elsewhere above the Mason-Dixon?

But most astonishing for many will be the fact that the separation of races that would become the new normal in the post-Civil War “Jim Crow” south had its roots fully nurtured in the north decades before Appomattox. Whites and their enslaved chattels shared lives intertwined in the antebellum south, while separation between whites and blacks was fiercely enforced in the north. Many African Americans in Massachusetts had fled bondage, or had family members who were runaways, and knew full well that southern slaveowners commonly traveled by rail accompanied by their enslaved servants, while free blacks in Boston were relegated to a separate car until the state prohibited racial segregation in mass transportation in 1842.

Sarah may not have been spared her long walk to school, but the efforts of integrationists eventually paid off when school segregation was prohibited by Massachusetts law just five years after Sarah’s father lost his case in court. Unfortunately, this battle had to be waged all over again in the 1970s, this time accompanied by episodes of violence, as Boston struggled to achieve educational equality through controversial busing mandates that in the long term generated far more ill will than sustainable results. Despite the elevation of Thurgood Marshall to the Supreme Court bench, and the election of the first African American president, more than one hundred fifty years after the Fourteenth Amendment became the law of the land, the Black Lives Matter (BLM) movement reminds us that there is still much work to be done to achieve anything like real equality in the United States.

For historians and educators, an even greater concern these days lies in the concerted efforts by some on the political right to erase the true story of African American history from public schools. As this review goes to press in Black History Month, February 2022, shameful acts are becoming law across a number of states that by means of gaslighting legislation ostensibly designed to ban Critical Race Theory (CRT) effectively prohibit educators from teaching their students the true history of slavery, Reconstruction, and Civil Rights.  As of this morning, there are some one hundred thirteen other bills being advanced across the nation that could serve as potential gag orders in schools. How can we best combat that? One way is to loudly protest to state and federal officials, to insist that black history is also American history and should not be erased. The other is to freely share black history in your own networks. The best weapons for that in our collective arsenal are quality books like Sarah’s Long Walk.

 

My journal article, “Strange Bedfellows: Nativism, Know-Nothings, African-Americans & School Desegregation in Antebellum Massachusetts,” and related materials can be accessed by clicking here: Know-Nothings

For more about the Know-Nothings, I recommend this book which I reviewed here: Review of: The Know-Nothing Party in Massachusetts: The Rise and Fall of a People’s Movement, by John R. Mulkern

 


Featured

Review of: Into the Heart of the World: A Journey to the Center of the Earth, by David Whitehouse

A familiar trope in the Looney Tunes cartoons of my boyhood had Elmer Fudd or some other zany character digging a hole with such vigor and determination that they emerged on the other side of the world in China, greeted by one or more of the stereotypically racist Asian animated figures of the day.  In the 1964 Road Runner vehicle “War and Pieces,” Wile E. Coyote goes it one better, riding a rocket clear through the earth—presumably passing through its center—until he appears on the other side dangling upside down, only to then encounter a Chinese Road Runner embellished with a braided pigtail and conical hat who bangs a gong with such force that he is driven back through the tunnel to end up right where he started from. In an added flourish, the Chinese Road Runner then peeps his head out of the hole and beep-beep’s faux Chinese characters that turn into letters that spell “The End.”

There were healthy doses of both hilarious comedy and uncomfortable caricature here, but what really stuck in a kid’s mind was the notion that you could somehow burrow through the earth with a shovel or some explosive force, which it turns out is just as impossible in 2022 as it was in 1964. But if you hypothetically wanted to give it a go, you would have to start at China’s actual antipode in this hemisphere, which lies in Chile or Argentina, and then tunnel some 7,918 miles, passing through the center of the earth, which lies at around 3,959 miles (6,371 km) beneath the surface.

So what about the center of the earth? Could we go there? After all, we did visit the moon, which at an average distance of 238,855 miles is far more distant. But of course what lies between the earth and its single satellite is mostly empty space, not the crust, mantle, outer core, and inner core of a rocky earth that is a blend of the solid and the molten. Okay, it’s a challenge, you grant, but how far have we actually made it in our effort to explore our inner planet? We must have made some headway, right? Well, it turns out that the answer is: not very much. A long, concerted effort at drilling begun by the then-Soviet Union in 1970 resulted in a measly milestone of a mere 7.6 miles (12.3 km) at the Kola Superdeep Borehole near the Russian border with Norway; efforts were abandoned in 1994 because of higher-than-expected temperatures of 356 °F (180 °C). Will new technologies take us deeper one day at this site or another? Undoubtedly. But it likely will not be in the near future. After all, there’s another 3,951.4 miles to go and conditions will only grow more perilous at greater depths.
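For the arithmetic-minded, the distances above hang together neatly, and a quick back-of-the-envelope sketch (the rounded figures are assumptions taken from the text, not precise geodesy) makes the scale of the problem vivid:

```python
# Rough figures as discussed above (rounded approximations)
EARTH_RADIUS_MILES = 3959   # surface to center
KOLA_DEPTH_MILES = 7.6      # deepest borehole ever drilled (Kola, 1970-1994)

remaining = EARTH_RADIUS_MILES - KOLA_DEPTH_MILES
fraction_reached = KOLA_DEPTH_MILES / EARTH_RADIUS_MILES

print(f"Miles still to go to the center: {remaining:.1f}")        # 3951.4
print(f"Fraction of the radius drilled: {fraction_reached:.2%}")  # about 0.19%

# And a hypothetical antipodal tunnel would be a full diameter:
print(f"Surface-to-surface tunnel: {2 * EARTH_RADIUS_MILES} miles")  # 7918
```

In other words, after a quarter century of drilling we have penetrated less than a fifth of one percent of the way down.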

But we can dream, can’t we? Indeed. And it was Jules Verne who did so most famously when he imagined just such a trip in his classic 1864 science fiction novel, Journey to the Center of the Earth. Astrophysicist and journalist David Whitehouse cleverly models his grand exploration of earth’s interior, Into the Heart of the World: A Journey to the Center of the Earth, on Verne’s tale. The result is a well-written, highly accessible, and occasionally exciting work of popular science that relies on geology rather than fiction to transport the reader beneath the earth’s crust, through the layers below, and eventually to what the latest research allows us to theoretically conceive as the inner core that comprises the planet’s center.

It is surprising just how few people today possess a basic understanding of the mechanics that power the forces of the earth.  But perhaps even more astonishing is how new—relatively—this science is. When I was a child watching Looney Tunes on our black-and-white television, my school textbooks admitted that although certain hypotheses had been suggested, the causes of sometimes catastrophic events such as earthquakes and volcanoes remained essentially unknown. All that changed effectively overnight—around the time my family got our first color TV—with the widespread acceptance by geologists of the theory of plate tectonics, constructed on the foundation of the much earlier hypothesis of German meteorologist and geophysicist Alfred Wegener, who in 1912 advanced the view of continents in motion known as “continental drift,” which was ridiculed in his time. By 1966, the long-dead Wegener was vindicated, and continental drift was upgraded to the more elegant model of plate tectonics that fully explained not only earthquakes and volcanoes, but mountain-building, seafloor spreading, and the whole host of other processes that power a dynamic earth.

Unlike some disciplines such as astrophysics, the basic concepts that make up earth science are hardly insurmountable to any individual with an average intelligence, so for those who have no idea how plate tectonics work and are curious enough to want to learn, Into the Heart of the World is a wonderful starting point. Whitehouse can be credited with articulating complicated processes in an easy to follow narrative that consistently holds the reader’s interest and remains fully comprehensible to the non-scientist. I came to this book with more than a passing familiarity with plate tectonics, but I nevertheless added to my knowledge base and enjoyed the way the author united disparate topics into this single theme of a journey to the earth’s inner core.

If I have a complaint, and as such it is only a quibble tied to my own preferences, Into the Heart of the World often devotes far more paragraphs to a history of “how we know what we know” than to a more detailed explanation of the science itself. The author is not to be faulted for what is integral to the structure of the work—after all, the cover does boast “A Remarkable Voyage of Scientific Discovery”—but it left me longing for more. Also, some readers may stumble over these backstories of people and events, eager instead to get to the fascinating essence of what drives the forces that shape our planet.

A running gag in more than one Bugs Bunny episode has the wacky rabbit inadvertently tunneling to the other side of the world, then admonishing himself that “I knew I shoulda taken that left turn at Albuquerque!” He doesn’t comment on what turn he took at his juncture with the center of the earth, but many kids who sat cross-legged in front of their TVs wondered what that trip might look like. For grownups who still wonder, I recommend Into the Heart of the World as your first stop.

 

[Note: this book has also been published under the alternate title, Journey to the Centre of the Earth: The Remarkable Voyage of Scientific Discovery into the Heart of Our World.]

[A link to the referenced 1964 Road Runner episode is here: War and Pieces]

Featured

Review of: Ancient Bones: Unearthing the Astonishing New Story of How We Became Human, by Madelaine Böhme, Rüdiger Braun, and Florian Breier

In southern Greece in 1944, German forces constructing a wartime bunker reportedly unearthed a single mandible that paleontologist Bruno von Freyberg incorrectly identified as an extinct Old-World monkey. A decades-later reexamination by another paleoanthropologist determined that the fossil instead belonged to a 7.2-million-year-old extinct species of great ape which in 1972 was dubbed Graecopithecus freybergi and came to be more popularly known as “El Graeco.” A second fossil, a tooth, was discovered in Bulgaria in 2012. Then, in 2017, an international team led by German paleontologist Madelaine Böhme conducted an analysis that came to the astonishing conclusion that El Graeco in fact represents the oldest hominin—our oldest direct human ancestor! At the same time, Böhme challenged the scientific consensus that all humans are “Out-of-Africa” with her competing “North Side Story,” which suggests Mediterranean ape ancestry instead. Both of these notions remain widely disputed in the paleontological community.

In Ancient Bones: Unearthing the Astonishing New Story of How We Became Human, Böhme—with coauthors Rüdiger Braun and Florian Breier—advances this North Side Story with a vengeance, scorning the naysayers and intimating the presence of some wider conspiracy in the paleontological community to suppress findings that dispute the status quo. Böhme brings other ammunition to the table, including the so-called “Trachilos footprints,” the 5.7-million-year-old potentially hominin footprints found on Crete, which—if fully substantiated—would make these more than 2.5 million years earlier than the footprints of Australopithecus afarensis found in Tanzania. Perhaps these were made by El Graeco?! And then there’s Böhme’s own discovery of the 11.6-million-year-old Danuvius guggenmosi, an extinct species of great ape she uncovered near the town of Pforzen in southern Germany, which according to the author revolutionizes the origins of bipedalism. Throughout, she positions herself as the lonely voice in the wilderness shouting truth to power.

I lack the scientific credentials to quarrel with Böhme’s assertions, but I have studied paleoanthropology as a layman long enough to both follow her arguments and to understand why accepted authorities would be reluctant to embrace her somewhat outrageous claims that are after all based on rather thin evidence. But for the uninitiated, some background to this discussion is in order:

While human evolution is in itself not controversial (for scientists, at least; Christian evangelicals are another story), the theoretical process of how we ended up as Homo sapiens sapiens, the only living members of genus Homo, based upon both molecular biology and fossil evidence, has long been open to spirited debate in the field, especially because new fossil finds occur with some frequency and because the somewhat secretive rules of peer-reviewed scholarship that lead to publication in scientific journals often delay what should otherwise be breaking news.

Paleontologists have long been known to disagree vociferously with one another, sometimes spawning feuds that erupt in the public arena, such as the famous one in the 1970s between the esteemed, pedigreed Richard Leakey and Donald Johanson over Johanson’s discovery and identification of the 3.2-million-year-old Australopithecine “Lucy,” which was eventually accepted by the scientific community over Leakey’s objections. At one time, it was said that all hominin fossils could be placed on one single, large table. Now there are far too many for that: Homo, Australopithecine, and many that defy simple categorization. Also, at one time human evolution was envisioned as a direct progression from the primitive to the sophisticated, but today it is accepted that rather than a “tree,” our own evolution can best be imagined as a bush, with many relatives—and many of those relatives not on a direct path to the humans that walk the earth today.

Another controversy has been between those who favored an “Out-of-Africa” origin for humanity, and those who advanced what used to be called the multi-regional hypothesis. Since all living Homo sapiens sapiens are very, very closely related to each other—even more closely related than chimpanzees that live in different parts of Africa today—multiregionalism smacked a bit of the illogical and has largely fallen out of favor. The scholarly consensus that Böhme takes head on is that humans can clearly trace their ancestry back to Africa. Another point that should be made is that there are loud voices of white supremacist “race science” proponents outside of the scientific community who without any substantiation vehemently oppose the “Out-of-Africa” origin theory for racist political purposes, as underscored in Angela Saini’s brilliant recent book, Superior: The Return of Race Science. This is not to suggest that Böhme is racist or that her motives should be suspect—there is zero evidence that is the case—but the reader must be aware of the greater “noise” that circulates around this topic.

My most pointed criticism of Ancient Bones is that it is highly disorganized, meandering between science and polemic and unexpected later chapters that read like a college textbook on human evolution.  It is often hard to know what to make of it.  And it’s difficult for me to accept that there is a larger conspiracy in the paleoanthropological community to preserve “Out-of-Africa” against better evidence that few beyond Böhme and her allies have turned up. The author also makes a great deal of identifying singular features in both El Graeco and Danuvius that she insists must establish that her hypotheses are the only correct ones, but as those who are familiar with the work of noted paleoanthropologists John Hawks and Lee Berger are well aware, mosaics—primitive and more advanced characteristics occurring in the same hominin—are far more common than once suspected and thus should give pause to those tempted to conclusions that actual evidence does not unambiguously support.

As noted earlier, I am not a paleontologist or even a scientist, and thus I am far from qualified to peer-review Böhme’s arguments or pronounce judgment on her work. But as a layman with some familiarity with the current scholarship, I remain unconvinced. She also left me uncomfortable with what appears to be a lack of respect for rival ideas and for those who fail to find concordance with her conclusions. More significantly, her book is poorly edited and too often lacks focus. Still, for those like myself who want to stay current with the latest twists-and-turns in the ever-developing story of human evolution, at least some portions of Ancient Bones might be worth a read.

 

[Note: I read an Advance Reader’s Copy (ARC) of this book obtained through an early reviewer’s group.]

[Note: I reviewed Superior: The Return of Race Science, by Angela Saini, here: Review of: Superior: The Return of Race Science, by Angela Saini]

[Note: I reviewed Almost Human: The Astonishing Tale of Homo naledi and the Discovery that Changed Our Human Story,” by Lee Berger and John Hawks here: Review of: Almost Human: The Astonishing Tale of Homo naledi and the Discovery that Changed Our Human Story, by Lee Berger and John Hawks]

 

Featured

Review of: George Washington: The Political Rise of America’s Founding Father, by David O. Stewart

Is another biography of George Washington really necessary? A Google search reveals some nine hundred already exist, not to mention more than five thousand journal articles that chronicle some portion of his life. But the answer turns out to be a resounding yes, and David O. Stewart makes that case magnificently with his latest work, George Washington: The Political Rise of America’s Founding Father, an extremely well-written, insightful, and surprisingly innovative contribution to the historiography.

Many years ago, I recall reading the classic study, Washington: The Indispensable Man, by James Thomas Flexner, which looks beyond his achievements to put emphasis on his most extraordinary contribution, defined not by what he did but what he deliberately did not do: seize power and rule as tyrant. This, of course, is no little thing, as seen in the pages of history from Caesar to Napoleon. When told that Washington would resign his commission and surrender power to a civilian government, King George III—who no doubt would have had him hanged (or worse) had the war gone differently—famously declared that “If he does that, he will be the greatest man in the world.” Washington demonstrated that greatness again when he voluntarily—you might say eagerly—stepped down after his tenure as President of the United States to retire to private life. Indispensable he was: it is difficult to imagine the course of the American experiment had another served in his place in either of those pivotal roles.

But there is more to Washington than that, and some of it is less than admirable. Notably, there was Washington’s heroic fumble as a young Virginia officer leading colonial forces to warn away the French at what turned into the Battle of Jumonville Glen, an episode that helped to spark the French & Indian War. Brash, headstrong, arrogant, thin-skinned, and ever given to an unshakable certitude that his judgment was the sole correct perspective in every matter, the young Washington distinguished himself for his courage and his integrity while at the same time routinely clashing with authority figures, including former mentors whom he frequently left exasperated by his demands for recognition.

Biographers tend to visit this period of his life and then fast-forward two decades ahead to the moment when the esteemed if austere middle-aged Washington showed up to the Continental Congress resplendent in his military uniform, the near-unanimous choice to lead the Revolutionary Army in the struggle against Britain. But how did he get here? In most studies, it is not clear. But this is where Stewart shines!  The author, whose background is the law rather than academia—he was once a constitutional lawyer who clerked for Supreme Court Justice Lewis Powell, Jr.—has proved himself a brilliant historian in several fine works, including his groundbreaking reassessment of a key episode of the early post-Civil War era, Impeached: The Trial of President Andrew Johnson and the Fight for Lincoln’s Legacy. And in Madison’s Gift: Five Partnerships that Built America, Stewart’s careful research, analytical skills, and intuitive approach successfully resurrected portions of James Madison’s elusive personality that had been otherwise mostly lost to history.

This talent is on display here, as well, as Stewart adeptly examines and interprets Washington’s evolution from Jumonville Glen to Valley Forge. Washington’s own personality is something of a conundrum for biographers, as he can seem simultaneously selfless and self-centered. The young Washington so frequently infuriated and alienated peers and superiors alike that it may strike us as truly remarkable that this is the same individual who could later harness the talents and loyalty of both rival generals during the war and the outsize egos of fellow Founders as the new Republic took shape. Stewart demonstrates that Washington was the author of his own success in this arena, quietly in touch with his strengths and weaknesses while earning respect and cultivating goodwill over the years as he established himself as a key figure in the Commonwealth. Washington was not in this regard a changed man as much as a more mature man who taught himself to modify his demeanor and his behavior in the company of others for mutual advantage. This, too, is no small thing.

The subtitle of this book—The Political Rise of America’s Founding Father—is thus hardly accidental, the latest contribution to a rapidly expanding genre focused upon politics and power, showcased in such works as Jon Meacham’s Thomas Jefferson: The Art of Power, and Robert Dallek’s Franklin D. Roosevelt: A Political Life. Collectively, these studies serve to underscore that politics is ever at the heart of leadership, as well as that great leaders are not born fully formed, but rather evolve and emerge.  George Washington perhaps personifies the most salient example of this phenomenon.

The elephant in the room of any examination of Washington—or, for that matter, the other Virginia Founders who championed liberty and equality—is slavery. Like Jefferson and Madison and a host of others, Washington on various occasions decried the institution of enslaving human beings—while he himself held hundreds as chattel property. Washington is often credited with freeing, in his will, the enslaved people he held direct title to, but that hardly absolves him of the sin of a lifetime of buying, selling, and maintaining an unpaid labor force for nothing less than his own personal gain, especially since he was aware of the moral blemish in doing so. Today’s apologists often caution that it is unfair to judge those who walked the earth in the late eighteenth century by our own contemporary standards, but the reality is that these were Enlightenment-era men who in their own words declared slavery abhorrent while—like Jefferson with his famous “wolf by the ear” cop-out—making excuses to justify participating in and perpetuating a cruel inhumanity that served their own economic self-interests. As biographer, Stewart’s strategy for this dimension of Washington’s life is to touch on it only lightly in the course of the narrative, while devoting the second-to-last chapter to a frank and balanced discussion of the ambivalence that governed the thoughts and actions of the master of Mount Vernon. It is neither whitewash nor condemnation.

Stewart’s study is by no means hagiography, but the author clearly admires his subject. Washington gets a pass for his shortcomings at Jumonville, and he is hardly held to strict account for his role as an enslaver. Still, the result of Stewart’s research, analysis, and approach is the most readable and best single-volume account of Washington’s life to date.  This is a significant contribution to the scholarship that I suspect will long be deemed required reading.

 

I reviewed other works by David O. Stewart here:

Review of: Impeached: The Trial of President Andrew Johnson and the Fight for Lincoln’s Legacy, by David O. Stewart

Review of: Madison’s Gift: Five Partnerships that Built America, by David O. Stewart

My review of Dallek’s Franklin D. Roosevelt, referenced above, is here: Review of: Franklin D. Roosevelt: A Political Life, by Robert Dallek

Featured

Review of: American Republics: A Continental History of the United States, 1783-1850, by Alan Taylor

 

Conspicuous in their absence from my 1960s elementary education were African Americans and Native Americans. Enslaved blacks made an appearance in my textbooks, of course, but slavery as an institution was sketched out as little more than a vague and largely benign product of the times. Then there was a Civil War fought over white men’s sectional grievances; there were dates, and battles, and generals, winners and losers. There was Lincoln’s Emancipation Proclamation, then constitutional amendments that ended slavery and guaranteed equality. There was some bitterness, but soon there was reconciliation, and we went on to finish building the transcontinental railroad. There were the obligatory walk-on cameos by Frederick Douglass and Harriet Tubman, and later George Washington Carver, who had something to do with peanuts. For Native Americans, the record was even worse. Our texts featured vignettes of Squanto, Pocahontas, Sacajawea, and Sitting Bull. The millions of Amerindians who once populated the country from coast to coast had been effectively erased.

Alan Taylor, Pulitzer Prize winning author and arguably the foremost living historian of early America, has devoted a lifetime to redressing those twin wrongs while restoring the nuanced complexity of our past that was utterly excised from the standard celebration of our national heritage that for so long dominated our historiography. In the process, in the eleven books he has published to date, he has also dramatically shifted the perspective and widened the lens from the familiar approach that more rigidly defines the boundaries of the geography and the established chapters in the history of the United States—a stunning collective achievement that reveals key peoples, critical elements, and greater themes often obscured by the traditional methodology.

I first encountered Taylor some years ago when I read his magnificent American Colonies: The Settling of North America, which restores the long overlooked multicultural and multinational participants who peopled the landscape, while at the same time enlarging the geographic scope beyond the English colonies that later comprised the United States to encompass the rest of the continent that was destined to become Canada and Mexico, as well as highlighting vital links to the West Indies. Later, in American Revolutions, Taylor identifies a series of social, economic and political revolutions of outsize significance over more than five decades that often go unnoticed in the shadows of the War of Independence, which receives all the attention.

Still, as Taylor underscores, it was the outcome of the latter struggle—in which white, former English colonists established a new nation—that was to have the most lasting and dire consequences for all those in their orbit who were not white, former English colonists, most especially blacks and Native Americans.  The defeated British had previously drawn boundaries that served as a brake on westward expansion and left more of that vast territory as a home to the indigenous. That brake was now off. Some decades later, Britain was to abolish slavery throughout its empire, which no longer included its former colonies. Thus the legacy of the American Revolution was the tragic irony that a Republic established to champion liberty and equality for white men would ultimately be constructed upon the backs of blacks doomed to chattel slavery, as well as the banishment or extermination of Native Americans. This theme dominates much of Taylor’s work.

In his latest book, American Republics: A Continental History of the United States, 1783-1850, which roughly spans the period from the Peace of Paris to California statehood, Taylor further explores this grim theme in a brilliant analysis of how the principles of white supremacy—present at the creation—impacted the subsequent course of United States history. Now this is, of course, uncomfortable stuff for many Americans, who might cringe at that very notion amid cries of revisionism that insist contemporary models and morality are being appropriated and unfairly leveraged against the past. But terminology is less important than outcomes: non-whites were not only foreclosed from participating as citizens in the new Republic, but also from enjoying the life, liberty and pursuit of happiness allegedly granted to their white counterparts. At the same time, southern states where slavery thrived wielded outsize political power that frequently plotted the nation’s destiny. As in his other works, Taylor is a master of identifying unintended consequences, and there are more than a few to go around in the insightful, deeply analytical, and well-written narrative that follows.

These days, it is almost de rigueur for historians to decry the failure of the Founders to resolve the contradictions of permitting human chattel slavery to coexist within what was declared to be a Republic based upon freedom and equality. In almost the same breath, however, many in the field still champion the spirit of compromise that has marked the nation’s history. But if there is an original sin to underscore, it is less that slavery was allowed to endure than that it was codified within the very text of the Constitution of the United States by means of the infamous compromise that was the “three-fifths rule,” which for the purposes of representation permitted each state to count enslaved African Americans as three-fifths of a person, thus inflating the political power of each state based upon its enslaved population. This might have benefited all states equally, but since slavery was to rapidly decline and all but disappear above what would be drawn as the Mason-Dixon Line, all the advantage flowed to the south, where eventually some states saw their enslaved populations outnumber their free white citizenry.

This was to prove dramatic, since the slave south claimed a disproportionate share of national political power when it came to advancing legislation or, for that matter, electing a president! Taylor notes that the disputed election of 1824 that went for decision to the House of Representatives would have been far less disputed without the three-fifths clause, since in that case John Quincy Adams would have led Andrew Jackson in the Electoral College 83 to 77 votes, instead of putting Jackson in the lead 99 to 84. [p253] When Jackson prevailed in the next election, it was the south that cemented his victory.

The scholarly consensus has established the centrality of slavery to the Civil War, but Taylor goes further, arguing that its significance extended long before secession: slavery was ever the central issue in American history, representing wealth, power, and political advantage. The revolutionary generation decried slavery on paper—slave masters Washington, Jefferson and Madison all pronounced it one form of abomination or another—but nevertheless failed to act against it, or even part with their own human property. Jefferson famously declared himself helpless, saying of the peculiar institution that “We have the wolf by the ear, and we can neither hold him, nor safely let him go,” but as slavery grew less profitable for Virginia in the upper south, Jefferson and his counterparts turned to breeding the enslaved for sale to the lower south, where the demand was great. Taylor points out that “In 1803 a male field hand sold for about $600 in South Carolina compared to $400 in Virginia: a $200 difference enticing to Virginia sellers and Carolina slave traders … Between 1790 and 1860, in one of the largest forced migrations in world history, slave traders and migrants herded over a million slaves from Virginia and Maryland to expand southern society …” [p159]  Data and statistics may obscure it, but these were after all living, breathing, sentient human beings who were frequently subjected to great brutalities while enriching those who held them as chattel property.

Jefferson and others of his ilk imagined that slavery would somehow fall out of favor at some distant date, but optimistically kicking the can down the road to future generations proved a fraught strategy: nothing but civil war could ever have ended it. As Taylor notes:

Contrary to the wishful thinking of many Patriots, slavery did not wither away after the American Revolution. Instead, it became more profitable and entrenched as the South expanded westward. From 698,600 in 1790, the number of enslaved people soared to nearly 4 million by 1860, when they comprised a third of the South’s population … In 1860, the monetary value of enslaved people exceeded that of all the nation’s banks, factories, and railroads combined. Masters would never part with so much valuable human property without a fight. [p196]

As bad as it was for enslaved blacks, in the end Native Americans fared far worse. It has been estimated that up to 90% of Amerindians died as a result of exposure to foreign pathogens within a century of the Columbian Exchange. The survivors faced a grim future competing for land and resources with rapacious settlers who were better armed and better organized. It may very well be that conflict between colonists and the indigenous was inevitable, but as Taylor emphasizes, the trajectory of the relationship became especially disastrous for the latter after the British retreat essentially removed all constraints on territorial expansion.

The stated goal of the American government was peaceful coexistence that emphasized native assimilation to “white civilization.” The Cherokees who once inhabited present-day Georgia actually attempted that, transitioning from hunting and gathering to agriculture, living in wooden houses, learning English, creating a written language. Many practiced Christianity. Some of the wealthiest worked plantations with enslaved human property. It was all for naught. With the discovery of gold in the vicinity, the Cherokees were stripped of their lands in the Indian Removal Act of 1830, championed by President Andrew Jackson, and marched at bayonet point over several months some 1200 miles to the far west. Thousands died in what has been dubbed the “Trail of Tears,” certainly one of the most shameful episodes of United States history. Sadly, rather than an exception, the fate of the Cherokees proved to be indicative of what lay in store for the rest of the indigenous as the new nation grew and the hunger for land exploded.

That hunger, of course, also fueled the Mexican War, launched on a pretext in yet another shameful episode that resulted in an enormous land grab that saw a weaker neighbor forced to cede one-third of its former domains. It was the determination of southern states to transplant plantation-based slavery to these new territories—and the fierce resistance to that by “Free-Soilers” in Lincoln’s Republican Party—that lit the fuse of secession and the bloody Civil War that it spawned.

If there are faults to this fine book, one is that there is simply too much material to capably cover in less than four hundred pages, despite the talented pen and brilliant analytical skills of Alan Taylor. The author devoted an entire volume—The Civil War of 1812—to the events surrounding the War of 1812, a conflict also central to a subsequent effort, The Internal Enemy. This kind of emphasis on a particular event or specific theme is typical of Taylor’s work. In American Republics, he strays from that technique to attempt the kind of grand narrative survey seen in the work of other chroniclers of the Republic, powering through decades of significance at sometimes dizzying speeds, no doubt a delight for some readers yet disappointing to others long accustomed to the author’s detailed focus on the more narrowly defined.

Characteristic of his remarkable perspicacity, Taylor identifies what other historians overlook, arguing in American Republics that the War of 1812 was only the most well-known struggle in a consequential if neglected era he calls the “Wars of the 1810s” that also saw the British retreat northward, the Spanish forsake Florida, and the dispossession of Native Americans accelerate. [p148] That could be a volume in itself.  Likewise, American culture and politics in the twelve years that separate Madison and Jackson is worthy of book-length treatment. There is so much more.

Another issue is balance—or a lack thereof. If the history of my childhood was written solely in the triumphs of white men, such accomplishments are wholly absent in American Republics, which reveals the long-suppressed saga of the once invisible victims of white supremacy.  It’s a true story, an important story—but it’s not the only story. Surely there are some achievements of the Republic worthy of recognition here?

As the culture wars heat to volcanic temperatures, such omissions only add tinder to the flames of those dedicated to the whitewash that promotes heritage over history. Already the right has conjured an imaginary bugaboo in Critical Race Theory (CRT), with legislation in place or pending in a string of states that proscribes the teaching of CRT. These laws have nothing to do with Critical Race Theory, of course, but rather give cover to the dog whistles of those who would intimidate educators so they cannot teach the truth about slavery, about Reconstruction, about Civil Rights. These laws put grade-school teachers at risk of termination for incorporating factual elements of our past into their curriculum, effectively banning from the classroom the content of much of American Republics. This is very serious stuff: Alan Taylor is a distinguished professor at the University of Virginia, a state that saw the governor-elect recently ride to an unlikely victory astride a sort of anti-CRT Trojan Horse. Historians cannot afford any unforced errors in a game that scholars seem to be ceding to dogmatists. If the current trend continues, we may very well witness reprints of my childhood textbooks, with blacks and the indigenous once more consigned to the periphery.

I have read seven of Taylor’s books to date. Like the others, his most recent work represents a critical achievement for historical scholarship, as well as a powerful antidote to the propaganda that formerly tarnished studies of the American Experience. The United States was and remains a nation unique in the family of nations, replete with a fascinating history that is at once complicated, messy, and controversial. American history, at its most basic, is simply a story of how we got from then to now: it can only be properly understood and appreciated in the context of its entirety, warts and all. Anything less is a disservice to the discipline as well as to the audience. To that end, American Republics is required reading.

 

Note: I have reviewed other works by Alan Taylor here:

Review of: American Revolutions: A Continental History, 1750-1804, by Alan Taylor

Review of: The Internal Enemy: Slavery and the War in Virginia 1772-1832, by Alan Taylor

Review of: The Civil War of 1812: American Citizens, British Subjects, Irish Rebels, & Indian Allies by Alan Taylor

Review of: Thomas Jefferson’s Education, by Alan Taylor

 

Featured

Review of: Superior: The Return of Race Science, by Angela Saini

In what has to be the most shameful decision rendered in the long and otherwise distinguished career of Justice Oliver Wendell Holmes, in 1927 the Supreme Court ruled in Buck v. Bell to uphold a compulsory sterilization law in Virginia. The case centered on eighteen-year-old Carrie Buck, confined to the Virginia State Colony for Epileptics and Feebleminded, and Holmes wrote the majority opinion in the near-unanimous decision, famously concluding that “three generations of imbeciles are enough.”

Similar laws prevailed in some thirty-two states, resulting in the forced sterilization of more than 60,000 Americans. Had Carrie lived in Massachusetts, she would have avoided this fate, but she likely would have been condemned to the Belchertown State School for the Feeble-Minded, which—like similar institutions of this era—had its foundation in the eugenics, racism, and Social Darwinism of the time, which argued that “defectives” of low moral character threatened the very health of the population by breeding others of their kind, raising fears that a kind of contagious degeneracy would permanently damage the otherwise worthy inhabitants of the nation. I have written elsewhere of the horror show of inhumane conditions and patient abuse at the Belchertown State School, which did not finally close its doors until 1992.

Sterilization was only one chilling byproduct of “eugenics,” a term coined by Francis Galton, a cousin of Charles Darwin whose misunderstanding of the principles of Darwinian evolution led to his championing of scientific racism. Eugenics was also the driving force behind the 1924 immigration law that dramatically reduced the number of Jews, Italians, and East Europeans admitted to the United States. White supremacy did not only consign blacks and other people of color to the ranks of the “less developed” races, but specifically exalted those of northern and central European origin as the best and the brightest. This was all pseudoscience of course, but it was quite widely accepted and “respectable” in its day.

Then, along came Hitler and the Holocaust, and more than six million Jews and other “undesirables” were systematically murdered in the name of racial purity. Eugenics was respectable no more. Most of us born in the decades that followed the almost unfathomable horror of that Nazi-sponsored genocide may have assumed that race science was finally discredited and had disappeared forever, relegated to a blood-spattered dustbin of history. But, as Angela Saini reveals in her well-written, deeply researched, and sometimes startling book, Superior: The Return of Race Science, scientific racism not only never really went extinct but has returned in our day with a kind of vengeance, fueling calls to action on the right for anti-immigration legislation.

Saini, a science journalist, broadcaster, and author with a pair of master’s degrees, may be uniquely qualified to tell this story. Born in London of Indian parents, in a world seemingly obsessed with racial classification she relates how her background and brown complexion defy categorization; some may consider her Indian, or Asian—or even black. But of course in reality she could not be more British, even if for many her skin color sets her apart. The UK’s legacy of empire and Kipling’s “white man’s burden” still loom large.

But Superior is not a screed, and it is not about Saini, but rather about how mistaken notions of race and the pseudoscience of scientific racism have not only persisted but are rapidly gaining ground with a new audience in a new era. To achieve this, the author conducted comprehensive research into the origins of eugenics, but even more significantly identified how the ideology of race science that fueled National Socialism and begat Auschwitz and Birkenau quietly if no less adamantly endured post-Nuremberg, cloaked in the less fiery rhetoric of pseudoscientific journals grasping at the periphery of legitimacy. Moreover, a modern revolution in paleogenetics and DNA research that should firmly refute such dangerous musings has instead been co-opted by a new generation of acolytes of scientific racism, serving both to undergird and to lend a false sense of authenticity to dangerous political tendencies on the right that long simmered and have now burst forth in the public arena.

Whatever some may believe, science has long established that race, for all intents and purposes, is a myth, a social construct that conveys no meaningful information about any given population. Regardless of superficial characteristics, all living humans—designated Homo sapiens sapiens—are biologically the same, and by every other critical metric are essentially members of the same closely related population. In fact, various groups of chimpanzees in Central Africa demonstrate greater genetic diversity than all humans across the globe today. Modern humans likely evolved from a common ancestor in Africa, and thus all of humanity is out of Africa. It is just as likely that all humans once had dark skin, and that lighter skin, what we would term “white” or Caucasian, developed later as populations moved north and melanin—a pigment located in the outer skin layer called the epidermis—was reduced as an adaptation to relatively weak solar radiation in far northern latitudes. The latest scholarship reveals that Europeans only developed their fairer complexion as recently as 8,500 years ago!

The deepest and most glaring flaw in the race science foundational to Nazism is its premise that purity equals strength: in reality, it is a lack of genetic diversity that often results in a less healthy population. This is apparent not only in the hemophilia that plagued the closely related royal houses of the European monarchies, but on a more macro scale in genetic conditions more common to certain ethnic groups, such as sickle cell disease among those of African heritage, and Tay-Sachs disease among Ashkenazi Jews.

Counterintuitively, modern proponents of race science cherry-pick DNA data in an attempt to promote superiority for whites that concomitantly assigns a lesser status to people of color, and these concepts are then repackaged to champion policies that limit immigration from certain parts of the world. Once anathema for all but those on the very fringes of the political spectrum, this dangerous rebirth of genetic pseudoscience is now given voice on right-wing media. Worse perhaps, the tendency of mainstream media to promote fairness in what has come to be dubbed “bothsiderism” sometimes offers an undeserved platform to those spinning racist dogma in the guise of scientific studies. Of course, social media has now transcended television as a messaging vehicle, and it is far better suited to spreading misinformation, especially in an era given to a mistrust of expertise, thus granting the unsupported a seat at the table on the same platform with credible fact-based reality, urging the audience to do their own research and come to their own conclusions.

The United States was collectively shaken in 2017 when white supremacists wielding tiki torches marched at Charlottesville chanting “Jews will not replace us,” and shaken once more when then-president Donald Trump subsequently asserted that there “were very fine people, on both sides.” But there was far less outrage the following year when Trump both sounded a dog whistle and startled lawmakers as he wondered aloud why we should allow in more immigrants from Haiti and “shithole countries” in Africa instead of from places like Norway. (Unanswered, of course, is why a person would want to abandon the arguably higher quality of life in Norway to come to the U.S. …) But the volume on such dog whistles has been turned up alarmingly as of late by popular Fox News host Tucker Carlson, who in between fear-mongering messaging that casts the Black Lives Matter (BLM) movement and Critical Race Theory (CRT) as Marxist conspiracies that threaten the American way of life, openly advances the paranoid alt-right terror of the “Great Replacement” theory, a staple of the white supremacist canon, declaring the Biden administration actively engaged in trying “to change the racial mix of the country … to reduce the political power of people whose ancestors lived here, and dramatically increase the proportion of Americans newly-arrived from the third world.” Translation: people of color are trying to supplant white people. Carlson doesn’t cite race science, but he did recently allow comments by his guest, the racist extremist social scientist Charles Murray, to go unchallenged: that “the cognitive demands” of some occupations mean “a whole lot of more white people qualify than Black people.” Superior was published in 2019 but is chillingly prescient about the dangerous trajectory of both racism and race science on the right.

There is a lot of material between the covers of this book, but because Saini writes so well and speaks to the more arcane matters in language comprehensible to a wide audience, it is not a difficult read. Throughout, the research is impeccable and the analysis spot-on. Still, there are moments when Saini strays a bit, at one point seeming to speculate whether we should hold back on paleogenetic research lest the data be further perverted by proponents of scientific racism. That is, of course, the wrong approach: the best weapon against pseudoscience remains science itself. Still, the warning bells she sounds here must be heeded. The twin threats of racism and the rebirth of race science in the mainstream are indeed clear and present dangers that must be confronted and combated at every turn. The author’s message is clear and perhaps more relevant now than at any time since the 1930s, another era when hate and racism informed an angry brand of populism that claimed legitimacy through race science. We all know how that ended.

I have written of the Belchertown State School here:

https://stanprager.com/projects/belchertown-state-school-historic-preservation-proposal/

I reviewed Saini’s latest book here: Review of: The Patriarchs: The Origins of Inequality, by Angela Saini

 

 

Featured

Review of: Franklin D. Roosevelt: A Political Life, by Robert Dallek

When identifying the “greatest presidents,” historians consistently rank Washington and Lincoln in the top two slots; the third spot almost always goes to Franklin Delano Roosevelt, who served as chief executive longer than any before or since and shepherded the nation through the twin existential crises of economic depression and world war. FDR left an indelible legacy upon America that echoes loudly both back to his day and forward to our present and future. Lionized by the left today—especially by its progressive wing—far more than he was in his own time, he remains vilified by the right, then and now. Today’s right, which basks in the extreme and often eschews common sense, conflating Social Security with socialism, frequently casts him as villain. Yet his memory, be it applauded or heckled, is nevertheless of an iconic figure who forever changed the course of American history, for good or ill.

FDR has been widely chronicled, by such luminaries as James MacGregor Burns, William Leuchtenburg, Doris Kearns Goodwin, Jay Winik, Geoffrey C. Ward, and a host of others, including presidential biographer Robert Dallek, winner of the Bancroft Prize for Franklin D. Roosevelt and American Foreign Policy, 1932–1945. Dallek now revisits his subject with Franklin D. Roosevelt: A Political Life, the latest contribution to a rapidly expanding genre focused upon politics and power, showcased in such works as Jon Meacham’s Thomas Jefferson: The Art of Power, and most recently, in George Washington: The Political Rise of America’s Founding Father, by David O. Stewart.

A rough sketch of FDR’s life is well known. Born to wealth and sheltered by privilege, at school he had difficulty forming friendships with peers. He practiced law for a time, but his passion turned to politics, which seemed ideally suited to the tall, handsome, and gregarious Franklin. To this end, he modeled himself on his famous cousin, President Theodore Roosevelt. He married T.R.’s favorite niece, Eleanor, and like Theodore eventually became Assistant Secretary of the Navy. Unsuccessful as a vice-presidential candidate in the 1920 election, his political future still seemed assured until he was struck down by polio. His legs were paralyzed, but not his ambition. He never walked again, but equipped with heavy leg braces and impressive upper-body strength, he perfected a swinging gait, leaning on an aide, that propelled him forward and served, at least for brief periods, as a reasonable facsimile of walking. He made a remarkable political comeback as governor of New York in 1928, and won national attention for his public relief efforts, which proved essential in his even more remarkable bid to win the White House four years later. Reimagining government to cope with the consequences of economic devastation never before seen in the United States, then reimagining it again to construct a vast war machine to counter Hitler and Tojo, he bucked tradition to win reelection three times, then stunned the nation with his death by cerebral hemorrhage only a few months into the fourth term of one of the most consequential presidencies in American history.

That rough sketch translates into mountains of material for any biographer, so narrowing the lens to FDR’s “political life” proves to be a sound strategy, one that underscores the route to his many achievements as well as the sometimes-shameful ways he juggled competing demands and realities. Among historians, even his most ardent admirers tend to question his judgment in the run-up to the disaster at Pearl Harbor, as well as his moral compass in exiling Japanese Americans to confinement camps, but as Dallek reveals again and again in this finely wrought study, these may simply be the most familiar instances of his shortcomings. If FDR is often recalled as smart and heroic—as he indeed deserves to be—there are yet plenty of salient examples where he proves himself to be neither. Eleanor Roosevelt once famously quipped that John F. Kennedy should show a little less profile and a little more courage, but there were certainly times this advice would have been just as applicable to her husband. What is clear is that while he was genuinely a compassionate man capable of great empathy, FDR was at the same time at his very core driven by an almost limitless ambition that, reinforced by a conviction that he was always in the right, spawned an ever-evolving strategy to prevail that sometimes blurred the boundaries of the greater good he sought to impose. Shrewd, disciplined, and gifted with finely tuned political instincts, he knew how to balance demands, ideals, and realities to shape outcomes favorable to his goals. He was a man who knew how to wield power to deliver his vision of America, and the truth is, he could be quite ruthless in that pursuit. To his credit, much like Lincoln and Washington before him, his lasting achievements have tended to paper over flaws that might otherwise cling with greater prominence to his legacy.

I read portions of this volume during the 2020 election cycle and its aftermath, especially relevant given that the new President, Joe Biden—born just days after the Battle of Guadalcanal during FDR’s third term—had an oversize portrait of Roosevelt prominently hung in the Oval Office across from the Resolute Desk. But even more significantly, Biden the candidate was pilloried by progressives in the run-up to November as far too centrist, as a man who had abandoned the vision of Franklin Roosevelt. But if the left correctly recalls FDR as the most liberal president in American history, it also badly misremembers Roosevelt the man, who in his day very deftly navigated the politics of the center lane.

Dallek brilliantly restores for us the authentic FDR of his own era, unclouded by the mists of time that have begotten both a greater belligerence from the right and a distorted worship from the left. This context is critical: when FDR first won election in 1932, the nation was reeling from its greatest crisis since the Civil War, the economy in a tailspin and his predecessor, Herbert Hoover, unwilling to use the power of the federal government to intervene while nearly a quarter of the nation’s workforce was unemployed, at a time when a social safety net was nearly nonexistent. People literally starved to death in the United States of America! This provoked radical tugs to the extreme left and extreme right. There was loud speculation that the Republic would not survive, with cries by some for Soviet-style communism and by others for a strongman akin to those spearheading an emerging fascism in Europe. It was into this arena that FDR was thrust. Beyond fringe radical calls for revolution or reaction, and despite his party’s congressional majority, perhaps Roosevelt’s greatest challenge after stabilizing the state was, like Lincoln’s before him, contending with the forces to the left and right in his own party. This, as Dallek details in a well-written, fast-moving narrative, was to be characteristic of much of his long tenure.

In spite of an alphabet soup of New Deal programs that sought to rescue both the sagging economy and the struggling citizen, for the liberal wing of the Democratic Party FDR never went far enough. For conservative Democrats, on the other hand, the power of the state was growing too large and there was far too much interference with market forces. But, as Dallek stresses repeatedly, Roosevelt struggled the most with forces on the left, especially populist demagogues like Huey Long and the antisemitic radio host Father Coughlin. And with the outbreak of World War II, the left was unforgiving when FDR seemed to abandon his commitment to the New Deal to focus on combating Germany and Japan. Today’s democratic socialists may want to claim him as their own, but FDR was no socialist, seeking to reform capitalism rather than replace it, earning Coughlin’s eventual enmity for being too friendly with bankers. At the same time, Republicans obstructed the president at every turn, calling him a would-be dictator. And most wealthy Americans branded him a traitor to his class. There was also an increasingly hostile Supreme Court, which was to ride roughshod over some of FDR’s most cherished programs, including the National Industrial Recovery Act and its National Recovery Administration (NRA), just one of several struck down as unconstitutional. We tend to recall the successes, such as the Social Security Act, that indelibly define FDR’s legacy, yet he endured many losses as well. But while Roosevelt did not win every battle, as Dallek details, only a leader with FDR’s political acumen could have succeeded so often while tackling so much amid a rising chorus of opposition on all sides during such a crisis-driven presidency. If the left in America tends to fail so frequently, it could be because it often fails to grasp the politics of the possible. In this realm, there has perhaps been no greater genius in the White House than Franklin Delano Roosevelt.

Fault can be found in Dallek’s book. For one thing, in the body of the narrative he too often name-drops references to other notable Roosevelt chroniclers such as Doris Kearns Goodwin and William Leuchtenburg, which feels awkward given that the author is not some unknown seeking to establish credibility, but Robert Dallek himself, distinguished presidential biographer! And less a flaw than a weakness, despite his skill with a pen, in these chapters the reader carefully observes FDR but never really gets to know him intimately. I have encountered this in other Dallek works. If you were, for instance, to juxtapose the Lyndon Johnson biographies of Robert Caro with those by Dallek, Caro’s LBJ colorfully leaps off the page as the flesh-and-blood menacing figure who grasps you by the lapels and bends you to his will, while Dallek’s LBJ remains off in the distance. Caro has that gift; Dallek does not.

Still, this is a fine book that marks a significant contribution to the literature. FDR was indeed a giant; there has never been anyone like him in the White House, nor are we likely to ever see a rival. Dallek succeeds in placing Roosevelt firmly in the context of his time, warts and all, so that we can better appreciate who he was and how he should be remembered.

Featured

Review of: Flight to Freedom, by Ellen Oppenheimer

When I first met Ellen Oppenheimer she was in her eighties, a spry woman with a contagious smile and a mischievous teenager’s twinkle in her eyes, standing beside her now-late husband Marty during a house call I made on behalf of the computer services company that I own and operate. But many, many years before that, when she was a very young child, she and her family fled the Nazis, literally one step ahead of a brutal killing machine that claimed too many of her relations, marked for death only because of certain strands of DNA that overlap with centuries of irrational hatred directed at Europeans of Jewish descent.

On subsequent visits, we got to know each other better, and she shared with me bits and pieces of the story of how her family cheated death and made it to America. In turn, I told her of my love of history—and the fact that while I was not raised as a Jew, we had in common some of those same Ashkenazi strands of DNA through my great-grandfather, who fled Russia on the heels of a pogrom. I also mentioned my book blog, and she asked that I bookmark the page on the browser of the custom computer they had purchased from me.

It was on a later house call, as Marty’s health declined, that I detected shadows intruding on Ellen’s characteristic optimism, and the deep concern for him looming behind the stoic face she wore. But all at once her eyes brightened when she announced with visible pride that she had published a book called Flight to Freedom about her childhood escape from the Nazis. She urged me to buy it and to read it, and she genuinely wanted my opinion of her work.

I did not do so.

But she persisted. On later visits, the urging evolved to something of an admonishment. When I would arrive, she always greeted me with a big hug, as if she was more a grandmother than a client, but she was clearly disappointed that I—a book reviewer no less—had not yet read her book. For my part, my resistance was deliberate. I had too many memories of good friends who pestered me to see their bands playing at a local pub, only to discover that they were terrible. I liked Ellen: what if her book was awful?  I did not want to take the chance.

Then, during the pandemic, I saw Marty’s obituary. Covid had taken him. Ellen and Marty had moved out of state, but I still had Ellen’s email, so I reached out with condolences. We both had much to say to each other, but in the end, she asked once more: “Have you read my book yet?” So I broke down and ordered it on Amazon, then took it with me on a week-long birthday getaway to an Airbnb.

Flight to Freedom is a thin volume with a simple all-white cover stamped with only title and author. I brought it with me to a comfortable chair in a great room lined with windows that gave breathtaking views to waves lapping the shore in East Lyme, CT. I popped the cap on a cold IPA and cracked the cover of Ellen’s book. Once I began, all my earlier reluctance slipped away: I simply could not stop reading it.

In an extremely well-written account—especially for someone with virtually no literary background—the author transports the reader back to a time when an educated, affluent, middle-class German family was overnight set upon a road of potential extermination in the wake of the Nazi rise to power. Few, of course, believed that a barbarity of such enormity could ever come to pass in 1933, when three-year-old Ellen’s father, Adolf, was seized on a pretext and jailed. But Grete, Ellen’s mother—the true hero of Flight to Freedom—was far more prescient. In a compelling narrative with a pace that never slows down, we follow the brilliant and intrepid Grete as she more than once engineers Adolf’s release from captivity and serves as the indefatigable engine of her family’s escape from the Nazis, first to Paris, and then later, as war erupted, to Marseille and Oran and finally Casablanca—the iconic route of refugees etched on a map that is splashed across the screen in the classic film featuring Bogart and Bergman.

The last leg was a Portuguese boat that finally delivered them to safety on Staten Island in 1942. In the passages of Flight to Freedom that describe that voyage, the author cannot disguise her disgust at the contempt displayed shipboard for the less fortunate by those who had purchased more expensive berths, when all were Jews who would of course have found a kind of hideous equality in Germany’s death camps. That fate, tragically, befell much of Ellen’s extended family, those who did not heed Grete’s warnings of what might befall them because they simply could not believe that such horrors could lurk in their future. Throughout the tale, there is a kind of nuance and complexity one might expect to find in a book by a trained academic or a gifted novelist that instead is delightfully on display by a novice author. Her voice and her pen are both strong from start to finish in this powerful and stirring work.

As a reviewer, can I find some flaws? Of course I can. In the narrative, Ellen treats her childhood character simply as a bystander; the story is instead told primarily through Grete’s eyes. As such, an omniscient point of view often serves as the vehicle of the chronicle, with observations and emotions the author could not really know for certain. And sometimes, the point of view shifts awkwardly. But these are quibbles. This is a fine book on so many levels, and the author deserves much praise for telling this story and telling it so well!

A few days after I read Flight to Freedom, I dug into my client database to come up with Ellen’s phone number and rang her up. She was, as I anticipated, thrilled that I had finally read the book, and naturally welcomed my positive feedback. After we chatted for a while, I confessed that my only complaint, if it could be called a complaint, was that the child character of Ellen stood mute much of the time in the narrative, and I wondered why she did not relate more of her feelings as a key actor in the drama. With a firm voice she told me (and I am paraphrasing here): “Because it is my mother’s story. It was she who saved our lives. She was the hero for our family.”

I think Grete would be proud of her little girl for telling the story this way, so many decades later. And I’m proud to know Ellen, who shared it so beautifully. Buy this book. Read it. It is a story you need to experience, as well.


Featured

Review of: Behind Putin’s Curtain: Friendships and Misadventures Inside Russia, by Stephan Orth

I must admit that I knew nothing of the apparently widespread practice of “couchsurfing” before I read Stephan Orth’s quirky, sometimes comic, and utterly entertaining travelogue, Behind Putin’s Curtain: Friendships and Misadventures Inside Russia. For the uninitiated, couchsurfing is a global community said to comprise more than 14 million members in over 200,000 cities, in virtually every country on the map. The purpose is to provide free if bare-bones lodging for travelers in exchange for forming new friendships and spawning new adventures. The term couchsurfing is an apt one, since frequently the visitor in fact beds down on a couch, although accommodations range from actual beds with sheets and pillowcases to blankets strewn on a kitchen floor—or, as Orth discovers to his amusement, a cot in a bathroom, just across from the toilet! Obviously, if your idea of a good time is a $2000/week Airbnb with a memory foam mattress and a breathtaking view, this is not for you, but if you are scraping together your loose change and want to see the world from the bottom up, couchsurfing offers an unusual alternative that will instantly plug you into the local culture by pairing you up with an authentic member of the community. Of course, authentic does not necessarily translate into typical. More on that later.

Orth, an acclaimed journalist from Germany, is no novice to couchsurfing, but rather a practiced aficionado, who has not only long relied upon it as a travel mechanism but has upped the ante by doing so in distant and out-of-the-ordinary spots like Iran, Saudi Arabia, and China, the subjects of his several best-selling books. This time he gives it a go in Russia: from Grozny in the North Caucasus, on to Volgograd and Saint Petersburg, then to Novosibirsk and the Altai Republic in Siberia, and finally Yakutsk and Vladivostok in the Far East. (Full disclosure: I never knew Yakutsk existed other than as a strategic corner of the board in the game of Risk.) All the while Orth proves a keen, non-judgmental observer of peoples and customs who navigates the mundane, the hazardous, and the zany with an enthusiasm instantly contagious to the reader. He’s a fine writer, with a style underscored by impeccable timing, comedic and otherwise, and passages often punctuated with wit and sometimes wicked irony. You can imagine him penning the narrative impatiently, eager to work through one paragraph to the next so he can detail another encounter, express another anecdote, or simply mock his circumstances once more, all while wearing a twinkle in his eye and a wry twist to his lips.

Couchsurfing may be routine for the author, but he wisely assumes this is not the case for his audience, so he introduces this fascinating milieu by detailing the process of booking a room. The very first one he describes turns out to be a hilarious online race down various rabbit holes over a sequence of seventy-nine web pages where his utterly eccentric eventual host peppers him with bizarre, even existential observations, and challenges potential guests to fill in various blanks while warning them “that he follows the principle of ‘rational egoism’” and “doesn’t have ten dwarves cleaning up after guests.” [p7] Orth, unintimidated, responds with a wiseass retort and wins the invitation.

Perhaps the most delightful portions of this book are Orth’s profiles of his various hosts, who tend to run the full spectrum from the odd to the peculiar. I say this absent any negative connotation that might otherwise be implied. After all, Einstein and Lincoln were both peculiar fellows. I only mean that the reader, eager to get a taste of local culture, should not mistake Orth’s bunkmates for typical representatives of their respective communities. This makes sense, of course, since regardless of nationality the average person is unlikely to welcome complete strangers into their homes as overnight guests for free. That said, most of his hosts come off as fascinating if unconventional folks you might love to hang out with, at least for a time. And they put as much trust in the author as he puts in them. One couple even briefly leaves Orth to babysit their toddler. Another host turns over the keys of his private dacha and leaves him unattended with his dog.

Of course, the self-deprecating Orth, who seems equally gifted as patient listener and engaging raconteur, could very well be the ideal guest in these circumstances. At the same time, he could also very well be a magnet for the outrageous and the bizarre, as witnessed by the madcap week-long car trip through Siberia he ends up taking with this wild and crazy chick named Nadya that begins when they meet and bond over lamb soup and a spirited debate as to what was the best Queen album, survives a rental car catastrophe on a remote roadway, and winds up with them horseback riding on the steppe. Throughout, with only a single exception, the two disagree about … well … absolutely everything, but still manage to have a good time. If you don’t literally laugh out loud while reading through this long episode, you should be banned for life from using the LOL emoji.

You would think that travel via couchsurfing could very well be dangerous—perhaps less for Orth, who is well over six feet tall and a veteran couchsurfer—but certainly for young, attractive women bedding down in unknown environs. But it turns out that such incidents, while not unknown, are very, very rare. The couchsurfing community is self-policing: guests and hosts rely on ratings and reviews not unlike those on Airbnb, which tends to minimize if not entirely eliminate creeps and psychos. Still, while 14 million people cannot be wrong, it’s not for everyone. Which leads me to note that the only fault I can find with this outstanding work is its title, Behind Putin’s Curtain, since it has little to do with Putin or the lives led by ordinary Russians: certainly the peeps that Orth runs with are anything but ordinary or typical! I have seen this book published elsewhere simply as Couchsurfing in Russia, which I think suits it far better. Other than that quibble, this is one of the best travel books that I have ever read, and I highly recommend it. And while I might be a little too far along in years to start experimenting with couchsurfing, I admire Orth’s spirit and I’m eager to read more of his adventures going forward.


[Note: the edition of this book that I read was an ARC (Advance Reader’s Copy), as part of an early reviewer’s program.]

Featured

Review of: Tim & Tigon, by Tim Cope

About five years ago, I read what I still consider to be the finest travel and adventure book I have ever come across, On the Trail of Genghis Khan: An Epic Journey Through the Land of the Nomads, by Tim Cope, a remarkable tale of an intrepid young Australian who in 2004 set out on a three-year, mostly solo trek on horseback across the Eurasian steppe from Mongolia to Hungary—some 10,000 kilometers (about 6,200 miles)—roughly retracing routes followed by Genghis Khan and his steppe warriors. An extraordinary individual, Cope refused to carry a firearm, despite warnings that predators, animal or human, might menace an untested foreigner alone on the vast and often perilous steppe corridor, instead relying on his instincts, personality, and determination to succeed, regardless of the odds. Oh, and those odds seemed further stacked against him because despite his outsize ambition, he was quite an inexperienced horseman—in fact his only previous attempt on horseback as a child left him with a broken arm! Nevertheless, his only companions for the bulk of the journey ahead would be three horses—and a dog named Tigon, foisted upon him against his will, that would become his best friend.

My 2016 review of On the Trail of Genghis Khan—which Cope featured on his website for a time—sparked an email correspondence between us, and shortly after publication he sent me an inscribed copy of his latest work, Tim & Tigon, stamped with Tigon’s footprints. I’m always a little nervous in these circumstances: what if the new book falls short? As it turned out, such concerns were misplaced; I enjoyed it so much I bought another copy to give as a gift!

In Kazakhstan, early in his journey, a herder named Aset connived to shift custody of a scrawny six-month-old puppy to Cope, insisting it would serve both as badly needed company during long periods of isolation as well as an ally to warn against wolves. The dog, a short-haired breed of hound known as a tazi, was named Tigon, which translates into something like “fast wind.” Tim was less than receptive, but Aset was persuasive: “In our country dogs choose their owners. Tigon is yours.” [p89] That initial grudging acceptance was to develop into a critical bond that was strengthened again and again during the many challenges that lay ahead. In fact, Tim’s connection with Tigon came to represent the author’s single most significant relationship in the course of this epic trek. Hence the title of this book.

Tim & Tigon defies simple categorization. On one level, it is a compact re-telling of On the Trail of Genghis Khan, but it’s not simply an abridged version of the earlier book. Styled as a Young Adult (YA) work, it has appeal to a much broader audience. And while it might be tempting to brand it as some kind of heartwarming boy and his dog tale, it is marked by a much greater complexity. Finally, as with the first book, it is bound to frustrate any librarian looking to shelve it properly: Is it memoir? Is it travel? Is it adventure? Is it survival? Is it a book about animals? It turns out to be about all of these and more.

As the title suggests, the emphasis this time finds focus upon the unique connection that develops between a once reluctant Tim and the dog that becomes nothing less than his full partner in the struggle to survive over thousands of miles of terrain marked by an often-hostile environment of extremes of heat and cold, conditions both difficult and dangerous, and numerous obstacles. But despite the top billing, neither Tim nor Tigon is the main character here. Instead, as the narrative comes to reveal again and again, the true stars of this magnificent odyssey are the land and its peoples, a sometimes-forbidding landscape that hosts remarkably resilient, enterprising, and surprisingly optimistic folks—clans, families, and individuals ever undaunted by highly challenging lifeways that have their roots in centuries-old customs.

Stalin effectively strangled their traditional nomadic ways in the former Soviet Union by enforcing borders that were unknown to their ancestors, but he never crushed their collective spirit. And long after the U.S.S.R. went out of business, these nomads still thrive, their orbits perhaps more circumscribed, their horses and camels supplemented—if not supplanted—by jeeps and motorbikes. They still make their homes in portable tents known as yurts, although these days many sport TV sets served by satellite and powered by generators. The overwhelming majority welcome the author into their humble camps, often with unexpected enthusiasm and outsize hospitality, generously offering him food and shelter and tending to his animals, even as many are themselves scraping by in conditions that can best be described as hardscrabble. The shared empathy between Cope and his hosts is marvelously palpable throughout the narrative, and it is this authenticity that distinguishes his work. It is clear that Tim is a great listener, and despite how alien he must have appeared upon arrival in these remote camps, he quickly establishes rapport with men, women, children, and clan elders—the old and the young—and remarkably repeats this feat in Mongolia, in Kazakhstan, in Russia, and beyond. This turns out to be his finest achievement: his talents with a pen are evident, to be sure, but the story he relates would hardly be as impressive if not for that element.

When Tim’s amazing journey across the steppe ended in Hungary in 2007, joy mingled with a certain melancholy at the realization that he would have to leave Tigon behind when he returned home. But the obstacles of an out-of-reach price tag and a mandatory quarantine were eventually overcome, and a little more than a year later, Tigon joined Tim in Australia. Tigon went on to sire many puppies and lived to a ripe old age until, tragically, the dog that once braved perils large and small on the harsh landscapes of the Eurasian steppe fell beneath the wheels of a speeding car on the Australian macadam. Tim was devastated by his loss, so this book is also, of course, a tribute to Tigon. My signed copy is inscribed with the Kazakh saying that served as a kind of ongoing guidepost to their trek together: “Trust in fate … but always tie up the camel.” That made me smile, but that smile was tinged with sadness as I gazed upon Tigon’s footprint stamped just below it. Tigon is gone, but he left an indelible mark not only on Tim, who perhaps still grieves for him, but also upon every reader, young and old, who is touched by his story.

[I reviewed Tim Cope’s earlier book here: Review of: On the Trail of Genghis Khan: An Epic Journey Through the Land of the Nomads, by Tim Cope]


Featured

Review of: The Caucasus: An Introduction, by Thomas De Waal

Some would argue that the precise moment that marked the beginning of the eventual dissolution of the Soviet Union was February 20, 1988, when the regional soviet governing the Nagorno-Karabakh Oblast—an autonomous region of mostly ethnic Armenians within the Soviet Republic of Azerbaijan—voted to redraw the maps and attach Nagorno-Karabakh to the Soviet Republic of Armenia. Thus began a long, bloody, and still unresolved conflict in the Caucasus that has ravaged once proud cities and claimed many thousands of lives of combatants and civilians alike. The U.S.S.R. went out of business on December 25, 1991, about midway through what has been dubbed the First Nagorno-Karabakh War, which ended on May 12, 1994, with an Armenian victory that established de facto—if internationally unrecognized—independence for the Republic of Artsakh (also known as the Nagorno-Karabakh Republic), but left much unsettled. Smoldering grievances would come to spark future hostilities.

That day came last fall, when the long uneasy stalemate ended suddenly with an Azerbaijani offensive in the short-lived 2020 Nagorno-Karabakh War that had ruinous consequences for the Armenian side. Few Americans have ever heard of Nagorno-Karabakh, but I was far better informed because when the war broke out I happened to be reading The Caucasus: An Introduction, by Thomas De Waal, a well-written, insightful, and—as it turns out—powerfully relevant book that in its careful analysis of this particular region raises troubling questions about human behavior in similar socio-political environments elsewhere.

What is the Caucasus? A region best described as a corridor between the Black Sea on one side and the Caspian Sea on the other, bounded on the south by Turkey and Iran, and on the north by Russia and the Greater Caucasus mountain range that has long been seen as the natural border between Eastern Europe and Western Asia. Above those mountains in southern Russia is what is commonly referred to as the North Caucasus, which includes Dagestan and Chechnya. Beneath them lies Transcaucasia, comprised of the three tiny nations of Armenia, Azerbaijan, and Georgia, whose modern history began with the collapse of the Soviet Union and which are the focus of De Waal’s fascinating study. The history of the Caucasus is the story of peoples dominated by the great powers beyond their borders, and despite independence this remains true to this day: Russia invaded Georgia in 2008 to support separatist enclaves in Abkhazia and South Ossetia, in the first European war of the twenty-first century; Turkey provided military support to Azerbaijan in the 2020 Nagorno-Karabakh War.

At this point, some readers of this review will pause, intimidated by exotic place names in an unfamiliar geography. Fortunately, De Waal makes that part easy with a series of outstanding maps that put the past and the present into appropriate context. At the same time, the author eases our journey through an often-uncertain terrain by applying a talented pen to a dense but highly readable narrative that assumes no prior knowledge of the Caucasus. At first glance, this work has the look and feel of a textbook of sorts, but because De Waal has such a fine-tuned sense of the lands and the peoples he chronicles, there are times when the reader feels as if a skilled travel writer were escorting them through history and then delivering them to the brink of tomorrow. Throughout, breakout boxes lend a captivating sense of intimacy to places and events that, after all, host human beings who, like their counterparts in other troubled regions, live, laugh, and sometimes tragically perish because of their proximity to armed conflict that typically has little to do with them personally.

De Waal proves himself a strong researcher, as well as an excellent observer gifted with an analytical acumen that not only carefully scrutinizes the complexity of a region bordered by potentially menacing great powers, and pregnant with territorial disputes, historic enmities, and religious division, but also identifies the tolerance and common ground in shared cultures enjoyed by its ordinary inhabitants if left to their own devices. More than once, the author bemoans the division driven by elites on all sides of competing causes that have swept up the common folk who have lived peacefully side-by-side for generations, igniting passions that led to brutality and even massacre. This is a tragic tale we have seen replayed elsewhere, with escalation to genocide among former neighbors in what was once Yugoslavia, for instance, and also in Rwanda. For all the bloodletting, it has not risen to that level in the Caucasus, but unfortunately spots like Nagorno-Karabakh have all the ingredients for some future catastrophe if wiser heads do not prevail.

I picked up this book quite randomly last summer en route from a Vermont Airbnb, on my first visit to a brick-and-mortar bookstore since the start of the pandemic. A rare positive from quarantine has been a good deal of time to read and reflect. I am grateful that The Caucasus: An Introduction was in the fat stack of books that I consumed in that period. Place names and details are certain to fade, but I will long remember the greater themes De Waal explored here. If you are curious about the world, I would definitely recommend this book to you.

[Note: Thomas de Waal is a senior fellow with Carnegie Europe, specializing in Eastern Europe and the Caucasus region.]

Featured

Review of: A History of Crete, by Chris Moorey

Myth has it that before he became king of Athens, Theseus went to Crete and slew the Minotaur, a creature half-man and half-bull that roamed the labyrinth in Knossos. According to Homer’s Iliad, Idomeneus, King of Crete, was one of the top-ranked generals of the Greek alliance in the Trojan War.  But long before the legends and the literature, Crete hosted Europe’s most advanced early Bronze Age civilization—dubbed the Minoan—which was then overrun and absorbed by the Mycenean Greeks who are later said to have made war at Troy. Minoan Civilization flourished circa 3000 BCE-1450 BCE, when the Myceneans moved in. What remains of the Minoans are magnificent ruins of palace complexes, brilliantly rendered frescoes depicting dolphins, bull-leaping lads, and bare-breasted maidens, and a still-undeciphered script known as Linear A. The deepest roots of Western Civilization run to the ancient Hellenes, so much so that some historians proclaim the Greeks the grandfathers of the modern West. If that is true, then the Minoans of Crete were the grandfathers of the Greeks.

Unfortunately, if you want to learn more about the Minoans, do not turn to A History of Crete, by former educator Chris Moorey, an ambitious if too often dull work that affords this landmark civilization a mere 22 pages. Of course, the author has every right to emphasize what he deems most relevant, but the reader also has a right to feel misled—especially as the jacket cover sports a bull-leaping scene from a Minoan fresco! And it isn’t only the Minoans that are bypassed; Moorey’s treatment of Crete’s glorious ancient past is at best superficial. After a promising start that touches on recent discoveries of Paleolithic hand-axes, he fast-forwards at a dizzying rate: Minoan Civilization ends on page 39; more than a thousand years of Greek dominance concludes on page 66, and Roman rule is over by page 84. Thus begins the long saga of Crete as a relative backwater, under the sway of distant colonial masters.

I am not certain of the author’s strategy, but it appears his goal was to divide Crete’s long history into equal segments, an awkward tactic akin to a biographer of Lincoln lending equal time to his rail-splitting and his presidency. At any rate, much of the story is simply not all that interesting the way Moorey tells it.  In fact, too much of it reads like an expanded Wikipedia entry, while sub-headings too frequently serve as unwelcome interruptions to a narrative that generally tends to be stilted and colorless. The result is a chronological report of facts about people and events, conspicuously lacking the analysis and interpretation critical to a historical treatment.  Moreover, the author’s voice lacks enthusiasm and remains maddeningly neutral, whether the topic is tax collection or captive rebels impaled on hooks. As the chronicle plods across the many centuries, there is also a lack of connective tissue, so the reader never really gets a sense of what distinguishes the people of Crete from people anywhere else. What are their salient characteristics? What is the cultural glue that bonds them together? We never really find out.

To be fair, there is a lot of information here. And Moorey is not a bad writer, just an uninspired one. Could this be because the book is directed at a scholarly rather than a popular audience, and academic writing by its nature can often be stultifying? That’s one possibility.  But is it even a scholarly work? The endnotes are slim, and few point to primary sources.

A History of Crete is a broad survey that may serve as a useful reference for those seeking a concise study of the island’s past, but it seems like an opportunity missed.  In the final paragraph, the author concludes: “In spite of all difficulties, it is likely the spirit of Crete will survive.” What is this spirit of Crete he speaks of? Whatever it may be, the reader must look elsewhere to find out.

Featured

Review of: The Steppe and the Sea: Pearls in the Mongol Empire, by Thomas T. Allsen

In the aftermath of a clash in Turkistan in 1221, a woman held captive by Mongol soldiers admitted she had swallowed her pearls to safeguard them. She was immediately executed and eviscerated. When pearls were indeed recovered, Genghis Khan “ordered that they open the bellies of the slain” on the battlefield to look for more. [p23] Such was the consequence of pearls for the Mongol Empire.

As this review goes to press (5-12-21), the value of a single Bitcoin is about $56,000 U.S. dollars—dwarfing the price for an ounce of gold at a mere $1830—an astonishing number for a popular cybercurrency that few even accept for payment. Those ridiculing the rise of Bitcoin dismiss it as imaginary currency. But aren’t all currencies imaginary? The paper a dollar is printed on certainly is not worth much, but it can be exchanged for a buck because the United States government says so, subject to inflation of course. All else rises and falls on a market that declares a value, which varies from day-to-day. Then why, you might ask, in the rational world of the twenty-first century, are functionally worthless shiny objects like gold and diamonds (for non-industrial applications) worth anything at all? It’s a good question, but hardly a new one—long before the days of Jericho and Troy, people attached value to the pretty but otherwise useless. Circa 4200 BCE, spondylus shells were money of a sort in both Old Europe and the faraway Andes. Remarkably, cowries once served as the chief economic mechanism in the African slave trade; for centuries human beings were bought and sold as chattel in exchange for shells that once housed sea snails!

The point is that even the most frivolous item can be deemed of great worth if enough agree that it is valuable. With that in mind, it is hardly shocking to learn that pearls were treasured above all else by the Mongols during their heady days of empire. It may nevertheless seem surprising that this phenomenon would be worthy of a book-length treatment, but acclaimed educator, author, and historian Thomas T. Allsen makes a convincing case that it is in his final book prior to his passing in 2019, The Steppe and the Sea: Pearls in the Mongol Empire, which will likely remain the definitive work on this subject for some time to come.

The footprint of the Mongols and their significance to global human history have been vast if too often underemphasized, a casualty of the Eurocentric focus on so-called “Western Civilization.” Originally nomads that roamed the steppe, by the thirteenth and fourteenth centuries the Mongols ruled a transcontinental empire that formed the largest contiguous land empire in history, stretching from Eastern Europe to the Sea of Japan, encompassing parts or all of China, Southeast Asia, the Iranian plateau, and the Indian subcontinent. Ruthlessly effective warriors, the Mongols toppled numerous kingdoms in the fierce onslaughts that marked their famously brutal trails of conquest. Less well-known, as Allsen reveals, was their devotion to plunder, conducted with both a ferocious appetite and perhaps the greatest degree of organization ever seen in the sacking of cities. No spoils were prized more than pearls, acquired from looted state treasuries as well as individuals such as that random unfortunate who was sliced open at Genghis Khan’s command. Pearls were more than simply booty; the Mongols were obsessed with them.

This is a story that turns out to be as fascinating as it is unexpected. The author’s approach is highly original, cogently marrying economics to political culture and state-building without losing sight of his central theme. In a well-written if decidedly academic narrative, Allsen focuses on the Mongol passion for pearls as symbols of wealth and status to explore a variety of related topics. One of the most entertaining examines the Yuan court, where pearls were the central element for wardrobe and fashion, and rank rigidly determined if and how these could be displayed. At the very top tier, naturally, was the emperor and his several wives, who were spectacularly identifiable in their extravagant ornamentation. The emperor’s consorts wore earrings of “matched tear-shaped pearls” said to be the size of hazelnuts, or alternately, pendant earrings with as many as sixty-five matched pearls attached to each pendant! More flamboyant was their elaborate headgear, notably the tall, unwieldy boot-shaped headdress called a boghta that was decorated with plumes and gems and—of course—many more pearls! [p52-53]

Beyond the spotlight on court life, the author widens his lens to explore broader arenas. The Mongols may have been the most fanatical about acquiring pearls, but they certainly were not the first to value them, nor the last; pearls remain among the “shiny objects” with no real function beyond adornment that command high prices to this day. Allsen provides a highly engaging short course for the reader as to where pearls come from and why the calcium carbonate that forms a gemstone in one oyster is—based upon shape, size, luster, and color—prized more than another. This is especially important because of the very paradox the book’s title underscores: it is remarkable that products from the sea became the most prized possession of a people of the steppe! There is also a compelling discussion of the transition from conquering nomad warrior to settled overlord that offers a studied debate on whether the “self-indulgent” habit of coveting consumer goods such as “fine textiles, precious metals, and gems, especially pearls” was the result of being corrupted by the sedentary “civilized” peoples they subjugated, or if such cravings were born in a more distant past. [p61]

While I enjoyed The Steppe and the Sea, especially the first half, which concludes with the disintegration of the Mongol Empire, this book is not for everyone. Academic writing imposes a certain stylistic rigidity that suits the scholarly audience it is intended for, but that tends to create barriers for the general reader. In this case accessibility is further diminished by Allsen’s translation of Mongolian proper names into ones likely unfamiliar to those outside of his field: Genghis Khan is accurately rendered as Chinggis Qan, and Kublai Khan as Qubilai Qan, but this causes confusion that might have been mitigated by a parenthetical reference to the more common name. And the second part of the book, “Comparisons and Influence,” which looks beyond the Mongol realm, is slow going. A better tactic might have been to incorporate much of it into the previous narrative, strengthening connections and contrasts while improving readability. On the plus side, sound historical maps are included that proved a critical reference throughout the read.

The Mongol Empire is ancient history, but these days a wild pearl of high quality could still be worth as much as $100,000, although most range in price from $300 to $1500. It seems like civilization is still pretty immature when it comes to shiny objects. On the other hand, this morning, an ounce of palladium—a precious metal valued for its use in catalytic converters and multi-layer ceramic capacitors rather than jewelry—was priced at almost $3000, some 62% more than an ounce of gold! So maybe there is hope for us, after all. I wish Dr. Allsen were still alive so I could reach out via email and find out his thoughts on the subject. Since that is impossible, I can only urge you to read his final book and consider how little human appetites have changed throughout the ages.

Featured

Review of: A Tidewater Morning, by William Styron

In April 1962, President John F. Kennedy hosted a remarkable dinner for more than four dozen Nobel Prize winners and assorted other luminaries drawn from the top echelons of the arts and sciences.  With his characteristic wit, JFK pronounced it “The most extraordinary collection of talent, of human knowledge, that has ever been gathered together at the White House with the possible exception of when Thomas Jefferson dined alone.” One of the least prominent guests that evening was the novelist William Styron, who attended with his wife Rose, and recalled his surprise at the invitation. Styron was not yet the critically acclaimed, Pulitzer Prize-winning literary icon he would later become, but he was hardly an unknown figure, and it turns out that his most recent novel of American expatriates, Set This House on Fire, was the talk of the White House in the weeks leading up to the event. So he had the good fortune to dine not only with the President and First Lady, but with the likes of John Glenn, Linus Pauling, and Pearl Buck—and in the after-party forged a long-term intimate relationship with the Kennedy family.

My first Styron was The Confessions of Nat Turner, which I read as a teen. Its merits somewhat unfairly subsumed at the time by the controversy it sparked over race and remembrance, it remains a notable achievement, as well as a reminder that literature is not synonymous with history, nor should it be held to that account.  I found Set This House on Fire largely forgettable, but as an undergrad was utterly blown away when I read Lie Down in Darkness, his first novel and a true masterpiece that while yet indisputably original clearly evoked the Faulknerian southern gothic.  I went on to read anything by the author I could get my hands on. Also a creature of controversy upon publication, Sophie’s Choice, winner of the National Book Award for Fiction in 1980, remains in my opinion one of the finest novels of the twentieth century.

I thought I had read all of Styron’s fiction, so it was with certain surprise that I learned from a friend who is both author and bibliophile of the existence of A Tidewater Morning, a collection of three novellas I had somehow overlooked.  I bought the book immediately, and packed it to take along for a pandemic retreat to a Vermont cabin in the woods where I read it through in the course of the first day and a half of the getaway, parked in a comfortable chair on the porch sipping hot coffee in the morning and cold beer in late afternoon. Perhaps it was the fact that this was our first breakaway from months of quarantine isolation, or maybe it was the alcohol content of the IPA I was tossing down, but there was definitely a palpable emotional tug for me reading Styron again—works previously unknown to me no less—so many decades after my last encounter with his work, back when I was a much younger man than the one turning these pages. The effect was more pronounced, I suppose, because the semi-autobiographical stories in this collection look back to Styron’s own youth in the Virginia Tidewater in the 1930s and were written when he too was a much older man.

“Love Day,” the first tale of the collection, has him as a young Marine in April 1945, yet untested in combat, awaiting orders to join the invasion of Okinawa and wrestling with the ambivalence of chasing heroic destiny while privately entertaining “gut-heaving frights.” There’s much banter among the men awaiting their fate, but the story of real significance is told through flashbacks to an episode some years prior, he still a boy in the back seat of his father’s Oldsmobile, broken down on the side of the road.  War is looming—the very war he is about to join—although it was far from certain then, but the catastrophe of an unprepared America overrun by barbaric Japanese invaders is the near-future imagined in the Saturday Evening Post piece the boy is reading in the back of the stalled car. Simmering tempers flare when he lends voice to the prediction. His mother, stoic in her leg brace, slowly dying of a cancer known to all but unacknowledged, had earlier furiously rebuked him for mouthing a racist epithet and now upbraids him again for characterizing the Japanese as “slimy butchers,” while belittling the notion of a forthcoming war. Unexpectedly, his father—a mild, highly educated man quietly raging at his own inability to effect a simple car repair—lashes out at his wife, branding her “idiotic” and “a fool” for her naïve idealism, then crumbles under the weight of his words to beg her forgiveness.  It is a dramatic snapshot not only of a moment of a family in turmoil, but of a time and a place that has long faded from view. Only Styron’s talent with a pen could leave us with so much from what is, after all, only a few pages.

The third story is the title tale, “A Tidewater Morning,” which revisits the family to follow his mother’s final, agonizing days. It concludes with both the boy and his father experiencing twin if unrelated epiphanies. It’s a good read, but I found it a bit overwrought, lacking the subtlety characteristic of Styron’s prose.

Sandwiched between these two is my own favorite, “Shadrach,” the story of a 99-year-old former slave—sold away to an Alabama plantation in antebellum days—who shows up unpredictably with the dying wish to be buried in the soil of the Dabney property where he was born. The problem is that the Dabney descendant currently living there is a struggling, dirt-poor fellow who could be a literary cousin of one of the Snopes often resident in Faulkner novels.  The law prohibits interring a black man on his property, and he likewise lacks the means to afford to bury him elsewhere. On the surface, “Shadrach” appears to be a simple story, but on closer scrutiny reveals itself to be a very complex one peopled with multidimensional characters and layered with vigorous doses of both comedy and tragedy.

I highly recommend Styron to those who have not yet read him. For the uninitiated, (spoiler alert!) I will close this review with a worthy passage:

“Death ain’t nothin’ to be afraid about,” he blurted in a quick, choked voice … “Life is where you’ve got to be terrified!” he cried as the unplugged rage spilled forth. … Where in the goddamned hell am I goin’ to get the money to put him in the ground? … I ain’t got thirty-five-dollars! I ain’t got twenty-five dollars! I ain’t got five dollars!” … “And one other thing!” He stopped. Then suddenly his fury—or the harsher, wilder part of it—seemed to evaporate, sucked up into the moonlit night with its soft summery cricketing sounds and its scent of warm loam and honeysuckle. For an instant he looked shrunken, runtier than ever, so light and frail that he might blow away like a leaf, and he ran a nervous, trembling hand through his shock of tangled black hair. “I know, I know,” he said in a faint, unsteady voice edged with grief. “Poor old man, he couldn’t help it. He was a decent, pitiful old thing, probably never done anybody the slightest harm. I ain’t got a thing in the world against Shadrach. Poor old man.” …
“And anyway,” Trixie said, touching her husband’s hand, “he died on Dabney ground like he wanted to. Even if he’s got to be put away in a strange graveyard.”
“Well, he won’t know the difference,” said Mr. Dabney. “When you’re dead nobody knows the difference. Death ain’t much.” [p76-78]

NOTE: To learn more about JFK’s Nobel Dinner, check out this outstanding book, which contains a foreword by Rose Styron: Review of: Dinner in Camelot: The Night America’s Greatest Scientists, Writers, and Scholars Partied at the Kennedy White House, by Joseph A. Esposito

Featured

Review of: Africa: A Biography of the Continent, by John Reader

Africa. My youth largely knew of it only through the distorted lens of racist cartoons peopled with bone-in-their-nose cannibals, B-grade movies showcasing explorers in pith helmets who somehow always managed to stumble into quicksand, and of course Tarzan. Even then it was still sometimes referred to as the “Dark Continent,” something that was supposed to mean dangerous and mysterious but also translated, for most of us, into the kind of blackness that was synonymous with race and skin color.

My interest in Africa came via the somewhat circuitous route of my study of the Civil War. The central cause of that conflict was, of course, human chattel slavery, and nearly all the enslaved were descendants of lives stolen from Africa. So, for me, a closer scrutiny of the continent was the logical next step. One of the benefits of a fine personal library is that there are hundreds of volumes sitting on shelves, waiting for me to finally find them. Such was the case for Africa: A Biography of the Continent, by John Reader, which sat unattended but beckoning for some two decades until a random evening found a finger on the spine, and then the cover was open and the book was in my lap. I did not turn back.

With a literary flourish rarely present in nonfiction combined with the ambitious sweep of something like a novel of James Michener, Reader attempts nothing less than the epic as he boldly surveys the history of Africa from the tectonic activities that billions of years ago shaped the continent, to the evolution of the single human species that now populates the globe, to the rise and fall of empires, to colonialism and independence, and finally to the twin witness of the glorious and the horrific in the peaceful dismantling of South African apartheid and the Rwandan genocide. In nearly seven hundred pages of dense but highly readable text, the author succeeds magnificently, identifying the myriad differences in peoples and lifeways and environments while not neglecting the shared themes that then and now much of the continent holds in common.

Africa is the world’s second largest continent, and it hosts by far the largest number of sovereign nations: with the addition of South Sudan in 2011—twelve years after Reader’s book was published—there are now fifty-four, as well as a couple of disputed territories. But nearly all of these states are artificial constructs that are relics of European colonialism, lines on maps once penciled in by elite overlords in distant drawing rooms in places like London, Paris, Berlin, and Brussels, and those maps were heavily influenced by earlier incursions by the Spanish, Portuguese, and Dutch. Much of the poverty, instability, and often dreadful standards of living in Africa are the vestiges of these artificial borders that mostly ignored prior states, tribes, clans, languages, religions, identities, lifeways. When their colonial masters, who had long raped the land for its resources and the people for their self-esteem, withdrew in the whirlwind decolonization era of 1956-1976—some at the strike of the pen, others at the point of the sword—the exploiters left little of value for nation-building to the exploited beyond the mockery of those boundaries. Whatever of the ancestral past had been lost in the process was lost irrevocably. That is one of Reader’s themes. But there is so much more.

The focus is, as it should be, on sub-Saharan Africa; the continent’s northern portion is an extension of the Mediterranean world, marked by the storied legacies of ancient Greeks, Carthaginians, Romans, and the later Arab conquest. And Egypt, then and now, belongs more properly to the Middle East. But most of Africa’s vast geography stretches south of that, along the coasts and deep into the interior. Reader delivers “Big History” at its best, and the sub-Saharan offers up an immense arena for the drama that entails—from the fossil beds that begat Homo habilis in Tanzania’s Olduvai Gorge, to the South African diamond mines that spawned enormous wealth for a few on the backs of the suffering of a multitude, to today’s Maasai Mara game reserve in Kenya that we learn is not as we would suppose a remnant of some ancient pristine habitat, but rather a breeding ground for the deadly sleeping sickness carried by the tsetse fly that turned once productive land into a place unsuitable for human habitation.

Perhaps the most remarkable theme in Reader’s book is population sustainability and migration. While Africa is the second largest of earth’s continents, it remains vastly underpopulated relative to its size. Given the harsh environment, limited resources, and prevalence of devastating disease, there is strong evidence that it has likely always been this way.  Slave-trading was, of course, a kind of forced migration, but more typically Africa’s history has long been characterized by a voluntary movement of peoples away from the continent, to the Middle East, to Europe, to all the rest of the world.  Migration has always been—and remains today—subject to the dual factors of “push” and “pull,” but the push factor has dominated. That is perhaps the best explanation for what drove the migrations of archaic and anatomically modern humans out of Africa to populate the rest of the globe. The recently identified 210,000-year-old Homo sapiens skull in a cave in Greece reminds us that this has been going on a very long time. Homo erectus skulls found in Dmanisi, Georgia that date to 1.8 million years old underscore just how long!

Slavery is, not unexpectedly, also a major theme for Reader, largely because of the impact of the Atlantic slave trade on Africa and how it forever transformed the lifeways of the people directly and indirectly affected by its pernicious hold—culturally, politically and economically. The slavery that was a fact of life on the continent before the arrival of European traders closely resembled its ancient roots; certainly race and skin color had nothing to do with it. As noted, I came to study Africa via the Civil War and antebellum slavery. To this day, a favored logical fallacy advanced by “Lost Cause” apologists for the Confederate slave republic asks rhetorically “But their own people sold them as slaves, didn’t they?” As if this contention—if it was indeed true—would somehow expiate or at least attenuate the sin of enslaving human beings. But is it true? Hardly. Captors of slaves taken in raids or in war by one tribe or one ethnicity would hardly consider them “their own people,” any more than the Vikings that for centuries took Slavs to feed the hungry slave markets of the Arab world would have considered them “their own people.” This is a painful reminder that such notions endure in the mindset of the deeply entrenched racism that still defines modern America—a racism derived from African chattel slavery to begin with. It reflects how outsiders might view Africa, but not how Africans view themselves.

The Atlantic slave trade left a mark on every African who was touched by it as buyer, seller or unfortunate victim. The insatiable thirst for cheap labor to work sugar (and later cotton) plantations in the Americas overnight turned human beings into Africa’s most valuable export. Traditions were trampled. An ever-increasing demand put pressure on delivering supply at any cost. Since Europeans tended to perish in Africa’s hostile environment of climate and disease, a whole new class of “middle-men” came to prominence. Slavery, which dominated trade relations, corrupted all it encountered and left scars from its legacy upon the continent that have yet to fully heal.

This review barely scratches the surface of the range of material Reader covers in this impressive work. It’s a big book, but there is not a wasted page or paragraph, and it neither neglects the diversity nor what is held in common by the land and its peoples. Are there flaws? The included maps are terrible, but for that the publisher should be faulted rather than the author. To compensate, I hung a map of modern Africa on the door of my study and kept a historical atlas as companion to the narrative. Other than that quibble, the author’s achievement is superlative. Rarely have I read something of this size and scope and walked away so impressed, both with how much I learned as well as the learning process itself. If you have any interest in Africa, this book is an essential read. Don’t miss it.

Featured

Review of: Hymns of the Republic: The Story of the Final Year of the American Civil War, by S.C. Gwynne

Some years ago, I had the pleasure to stay in a historic cabin on a property in Spotsylvania that still hosts extant Civil War trenches. Those who imagine great armies clad in blue and grey massed against each other with pennants aloft on open fields would not be wrong for the first years of the struggle, but those trenches better reflect the reality of the war as it ground to its slow, bloody conclusion in its final year. Those last months contained some of the greatest drama and most intense suffering of the entire conflict, yet often receive far less attention than deserved. A welcome redress to this neglect is Hymns of the Republic: The Story of the Final Year of the American Civil War, by journalist and historian S.C. Gwynne, that neatly marries literature to history and resurrects for us the kind of stirring narratives that once dominated the field.

Looking back, for all too many Civil War buffs it might seem that a certain Fourth of July in 1863—when in the east a battered Lee retreated from Gettysburg on the same day that Vicksburg fell in the west—marked the beginning of the end for the Confederacy.  But experts know that assessment is overdrawn. Certainly, the south had sustained severe body blows on both fronts, but the war yet remained undecided. Like the colonists four score and seven years prior to that day, these rebels did not need to “win” the war, only to avoid losing it. As it was, a full ninety-two weeks—nearly two years—lay ahead until Appomattox, some six hundred forty-six days of bloodshed and uncertainty for both sides, most of what truly mattered compressed into the last twelve months of the war. And, tragically, those trenches played a starring role.

Hymns of the Republic opens in March 1864, when Ulysses Grant—architect of the fall of Vicksburg that was by far the more significant victory on that Independence Day 1863—was brought east and given command of all Union armies. In the three years since Fort Sumter, the war had not gone well in the east, largely as the result of a series of less-than-competent northern generals who had squandered opportunities and been repeatedly driven to defeat or denied outright victory by the wily tactician, Robert E. Lee. The seat of the Confederacy at Richmond—only a tantalizing ninety-five miles from Washington—lay unmolested, while European powers toyed with the notion of granting it recognition. The strategic narrative in the west was largely reversed, marked by a series of dramatic Union victories crafted by skilled generals, crowned by Grant’s brilliant campaign that saw Vicksburg fall and the Confederacy virtually cut in half. But all eyes had been on the east, to Lincoln’s great frustration. Now events in the west were largely settled, and Lincoln brought Grant east, confident that he had finally found the general who would defeat Lee and end the war. But while Lincoln’s instincts proved sound in the long term, misplaced optimism for an early close to the conflict soon evaporated. More than a year of blood and tears lay ahead.

Much of the battle narrative is a familiar story—Grant Takes Command was the exact title of a Bruce Catton classic—but Gwynne updates it with the benefit of the latest scholarship, which looks beyond not only the stereotypes of Grant and Lee but also the very dynamics of more traditional treatments focused solely upon battles and leaders. Most prominently, he resurrects the African Americans who were until somewhat recently conspicuously absent from much Civil War history, buried beneath layers of propaganda spun by unreconstructed Confederates who fashioned an alternate history of the war—the "Lost Cause" myth—that for too long dominated Civil War studies and still stubbornly persists in right-wing politics and the curricula of some southern school systems to this day. In the process, Gwynne restores African Americans to their place as central players in the struggle, a role long erased from the history books.

Erased. Remarkably, most Americans rarely thought of blacks at all in the context of the war until the film Glory (1989) and Ken Burns' docuseries The Civil War (1990) came along. And there are still books—Joseph Wheelan's Their Last Full Measure: The Final Days of the Civil War, published in 2015, springs to mind—that demote these key actors to bit parts. Yet, without enslaved African Americans there would have never been a Civil War. The centrality of slavery to secession has been just as incontrovertibly asserted by the scholarly consensus as it has been vehemently resisted by Lost Cause proponents who would strike out that uncomfortable reference and replace it with the euphemistic "States' Rights," neatly obscuring the fact that southern states seceded to champion and perpetuate the right to own dark-complected human beings as chattel property. Social media is replete with concocted fantasies of legions of "Black Confederates," but the reality is that about a half million African Americans fled to Union lines, and so many enlisted to make war on their former masters that by the end of the war fully ten percent of the Union army was composed of United States Colored Troops (USCT). Blacks knew what the war was about, and ultimately proved a force to be reckoned with that drove Union victory, even as a deeply racist north often proved less than grateful for their service.

Borrowing a page from the latest scholarship, Gwynne points to the prominence of African Americans throughout the war, but especially in its final months—marked both by remarkable heroism and a trail of tragedy. His story of the final year of the conflict commences with the massacre at Fort Pillow in April 1864 of hundreds of surrendering federal troops—the bulk of whom were uniformed blacks—by Confederates under the command of Nathan Bedford Forrest. The author gives Forrest a bit of a pass here—while the general was himself not on the field, he later bragged about the carnage—but Gwynne rightly puts focus on the long-term consequences, which were manifold.

The Civil War was the rare conflict in history not marred by wide-scale atrocities—except towards African Americans. Lee's allegedly "gallant" forces in the Gettysburg campaign kidnapped blacks they encountered to send south into slavery, and while Fort Pillow might have been the most significant open slaughter of black soldiers by southerners, it was hardly the exception. Confederates were enraged to see blacks garbed in uniform and sporting a rifle, and black soldiers were thus frequently murdered once disarmed rather than taken prisoner like their white counterparts. Something like a replay of Fort Pillow occurred at the Battle of the Crater during the siege of Petersburg, although the circumstances were more ambiguous, as the blacks gunned down in what rebels termed a "turkey shoot" were not begging for their lives as at Pillow. This was not far removed from official policy, of course: the Confederate government threatened to execute or sell into slavery captured black soldiers, and refused to consider them for prisoner exchange. This was a critical factor that led to the breakdown of the parole and exchange processes that had served as guiding principles throughout much of the war. The result, on both sides, was the horror of overcrowding and deplorable conditions in places like Georgia's Andersonville and Camp Douglas in Chicago.

Meanwhile, Grant was hardly disappointed with the collapse of prisoner exchange. To his mind, anything that denied the south men or materiel would hasten the end of the war, which was his single-minded pursuit. Grant has long been subjected to calumnies that branded him "Grant the Butcher" because he seemed to throw lives away in hopeless attempts to dislodge a heavily fortified enemy. The most infamous example of this was Cold Harbor, which saw massive Union casualties. But Lee's tactical victory there—it was to be his last of the war—further depleted his rapidly diminishing supply of men and arms which simply could not be replaced. Grant had a strategic vision that set him apart from the rest. That Lee pushed on as the odds shrank for any outcome other than ultimate defeat came to beget what Gwynne terms "the Lee paradox: the more the Confederates prolonged the war, the more the Confederacy was destroyed." [p252] And that destruction was no unintended consequence, but a deliberate component of Grant's grand strategy to prevent food, munitions, pack animals, and slave labor from supporting the enemy's war effort. Gwynne finds fault with Sherman's generalship, but his "march to the sea" certainly achieved what had been intended. And while the northern public was divided between those who would make peace with the rebels and those impatient with both Grant and Lincoln over an elusive victory, it was Sherman who delivered Atlanta and ensured the reelection of the president, something much in doubt even in Lincoln's own mind.

There is far more contained within the covers of this fine work than any review could properly summarize. Much to his credit, the author does not neglect those often marginalized by history, devoting a well-deserved chapter to Clara Barton entitled "Battlefield Angel." And the very last paragraph of the final chapter settles upon Juneteenth, when—far removed from the now quiet battlefields—the last of the enslaved finally learned they were free. Thus, the narrative ends as it began, with African Americans in the central role too often denied them in other accounts. For those well-read in the most recent scholarship, there is little new in Hymns of the Republic, but the general audience will find much to surprise them, if only because a good deal of this material has long been overlooked. Perhaps Gwynne's greatest achievement is in distilling a grand story from the latest historiography and presenting it as the kind of exciting read Civil War literature is meant to be. I highly recommend it.

 

I reviewed Their Last Full Measure: The Final Days of the Civil War, by Joseph Wheelan, here: Review of: Their Last Full Measure: The Final Days of the Civil War, by Joseph Wheelan

 

The definitive study of the massacre at Fort Pillow is River Run Red: The Fort Pillow Massacre in the American Civil War, by Andrew Ward, which I reviewed here: Review of: River Run Red: The Fort Pillow Massacre in the American Civil War, by Andrew Ward

 

Featured

Review of: A World on Edge: The End of the Great War and the Dawn of a New Age, by Daniel Schönpflug

A familiar construct for students of European history is what is known as "The Long Nineteenth Century," a period bookended by the French Revolution and the start of the Great War. The Great War. That is what it used to be called, before it was diminished by its rechristening as World War I, to distinguish it from the even more horrific conflict that was to follow just two decades hence. It is the latter that in retrospect tends to overshadow the former. Some are even tempted to characterize one as simply a continuation of the other, but that is an oversimplification. There was in fact far more than semantics to that designation of "Great War," and historians are correct to flag it as a definitive turning point, for by the time it was over Europe's cherished notions of civilization—for better and for worse—lay in ruins, and her soil hosted not only the scars of vast, abandoned trenches, but the bones of millions who had held dear, in their heads and in their hearts, the myths those notions turned out to be.

The war ended with a stopwatch of sorts. The Armistice that went into effect on November 11, 1918 at 11AM Paris time marked the end of hostilities, a synchronized moment of collective European consciousness that, it is said, all who experienced it would recall for as long as they lived. Of course, something like 22 million souls—military and civilian—could not share that moment: they were the dead. Nearly three thousand died that very morning, as fighting continued right up to the final moments when the clock ran out.

What happened next? There is a tendency to fast forward because we know how it ends: the imperfect Peace of Versailles, the impotent League of Nations, economic depression, the rise of fascism and Nazism, American isolationism, Hitler invades Poland. In the process, so much is lost. Instead, Daniel Schönpflug artfully slows the pace with his well-written, highly original strain of microhistory, A World on Edge: The End of the Great War and the Dawn of a New Age.  The author, an internationally recognized scholar and adjunct professor of history at the Free University of Berlin, blends the careful analytical skills of a historian with a talented pen to turn out one of the finest works in this genre to date.

First, he presses the pause button.  That pause—the Armistice—is just a fragment of time, albeit one of great significance. But it is what follows that most concerns Schönpflug, who has a great drama to convey and does so through the voices of an eclectic array of characters from various walks of life across multiple geographies. When the action resumes, alternating and occasionally overlapping vignettes chronicle the postwar years from the unique, often unexpected vantage points of just over two dozen individuals—some very well known, others less so—who were to leave an imprint of larger or smaller consequence upon the changed world they walked upon.

There is Harry S Truman, who regrets that the military glory he aspired to as a boy has eluded him, yet is confident he has acquitted himself well, and cannot wait to return home to marry his sweetheart Bess and—ironically—vows he will never fire another shot as long as he lives. Former pacifist and deeply religious Medal of Honor recipient Sergeant Alvin York receives a hero's welcome Truman could only dream of, but eschews offers of money and fame to return to his backwoods home in Tennessee, where he finds purpose by leveraging his celebrity to bring roads and schools to his community. Another heroic figure is Sergeant Henry Johnson, of the famed 369th Infantry known as the "Harlem Hellfighters," who incurred no fewer than twenty-one combat injuries fending off the enemy while keeping a fellow soldier from capture, but because of his skin color returns to an America where he remains a second-class citizen who does not receive the Medal of Honor he deserves until its posthumous award by President Barack Obama nearly a century later. James Reese Europe, the regimental band leader of the "Harlem Hellfighters," who has been credited with introducing jazz to Europe, also returns home to an ugly twist of fate.

And there’s Käthe Kollwitz, an artist who lost a son in the war and finds herself in the uncertain environment of a defeated Germany engulfed in street battles between Reds and reactionaries, both flanks squeezing the center of a nascent democracy struggling to assert itself in the wake of the Kaiser’s abdication. One of the key members of that tenuous center is Matthias Erzberger, perhaps the most hated man in the country, who had the ill luck to be chosen as the official who formally accedes to Germany’s humiliating terms for Armistice, and as a result wears a target on his back for the rest of his life. At the same time, the former Kaiser’s son, Crown Prince Wilhelm von Preussen, is largely a forgotten figure who waits in exile for a call to destiny that never comes. Meanwhile in Paris, Marshal Ferdinand Foch lobbies for Germany to pay an even harsher price, as journalist Louise Weiss charts a new course for women in publishing and longs to be reunited with her lover, Milan Štefánik, an advocate for Czechoslovak sovereignty.

Others championing independence elsewhere include Nguyễn Tất Thành (later Hồ Chí Minh), polishing plates and politics while working as a dishwasher in Paris; Mohandas Gandhi, who barely survives the Spanish flu and now struggles to hold his followers to a regimen of nonviolent resistance in the face of increasingly violent British repression; T.E. Lawrence, increasingly disillusioned by the failure of the victorious allies to live up to promises of Arab self-determination; and, Terence MacSwiney, who is willing to starve himself to death in the cause of Irish nationhood. No such lofty goals motivate assassin Soghomon Tehlirian, a survivor of the Armenian genocide, who only seeks revenge on the Turks; nor future Auschwitz commandant Rudolf Höss, who emerges from the war an eager and merciless recruit for right-wing paramilitary forces.

There are many more voices, including several from the realms of art, literature, and music such as George Grosz, Virginia Woolf, and Arnold Schönberg. The importance of the postwar evolution of the arts is underscored in quotations and illustrations that head up each chapter. Perhaps the most haunting is Paul Nash’s 1918 oil-on-canvas of a scarred landscape entitled—with a hint of either optimism or sarcasm—We Are Making a New World.  All the stories the voices convey are derived from their respective letters, diaries, and memoirs; only in the “Epilogue” does the reader learn that some of those accounts are clearly fabricated.

Many of my favorite characters in A World on Edge are ones that I had never heard of before, such as Moina Michael, who was so inspired by the sacrifice of those who perished in the Great War that she singlehandedly led a campaign to memorialize the dead with the poppy as her chosen emblem for the fallen, an enduring symbol to this very day. But I found no story more gripping than that of Marina Yurlova, a fourteen-year-old Cossack girl who became a child soldier in the Russian army, was so badly wounded she was hospitalized for a year, then entered combat once more during the ensuing civil war and was wounded again, this time by the Bolsheviks. Upon recovery, Yurlova embarked upon a precarious journey on foot through Siberia that lasted a month before she was able to flee Russia for Japan and eventually settle in the United States, where despite her injuries she became a dancer of some distinction.

I am a little embarrassed to admit that I received an advance reader's edition (ARC) of A World on Edge as part of an early reviewer's program way back in November 2018, but then let it linger in my to-be-read (TBR) pile until I finally got around to it near the end of June 2020. I loved the book but did not take any notes for later reference. So, by the time I sat down to review it in January 2021, given the size of the cast and the complexity of their stories, I felt there was no way I could do justice to the author and his work without re-reading it—so I did, over just a couple of days! And that is the true beauty of this book: for all its many characters, competing storylines, and what turns out to be multilevel, deeply profound messaging, grand saga though it is, it remains a fast-paced, exciting read. Schönpflug's technique of employing bit players to recount an epic tale succeeds so masterfully that the reader is hardly aware of what has been happening until the final pages are being turned. This is history, of course, this is indeed nonfiction, and yet the result invites a favorable comparison to great literature, to a collection of short stories by Ernest Hemingway, or to a novel by André Brink. If European history is an interest, A World on Edge is not only a recommended read, but a required one.

Featured

Review of: Liar Temptress Soldier Spy: Four Women Undercover in the Civil War, by Karen Abbott

Women are conspicuously absent in most Civil War chronicles. With a few notable exceptions—Clara Barton, Harriet Tubman, Mary Todd Lincoln—female figures largely appear in the literature as bit players, if they make an appearance at all. Author Karen Abbott seeks to redress this neglect with Liar Temptress Soldier Spy: Four Women Undercover in the Civil War, an exciting and extremely well-written, if deeply flawed, account of some ladies who made a significant contribution to the war effort, north and south.

The concept is sound enough. Abbott focuses on four very different women and relates their respective stories in alternating chapters. There is Belle Boyd, a teenage seductress with a lethal temper who serves as rebel spy and courier; Emma Edmonds, who puts on trousers to masquerade as Frank Thompson and joins the Union army; Rose O’Neal Greenhow, an attractive widow who romances northern politicians to obtain intel for the south; and, Elizabeth Van Lew, a prominent Richmond abolitionist who maintains a sophisticated espionage ring that infiltrates the inner circles of the Confederate government. Each of these is worthy of book-length treatment, but weaving their exploits together is an effective technique that makes for a readable and compelling narrative.

I had never heard of Karen Abbott—the pen name for Abbott Kahler—a journalist and highly acclaimed best-selling author dubbed the “pioneer of sizzle history” by USA Today.  She is certainly a gifted writer, and unlike all too many works of history, her prose is fast-moving and engaging.  I was swept along by her colorful recounting of the 1861 Battle of Bull Run, with flourishes such as: “Union troops fumbled backward and the Confederates rammed forward, a brutal and uneven dance, with soldiers felled like rotting trees.”  I got so carried away I almost made it through the following passage without stumbling:

Some Northern soldiers claimed that every angle, every viewpoint, offered a fresh horror. The rebels slashed throats from ear to ear. They sliced off heads and dropkicked them across the field. They carved off noses and ears and testicles and kept them as souvenirs. They propped the limp bodies of wounded soldiers against trees and practiced aiming for the heart. They wrested muskets and swords from the clenched hands of corpses. They plunged bayonets deep into the backsides of the maimed and the dead. They burned the bodies, collecting “Yankee shin-bones” to whittle into drumsticks, and skulls to use as steins. [p34]

Almost. But I have a master's degree in history and have spent a lifetime studying the American Civil War, and I have never heard this account of such barbarism at Bull Run. So I paused and flipped to Abbott's notes for the corresponding page at the back of the book, where with a whiff of insouciance she admits: "Throughout the war both the North and the South exaggerated the atrocities committed by the enemy, and it's difficult to determine which incidents were real and which were apocryphal." [p442] Which is another way of saying that her account is highly sensationalized, if not outright fabrication.

To my mind, Abbott commits an unpardonable sin here. A little research reveals that there were in fact a handful of allegations of brutality in the course of the battle, including the mutilation of corpses, most of them anecdotal. There were several episodes of Confederate savagery later in the war, principally inflicted upon black soldiers in blue uniforms, but that is another story. How many readers of a popular history would without question take her at her word about what transpired at Bull Run? How many when confronted with stories of testicles taken as souvenirs would think to consult her citations? Lively paragraphs like this may certainly make for "sizzle"—but where's the history? Historical novels have their place—The Killer Angels, by Michael Shaara, and Gore Vidal's Lincoln, are among my favorites—but that is not the same thing as history, which must abide by a strict allegiance to fact-based reporting, informed analysis, and documentation. Apparently, this author demonstrates little loyalty to such constraints.

I read on, but with far more skepticism. Abbott’s style is seductive, so it’s easy to keep going. But sins do continue to accumulate. I have a passing familiarity with three of the four main characters, but fact-checking remained essential. Certainly the best known and most consequential was Van Lew, a heroic figure who aided the escape of prisoners of war and provided key intelligence to Union forces in the field. Greenhow is often cited as her counterpart working for the southern cause. Belle Boyd, on the other hand, has become a creature of legend who turns up more frequently in fiction or film than in history texts. I had never heard of Emma Edmonds, but I came to find her story the most fascinating of them all.

It seems that the more documented the subject—Van Lew, for example—the closer Abbott's portrait comes to reliable biography. Beyond that, the imaginative seems to intrude, indeed dominate. The astonishing tale of Emma Edmonds has her not only impersonating a male Union soldier, but also variously posing as an Irish peddler and in blackface disguised as a contraband, engaged in thrilling espionage missions behind enemy lines! It rang of the stuff that Thomas Berger's Little Big Man was made of. I was suitably sucked in, but also wary. And rightly so: Abbott's version of Emma Edmonds' life is based almost entirely on Edmonds' own memoir, with little that corroborates it, but the author doesn't bother to reveal that in the narrative. That Edmonds pretended to be a man in order to enlist seems plausible; her spy missions perhaps only fantasy. We simply don't know; a true historian would help us draw conclusions. Abbott seems content to let it play out as so much drama to tickle her audience.

But worst of all is when the time comes to reveal the fate of luckless Confederate spy Greenhow, who drowns when her lifeboat capsizes with Union vessels bearing down on the steamer she abandoned, the moment where the superlative talent of Abbott's pen collides with her concomitant disloyalty to scholarship:

She was sideways, upside down, somersaulting inside the wet darkness. She screamed noiselessly, the water rushing in. She tried to hold her breath—thirty seconds, sixty, ninety—before her mouth gave way and water filled it again. Tiny streams of bubbles escaped from her nostrils. A burning scythed through her chest. That bag of gold yanked like a noose around her neck. Her hair unspooled and leeched to her skin, twining around her neck. She tried to aim her arms up and her legs down, to push and pull, but every direction seemed the same. No moonlight skimmed along the surface, showing her the way; there was no light at all. [p389]

Entertaining, right? Outstanding writing, correct? Solid history—of course not! Imagining Greenhow’s final agonizing moments of life with a literary flourish may very well enrich the pages of a work of fiction, but it is nothing less than an outrage to a work of history.

This book was a fun read. Were it a novel I would likely give it high marks. But that is not how it is packaged. Emma Edmonds pretended to be a man to save the Union. Karen Abbott pretends to be a historian to sell books. Both make for great stories. But don’t confuse either with reliable history.

Featured

Review of: The Etruscans: Lost Civilizations, by Lucy Shipley

When I visited New York's Metropolitan Museum of Art some years ago, the object I found most stunning was the "Monteleone Chariot," a sixth century Etruscan bronze chariot inlaid with ivory. I stood staring at it, transfixed, long enough for my wife to shuffle her feet impatiently. Still I lingered, dwelling on every detail, especially the panels depicting episodes from the life of Homeric hero Achilles. By that time, I had read The Iliad more than once, and had long been immersed in studies of ancient Greece. How was it then, I wondered, that I could speak knowledgeably about Solon and Pisistratus, yet know so little about the Etruscans who crafted that chariot in the same century those notables walked the earth?

Long before anyone had heard of the Romans, city-states of Etruria dominated the Italian peninsula and, along with Carthage and a handful of Greek poleis, the central Mediterranean as well. Later, Rome would absorb, crush or colonize all of them. In the case of the Etruscans, it was to be a little of each. And somehow, somewhat incongruously, over the millennia Etruscan civilization—or at least what the living, breathing Etruscans would have recognized as such—has been lost to us. But not lost in the way we usually think of "lost civilizations," like Teotihuacan, for instance, or the Indus Valley, where what remains are ruins of a vanished culture that disappeared from living memory, an undeciphered script, and even the uncertain ethnicity of its inhabitants. The Etruscans, on the other hand, were never forgotten, their alphabet can be read although their language largely defies translation, and their DNA lingers in at least some present-day Italians. Yet, by all accounts they are nevertheless lost, and tantalizingly so.

Such a conundrum breeds frustration, of course: Romans supplanted the Etruscans but hardly exterminated them. Moreover, unlike other civilizations deemed "lost to history," the Etruscans appear in ancient texts going as far back as Hesiod. There are also hundreds of excavated tombs, rich with decorative art and grave goods, the latter top-heavy with Greek imports they clearly treasured. So how can we know so much about the Etruscans and at the same time so little? Fortunately, Lucy Shipley, who holds a PhD in Etruscan archaeology, comes to a rescue of sorts with her well-written, delightful contribution to the scholarship, entitled simply The Etruscans, a volume in the digest-sized Lost Civilizations series published by Reaktion Books.

Most Etruscan studies are dominated by discussions of the ancient sources and—most prominently—the tombs, which are nothing short of magnificent. But where does that lead us? Herodotus references the Etruscans, as does Livy. But are the sources reliable? Rather dubious, as it turns out. Herodotus may be a dependable chronicler of the Hellenes, but anyone who has read his comically misguided account of Egyptian life and culture is aware how far he can stray from reality. And Roman authors such as Livy routinely trumpeted a decidedly negative perspective, most evident in disdainful memories of the unwelcome semi-legendary Etruscan kings who are said to have ruled Rome until the overthrow of "Tarquin the Proud" in 509 BCE.

Then there are the tombs. Attempts to extrapolate what ancient life was like from the art that decorates the tombs of the dead—awe inspiring as it may be—can present a distorted picture (pun fully intended!) that ignores all but the wealthiest elite slice of the population. Much like Egyptology's one-time obsession with pyramids and the pharaoh's list tended to obscure the no less interesting lives of the non-royal—such as those of the workers who collected daily beer rations and left graffiti within the walls of pyramids they constructed—the emphasis on tombs that is standard to Etruscan studies reveals little of the lives of the vast majority of ordinary folks that peopled their world.

Shipley neatly sidesteps these traditional traps by refusing to be constrained by them. Instead, she relies on her training as an archaeologist to ask questions: what do we know about the Etruscans and how do we know it? And, perhaps more critically: what don't we know and why don't we know it? In the process, she brings a surprisingly fresh look to an enigmatic people in a highly readable narrative suitable to both academic and popular audiences. Arranged thematically rather than chronologically, the author selects a specific artifact or site for each chapter to serve as a visual trigger for the discussion. Because Shipley is so talented with a pen, it is worth pausing to let her explain her methodology in her own words:

Why focus on the archaeology? Because it is the very materiality, the physicality, the toughness and durability of things and the way they insidiously slip and slide into every corner of our lives that makes them so compelling … We are continually making and remaking ourselves, with the help of things. I would argue that the past is no different in this respect. It’s through things that we can get at the people who made, used and ultimately discarded them—their projects of self-production are as wrapped up in stuff as our own. And always, wrapped up in these things, are fundamental questions about how we choose to be in the world, questions that structure our actions and reactions, questions that change and challenge how we think and what we feel. Questions and objects—the two mainstays of human experience.  [p19-20]

Shipley’s approach succeeds masterfully. Because many of these objects—critical artifacts for the archaeologist but often also spectacular works of art for the casual observer—are rendered in full color in this striking edition, the reader is instantly hooked: effortlessly chasing the author’s captivating prose down a host of intriguing rabbit holes in pursuit of answers to the questions she has mated with these objects.  Along the way, she showcases the latest scholarship with a concise treatment of a broad range of topics informed by the kind of multi-disciplinary research that defines twenty-first century historical inquiry.

This includes DNA studies of both cattle and human populations in an attempt to resolve the long debate over Etruscan origins. While Herodotus and legions of other ancient and modern detectives have long pointed to legendary migrations from Anatolia, it turns out that the Etruscans are likely autochthonous, speaking a pre-Indo-European language that may possibly be related to the one spoken by Ötzi, the mummified iceman, thousands of years ago. Shipley also takes the time to explain how it is that we can read enough of the Etruscan alphabet to decipher proper names while remaining otherwise frustrated in efforts aimed at meaningful translation. Much that we identify as Roman was borrowed from Etruria, but as Rome assimilated the Etruscans over the centuries, their language was left behind. Later, Etruscan literature—like all too much of the classical world—fell victim to the zeal of early Christians in campaigns to purge any remnants of paganism. Most offensive in this regard were writings that described the practices of the "haruspex," a specialist who sought to divine the future by examining the livers of sacrificial animals, an Etruscan ritual later integrated into Roman religious practices. Texts of haruspices appear prominently in the "hit lists" drawn up by Christian thinkers Tertullian and Arnobius.

My favorite chapter is entitled "Super Rich, Invisible Poor," which highlights the inevitable distortion that results from the attention paid to the exquisite art and grave goods of the wealthy elite at the expense of the sizeable majority of the inhabitants of a dozen city-states comprised of numerous towns, villages and some larger cities with populations thought to number in the tens of thousands. Although, to be fair, this has hardly been deliberate: there remains a stark scarcity in the archaeological record of the teeming masses, so to speak. While it may smack of cliché, the famous aphorism "Absence of evidence is not evidence of absence" should be triple underscored here! The Met's Monteleone Chariot, originally part of an elaborate chariot burial, makes an appearance in this chapter, but perhaps far more fascinating is a look at the great complex of workshops at a site called Poggio Civitate, more than a hundred miles from Monteleone, where skilled craftspeople labored to produce a whole range of goods in the same century that chariot was fashioned. But what of those workers? There seemed to be no trace of them. You can clearly detect the author's delight as she describes recent excavations that uncovered remains of a settlement that likely housed them. Shipley returns again and again to her stated objective of connecting the material culture to the living Etruscans who were once integral to it.

Another chapter worthy of superlatives is “Sex, Lives and Etruscans.” While it is tempting to impose modern notions of feminism on earlier peoples, Etruscan women do seem to have led lives of far greater independence than their classical contemporaries in Greece and Rome. And there are also compelling hints at an openness in sexuality—including wife-sharing—that horrified ancient observers who nevertheless thrilled in recounting licentious tales of wicked Etruscan behavior! Shipley describes tomb art that depicts overt sex acts with multiple partners, while letting the reader ponder whether legendary accounts of Etruscan profligacy are given to hyperbole or not.

In addition to beautiful illustrations and an engaging narrative, this volume also features a useful map, a chronology, recommended reading, and plenty of notes. It is rare that any author can so effectively tackle a topic so wide-ranging in such a compact format, so Shipley deserves special recognition for turning out such an outstanding work.  The Etruscans rightly belongs on the shelf of anyone eager to learn more about a people who certainly made a vital contribution to the history of western civilization.

Monteleone Chariot photo credit: Image is in public domain.  More about the Monteleone Chariot here:   https://www.metmuseum.org/art/collection/search/247020

I reviewed other books in the Lost Civilizations series here:

Review of: The Indus: Lost Civilizations, by Andrew Robinson

Review of: Egypt: Lost Civilizations, by Christina Riggs

Featured

Review of: Reaganland: America’s Right Turn 1976-1980, by Rick Perlstein

In Hearts in Atlantis, Stephen King channels the fabled lost continent as metaphor for the glorious promise of the sixties that vanished so utterly that nary a trace remains. Atlantis sank, King declares bitterly in his fiction. He has a point. If you want to chart the actual moments those collective hopes and dreams were swamped by currents of reaction and finally submerged in the merciless wake of a new brand of unforgiving conservatism, you absolutely must turn to Reaganland: America’s Right Turn 1976-1980, Rick Perlstein’s brilliant, epic political history of an era too often overlooked that surely echoes in America in 2020 with far greater resonance than perhaps any before or since. But be warned: you may need forearms even bigger than the sign-spinning guy in the Progressive commercial to handle this dense, massive 914-page tome that is nevertheless so readable and engaging that your wrists will tire before your interest flags.

Reaganland is a big book because it is actually several overlapping books. It is first and foremost the history of the United States at an existential crossroads. At the same time, it is a close account of the ill-fated presidency of Jimmy Carter. And, too, it is something of a “making of the president 1980.” This is truly ambitious stuff, and that Perlstein largely succeeds in pulling it off should earn him wide and lasting accolades both as a historian and an observer of the American experience.

Reaganland is the final volume in a series launched nearly two decades ago by Perlstein, a progressive historian, that chronicles the rise of the right in modern American politics. Before the Storm focused on Goldwater’s ascent upon the banner of far-right conservatism. This was followed by Nixonland, which profiled a president who thrived on division and earned the author outsize critical acclaim; and, The Invisible Bridge, which revealed how Ronald Reagan—stridently unapologetic for the Vietnam debacle, for Nixon’s crimes, and for angry white reaction to Civil Rights—brought notions once the creature of the extreme right into the mainstream, and began to pave the road that would take him to the White House. Reaganland is written in the same captivating, breathless style Perlstein made famous in his earlier works, but he has clearly honed his craft: the narrative is more measured, less frenetic, and is crowned with a strong concluding chapter—something conspicuously absent in The Invisible Bridge.

The grand—and sometimes allied—causes of the Sixties were Civil Rights and opposition to the Vietnam War, but concomitant social and political revolutions spawned a myriad of others that included antipoverty efforts for the underprivileged, environmental activism, equal treatment for homosexuals and other marginalized groups such as Native Americans and Chicano farm workers, constitutional reform, consumer safety, and most especially equality for women, of which the right to terminate a pregnancy was only one component. The common theme was inclusion, equality, and cultural secularism. The antiwar movement came to not only dominate but virtually overshadow all else, but at the same time served as a unifying factor that stitched together a kind of counterculture coat of many colors to oppose an often stubbornly unyielding status quo. When the war wound down, that fabric frayed. Those who once marched together now marched apart.

This fragmentation was not generally adversarial; groups once in alliance simply went their own ways, organically seeking to advance the causes dear to them. And there was much optimism. Vietnam was history. Civil Rights had made such strides, even if there remained so much unfinished business. Much of what had been counterculture appeared to have entered the mainstream. It seemed like so much was possible. At Woodstock, Grace Slick had declared that “It’s a new dawn,” and the equality and opportunity that assurance heralded actually seemed within reach. Yet, there were unseen, menacing clouds forming just beneath the horizon.

Few suspected that forces of reaction quietly gathering strength would one day unite to destroy the progress towards a more just society that seemed to lie just ahead. Perlstein’s genius in Reaganland lies in his meticulous identification of each of these disparate forces, revealing their respective origin stories and relating how they came to maximize strength in a collective embrace. The Equal Rights Amendment, riding on a wave of massive bipartisan public support, was but three states away from ratification when a bizarre woman named Phyllis Schlafly seemingly crawled out of the woodwork to mobilize legions of conservative women to oppose it. Gay people were on their way to greater social acceptance via local ordinances which one by one went down to defeat after former beauty queen and orange juice hawker Anita Bryant mounted what turned into a nationwide campaign of resistance. The landmark Roe v. Wade case that guaranteed a woman’s right to choose sparked the birth of a passionate right-to-life movement that soon became the central creature of the emerging Christian evangelical “Moral Majority,” that found easy alliance with those condemning gays and women’s lib. Most critically—in a key component that was to have lasting implications, as Perlstein deftly underscores—the Christian right also pioneered a political doctrine of “co-belligerency” that encouraged groups otherwise not aligned to make common ground against shared “enemies.” Sure, Catholics, Mormons and Jews were destined to burn in a fiery hell one day, reasoned evangelical Protestants, but in the meantime they could be enlisted as partners in a crusade to combat abortion, homosexuality and other miscellaneous signposts of moral decay besetting the nation.

That all this moral outrage could turn into a formidable political dynamic seems to have been largely unanticipated. But, as Perlstein reminds us, maybe it should not have been so surprising: candidate Jimmy Carter, himself deeply religious and well ahead in the 1976 race for the White House, saw a precipitous fifteen-point drop in the polls after an interview in Playboy where he admitted that he sometimes lusted in his heart. Perhaps the sun wasn’t quite ready to come up for that new dawn after all.

Of course, the left did not help matters, often ideologically unyielding in its demand to have it all rather than settle for some, as well as blind to unintended consequences. Nothing was to alienate white members of the national coalition to advance civil rights for African Americans more than busing, a flawed shortcut that ignored the greater imperative for federal aid to fund and rebuild decaying inner-city schools, de facto segregated by income inequality. Efforts to advance what was seen as a far too radical federal universal job guarantee ended up energizing opposition that denied victory to other avenues of reform. And there’s much more. Perlstein recounts the success of Ralph Nader’s crusade for automobile safety, which exposed carmakers for deliberately skimping on relatively inexpensive design modifications that could have saved countless lives in order to turn out even greater profits. Auto manufacturers were finally brought to heel. Consumer advocacy became a thing, with widespread public support and frequent industry acquiescence. But even Nader—not unaware of consequences, unintended or otherwise—advised caution when a protégé pressed a campaign to ban TV ads for sugary cereals that targeted children, predicting with some prescience that “if you take on the advertisers you will end up with so many regulators with their bones bleached in the desert.” [p245] Captains of industry Perlstein terms “Boardroom Jacobins” were stirred to collective action by what was perceived as regulatory overreach, and big business soon joined hands to beat all such efforts back.

Meanwhile, subsequent to Nixon’s fall and Ford’s defeat to Carter in 1976, pundits—not for the last time—prematurely foretold the extinction of the Republican Party, leaving stalwart policy wonks on the right seemingly adrift, clinging to their opposition to the pending SALT II arms agreement and the Panama Canal Treaty, furiously wielding oars of obstruction yet still lacking a reliable vessel to stem the tide. Bitterly opposed to the prevailing wisdom that counseled moderation to ensure not only relevance but survival, they chafed at accommodation with the Ford-Kissinger-Rockefeller wing of the party that preached détente abroad and compromise at home. They looked around for a new champion … and once again found Ronald Reagan!

The former Bedtime for Bonzo co-star and corporate shill had launched his political career railing against communists concealed in every cupboard, as well as shrewdly exploiting populist rage at long-haired antiwar demonstrators. As governor of California he directed an especially violent crackdown known as “Bloody Thursday” on non-violent protesters at UC Berkeley’s People’s Park that resulted in one death and hundreds of injuries after overzealous police fired tear gas and shotguns loaded with buckshot at the crowd. In a comment that eerily presaged Trump’s “very fine people on both sides” remark, Reagan declared that “Once the dogs of war have been unleashed, you must expect … that people … will make mistakes on both sides.” But a year later he was even less apologetic, proclaiming that “If it takes a bloodbath, let’s get it over with.” This was their candidate, who—remarkably one would think—had nearly snatched the nomination away from Ford in ’76, and then went on to cheer party unity while campaigning for Ford with even less enthusiasm than Bernie Sanders exhibited for Hillary Clinton in 2016. Many hold Reagan at least partially responsible for Ford’s loss in the general election.

But Reagan’s neglect of Ford left him neatly positioned as the front-runner for 1980. As conservatives dug in, others of the party faithful recoiled in horror, fearing a repeat of the drubbing at the polls they took in 1964 with Barry “extremism in defense of liberty is no vice” Goldwater at the top of the ticket. And Reagan did seem extreme, perhaps more so than Goldwater. The sounds of sabers rattling nearly drowned out his words every time he mentioned the U.S.S.R. And he said lots of truly crazy things, both publicly and privately, once even wondering aloud over dinner with columnist Jack Germond whether “Ford had staged fake assassination attempts to win sympathy for his renomination.” Germond later recalled that “He was always a man with a very loose hold on the real world around him.” [p617] Germond had a good point: Reagan once asserted that “Fascism was really the basis for the New Deal,” boosted the valuable recycling potential of nuclear waste, and insisted that “trees cause more pollution than automobiles do”—prompting some joker at a rally to decorate a tree with a sign that said “Chop me down before I kill again.”

But Reagan had a real talent with dog whistles, launching his campaign with a speech praising “states’ rights” at a county fair near Philadelphia, Mississippi, where three civil rights workers were murdered in 1964. He once boasted he “would have voted against the Civil Rights Act of 1964,” claimed “Jefferson Davis is a hero of mine,” and bemoaned the Voting Rights Act as “humiliating to the South.” A whiff of racism also clung to his disdain for Medicaid recipients as “a faceless mass, waiting for handouts,” and his recycling ad nauseam of his dubious anecdote of a “Chicago welfare queen” with twelve social security cards who bilked the government out of $150,000. Unreconstructed whites ate this red meat up. Nixon’s “southern strategy” reached new heights under Reagan.

But a white southerner who was not a racist was actually the president of the United States. Despite the book’s title, the central protagonist of Reaganland is Jimmy Carter, a man who arrived at the Oval Office buoyed by public confidence rarely seen in the modern era—and then spent four years on a rollercoaster of support that plummeted far more often than it climbed. At one point his approval rating was a staggering 77% … at another 28%—only four points above where Nixon’s stood when he resigned in disgrace. These days, as the nonagenarian Carter has established himself as the most impressive ex-president since John Quincy Adams, we tend to forget what a truly bad president he was. Not that he didn’t have good intentions, only that—like Woodrow Wilson six decades before him—he was unusually adept at using them to pave his way to hell. A technocrat with an arrogant certitude that he had all the answers, he arrived on the Beltway with little idea of how the world worked, a family in tow that seemed like they were right out of central casting for a Beverly Hillbillies sequel. He often gravely lectured the public on what was really wrong with the country—and then seemed to lay blame upon Americans for outsize expectations. And he dithered, tacking this way and that way, alienating both sides of the aisle in a feeble attempt to seem to stand above the fray.

In fairness, he had a lot to deal with. Carter inherited a nation more socio-economically shaken up than any since the 1930s. In 1969, the United States had proudly put a man on the moon. Only a few short years later, a country weaned on wallowing in American exceptionalism saw factories shuttered, runaway inflation, surging crime, cities on the verge of bankruptcy, and long lines just to gas up your car at an ever-skyrocketing cost. And that was before a nuclear power plant melted down, Iranians took fifty-two Americans hostage, and Soviet tanks rolled into Afghanistan. All this was further complicated by a new wave of media hype that saw the birth of the “bothersiderism” that gives equal weight to scandals legitimate or spurious—an unfortunate ingredient that remains baked into current reporting.

Perhaps the most impressive part of Reaganland is Perlstein’s superlative rendering of what America was like in the mid-70s. Stephen King’s horror is often so effective at least in part due to the fads, fast food, and pop music he uses as so many props in his novels. If that stuff is real, perhaps ghosts or killer cars could be real, as well. Likewise, Perlstein brings a gritty authenticity home by stepping beyond politics and policy to enrich the narrative with headlines of serial killers and plane crashes, of assassination and mass suicide, adroitly resurrecting the almost numbing sense of anxiety that informed the times. De Niro’s Taxi Driver rides again, and the reader winces through every page.

Carter certainly had his hands full, especially as the hostage crisis dragged on, but it hardly ranked up there with Truman’s Berlin Airlift or JFK’s Cuban missiles. There were indeed crises, but Carter seemed to manufacture even more—and to get in his own way most of the time. And his attempts to reassure consistently backfired, fueling even more national uncertainty. All this offered a perfect storm of opportunity for right-wing elements who discovered co-belligerency was not only a tactic but a way of life. Against all advice and all odds, Reagan—retaining his “very loose hold on the real world around him”—saw no contradiction in bringing his brand of conservatism to join forces with those maligning gays, opposing abortion, stonewalling the ERA, and boosting the Christian right. Corporate CEOs—Perlstein’s “Boardroom Jacobins”—already on the defensive, were more than ready to finance it. Carter, flailing, played right into their hands. Already the most right-of-center Democratic president of the twentieth century, he too shared that weird vision of the erosion of American morality. And Perlstein reminds us that the debacle of financial deregulation usually traced back to Reagan actually began on Carter’s watch, the seeds sown for the wage stagnation, growth of income inequality, and endless cycles of recession that have been de rigueur in American life ever since. Carter failed to make a good closing argument for why he should be re-elected, and the unthinkable occurred: Ronald Reagan became president of the United States. The result was that the middle-class dream that seemed so much in jeopardy under Carter was permanently crushed once Reagan’s regime of tax cuts, deregulation, and the supply-side approach George H.W. Bush rightly branded as “voodoo economics” became standard operating policy. Progressive reform sputtered and stalled. The little engine that FDR had ignited to manifest a social and economic miracle for America crashed and burned forever on the vanguard of Reaganomics.

Some readers might be intimidated by the size of Reaganland, but it’s a long book because it tells a long story, and it contains lots of moving parts. Perlstein succeeds magnificently because he demonstrates how all those parts fit together, replete with the nuance and complexity critical to historical analysis. Is it perfect? Of course not. I’m a political junkie, but there were certain segments on policy and legislative wrangling that seemed interminable. And if Perlstein mentioned “Boardroom Jacobins” just one more time, I might have screamed. But these are quibbles. This is without doubt the author’s finest book, and I highly recommend it, as both an invaluable reference work and a cover-to-cover read.

In Hearts in Atlantis, Stephen King imagines the sixties as bookended by JFK’s 1963 assassination and John Lennon’s murder in 1980. Perlstein seems to follow that same school of thought, for the final page of Reaganland also wraps up with Lennon’s untimely death. In an afterword to his work of fiction, King muses: “Although it is difficult to believe, the sixties are not fictional; they actually happened.” If you are more partial to nonfiction and want the real story of how the sixties ended, of how Atlantis sank, you must read Reaganland.

[Note: this review goes to press just a few days before the most consequential presidential election in modern American history. This book and this review are reminders that elections do matter.]

I reviewed Perlstein’s previous books here:

Review of: The Invisible Bridge: The Fall of Nixon and the Rise of Reagan by Rick Perlstein

Review of: Nixonland: The Rise of a President and the Fracturing of America, by Rick Perlstein

Featured

Review of: The Awakening: A History of the Western Mind AD 500-1700, by Charles Freeman

Nearly two decades have passed since Charles Freeman published The Closing of the Western Mind: The Rise of Faith and the Fall of Reason, a brilliant if controversial examination of the intellectual totalitarianism of Christianity that dated to the dawn of its dominance of Rome and the successor states that followed the fragmentation of the empire in the West.  Freeman argues persuasively that the early Christian church vehemently and often brutally rebuked the centuries-old classical tradition of philosophical enquiry and ultimately drove it to extinction with a singular intolerance of competing ideas crushed under the weight of a monolithic faith. Not only were pagan religions prohibited, but there would be virtually no provision for any dissent with official Christian doctrine, such that those who advanced even the most minor challenges to interpretation were branded heretics and sent to exile or put to death. That tragic state was to define medieval Europe for more than a millennium.

Now the renowned classical historian has returned with a follow-up epic, The Awakening: A History of the Western Mind AD 500-1700, recently published in the UK (and slated for U.S. release, possibly with a different title) which recounts the slow—some might brand it glacial—evolution of Western thought that restored legitimacy to independent examination and analysis, that eventually led to a celebration, albeit a cautious one, of reason over blind faith. In the process, Freeman reminds us that quality, engaging narrative history has not gone extinct, while demonstrating that it is possible to produce a work that is so well-written it is readable by a general audience while meeting the rigorous standards of scholarship demanded by academia. That this is no small achievement will be evident to anyone who—as I do—reads both popular and scholarly history and is struck by the stultifying prose that often typifies the academic. In contrast, here Freeman takes a skillful pen to reveal people, events and occasionally obscure concepts, much of which may be unfamiliar to those who are not well versed in the medieval period.

The fall of Rome remains a subject of debate for historians. While traditional notions of sudden collapse given to pillaging Vandals leaping over city walls and fora engulfed in flames have long been revised, competing visions of a more gradual transition that better reflect the scholarship sometimes distort the historiography to minimize both the fall and what was actually lost. And what was lost was indeed dramatic and incalculable. If, to take just one example, sanitation can be said to be a mark of civilization, the Roman aqueducts and complex network of sewers that fell into disuse and disrepair meant that fresh water was no longer reliable, and sewage that bred pestilence was to be the norm for fifteen centuries to follow. It was not until the late nineteenth century that sanitation in Europe even approached Roman standards. So, whatever the timeline—rapid or gradual—there was indeed a marked collapse. Causes are far more elusive. But Gibbon’s largely discredited casting of Christianity as the villain that brought the empire down tends to raise hackles in those who suspect someone like Freeman of attempting to point those fingers once more. But Freeman has nothing to say about why Rome fell, only what followed. The loss of the pursuit of reason was to be as devastating for the intellectual health of the post-Roman world in the West as sanitation was to prove for its physical health. And here Freeman does squarely take aim at the institutional Christian church as the proximate cause for the subsequent consequences for Western thought. This is well underscored in the bleak assessment that follows in one of the final chapters in The Closing of the Western Mind:

Christian thought that emerged in the early centuries often gave irrationality the status of a universal “truth” to the exclusion of those truths to be found through reason. So the uneducated was preferred to the educated and the miracle to the operation of natural laws … This reversal of traditional values became embedded in the Christian tradi­tion … Intellectual self-confidence and curiosity, which lay at the heart of the Greek achievement, were recast as the dreaded sin of pride. Faith and obedience to the institutional authority of the church were more highly rated than the use of reasoned thought. The inevitable result was intellectual stagnation … [p322]

Awakening picks up where Closing leaves off as the author charts the “Reopening of the Western Mind” (this was the working title of his draft!) but the new work is marked by far greater optimism. Rather than dwell on what has been lost, Freeman puts focus not only upon the recovery of concepts long forgotten but how rediscovery eventually sparked new, original thought, as the spiritual and later increasingly secular world danced warily around one another—with a burning heretic all too often staked between them in Europe’s fraught intellectual ballroom. Because the timeline is so long—encompassing twelve centuries—the author sidesteps what could have been a dull chronological recounting of this slow progression to narrow his lens upon select people, events and ideas that collectively marked milestones along the way, organized into thematic chapters that broaden the scope. This approach thus transcends what might have been otherwise parochial to brilliantly convey the panoramic.

There are many superlative chapters in Awakening, including the very first one, entitled “The Saving of the Texts 500-750.” Freeman seems to delight in detecting the bits and pieces of the classical universe that managed to survive not only vigorous attempts by early Christians to erase pagan thought but the unintended ravages of deterioration that is every archivist’s nightmare. Ironically, the sacking of cities in ancient Mesopotamia begat conflagrations that baked inscribed clay tablets, preserving them for millennia. No such luck for the Mediterranean world, where papyrus scrolls, the favored medium for texts, fell to war, natural disasters, deliberate destruction, as well as to entropy—a familiar byproduct of the second law of thermodynamics—which was not kind in prevailing environmental conditions. We are happily still discovering papyri preserved by the dry conditions in parts of Egypt—the oldest dating back to 2500 BCE—but it seems that the European climate doomed papyrus to a scant two hundred years before it was no more.

Absent printing presses or digital scans, texts were preserved by painstakingly copying them by hand, typically onto vellum, a kind of parchment made from animal skins with a long shelf life, most frequently in monasteries by monks for whom literacy was deemed essential. But what to save? The two giants of ancient Greek philosophy, Plato and Aristotle, were preserved, but the latter far more grudgingly. Fledgling concepts of empiricism in Aristotle made the medieval mind uncomfortable. Plato, on the other hand, who pioneered notions of imaginary higher powers and perfect forms, could be (albeit somewhat awkwardly) adapted to the prevailing faith in the Trinity, and thus elements of Plato were syncretized into Christian orthodoxy. Of course, as we celebrate what was saved it is difficult not to likewise mourn what was lost to us forever. Fortunately, the Arab world put a much higher premium on the preservation of classical texts—an especially eclectic collection that included not only metaphysics but geography, medicine and mathematics. When centuries later—as Freeman highlights in Awakening—these works reached Europe, they were to be instrumental as tinder to the embers that were to spark first a revival and then a revolution in science and discovery.

My favorite chapter in Awakening is “Abelard and the Battle for Reason,” which chronicles the extraordinary story of scholastic scholar Peter Abelard (1079-1142)—who flirted with the secular and attempted to connect rationalism with theology—told against the flamboyant backdrop of Abelard’s tragic love affair with Héloïse, a tale that yet remains the stuff of popular culture. In a fit of pique, Héloïse’s uncle was to have Abelard castrated. The church attempted something similar, metaphorically, with Abelard’s teachings, which led to an order of excommunication (later lifted), but despite official condemnation Abelard left a dramatic mark on European thought that long lingered.

There is too much material in a volume this thick to cover competently in a review, but the reader will find much of it well worth the time. Of course, some will be drawn to certain chapters more than others. Art historians will no doubt be taken with the one entitled “The Flowering of the Florentine Renaissance,” which for me hearkened back to the best elements of Kenneth Clark’s Civilisation, showcasing not only the evolution of European architecture but the author’s own adulation for both the art and the engineering feat demonstrated by Brunelleschi’s dome, the extraordinary fifteenth century adornment that crowns the Florence Cathedral. Of course, Freeman does temper his praise for such achievements with juxtaposition to what once had been, as in a later chapter that recounts the process of relocating an ancient Egyptian obelisk weighing 331 tons that had been placed on the Vatican Hill by the Emperor Caligula, which was seen as remarkable at the time. In a footnote, Freeman reminds us that: “One might talk of sixteenth-century technological miracles, but the obelisk had been successfully erected by the Egyptians, taken down by the Romans, brought by sea to Rome and then re-erected there—all the while remaining intact!” [p492n]

If I were to find a fault with Awakening, it is that it does not, in my opinion, go far enough to emphasize the impact of the Columbian Experience on the reopening of the Western mind. There is a terrific chapter devoted to the topic, “Encountering the Peoples of the ‘Newe Founde Worldes,’” which explores how the discovery of the Americas and its exotic inhabitants compelled the European mind to examine other human societies whose existence had never before even been contemplated. While that is a valid avenue for analysis, it yet hardly takes into account just how earth-shattering 1492 turned out to be—arguably the most consequential milestone for human civilization (and the biosphere!) since the first cities appeared in Sumer—in a myriad of ways, not least the exchange of flora and fauna (and microbes) that accompanied it. But this significance was perhaps greatest for Europe, which had been a backwater, long eclipsed by China and the Arab Middle East. It was the Columbian Experience that reoriented the center of the world, so to speak, from the Mediterranean to the Atlantic, which was exploited to the fullest by the Europeans who prowled those seas and first bridged the continents. It is difficult to imagine the subsequent accomplishments—intellectual and otherwise—had Columbus not landed at San Salvador. But this remains just a quibble that does not detract from Freeman’s overall accomplishment.

Full disclosure: Charles Freeman and I began a long correspondence via email following my review of Closing. I was honored when he selected me as one of his readers for his drafts of Awakening, which he shared with me in 2018, but at the same time I approached this responsibility with some trepidation: given Freeman’s credentials and reputation, what if I found the work to be sub-standard? What if it was simply not a good book? How would I address that? As it was, these worries turned out to be misplaced. It is a magnificent book and I am grateful to have read much of it as a work in progress, and then again after publication. I did submit several pages of critical commentary to assist the author, to the best of my limited abilities, in honing a better final product, and to that end I am proud to see my name appear in the “Acknowledgments.”

I do not usually talk about formats in book reviews, since the content is typically neither enhanced nor diminished by its presentation in either a leather-bound tome or a mass-market paperback or the digital ink of an e-book, but as a bibliophile I cannot help but offer high praise to this beautiful, illustrated edition of Awakening published by Head of Zeus, even accented by a ribbon marker. It has been some time since I have come across a volume this attractive without paying a premium for special editions from Folio Society or Easton Press, and in this case the exquisite art that supplements the text transcends the ornamental to enrich the narrative.

Interest in the medieval world has perhaps waned over time. But that is, of course, a mistake. How we got from point A to point B is an important story, even if it has never been told before as well as Freeman has told it in Awakening. And it is not an easy story to tell. As the author acknowledges in a concluding chapter: “Bringing together the many different elements that led to the ‘awakening of the western mind’ is a challenge. It is important to stress just how bereft Europe was, economically and culturally, after the fall of the Roman empire compared to what it had been before.” [p735]

Those of us given to dystopian fiction, concerned with the fragility of republics and civilization, and wondering aloud in the midst of a global pandemic and the rise of authoritarianism what our descendants might recall of us if it all fell to collapse tomorrow cannot help but be intrigued by how our ancestors coped—for better or for worse—after Rome was no more. If you want to learn more about that, there might be no better covers to crack than Freeman’s The Awakening. I highly recommend it.

NOTE: My review of Freeman’s earlier work appears here:

Review of: The Closing of the Western Mind: The Rise of Faith and the Fall of Reason by Charles Freeman

Featured

Review of: Seduction: Sex, Lies, and Stardom in Howard Hughes’s Hollywood, by Karina Longworth

“When people ask me if I went to film school, I tell them, ‘No, I went to films,’” Quentin Tarantino famously quipped. While I’m no iconic director, I too “went to films,” in a manner of speaking. I was raised by my grandmother in the 1960s—with a little help from a 19” console TV in the living room and seven channels delivered via rooftop antenna. When cartoons, soaps, or prime time westerns and sitcoms like Bonanza and Bewitched weren’t broadcasting, all the remaining airtime was filled with movies.  All kinds of movies: drama, screwball comedies, war movies, gangster movies, horror movies, sci-fi, musicals, love stories, murder mysteries—you name the genre, it ran. And ran. And ran. For untold hours and days and weeks and years.

Grandma—rest in peace—loved movies. Just loved them. All kinds of movies. But she didn’t have much of a discerning eye: for her, The Treasure of the Sierra Madre was no better or worse than Bedtime for Bonzo. At first, I didn’t know any better either, and whether I was four or fourteen I watched whatever was on, whenever she was watching. But I took a keen interest. The immersion paid dividends. My tastes evolved. One day I began calling them films instead of movies, and even turned into something of a “film geek,” arguing against the odds that Casablanca is a better picture than Citizen Kane, promoting Kubrick’s Paths of Glory over 2001, and shamelessly confessing to screening Tarantino’s Kill Bill I and II back-to-back more than a dozen times. In other words, I take films pretty seriously. So, when I noticed that Seduction: Sex, Lies, and Stardom in Howard Hughes’s Hollywood was up for grabs in an early reviewer program, I jumped at the opportunity. I was not to be disappointed.

In an extremely well-written and engaging narrative, film critic and journalist Karina Longworth has managed to turn out a remarkable history of Old Hollywood, in the guise of a kind of biography of Howard Hughes. In films, a “MacGuffin” is something insignificant or irrelevant in itself that serves as a device to trigger the plot. Examples include the “Letters of Transit” in Casablanca, the statuette in The Maltese Falcon, and the briefcase in Tarantino’s Pulp Fiction. Howard Hughes himself is a MacGuffin of sorts in Seduction, which is far less about him than his female victims and the peculiar nature of the studio system that enabled predators like Hughes and others who dominated the motion picture industry.

Howard Hughes was once one of the most famous men in America, known for his wealth and genius, a larger-than-life legend noted both for his exploits as aviator and flamboyance as a film producer given to extravagance and star-making.  But by the time I was growing up, all that was in the distant past, and Hughes was little more than a specter in supermarket tabloids, an eccentric billionaire turned recluse. It was later said that he spent most days alone, sitting naked in a hotel room watching movies. Long unseen by the public, at his death he was nearly unrecognizable, skeletal and covered in bedsores.  Director Martin Scorsese resurrected him for the big screen in his epic biopic “The Aviator,” headlined by Leonardo DiCaprio and a star-studded cast, which showcased Hughes as a heroic and brilliant iconoclast who in turn took on would-be censors, the Hollywood studio system, the aviation industry and anyone who might stand in the way of his quest for glory—all while courting a series of famed beauties. Just barely in frame was the mental instability, the emerging Obsessive-Compulsive Disorder that later brought him down.

Longworth finds Hughes a much smaller and more despicable man, an amoral narcissist and manipulator who was seemingly incapable of empathy for other human beings. (Yes, there is indeed a palpable resemblance to a certain president!) While Hughes carefully crafted an image of a titan who dominated the twin arenas of flight and film, in Longworth’s portrayal he seems to crash more planes than he lands, and churns out more bombs than blockbusters. In the public eye, he was a great celebrity, but off-screen he comes off as an unctuous villain, a charlatan whose real skill set was self-promotion empowered by vast sums of money and a network of hangers-on. The author gives him his due by denying him top billing as the star of the show, rather giving scrutiny to those in his orbit, the females in supporting roles whom he in turn dominated, exploited and discarded. You can almost hear refrains of Carly Simon’s You’re So Vain interposed in the narrative, taunting the ghost of Hughes with the chorus: “You probably think this song is about you”—which by the way would make a great soundtrack if there’s ever a screen adaptation of the book.

If not Hughes, the real main character is Old Hollywood itself, and with a skillful pen, Longworth turns out a solid history—a decidedly feminist history—of the place and time that is nothing less than superlative. The author recreates for us the early days before the tinsel, when a sleepy little “dry” town no one had ever heard of almost overnight became the celluloid capital of the country. Pretty girls from all over America would flock there on a pilgrimage to fame; most disappointed, many despairing, more than a few dead. Nearly all were preyed upon by a legion of the contemptible, used and abused with a thin tissue of lies and promises that anchored them not only to the geography but to the predominantly male movers and shakers who dominated the studio system that literally dominated everything else. This is a feminist history precisely because Longworth focuses on these women—more specifically ten women involved with Hughes—and through them brilliantly captures Hollywood’s golden age as manifested in both the glamorous and the tawdry.

Howard Hughes was not the only predator in Tinseltown, of course, but arguably its most depraved. If Hollywood power-brokers overpromised fame to hosts of young women just to bed them, for Hughes sex was not even always the principal motivation. It went way beyond that, often to twisted ends perhaps unclear to even Hughes himself. He indeed took many lovers, but those he didn’t sleep with were not exempt from his peculiar brand of exploitation.  What really got Howard Hughes off was exerting power over women, controlling them, owning them. He virtually enslaved some of these women, stripping them of their individual freedom of will for months or even years with vague hints at eventual stardom, abetted by assorted handlers appointed to spy on them and report back to him. Even the era of “Me Too” lacks the appropriate vocabulary to describe his level of “creepy!”

One of the women he apparently did not take to bed was Jane Russell. Hughes cast the beautiful, voluptuous nineteen-year-old in The Outlaw, a film that took forever to produce and release largely due to his fetishistic obsession with Russell’s breasts—and the way these spilled out of her dress in a promotional poster that provoked the ire of censors.  Longworth’s treatment of the way Russell unflappably endured her long association with Hughes—despite his relentless domination over her life and career—is just one of the many delightful highlights in Seduction.

The Outlaw, incidentally, was one of the movies I recall watching with Grandma back in the day.  Her notions of Hollywood had everything to do with the glamorous and the glorious, of handsome leading men and lovely leading ladies up on the silver screen. I can’t help wondering what she might think if she learned how those ladies were tormented by Hughes and other moguls of the time. I wish I could tell her about it, about this book. Alas, that’s not possible, but I can urge anyone interested in this era to read Seduction. If authors of film history could win an Academy Award, Longworth would have an Oscar on her mantle to mark this outstanding achievement.

Featured

Review of: Pox Americana: The Great Smallpox Epidemic of 1775-82, by Elizabeth A. Fenn

Imagine there’s a virus sweeping across the land claiming untold victims, the agent of the disease poorly understood, the population in terror of an unseen enemy that rages mercilessly through entire communities, leaving in its wake an exponential toll of victims.  As this review goes to press amid an alarming spike in new Coronavirus cases, Americans don’t need to stretch their collective imagination very far to envisage that at all. But now look back nearly two and a half centuries and consider an even worse scenario: a war is on for the existential survival of our fledgling nation, a struggle compromised by mass attrition in the Continental Army due to another kind of virus, and the epidemic it spawns is characterized by symptoms and outcomes that are nothing less than nightmarish by any standard, then or now. For the culprit then was smallpox, one of the most dread diseases in human history.

This nearly forgotten chapter in America’s past left a deep impact on the course of the Revolution that has been long overshadowed by outsize events in the War of Independence and the birth of the Republic. This neglect has been masterfully redressed by Pox Americana: The Great Smallpox Epidemic of 1775-82, a brilliantly conceived and extremely well-written account by Pulitzer Prize-winning historian Elizabeth A. Fenn.  One of the advantages of having a fine personal library in your home is the delight of going to a random shelf and plucking off an edition that almost perfectly suits your current interests, a volume that has been sitting there unread for years or even decades, just waiting for your fingertips to locate it. Such was the case with my signed first edition of Pox Americana, a used bookstore find that turned out to be a serendipitous companion to my self-quarantine for Coronavirus, the great pandemic of our times.

As horrific as COVID-19 has been for us—as of this morning we are up to one hundred thirty-four thousand deaths and three million cases in the United States, a significant portion of the more than half million dead and nearly twelve million cases worldwide—smallpox, known as “Variola,” was far, far worse.  In fact, almost unimaginably worse. Not only was it more than three times more contagious than Coronavirus, but rather than a mortality rate that ranges in the low single digits with COVID (the verdict’s not yet in), variola on average claimed an astonishing thirty percent of its victims, who often suffered horribly in the course of the illness and into their death throes, while survivors were frequently left disfigured by extensive scarring, and many were left blind. Smallpox has a long history that dates back to at least the third century BCE, as evidenced in Egyptian mummies. There were reportedly still fifteen million cases a year as late as 1967. In between, it claimed untold hundreds of millions of lives over the years—some three hundred million in the twentieth century alone—until its ultimate eradication in 1980. There is perhaps some tragic irony that we are beset by Coronavirus on the fortieth anniversary of that milestone …

I typically eschew long excerpts for reviews, but Variola was so horrifying and Fenn writes so well that I believe it would be a disservice to do other than let her describe it here:

Headache, backache, fever, vomiting, and general malaise all are among the initial signs of infection. The headache can be splitting; the backache, excruciating … The fever usually abates after the first day or two … But … relief is fleeting. By the fourth day … the fever creeps upward again, and the first smallpox sores appear in the mouth, throat, and nasal passages …The rash now moves quickly. Over a twenty-four-hour period, it extends itself from the mucous membranes to the surface of the skin. On some, it turns inward, hemorrhaging subcutaneously. These victims die early, bleeding from the gums, eyes, nose, and other orifices. In most cases, however, the rash turns outward, covering the victim in raised pustules that concentrate in precisely the places where they will cause the most physical pain and psychological anguish: The soles of the feet, the palms of the hands, the face, forearms, neck, and back are focal points of the eruption … If the pustules remain discrete—if they do not run together— the prognosis is good. But if they converge upon one another in a single oozing mass, it is not. This is called confluent smallpox … For some, as the rash progresses in the mouth and throat, drinking becomes difficult, and dehydration follows. Often, an odor peculiar to smallpox develops… Patients at this stage of the disease can be hard to recognize. If damage to the eyes occurs, it begins now … Scabs start to form after two weeks of suffering … In confluent or semiconfluent cases of the disease, scabbing can encrust most of the body, making any movement excruciating … [One observation of such afflicted Native Americans noted that] “They lye on their hard matts, the poxe breaking and mattering, and runing one into another, their skin cleaving … to the matts they lye on; when they turne them, a whole side will flea of[f] at once.” … Death, when it occurs, usually comes after ten to sixteen days of suffering. 
Thereafter, the risk drops significantly … and unsightly scars replace scabs and pustules … the usual course of the disease—from initial infection to the loss of all scabs—runs a little over a month. Patients remain contagious until the last scab falls off …  Most survivors bear … numerous scars, and some are blinded. But despite the consequences, those who live through the illness can count themselves fortunate. Immune for life, they need never fear smallpox again. [p16-20]

Smallpox was an unfortunate component of the siege of Boston by the British in 1775, but—as Fenn explains—it was far worse for Bostonians than the Redcoats besieging them.  This was because smallpox was a fact of life in eighteenth century Europe—a series of outbreaks left about four hundred thousand people dead every year, and about a third of the survivors were blinded. As awful as that may seem, it meant that the vast majority of British soldiers had been exposed to the virus and were thus immune. Not so for the colonists, who not only had experienced fewer outbreaks but frequently lived in more rural settings at a greater distance from one another, which slowed exposure, leaving far fewer who could count on immunity to spare them. Nothing fuels the spread of a pestilence better than a crowded bottlenecked urban environment—such as Boston in 1775—except perhaps great encampments of susceptible men from disparate geographies suddenly crammed together, as was characteristic of the nascent Continental Army. To make matters worse, there was some credible evidence that the Brits at times engaged in a kind of embryonic biological warfare by deliberately sending known infected individuals back to the Colonial lines. All of this conspired to form a perfect storm for disaster.

Our late eighteenth-century forebears had a couple of things going for them that we lack today. First of all, while it was true that like COVID there was no cure for smallpox, there were ways to mitigate the spread and the severity that were far more effective than our masks and social distancing—or misguided calls to ingest hydroxychloroquine, for that matter.  Instead, their otherwise primitive medical toolkit did contain inoculation, an ancient technique that had only become known to the west in relatively recent times. Now, it is important to emphasize that inoculation—also known as “variolation”—is not comparable to vaccination, which did not come along until closer to the end of the century. Not for the squeamish, variolation instead involved deliberately inserting the live smallpox virus from scabs or pustules into superficial incisions in a healthy subject’s arm. The result was an actual case of smallpox, but generally a much milder one than if contracted from another infected person. Recovered, the survivor would walk away with permanent immunity. The downside was that some did not survive, and all remained contagious for the full course of the disease. This meant that the inoculated also had to be quarantined, no easy task in an army camp, for example.

The other thing they had going for them back then was a competent leader who took epidemics and how to contain them quite seriously—none other than George Washington himself. Washington was not president at the time, of course, but he was the commander of the Continental Army, and perhaps the most prominent man in the rebellious colonies. Like many of history’s notable figures, Washington was not only gifted with qualities such as courage, intelligence, and good sense, but also luck. In this case, Washington’s good fortune was to contract—and survive—smallpox as a young man, granting him immunity. But it was likewise the good fortune of the emerging new nation to have Washington in command. Initially reluctant to advance inoculation—not because he doubted the science but rather because he feared it might accelerate the spread of smallpox—he soon concluded that only a systematic program of variolation could save the army, and the Revolution! Washington’s other gifts—for organization and discipline—set in motion mass inoculations and enforced isolation of those affected. Absent this effort, it is likely that the War of Independence—ever a long shot—might not have succeeded.

Fenn argues convincingly that the course of the war was significantly affected by Variola in several arenas, most prominently in its savaging of Continental forces during the disastrous invasion of Quebec, which culminated in Benedict Arnold’s battered forces being driven back to Fort Ticonderoga.  And in the southern theater, enslaved blacks flocked to British lines, drawn by enticements to freedom, only to fall victim en masse to smallpox, and then tragically find themselves largely abandoned to suffering and death as the Brits retreated. There is a good deal more of this stuff, and many students of the American Revolution will find themselves wondering—as I did—why this fascinating perspective is so conspicuously absent in most treatments of this era.

Remarkably, despite the bounty of material, emphasis on the Revolution only occupies the first third of the book, leaving far more to explore as the virus travels to the west and southwest, and then on to Mexico, as well as to the Pacific northwest. As Fenn reminds us again and again, smallpox comes from where smallpox has been, and she painstakingly tracks hypothetical routes of the epidemic. Tragic bystanders in its path were frequently Native Americans, who typically manifested more severe symptoms and experienced greater rates of mortality.  It has been estimated that perhaps ninety percent of pre-contact indigenous inhabitants of the Americas were exterminated by exposure to European diseases for which they had no immunity, and smallpox was one of the great vehicles of that annihilation. Variola proved to be especially lethal as a “virgin soil” epidemic, and Native Americans not unexpectedly suffered far greater casualties than other populations, resulting in death on such a wide scale that entire tribes simply disappeared from history.

No review can properly capture all the ground that Fenn covers in this outstanding book, nor praise her achievement adequately. It is especially rare when a historian combines a highly original thesis with exhaustive research, keen analysis, and exceptional talent with a pen to deliver a magnificent work such as Pox Americana. And perhaps never has there been a moment when this book could find a greater relevance to readers than to Americans in 2020.

 

Featured

Review of: Lamarck’s Revenge: How Epigenetics is Revolutionizing Our Understanding of Evolution’s Past and Present, by Peter Ward

If you have studied evolution inside or outside of the classroom, you have no doubt encountered the figure of Jean-Baptiste Lamarck and the discredited notion of the inheritance of acquired characteristics attributed to him known as “Lamarckism.” This has most famously been represented in the example of giraffes straining to reach fruit on ever-higher branches, which results in the development of longer necks over succeeding generations. Never mind that Lamarck did not develop this concept, and that, while he echoed it, it remained only a tiny part of the greater body of his work; he has been doomed to have it cling to his legacy ever since. This is most regrettable, because Lamarck—who died three decades before Charles Darwin shook the spiritual and scientific world with his 1859 publication of On the Origin of Species—was actually a true pioneer in the field of evolutionary biology who recognized that there were forces at work that put organisms on an ineluctable road to greater complexity. It was Darwin who identified this force as “natural selection,” and Lamarck was not only denied credit for his contributions to the field, but otherwise maligned and ridiculed.

But even if he did not invent the idea, what if Lamarck was right all along to believe, at least in part, that acquired characteristics can be passed along transgenerationally after all—perhaps not on the kind of macro scale manifested by giraffe necks, but in other more subtle yet no less critical components to the principles of evolution? That is the subject of Lamarck’s Revenge: How Epigenetics is Revolutionizing Our Understanding of Evolution’s Past and Present, by the noted paleontologist Peter Ward. The book’s cover naturally showcases a series of illustrated giraffes with ever-lengthening necks! Ward is an enthusiast for the relatively new, still developing—and controversial—science of epigenetics, which advances the hypothesis that certain circumstances can trigger markers that can be transmitted from parent to child by changing the gene expression without altering the primary structure of the DNA itself. Let’s imagine a Holocaust survivor, for instance: can the trauma of Auschwitz cut so deep that the devastating psychological impact of that horrific experience will be passed on to his children, and his children’s children?

This is heady stuff, of course.  We should pause for the uninitiated and explain the nature of Darwinian natural selection—the key mechanism of the Theory of Evolution—in its simplest terms. The key to survival for all organisms is adaptation. Random mutations occur over time, and if one of those mutations turns out to be better adapted to the environment, it is more likely to reproduce and thus pass along its genes to its offspring.  Over time, through “gradualism,” this can lead to the rise of new species. Complexity breeds complexity, and that is the road traveled by all organisms that has led from the simplest prokaryotic unicellular organism—the 3.5-billion-year-old photosynthetic cyanobacteria—to modern Homo sapiens sapiens.  This is, of course, a very, very long game; so long in fact that Darwin—who lived in a time when the age of the earth was vastly underestimated—fretted that there was not enough time for evolution as he envisioned it to occur. Advances in geology later determined that the earth was about 4.5 billion years old, which solved that problem, but still left other aspects of evolution unexplained by gradualism alone. The brilliant Stephen Jay Gould (along with Niles Eldredge) came along in 1972 and proposed that rather than gradualism most evolution more likely occurred through what he called “punctuated equilibrium,” often triggered by a catastrophic change in the environment. Debate has raged ever since, but it may well be that evolution is guided by forces of both gradualism and punctuated equilibrium. But could there still be other forces at work?

Transgenerational epigenetic inheritance represents another such force and is at the cutting edge of research in evolutionary biology today. But has the hypothesis of epigenetics been demonstrated to be truly plausible? And the answer to that is—maybe. In other words, there do seem to be studies that support transgenerational epigenetic inheritance, most famously—as detailed in Lamarck’s Revenge—in what has been dubbed the “Dutch Hunger Winter Syndrome,” which found that children born during a famine were smaller than those born before it, and carried a later, greater risk of glucose intolerance, conditions then passed down to successive generations. On the other hand, the evidence for epigenetics has not been as firmly established as some proponents, such as Ward, might have us believe.

Lamarck’s Revenge is a very well-written and accessible scientific account of epigenetics for a popular audience, and while I have read enough evolutionary science to follow Ward’s arguments with some competence, I remain a layperson who can hardly endorse or counter his claims. The body of the narrative is comprised of Ward’s repeated examples of what he identifies as holes in traditional evolutionary biology that can only be explained by epigenetics. Is he right? I simply lack the expertise to say. I should note that I received this book as part of an “Early Reviewers” program, so I felt a responsibility to read it cover-to-cover, although my interest lapsed as it moved beyond my depth in the realm of evolutionary biology.

I should note that this is all breaking news, and as we appraise it we should be mindful of how those on the fringes of evangelicalism, categorically opposed to the science of human evolution, will cling to any debate over mechanisms in natural selection to proclaim it all a sham sponsored by Satan—who has littered the earth with fossils to deceive us—to challenge the truth of the “Garden of Eden” related in the Book of Genesis.  Once dubbed “Creationists,” they have since rebranded themselves in association with the pseudoscience of so-called “Intelligent Design,” which somehow remains part of the curriculum at select accredited universities.  Science is self-correcting. These folks are not, so don’t ever let yourself be distracted by their fictional supernatural narrative. Evolution—whether through gradualism and/or punctuated equilibrium and/or epigenetics—remains central to both modern biology and modern medicine, and that is not the least bit controversial among scientific professionals. But if you want to find out more about the implications of epigenetics for human evolution, then I recommend that you pick up Lamarck’s Revenge and challenge yourself to learn more!

 

Note: While you are at it, if you want to learn more about 3.5-billion-year-old photosynthetic cyanobacteria, I highly recommend this: 

Review of: Cradle of Life: The Discovery of Earth’s Earliest Fossils, by J. William Schopf

Featured

Review of: Thomas Jefferson’s Education, by Alan Taylor

Here was buried Thomas Jefferson, Author of the Declaration of American Independence, of the Statute of Virginia for religious freedom & Father of the University of Virginia.

Thomas Jefferson wrote those very words and sketched out the obelisk they would be carved upon. For those who have studied him, that he not only composed his own epitaph but designed his own grave marker was—as we would say in contemporary parlance—just “so Jefferson.” His long life was marked by a catalog of achievements; these were intended to represent his proudest accomplishments. Much remarked upon is the conspicuous absence of his unhappy tenure as third President of the United States. Less noted is the omission of his time as Governor of Virginia during the Revolution, marred by his humiliating flight from Monticello just minutes ahead of British cavalry. Of the three that did make the final cut, his role as author of the Declaration has been much examined. The Virginia statute—seen as the critical antecedent to First Amendment guarantees of religious liberty—gets less press, but only because it is subsumed in a wider discussion of the Bill of Rights. But who really talks about Jefferson’s role as founder of the University of Virginia?

That is the ostensible focus of Thomas Jefferson’s Education, by Alan Taylor, perhaps the foremost living historian of the early Republic.  But in this extremely well-written and insightful analysis, Taylor casts a much wider net that ensnares a tangle of competing themes that not only traces the sometimes-fumbling transition of Virginia from colony to state, but speaks to underlying vulnerabilities in economic and political philosophy that were to extend well beyond its borders to the southern portion of the new nation. Some of these elements were to have consequences that echoed down to the Civil War; indeed, still echo to the present day.

Students of the American Civil War are often struck by the paradox of Virginia. How was it possible that this colony—so central to the Revolution and the founding of the Republic, the most populous and prominent, a place that boasted notable thinkers like Jefferson, Madison and Marshall, that indeed was home to four of the first five presidents of the new United States—could find itself on the eve of secession such a regressive backwater, soon doomed to serve as the capital of the Confederacy? It turns out that the sweet waters of the Commonwealth were increasingly poisoned by the institution of human chattel slavery, once decried by its greatest intellects, then declared indispensable, finally deemed righteous. This tragedy has been well-documented in Susan Dunn’s superlative Dominion of Memories: Jefferson, Madison & the Decline of Virginia, as well as Alan Taylor’s own Pulitzer Prize winning work, The Internal Enemy: Slavery and the War in Virginia 1772-1832. What came to be euphemistically termed the “peculiar institution” polluted everything in its orbit, often invisibly except to the trained eye of the historian. This included, of course, higher education.

If the raison d’être of the Old Dominion was to protect and promote the interests of the wealthy planter elite that sat atop the pyramid of a slave society, then how important was it, really, for the scions of Virginia gentlemen to be educated beyond the rudimentary levels required to manage a plantation and move in polite society? And after all, wasn’t the “honor” of the up-and-coming young “masters” of far greater consequence than the aptitude to discourse in matters of rhetoric, logic, or ethics? In Thomas Jefferson’s Education, Taylor takes us back to the nearly forgotten era of a colonial Virginia when the capital was located in “Tidewater” Williamsburg and rowdy students—wealthy, spoiled sons of the planter aristocracy with an inflated sense of honor—clashed with professors at the prestigious College of William & Mary who dared to attempt to impose discipline upon their bad behavior. A few short years later, Williamsburg was in shambles, a near ghost town, badly mauled by the British during the Revolution, the capital relocated north to “Piedmont” Richmond, William & Mary in steep decline. Thomas Jefferson’s determination over more than two decades to replace it with a secular institution devoted to the liberal arts that welcomed all white men, regardless of economic status, is the subject of this book. How he realized his dream with the foundation of the University of Virginia in the very sunset of his life, as well as that institution’s spectacular failure to turn out as he envisioned it, is the wickedly ironic element in the title of Thomas Jefferson’s Education.

The author is at his best when he reveals the unintended consequences of history. In his landmark study, American Revolutions: A Continental History, 1750-1804, Taylor underscores how American Independence—rightly heralded elsewhere as the dawn of representative democracy for the modern West—was at the same time to prove catastrophic for Native Americans and African Americans, whose fate would likely have been far more favorable had the colonies remained wedded to a British Crown that drew a line for westward expansion at the Appalachians, and later came to abolish slavery throughout the empire.  Likewise, there is the example of how the efforts of Jefferson and Madison—lauded for shaking off the vestiges of feudalism for the new nation by putting an end to institutions of primogeniture and entail that had formerly kept estates intact—expanded the rights of white Virginians while dooming countless numbers of the enslaved to be sold to distant geographies and forever separated from their families.

In Thomas Jefferson’s Education, the disestablishment of religion is the focal point for another unintended consequence. For Jefferson, an established church was anathema, and stripping the Anglican Church of its preferred status was central to his “Virginia Statute for Religious Freedom,” whose principles were later enshrined in the First Amendment. But it turns out that religion and education were intertwined in colonial Virginia’s most prominent institution of higher learning, Williamsburg’s College of William & Mary, funded by the House of Burgesses, where professors were typically ordained Anglican clergymen. Moreover, tracts of land known as “glebes,” formerly distributed by the colonial government for Anglican (later Episcopal) church rectors to farm or rent, came under assault by evangelical churches allied with secular forces after the Revolution, in a movement that eventually resulted in confiscation. This put many local parishes—once critical sponsors of both education and poor relief—into a death spiral that begat still more unintended consequences, some of which resonate in the politics and culture of the American South to the present day. As Taylor notes:

The move against church establishment decisively shifted public finance for Virginia. Prior to the revolution, the parish tax had been the greatest single tax levied on Virginians; its elimination cut the local tax burden by two thirds. Poor relief suffered as the new County overseers spent less per capita than had the old vestries. After 1790, per capita taxes, paid by free men in Virginia, were only a third of those in Massachusetts. Compared to northern states, Virginia favored individual autonomy over community obligation. Jefferson had hoped that Virginians would reinvest their tax savings from disestablishment by funding the public system of education for white children. Instead county elites decided to keep the money in their pockets and pose as champions of individual liberty. [p57-58]

For Jefferson, a creature of the Enlightenment, the sins of medievalism inherent to institutionalized religion were glaringly apparent, yet he was blind to the positive contributions it could provide for the community. Jefferson also frequently projected his own good intentions onto others who simply did not share them, whether out of selfishness or indifference. He seemed to genuinely believe that an emphasis on individual liberty would in itself foster the public good, when in reality—then and now—many take such liberty as license to simply advance their own interests. For all his brilliance, Jefferson was too often naïve when it came to the character of his countrymen.

Once near-universally revered, the legacy of Thomas Jefferson often triggers ambivalence in a modern audience and poses a singular challenge for historical analysis. A central Founder, Jefferson made the bold claim in the Declaration “that all men are created equal,” a claim that defined both the struggle with Britain and the notion of “liberty” that not only came to characterize the Republic that eventually emerged, but echoed with a deafening resonance through the French Revolution—and far beyond, to legions of the oppressed yearning for the universal equality that Jefferson had asserted was their due. At the same time, over the course of his lifetime Jefferson owned hundreds of human beings as chattel property. One of the enslaved almost certainly served as his concubine and bore him several offspring who were also enslaved; she was, moreover, almost certainly the half-sister of Jefferson’s late wife.

The once-popular view that Jefferson did not intend to include African Americans in his definition of “all men” has been clearly refuted by historians. And Jefferson, like many of his elite peers of the Founding generation—Madison, Monroe, and Henry—decried the immorality of slavery as an institution while consenting to its persistence, to their own profit. Most came to find grounds to justify it, but not Jefferson: the younger Jefferson cautiously advocated abolition, while the older Jefferson made excuses for why it could not be achieved in his lifetime—made manifest in his much-quoted “wolf by the ear” remark—but he never stopped believing it an existential wrong. As Joseph Ellis underscored in his superb study, American Sphinx, Jefferson frequently held more than one competing and contradictory view in his head simultaneously and was somehow immune to the cognitive dissonance such paradox might provoke in others.

It is what makes Jefferson such a fascinating study, not only because he was such a consequential figure for his time, but because the Republic then and now remains a creature of habitually irreconcilable contradictions remarkably emblematic of this man, one of its creators, who has carved out a symbolism that varies considerably from one audience to another. Jefferson, more than any of the other Founders, was responsible for the enduring national schizophrenia that pits federalism against localism, a central economic engine against entrepreneurialism, and the well-being of a community against personal liberties that would let you do as you please. Other elements have been, if not resolved, forced to the background, such as the industrial vs. the agricultural, and the military vs. the militia. Of course, slavery has been abolished, civil rights tentatively obtained, but the shadow of inequality stubbornly lingers, forced once more to the forefront by the murder of George Floyd; I myself participated in a “Black Lives Matter” protest on the day before this review was completed.

Perhaps much overlooked in the discussion, but no less essential, is the role of education in a democratic republic. Here too, Jefferson had much to offer and much to pass down to us, even if most of us have forgotten that it was his soft-spoken voice that pronounced it indispensable for the proper governance of both the state of Virginia and the new nation. That his ambition for universal education extended only to white males, excluding blacks and women, naturally strikes us as shortsighted, even repugnant, but it should not erase the fact that even this was a radical notion in its time. Rather than disparage Jefferson, who died nearly two centuries ago, we should perhaps condemn the inequality in education that persists in America today, where a tradition of community schools funded by property taxes meant that my experience growing up in a white, middle-class suburb in Fairfield, CT translated into an educational experience vastly superior to that of the people of color who attended the ancient crumbling edifices in the decaying urban environment of Bridgeport, less than three miles from my home. How can we talk about “Black Lives Matter” without talking about that?

The granite obelisk that marked Jefferson’s final resting place was chipped away at by souvenir hunters until it was relocated in order to preserve it. A joint resolution of Congress funded the replacement, erected in 1883, that visitors now encounter at Monticello. The original obelisk now incongruously sits in a quadrangle at the University of Missouri, perhaps as far removed from Jefferson’s grave as today’s diverse, co-ed institution of UVA at Charlottesville is from both the university he founded and the one he envisioned. We have to wonder whether Jefferson would be more surprised to learn that African Americans are enrolled at UVA—or that in 2020 they comprise less than seven percent of the undergraduate population. And what would he make of the white supremacists who rallied at Charlottesville in 2017, and of those who stood against them? I suspect a resurrected Jefferson would be no less enigmatic than the one who walked the earth so long ago.

Alan Taylor has written a number of outstanding works—I’ve read five of them—and he has twice won the Pulitzer Prize for History. He is also, incidentally, the Thomas Jefferson Memorial Foundation Professor of History at the University of Virginia, so Thomas Jefferson’s Education is not only an exceptional contribution to the historiography but no doubt a project dear to his heart. While I continue to admire Jefferson even as I acknowledge his many flaws, I cannot help wondering how Taylor—who has so carefully scrutinized him—personally feels about Thomas Jefferson. I recall that in the afterword to his magnificent historical novel, Burr, Gore Vidal admits: “All in all, I think rather more highly of Jefferson than Burr does …”  If someone puts Alan Taylor on the spot, I suppose that could be as good an answer as any …

Note: I have reviewed other works by Alan Taylor here:

Review of: American Revolutions: A Continental History, 1750-1804, by Alan Taylor

Review of: The Internal Enemy: Slavery and the War in Virginia 1772-1832, by Alan Taylor

Review of: The Civil War of 1812: American Citizens, British Subjects, Irish Rebels, & Indian Allies by Alan Taylor

Review of: American Republics: A Continental History of the United States, 1783-1850, by Alan Taylor

My review of Susan Dunn’s excellent book, referenced above, is here:

Review of: Dominion of Memories: Jefferson, Madison & the Decline of Virginia by Susan Dunn

Featured

Review of: The Testaments: The Sequel to The Handmaid’s Tale, by Margaret Atwood

Nolite te bastardes carborundorum could very well be the Latin phrase most familiar to a majority of Americans. Roughly translated as “Don’t let the bastards grind you down,” it has been emblazoned on tee shirts and coffee mugs, trotted out as bumper sticker and email signature, and—most prominently—has become an iconic feminist rallying cry for women. That this famous slogan is not really Latin or any language at all, but instead a kind of schoolkid’s “mock Latin,” speaks to the colossal cultural impact of the novel where it first made its appearance in 1985, The Handmaid’s Tale, by Margaret Atwood, as well as the media it has since spawned, including the 1990 film featuring Natasha Richardson and the acclaimed series still streaming on Hulu. Consult any random critic’s list of the finest examples in the literary sub-genre “dystopian novels,” and you will likely find The Handmaid’s Tale in the top five, along with such other classic masterpieces as Orwell’s 1984, Huxley’s Brave New World, and Bradbury’s Fahrenheit 451, which is no small achievement for Atwood.

For anyone who has not been locked in a box for decades, The Handmaid’s Tale relates the chilling story of the not-too-distant-future nation of “Gilead,” a remnant of a fractured United States that has become a totalitarian theonomy demanding absolute obedience to divine law, especially the harsh strictures of the Old Testament. A crisis in fertility has led to elite couples relying on semi-enslaved “handmaids” who serve as surrogates to be impregnated and carry babies to term for them, which includes a bizarre ritual where the handmaid lies in the embrace of the barren wife while being penetrated by the “Commander.” The protagonist is known as “Offred”—or “Of Fred,” the name of this Commander—but once upon a time, before the overthrow of the U.S., she was an independent woman, a wife, a mother. It is Offred who one day happens upon Nolite te bastardes carborundorum scratched upon the wooden floor of her closet, presumably by the anonymous handmaid who preceded her.

Brilliantly structured as a kind of literary echo of Geoffrey Chaucer’s The Canterbury Tales, employing Biblical imagery—the eponymous “handmaid” based upon the Old Testament account of  Rachel and her handmaid Bilhah—and magnificently imagining a horrific near-future of a male-dominated society where all women are garbed in color-coded clothing to reflect their strictly assigned subservient roles, Atwood’s narrative achieves the almost impossible feat of imbuing what might otherwise smack of the fantastic with the highly persuasive badge of the authentic.

The 1990 film adaptation—which also starred Robert Duvall as the Commander and Faye Dunaway as his infertile wife Serena Joy—was largely faithful to the novel, while further fleshing out the character of Offred. But it has been the Hulu series, updated to reflect a near-contemporary pre-Gilead America replete with cell phones and technology—and soon to beget (pun fully intended!) a fourth season—which both embellished and enriched Atwood’s creation for a new generation and a far wider audience. And it has enjoyed broad resonance, at least partially due to its debut in early 2017, just months after the presidential election. The coalition of right-wing evangelicals, white supremacists, and neofascists that has come to coalesce around the Republican Party in the Age of Trump has not only brought new relevance to The Handmaid’s Tale, but has also seen its scarlet handmaid’s cloaks adopted by many women as the de rigueur uniform of protest in the era of “Me Too.” Meanwhile, the series—which is distinguished by an outstanding cast of fine ensemble actors, headlined by Elisabeth Moss as Offred—has proved enduringly terrifying for three full seasons, while largely maintaining its authenticity.

Re-enter Margaret Atwood with The Testaments: The Sequel to The Handmaid’s Tale, released thirty-four years after the original novel. As a fan of both the book and the series, I looked forward to reading it, though my anticipation was tempered by a degree of trepidation based upon my time-honored conviction that sequels are ill-advised and should generally be avoided. (If Godfather II was the rare exception in film, Thomas Berger’s The Return of Little Big Man certainly proved the rule for literature!)  Complicating matters, Atwood penned a sequel not to her own novel, but rather to the Hulu series, which brought back memories of Michael Crichton’s awkward The Lost World, written as a follow-up to Spielberg’s Jurassic Park movie rather than his own book.

My fears were not misplaced.

The action in The Testaments takes place in both Gilead and in Atwood’s native Canada, which remains a bastion of freedom and democracy for those who can escape north. The timeframe is roughly fifteen years after the conclusion of Hulu’s Season Three. The narrative is told from the alternating perspectives of three separate protagonists, one of whom is Aunt Lydia, the outsize brown-clad villain of book and film known for both efficiency and brutality in her role as a “trainer” of handmaids. Aunt Lydia turns out to have both a surprising pre-Gilead backstory and a secret life as an “Aunt,” although there are no hints of these in any previous works. Still, I found the Lydia portion of the book the most interesting, and perhaps the most plausible, in a storyline that often flirts with the farfetched.

In order to sidestep spoilers, I cannot say much about the identities of the other two main characters, who are each subject to surprise “reveals” in the narrative—except that I personally was less surprised than was clearly intended. Oh yes, I get it: the butler did it … but I still have hundreds of pages ahead of me. But that was not the worst of it.

The beauty of the original novel and the series has been their remarkably consistent authenticity, despite an extraordinary futuristic landscape. The test of all fiction—but most especially of science fiction, fantasy, and the dystopian—is: can you successfully suspend disbelief? For me, The Testaments fails this test again and again, most prominently when one of our “unrevealed” characters—an otherwise ordinary teenage girl—is put through something like a “light” version of La Femme Nikita training, and then in short order trades high school for a dangerous undercover mission without missing a beat! Moreover, her character is not well-drawn, and the words put in her mouth ring counterfeit. It seems evident that the eighty-year-old Atwood does not know very many sixteen-year-old girls; culturally, this one acts and sounds as if she were raised thirty years ago and then catapulted decades into the future. Overall, the plot is contrived, the action inauthentic, the characters artificial.

This is certainly not vintage Atwood, although some may try to spin it that way. The Handmaid’s Tale was not a one-hit wonder: Atwood is a prolific, accomplished author, and I have read other works—including The Penelopiad and The Year of the Flood—that underscore her reputation as a literary master. But not this time. In my disappointment, I was reminded of my experience with Khaled Hosseini, whose The Kite Runner was a superlative novel that showcased a panoply of complex themes and nuanced characters that remained with me long after I closed the cover. That was followed by A Thousand Splendid Suns, which though a bestseller was dramatically inferior to his earlier work, peopled with nearly one-dimensional caricatures assigned to be “good” or “evil” navigating a plot that smacked more of soap opera than subtlety.

The Testaments too has proved a runaway bestseller, but it is the critical acclaim that I find most astonishing, even scoring the highly prestigious 2019 Booker Prize—though I can’t bear to think of it sitting on the same shelf alongside … say … Richard Flanagan’s The Narrow Road to the Deep North, which took the title in 2014. It is tough for me to review a novel so well-received that I find so weak and inconsequential, especially when juxtaposed with the rest of the author’s catalog. I keep holding out hope that someone else might take notice that the emperor really isn’t wearing any clothes, but the bottom line is that lots of people loved this book; I did not.

On the other hand, a close friend countered that fiction, like music, is highly subjective. But I take some issue with that. Perhaps you personally might not have enjoyed Faulkner’s The Sound and the Fury, or Hemingway’s A Farewell to Arms, for that matter, but you cannot make the case that these are bad books. I would argue that The Testaments is a pretty bad book, and I would not recommend it. But here, it seems, I remain a lone voice in the literary wilderness.


Featured

Review of: A Warning, by Anonymous

DISCLAIMER: The review that follows and the book that is its subject each include a fact-based timeline, political polemic, and inflammatory language, some or all of which may be highly offensive to certain individuals, especially those who identify with the MAGA movement or abjure critical thinking.  If you or someone you care about fits that description, is highly sensitive, or is unable to handle views that contradict your political narrative, you are urged to stop reading now and put this review aside.  Those who proceed further do so at their own risk, and this reviewer will hold himself blameless for any fits of rage, dangerous increases in blood pressure, or Rumpelstiltskin-like attempts to stomp the ground so hard that the reader sinks into a chasm, that may result from continuing beyond this point …

President Trump is facing a test to his presidency unlike any faced by a modern American leader. It’s not just that the special counsel looms large. Or that the country is bitterly divided over Mr. Trump’s leadership. Or even that his party might well lose the House to an opposition hellbent on his downfall. The dilemma—which he does not fully grasp—is that many of the senior officials in his own administration are working diligently from within to frustrate parts of his agenda and his worst inclinations. I would know. I am one of them.

That is the opening excerpt from an Op-Ed entitled “I Am Part of the Resistance Inside the Trump Administration” published in the New York Times on September 5, 2018, along with this note from the editors: “The Times is taking the rare step of publishing an anonymous Op-Ed essay. We have done so at the request of the author, a senior official in the Trump administration whose identity is known to us and whose job would be jeopardized by its disclosure.”

The Op-Ed was written on the eve of the mid-term elections, before the release of the Mueller report, the murder of Jamal Khashoggi, the shutdown of the Trump Foundation for what was described as “a shocking pattern of illegality,” the expulsion of most remaining adults-in-the-room including Mattis and Kelly and Rosenstein, and the “perfect call” with Volodymyr Zelensky that led to impeachment—which was just one shocking by-product of an erratic foreign policy of appeasement toward Putin, ongoing saber-rattling with the Ayatollah and kissy-face with Kim Jong-un, the granting of dispensation to Mohammed bin Salman, and the green-lighting of Erdoğan to take out our Kurdish allies in Syria, not to mention the continuing crisis at home of kids in cages, and the ousting of any civil servant who dared contradict the President with a fact-based narrative. And there was so very much more that it is truly a blur. In September 2019, Trump doctored a map with a Sharpie and flashed it on television to prove he was right all along about the path of Hurricane Dorian. In October 2019, the President of the United States actually expressed interest in constructing an electrified moat filled with alligators along the Mexican border and shooting migrants in the legs to slow them down! Who even remembers that now?

Shortly after the moat full of alligators rose to a brief crest in the 24-hour cable news cycle and then sank beneath the tide of whatever came next—which no one can really recall anymore—while we collectively held our breath for the next wave of … well, who knows what? … A Warning, by Anonymous—the same “senior Trump administration official” who authored that NYT Op-Ed—was published. A Warning set a record for preorders and made the bestseller list, and while the staggering revelations by a senior insider that it contains would no doubt have thrust any other administration into a tailspin so severe that it could never have recovered, this book—much like the misadventures it chronicles—is essentially as forgotten to an overwhelmed, amnesiac public as the moat full of alligators. The notion that “nothing matters” has become such a cliché precisely because—as the subsequent impeachment acquittal underscored—when it comes to Trump, nothing truly does matter anymore. Or ever really has.

The thesis of A Warning—which picks up where the author’s editorial left off—is that 1) all hyperbole on left-leaning media aside, President Trump really is as he appears to the non-brainwashed observer: an unhinged, irrational, narcissistic, incompetent clown who, left to his own devices, would no doubt steer the clown car with all of us aboard right into the abyss; and 2) if not for the valiant efforts of the author and his or her furtive cohorts, working ceaselessly behind the scenes to curtail Trump’s most dangerous instincts, we would likely already be acquainted with said abyss. “Anonymous” claims that he/she is generally supportive of the administration’s conservative right-wing agenda, but fears what the President’s unbalanced behavior could bring. While Trump rambles on paranoiacally about an imaginary “Deep State” plotting to undermine him, the author of A Warning refutes the notion of said “Deep State” while emphasizing what he/she terms the “Steady State,” an unidentified alliance at the top tier of “glorified government babysitters” who quietly strive to “keep the wheels from coming off the White House wagon.”

But apparently the axle nuts are getting looser every day, and those wheels are about to let go, as underscored in the very first chapter, aptly entitled “Collapse of the Steady State,” where the author admits that:

I was wrong about the “quiet resistance” inside the Trump Administration. Unelected bureaucrats and cabinet appointees were never going to steer Donald Trump in the right direction in the long run, or refine his malignant management style.  He is who he is. Americans should not take comfort in knowing whether there are so-called adults in the room. We are not bulwarks against the president and shouldn’t be counted upon to keep him in check. That is not our job. That is the job of the voters …

If the original editorial was an attempt to reassure us that while the President was often indeed as mindlessly dangerous as a runaway bull amok in the national china shop, there was yet a significant presence of others sane and rational to rein him in before too much of value was irreparably wrecked, A Warning goes much further, urging a broad coalition to defeat him in 2020, especially targeting those in the right lane who otherwise cheer the lower taxes, frantic deregulation, and the ascent of ultraconservative Supreme Court justices that have been a byproduct of Trumpism. But does such a cohort actually exist?

For Trump and a polarized America in 2020, there are essentially four audiences to play to: 1) Donald Trump represents an existential threat to our values of freedom and democracy in our sacred Republic; 2) Donald Trump is a savior for America sent by the almighty God to restore our sacrosanct traditional values and lock up anyone who would even think about having an abortion; 3) Donald Trump is an absolutely offensive buffoon—of course—but the economy has been supercharged so why don’t they just let him do his job?; and, 4) Donald Trump is the same as Joe Biden, and if Bernie Sanders was President we’d all have free college and healthcare and everything else and if you don’t agree you should just die. A Warning makes a compelling argument, but I don’t see it changing anyone’s mind. Either the Emperor is wearing those new clothes or he isn’t.

Each chapter of A Warning is headed by a quotation from a former president—Madison, Washington, Jefferson, Kennedy, Reagan, etc.—that speaks to an aspect of government or the character of its leadership. What then follows are accounts of Trump’s resistance to expertise, paranoid ramblings, irrational behavior, and “malignant management style” that clearly stand as counterpoints to these ideals. At one point, the author reveals: “Behind closed doors his own top officials deride him as an ‘idiot’ and a ‘moron’ with the understanding of a ‘fifth or sixth grader.’” [p63] The following excerpt, describing briefings with the President, is a bit longish but perhaps most illustrative:

Early on, briefers were told not to send lengthy documents. Trump wouldn’t read them. Nor should they bring summaries to the Oval Office. If they must bring paper, then PowerPoint was preferred because he is a visual learner. Okay, that’s fine, many thought to themselves, leaders like to absorb information in different ways. Then officials were told that PowerPoint decks needed to be slimmed down. The president couldn’t digest too many slides. He needed more images to keep his interest—and fewer words. Then they were told to cut back the overall message (on complicated issues such as military readiness or the federal budget) to just three main points. Eh, that was still too much … Forget the three points. Come in with one main point and repeat it—over and over again, even if the president inevitably goes off on tangents—until he gets it. Just keep steering the subject back to it. One point. Just that one point. Because you cannot focus the commander-in-chief’s attention on more than one goddamned thing over the course of the meeting, okay? [p29-30]

This is just one of many persuasive arguments that the President is unfit for office, but again: whom is it likely to persuade?

A couple of things struck me about this book that have little to do with its message. First of all, it is not well-written.  Not at all. It may be that it was deliberately dumbed-down to target a less educated audience, but I don’t think so.  More likely, the author simply isn’t a very talented writer.  A Warning has a conversational style, and my guess is that it was dictated and transcribed by someone who is not generally comfortable with a pen.

Second, the author attempts to use history to make his/her point—beyond quotes from presidents, there are also numerous references in the narrative that reach back to ancient Greece and Rome. But the effort is clumsy at best, and at worst completely off the mark. At one point, when tracing the origins of the GOP, the author identifies it with “states’ rights,” which, while a core value of the modern Republican Party, was a hundred fifty years ago closely associated with the rival Democrats. [p95] (In fact, one could argue that today’s “Party of Lincoln” has little in common with Lincoln at all.) Elsewhere, there is an awkward tussle with fact-based history as the author struggles to mine democracy in ancient Greece for workable analogies with today’s politics. The Athenian demagogue Cleon is cast as a cloak-wearing precursor to Trump “… who will sound familiar to readers … [as he] … inherited money from his father and leveraged it to launch a career in politics.” The famous episode from Thucydides that has Cleon calling for the slaughter of the Mytilenean rebels is posited as an alleged signpost to the decline and fall of Athenian democracy. The later massacre of the Melians is also referenced, as is the execution of Socrates, along with a wild claim that “the latter was an exclamation point on the death of Athenian democracy …” [p183-86] All this is not only completely out of context but downright silly, and—as any historian of ancient Greece would point out—the radical democracy of Athens actually thrived for decades after the death of Socrates in 399 BCE, and even persisted well beyond the subjugation of the polis by Philip II in 338 BCE.

But that the author is both a bad writer and a lousy historian to my mind just adds to his/her authenticity as a “senior Trump administration official.” After all, we know that the cabinet is composed of second- and third-rate individuals, and the quality—especially as we have made the shift to “acting” secretaries that don’t require Senate approval—has seen a pronounced decline. Of course, the author’s lack of talent hardly diminishes the tale that is told.

The reason A Warning lacks shock value to some degree is that we have heard much or all of this before, from multiple sources, some more respected than others. While it might be easy to dismiss such schlocky work as Michael Wolff’s Fire and Fury: Inside the Trump White House, the much-celebrated exposé of the administration that was frequently as long on bombshells as it was short on substantiation, it is far more difficult to ignore the chilling accounts from award-winning journalist Bob Woodward, whose 2018 book Fear: Trump in the White House identifies then-Secretary of Defense James Mattis as the source of the “fifth or sixth grader” quote. Woodward also reports then-Chief of Staff John Kelly describing the President as “unhinged”—exclaiming: “He’s an idiot. It’s pointless to try to convince him of anything. He’s gone off the rails. We’re in Crazytown.” Far more worrisome than such anecdotes is Woodward’s revelation that then-Chief Economic Adviser Gary Cohn—alarmed that Trump was about to sign a document ending a key trade agreement with South Korea that also dovetailed with a security arrangement that would alert us to North Korean nuclear adventurism—simply stole the document off the President’s desk! And the President never missed it …

Much of this material has been substantiated by insiders, and there is certainly plenty of evidence to suggest Trump is utterly incapable of serving as Chief Executive. But would anything convince his loyal acolytes of this? Apparently not, which is why A Warning both preached to the choir and otherwise fell on deaf ears. In February 2020, fifty-two Republican Senators voted to acquit Trump in his impeachment trial—and you can bet that most or all of these “august” legislators know exactly what Donald Trump is really like behind closed doors.

As this review goes to press, we are in the midst of a global pandemic that has hit the United States far harder than it should have, largely due to the ongoing incompetence of the President, who is unsurprisingly the very worst person to be in charge during what is surely the greatest threat to the nation since Pearl Harbor, perhaps since Fort Sumter. We need a Lincoln or an FDR or a JFK at the helm, and what we have is Basil Fawlty … although even that is unfair: Basil would have recognized that he was in over his head and sought help from Polly, who would have enlisted Manuel’s assistance, and we would at least have a chance. Trump, being Trump, believes he has all the answers; and thousands more succumb to the virus as the days go by …

So, who is the author of A Warning? Who exactly is “Anonymous”? There has been some speculation, but if I had to assign authorship, I would put my money on Kellyanne Conway. One clue that narrows it down a bit is that the tone in the narrative hints at a female voice rather than a male one, although I could be mishearing that. More persuasive is the style, which sounds an awful lot like Kellyanne in conversation, albeit spouting utterances diametrically opposed to those outrageous defenses of the President she concocts for the media. Perhaps most compelling is the fact that Kellyanne has uncharacteristically outlasted most members of the administration, especially striking in light of the fact that her husband, attorney George Conway, is a loud and prominent critic of the President who has long called for his removal from office. That Kellyanne has managed to somehow keep her job despite this suggests that she has something on Trump that guarantees her tenure, and makes me think she more than anyone inside that circus tent wants us to hear this warning of why the ringmaster must be denied four more years …

UPDATE 10-28-20 I was wrong … it wasn’t Kellyanne Conway https://milestaylor.medium.com/a-statement-a13bc5173ee9

Link to: NYT Op-Ed:  “I Am Part of the Resistance Inside the Trump Administration”

Link to: Review of: Fear: Trump in the White House, by Bob Woodward

Featured

Review of: America’s War for the Greater Middle East: A Military History, by Andrew J. Bacevich

“From the end of World War II until 1980, virtually no American soldiers were killed in action while serving in the greater Middle East. Since 1980, virtually no American soldiers have been killed anywhere else. What caused that shift?”

That stark question appears as a blurb on the back cover of my edition of America’s War for the Greater Middle East: A Military History, Andrew J. Bacevich’s ambitious, brilliantly conceived if flawed chronicle which seeks to both answer that question and place it in its appropriate context. It is, of course, quite the tall order: how is it that a geography ever on the periphery of an American foreign policy that for decades could best be described as benign neglect came to not only dominate our national attention but be identified as central to our strategic interests? And how is it that, as this review goes to press—nearly four years after the publication of Bacevich’s book—America’s longest war in its history endures beyond its eighteenth year … in Afghanistan of all places?!

The short answer, I would posit, is oil. Bacevich is older than me, and I wasn’t yet driving at the time in 1969 when he notes dropping three bucks to fill up the tank of his new Mustang at 29.9 cents a gallon.  But I was on the road just a few years later, and I recall sitting in long lines at the pump for fuel priced nearly ten times that, as well as the random guy who threatened to shoot a certain long-haired teenager for trying to cut line, and that same teen later learning how to siphon gas from parked cars. It was a time.

That tumultuous time stemmed, of course, from the 1973 oil embargo placed on the United States by OPEC (Organization of the Petroleum Exporting Countries) in retaliation for its support of Israel during the Yom Kippur War. Because he has styled his book “A Military History,” the author does not dwell on the gasoline shortage that so shook American self-confidence in the early 1970s, nor on the related and still unresolved Israeli-Palestinian conflict that remains as central to the theme of Middle East unrest as slavery was to the American Civil War. Instead, after a brief “Prologue,” Bacevich rapidly shifts focus to the Iran hostage crisis and the 1980 debacle that was Operation Eagle Claw, the aborted mission to rescue those hostages that resulted in those first American casualties referenced in that jacket blurb. The author’s decision not to accord oil and Israel their fundamental significance in far greater detail proves to be a weakness that tends to undermine an otherwise well-researched and well-written narrative history.

The author certainly has both the credentials and the skills worthy of the task before him. Andrew Bacevich is a career army officer, veteran of the Vietnam and Persian Gulf wars, who retired with the rank of colonel. He is also a noted historian and award-winning author, someone who has described himself as a “Catholic conservative,” but defies traditional labels of parties and politics. He is a pronounced critic of American military interventionism, George W. Bush’s advocacy for so-called “preventive wars,” and especially of the U.S. invasion of Iraq. In a kind of tragic irony, his own son, an army officer, was killed in combat in Iraq. I have read two of his previous books: Breach of Trust: How Americans Failed Their Soldiers and Their Country, and The Limits of Power: The End of American Exceptionalism, both magnificent treatises that reflect Bacevich’s ideological opposition to spending American lives needlessly in endless wars. But treatises don’t always translate well into narrative history—in fact these are and should be entirely separate channels—and Bacevich’s tendency to blur those boundaries here comes to weaken America’s War for the Greater Middle East.

The author points to repeated epic fails in Middle East policy that take us down all the wrong roads, while experts in and out of government shake their heads in bewilderment, yet one administration after another nevertheless presses on stubbornly. Bacevich is at his best when he underscores a series of unintended consequences on a road paved with occasional good intentions that not only exacerbate bad decision-making but cement unnecessary obligations to fickle, illusory allies that then put up almost insurmountable roadblocks to disentanglement. Two salient and substantial examples are: the poorly-conceived U.S. support for rebels opposed to the Russian-friendly regime in Afghanistan that was to spark Soviet intervention in 1979; and, subsequent U.S. backing for the Islamic fundamentalist Mujahideen that was to later spawn Al-Qaeda.

There is much more to come—perhaps more intended and incompetent than unintended—and much of that is either utterly unknown or long forgotten for most Americans, including the 1983 suicide bombing of the Marine compound in Beirut that killed 241 but somehow failed to tarnish the “Teflon” presidency of Ronald Reagan, who retreated while euphemistically “redeploying.” From the vantage point of Washington, the greater enemy remained the Ayatollah, and all efforts were made to enable the brutal despot Saddam Hussein in his opportunistic war upon Iran, a decision that was to fuel Middle East instability for decades and lead to two future US conflicts with our former ally. And Reagan was still President and still all-Teflon in 1988 when the US, through either negligence or spite, shot down Iran Air Flight 655 over the Persian Gulf, a commercial airliner with 290 souls aboard. George H. W. Bush led a coalition to liberate Kuwait from our erstwhile ally Iraq, but then left a wounded, isolated and still dangerous Saddam to plague our future. But, of course, it was under George W. Bush that the tragedy that was 9-11 was hijacked and turned into a fanciful “War on Terror” that ultimately emboldened Islamic fundamentalism, served as a pretext for an illegal invasion of Iraq that strengthened Iran and utterly destabilized the region, and later bred ISIL to terrorize multiple corridors of the Middle East. You can indeed draw almost a straight line from the Afghan Mujahideen of 1979 to ISIL suicide bombers today.

Bacevich is masterful with a pen, and his history is so well-written that there are literally no dry spots. The problem I found was with the tone, which while legitimately critical of American missteps is often needlessly arrogant, eye-rolling, even snarky—all of which detracts from the primary message, which is indeed spot-on. My politics often align closely with those of MSNBC host Rachel Maddow, but I simply cannot watch her show: I find her breathless exhalations and intimations of “How-could-anyone-be-so-stupid?” and “We-told-you-so” coupled with lip-curling grimaces intolerable. Bacevich is not that bad here by any means, but there is certainly a whiff of it that puts me off. Moreover, while he makes a cogent case for why just about every policy we put in place was wrong-headed, I would have much welcomed the author’s alternative recipes. Bacevich is a brilliant man: I truly wanted to know what he would have done differently if he were sitting behind the Resolute Desk instead of Carter or Reagan or Bush or any of the others.

Bacevich does deserve much credit for his far more panoramic view of what he rightly calls the “Greater Middle East,” as he widens the lens to focus upon the often neglected yet certainly related periphery of the Balkans and the Muslim population in the former Yugoslavia subjected to ethnic cleansing. Few mention Eastern Europe in the same breath as the Middle East, but for some five hundred years much of that geography was integral to the same Ottoman Empire that ruled over present-day Syria and Iraq. There is a common history that cannot be ignored. But just as I was disappointed elsewhere that Bacevich failed to highlight the background noise of the Israeli-Palestinian conflict that truly informs every conversation about Middle East affairs, in this case little was made of the bond between post-Soviet Russia and Slavs of “Greater Serbia,” which not only deeply influenced the Balkan Civil Wars but soured emerging US-Russian relations in its aftermath and resounded across the Islamic landscape. Likewise, the narrative swerves to take a peek at “Black Hawk Down” in Mogadishu, but the long history of ties between East Africa and Arabia remains unexplored.

America’s War for the Greater Middle East is divided into three parts: the first takes the reader to the conclusion of the Persian Gulf War (which Bacevich brands the “Second Gulf War”), and the second wraps up on the eve of 9-11. But it is the last part, dominated by the Iraq War, that strikes a markedly different, more somber tone, perhaps coincidental to Bacevich’s own deeply personal loss, perhaps not. Alas, none of the sections are large enough to bear the weight of the material.

Rarely would I lobby for any book to be longer, but in this case the 370 pages in my edition—plus the copious notes and excellent maps—are simply not enough. The topic not only deserves but demands more. This book should either be three times longer or, better still, should be a three-volume series. A more comprehensive historical background—including the echo of the greater Ottoman heritage and the Russo-British grapple for Central Asia—of this entire milieu is requisite for getting a grasp upon how we got here. The Israeli-Palestinian conflict demands more focus. As does the Shia-Sunni division. And the relationships between Arab and non-Arab states, as well as the ties that transcend the regional to extend to Africa and Europe and beyond. There is no hope of a better grasp of all that has gone wrong with American entanglement in the Middle East without all of that and much more.

Given all these reservations, the reader of this review might be surprised that I nevertheless recommend this book. Warts and all, there is no other work out there that connects the dots of America’s involvement in the Middle East as well as it does, even as it cries for more depth, for more complexity. I would likely be less critical of this book if my admiration for Bacevich were less pronounced and my expectations for his work were not so high. Even if America’s War for the Greater Middle East falls short, it deserves to be on your reading list.

NOTE: I reviewed Bacevich’s earlier book here:

Review of: Breach of Trust: How Americans Failed Their Soldiers and Their Country, by Andrew J. Bacevich

Featured

Review of: The Bank War and the Partisan Press: Newspapers, Financial Institutions, and the Post Office in Jacksonian America, by Stephen W. Campbell

Can you imagine a President of the United States who blatantly ignores its conventions, ridicules its established order and appeals beyond these directly to the electorate, pledging to elevate the interests of the average citizen over those of the elite, whom he brands as corrupt, while scorning the courts, financial institutions, and any who stand in his way, polarizing the nation while he yet shamelessly exploits a partisan press and rewards his supporters with government jobs and favors? No, it’s not who you think, but it does at least partially explain why the current occupant of the White House often appears with a portrait of Andrew Jackson as a backdrop, a painting that he directed be displayed prominently in the Oval Office.

Jackson once loomed large in our collective cultural memory, but I suspect that memory is now a bit fuzzy for most Americans, who when pressed might at best tentatively identify him as the grim-looking fellow on the face of the twenty-dollar bill. Of course, Jackson has hardly been forgotten by historians, who have long recognized his centrality as the most consequential president of the antebellum era, although their assessments of him have seen a marked rise and fall over time.  Once lionized as a giant in the emergence of a more democratic polity and a more egalitarian nation, a critical reexamination in the more recent historiography has revealed substantial “warts,” not only underscored by his leading role in “The Indian Removal Act” of 1830 that led to the deaths of thousands of Cherokees in the so-called “Trail of Tears,” but also in the ill-effects of the long echo of his “spoils system,” the dangerous naivety of his economic strategies including the “Bank War” that led to the Panic of 1837, as well as other forceful if misguided policies that some have argued set irrevocable forces in motion that later resulted in Civil War.

Andrew Jackson has been the subject of hundreds of biographies and related works.  A prominent chapter has frequently been devoted to the Bank War, long framed as a flamboyant clash of wills between Jackson, who loathed banks, and the shrewd if hapless Nicholas Biddle, president of the Second Bank of the United States. A famous game of cat and mouse prevailed, as the standard tale has been told, with Jackson ultimately victorious, the bank abolished and Biddle sent packing in surprising and ignominious defeat.

The story is so familiar, and has received so much attention in the literature, that it might seem unlikely that anything new could be said of it. So, there is then something of real genius in the astute reexamination showcased in the recently published monograph, The Bank War and the Partisan Press: Newspapers, Financial Institutions, and the Post Office in Jacksonian America, by Stephen W. Campbell. In this brilliant if not always easily accessible book, Campbell—a historian and lecturer at Cal Poly Pomona—challenges the orthodox narrative that puts Jackson and Biddle front-and-center to widen the lens to encompass the nuance and complexity that informs a long overlooked and far more intricate, multilayered confluence of people and events on both sides. The Bank War was indeed a great drama, but it turns out that there were many more essential players than Jackson and Biddle, and much more at stake than simply re-chartering the bank. As the subtitle suggests, Campbell notes that integral to the Bank War were common threads that ran between post offices, branch banks, and newspapers in what was indeed such a tangled weave that much went unnoticed or disregarded by historians prone to focus on the larger tapestry.

Today we might bemoan certain cable news propaganda vehicles that eschew reporting in favor of distorting, yet at its worst this phenomenon bears almost no resemblance to the partisan press of Jackson’s day, when there was little expectation of any kind of objectivity. In fact, valuable contracts for printing government documents were doled out to the politically simpatico, who were expected to promote the official line. Meanwhile, the Second National Bank through its branches had powerful financial incentives at hand to entice their allies in the press to champion their point of view.

Then there was the post office, which to us perhaps smacks of the anachronistic and irrelevant. Yet, its importance to early nineteenth century Americans cannot be overstated, since it effectively served as the sole vehicle for personal, business, and official communication.  But it was not only first-class mail that passed through post offices, but also newspapers, so branches could—and did—act as a kind of local valve for what sort of media could be passed across the counter. It was after all Jackson’s Postmaster General, former newspaper editor Amos Kendall, who famously permitted southern postmasters to refuse to distribute abolitionist tracts, another spark that was to fan antebellum sectional flames. Odd as it may seem now, Postmaster General was the single most valuable cabinet office in that era because of the vast patronage it controlled. Through its direct and indirect influence over the press, the White House clearly stacked the deck against poor Biddle, who despite vast resources could not hope to compete in the arena of what today we might term “messaging.”

While little of this material is in itself new or groundbreaking, Campbell deserves much credit for being the first to astutely connect all the dots of these seemingly unrelated elements to the Bank War.  But he goes further, articulately probing the economic realities of American life in the 1830s and deftly fitting the financial institutions of the day into the larger picture. The way banks and the economy functioned then would be almost unrecognizable to modern students of finance. Campbell peels back the fascinating if arcane layers of antebellum banking that other historians of the period have long neglected.

For the world of academia, The Bank War and the Partisan Press is a magnificent achievement, but alas much of it may remain unknown to the wider public because it is not always easily accessible to the general reader. This is not Campbell’s fault: he is after all quite skillful with a pen. But this was originally a thesis expanded into a book, so the strictures of academic writing sometimes weigh heavily on the account. Also problematic, perhaps, is that the text is somewhat rigidly compartmentalized, so that each sub-topic is exhaustively explored by chapter, rather than more seamlessly woven into the narrative.  These are mere quibbles to a scholarly audience and hardly detract from the finished product, but I would like to see Campbell revisit this theme one day in another title designed to reach more readers of popular history. In the meantime, if you are a student of Jacksonian America, this is an essential read that receives my highest recommendation.

Featured

Review of: The Last Founding Father: James Monroe and a Nation’s Call to Greatness, by Harlow Giles Unger

Did you know that the single greatest president in America’s first half-century was James Monroe? Even more than that, did you know that the most significant Founder of the fledgling Republic was James Monroe? That Monroe’s long-overlooked accomplishments and contributions dwarfed those of Washington, Jefferson and Madison and all the rest? That Monroe was a towering figure in both establishing and leading the new nation?  I didn’t either, but that is the boast of The Last Founding Father: James Monroe and a Nation’s Call to Greatness, by Harlow Giles Unger.

Should you suspect that I am unfairly exaggerating the author’s bold claim, look no further than page two of the “Prologue” to learn that while Washington may have won American independence, his legacy was little more than a “fragile little nation” and his “… three successors—John Adams, Thomas Jefferson, and James Madison—were mere caretaker presidents who left the nation bankrupt, its people deeply divided, its borders under attack, its capital city in ashes.” It was, apparently, left to the heroic, brilliant, and larger-than-life character of James Monroe to step in and make America great, as summarized by Unger:

Monroe’s presidency made poor men rich, turned political allies into friends, and united a divided people … Political parties dissolved and disappeared. Americans of all political persuasions rallied around him under a single “Star Spangled Banner.” He created an era never seen before or since in American history … that propelled the nation and its people to greatness.

That’s from page three. I might have closed the cover after that burst of hyperbole, which better channels the ending of a Disney movie than a historian’s measured analysis. But then I checked the dust jacket bio to find that Unger is “A former Distinguished Visiting Fellow in American History at George Washington’s Mount Vernon … a veteran journalist, broadcaster, educator and historian … the author of sixteen books, including four other biographies of America’s Founding Fathers.” Perhaps I was misjudging him? So, I read on …

Spoiler alert: it does not get any better.

Presidential biography is a favorite of mine, and I have read more than a couple of dozen. For the uninitiated, the genre tends to diverge along three paths: the laudatory, the condemnatory and the analytical.  While closer to the first category, The Last Founding Father really fits into none of these classifications. In fact, one might argue that it is less biography than hagiography, for the author is so consumed with awe by his subject that the latter is simply incapable of transgression in any arena. When I was a child, I could do no wrong in my grandmother’s eyes. If I did go astray, she would redefine right and wrong to suit the circumstances, so I always landed on the positive side of the equation. Unger offers similar dispensation for Monroe throughout this work.

Unger’s inflated reverence for Monroe should not diminish his subject’s importance to the early Republic, only compel us to examine the man and his legacy with a more critical eye. The list of “Founding Fathers”—a term only coined by Warren G. Harding in 1916—is somewhat arbitrary, and Monroe does not even always make the cut. The essential seven that all historians agree upon are: John Adams, Benjamin Franklin, Alexander Hamilton, John Jay, Thomas Jefferson, James Madison, and George Washington. Other lists are broader, and many also include Monroe, who was after all not only the fifth President of the United States, but also U.S. Senator, Ambassador to France and England, Secretary of State, and Secretary of War—at one point even holding the latter two cabinet positions simultaneously. Monroe’s tenure in the White House has famously been dubbed the “Era of Good Feelings,” but only school kids—and Unger, apparently—believe that this is because suddenly faction disappeared, and both rival politics and personalities gave way to a mythical fellowship. In fact, historians have long recognized that this period was characterized by the one-party rule of the Democratic-Republican Party that dominated after the disintegration of the Federalist Party, which had flirted with treason and been discredited in its opposition to the War of 1812. But Monroe’s Democratic-Republicans represented far more of a coalition of loose factions than the powerful central force that the party had been under the stewardship of Jefferson and Madison before him. The fissures unacknowledged by Unger were brewing all along, later made manifest in the Second Party System of Clay and Jackson.

Most studies of Monroe reveal a man of great personal courage with stalwart dedication to principle and service to his country. Few—Unger is the exception—credit him with the kind of intellectual brilliance seen in peers like Jefferson, Madison and Hamilton. Like Hamilton—who indeed once challenged him to a duel—Monroe seems to have possessed an outsize ego and a prickly sense of honor that was easily slighted if not subject to the praise and recognition he felt certain he rightly deserved, such as sole credit for the Louisiana Purchase! Nearly a decade earlier than that milestone, Monroe had served as ambassador to France but was later recalled by Washington, who found him too easily flattered and otherwise lacking in the traits essential to upholding American diplomatic interests. Monroe was stung by this, but in his long future in government service he was in turn to have fallings-out with both Jefferson and his old friend Madison, unable to tolerate differences in opinion and bristling in his perception of being ever snubbed by not being elevated to the prominence he felt due him. Like Jefferson’s, Madison’s presidency proved to be a disappointing chapter in a life marked by great achievements. But while the War of 1812 was hardly Madison’s finest hour, and Monroe indeed played a pivotal role during the existential crisis of the burning of Washington and its aftermath, Madison was hardly the bewildered, sniveling coward Unger portrays in his account, so incapacitated by events that Monroe had to heroically swoop in to serve as acting president and single-handedly rescue the Republic.

The many flaws in this biography are unfortunate, because Unger writes very well and citations are abundant, lending to the book the style and form of a solid history. On a closer look, however, the reader will find that the excerpts from primary sources that populate the narrative are often focused on superficial topics, such as food served at events, room furnishings, or styles of dress. And Unger seems to sport a weirdly singular crush on Monroe’s wife, Eliza, whom he describes as “beautiful” more than a dozen times in the text—and that before I gave up counting! Attractive or not, she seems as First Lady to have come off as cold and imperious, with aristocratic airs that she no doubt accumulated during her times abroad with her husband, when they lived often in a grand style that was well beyond their means. Oddly, far more paragraphs are devoted to descriptions of Eliza’s clothing and social activities, and her many debilitating illnesses, real or imagined, than to Monroe’s eight years in the White House.

A greater complaint is that for a book published as recently as 2009, conspicuous in its absence are the less privileged people that walked the earth in Monroe’s time, Native Americans and most especially the enslaved African Americans kept as chattel property by elite Virginia planters like Monroe—as well as Jefferson, Madison and Washington—something that manifestly flies in the face of recent historiographical trends. Although Monroe owned hundreds of human beings over the course of his lifetime, the reader would hardly know it from turning the pages of The Last Founding Father, where the enslaved are mentioned in passing if mentioned at all, such as: “Although Monroe had to sell some slaves to rescue [his brother] Joseph from bankruptcy, he held to the belief that brotherly ties were indissoluble …” [p207] Long before the more famous Nat Turner Revolt, there was Gabriel’s Rebellion, and Monroe was Governor of Virginia when it was repressed and twenty-five blacks were hanged in retribution. The slightly more than two pages given to this episode lack critical analysis but credit Monroe with promptly calling out the militia to put down the uprising [p140-142]. Such a cursory treatment of the inherent contradictions of the institution of chattel slavery to the ideals of the new Republic is an inexcusable blemish on any work of a twenty-first century historian. Since there is much in the literature about the incongruity of Monroe the plantation master—much like Jefferson—at times decrying while yet sustaining the peculiar institution, we can only conclude that Unger deliberately passed over this material lest it cast some aspersion upon the adoring portrait that this volume advances.

It pains me to write a bad review of any book. After all, the author typically labors mightily to generate the product, while I can read it—or not—in my leisure. But I am passionate about both historical studies and the rigors of scholarship, which should apply even more scrupulously to someone such as Harlow Giles Unger, who not only possesses appropriate credentials but has written widely in the field, and thus owes the student of history far more than this, which after all does no real service to the reader—nor to James Monroe by overstating his achievements while failing to contextualize his role as a key figure in the early Republic with the nuance and complexity that his legacy deserves.

Featured

Review of: Apollo 8: The Mission That Changed Everything, by Martin W. Sandler

Astronaut William Anders began: “For all the people on Earth the crew of Apollo 8 has a message we would like to send you:

In the beginning God created the heaven and the earth.
And the earth was without form, and void; and darkness was upon the face of the deep.
And the Spirit of God moved upon the face of the waters. And God said, Let there be light: and there was light.
And God saw the light, that it was good: and God divided the light from the darkness.”

On Christmas Eve fifty-one years ago, millions in the United States and around the globe—including this then eleven-year-old boy—gathered breathlessly around their TVs to watch the first live broadcast from space, an extraordinary transmission beamed back to earth from more than two hundred thousand miles away from an American spacecraft in orbit around the moon. The largest television audience to that date was treated to remarkable photographs of the forbidding moonscape, but far more awe-inspiring and humbling were the images they viewed of their very own living planet, appearing so tiny and so remote from such a great distance. The three astronauts closed out the broadcast by reading passages from the biblical book of Genesis. Lunar Module Pilot Bill Anders was followed by Command Module Pilot Jim Lovell, and then Commander Frank Borman, who added: “And from the crew of Apollo 8, we close with good night, good luck, a Merry Christmas, and God bless all of you – all of you on the good Earth.”

While this episode remains a heartwarming moment that celebrates both the universality of the human endeavor as well as the singularity of this accomplishment, it should not obscure the reality of what was really happening on that blue planet viewed from afar, of the wars and famines and cruelty and disasters that did not take a pause while space travelers read aloud from an ancient book that itself once bore witness to its own share of wars and famines and cruelty and disasters. Nor should it fail to remind us that these representatives of the earth blasted off from a badly fractured landscape at home.

The claim that America on this Christmas Eve of 2019 has never been this divided is at once refuted by a glance back to 1968, replete with acts of terror, campus unrest, cities in flames, mass demonstrations, political assassinations, and violence in the streets—the perfect storm of the increasingly unpopular war in Vietnam and the revolution of rising expectations among long-disenfranchised blacks frustrated by the pace of change. If there was a kind of unifying force that remained to serve as some sort of glue amid the chaos and dissonance of a splintered national polity it had to be the space program and its race for the moon. The actual moon landing was not until the following year, but 1968 closed with the remarkable Apollo 8 mission, the first manned spacecraft to orbit the moon, made more dramatic by that live Christmas Eve audio-video transmission from space that included those readings from Genesis, and later forever enshrined in our collective consciousness by the iconic photo “Earthrise” that depicts the earth rising over the moon’s horizon, snapped by astronaut Bill Anders, that is said to have inspired the environmental movement.

Martin W. Sandler revisits this existential moment that briefly comforted a troubled nation with the oversize and lavishly illustrated Apollo 8: The Mission That Changed Everything, directed at a young adult (YA) audience but suitable for all. I have read and reviewed Sandler before. The author has a talent for clear, concise writing that, while targeting a younger readership, does not dumb down the topic, a frequent failing in this genre of nonfiction. I obtained this book as part of an Early Reviewer program and my copy was an Advanced Reader’s Copy (ARC) with black and white images, but the published edition is full-color and worth the purchase if only for the magnificent color photographs, though these are nicely enhanced by a well-written narrative that encompasses the totality of this highly significant space mission and its ramifications back home. The only caution I would add is that I have detected glaring historical errors in some of Sandler’s other works. I did not stumble upon any here, but then I am hardly an expert on the space program. Thus, the reader should trust but verify!

Some—at the time and since—have objected to the astronauts’ choice of verses from Genesis, as if there were an attempt to impose religion from the beyond, or to celebrate the Judeo-Christian experience at the expense of others. We should not be so hard on them; they were simply seeking some kind of universal message to inspire us all. That they may have failed to please everyone may only underscore how diverse we are even as we transcend the myth of race to acknowledge that we all share the very same DNA, the same hopes and dreams and fears and needs and especially the desire to love and be loved. Astronaut Bill Anders himself returned from space as an atheist, awed by his place in the vast universe. I am not a religious person: I celebrate Christmas as a time for peace and love and Santa Claus. But I can still, like the astronauts on Apollo 8 fifty-one years ago, wish my readers a good night, good luck, and a Merry Christmas to all of you on the good Earth.

Click for: Apollo 8 Live Broadcast

Featured

Review of: Engleby, by Sebastian Faulks

Some years ago, I had the pleasure of reading the acclaimed masterpiece Birdsong, by Sebastian Faulks, which motivated me to pick up a couple of his other novels for later consumption, including Engleby. One day, I randomly plucked it off the shelf and turned to the first page. Honestly, it was not easy to put down. And, to be even more honest, there were times that I really wanted to.

As a reviewer, it sounds somewhat awkward or even unseemly to resort to a term like “creepy” to describe a novel, but that would most accurately describe the subtle if sustained punch in the gut I experienced while reading this one, propelled by a growing revulsion for the central character. As the narrative unfolds, that character—the eponymous Mike Engleby—is a working-class Brit on scholarship to “an ancient” university in the early 1970s. He comes across as a bit of an oddball, but for those of us who lived through this era that was neither unusual nor especially undesirable, given that to be an iconoclast in those days was often seen as a virtue. But the reader cannot help but experience an emerging disquiet as Engleby develops an infatuation that veers to obsession that then turns more ominously to the outright stalking of his bright and beautiful classmate Jennifer Arkland. Along the way, there are flashbacks to the bitter poverty of Engleby’s youth, the regular beatings by his father, the quotidian brutality of his life at public school where he is condemned to the unfortunate nickname “Toilet” and subjected to an ongoing torment whose cruelty stretches the limits of endurance—the cumulative effect of which, it becomes clear, shapes him into a bully, a thief, a drug dealer, an opportunist. Flash forward again and Jennifer has disappeared, never found, presumed murdered.

Did Engleby murder her? Could he be a serial killer? Is he a mere weirdo or a sociopath? That’s for you to find out: I don’t believe in folding spoilers into reviews. But the narrative is laced with plenty of clues, scattered within an interior monologue that invites an uncertain sympathy for a protagonist who at best provokes unease, at worst repulsion. Yet, it is the genius of the author to tempt the reader to veer from repugnance to empathy, against all odds, even if this shift may prove temporary. And the reader, like it or not, is ensnared in an uncomfortable fascination with this very same well-crafted interior monologue, a kind of labyrinth pregnant with Engleby’s barely suppressed anxiety, which he overcompensates for with delusions of grandeur and a disdainful arrogance for all others in his orbit—except perhaps, that is, for Jennifer Arkland. And then that anxiety grows contagious as the reader begins to question the reliability of the narrator! Are the things revealed by Engleby’s inner thoughts real or imagined? Is Faulks himself, acting as both wizard and jester, simply mocking us from behind the curtain?

The last time I found myself as deeply unsettled by a work of fiction, it was Perfume, by Patrick Süskind, the unlikely tale of an eighteenth-century serial killer, but that novel was tempered with a pronounced sense of the ironic if not especially comedic. Not so with this one: there’s nothing even a little bit funny about Engleby. For his part, Faulks proves himself a true artist of the written word, his pen taking full command of his character and his audience alike. I recommend it, even if it may keep you up at night.

Featured

Review of: Napoleon: A Life, by Adam Zamoyski

The most consequential figure of what historians dub Europe’s “long nineteenth century” (1789-1914)—from the start of the French Revolution to the outbreak of World War I—came to virtually define the first part of that era while setting forces into motion that shaped all that was to follow. Over the course of a single decade, Napoleon Bonaparte controlled not only much of the territory on the continent, but the entirety of its destiny. When he fell from power, the peace that was crafted in his wake largely held for a full century. The Europe that was obliterated by the catastrophe of the Great War that followed was the Europe both made and unmade by Napoleon. And even well beyond that, in the nearly two centuries since he walked the earth, no other individual—not Bismarck, not Stalin, not Churchill, not even Hitler—has emerged in the West, for ill or for good, to rival his significance or challenge his legacy. Yet these days, for most, Napoleon is, if not exactly a forgotten character, a much overlooked one, a rarely referenced ghost of a distant past whose specter, though perhaps unnoticed, nevertheless still haunts the twenty-first century capitals of Paris, London, Rome, Berlin, Warsaw and Moscow.

An outstanding remedy to our collective negligence is Napoleon: A Life, by Adam Zamoyski, a noted historian and author with a long resume who masterfully resurrects the outsize character that was the living man and places him in the context of his times. At nearly seven hundred pages, at first glance this hefty tome might seem intimidating, but Zamoyski writes so well that there are few sluggish spots in a fast-moving, highly accessible narrative that will likely take its place in the historiography as the definitive single-volume biography. And this is surely the treatment his subject deserves.

There could perhaps not have been a more unlikely individual to command the world stage and change the course of history than Napoleon Bonaparte, born to a family of minor Italian nobility of quite modest means on Corsica in 1769, somewhat ironically in the same year that the Republic of Genoa ceded the island to France. It may be a minor point but it certainly adds to that irony that the future Emperor of France apparently always spoke French with an atrocious accent, which—knowing the conceit of those native to the language—could only have rankled those in his orbit, both friend and foe. Yet, this is just one of the many, many contradictions that cling to Napoleon’s person. As a child, he was sent to a religious school in France, and later attended a military academy, which led to his commission as a second lieutenant in the artillery.

It was the outbreak of the French Revolution a few short years later that catapulted him onto the world stage in a bizarre trajectory that saw him first as a fervent Corsican nationalist seeking the island’s independence from France, then a pro-republican pamphleteer allied with Robespierre, and then artillery commander at the Siege of Toulon, where he first demonstrated his military genius. He was wounded but survived to be promoted to brigadier general at the age of twenty-four and later placed in command in Italy, where he led the army to victory in virtually every battle, while taking time out to crush a Royalist rebellion in Paris. He also survived his association with Robespierre. Proving himself as gifted in the partisan arena as he was on the battlefield, he adroitly navigated the dangerous and ever-shifting political ground of revolutionary France to engineer a coup and make himself dictator, euphemistically styled as First Consul of what was now a republic in name only. He was just thirty years old. Within five years, he was Emperor of France in a retooled monarchy that both resembled and served as counterpoint to the ancien régime that revolution had swept away.

The rare general with talents equally exceptional in the tactical and the strategic, Napoleon managed both on and off the battlefield to defeat a succession of great power coalitions aligned against him until he commanded much of Europe directly or through his proxies, while crippling British trade through his “continental system” that controlled key ports. Like Alexander two millennia before him, Napoleon was brilliant, courageous, opportunistic and lucky—all the ingredients necessary for unparalleled triumph on such a grand scale. Unlike Alexander, he outlived his conquests to try to remake his realm, in his case by spreading liberal reforms, stamping out feudalism, promoting meritocracy and codifying laws. But he also lived to fall from power and to fall hard. At the risk of stretching the metaphor, the ancient Greeks invented the term hubris to describe the tragedy in the excessive pride personified by men just such as Napoleon. Whereas Alexander looked to Achilles and the Olympian pantheon, Napoleon looked only to his own “star,” which he fully relied upon to guarantee his success in every endeavor. And one day that star dimmed. He famously overreached with the ill-conceived invasion of Russia that turned to debacle, but it was more than that. For all his genius, he ruled the French Empire like a medieval lord—or a crime boss—placing members of his extended family or his cronies, most of whom lacked competence or even loyalty, on the thrones of the puppet states that served him. His dramatic rise was met with an equally dramatic fall, and he ended his days in exile on a remote island in the South Atlantic, slowly succumbing to what was likely stomach cancer at the age of fifty-one.

Of course, you could learn all of this from the prevailing literature—there are literally thousands of books that chronicle Napoleon—but Zamoyski’s rare achievement is to capture the essential nature of his subject, something that too often eludes biographers. The Napoleon he conjures for us is a basket of contradictions: at once kind, despotic, magnanimous, ruthless, noble, petty, confident, insecure, charismatic, and socially awkward.  Zamoyski does not stoop to play psychoanalyst, but the Napoleon that emerges from the narrative often smacks of a narcissist and depressive who frequently rode waves of highs and lows. If nothing else, he was certainly a very peculiar man who was repellent to some just as others were somehow drawn to him irresistibly, a paradox perhaps captured best in this passage recounting the recollections of those who knew him as a young man:

He was out of his depth, not so much socially as in terms of simple human communication: he showed a curious lack of empathy which meant that he did not know what to say to people, and therefore either said nothing or something inappropriate. His gracelessness, unkempt appearance, and poor French … did not help … He could sit through a comedy … and remain impassive while the whole house laughed, and then laugh raucously at odd moments … [He once told] a tasteless joke about one of his men having his testicles shot off at Toulon, and laughing uproariously while all around sat horrified. Yet there was something about his manner that some found unaccountably attractive. [p92]

Zamoyski does not pass judgment on Napoleon, but deftly brings color, form and substance to his sketches of him so that the reader is rewarded with a genuine sense of familiarity with the living man, an accomplishment that cannot be overstated. If there is a flaw, it is that the work is skimpy on the historical backdrop, on the prequel to Napoleon; those not already well-schooled in the milieu of late eighteenth century Europe may be at a disadvantage. But this is perhaps a quibble, for to fill in that backdrop competently would have further swelled the size of the book and risked an unwieldy text. On the other hand, there is a welcome supply of many fine maps, as well as copious notes.

Napoleon’s ambition left thousands of dead in his wake, and he left his mark far beyond the Europe he transformed. Modern Egyptology was born out of Napoleon’s military campaign in Egypt; the famous “Rosetta Stone” was among the spoils of war, although it ultimately ended up in British rather than French hands. Napoleon was the force behind the Louisiana Purchase, which effectively doubled the size of the nascent United States. It was the impressment of American seamen during the Napoleonic Wars that was a leading casus belli in the War of 1812, and it was British exhaustion at the conclusion of that conflict that spared the young republic a harsher price for peace. Look closely and you will find Napoleon’s fingerprints nearly everywhere—and you will see them in far greater detail if you treat yourself to Zamoyski’s magnificent biography, which surely does justice to his legacy.


[CORRECTION: the podcast version of this review misidentifies the location of Napoleon’s death as on an island in the Pacific rather than in the South Atlantic, which has been corrected in the written text above.]

Featured

Review of: Working: Researching, Interviewing, Writing, by Robert A. Caro

While browsing a bookstore sometime in 1982, I picked up a thick hardcover entitled The Years of Lyndon Johnson: The Path to Power, by Robert A. Caro. I had never heard of Caro, but the jacket flap told of his winning the 1975 Pulitzer Prize for biography for his very first book, The Power Broker: Robert Moses and the Fall of New York. I had never heard of Moses either, but in those days before smartphones and Google could let me dig a little deeper, that accolade spoke directly to the author’s reputation. I did—and still do—like to browse bookstores and to read books about American presidents. The twenty bucks I shelled out to buy that book was probably most of the cash I had in my wallet that afternoon, something else that was and remains characteristic of me to this day: given a choice between buying lunch or a new book, I will almost always choose the latter. I mean, I can wait until dinner …

That volume of The Path to Power is 768 pages of small print, not including notes and back matter, of mostly dense material, but Caro’s voice is so commanding that I found myself both absorbed and obsessed. For those who have not read him, it is difficult to describe Caro’s style, which exists somewhere at the confluence of incisive reporting and towering epic, a kind of literary salad that blends the best of Edward R. Murrow and Robert Penn Warren—seasoned with a dash or two of Thucydides—that the reader is driven to devour.

There are great presidential biographers out there—think Robert Remini, David McCullough, Joseph Ellis, Jon Meacham—yet Caro is in a league all his own.  And unlike the others, he has not been prolific, devoting the decades since the publication of The Path to Power to just three books, all part of his The Years of Lyndon Johnson saga, one of which—Master of the Senate—is a landmark synthesis of history and biography and politics that won him a second Pulitzer Prize in 2003. Another ten years passed before the release of The Passage of Power, which only just follows LBJ into his first months in the White House. Now an octogenarian still doggedly at work on what is to be the final book in the series, Caro has broken precedent by releasing a slim volume that is a study of the author rather than his subjects.

This latest book, Working: Researching, Interviewing, Writing, is less a memoir than a profile of what Caro has set out to do and how he has approached the process, as neatly summarized by the subtitle. Surprisingly, Caro is not a historian, but instead started off as a journalist who won the respect of an old-fashioned hardboiled editor when his diligence in the field turned up information vital to a story. The editor, who had barely acknowledged him before, advised: “Turn every page. Never assume anything. Turn every goddamned page.” That has been his mantra ever since.

Caro is fascinated by power and those who wield it, and especially by the ways power can be obtained and exercised outside of ordinary channels.  For instance, his first subject— “master builder” Robert Moses—was never elected to any office, yet at one point simultaneously held twelve official titles and used his accumulated authority to preside over the utter and lasting reshaping of New York City and its suburbs. In his research on LBJ, by turning “every page,” Caro encountered an obscure reference that led him to learn that Lyndon Johnson’s political rise and own personal wealth was closely linked to a long-secret relationship with the principals of Brown & Root, a construction company that built roads and dams and was later enriched by government contracts sent their way by Johnson; in turn, their largesse was to overflow LBJ’s campaign coffers.  The rest is—quite literally—history.

A silent partner in Caro’s award-winning achievements has long been his wife Ina, who has quietly devoted her life to aiding his research and managing the household so that he could concentrate entirely on his book projects. In Working, Caro reveals that Ina once sold their home—without telling him—in order to ensure their financial solvency. Another time, when he announced they were moving to the Texas Hill Country for three years to continue his research on LBJ, Ina cracked: “Why can’t you do a biography of Napoleon?”  But she went along, without complaint.  And Caro makes it clear that Ina was no mere admin or assistant: she often sat across from him at long library tables and turned over half of those “goddamned pages” herself.

By my own calculation, I have read nearly three thousand pages of Robert Caro in his four volumes on Lyndon Johnson. I eagerly and impatiently await the final book. I did not know what to expect from Working, which is closer to memoir than autobiography but truly defies categorization.  Most great writers are incapable of talking about themselves without something like bitterness or bravado. Hemingway certainly couldn’t do it. Steinbeck—think Travels with Charley—was better at it, but he tended to conflate fiction and nonfiction along the way. Caro would have none of that. His work has always had a singular focus that has been about the unvarnished facts, about the warts and all, about the inconvenient truths that swirl about the lives of his subjects, and he delivers no more and certainly nothing less when he turns the lens on himself.

Working would be a party favor if written by anyone but Robert Caro. But because he is a magnificent writer gifted with extraordinary insight, it is a kind of minor masterpiece packaged in an undersized edition that is an easy read of less than two hundred pages. If there is a fault, it is the odd inclusion of an interview with The Paris Review from 2016 that is not only superfluous but distracting; I would urge skipping it. But that’s a quibble. Even if you have never heard of Robert Caro yet are fascinated with history and how solid research serves as the foundation to analysis, interpretation and an ever-evolving historiography, you should read this. If you have read Caro’s other books, of course, then you must read this one!

Featured

Review of: The Land Shall be Deluged in Blood: A New History of the Nat Turner Revolt, by Patrick H. Breen

In August 1831, in Virginia’s Southampton County, a literate, highly intelligent if eccentric enslaved man—consumed with such an outsize religious fervor that he was nicknamed “The Prophet” by those in his orbit—led what was to become the largest slave uprising in American history. Nat Turner’s Rebellion turned out to be a brief but bloody affair that resulted in the largely indiscriminate slaughter of dozens of whites—men, women, children, even infants—before it was put down. The failed revolt itself was and remains far less important than its repercussions and the dramatic echoes that still resounded many years hence during the secession crisis. Rarely would any historian of the American Civil War cite Nat Turner as a direct cause of the conflict—after all, the rebellion took place three decades prior to Fort Sumter—but it is almost always part of the conversation. Turner’s uprising not only reinforced but validated a deep-simmering paranoia of southern whites—who, like ancient Spartans vastly outnumbered by their helots, often found themselves a minority amid a larger enslaved population—and spawned a host of reactionary legislation in Virginia and throughout much of the south that outlawed teaching blacks to read and write, and prohibited religious gatherings without a white minister present. And while for those below the Mason-Dixon Line it underscored the perils of their peculiar institution, at a time when abolitionism was in its infancy it also served to remind at least some of their northern brethren that the morally questionable practice of owning other human beings was part of the fabric of southern life. Indeed, one could argue that the true dawn of what we conceive of as the antebellum era began with Nat Turner.

For such a pivotal event in the nation’s past, the historiography has been somewhat scant. There is the controversial “confession” that Turner dictated to lawyer Thomas Ruffin Gray in the days between his capture, trial and hanging, which some take at face value and others dispute. But in the intervening years, surprisingly few scholars have carefully scrutinized the rebellion and its legacy, which remains far better known to a wider audience from William Styron’s Pulitzer Prize-winning novel The Confessions of Nat Turner than from the analytical authority of credentialed historians.

A welcome remedy can be found in The Land Shall be Deluged in Blood: A New History of the Nat Turner Revolt, a brilliant if uneven treatment of the uprising and its aftermath by Patrick H. Breen, first published in 2016, that likely will serve as the academic gold standard for some time to come. While giving a respectful nod to the existing historiography—which has tended to breed competing narratives that pronounce Turner hero or villain or madman—Breen, an Associate Professor of History at Providence College, instead went all in by conducting an impressive amount of highly original research that locates the revolt within the greater sphere of the changing nature of the institution of slavery in southeastern Virginia in the early 1830s, which as a labor mechanism was in fact in a slow but pronounced decline. Nat Turner and his uprising certainly did not occur in a vacuum, but prior to Breen’s keen analysis, the rebellion was generally interpreted out of its critical context, which thus distorted conclusions that often pronounced it an anomaly nurtured by a passionate if deranged figure. For the modern historian, of course, this is not all that shocking, since the uncomfortable dynamics found in the relationships of the enslaved with wider communities of whites and other blacks (both free and enslaved) have until recent times typically been afforded only superficial attention or entirely overlooked. It is nevertheless surprising—given the notoriety of the Turner revolt—that until Breen there was such a lack of scholarly focus in this arena.

The book has eight chapters but there are three clear divisions that follow a distinct if sometimes awkward chronology. The first part traces the start and course of the rebellion and presents the full cast of characters of conspirators and victims. The second is devoted to subsequent events, including both the extrajudicial murder by whites of blacks swept up in the initial hysteria spawned by the revolt, as well as the carefully orchestrated trials and executions of many of the participants. The final and shortest section concerns the fate of Nat Turner himself, who evaded capture for two months—long after many of his accomplices had been tried and hanged.

The general reader may find the first part slow-going. The story of the revolt should be an exciting read, especially given the passion of prophecy that consumed Turner and the violence that it begat with its slaughter of innocents by an unlikely band of recruits whose motives were ambiguous. Instead, the prose at times is so dispassionate that the drama slips away. In my opinion, this is less Breen’s fault—he is, after all, a talented writer—than the stultifying structure of academic writing that burdens the field, the unfortunate reason why most best-selling works of history are not written by historians.  But I would encourage the discouraged to press on, because the effort is intellectually rewarding; the author has deftly stripped away myth and legend to separate fact from the surmise and invention pregnant in other accounts. If there can be such a thing as a definitive study of the Nat Turner rebellion, Breen has delivered it.

It is clear from the character of the narrative that follows that Breen’s true passion lies in the aftermath of the revolt, where he serves as revisionist to what has long been taken for granted as settled history. This is as it should be, because it was the repercussions of the rebellion and the way it was remembered (north and south) in the thirty years leading up to secession that were always of far greater importance to history than the uprising itself. And it is unfortunately this echo—much of which has been unsubstantiated—that has tainted later scholarship. The central notion that prevailed, which Breen challenges, is that the reaction to Nat Turner was a widespread bloodbath of African Americans by unruly mobs whose suspicion was that all blacks were complicit or were simply driven by revenge. The other, also disputed by Breen, is that whatever trust might have once existed between white masters and the enslaved had forever evaporated, the former ever in fear that the latter were secretly plotting a repeat of the Turner episode. Finally, Breen takes issue with the view of many historians that the authorial voice in Turner’s “confession” is unreliable because it was dictated to a white man who was guided by his own agenda when he published it.

Breen refutes the first by scrutinizing the empirical evidence in the extant records of the enslaved population. A little general background for the uninitiated here: the enslaved were treated as taxable chattel property in the antebellum era, so meticulous records were kept and a good deal of that survives. Many slave-owners insured their human “property,” often through insurance companies based in the north. If an enslaved person was convicted of a capital crime, the state compensated the slave-owner for the executed offender. Breen, as a good historian, simply reviewed the records to determine if prevailing views of the rebellion’s aftermath were accurate or exaggerated. What he learned was that there was indeed much hyperbole in reports of widespread massacres of African Americans. Yes, certain individuals and militias did commit atrocities by murdering blacks, and sometimes torturing them first. But the numbers were vastly overstated. And local officials quickly put a stop to this, motivated perhaps far less by ethical concerns than by an effort to protect valuable “property” from the extrajudicial depredations of the mob, whose owners would not then be duly compensated. Breen should be commended for his careful research—which demonstrates that long-accepted reports of mass murder are simply unsupported by the records—yet it seems astonishing that those who came before him failed to follow the same road of due diligence that he traveled. This should underscore to all budding historians out there that plenty of solid historical work remains to be done, even and especially in otherwise familiar areas like this one, where what turns out to be a flawed analysis has long been taken for granted as the scholarly consensus.

This business of assigning value to chattel human property is uncomfortable stuff for modern students of this era, but as those who have read The Price for Their Pound of Flesh, Daina Ramey Berry’s outstanding treatment of the topic, will know, it is absolutely essential to understanding how slavery operated in the antebellum south. The Land Shall be Deluged in Blood steps beyond the specifics of Nat Turner to offer a wider perspective in this vein, as well. The enslaved were often subject to the arbitrary sanctions of their masters, but those accused of capital crimes were technically granted a kind of due process of law. Breen points out that special courts of “Oyer and Terminer” that lacked juries—the same kind that convicted and hanged those accused of witchcraft in Salem—were ordained in Virginia to judge such cases. Initially enacted to expedite the trial process of the enslaved, the courts—captained by five magistrates who were typically wealthy slave-owners, and which duly supplied defense attorneys to the accused—came to have the opposite effect, convicting only about a third of those brought before them. [p108] Much of the reason for these results seems connected to an effort to limit the state’s cost of compensating slave-owners for those sent to the gallows for their crimes.

It turns out that these same courts also had a tempering effect on the trials of those accused of taking part in the rebellion. But this time, it wasn’t only about the money. Breen argues convincingly that the elite magistrates who controlled the trial process also created and marketed to the wider community a reassuring narrative that the uprising was a minor affair involving only a small number of the misguided. In the end, eighteen were executed, more than a dozen were transported, and there were even some acquittals. Thus, state liability was limited, and the peculiar institution was protected.

That reassurance seems to have been effective: freedom of movement for the enslaved subsequent to the revolt was not as constrained as some have maintained, as evidenced by the fact that Nat Turner was discovered in hiding and betrayed by other enslaved individuals who were hardly prohibited from wandering alone after dark. By the time Nat Turner was captured and executed, the rebellion was already almost history. As to the veracity of Turner’s “confessions” to Gray, Breen makes a compelling argument in support of Turner’s words as recorded, but that will likely remain both controversial and open to interpretation. So too will the person of Nat Turner. The horror of human chattel slavery might urge us to cheer Nat and his accomplices in their revolt, while the murder of babies in the course of events can’t help but give us pause. Likewise, we might harshly judge those white slave-owners who dared to judge them. But, of course, that is not the strict business of historians, who must sift through the nuance and complexity of people and events to get to the bottom of what really happened, warts and all.

I first learned of The Land Shall be Deluged in Blood when I sat enthralled by Breen’s presentation of his research at the Civil War Institute (CWI) 2019 Summer Conference at Gettysburg College, and I purchased a copy at the college bookstore. While I have some quibbles with the style and arrangement of the book, especially its strict adherence to chronology, which in part weakens the narrative flow, the author has made an invaluable contribution to the historiography with what is surely the authoritative account of the Nat Turner Rebellion. This is and should be required reading for all students of the antebellum era.

 

NOTE: My review of The Price for Their Pound of Flesh is here:

Review of: The Price for Their Pound of Flesh: The Value of the Enslaved, from Womb to Grave, in the Building of a Nation, by Daina Ramey Berry

 

Featured

Review of: Korea: Where the American Century Began, by Michael Pembroke

There’s an abiding irony to the fact that the United Nations, formed in the wake of a catastrophic global war to keep the peace, instead gave sanction to the first and most significant multinational armed conflict since World War II, not even five full years after Japan’s capitulation. It never would have happened had Stalin not ordered Soviet delegates to boycott that Security Council session in protest over the seating of Chiang Kai-shek’s government-in-exile on Taiwan instead of Mao’s de facto People’s Republic of China. It might never have happened had United States President Truman not been under enormous political pressure due to a hysterical campaign of right-wing outrage known as “Who Lost China” born out of Mao’s surprise victory in 1949, the same year that the Cold War grew much hotter when the Soviets successfully tested an atomic bomb, and fears of global communist domination magnified. It probably never would have found the support of so many other nations if the memories of the appeasement of Hitler were not still so fresh and compelling.

“It”—of course—was the Korean War, which raged across a wide swath of East Asian geography and remains unresolved to this very day. Historically, the Korean peninsula hosted at various times both competing kingdoms and a unitary state but was always dominated by its more powerful neighbors: China, Russia and Japan. In 1910, Japan annexed Korea, and an especially brutal occupation ensued. Following the Japanese defeat, the peninsula was divided at the 38th parallel into two zones administered in the north by the Soviet Union and in the south by the United States. Cold War politics enabled the creation of two separate states in the two zones, each hostile to the other. In June 1950, the Soviet-backed communist regime in the north invaded the pro-western capitalist state in the south, which spawned a UN resolution to intervene and launched the Korean War. At first South Korea fared poorly, but an American-led multinational coalition eventually pushed communist forces back across the 38th parallel. The fateful decision was then made by the Truman Administration to pursue the enemy and expand full-scale combat operations into North Korea. This brought China into the war and a long bloody struggle to stalemate ensued. Like a weird Twilight Zone loop, more than sixty-six years later a state of war still exists on the peninsula, and Kim Jong-un—the erratic supreme leader of a now nuclear-armed North Korea who regularly taunts the United States—is the grandson of supreme leader Kim Il-sung, whose invasion of the south sparked the conflict!

The origins, history and consequences of the Korean War make for a fascinating story that—especially given both its scope and its dramatic contemporary echo—has received far less attention in the literature than it deserves. Unfortunately, Michael Pembroke’s recent attempt, Korea: Where the American Century Began, contributes almost nothing worthwhile to the historiography. This is a shame, because Pembroke—a self-styled historian who currently serves as a judge of the Supreme Court of New South Wales, Australia—is a talented writer who seems to have conducted significant research for this work. Alas, he squanders it all on what turns out to be little more than a lengthy philippic that serves as a multilayered condemnation of the United States.

As the subtitle suggests, Pembroke’s bitter polemic is directed not only at US intervention in Korea, but at the subsequent muscular but misguided American foreign policy that has begotten a series of often pointless wars at a terrible cost in blood and treasure not only for the United States but also for the allies and adversaries in her orbit. Many—including this reviewer—might be in rough agreement with a good portion of that assessment. But the author sacrifices all credibility with a narrative that repeatedly acts as apologist for Mao, Kim Il-sung and even Stalin! For Pembroke, Truman takes on the outsize stature of a bloodthirsty monster who is not satisfied with the hundreds of thousands he vaporized at Hiroshima and Nagasaki, but is willing and even eager to sacrifice millions more in order to achieve his nefarious goal of global domination. Stalin and Mao, on the other hand, simply had their reasons, and were often misunderstood. Left unexplained is why, invested with that motivation and given that the United States in that era had overwhelming strategic nuclear and conventional superiority, Truman and his successors chose not to deploy that capability to pave a dramatic sanguinary road to hegemony.

To my mind, America’s war in Korea was a calamitous misstep, further exacerbated by the escalation that ensued with the crossing of the 38th parallel after achieving the initial objective of driving communist forces from the south. And one could make a good argument that none of the seemingly endless conflicts the United States has engaged in since that time was worth the life of a single American serviceman or woman. Yet, it is a hideous distortion to unfavorably juxtapose America—warts and all—with the endemic mass murder of Stalin’s Soviet Union. History, as I have often noted, is a matter of complexity and nuance, a perspective that seems utterly alien to Michael Pembroke in a book that is neither a history nor an analysis but simply an almost breathless diatribe that reduces characters to caricature and events to a bizarre comic book style of exposing villainy—but in this case all the villains happen to be American.

Because I received this book as part of an early reviewer’s program, I felt an obligation to plod through it to the very last page. In other circumstances, I would have abandoned it far, far earlier. As a reviewer, rarely would I suggest that a work has absolutely no value to a reader, but here I will make an exception: the best-case scenario for this book is for it to go out of print.

Featured

Review of: Hero of the Empire: The Boer War, a Daring Escape and the Making of Winston Churchill, by Candice Millard

The best book I ever read about Theodore Roosevelt was actually about a river, with T.R. in a supporting role. By lending focus to just a single episode in the colorful drama of his remarkable life in The River of Doubt, Candice Millard’s insight and gifted prose delivered a superlative study of the existential Roosevelt that has often eluded biographers, while recounting the little-known challenge of his sunset years that nearly broke him.

Millard brings a similar technique to her third and most recent effort, Hero of the Empire: The Boer War, a Daring Escape and the Making of Winston Churchill. With pen dipped in the inkwells of careful scholarship as well as great storytelling, the author adroitly marries history and literature to deliver an unexpectedly original and fascinating tale that reads like something from Robert Louis Stevenson. If there are similarities to her earlier work, there is also a twist, with the storied figures in nearly inverse circumstances. Rather than the late-in-life challenge that nearly does the central character in, this is the chronicle of a young man’s extraordinary adventure that was to launch his long celebrity.

Not that Churchill was ever really anonymous. But first: is it even possible to imagine a young Churchill? Think of the man and what comes to mind is the steely but beefy, even rotund British leader who was already all of sixty-five years old when he became Prime Minister at the onset of World War II, after many decades both in and out of power. (And he was to live yet another two decades after Hitler’s defeat, again both in and out of power!)  But the Churchill of Hero of the Empire is a slight fellow in his early twenties with an outsize ego and seemingly boundless ambition who talks too much and annoys most of those in his orbit. Yet, even then, he was hardly unknown, born into the upper echelons of the aristocracy, scion of a famous father who committed a kind of political suicide before his own early death, and of the celebrated and sometimes notorious American beauty Jennie Jerome, a brilliant iconoclast legendary for her many lovers. Before the action unfolds in Hero of the Empire, the twenty-four-year-old Winston had already traveled much of the world, had a brief career as an army officer, served as war correspondent, published two books, and made an unsuccessful run for Parliament.

Anticipating what would become known as the Second Boer War and determined to be in the thick of the fray, in 1899 Churchill obtained credentials as a journalist and set off for Cape Town, then on to Ladysmith amid fierce hostilities. Journalist or not, when his train came under Boer attack, he took the lead and mounted a heroic defense that, although it ultimately ended in his capture, is credited with saving countless lives among those aboard, most of whom were in uniform. His time as a prisoner of war and his bold escape are the central focus of the narrative.

Telling this story as well as Millard does might well be achievement enough, but this book succeeds far beyond that because the author not only brings a singular authenticity to her portrait of Churchill, but also to the wider canvas of the milieu that was England, the British empire, and the Boer republics at the turn of the century. This is especially impressive because rather than a trained historian, Millard comes to her craft with a master’s degree in literature, although there is no lack of citations to underscore the meticulous research that is the foundation of her work.

Millard’s account of Churchill’s escape from prison in Pretoria is no less than thrilling, tracing his footsteps as he wandered alone in unknown territory, stowed away on freight trains, and even concealed himself for a time in the bowels of a mine.  Eventually he made it to safety, hundreds of miles away at what was then Portuguese East Africa. The British public followed Churchill’s exploits with great excitement, and at war’s end he returned home to wide acclaim. His next attempt at Parliament met with success; his long career in politics and public service had begun.

What would any Churchill book be without the anecdotes born of his eccentricities? Hero of the Empire has its share, especially as it recounts his captivity, where he demonstrated that regardless of his circumstances he was and ever would be a creature of the elite. So it was that as a P.O.W., Churchill nevertheless regularly indulged in fine wines, traced troop movements on wall-size maps, and was only missed after his audacious escape because the local barber he had hired refused to be turned away by fellow prisoners when the time came for his regularly scheduled haircut!

Churchill has fallen out of favor with large portions of our modern audience. His racism, his imperialism, his misogyny, are all somewhat cringeworthy nearly one hundred fifty years after his birth. And it is not all political correctness: many of his views were well out of step with others more enlightened in his own era. At the same time, warts and all, Churchill was indeed a great man. It is impossible to imagine England under the siege of the Nazi war machine without Churchill cheering the Brits on, collaborating with FDR, demanding the sacrifice of the nation, and sounding his clarion call to “Never, never, never give in.” The character, the determination, the heroism, the steadfastness of that iconic figure are already manifest in the form of that spindly young overconfident fellow brought back to life for us once more in the pages of this fine book. There are indeed too few characters like Winston Churchill to animate our history, and far too few writers like Candice Millard to deliver such readable accounts of past times.

Featured

Review of: Embattled Freedom: Journeys through the Civil War’s Slave Refugee Camps, by Amy Murrell Taylor

From the start of the Civil War, enslaved African Americans sensed the opportunity for freedom as Union forces seized territory at the outer margins of seceded states. Initially, there was the odd phenomenon of officers in blue uniforms turning over escapees to their slave masters. But all that changed in 1861 at Fort Monroe, at the southern tip of the Virginia Peninsula, when the famously chameleonlike General Benjamin Butler refused to return the three enslaved men who fled to his lines. Butler himself, at least at this stage of his life, could not have cared less about blacks, slave or free, but reasoning that the Fugitive Slave Act no longer applied to the seceded states, and observing that every enslaved person serving as support behind Confederate lines freed up a white soldier to fire upon Union ranks, Butler ruled that such escapees be treated as “contrabands” of war and confiscated.  Contraband was an unfortunate term that equated the enslaved with property instead of people, but it nevertheless stuck—and so too did Butler’s policy, which only a few months later was enshrined by Congress in the Confiscation Act of 1861.

What began as a trickle to Butler’s fort turned into a veritable flood that eventually was to bring something like a half-million formerly enslaved people to seek shelter with the Union army over the next four years.  About one-fifth of these would later serve, often heroically, as soldiers in the United States Colored Troops (USCT), but what about the other roughly four hundred thousand? What became of them? If their fate never occurred to you before, it is because the story of this huge, largely anonymous population has remained conspicuous in its absence in much of the vast historiography of the Civil War—at least until Amy Murrell Taylor’s brilliant, groundbreaking recent book, Embattled Freedom: Journeys through the Civil War’s Slave Refugee Camps.

Fleeing to Union lines was only possible if the army was in your vicinity, which put this option out of reach for much of the south’s enslaved population. That approximately one-seventh of the Confederacy’s enslaved population of 3.5 million fled to the surmised safety of Union lines when this limited opportunity knocked gives the lie to the notion that the “peculiar institution” was benign and that the majority of the enslaved were satisfied with their lot—a sadly resurgent fiction promoted by “Lost Cause” apologists that has again found an unfortunate home within contemporary political discourse.  These 500,000 men, women and children—and yes, Taylor learned, there were indeed significant numbers of children—were of course not “contrabands” but refugees, as that term was understood both then and now. And they fled, usually in great peril, with little more than the rags on their backs, to what may have been a promise of freedom but also an unknown future fraught with difficulty.

What would become of them? It turns out that rather than a single shared outcome there was a variety of experiences that depended upon geography, the fortunes of war, and the arbitrary rule of local commanders. Neither the Union army nor the civilian north was prepared for the phenomenon of hundreds of thousands of black refugees, and the result was often not favorable to those who were the most vulnerable. At the dawn of the war, abolitionists still comprised only a tiny minority in the United States. Most of the north remained deeply racist, and those championing “free soil” generally had little concern for the welfare of African Americans on either side of the Mason-Dixon line. This reality informed policy, which even when well-intentioned tended to be patronizing, and was in fact frequently ignored. Embattled Freedom describes how orders were issued mandating both payments and provisions for refugees, who if physically capable were expected to provide the kind of support to the army as paid laborers that they might otherwise have given to the Confederate effort as slaves.  But in practice, they were rarely paid, their wages euphemistically diverted to the “general welfare,” or simply stolen by dishonest opportunists. And military necessity trumped all: there was a war on, blood was being shed, and the existential future of the nation was at stake. Refugees would ever remain a lower priority, at the mercy of the corrupt or the indifferent. Rarely consulted, the refugees had decisions made for them that often proved less than ideal.  The author treats us to a number of examples of this, but perhaps the most ironic is the campaign by well-meaning missionaries to equip refugee shelters with windows, when their occupants assiduously eschewed these for the sake of privacy and security.

Then there was the case of the Emancipation Proclamation, which freed the enslaved in Confederate-controlled territory, but paradoxically did not apply to areas controlled by the Union army. Only a rather obscure directive that would cashier any soldier returning a person to slavery served as an unlikely safety-net for refugees.  More significantly, there was the border state of Kentucky, which when it opted not to join the Confederacy became the largest slave state in the Union, something that endured until the Thirteenth Amendment was ratified, well beyond the end of the war. Refugee camps in Kentucky were ringed by slaveowners; wandering outside of camp could result in capture and enslavement that could be nearly impossible to dispute by a black person in a state where slavery was both legal and widespread.

Refugees ever lived at risk elsewhere in what can only be described as uncertain sanctuaries. Camps evolved into “freedman’s villages”—replete with churches, schools, stores and tidy public squares—that sprang up at the edges of Confederate territory occupied by Union troops, but long-term security was tenuous, dependent entirely on these garrisons.  If the army was redeployed, refugees were suddenly thrust into great danger and forced to flee once more lest they be captured and returned to slavery by roving bands of locals. It is well documented that Confederates habitually executed USCT troops wounded or seeking surrender. Less familiar perhaps was the devastation visited upon these undefended villages by rebels and their partisan allies enraged at the formerly enslaved living in freedom in their midst. Hunger often accompanied the refugee, even in the best of circumstances; a camp or village razed and burned could portend starvation.

The end of the war and abolition seemed to suggest a new beginning, but optimism was short-lived. Lincoln’s untimely death sent Andrew Johnson to the White House. The new president was deeply hostile to African Americans, and ensuing years saw pardons issued to former CSA political and military elites, property returned to once dislodged slave masters, and refugees terrorized and murdered, ultimately driven off the lands that once hosted thriving freedman’s villages. Where can you see a freedman’s village today?  You can’t: they were all plowed under, sometimes along with the bones of occupants less than willing to be displaced.

Embattled Freedom is an especially valuable resource because it contains not only a panoramic view of the refugee experience but an expertly narrowed lens that zooms in upon a handful of individuals that Taylor’s careful research has redeemed from obscurity.  Especially fascinating is the saga of Edward and Emma Whitehurst, an enslaved couple who had managed over time to stockpile a surprisingly large savings through Edward’s side work, in a unique arrangement with his owner.  Fleeing slavery, the entrepreneurial Whitehursts turned their nest egg into a highly successful and profitable store at a refugee camp in Virginia—only to one day lose it all to retreating Union forces desperate for supplies. There is also the inspiring story of Eliza Bogan of Helena, Arkansas, who as a refugee left the harsh existence of picking cotton behind only to endure one obstacle after another in her pursuit of life as a free woman in uncertain circumstances.  There are other stories, as well. These personal studies not only enrich a well-written narrative, but ever engage the reader well beyond the typical scholarly work.

A week after I finished reading Embattled Freedom, I sat in the audience during Amy Taylor’s presentation at the Civil War Institute Summer Conference 2019 at Gettysburg College, which highlighted both her passion and her scholarship. During the Q&A, I asked what surprised her most during her research. Hard-pressed to answer, she finally settled on the number of children that turned up in the refugee population.  I would suggest that as a topic for her next book.  In the meantime, drop everything and read Embattled Freedom. You will not regret it.

Featured

Review of: The War for the Common Soldier: How Men Thought, Fought and Survived in Civil War Armies, by Peter S. Carmichael

A few years ago, I had the honor of being selected for a key role on a team engaged in scanning, transcribing and digitizing a trove of recently rediscovered letters, diaries and narratives of the Massachusetts 31st Infantry, which turned up more than a century after these were compiled by their regimental historian but left unpublished. In a lifetime of studying the American Civil War, soldiers’ letters were hardly new to me, of course, but I found myself surprisingly emotional as I became one of the very first in so many decades to get a glimpse at the sometimes-hidden hearts of these long-dead souls. And there was something else: rather than the random excerpt, often highlighted for its dramatic impact, that makes a familiar appearance in the pages of history books, these materials represent continuous strands of communication by nearly two dozen individuals, some of which stretched over a three-year period. The stories they tell run the gamut from the mundane to the comedic to the horrific, but collectively the nature and the personalities of the storytellers emerge to reveal authenticity in their experience too frequently lost in grand narratives about the war. A careful read of a man’s letters home over several years often unexpectedly exposes truths that are omitted or deliberately distorted by the correspondent.

This overarching point is subtly but expertly made again and again in historian Peter S. Carmichael’s magnificent work, The War for the Common Soldier: How Men Thought, Fought and Survived in Civil War Armies, certainly one of the most significant recent contributions to the historiography. As primary sources, surviving letters from the front are critical and invaluable, but even more critical may be interpretation, which can be misled by taking these at face value, or plucking them out of context, or being seduced by the words of a man who wants his wife or mother—or especially himself—to believe that he is courageous or confident or committed to his cause when only some or none of those may be true.

In a dense, but highly readable account that brings a surprisingly fresh perspective to a frequently overlooked aspect of Civil War studies, Carmichael defies often prevailing generalizations of soldiers north and south that tend to predominate in the literature, reminding the reader that a tendency to oversimplification distorts the reality on the ground. Something like a total of 2.75 million men fought on both sides in the Civil War. These were living, breathing human beings, not simply the statistical figures fed into databases to produce the broad generalities pervasive in many narratives.  At the same time, he does not fail to locate and identify the commonalities in the rank and file that exist in multiple arenas, but his skillful approach to this end is guided by the nuance and complexity that is the mark of a great historian.

Carmichael’s well-written chronicle explores almost all aspects of a soldier’s life in camp, on the march and in battle, but that nuance is made most manifest in the chapter entitled “Desertion and Military Justice.” The accepted wisdom has long argued that bounty jumpers constituted the majority of those shot for desertion over the course of the war, and perhaps with some justification. But while the numbers underscore that there were plenty who likely fit that profile, Carmichael’s research demonstrates that such a broad brush obscures a reality that saw men on both sides leaving the lines and returning, frequently more than once, and typically with little or no penalty. This was especially common among Confederates, who usually fled not out of cowardice or convenience but rather to aid starving families back home desperate for survival. And there was, in many cases, a fine line between AWOL and desertion.  It is surprising how often luck or simply the vagaries of enforcement separated men made to sit on their own coffins with eyes bandaged while the firing squad formed up from those docked a month’s pay instead. It does seem that Lincoln’s moral compass was more finely oriented to the circumstances of the soldier missing from his company—even if this caused friction with the Union brass—than was the case on the other side, for the reality was that by percentage far more men clad in gray were put to death than those in blue, and some of these were mass executions before the lines. What is clear is that on both sides, the common soldier—even the veteran accustomed to the gore and slaughter of battle—was deeply disturbed when compelled to witness the cold-blooded murder of a fellow soldier, even if he thought the man got his just deserts.

A review such as this cannot possibly touch upon all of the themes Carmichael surveys in this outstanding study, but I was especially drawn to his treatment of the phenomenon of malingering, which instantly found a familiar face in Cpl. Joshua W. Hawkes, one of my men from the 31st, who bragged in letters to his mother about his health while he served away from the cannon fire as part of the occupation army in New Orleans, even taking swipes at those pretending to be ill to avoid duty. Yet later, on the very eve of combat, he fell victim first to “diarrhoea” and then to a bewildering set of ever-shifting complaints that kept him confined to a hospital bed for months until he was eventually discharged for disability. I read this man’s letters in isolation, of course, but Carmichael’s impressive research demonstrates not only that this soldier’s manufactured symptoms put him in the company of thousands of other “shirkers,” but also underscores how difficult it was for doctors equipped with the primitive diagnostic tools of mid-nineteenth century medicine to distinguish the truly afflicted from those talented at feigning illness to avoid combat or earn a discharge. As such, men who genuinely suffered were sent back to come under enemy fire, while others who were quite healthy succeeded in dodging the same.

Some years after my project with the 31st, I was given access to a private collection of unpublished letters from George W. Gould, a Massachusetts private killed at the bloody battle of Cold Harbor in 1864. I transcribed his correspondence and created a website for public access to honor him, and I visit his grave in Paxton MA several times a year. When I placed a flag on his grave to commemorate Memorial Day 2019, I found myself in somber reflection of not only the sacrifice of Private Gould, but also of the vast territory covered in The War for the Common Soldier, because although his name appears nowhere in the narrative this book is surely about George W. Gould and every man who marched alongside him, as well as every man he marched against in opposition with musket held high. Pvt. George W. Gould and Cpl. Joshua W. Hawkes are just two of the millions who either gasped their last breaths on Civil War battlefields or drank beer at memorials in the decades that followed.  If you want to understand that terrible war, you should indeed visit battlefields and explore the latest historiography, but you should also pause to read Carmichael’s superlative work. The truth is that you will never comprehend the Civil War until you come to understand the Civil War soldier. Some books should be required reading. This is one of them.

—————————————————————————————

[REVIEW ADDENDUM: Some years back, I had the great honor of being selected for a key role on a team engaged in scanning, transcribing and digitizing a trove of recently rediscovered letters, diaries and narratives of the Massachusetts 31st Infantry—a regiment that first served with Benjamin Butler as an occupying force in New Orleans, and later as part of the Red River campaign under Nathaniel Banks—which turned up in the archives of the Lyman & Merrie Wood Museum of Springfield History more than a century after they were compiled by their regimental historian but left unpublished due to his untimely death. These materials can be accessed at: https://31massinf.wordpress.com

I found Carmichael’s treatment of malingerers especially fascinating, because it related to my own work with the Massachusetts 31st and Cpl. Joshua W. Hawkes, who in letters to his mother made dozens of references to his generally good health during the first portion of his service, where he thrived as part of the occupying force under Benjamin Butler in New Orleans. In one missive from the autumn of 1862 [letter 10/18/62], he even bragged about how quickly he recovered from the “ague” while taking a swipe at those who pretended to be ill, noting that while he was “back to duty now there is so much playing off sick I do not wish any such name.” Ironically then, in April 1863, on the eve of what would have been his first foray into combat, [letter 04/17/63] Hawkes was beset with “diarrhoea” [sic] which eventually led to his return to New Orleans, this time to the St. James Hospital, where a bewildering set of ever-shifting complaints kept him confined—but not incapable of eating fairly well, such as “an egg in the morning, a piece of toasted bread each meal and a little claret wine,” [letter 6/4/63] and occasionally exploring the city when granted a pass—until he eventually succeeded in gaining a discharge for disability in July 1863. In one of his more histrionic letters to mother, he proclaims:

“I am perhaps disposed to magnify my ails, but when I have seen men brought in here who had been forced to march with diarrhoea [sic] … coming here too weak to walk and living but a week or two, then I have thought it was not best to beg to be sent away to the exposures of an army on active duty in the field. They can call me a coward, a shirk, what they choose, but I think it a duty to take care of my health not only for myself but on my mother’s account, what do you think of this logic?” [letter 06/04/63]

Apparently, this “logic” served Hawkes well, since he was sent home without ever coming under enemy fire and lived on until 1890!

Hawkes’ letters referenced above are accessible at: https://31massinf.wordpress.com/correspondence/letters-of-joshua-w-hawkes-part-4-1863/

Some years after my project with the 31st, I was given access to a private collection of the unpublished letters of Pvt. George W. Gould, who was killed at the bloody battle of Cold Harbor in 1864. He has come to serve as my “adopted” Civil War soldier, so by honoring him I likewise honor all of those who have made the ultimate sacrifice. I scanned and transcribed his letters and created a website to honor him, which can be accessed at:  https://resurrectinglostvoices.com

I have attached this addendum not because these particular soldiers who fell or survived have a greater or lesser import than any of the other hundreds of thousands who served in the American Civil War, but rather to add meaningful context, and to underscore the essential point of Carmichael’s wonderful book, which is that you must read far more deeply into what these men had to say in their letters home if you really want to try to understand the war at all.]

Featured

Review of: Wanting, by Richard Flanagan

Tasmanian author Richard Flanagan has written seven novels, one of which—Gould’s Book of Fish—I would rank among the very finest of twenty-first century literature to date.  I primarily read books of history, biography and science these days, but I do stray to the realm of fiction from time to time. When I happen upon a writer whose literary output not only consistently transcends the best published fiction of its day, but is so iconic that it comes to define its own genre—Cormac McCarthy and Haruki Murakami also come to mind—I latch on to that novelist and set out to read their full body of work. Wanting marks my completion of all of Flanagan’s novels, and it turns out that I saved one of the very best for the very last.

There is irony here because I long resisted this novel, based upon its off-putting description on Flanagan’s Wikipedia page—“Wanting tells two parallel stories: about the novelist Charles Dickens in England, and Mathinna, an Aboriginal orphan adopted by Sir John Franklin, the colonial governor of Van Diemen’s Land, and his wife, Lady Jane Franklin”—which struck me as a formula for fictional disaster! It turns out that I could not have been more wrong.

While several of Flanagan’s novels include characters from history, it would not be accurate to tag these as historical fiction, the way that category is generally understood. But then, the author’s work often defies classification. Flanagan is all about redefining genres—or creating new ones. Think Gabriel García Márquez, John Irving, André Brink: Richard Flanagan truly belongs in that league.

The real Sir John Franklin did indeed serve as Lieutenant Governor of Van Diemen’s Land (today’s Tasmania), but he is better remembered as the Arctic explorer who made a tragic end in 1847 in a disastrous attempt to chart the Northwest Passage, when his ships became icebound, resulting in his death as well as that of his entire crew. The legend of the lost expedition he commanded, and the true fate of his crew, have been the subject of much speculation right down to the present day, and Franklin has often been lionized for his heroism. But the John Franklin of Wanting is not only less heroic but a grotesque, self-absorbed, disturbing individual. Franklin and his equally narcissistic wife, Lady Jane—desperate for a child of her own—ignore prevailing taboos to adopt Mathinna, also a historic figure, one of the few full-blooded aborigines still remaining on the island after a sustained reign of terror by colonial settlers and a succession of epidemics had reduced their numbers to near extinction. What at first glance smacks of altruism masks more questionable desires by each of the Franklins—their brand of “wanting”—that Mathinna comes to fulfill, or fails to fulfill. The tragedy of Mathinna is brilliantly revealed through the nuance and complexity of a masterfully written narrative that subtly draws the reader in to expose a series of horrors hidden among the mundane, ever chilling yet never stooping to the gratuitous.

As if these characters and themes were not sufficiently complicated for any work of fiction, the novel contains an equally compelling parallel tale, told in alternating chapters, of author Charles Dickens in London, some ten thousand miles away. The connection of the Franklins to Dickens was a visit by Lady Jane to the famed novelist, seeking his support. In the years after her husband was lost to the Arctic, Lady Jane devoted her life both to memorializing him and sponsoring expeditions to locate him, in the feeble hope that he survived. Then evidence emerged that Franklin was in fact dead, hinting that in their last gasps he and the crew resorted to cannibalism to survive. Franklin’s widow will have none of it, and she enlists the aid of England’s most celebrated figure to defend Franklin’s honor against such horrid innuendo. Dickens, a Victorian rags-to-riches miracle who is both brilliant and wildly successful while yet morose and dissatisfied, haunted by the death of a favored child and locked in a loveless marriage, is plagued by his own sort of “wanting.” The intersection of his deepening well of discontent and Lady Jane’s determination to restore her husband’s reputation serves as the linchpin of the novel, spawning new purpose in Dickens even as Lady Jane basks in anticipation of the martyred explorer’s vindication. Dickens is far more intelligent and far more accomplished than either of the hapless Franklins, but despite his genius and outsize public persona he shares a similar unmistakable shallowness in his nature. In Flanagan’s Wanting, Dickens struggles to exist outside of the characters in his novels, and then takes it upon himself to produce, direct and cast himself in a role on the stage that permits him to stand before an audience as the heroic, romantic figure he longs to be.

Fiction reviews should largely avoid spoilers, so I will leave it here, but history buffs will certainly google the main characters to learn what really happened. It won’t be giving much away to note that six years after Wanting was published in 2008, the wreck of the HMS Erebus—one of Franklin’s ships—was discovered, and two years after that his second ship, the HMS Terror, was found, said to be in pristine condition. Even prior to that, evidence that cannibalism was in fact part of the crew’s final days was substantiated, contradicting both Lady Jane and the ardent defense mounted by Dickens. I will withhold the fate of poor Mathinna, other than to note that her gripping story—in the novel and in real life—will likely shadow the reader long after the last page of this book is turned.

I believe that every fiction review should include a snippet of the author’s own pen for those unfamiliar with their style and talent.  This bit concerns a minor character—if any of Flanagan’s characters can be said to be minor ones—an aging actress in Dickens’ London:

On the night she had received the news of Louisa’s death, leaving her the only surviving member of her family, Mrs Ternan had stifled her weeping with a pillow so her daughters would not hear her heart breaking and would never suspect what she now knew: that every death of those you love is the death also of so many shared memories and understanding, of a now irretrievable part of your own life; that every death is another irrevocable step in your own dying, and it ends not with the ovation of a full house, but the creak and crack and dust of the empty theatre. [p90]

That powerful excerpt is just a tiny sample of Flanagan’s superlative prose. Wanting ranks amongst his finest novels, which in addition to Gould’s Book of Fish should also include Death of a River Guide, and The Narrow Road to the Deep North, although there is not a bad one in the catalog. For the uninitiated who would like to experience Flanagan’s art, Wanting is a great place to start. Perhaps you may find yourself, like this reviewer, going on to read them all.

[I have reviewed several other novels by Richard Flanagan here: Death of a River Guide: https://regarp.com/2015/07/23/review-of-death-of-a-river-guide-by-richard-flanagan/; The Sound of One Hand Clapping: https://regarp.com/2017/06/04/review-of-the-sound-of-one-hand-clapping-by-richard-flanagan/; The Narrow Road to the Deep North: https://regarp.com/2015/02/02/review-of-the-narrow-road-to-the-deep-north-by-richard-flanagan/; and First Person: https://regarp.com/2018/09/02/review-of-first-person-a-novel-by-richard-flanagan/]

Featured

Review of: Life in Deep Time: Darwin’s “Missing” Fossil Record, by J. William Schopf

As a reader, some of my most serendipitous finds have been plucked off the shelves of used bookshops. Such was the case some years ago with Cradle of Life: The Discovery of Earth’s Earliest Fossils, by J. William Schopf, a fascinating account of how the author in 1965 was the first to discover Precambrian microfossils of prokaryotic life in stromatolitic sediments in Australia’s Apex chert dated to 3.5 billion years ago, the oldest confirmed evidence for life on earth at the time. My 2017 review of Cradle of Life—nearly twenty years after it was first published—sparked an email exchange with Bill Schopf that later led to his sending me a signed edition of his most recent book, Life in Deep Time: Darwin’s “Missing” Fossil Record.  He did not ask me to read and review it, but naturally I did.

In this work, Schopf—an unusually modest man of outsize accomplishment—typically credits good fortune rather than his own estimable talents, often emphasizing the centrality of teamwork in the pursuit of sound science, as well as frequently paying tribute to the notion that each discovery and its discoverers are after all “standing on the shoulders of the giants” that preceded them. A young grad student when he first got into the game, at seventy-seven the author now remains the most significant living survivor of those paleobiologists who devoted decades to identifying and substantiating traces of the most ancient forms of life on the planet. He feels the clock ticking, and thus is strongly motivated by a desire to leave a record of the journey that led to such consequential discoveries now that most of his peers have passed on.

The result is Life in Deep Time, a curious book—actually something of a blend of three different kinds of books—that succeeds more often than not in its efforts, even if at times it can be an uphill climb for the general reader. It is first and foremost a memoir that dwells for a surprisingly long time on the author’s youth and upbringing, which can be awkward at times because of his decision to employ a third-person limited literary technique in the narrative, so that it is “Bill wondered about …” rather than “I wondered about …” Early on, the reader might grow a bit impatient as Bill negotiates high school, often under the disapproving glare of his father, an admirable man who nevertheless sets impossibly high standards for his son and is quite difficult to please. Yet, even then Schopf is ever the optimist, always grateful for that which goes his way, and treating that which does not as a valuable learning experience. Rather than being scarred from the travails of enduring a demanding parent, he seems to sit in awe of a father who sets challenges that are always another chalk-mark higher than Bill can grasp. Such circumstances for another might leave that child a substance abuser or a ne’er-do-well, but it simply inspires Bill Schopf to be the best-of-the-best, fully absent an uncontainable ego or an axe to grind.

Beyond memoir, the second focal point of the book recounts Schopf’s scientific achievements, while paying tribute to those he worked with, many of whom are little known or entirely unknown outside of the paleobiology community. Science, the author repeatedly underscores, is a team effort. While the ever-modest Schopf does not dodge the recognition he clearly deserves for his key contributions to the field, he makes certain that credit gets appropriately shared among mentors and colleagues and even assistants.

Schopf’s work has spawned controversy that sometimes spilled over into the public arena. In the first case, there was pushback on his remarkable find of those 3.5-billion-year-old microfossils. Peer-reviewed science upheld his claim, although a prominent rival paleobiologist continued to dispute it. In the second, Schopf was brought in by NASA in 1996 to evaluate the extraordinary if premature announcement that life had been identified in a Martian meteorite, which was trumpeted by scientists, politicians and the media. Schopf was skeptical, and subsequent careful research proved him correct. The author’s well-written examination of these controversies is both coherent and enlightening, although blemished a bit by the continued use of that third-person limited literary technique, which feels especially awkward as he answers his critics through the narrative.

Schopf’s greatest triumph was certainly his discovery of those ancient fossils in Australia’s Apex chert, detailed in Cradle of Life and revisited in Life in Deep Time. Modern science has established that the earth is a little more than 4.5 billion years old, but in the mid-nineteenth century, when Charles Darwin devised his theory of evolution, no one could be sure what the true age of the planet was, although most scientists knew it was far older than the six thousand years that theologians claimed. In his groundbreaking 1859 treatise, On the Origin of Species, Darwin estimated that the erosion of England’s Sussex Weald must have taken some 300 million years, but he was taken to task on this by the famed Lord Kelvin, who publicly scolded that the earth could not possibly be older than 100 million years. Whatever the actual number, Darwin was deeply troubled because the process of natural selection that he envisioned would take much, much longer in order for higher life forms to evolve. In the century that followed Darwin, advancing scientific sophistication established the true age of the earth with greater specificity, but it turned out that identifying the planet’s earliest life forms proved quite elusive. This is because traces of these unicellular organisms lacking a membrane-bound nucleus—the prokaryotes that include Archaea and Bacteria—can be maddeningly difficult to identify, and often actually appear to be inorganic remains with strikingly similar characteristics. A famous false positive in this arena set paleobiology back for many decades. As a result, even as late as 1965, Schopf’s find of 3.5-billion-year-old microfossils of prokaryotic life proved controversial, although it eventually gained full acceptance by the scientific community.

The science behind all this is remarkably complex, and that is the third focus in Life in Deep Time, a welcome addition for those comfortable with textbooks on paleobiology, but often inaccessible to the general reader. I am trained in history rather than science, so I found some challenging moments in Cradle of Life that had me re-reading a paragraph or two, but much of it was indeed comprehensible to me as a non-scientist. That is not always the case with the final section of Life in Deep Time, which casually includes sentences such as this one:

“By this time, Bill had gained sufficient knowledge of the chemistry of kerogen, the coaly carbonaceous matter of which ancient microscopic fossils are composed, that he imagined that if the dominating polycyclic aromatic ring structures of the fossil kerogen were irradiated with an appropriate wavelength of laser light, they too would fluoresce and produce the images he sought.” [p186]

Material like this is certainly not impenetrable for an educated reader, but long discourses in this vein can lose a wider audience not schooled in paleobiology. Perhaps this content, although critical to scientists reading the book, might have been better placed in the appendix so as not to lose the flow of an otherwise engaging narrative.

While portions of Life in Deep Time may be difficult to navigate for the general reader, I would nevertheless recommend it. Bill Schopf is a remarkable man, a great scientist and a fine writer. The various threads of the tale he relates here add up to a storied saga of the evidence-based search for the earliest life on the planet, as well as that of the distinguished if often otherwise anonymous men and women who were responsible for marking one of the greatest milestones in recent scientific history. The voice of Bill Schopf is a humble yet commanding one: it deserves to be heard.


[My review of: Cradle of Life: The Discovery of Earth’s Earliest Fossils, by J. William Schopf, referenced above, is here:  https://regarp.com/2017/05/14/review-of-cradle-of-life-the-discovery-of-earths-earliest-fossils-by-j-william-schopf/ ]

Featured

Review of: Egypt: Lost Civilizations, by Christina Riggs

Apparently, Sigmund Freud spent the final year of his long and productive life as a refugee from the Nazi menace, in a house in London that is now a museum to his legacy. On the great exile’s preserved desk still sits a good number of statuettes from ancient cultures that he collected, including on one corner a carved stone baboon—known as the “Baboon of Thoth”—symbolic of that ancient Egyptian deity identified with both writing and wisdom. “Freud’s housekeeper recalled that he often stroked the smooth head of the stone baboon, like a favourite pet.” [p13] This anecdote serves as an introduction to Egypt, by Christina Riggs, a 2017 addition to the wonderful Lost Civilizations series that also features volumes devoted to the Etruscans, the Persians, and the Goths.

I was so taken by one of these—The Indus, by Andrew Robinson—that I put the others on a birthday list later fulfilled by my wonderful wife, so I now own the remainder of the set, each one destined to sit in queue in my ever-lengthening TBR until its time arrives. Egypt came up first. But it turns out that Riggs’ book stands apart from the others because it is not at all a history of Egyptian civilization, but rather a studied essay on the numerous ways that ancient Egypt came to be understood by subsequent cultures, its historical record manipulated and frequently distorted to support forced interpretations that suited its various interpreters. The toolkit deployed to construct these sometimes elaborate visions—visions that flattered the later civilizations far more than they accurately represented the ancient one that inspired them—included Egypt’s monumental architecture, its tomb painting, its mummified dead, its hieroglyphs, even abstract and unfounded notions of race and superiority, as well as, of course, objets d’art like the “Baboon of Thoth.”

Riggs, whose background is in art and archaeology, writes well and presents a series of articulate arguments to support her examination of all the ways Egypt has echoed down through the ages. It is often overlooked that to the first-century Roman tourists who scribbled graffiti on tombs in the Nile valley, the pyramids of Giza were more ancient by half a millennium than those long-dead Romans are to us today! So, it is a very long echo indeed. Alas, for all of Riggs’ talent, I myself made a poor audience for her narrative. I opened the cover yearning to learn more about Egypt, not more about how we recall it. I might not have made the mistake had I noticed at the outset how her title—which is absent the definite article—differed from the others in the series. There is The Indus, The Barbarians, The Etruscans. Riggs’ edition is simply Egypt. That should have been a clue! But that is, as we say on the street, “my bad,” not the author’s. Despite this, I did find enough to hold my interest, to finish the book, and to recommend it—but only to those with a far greater interest in art history and interpretation than I possess.

I have reviewed other volumes in the Lost Civilizations series here:

Review of: The Indus: Lost Civilizations, by Andrew Robinson

Review of: The Etruscans: Lost Civilizations, by Lucy Shipley

Review of: The Sumerians: Lost Civilizations, by Paul Collins


Featured

Review of: The Phantom Atlas: The Greatest Myths, Lies and Blunders on Maps, by Edward Brooke-Hitching

A small island called “Bermeja” in the Gulf of Mexico that was first charted in 1539 was—after an extensive search of the coordinates—found to be a “phantom” that never actually existed in that latitude, or anywhere else for that matter. It turns out that this kind of thing is not unusual: countless phantom islands, some the stuff of great legend, appeared on charts dating back well beyond the so-called “Age of Discovery” to the very earliest maps of antiquity. What is unusual about Bermeja is that its nonexistence was only determined in 2009, after showing up on maps for almost five hundred years!

The reader first encounters Bermeja in the “Introduction” to The Phantom Atlas: The Greatest Myths, Lies and Blunders on Maps, by Edward Brooke-Hitching, a delightful, beautifully illustrated volume that is marked by both the eclectic and the eccentric. But the island that never was also later gets its due in its own chapter, along with a wonderful, detailed map of its alleged location. This is just one of nearly sixty such chapters that explore the mythical and the fantastical, ranging from the famous and near-famous—such as the Lost Continent of Atlantis and the Kingdom of Prester John—to the utterly obscure, like Bermeja, and the near-obscure, like the island of Wak-Wak. While the latter, also known as Waq-Waq in some accounts, apparently existed only in the imagination of the author of one of the tales in One Thousand and One Nights, it nevertheless made it into the charts courtesy of Muhammad al-Idrisi, a respected twelfth-century Arab cartographer.

But The Phantom Atlas is not all about islands. There are mythical lands, like El Dorado and the Lost City of the Kalahari; cartographic blunders, such as mapping California and Korea as islands; even persistent wrong-headed notions like the Flat Earth. There is also a highly entertaining chapter devoted to the outlandish beings that populate the 1493 “Nuremberg Chronicle Map,” featuring such wild and weird creatures as the “six-handed man,” hairy women known as “Gorgades,” the four-eyed Ethiopian “Nistyi,” and the dog-headed “Cynocephali.” That at least some audiences once entertained the notion that such inhabitants thrived in various corners of the globe is a reminder that the exotic characters invented by Jonathan Swift for Gulliver’s Travels were not so outrageous after all.

One of the longer and most fascinating chapters, entitled “Earthly Paradise,” relates the many attempts to fix the Biblical Garden of Eden to a physical, mapped location. The author places that into the context of a wider concept that extends far beyond the People of the Book to a universal longing that he suggests is neatly conjured up with the Welsh word “Hiraeth,” which he loosely defines as “an overwhelming feeling of grief and longing for one’s people and land of the past, a kind of amplified spiritual homesickness for a place one has never been to.” [p92] It is charming prose like that which marks Brooke-Hitching as a talented writer and distinguishes this volume from so many other atlases that are often simply a collection of maps mated with text to serve as a kind of obligatory device to fill out the pages. In happy contrast, there are enchanting stories attached to these maps, and the author is a master raconteur. But the maps and other illustrations, nearly all in full color, clearly steal the show in The Phantom Atlas.

Because I obtained this book as part of an Early Reviewers program, I felt an obligation to read it cover-to-cover, but that is hardly necessary.  A better strategy is to simply pick up the book and let it open to any page at random, then feast your eyes on the maps and pause to read the narrative—if you can take your eyes off the maps! From al-Idrisi’s 1154 map of Wak-Wak, to Ortelius’s 1598 map of the Tartar Kingdom, to a 1939 map of Antarctica featuring Morrell’s Island—which of course does not really exist—you are guaranteed to never grow bored with the visual content or the chronicles.

There are, it should be noted, a couple of drawbacks in arrangement and design, but these are to be laid at the feet of the publisher, not the author. First of all, the book is organized alphabetically—from the Strait of Anian to the Phantom Lands of the Zeno—rather than grouped thematically, which would no doubt have made for a more sensible editorial alternative. Most critically, while the volume is somewhat oversize, the pages are hardly large enough to do the maps full justice, even with the best reading glasses. Perhaps the cost was prohibitive, but given the quality of the art this volume well deserves treatment in a much grander coffee-table-size edition. Still, despite these quibbles, fans of both cartography and the mysteries of history will find themselves drawn to this fine book.

The phantom island of Bermeja, featured in an 1846 map.

____________________________________________________________________________________________

MAP CREDIT: Tanner, Henry S. – A Map of the United States of Mexico, 1846, public domain, https://en.wikipedia.org/wiki/Bermeja#/media/File:Bermeja.jpg

ILLUSTRATION CREDIT: A cynocephalus. From the Nuremberg Chronicle (1493), Hartmann Schedel (1440-1514), Nuremberg Chronicle (Schedel’sche Weltchronik), page XIIr, public domain, https://en.wikipedia.org/wiki/Cynocephaly#/media/File:Schedel%27sche_Weltchronik-Dog_head.jpg

Featured

Review of: The Thin Light of Freedom: The Civil War and Emancipation in the Heart of America, by Edward L. Ayers

When I was growing up in the 1960s, the Civil War was often dubbed a struggle of “brother against brother,” uttered with a smack of wonderment at how a nation united by so many commonalities could have come apart like that, only one short century prior, taking more than six hundred thousand lives in the process. Then, as the centrality of slavery came to be properly emphasized, both historiography and sentiment shifted. Certainly, there were plenty of families divided by war—perhaps most famously Mary Lincoln’s, whose brothers fought for the Confederacy—but the real division turned out to be geographic and defined more by the South’s “peculiar institution” than habit or climate. Alexis de Tocqueville’s oft-cited anecdotal 1835 comments, in Democracy in America, that sharply characterized the vast cultural gulf that lay between free and slave states on opposite sides of the Ohio River, turned out to reflect a true demarcation that saw two different visions of America evolve within a single nation. Slavery defined the south, even if most southerners were not slaveowners, so that long before secession the south had indeed become another country.

That such conclusions can also be overdrawn was brilliantly demonstrated by historian Edward Ayers in his magnificent 2003 work, In the Presence of Mine Enemies: War in the Heart of America, 1859-1863, which surveys the “Great Valley” that stretches north of the Mason-Dixon line to encompass Franklin County, Pennsylvania, and south of it to include Augusta County, Virginia. Slavery was indeed part of the fabric of life in the lower valley in cities like Staunton, Virginia, yet on the eve of the war its citizens still had much more in common than not with denizens of the upper valley in cities like Chambersburg, Pennsylvania, which was free soil. Relationships that went well beyond trade flourished in a porous border of communities that shared a largely common identity. Much of Augusta County was old Whig and staunchly Unionist; when the secession crisis was upon them, most of its citizens fiercely resisted calls to leave the Union. But when Virginia joined the Confederacy, those loyalties quickly shifted. Franklin County had little sympathy for what it viewed as the treason of their southern brethren. Men from both sides eagerly—or not so eagerly, depending upon the man—grabbed muskets and rushed off to the killing fields in the name of honor and duty or simply obligation. The war truly tore the Great Valley asunder, and before it was through both sides were littered with death and destruction utterly unimaginable just a few years earlier.

In his latest work, The Thin Light of Freedom: The Civil War and Emancipation in the Heart of America, Ayers picks up where he left off, taking the saga from the critical turning points of the war that characterized the summer of 1863 to Appomattox and its aftermath, and beyond that through Reconstruction and what was to be its tragic legacy for African Americans.  In chapters often bracketed by an italicized overview that puts events in the valley in context with the wider perspective of the war, Ayers narrows the lens to focus upon key individuals emblematic of the struggle on the ground. It is in these human stories that it becomes clear that the noise of cannon fire, calls to glory, and the plaintive cries of the wounded and the dying coming from the valley was actually something of a small-scale version of the greater thunder that echoed across the national landscape in a terrible, bloody conflict that claimed so very many lives before the guns fell silent.

Virginia’s Shenandoah Valley had long been the breadbasket of the Confederacy, and its citizens still proudly recalled when Stonewall Jackson made a mockery of three Union armies in its environs in his brilliant Valley Campaign of 1862. But Jackson was dead now, a victim of friendly fire at Chancellorsville, and Federal forces threatened both the farms and the rails that delivered their precious products to grey-clad stomachs. One of the chief motives that took Lee—sans his most famed lieutenant—to Gettysburg was an attempt to divert Union forces to take the pressure off valley farmers and protect cherished crops. Despite his failure there, the valley did win a brief respite, and—to Lincoln’s great chagrin—Lee managed an orderly retreat with a wounded yet still formidable army that was to persist in the field for nearly two full years. In between, the men in Lee’s army forswore the kind of destruction and plunder that they found so abhorrent in the ravages—both real and imagined—visited upon the Shenandoah by the Union, which was universally branded by southerners as uncivilized. The exception was to be African Americans, whether formerly enslaved or simply of the same color, whom the Army of Northern Virginia gleefully rounded up and sent south to become chattel. Their version of civilization remained unrattled by such acts of cruelty.

The point has been made that the “total war” of the twentieth century was presaged by the acts of Union forces upon civilians in the Civil War, but that is manifestly overdrawn. Even at its height, as Sherman marched to the sea and Sheridan despoiled the Shenandoah, Grant’s strategic imperative to deny the Confederacy foodstuffs and matériel hardly resulted in the slaughter of innocents seen in 1914 and beyond. At the same time, for those who lived through it, it seemed a line had been crossed from an earlier age, even if historians might argue that same line had already been crossed by the British some four-score years prior. There was palpable pain on both sides, even if the south suffered more as the war ground on to its conclusion. Federal forces indeed quite ruthlessly put farms and factories out of business in the Shenandoah. Earlier restraint eventually gave way, and Confederates mercilessly and without regret retaliated by burning Chambersburg, Pennsylvania, in 1864.

Virginia was, of course, the central battleground of the Civil War in the eastern theater, so she had more stories to tell. Many of these stories come from the valley, and some truly tear at the heartstrings:

On [one Virginia] farm, a Union officer ordered a fine mare bridled and led away. When the mare’s colt followed its mother, the farm woman begged them not to take an animal so young that it could be of no use to an army. The officer agreed the animal was useless and simply commanded one of his men to shoot the colt. The woman wept over its body. People remembered these stories for generations. [p240]

But the tear in the reader’s eye for the dead colt and the sobbing woman is quickly washed away and replaced with horror as Ayers recounts another telling episode:

[In Saltville, in 1864,] Confederates, enraged after discovering that they were fighting against black men, killed the wounded African-American soldiers left behind after the failed Union attack. [Diaries of those at the scene reported] … Confederate soldiers … “shooting every wounded negro they could find” [and] that scouts “went all over the field and … sung the death knell of many a poor negro who was unfortunate enough not to be killed yesterday. Our men took no negro prisoners. Great numbers of them were killed yesterday and today…” [General John Breckinridge arrived and] “ordered that the massacre should be stopped. He rode away and—the shooting went on. The men could not be restrained.” The murder continued for six more days, culminating with guerrillas forcing their way into a makeshift hospital at Emory & Henry College and shooting men, black and white, in their beds … A Richmond newspaper printed a tally that showed telling numbers: 150 black Union soldiers had been killed and only 6 wounded, while 106 white soldiers had been killed and 80 wounded. The ratios testified that dozens of wounded African-Americans had been killed …The Richmond paper celebrated the rare Confederate victory over all the “niggers” and Federal troops. [p242-43]

In extremely well-written accounts like these that marry a passionate narrative to solid history aimed at both the scholarly and popular audience, Ayers artfully brings the heartbreaking realities of war in the valley on both sides to our modern doorstep, forbidding us to look away, and compelling us to pick up and cradle the truth of what really transpired.

Of course, as the postwar “Lost Cause” myth took hold, stories like that of the dead colt would not only be frequently repeated but magnified and romanticized, while the slaughter of wounded black soldiers at Saltville would be deliberately erased. Since most histories of the conflict end at Appomattox or shortly thereafter, readers are denied the painful epilogue of how that came to be so. Here Ayers bucks that trend and keeps going all the way to 1902.

A potent strain in the most recent historiography argues convincingly that while the north claimed military victory, the south ultimately won the Civil War. A week after Lee’s surrender, Lincoln was dead, replaced by Tennessean Andrew Johnson, who welcomed the ex-Confederates who seized political power as the states that had seceded were restored to the Union, while demonstrating little regard for the millions of formerly enslaved African Americans cast adrift in a hostile and economically devastated southern landscape. Despite the efforts of the “Radical Republicans” who controlled Congress to seek justice for blacks through Reconstruction, Johnson dominated events, and blacks found themselves terrorized and murdered by former Confederate elites who would not tolerate steps towards fairness and equality. With emancipation, the former “three-fifths” rule that defined representation was no more, and with millions of blacks now counted as whole persons, the newly readmitted states actually gained more political power than they had possessed in the antebellum years. Institutionalized terror kept African Americans from the ballot box and reduced them to second-class citizens, a status hardly challenged in the century that followed.

If the valley was a kind of microcosm of the Civil War in America, by extending his narrative Ayers superbly demonstrates that so too was its unfortunate aftermath for African Americans. The Thin Light of Freedom is an outstanding work on multiple levels, not least in its success in conjuring empathy for all of the victims on both sides, and guiding us to a greater appreciation of how and why the many unresolved elements of that long ago conflict continue to resonate, often uncomfortably, for the United States in the twenty-first century.

Featured

Review of: Imagine John Yoko, by John & Yoko Lennon

It was late and I was on my way home, rock n’ roll blasting on the car radio. It was the one-week anniversary of our very first apartment together as a couple, so there was a kind of glow around the day. Then the music cut off abruptly and the news broke: John Lennon had been shot. John Lennon was dead. When the tunes resumed, it was all Beatles and Lennon solo stuff. One of the songs was, of course, Imagine. Tears streamed down my face. It was December 8, 1980.

Imagine had been recorded and released in 1971, but as the year 1980 closed out that already felt like fifty years ago. The Vietnam War and Nixon were long gone. The sense of radicalism, of tumult—as well as innovative creative expression in music and the arts—had slipped away, its wake littered with the detritus of cocaine, schlocky pop music, and a kind of national ennui.  Most men, including myself, didn’t wear their hair shoulder-length anymore. Almost exactly a month before Lennon’s murder, Ronald Reagan was elected President, leaving many of us far more shaken than stirred.

John Lennon had recently reemerged after a long hiatus from the studio and public life. He was just forty, but he looked much older than that. Double Fantasy—his first album in five years, featuring songs by John and Yoko—was released just three weeks before his death. I personally found it weak and disappointing. But I bought it just days after it hit the record stores—of course—it was music from John Lennon!  Lennon had been my favorite Beatle, as well as a kind of personal hero: a peace activist, an iconoclast, a man who found himself trapped by the money and fame and lifestyle that others salivated for, a man willing to throw it all away (well, perhaps not all the money) for the love of his life, avant-garde artist Yoko Ono, even if many of us were puzzled by his obsession with her. It turned out that the whole that was the Beatles would forever far outshine the solo work of its individual members, including Lennon, but perhaps his best solo effort was the album Imagine, which featured that eponymous song of hope that remains a soft-rock national anthem. John’s murder sent Double Fantasy skyrocketing up the charts, if not to critical acclaim, but Imagine is the real legacy of John Lennon.

Thirty-eight Christmases after Lennon’s assassination, the stark white cover of the beautiful outsize volume Imagine John Yoko emerged beneath festive wrapping paper, a gift from my wife. Compiled by Yoko, but with author credits to John and Yoko Lennon, this gorgeous coffee table edition boasts extensive interviews, black and white photography, liner notes, illustrations, and ephemera, crafted to tell the “definitive inside story” of the making of the Imagine album and film of the same name at their English country mansion estate, Tittenhurst Park.

The spotlight is not only upon John and Yoko, but also on a generous cast of characters, including co-producer Phil Spector, then-giants of the music scene such as George Harrison, Nicky Hopkins and Mike Pinder, as well as lesser-known figures, plus all sorts of production assistants and the often uncredited folks who each play a significant if not always acknowledged role in the final cut of a masterpiece like Imagine. Interview excerpts are not dated; some are contemporary to production, while others look back from decades later. Sadly, like Lennon, many have passed on, including Harrison and Hopkins; King Curtis, who sat in on saxophone, was murdered in late summer of that same year. Ironically, Phil Spector and drummer Jim Gordon—of Derek and the Dominos fame—are both in prison serving life sentences for murder. Almost all the rest who are still alive have faded into obscurity. But thumbing through this magnificent book, for a moment it is the early part of 1971 again: John Lennon is just thirty, madly and obsessively in love with the older Yoko Ono, who just as madly and obsessively reciprocates. John has left the Beatles behind, his long collaboration and once-close friendship with Paul McCartney on the rocks, but there is a palpable sense of great promise in what the future holds for John and Yoko.

The very next day after I began perusing Imagine John Yoko—and before it turned into a cover-to-cover read for me—I dug out my old vinyl copy of Imagine and gave it a spin. I had not listened to it in many years and I had forgotten what a truly great album it is. The title track tends to get all the attention, but to my mind Gimme Some Truth is the best song on the record. Other iconic tunes include Crippled Inside, Jealous Guy and I Don’t Want to be a Soldier.  Some might argue that none of it lives up to Strawberry Fields Forever or Happiness is a Warm Gun, but there’s little doubt that the collection of songs on Imagine is outstanding and certainly Lennon’s best post-Beatles work. It was re-listening to the album after all this time that led me to carefully read, rather than skim, the entire book. Along the way, I also screened the Blu-ray DVD that contains the full-length “rockumentary” film Imagine, replete with innovative music videos from the Imagine album and selections from Yoko’s Fly album, as well as a companion “making-of-Imagine” film entitled Gimme Some Truth. Icing on the cake includes cameos from Andy Warhol, Fred Astaire, Dick Cavett and Jack Palance. I highly recommend these audio-visual companions to the book to help make it come to life in all its brilliance once more.

The highlight of the book and the film is John in the “White Room” at Tittenhurst recording Imagine, singing and playing on the all-white Steinway grand piano that he gave to Yoko for her birthday that year, while Yoko slowly opens a series of white shutters to let light stream in. At the end, Yoko is seated beside John at the piano, and they exchange looks that reflect such a degree of genuine mutual love and affection and admiration that this single moment serves to validate the entire project. The combined experience of immersing myself in the book, the album and the films made me not only come to better appreciate the superlative achievement of Imagine, but also the integral role Yoko played as artist and inspiration throughout.  Like much of the public, back in the day I found it difficult to grasp John’s utter infatuation with Yoko, but the testimony of so many in this book underscores Yoko’s essential piece in the creation of this masterpiece. At the same time, her vocals on portions of the Imagine film have yet to convince me that she has talent as a singer. Still, Yoko was clearly a full partner in Imagine, not some assistant. It would never have been if not for her presence in John’s life.

One of my favorite bits in the book and in the Gimme Some Truth film features Claudio, a Vietnam vet suffering from PTSD, who had been living for some days in the woods at Tittenhurst. Claudio had become convinced that John was communicating with him through his lyrics. Disheveled and confused, he is brought before John, who tells him that “I’m just a guy who writes songs,” and patiently explains to an obviously crestfallen Claudio that the lyrics have nothing to do with him. There is a brief pause, and then John, with much empathy, asks: “Are you hungry?” John then brings him in and feeds him at his table.  Claudio was both disturbed and obsessed with John Lennon, and the recounting of this episode made me wonder how things might have turned out differently if John had managed to similarly engage someone else who was disturbed and obsessed with him—Mark David Chapman—before it was too late.

On the final pages of Imagine John Yoko, they each speak to us.  There’s an excerpt from an interview in which John says of himself and Yoko, “We’d like to be remembered as the Romeo and Juliet of the 1970s.” When asked if he had a picture of “When I’m 64,” John replied:

“I hope we’re a nice old couple living off the coast of Ireland or something like that—looking at our scrapbook of madness. My ultimate goal is for Yoko and I to be happy and try and make other people happy through our happiness. I’d like everyone to remember us with a smile . . . The whole of life is a preparation for death. I’m not worried about dying. When we go, we’d like to leave behind a better place.” [p298]

Those days of turning scrapbook pages were, sadly, not to be. As a fan, as a reviewer, I would urge you to buy this book and to read it, but it is not for me but rather for Yoko to deliver the coda, of course:

“It was such an incredible loss when I think about it . . . See, most people think, ‘Well, he’s a rocker and just kind of rough, maybe,’ but no. At home he was a very gentle person and extremely concerned about me but also concerned about the world too. I still miss him, especially now because the world is not quite right and everybody seems to be suffering. And if he was here it would have been different, I think. I think that in many ways John was a simple Liverpool man right to the end. He was a chameleon, a bit of a chauvinist, but so human. In our fourteen years together he never stopped trying to improve himself from within. We were best friends. To me, he is still alive. Death alone doesn’t extinguish a flame and a spirit like John.” [p298]

PODCAST Review of: On Tyranny (Graphic Edition): Twenty Lessons from the Twentieth Century, by Timothy Snyder, Illustrated by Nora Krug


https://www.podbean.com/media/share/pb-uk6ef-15ec5f9

Review of:  On Tyranny (Graphic Edition): Twenty Lessons from the Twentieth Century,

by Timothy Snyder, Illustrated by Nora Krug

Reviewed by Stan Prager, Regarp Book Blog

PODCAST Review of: The Saddest Words: William Faulkner’s Civil War, by Michael Gorra


https://www.podbean.com/media/share/pb-y4fu4-15aed2d

Review of:  The Saddest Words: William Faulkner’s Civil War, by Michael Gorra

Reviewed by Stan Prager, Regarp Book Blog

PODCAST Review of: The Russo-Ukrainian War: The Return of History, by Serhii Plokhy


https://www.podbean.com/media/share/pb-95e8i-1585a2a

Review of:  The Russo-Ukrainian War: The Return of History, by Serhii Plokhy

Reviewed by Stan Prager, Regarp Book Blog

PODCAST Review of: Ota Benga: The Pygmy in the Zoo, by Phillips Verner Bradford & Harvey Blume


https://www.podbean.com/media/share/pb-tr6xc-15633f8

Review of:  Ota Benga: The Pygmy in the Zoo, by Phillips Verner Bradford & Harvey Blume

Reviewed by Stan Prager, Regarp Book Blog

PODCAST Review of: The Children of Athena: Greek Intellectuals in the Age of Rome: 150 BC-400 AD, by Charles Freeman

https://www.podbean.com/media/share/pb-8sha3-1550a13

Review of:  The Children of Athena: Greek Intellectuals in the Age of Rome: 150 BC-400 AD, by Charles Freeman

Reviewed by Stan Prager, Regarp Book Blog
