
Review of: Scars on the Land: An Environmental History of Slavery in the American South, by David Silkenat

For several days we traversed a region, which had been deserted by the occupants—being no longer worth culture—and immense thickets of young red cedars, now occupied the fields, in digging of which, thousands of wretched slaves had worn out their lives in the service of merciless masters … It had originally been highly fertile and productive, and had it been properly treated, would doubtlessly have continued to yield abundant and prolific crops; but the gentlemen who became the early proprietors of this fine region, supplied themselves with slaves from Africa, cleared large plantations of many thousands of acres—cultivated tobacco—and became suddenly wealthy … they valued their lands less than their slaves, exhausted the kindly soil by unremitting crops of tobacco, declined in their circumstances, and finally grew poor, upon the very fields that had formerly made their possessors rich; abandoned one portion after another, as not worth planting any longer, and, pinched by necessity, at last sold their slaves to Georgian planters, to procure a subsistence … and when all was gone, took refuge in the wilds of Kentucky, again to act the same melancholy drama, leaving their native land to desolation and poverty … Virginia has become poor by the folly and wickedness of slavery, and dearly has she paid for the anguish and sufferings she has inflicted upon our injured, degraded, and fallen race.1

Those are the recollections of Charles Ball, an enslaved man in his mid-twenties from Maryland who was sold away from his wife and child and—wearing an iron collar shackled to a coffle with other unfortunates—was driven on foot to his new owner in Georgia in 1805. As he was marched through Virginia, the perspicacious Ball observed not only the ruin of what had once been fertile lands, but the practices that had brought them to devastation. Ball serves as a prominent witness in the extraordinary, ground-breaking work Scars on the Land: An Environmental History of Slavery in the American South [2022], by David Silkenat, professor of history at the University of Edinburgh, which probes one more critical but largely ignored component of Civil War studies.

Excerpts like this one from Ball’s memoir—an invaluable primary source written many years later, once he had won his freedom—also articulate the triple themes that combine to form the thesis of Silkenat’s book: southern planters perceived land as a disposable resource and had little regard for it beyond its potential for short-term profitability; slave labor directed on a colossal scale across the wider geography dramatically and permanently altered every environment it touched; and the masses of the enslaved were far better attuned and adapted to their respective ecosystems, which they frequently turned to for privacy, nourishment, survival—and even escape. There is, too, a darker ingredient that clings to all of these themes: the almost unimaginable cruelty that defined the lives of the enslaved.

The men who force-marched Ball’s coffle as if they were cattle no doubt viewed him with contempt, yet though held as chattel, the African American Charles Ball was more familiar with the past, present, and likely future of the ground he trod upon than most of his white oppressors. For the enslaved, frequently condemned to a lifetime of hard labor in unforgiving environments and often sustained in conditions little better than those afforded to livestock, this sophisticated intimacy with their natural surroundings could prove to be the only alternative to a cruel death in otherwise harsh elements. And, sometimes, it could—always at great risk—also translate into liberty.

Those who claimed ownership over their darker-complected fellow human beings were not entirely ignorant of the precarious balance of nature in the land they exploited, but they paid that little heed. Land was, after all, not only cheap but appeared to be limitless. As the Indigenous fell victim in greater numbers to European diseases, as militias drove the survivors deeper into the wilderness, as the British loss in the American Revolution removed the final barriers to westward expansion, the Chesapeake elite counted their wealth not in acreage but in human chattel. Deforestation was widespread, fostering erosion. First tobacco and later wheat sapped nutrients and strained the soil’s capacity to sustain bountiful yields over time. Well-known practices such as crop rotation, rigorously applied in the north, were largely scorned by the planter aristocracy. The land, as Ball had discerned, was rapidly used up.

Already in Jefferson’s time, “breeding” the enslaved for sale to the lower south was growing far more profitable than agriculture in the upper south. And demand increased exponentially with the introduction of the “cotton gin” and the subsequent boom in cotton production, as well as the end of the African slave trade that was to follow. Human beings became the most reliable “cash crop.” Charles Ball’s transport south was part of a trickle that grew to a multitude later dubbed the “Slave Trail of Tears” that stretched from Maryland to Louisiana and saw the involuntary migration of about a million enslaved souls in the five decades prior to the Civil War. Many, like Ball, had to cope with new environments unlike anything they had experienced before their forced resettlement. What did not change, apparently, was the utter disregard for these various environments by their new owners.

For those who imagined the enslaved limited to working cotton or sugar plantations, Silkenat’s book will be something of an eye-opener. In a region of the United States that, with only some exceptions, stubbornly remained pre-industrial, large forces of slave labor were enlisted to tame—and put to ruin—a wide variety of landscapes through extensive overexploitation that included forestry, mining, levee-building, and turpentine extraction, usually in extremely perilous conditions.

The enslaved already had to cope with an oppressive collection of unhealthy circumstances that included exposure to extreme heat, exhaustion, insects, a range of diseases including chronic ringworm, inadequate clothing, and an insufficient diet—as well as unsanitary living conditions that kept them from even washing their hands except on infrequent occasions. All this was further exacerbated by the demands inherent in certain kinds of more specialized work.

Enslaved “dippers” extracted turpentine from pine trees, which left their “hands and clothing … smeared with the gum, which was almost impossible to remove. Dippers accumulated layers of dried sap and dirt on their skin and clothes, an accumulation that they could only effectively remove in November when the harvest ended. They also suffered from the toxic cumulative effect of inhaling turpentine fumes, which left them dizzy and their throats raw.” [p70] Mining for gold was an especially dangerous endeavor that carried the additional hazard of the use of “mercury to cause gold to amalgamate … leaving concentrated amounts of the toxin in the spoil piles and mountain streams. Mercury mixed with the sulfuric acid created when deep earth soils came into contact with oxygen poisoned the watershed … Enslaved miners suffered from mercury poisoning, both from working with the liquid form with their bare hands and from inhaling fumes during distillation. Such exposure had both short- and long-term consequences, including skin irritation, numbness in the hands and feet, kidney problems, memory loss, and impaired speech, hearing, and sight.” [p24] There were dangers too for lumberjacks and levee-builders. Strangely perhaps, despite the increased risks, many of the enslaved preferred working the mines and forests because of opportunities for limited periods of autonomy in wilder locales that would be impossible in plantation life.

In the end, mining and deforestation left the land useless for anything else. Levees, originally constructed to forestall flooding to enable rice agriculture, ended up increasing flooding, a problem that today’s New Orleans inherited from the antebellum era. All these pursuits tended to lay waste to their respective ecosystems, leaving just the “scars on the land” of the book’s title, but of course they also left lasting physical and psychological scars upon a workforce conscripted against its will.

What was common to each and every milieu was the twin abuse of the earth and of those coerced to work it. Ball mused that the degree of cruelty towards those who toiled the land seemed roughly proportional to the degree that the land itself was ravaged. Indeed, cruelty abounds: the inhumanity that actually defines the otherwise euphemistically rendered “peculiar institution” stands stark throughout the narrative, supported by a wide range of accounts of those too often condemned to lives beset by a quotidian catalog of horrors as chattel property in a system marked by nearly inconceivable brutality.

Beatings and whippings were standard fare. Runaways, even those who intended to absent themselves only temporarily, were treated with singular harshness. Sallie Smith, a fourteen-year-old girl who went truant in the woods to avoid repeated abuse, was apprehended and “brutally tortured: suspended by ropes in a smoke house so that her toes barely touched the ground and then rolled across the plantation inside a nail-studded barrel, leaving her scarred and bruised.” [p78]

Slaveowners also commonly employed savage hunting dogs or bloodhounds that were specially trained to track runaways, which sometimes led to the maiming or even death of the enslaved:

“One enraged slave owner ‘hunted and caught’ a fugitive ‘with bloodhounds, and allowed the dogs to kill him. Then he cut his body up and fed the fragments to the hounds.’ Most slave owners sought to capture their runaway slaves alive; but unleashed bloodhounds could inflict serious wounds in minutes … Some masters saw the violence done by dogs as part of the punishment due to rebellious slaves. Over the course of ten weeks in 1845, Louisiana planter Bennet Barrow noted in his diary three occasions when bloodhounds attacked runaway slaves. First, they caught a runaway named Ginny Jerry, who sought refuge in the branches before the ‘negro hunters … made the dogs pull him out of the tree, Bit him very badly’ … Second, a few weeks later, while pursuing another truant, Barrow ‘came across Williams runaway,’ who found himself cornered by bloodhounds, and the ‘Dogs nearly et his legs off—near killing him.’  Finally, an unnamed third runaway managed to elude the hounds for half a mile before the ‘dogs soon tore him naked.’ When he returned to the plantation, Barrow ‘made the dogs give him another overhauling’ in front of the assembled enslaved community as a deterrent. Although Barrow may have taken unusual pleasure in watching dogs attack runaway slaves, his diary reveals that slave owners used dogs to track fugitives and torture them.” [p52-53]

That such practices were treated as unremarkable by white contemporaries finds a later echo in the routine bureaucracy of atrocities that the Nazis inflicted on Jews sent to forced labor camps. For his part, Silkenat reports episodes like these dispassionately, in what appears to be a deliberate effort to sidestep sensationalism. This technique is effective: hyperbolic editorializing is unnecessary—the horror speaks for itself—and those well-read in the field are aware that such barbarity was hardly uncommon. Moreover, it serves as a robust rebuke to today’s “Lost Cause” enthusiasts who would cast slavery as benign or even benevolent, as well as to those promoting recent disturbing trends to reshape school curricula to minimize and even sugarcoat the awful realities that history reveals. (Sidenote to Florida’s Board of Education: exactly which skills did Sallie Smith in her nail-studded barrel, or those disfigured by ferocious dogs, develop that could later be used for their “personal benefit”?)

I first encountered the author and his book quite by accident. I was attending the Civil War Institute (CWI) 2023 Summer Conference at Gettysburg College2, and David Silkenat was one of the scheduled speakers for a particular presentation—“Slavery and the Environment in the American South”—that I nearly skipped because I worried it might be dull. As it turned out, I could not have been more wrong. I sat at rapt attention during the talk, then purchased the book immediately afterward.

Silkenat’s lecture took an especially compelling turn when he spoke at length of maroon communities of runaways who sought sanctuary in isolated locations that could be far too hostile to foster recapture, even by slave hunters with vicious dogs. One popular refuge was the swamp, especially unwholesome but out of reach of the lash, and another instance of the author underscoring that enslaved blacks, by virtue of necessity, grew capable of living off the land—every kind of land, no matter how harsh—with a kind of adaptation out of reach to their white oppressors. Swamps tended to be inhospitable, given to fetid water populated with invisible pathogens, masses of biting and stinging insects, poisonous snakes, alligators, and even creatures such as panthers and bears that had gone extinct elsewhere. But for the desperate it meant freedom.

A number of maroon communities appeared in secluded geographies, mostly on the margins of settled areas, populated by escapees who eked out a living by hunting and gathering as well as small-scale farming, supplemented by surreptitious trading with the outside world. The largest was in the Great Dismal Swamp of Virginia and North Carolina, where thousands managed to thrive over multiple generations.

But not all flourished. In Scars on the Land, Silkenat repeats Ball’s tragic tale of coming upon a naked and dirty fugitive named Paul, an African survivor of the Middle Passage who had fled a beating to the swamp. Around his neck he wore a heavy iron collar fitted with bells to help discourage escape. Ball clandestinely assisted him as best he could, but could not remove the collar. When he returned a week later to offer additional assistance, his nostrils traced a rancid smell to the hapless Paul, a suicide, hanging by his neck from a tree, crows pecking at his eyes.3 [p124]

Scars on the Land is directed at a scholarly audience, yet it is so well-written that any student of the Civil War and African American history will find it both accessible and engaging. More importantly, in a genre that now boasts an inventory of more than 60,000 works, it is no small distinction to pronounce Silkenat’s book a significant contribution to the historiography that should be required reading for everyone with an interest in the field.


1 Charles Ball, Slavery in the United States: A Narrative of the Life and Adventures of Charles Ball, a Black Man, Who Lived Forty Years in Maryland, South Carolina and Georgia, as a Slave Under Various Masters, and was One Year in the Navy with Commodore Barney, During the Late War (New York: John S. Taylor, 1837).

2 For more about the CWI Summer Conference at Gettysburg College see: Civil War Institute at Gettysburg Summer Conference 2024

3 The illustration of Paul hanging from a tree appears alongside Ball’s narrative in this publication: Nathaniel Southard, ed., The American Anti-Slavery Almanac for 1838, Vol. I, No. 3, The American Anti-Slavery Society (Boston: Isaac Knapp, 1838), 13.

Note: I reviewed this book about a well-known maroon community here: Review of: The Battle of Negro Fort: The Rise and Fall of a Fugitive Slave Community, by Matthew J. Clavin


Review of: Harpers Ferry Under Fire: A Border Town in the American Civil War, by Dennis E. Frye

Most people only know of Harpers Ferry as the town in present-day West Virginia where John Brown, a zealous if mercurial abolitionist, set out to launch an ill-fated slave insurrection by seizing the national armory located there, an attempt that was completely crushed, sending John Brown to the gallows and his body “a-mouldering” in the grave shortly thereafter. Those more familiar with the antebellum era are aware that many historians consider that event to be the opening salvo of the Civil War, as hyper-paranoid southern planters—who no longer, as in Jefferson’s day, bemoaned the burden and the guilt of their “peculiar institution,” but instead championed human chattel slavery as the most perfect system ever ordained by the Almighty—imagined the mostly anti-slavery north as a hostile belligerency intent on depriving them of their property rights and actively inciting the enslaved to murder them in their sleep. Brown was hanged sixteen months prior to the assault on Fort Sumter, but some have suggested that first cannonball was loosed at his ghost.

Those in the know will also point out that the man in overall command when they took Brown down was Colonel Robert E. Lee, and that his aide-de-camp was J. E. B. Stuart. And perhaps to underscore the outrageous twists of fate history is known to fashion for us, they might add that present for Brown’s later execution were Thomas J. (later “Stonewall”) Jackson, John Wilkes Booth, and even Edmund Ruffin, the notable Fire-eater who was among the first to fire actual rather than metaphorical shots at Sumter in 1861. You can’t make this stuff up.

But it turns out that John Brown’s Raid in 1859 represents only a small portion of the Civil War history that clings to Harpers Ferry, perhaps the quintessential border town of the day, which changed hands no fewer than eight times between 1861 and 1865. Both sides took turns destroying the successively rebuilt Baltimore & Ohio bridge—the only railroad bridge connecting northern and southern states across the Potomac. Harpers Ferry was integral to Lee’s invasion of Maryland that ended at Antietam, and had a supporting role at the outskirts of the Gettysburg campaign, as well as in Jubal Early’s aborted march on Washington. There’s much more, and perhaps the finest source for the best immersion in the big picture is Harpers Ferry Under Fire: A Border Town in the American Civil War [2012], by the award-winning retired National Park Service historian Dennis E. Frye, who spent some three decades of his career at Harpers Ferry National Park. Frye is a talented writer, the narrative is fascinating, and this volume is further enhanced by lavish illustrations, period photographs, and maps. Even better, while the book is clearly aimed at a popular audience, it rigorously adheres to strict standards of scholarship in presentation, interpretation, and analysis.

West Virginia has the distinction of being the only state to secede from another state, as its Unionist sympathies took issue with Virginia’s secession from the United States. But it had been a long time coming. The hardscrabble farmers in the west had little in common with the wealthy slaveholding planter aristocracy that dominated the state’s government. This is not to say those to the west of Richmond were any less racist than the rest of the south, or much of the antislavery north for that matter; it was a nation then firmly based upon principles of white supremacy. For Virginia and its southern allies, the conflict hinged on their perceived right to spread slavery to the vast territories seized from Mexico in recent years. For the north, it was about free soil for white men and for Union. West Virginia went with Union. But back then, when John Brown took his crusade to free the enslaved to Harpers Ferry, it was still part of Virginia, and while some residents might have feared the worst, most Americans could not have dreamed of the scale of bloodletting that was just around the corner, nor that the cause of emancipation—John Brown’s cause—would one day also become inextricably entwined with the preservation of the Union.

Harpers Ferry is most notable for its dramatic topography, which has nothing to do with its armory and arsenal—the object of Brown’s raid—but everything to do with its persistent pain at the very edge of the Civil War. Strategically situated at the confluence of the Potomac and Shenandoah rivers, where today the states of Maryland, Virginia, and West Virginia meet, the town proper is surrounded on three sides by the high grounds—Bolivar Heights to the west, Loudoun Heights to the south, and Maryland Heights to the east—that define its geography and the challenges facing both attackers and defenders. It is immediately clear to even the most amateur tactician that the town is indefensible without control of the heights.

I was drawn to Harpers Ferry Under Fire by design. I had already registered for the Civil War Institute (CWI) 2023 Summer Conference at Gettysburg College, and selected Harpers Ferry National Historical Park as one of my battlefield tours. While I have visited Antietam and Gettysburg on multiple occasions, somehow I had never made it to Harpers Ferry. These CWI conference tours are typically quite competitive, so I was pleased when I learned that I had won a seat on the bus. And not only that—the tour guide was to be none other than Dennis Frye himself! I had met Dennis before, at other Civil War events, including a weekend at Chambersburg some years ago with the late, legendary Ed Bearss. Like Ed, Dennis is very sharp, with an encyclopedic knowledge of people and events. I assigned myself his book as homework.

The original itinerary was scheduled to include a morning tour of the town, designated as the Harpers Ferry Historic District—which hosts John Brown’s Fort as well as many restored nineteenth-century buildings that have been converted into museums—and an afternoon tour focused on the battles and the heights. Inclement weather threatened, so Dennis mixed it up and had us visit the heights first. In retrospect, this turned out to be the better approach anyway, because when you stand on the heights and look down upon the town proper below, you understand instantly the strategic implications from a military standpoint. Later, walking the streets of the hamlet and looking up at those heights, you can fully imagine the terror of the citizens there during the war years, completely at the mercy of whichever side controlled that higher ground.

The most famous example came during Lee’s Antietam campaign, when he sought to protect his supply line by splitting his forces and sending Stonewall Jackson to seize Harpers Ferry. Jackson’s victory there proved brilliant and decisive, a devastating federal capitulation that turned more than twelve thousand Union troops over to the rebels—the largest surrender of United States military personnel until the Battle of Bataan eighty years afterward! This event is covered in depth in Harpers Ferry Under Fire, but given Dennis Frye’s passion for history, the story proved to be a great deal more compelling when gathered with a group of fellow Civil War aficionados on Bolivar Heights, spectacular views of the Potomac River before us, while Dennis rocked on his heels, pumped his arms in the air, and let his voice boom with the drama and excitement of those events so very long past. While Dennis lectured, gesturing wildly, I think all of us, if only for an instant, were transported back to 1862, gazing down from the heights at the tiny town below through the eyes of a common soldier, garbed in blue or gray. The remainder of the day’s tour, including John Brown’s Fort and the town’s environs, was a superlative experience, but it was that stirring moment on Bolivar Heights that will remain with me for many years to come.

It is worth pausing for a moment to consider the Fort, where John Brown’s raid ended in disaster, ten of his men killed, including two of his sons, and the badly wounded Brown captured, along with a handful of survivors. The original structure, which served as the Armory’s fire engine and guard house, was later dismantled, moved out of state and rebuilt, then dismantled again and eventually re-erected not far from the location where Brown and his men sought refuge that day before it was stormed by U.S. Marines. It is open to the public. Walking around and within it today, there is an omnipresent eerie feeling. Whatever Brown’s personal flaws—and those were manifold—he went to Harpers Ferry on a sort of holy quest and was martyred for it. The final words he scribbled down in his prison cell—“I, John Brown, am now quite certain that the crimes of this guilty land will never be purged away but with blood”—rang in my ears as I trod upon that sacred ground.

If you are a Civil War buff, you must visit Harpers Ferry. Frye himself is retired, but if you can somehow arrange to get a tour of the park with this man, jump at the chance. Failing that, read Harpers Ferry Under Fire, for it will enhance your understanding of what occurred there, and through the text the authoritative voice of Dennis Frye will speak to you.

A link to Harpers Ferry National Park is here: Harpers Ferry National Park

More on the CWI Summer Conference is here: Civil War Institute at Gettysburg Summer Conference 2024

NOTE: Except for the cover art, all photos featured here were taken by Stan Prager


Review of: The Making of the African Queen: Or How I went to Africa With Bogart, Bacall and Huston and almost lost my mind, by Katharine Hepburn

One of my favorite small venues for an intimate, unique concert experience is The Kate—short for The Katharine Hepburn Cultural Arts Center—in Old Saybrook, Connecticut, a 285-seat theater with outstanding acoustics that hosts multi-genre entertainment in a historic building dating back to 1911 that once served as both theater and Town Hall. In 2013, my wife and I had the great pleasure of seeing Jefferson Airplane alum Marty Balin rock out at The Kate. More recently, we swayed in our seats to the cool Delta blues of Tab Benoit. On each occasion, prior to the show, we explored the photographs and memorabilia on display in the Katharine Hepburn Museum on the lower level, dedicated to the life and achievements of an iconic individual who was certainly one of the greatest actors of her generation.

Hepburn was a little girl when she first stayed at her affluent family’s summer home in the tony Fenwick section of Old Saybrook, just a year after the opening of the then newly constructed Town Hall that today bears her name. She later dubbed the area “paradise,” returning frequently over the course of her long life and eventually retiring to her mansion in Fenwick overlooking the water, where she spent her final years until her death at 96 in 2003. The newly restored performing arts center named in her honor opened six years later, with the blessings of the Hepburn family and her estate.

One of the eye-catching attractions in the museum is an exhibit behind glass showcasing Hepburn’s performance with co-star Humphrey Bogart in the celebrated 1951 film The African Queen, featuring a copy of the 1987 memoir credited to her, whimsically entitled The Making of the African Queen: Or How I went to Africa With Bogart, Bacall and Huston and almost lost my mind. I turned to my wife and asked her to add this book to my Christmas list.

Now, full disclosure: I am a huge Bogie fan (my wife less so!). I recently read and reviewed the thick biography Bogart, by A.M. Sperber & Eric Lax, and in the process screened twenty of his films in roughly chronological order. My wife sat in on some of these, including The African Queen, certainly her favorite of the bunch. If I had to pick the five finest Bogie films of all time, it would certainly make the list. Often denied the recognition that was his due, Bogart won his sole Oscar for his role here. A magnificent performer, in this case he benefited not only from his repeat collaboration with the immensely talented director John Huston, but also from starring opposite the inimitable Kate Hepburn.

For those who are unfamiliar with the film (what planet are you from?), The African Queen, based on the C. S. Forester novel of the same name, is the story of the unlikely alliance and later romantic relationship between the staid, puritanical British missionary and “spinster” (a term suitable to the times) Rose Sayer (Hepburn) and the gin-soaked Canadian Charlie Allnut (Bogart), skipper of the riverboat African Queen, set in German East Africa (present-day Tanzania) at the outbreak of World War I. After aggression by German forces leaves Rose stranded, she is taken onboard by Allnut. In a classic journey motif that brilliantly courts elements of drama, adventure, comedy, and romance, the film follows this mismatched duo as they conspire to arm the African Queen with explosives and pilot it on a mission to torpedo a German gunboat. Those who watch the movie for the first time will be especially struck by the superlative performances of both Bogie and Hepburn, two middle-aged stars who not only complement one another beautifully but turn out an unexpected on-screen chemistry that has the audience emotionally involved, rooting for their romance and their cause. It is a tribute to their mutual talents that the two successfully communicated palpable on-screen passion to audiences of the time who must have been struck by the stark disparity between the movie posters depicting Bogie as a muscular he-man and Hepburn as a kind of Rita Hayworth twin—something neither the scrawny Bogart nor the aging Hepburn lives up to in the Technicolor print. And even more so because those same 1951 audiences were well acquainted with the real-life 51-year-old Bogart’s marriage to the beautiful 27-year-old starlet Lauren (real name Betty) Bacall, born of an on-set romance when she was just 19.

Katharine Hepburn had a long career in Hollywood marked by dramatic ebbs and flows. While she was nominated for an Academy Award twelve times and set a record by winning the Best Actress Oscar four times, more than once her star power waned, and at one point she was even widely considered “box office poison.” Her offscreen persona was both unconventional and eccentric. She defied contemporary expectations of how a woman and a movie star should behave: shunning celebrity, sparring with the press, expressing unpopular political opinions, wearing trousers at a time when that was unacceptable for ladies, fiercely guarding her privacy, and stubbornly clinging to an independent lifestyle. She was pilloried as boyish, and accused of lesbianism at a time when that was a vicious slur, but she evolved into a twentieth-century cultural icon. Divorced at a young age, she once dated Howard Hughes, but spent nearly three decades in a relationship with the married, alcoholic Spencer Tracy, with whom she costarred in nine films. Rumors of liaisons with other women still linger. Perhaps no other female figure cut a groove in Hollywood as deep as Kate Hepburn did.

Hepburn’s book, The Making of the African Queen, showed up under the tree last Christmas morning—the original hardcover first edition, for that matter—and I basically inhaled it over the next couple of days. It’s an easy read. Hepburn gets the byline, but it’s clear pretty early on that the “narrative” is actually composed of excerpts from interviews she sat for, strung together to give the appearance of a book-length chronicle. But no matter. Those familiar with Kate’s distinctive voice and the cadence of her signature Transatlantic accent will start to hear her pronouncing each syllable of the text in their heads as they go along. That quality is comforting. But the book is nevertheless plagued by features that should make you crazy: it’s anecdotal, it’s uneven, it’s conversational, it’s meandering, and maddeningly it reveals only what Hepburn is willing to share. In short, if this were any other book about any other subject related by any other person, you would grow not only annoyed but fully exasperated. But somehow, unexpectedly, it turns out to be nothing less than a delight!

If The African Queen is a cinema adventure, aspects of the film production were a real-life one. Unusual for its time, bulky Technicolor cameras were transported to on-location shoots in Uganda and the Congo, nations today but then still under colonial rule. The heat was oppressive, and danger seemed to lurk everywhere, but fears of lions and crocodiles were trumped by smaller but fiercer army ants and mosquitoes, a host of water-borne pathogens, as well as an existential horror of leeches. Tough guy Bogie was miserable from start to finish, but Hepburn reveled in the moment, savoring the exotic flora and fauna, and bursting with excitement. Still, almost everyone—including Kate—fell terribly ill at least some of the time with dysentery and a variety of other jungle maladies. At one point Hepburn was vomiting between takes into a bucket placed off-screen. The running joke was that the only two who never got sick were Bogie and director Huston, because they eschewed the local water and only drank Scotch!

Huston went to Africa hoping to “out-Hemingway” Hemingway in big game hunting, but his safari in pursuit of herds of elephants yielded only a lone antelope. He seemed to do better with Kate. The book does not openly admit to an affair, but the intimacy between them leaps off the page. Hepburn proves affable through every paragraph, although sometimes less than heroic. Readers will wince when, upon first arrival in Africa, she instantly flies into a fit of rage and evicts a staff member from an assigned hotel room that to her mind rightly belonged to a VIP of her caliber! And while she is especially kind, almost to a fault, to every African recruited to serve her in various capacities, there is a patronizing tone in her recollections that can’t help but make us a bit uncomfortable today. Still, you cannot detect even a hint of racism. You get the feeling that she genuinely liked people of all stations of life, but could be unrepentantly condescending towards those who did not, like her, walk among the stars. Yet, warts and all—and these are certainly apparent—Kate comes off today, long after her passing, as likeable as she did to those who knew her in her times. And what times those must have been!

This book is pure entertainment, with the added bonus of forty-five wonderful behind-the-scenes photographs that readers may linger upon far longer than the pages of text. For those who loved the film as I do, the candid moments that are captured of Bogie, Hepburn, and Huston are precious relics of classic Hollywood that stir the heart and the soul. If you are a fan, carve out the time and read The Making of the African Queen. But more importantly, screen The African Queen again. Then you will truly know what I mean.

A link to The Kate: The Kate

A link to The African Queen on IMDB: IMDB: The African Queen

My review of the Bogart bio: Review of: Bogart, by A.M. Sperber & Eric Lax

NOTE: My top five Bogie films: Casablanca, The Maltese Falcon, Treasure of the Sierra Madre, The African Queen, The Caine Mutiny—but there are so many, it’s difficult to choose…


Review of: The Battle of Negro Fort: The Rise and Fall of a Fugitive Slave Community, by Matthew J. Clavin

In yet another fortuitous connection to my attendance at the Civil War Institute (CWI) 2023 Summer Conference at Gettysburg College, I sat in on an enlightening presentation by the historian David Silkenat1 on the environmental history of slavery in the American south that turned to a discussion of the frequently overlooked phenomenon of communities in secluded geographies that were populated by runaways who fled enslavement. These so-called “maroon communities” appeared mostly on the margins of settled areas across the upper and lower south, sometimes in tandem with the indigenous, with inhabitants eking out a living by hunting and gathering as well as small scale farming, supplemented by limited and surreptitious trading with the outside world. The origin of some of these maroon societies can be traced back to the Revolutionary War and the War of 1812, when the British offered freedom to the enslaved if they were willing to serve in the military. Many jumped at the chance. On both occasions, when hostilities concluded, those who were unable or unwilling to withdraw with British forces went into hiding to avoid recapture and a return to slavery. One such refuge in the Spanish Floridas became known as the “Negro Fort.”

My next out-of-state trip after Gettysburg brought me to a small town in southern Vermont, where one lazy afternoon found me exploring a used bookstore—housed in, of all places, a yurt2—and where I stumbled upon The Battle of Negro Fort: The Rise and Fall of a Fugitive Slave Community [2019], by Matthew J. Clavin. Silkenat’s fascinating talk about maroons rang in my head as I bought the book, and I started reading it that very day.

Southern planters often held competing, contradictory notions in their heads simultaneously, while sidestepping the cognitive dissonance that practice should have provoked. On the one hand, they deluded themselves that their enslaved “property” were content in their condition of servitude. At the same time, they held them inferior in every sense and thought them nearly helpless, unable to successfully function independently. Slaveowners also dismissed the idea that African Americans could possibly make good soldiers, even though they did manage to fight on both sides during the Revolution. On the other hand, whites nursed a deep visceral fear of slave uprisings by armed blacks, who, despite their apparent contentment and incompetence, might somehow team up and murder them in their sleep.

This heavy load of contradictions was hoisted menacingly above them, casting an ever-lengthening shadow, when numbers of escaped slaves recruited into service by the British in what was then the Spanish colony of East Florida during the War of 1812 opted to remain behind after the Treaty of Ghent, occupying a military fortification on Prospect Bluff overlooking the Apalachicola River, heavily stocked with cannon and munitions and bolstered with support from allied Native Americans. These were not handfuls of fugitives out of reach in an unknown, inaccessible swamp somewhere, like most maroon settlements; this was a prominent, fully equipped, self-sustaining, armed camp, which even had the temerity to continue to fly the Union Jack—the so-called “Negro Fort.” This was an invitation to fellow runaways. This was not only a challenge to the white man’s “peculiar institution,” it was a thumbing of the nose at the entire planter mentality. This was an unacceptable threat. They could not bear it; they would not bear it.

In The Battle of Negro Fort, Clavin, professor of history at the University of Houston, deftly explores not only the origin of this community and its eventual annihilation through the machinations of then-General Andrew Jackson, quietly countenanced by the federal government, but places the fort and its destruction in its appropriate context by opening a wider lens upon the entire era. This was a surprisingly significant moment in American history that for too long fell victim to superficial treatments overlooking the multiplicity of forces in play, a neglect much more recently remedied by the Pulitzer Prize-winning scholar Alan Taylor, whose body of work points not only to the far greater complexities attached to the War of 1812 that have usually remained unacknowledged, but also identifies the broader consequences that rose out of the series of conflicts Taylor collectively terms the “Wars of the 1810s.” Taylor’s brilliant American Republics3 specifically cites actions against the Negro Fort, and connects them to a series of events that included the First Seminole War, sparked by attempts to recapture runaway blacks living among Native Americans, and finally to Spain’s relinquishing of the Floridas to the United States. While never losing focus on the fort itself, Clavin too walks skillfully in this larger arena that hosts war, diplomacy, indigenous tribes pitted against each other, related maroon communities, as well as overriding issues of enslavement and the predominance of white supremacy.

The Battle of Negro Fort is very well-written, but it takes on an academic tone that makes it more accessible to a scholarly than a popular audience. But it is hardly dull, so those comfortable in the realm of historical studies will be undeterred. And it is, after all, a stirring tale that leads to a dramatic and tragic end. Just as the Venetians blew up the Parthenon in 1687 by scoring a hit on the gunpowder the Turks had stored there, a gunboat’s cannonball struck the powder magazine located in the center of the fort, which exploded spectacularly and obliterated the structure. Scores (or hundreds, depending upon the source) were killed, the leaders who survived executed, and those who failed to make their escape returned to slavery.

The author’s thesis underscores that the chief motive for the assault on the Negro Fort by Jackson’s agents in 1816 was the advancement of white supremacy rather than part of a greater strategy to dominate the Floridas, which strikes me as perhaps somewhat overstated. Still, Clavin cites later antebellum abolitionists who reference the Negro Fort with specificity in this regard, so he may very well have a point. In any case, this contribution to the historiography proves a worthy addition to the literature, and your understanding of this less well-known period of early American history will be significantly enhanced by adding it to your reading list.

1 Note: David Silkenat is the author of Scars on the Land: An Environmental History of Slavery in the American South.

2 The used bookstore in the yurt is West End Used Books in Wilmington, VT.

3 Note: I reviewed the referenced Alan Taylor book here: Review of: American Republics: A Continental History of the United States, 1783-1850, by Alan Taylor



Review of: Saving Yellowstone: Exploration and Preservation in Reconstruction America, by Megan Kate Nelson

As a child, one cartoon that habitually had me glued to our black-and-white TV set was the Yogi Bear Show, which spun a recurring comedic yarn starring that eponymous suave if mischievous anthropomorphic bear and his best bud Boo-Boo, who routinely sparred with a ranger as they poached picnic baskets. It was set in Jellystone Park, a thinly veiled animated rendering of Yellowstone National Park. As I grew older, I wondered what it would be like to check out the natural wonders of the real Yellowstone, but many decades later it yet remains an unfulfilled checkbox on a long bucket list. Other than passing views of documentaries that splashed spectacular images of waterfalls, geysers, and herds of bison across my 4K screen, I rarely gave the park a second thought.

So it was while attending the Civil War Institute (CWI) 2023 Summer Conference at Gettysburg College that I learned with no little surprise that there was to be a scheduled segment on Yellowstone. I was puzzled; beyond the scenic imagery recalled from episodes of Nat Geo, what little I knew about Yellowstone was that it was established as our first national park in 1872—seven years after Lee’s surrender! What could this possibly have to do with the Civil War?

Fortunately, I got a clue at the conference’s opening night ice cream social when I was by chance introduced to Megan Kate Nelson, author of Saving Yellowstone: Exploration and Preservation in Reconstruction America [2022], who was slated to give that very presentation. As we chatted, Megan Kate—almost nonchalantly—made the bold statement that without the Civil War there never could have been a Yellowstone Park. Agnostic but intrigued, I sat in the audience a couple of days later for her talk, which turned out to be both engaging and persuasive. I purchased her book along with a stack of others at the conference, and it turned out to be my first read when I got home.

History is too frequently rendered in a vacuum, often isolated from the competing forces that shape it, which not only ignores key context but in the process distorts interpretation. In contrast, though it is not always immediately apparent, every historical experience is to some degree or another the consequence of its relationship to a variety of less-than-obvious factors, such as climate, the environment, the prevalence of various flora and fauna (as well as pathogens), resources, trade networks, and sometimes the movements of peoples hundreds or even thousands of miles away. It is so rewarding to stumble upon a historian who not only identifies these kinds of wider forces in play but capitalizes upon their existence to turn out a stunning work of scholarship. In Saving Yellowstone, Megan Kate Nelson brilliantly locates a confluence of events, ideas, and individuals that characterize a unique moment in American history.

The Civil War was over. The fate of the disputed territories—the ill-begotten gains of the Mexican War that sparked secession when the south’s slave power was, by Lincoln’s election, stymied in its resolve to spread its so-called “peculiar institution” westward—had been settled: the Union had been preserved, slavery had been outlawed, and these would remain federal lands preserved for white free-soil settlement. This translated into immense opportunities for postwar Americans who pushed west towards what seemed like a limitless horizon of vast if barely explored open spaces, chasing opportunities in land or commerce or perhaps even a fortune in precious metals buried in the ground. Those in the way would be displaced: if not invisible, the Native Americans who had occupied these places for centuries were irrelevant, stubborn obstacles that could be bought off, relocated, or exterminated. Lakota Sioux chief Tȟatȟáŋka Íyotake, also known as Sitting Bull, would have something to say about that.

Ulysses S. Grant, the general who had humbled Lee at Appomattox, was now President of the United States, and remained committed to a Reconstruction that was on shaky ground largely due to the disastrous administration of his predecessor, Andrew Johnson, who had allowed elites of the former Confederacy to regain political power and trample upon the newly won rights of the formerly enslaved. The emerging reality was looking much like the south had lost the war but somehow won the peace, as rebels were returned to elective office while African Americans were routinely terrorized and murdered. Postwar demilitarization left a shrunken force of uniforms stretched very thin, who could protect either blacks from racist violence or white settlers encroaching on Native lands—but could not do both.

Meanwhile, the landscape was being transformed by towns that seemed to spring up everywhere, many connected by the telegraph and within the orbit of transcontinental railroads that would perhaps one day include the Northern Pacific Railway, a kind of vanity project of millionaire financier Jay Cooke that nearly destroyed him. All of this sparked frenetic activities that centered upon exploration, bringing trailblazers and surveyors and scientists and artists and photographers west to determine exactly what was there and what use could be made of it. One of these men was geologist Ferdinand Hayden, who led a handpicked team on a federally funded geological survey to the wilderness of the Yellowstone Basin in 1871 and charted a course that led, only one short year later, to its designation as America’s first national park.

More than six hundred thousand years ago, a massive supervolcano erupted and begat the Yellowstone Caldera and its underlying magma body, which produces the extremely high temperatures that power the hydrothermal features it is well known for, including hot springs, mudpots, fumaroles, and more than three hundred geysers! Reports of phenomena like these preceded Hayden’s expedition, but most were chalked up to tall tales. Hayden sought to map the expanse and to separate truth from fantasy. Of course, unlike for white men on a quest of discovery, there was nothing new about Yellowstone to neighboring Native Americans, who had inhabited the region into the deep mists of time.

The best-crafted biographies employ a central protagonist not only to tell that person’s story but also to immerse the reader in a grand narrative that reveals both the subject and the age in which they walked the earth. Nelson’s technique here, deftly executed, is to likewise write a kind of biography of Yellowstone that lets it serve as the central protagonist amid a much larger cast in a rich chronicle of this unique historical moment. A moment for the United States, no longer debased by the burden of human chattel slavery, that on the one hand had it celebrating ambitious achievements on an expanding frontier that boasted not only thriving towns and cities and industry and invention but even the remarkable triumph of posterity over profit in creating a national park and setting it aside for the benefit of all Americans. But not, on the other hand, actually for all Americans. Not for Native Americans, certainly, who at the point of the bayonet were driven away, into decades of decline. And not for African Americans, who in the national reconciliation of whites found themselves essentially erased from history and forced to live under the shadow of Jim Crow for a full century thereafter. Later, when the “West was Won,” so to speak, both blacks and Native Americans could very well visit Yellowstone Park as tourists, but never on the same terms as their white counterparts.

Saving Yellowstone is solid history as well as a terrific adventure tale, attractive to both popular and scholarly audiences. There are times, especially early on in the narrative, when it can be slow-going, and the quantity of characters that people the storyline can be dizzying, but as the author lays the groundwork the momentum picks up. You can sense that Nelson, as a careful historian, is perhaps sometimes holding back so that the drama does not outpace her citations. But it is, after all, a grand theme, and such details only enrich it. This is the rare book that will keep you thinking long after you have turned the last page. Oh, and for Civil War enthusiasts, I should add: it turns out that Megan Kate was absolutely correct—for better and for worse, without the Civil War there indeed never could have been a Yellowstone Park!


Review of: Harriet Tubman: The Road to Freedom, by Catherine Clinton

In 2016, Jack Lew, President Obama’s treasury secretary, announced a redesign of the twenty-dollar bill that would feature on its front a likeness of Harriet Tubman, arguably the most significant African American female of the Civil War era, while displacing Andrew Jackson, who not only owned slaves but championed the institution of human chattel slavery, and was likewise a driving force behind the Indian Removal Act, one of the most shameful episodes in our national saga. Immediate controversy ensued, which was ratcheted up when Donald Trump stepped into the White House. Many have correctly styled Trump as having almost no sense of history, but he did seem to have had a kind of boyhood crush on Jackson, who like Trump did whatever he liked with little regard for the consequences to others, especially the weak and powerless. Trump relocated a portrait of Jackson to a position of prominence in the Oval Office, and his new treasury secretary postponed the currency redesign, almost certainly an echo of Trump’s campaign grievance that putting Tubman on the bill was nothing but “pure political correctness.” Meanwhile, resistance grew on the left as well, as many pointed to the disrespect of putting the face of one who was formerly enslaved on legal tender that is a direct descendant of that once used to buy and sell human beings. Then there is the paradox in the redesign that puts Tubman on the obverse while maintaining Jackson’s presence on the reverse side of the bill, perhaps reflecting with a dose of disturbing irony the glaring sense of polarization that manifests in the national character these days, much as it did in Tubman’s time. Still, the Biden Administration has pledged to accelerate the pace of issuing the new currency, but there remains no sign that anyone will be buying groceries with Tubman twenties anytime soon. We can only imagine what Harriet Tubman, who was illiterate and lived most of her life in poverty, would make of all this!

A dozen years before all this hoopla over who will adorn the paper money, acclaimed historian Catherine Clinton published Harriet Tubman: The Road to Freedom [2004], a well-written, engaging study that turns out to be one of a string of books on Tubman to hit the press at nearly the same time—the others are by Kate Larson and Jean Humez—which collectively represented the first scholarly biographies of her life in more than six decades. (There have since been additional contributions to the historiography.) Surprisingly, Tubman proves a tough subject to chronicle: a truly larger-than-life heroic figure who can be credited with verifiable exploits to free the enslaved both before and during the Civil War—admirers nicknamed her “Moses” for her role in spiriting fugitives to freedom, and she was later dubbed “General” by John Brown—her achievements have also long been distorted by myth and embellishment, something nourished early on by a subjective biography of sorts by her friend Sarah Bradford that is said to play loose with facts and events. Then there is the challenge of fashioning an accurate account of someone who spent so many of her consequential years living in the shadows, both by the circumstance of anonymity imposed by her condition of enslavement and by the deliberate effort to wear a mask of invisibility as one operating outside the law, where the penalty for detection would be a return to slavery or, much more likely, death. For the historian, that translates into a delicate—and precarious—balancing act.

Clinton’s approach is to recreate Tubman’s life as close as possible to the colorful adventure it certainly was, without falling victim to sensationalism. She relies on scholarship to sketch the skeletal framework for Tubman’s life, then turns to a variety of sources and reports to put flesh upon it, sharing with the reader when she resorts to surmise to shade aspects of the complexion. In this effort, she largely succeeds.

Born Araminta Ross in Maryland in perhaps 1822—like many of the enslaved she could only guess at her date of birth—Tubman survived an especially brutal upbringing in bondage that witnessed family members sold, a series of vicious beatings and whippings, and a severe head injury incurred in adolescence when a heavy metal weight tossed by an overseer at another struck her instead, which left her with lingering dizziness, headaches, seizures, and what was likely chronic hypersomnia, a neurological disorder of excessive sleepiness. It also spawned vivid dreams and visions that reinforced religious convictions that God was communicating with her. By then, she was no stranger to physical abuse. Tubman was first hired out as a nursemaid when she herself was only about five years old, responsible for rocking a baby while it slept. If the baby woke and cried she was lashed as punishment. She recalled once being whipped five times before breakfast. She was left scarred for life. Tubman’s experiences serve as a strong rebuke to those deluded by “Lost Cause” narratives that would cast antebellum slavery as a benign institution.

Despite her harsh treatment at the hands of various enslavers, Tubman proved strong and resilient. Rather than break her, the cruelties she endured galvanized her, sustained by a religious devotion infused with Old Testament promises of deliverance. Still enslaved, she married John Tubman, a free black man, and changed her first name to Harriet shortly thereafter. When she fled to freedom in Philadelphia a few short years later, he did not accompany her. Tubman’s journey out of slavery was enabled by the so-called “Underground Railroad,” a route of safehouses hosted by sympathetic abolitionists and their allies.

For most runaways, that would be the end of the story, but for Tubman it proved just the beginning. Committed to liberating her family and friends, Tubman covertly made more than a dozen missions back to Maryland over a period of eight years and ultimately rescued some seventy individuals, while also confiding escape methods to dozens of others who successfully absconded. In the process, as Clinton points out, she leapfrogged from the role of “conductor” on the Underground Railroad to that of “abductor.” Now known to many as Moses, she was a master of disguise and subterfuge; the illiterate Tubman once famously pretended to read a newspaper in order to avoid detection. To those who knew her, she seemed to be utterly fearless. She carried a pistol, not only to defend herself against slavecatchers if needed, but also to threaten the fainthearted fugitive who entertained notions of turning back. She never lost a passenger.

At the same time, Harriet actively campaigned for abolition, which brought her into the orbit of John Brown, who dubbed her “General Tubman.” Unlike other antislavery allies, she concurred with his advocacy for armed insurrection, and she proved a valuable resource for him with her detailed knowledge of support networks in border states. Brown’s raid on Harper’s Ferry was, of course, a failure, and Brown was hanged, but her admiration for the man never diminished. With the onset of the Civil War, Tubman volunteered to help “contrabands” living in makeshift refugee camps, and also served as a nurse before immersing herself in intelligence-gathering activities. Most spectacularly, Tubman led an expedition of United States Colored Troops (USCT) on the remarkable 1863 Combahee River Raid in South Carolina that freed some 750 enslaved people—then recruited more than 100 of them to enlist to fight for the Union. She is thus credited as the first woman to lead American forces in combat! She was even involved with Colonel Robert Gould Shaw in his preparations for the assault on Fort Wagner, later dramatized in the film Glory. When the war ended, Tubman went on to lobby for women’s suffrage, and died in her nineties in 1913—the end of a life given over to legend because so much of it more closely resembled imagined epic than ordinary experience.

In this biography, Clinton the historian wrestles against the myth, yet sometimes seems seduced by it. She reports claims of the numbers of the enslaved Tubman liberated that seem exaggerated, and references enormous sums slaveowners offered as reward for her capture that defy documented evidence. There are also a couple of egregious factual errors that any student of the Civil War would stumble upon with mouth agape: she relocates the battle of Shiloh from Tennessee to Virginia, and declares Delaware a free state, which would have been a surprise to the small but enduring population of the enslaved who lived there. For these blunders, I am inclined to give Clinton the benefit of the doubt; she is an esteemed scholar who likely relied on a lousy editor. Perhaps these mistakes have been corrected in later editions.

In June 2023, shortly after I read this volume, I had the pleasure of sitting in on Catherine Clinton’s lecture on the life of Harriet Tubman at the Civil War Institute (CWI) Summer Conference at Gettysburg College. Unlike all too many academics, Clinton is hardly dull on stage, and her presentation was as lively and colorful as her subject certainly must have been in the days when she walked the earth. During a tangent that drifted to the currency controversy, she noted that one of the more superficial objections to the rebranding of the twenty was that there are no existing images of Tubman smiling, something Clinton—grinning mischievously—reminded the audience should hardly be surprising, since Harriet once dealt with a toothache while smuggling human beings out of bondage by knocking her own tooth out with her pistol, an episode recounted in the book as well. Harriet Tubman’s life was an extraordinary one. If you want to learn more, pick up Clinton’s book.

I reviewed an earlier book by Clinton here: Review of: Tara Revisited: Women, War & The Plantation Legend, by Catherine Clinton

Featured

Review of: The Failed Promise: Reconstruction, Frederick Douglass, and the Impeachment of Andrew Johnson, by Robert S. Levine

As President of the United States, he is ranked at or near the bottom by most historians, a dramatic contrast to the man whose untimely death elevated him to that office, who is consistently ranked at or near the top. When some today bemoan the paradox of a south that lost the Civil War yet seemed in so many ways to have won the peace, his name is often cited as principal cause. While he cannot solely be held to blame, he bears an outsize responsibility for the mass rehabilitation of those who once fomented secession and led a rebellion against the United States, a process that saw men who once championed treason regain substantial political power—and put that authority forcefully to bear to make certain that the rights and privileges granted to formerly enslaved African Americans in the 14th and 15th amendments would not be realized. He was Andrew Johnson.

As the foremost black abolitionist and a vigorous advocate for freedom and civil rights for African Americans before, during, and after the Civil War, he is almost universally acclaimed as the greatest figure of the day in that long struggle. Born enslaved, often hungry and clad in rags, he was once hired out to a so-called “slave-breaker” who frequently whipped him savagely. But, like Abraham Lincoln, he proved himself a remarkable autodidact who not only taught himself to read but managed to obtain a solid education that was to shape a clearly sophisticated intellect. He escaped to freedom, and distinguished himself as orator, author, and activist. Lincoln welcomed him at the White House. He lived long enough to see many of the dreams of his youth realized, as well as many of his hopes for the future dashed. He was Frederick Douglass.

At first glance, it seemed a bit odd and even unsettling to find these two men juxtaposed in The Failed Promise: Reconstruction, Frederick Douglass, and the Impeachment of Andrew Johnson [2021], but it was that very peculiarity that drew me to this kind of dual biography by Robert S. Levine, a scholar of African American literature who has long focused on the writings of Frederick Douglass. But back to that first glance: it seemed to me that the more elegant contrast would have been between Johnson and Ulysses S. Grant, since the latter was the true heir to Lincoln’s (apparent) moderate stance on reconciliation with the south that also promoted the well-being of the formerly enslaved—a position that at times put Grant uncomfortably at odds with both Johnson and Johnson’s eventual opponents in control of Congress, the Radical Republicans, who were hell-bent on punishing states once in rebellion while insisting upon nothing less than a social revolution that mandated equality for blacks in every arena. Meanwhile, even as Johnson assumed the presidency in 1865, Douglass himself had neither basic civil rights nor the right to vote in the next election.

Still, with gifted prose, a fast-paced narrative, and a talent for analysis that one-ups a number of credentialed historians of this era, Levine sets out to demonstrate that Johnson’s real rival in his tumultuous tenure was neither Grant nor a recalcitrant Congress, but rather Douglass, who—much like Martin Luther King a full century later—unshakably occupied the moral high ground. In this, he mostly succeeds.

The outline of his story of Johnson is mostly familiar, yet punctuated by keen insights into the man that are overlooked in other studies. Johnson, who (also like Lincoln) rose from poverty to prominence, was a Democrat who served in the House of Representatives and as governor of Tennessee before being elected to the Senate. A staunch Unionist, he was the only sitting senator from a seceding state who did not resign his seat. Lincoln made him Military Governor of Tennessee soon after it was reoccupied, and in 1864 he replaced Hannibal Hamlin as Lincoln’s running mate on the Republican Party’s rechristened “National Union” ticket in an election Lincoln felt certain he would lose. Johnson showed up drunk on inauguration day—sparking an unresolved controversy over whether the cause was recreation or self-medication—which tarnished his reputation in some quarters. Still, there were some among the Radical Republicans who wished that Johnson were the president and not Lincoln. Johnson, a former slaveowner who had emancipated first his own human property and later Tennessee’s entire enslaved population, had an abiding hatred for the plantation elites who had long scorned men of humble beginnings like himself, and a deep anger towards those who had severed the bonds of union with the United States. He seemed to many in Congress a better agent to wreak revenge upon the conquered south for the hundreds of thousands of lives lost to war than the conciliatory Lincoln, who was willing to welcome seceded states back into the fold if a mere ten percent of their male populations took loyalty oaths to the Union.

The inauguration with an inebriated Johnson in attendance took place on March 4, 1865. On April 9, Lee surrendered at Appomattox. On April 15, Lincoln was dead and Johnson was president. Quietly—very quietly indeed—some Radical Republicans rejoiced. Lincoln had led them through the war, but now Johnson would be the better man to make the kind of unforgiving peace they had in mind. Moreover, Johnson—who had styled himself “Moses” to African Americans in Tennessee as he preemptively (and illegally) freed them statewide in 1864—seemed the ideal candidate to lead their crusade to foster a new reality for the defeated south, one that would crush the Confederates while promoting civil equality for their former chattel property. In all this, they were to be proved mistaken.

Meanwhile, Douglass brooded—and entertained hopes for Johnson not unlike those of his white allies in Congress. There is, of course, no evidence that he celebrated Lincoln’s untimely demise, but Levine brilliantly reveals that Douglass’s appraisal of Lincoln evolved over time, that his idolatry for the president was a creature of later reflection, long after the fact, when he came to appreciate in retrospect not only what Lincoln had truly achieved but how thoroughly the promise of Reconstruction was derailed by his successor. In their time, they had forged a strong relationship and even a bond of sorts, but Douglass consistently harbored doubts about Lincoln’s real commitment to the cause of African American freedom and civil liberties. Douglass took seriously Lincoln’s onetime declaration that “If I could save the Union without freeing any slave I would do it,” and he was suitably horrified by what that implied. Like some in Congress, Douglass was deluded by the fantasy of what Johnson’s accession might mean for the road ahead. This serves as both a strong caution and a timely reminder to all of us in the field that it is critical to evaluate not only what was said or written by any individual in the past, but when it was said or written.

The author’s analysis of Johnson proves fascinating. Levine maintains that Johnson’s contempt for the elites who once disdained him was genuine, but that this was counterbalanced by his secret longing for their acceptance. And he reveled in freeing and enabling the enslaved, but only paternalistically and only ever on his own terms. If he could not be Moses, he would be Pharaoh. Levine also argues that whatever his flaws—and they were manifold—Johnson’s vision of his role as president in Reconstruction mirrored Lincoln’s. Lincoln believed that Reconstruction must flow primarily from the executive branch, not the legislative, and he intended to direct it as such. Lincoln’s specific plans died with him, but Johnson had his own ideas. This suggests that it is just as likely there would have been a clash between Lincoln and the Congress had he lived, although knowing what we know of Lincoln we might speculate on more positive results.

Levine breaks no new ground in his coverage of the failed impeachment, which the narrative treats without the kind of scrutiny found, for instance, in Impeached: The Trial of President Andrew Johnson and the Fight for Lincoln’s Legacy, by David O. Stewart. But this account has the advantage of added nuance because it is enriched by the presence of Douglass as spectator and sometime commentator. And here is Levine’s real achievement: it is through Douglass’s eyes that we can vividly see the righteous cause of emancipation won, obtained at least partially with the blood of United States Colored Troops (USCT); a constitutional amendment passed forever prohibiting human chattel slavery; and subsequent amendments guaranteeing civil rights, equality, and the right to vote for African Americans. And through those same eyes we witness the disillusion and disgust as the accidental president turns against everything Douglass holds dear. Those elite slaveholders who led rebellion, championing a proud slave republic, have their political rights restored and later show up as governors and members of Congress. The promise of Reconstruction is derailed, replaced by “Redemption” as unreconstructed ex-Confederates recapture the statehouses, black codes are enacted, and African Americans and their white allies are terrorized and murdered. Constitutional amendments turn moot. The formerly enslaved, once counted as three-fifths of a person, are now counted as full citizens but, despite the 15th Amendment, denied the vote at the point of a gun, so representation for the former slave states that engineered the war effectively increases after they rejoin the union. That union has been restored at the sacrifice of more than six hundred thousand lives, and while slavery is abolished, Douglass grows old observing the reconciliation of white men on both sides of the Mason-Dixon line and an embrace of “Lost Cause” ideology that enshrines repression and begins the erasure of African Americans from Civil War history.

That Levine is a professor of literature rather than of history is perhaps why the story he relates has a more emotional impact upon the reader than it might have if rendered by those with roots in our own discipline. The scholarship is by no means lacking, as evidenced by the ample citations in the thick section of notes at the end of the volume, but thankfully he eschews the dry, academic tone that tends to dominate the history field. This is a work equally attractive to a popular or scholarly audience, something that should be both celebrated and emulated. As an added bonus, he includes as an appendix Douglass’s 1867 speech, “Sources of Danger to the Republic,” which argues for constitutional reforms that nicely echo down to our own times. Among other things, Douglass boldly calls for eliminating the position of vice president to avoid accidental presidencies (such as that of Andrew Johnson!) and for curbing executive authority. It is well worth the read, and unfortunately the speech is not easy to access elsewhere except through a paywall. The Failed Promise is an apt title: the optimism at the dawn of Reconstruction holds so much appeal because we know all too well the tragedy of its outcome. To get a sense of how it began, as well as how it went so wrong, I recommend this book.


Here’s a link to a rare free online transcript of Frederick Douglass’s 1867 speech: “Sources of Danger to the Republic”

I reviewed Stewart’s book here: Impeached: The Trial of President Andrew Johnson and the Fight for Lincoln’s Legacy, by David O. Stewart

Featured

Review of: Love & Duty: Confederate Widows and the Emotional Politics of Loss, by Angela Esco Elder

In 2008, one hundred forty-three years after Appomattox, ninety-three-year-old Maudie Hopkins of Arkansas passed away, most likely the final surviving widow of a Confederate Civil War soldier. Like her literary counterpart Lucy Marsden, the star of Allan Gurganus’s delightful novel Oldest Living Confederate Widow Tells All, Maudie married an elderly veteran decades after the war ended and benefited from his pension. The widows are gone, but their legacy is still celebrated by the United Daughters of the Confederacy (UDC), a neo-Confederate organization of female descendants of rebel soldiers that promotes the “Myth of the Lost Cause” and has long been associated with white supremacy.

But for students of the Civil War curious about the various fates of southern women whose husbands were among the many thousands who lost their lives at places like Shiloh and Chancellorsville, or in some random hospital tent, there is little to learn from the second-hand tales of either the fictional Lucy Marsden or the real-life Maudie Hopkins—and even less from the pseudohistorical fantasies peddled by the UDC. For that, fortunately, there is the outstanding recent work by historian Angela Esco Elder, Love & Duty: Confederate Widows and the Emotional Politics of Loss (2022), a well-written and surprisingly gripping narrative that brings a fresh perspective to a mostly overlooked corner of Civil War studies.

Something like 620,000 soldiers died during four years of Civil War, far more from disease than from bullets or bayonets, but for most of the estimated 200,000 left widowed on both sides, the specific cause was less significant than the shock, pain, and lingering tragedy of loss. This they shared, north and south alike. But for a variety of reasons, the aftermath for the southern widow was substantially more complicated, often more desperate, and for many bred a suffering that proved not merely persistent but chronic. Southern women not only lost a husband; they also lost a war and a way of life.

Building upon Drew Gilpin Faust’s magnificent and groundbreaking study, This Republic of Suffering: Death and the American Civil War, Elder carves her own unique corridor to a critical realm of the past too often treated superficially or not at all. In the process, she engages the reader on a journey peopled with long-dead characters who spring to life at the stroke of her pen, enriched with anecdote while anchored to solid scholarship. I have read deeply in Civil War literature. While this topic interested me, I approached the book with some trepidation: after all, this is a theme that in the wrong hands could be a dull slog. Instead, it turned out to be a page-turner! Thus the author has managed to attain a rare achievement in our field: she has written a book equally attractive to both a scholarly and a popular audience.

In war, the southern woman endured a reality far more difficult than her northern counterpart. For one thing, the war was either on—or potentially on—her doorstep. Other than brief, failed rebel incursions into the north, the American Civil War was fought almost entirely on the territory of the seceded states that formed the Confederacy. A certain menace ever loomed over that landscape. With imports choked off, there were critical shortages of goods, not simply luxuries but everyday items all households counted on. This was exacerbated by inflation that dramatically increased the cost of living.

And then there were the enslaved. Today’s “Lost Cause” proponents would insist that the war had nothing to do with what the south once styled its “peculiar institution,” but the scholarly consensus has long established that slavery was the central cause of the war. For those on the home front, that translated into a variety of complex realities. While most southerners did not themselves own human property, communities lived in fear of violent uprisings, even imagined ones. For that segment whose households included the enslaved, there was the matter of managing a population held in bondage, large or small, with their men away at war. And most of the men were indeed away. In such a slave-based society, labor to support the infrastructure was performed by the enslaved, freeing up a much larger proportion of military-age males to go off to war.

For women, all this was further complicated by a culture that disdained manual labor for white men or women, and placed women on a romantic pedestal where they also functioned primarily as property of sorts: of their husbands or their fathers. As if all of that were not challenging enough, when the war was over, the south lay in ruins: its economy shattered, its chattel slaves freed, its outlook utterly bleak. Even bleaker was the reality of the Confederate widow.

Elder succeeds in Love & Duty where others might have failed in that she launches the story with a magnetic snapshot of what life was like for southern women in the antebellum era, a culture of hyperbolic chivalry, courtship rituals, and idealistic images of how a proper young lady should look and behave. Like James Cameron in the first part of the film Titanic, she vividly demonstrates what life was like prior to the metaphorical shipwreck, showcasing the experiences of a cast of characters far more fascinating than any contrived in fiction.

Among these, perhaps the most unforgettable is Octavia “Tivie” Bryant, a southern belle in the grand style of the literature whom we encounter when she is only fourteen, courted by the twenty-six-year-old plantation owner Winston Stephens. Tivie’s father objects and they are parted, but the romance never cools, and a few years later there is a kind of fairy-tale wedding. But life intervenes. In 1864, a sniper’s bullet takes Winston and abruptly turns twenty-two-year-old Tivie into a widow. She is inconsolable. She lives on with a grief she can never reconcile until she finally passes on in 1908, more than forty years later.

But those who have read Catherine Clinton’s brilliant Tara Revisited: Women, War & The Plantation Legend are well aware that most southern women could not boast lives like Scarlett O’Hara’s—or Tivie’s, for that matter. Clinton underscored that while there were indeed women like Scarlett from families of extreme wealth who lived on large plantations with many slaves and busied themselves with social dalliances, that demographic comprised the tiniest minority of antebellum southern women. In fact, plantation life typically meant hard work and much responsibility even for affluent women. And for others it could be brutally demanding, before the war and even more so during the course of it, for wives and daughters of very modest means who owned no slaves, deprived of husbands and fathers away at war while they struggled to survive. Many of the widows that Elder profiles represent this cohort, providing the reader with a colorful panorama of what life was really like for those far from the front as the Confederate cause was gradually but inexorably crushed on battlefields west and east.

Elder’s account is especially effective because she exposes the reader to the full range of Confederate widows and how they coped with their grief (or lack thereof), which of course ran the full gamut of human experience. In this deeply patriarchal society, a man who lost his spouse was expected to wear a black armband and mourn for a matter of months. A woman, on the other hand, was to dress only in black and remain in mourning a full two and a half years. Not all complied. We might have empathy for poor Tivie, who wallowed in her agony for decades, but on the other hand, her station in life permitted her an extended period of grieving, for better or worse. Others lacked that option. Many struggled just to survive. Some lost or had to give up their children. Some turned to prostitution. Some turned to remarriage.

In war or peace, not every woman is devastated by the death of her spouse, especially if he was lecherous, adulterous, or abusive. Nineteenth-century women, north and south, were essentially the property first of their fathers and then of their husbands. In reality, this was even more true for southern women, subject to the sometimes-twisted romantic idealism of their culture. Some deaths were welcomed, albeit quietly. Elder relates that:

In 1849, one wife petitioned the North Carolina courts for a divorce after her husband continuously drank heavily, beat her, locked her out of the house overnight, slept with an enslaved woman, and in one instance, forced his wife to watch them have sex. The chief justice did not grant the absolute dissolution of the marriage, believing there was reasonable hope for the couple’s reconciliation. Another wife, in Virginia, would flee to the swamps when her husband drank. If he caught her in the kitchen seeking protection from the weather, he attacked “with his fists and with sticks” …  And within the patriarchy, men maintained the right to correct their wives … Alvin Preslar beat his wife so brutally that she fled with two of her children toward her father’s house, dying before she reached it. Three hundred people petitioned against Preslar’s sentence to hang, arguing his actions were not intentional but rather “the result of a drunken frolic.” [p29]

A woman who managed to survive marriage to a man like these likely would not mourn like Tivie if a bullet—or measles—took him from her far from home.

Some of the best portions of this book focus upon specific individuals. One of my favorites is the spotlight on Emilie Todd Helm, a younger sister of Mary Todd Lincoln, whose husband, General Benjamin Hardin Helm, was killed at Chickamauga. Emilie sought to return home from Georgia to the border state of Kentucky but was denied entry because she refused to take a loyalty oath to the Union. Her brother-in-law, President Lincoln himself, intervened and an exception was made. Elder reports that:

When Emilie approached the White House in 1863, she was “a pathetic little figure in her trailing black crepe.” Her trials had transformed the beautiful woman into a “sad-faced girl with pallid cheeks, tragic eyes, and tight, unsmiling lips.” Reunited with Abe and Mary, Emilie wrote, “we were all too grief-stricken at first for speech…. We could only embrace each other in silence and tears.” Certainly, the war had not been easy on the Lincolns either. The Todd sisters had lost two brothers, Mary had lost a son, and Emilie’s loss of Benjamin gave them much to grieve over together. “I never saw Lincoln more moved,” recalled Senator David Davis, “than when he heard of the death of his young brother-in-law, Helm, only thirty-two-years-old, at Chickamauga.”… ‘Davis,’ said he, ‘I feel as David of old did when he was told of the death of Absalom. Would to God that I had died for thee, oh, Absalom, my son, my son?’” … Emilie and Mary found comfort in each other’s company, but their political differences divided them. [p91]

When Emilie, still loyal to the Confederacy, later petitioned to sell her cotton crop despite wartime strictures to the contrary, Lincoln refused her.

No review can appropriately assess the extent of Elder’s achievements in Love & Duty, but there is much worthy of praise. Are there shortcomings? In the end, I was left wanting more. I would have liked Elder to better connect the experiences of actual widows with the myths of the UDC that later subsumed them. I also wanted more on the experiences of the enslaved who lived in the shadows of these white widows. Finally, I thought there were too many direct references to Drew Gilpin Faust in the narrative. Yes, the author admires Faust, and yes, Faust’s scholarship is extraordinary, but Elder’s work is a significant contribution to the historiography on its own: let Faust live on in the endnotes, as is appropriate. But … these are quibbles. This is a fine work, and if you are invested in Civil War studies it belongs on your bookshelf—and in your lap, turning each page!


I reviewed the Drew Gilpin Faust book here: Review of: This Republic of Suffering: Death and the American Civil War, by Drew Gilpin Faust

I reviewed the Catherine Clinton book here:  Review of: Tara Revisited: Women, War & The Plantation Legend, by Catherine Clinton


Featured

Review of: Searching for Black Confederates: The Civil War’s Most Persistent Myth, by Kevin M. Levin

In March 1865, just weeks before the fall of Richmond that was to be the last act ahead of Appomattox, curious onlookers gathered in that city’s Capitol Square to take in a sight not only never before seen but hardly ever even imagined: black men drilling in gray uniforms—a final desperate gasp by a Confederacy truly on life support. None were ever to see combat. Elsewhere, it is likely that a good number of the ragged white men marching with Robert E. Lee’s shrunken Army of Northern Virginia were aware of this recent development. News travels fast in the ranks, and after all it was pressure from General Lee himself that finally won over adamant resistance at the top to enlisting black troops. We can suppose that many of Lee’s soldiers—who had over four years seen much blood and treasure spent to guard the principle that the most appropriate condition for African Americans was human bondage—were quite surprised by this strange turn of events. But more than one hundred fifty years later, the ghosts of those same men would be astonished to learn that today’s “Lost Cause” celebrants of the Confederacy insist that legions of “Black Confederates” marched alongside them throughout the struggle.

In Searching for Black Confederates: The Civil War’s Most Persistent Myth (2019), historian Kevin M. Levin brings thorough research and outstanding analytical skills to an engaging and very well-written study of how an entirely fictional, ahistorical notion not only found life, but also the oxygen to gain traction and somehow spawn an increasingly large if misguided audience. For those committed to history, Levin’s effort arrived not a moment too soon, as so many legitimate Civil War groups—on and off social networking—have come under assault by “Lost Cause” adherents who have weaponized debate with fantastical claims that lack evidence in the scholarship but are cleverly packaged and aggressively peddled to the uninformed. The aim is to sanitize history in an attempt to defend the Confederacy, shift the cause of secession from slavery to states’ rights, refashion their brand of slavery as benevolent, and reveal purported long-suppressed “facts” allegedly erased by today’s “woke” mob eager to cast the south’s doomed quest to defend their liberty from northern aggression in a negative light. In this process, the concept of “Black Confederates” has turned into their most prominent and powerful meme, winning converts not only of the uninitiated but sometimes, unexpectedly, of those who should know better.

What has been dubbed the “Myth of the Lost Cause” was born of the smoldering ashes of the Confederacy. The south had been defeated, slavery not only outlawed but widely discredited. Many of the elite southern politicians who had fostered secession because Lincoln’s Republicans would block their peculiar institution from the territories, and who back in 1861 had proclaimed the Confederate States of America a “proud slave republic,” now rewrote history to erase slavery as their chief grievance. Attention was instead refocused on “states’ rights,” which in prior decades had mostly served as a euphemism for the right to own human beings as property. Still, the scholarly consensus has established that slavery was indeed the central cause of the war. As Gary Gallagher, one of today’s foremost Civil War historians, has urged: pay attention to what they said at the dawn of the war, not what they said when it was over. For those who promote the Lost Cause, of course, it is just the opposite.

There are multiple prongs to the Lost Cause strategy. One holds slavery to have been a generally benign practice with deep roots in biblical times, along with a whiff of the popular antebellum trope that juxtaposed the enslaved with beleaguered New England mill workers, maintaining that the former lived better, more secure lives as property—and that they were content, even pleased, with their station in life. This theme was later exploited with much fanfare in the fiction and film of Gone with the Wind, with such memorable episodes as the enslaved Prissy screeching in terror that “De Yankees is comin!”—a cry that in real life would far more likely have been in celebration than distress.

But, as Levin reveals through careful research, the myth of black men in uniform fighting to defend the Confederacy did not emerge until the 1970s, as the actual treatment of African Americans—in slavery, in Jim Crow, as second-class citizens—became widely known to a much larger audience. This motivated Lost Cause proponents not only to further distance the southern cause from slavery, but to invent the idea that blacks actually laid down their lives to preserve it. In the internet age, this most conspicuously translated into memes featuring out-of-context photographs of black men clutching muskets and garbed in gray … the “Black Confederates” who bravely served to defend Dixie against marauding Yankees.

All of this seems counterintuitive, which is why it is remarkable that the belief not only caught on but has grown in popularity. In fact, some half million of the enslaved fled to Union lines over the course of the war. Two hundred thousand black men formed the ranks of the United States Colored Troops (USCT); ultimately a full ten percent of the Union Army was composed of African Americans. If captured, black soldiers were returned to slavery or—all too frequently—murdered as they attempted to surrender, as at Fort Pillow, the Battle of the Crater, and elsewhere. The idea that African Americans would willingly fight for the Confederacy seems not only unlikely, but insane.

So what about those photographs of blacks in rebel uniforms? What is their provenance? To find out, Levin begins by exploring what life was like for white Confederates. In the process, he builds upon Colin Woodward’s brilliant 2014 study, Marching Masters: Slavery, Race, and the Confederate Army During the Civil War. Woodward challenged the popular assumption that while most rebels fought for southern independence, they remained largely agnostic about the politics of slavery, especially since only a minority were slaveowners themselves. To the contrary, Woodward argued, the peculiar institution was never some kind of abstract notion to the soldier in the ranks, since tens of thousands of blacks accompanied Confederate armies as “camp slaves” throughout the course of the war! (Many Civil War buffs are shocked to learn that Lee brought as many as six to ten thousand camp slaves with him on the Gettysburg campaign—this while indiscriminately scooping up any blacks encountered along the way, both fugitive and free.)

Levin skips the ideological debate at the heart of Woodward’s thesis while bringing focus to the omnipresence of the enslaved, whose role was entirely non-military, devoted instead to performing every kind of labor that fell within the duties of soldiers on the other side: digging entrenchments, tending to sanitation, serving as teamsters, cooks, and the like. Many were subject to impressment by the Confederate government to support the war effort, while others were the personal property of officers or enlisted men, body servants who accompanied their masters to the front. According to Levin, it turns out that some of the famous photographs of so-called Black Confederates were of these enslaved servants, whom their owners dressed up for dramatic effect in the studio, decked out in matching uniform with musket and sword—before even marching off to war. Once in camp, of course, these men would no longer be in costume: they were slaves, not soldiers.

After the war, legends persisted of loyal camp slaves who risked their lives under fire to tend to a wounded master or to bring his body home for burial. While likely based upon actual events, the number of such occurrences was certainly overstated in Lost Cause lore that portrayed the enslaved as not only content to be chattel but even eager to assist those who held them as property. Also, as Reconstruction fell to Redemption, blacks in states of the former Confederacy who sought to enjoy rights guaranteed to them by the Fourteenth and Fifteenth Amendments were routinely terrorized and frequently murdered. For African Americans who faced potentially hostile circumstances, championing their roles as loyal camp slaves, real or imagined, became a survival mechanism. Meanwhile, whites who desperately wanted to remember that which was contrived or exaggerated zealously hawked such tales, later came to embrace them, and finally enshrined them as incontrovertible truth, celebrated for decades hence at reunions where former camp slaves dutifully made appearances to act the part.

Still later, there was an intersection of such celebrity with financial reward, when southern states began to offer pensions for veterans and some provision was made for the most meritorious camp slaves. But at the end of the day, these men remained slaves, not soldiers. Nevertheless, more than a century later, many of these pensioners were transformed into Black Confederates. And some of them people the memes of a now resurgent Lost Cause often inextricably entwined with today’s right-wing politics.

It is likely that handfuls of camp slaves did, on rare occasions, take up a weapon alongside their masters and fire at soldiers in blue charging their positions. Such reports exist, even if they cannot always be corroborated. In the scheme of things, the numbers are certainly minuscule. And, of course, in every conflict there are collaborators. But the idea that African Americans served as organized, uniformed forces fighting for the south lacks not only evidence but rationality.

Yet how can we really know for certain? For that, we turn to a point Levin makes repeatedly in the narrative: there are simply no contemporaneous accounts of such a thing. It has elsewhere been estimated that soldiers in the Civil War, north and south, collectively wrote several million letters. Tens of thousands of these survive, and they touch on just about every imaginable topic. Not one refers to Black Confederate troops in the field.

On the other hand, quite a few letters home reference the sometimes-brutal discipline inflicted upon disobedient camp slaves. In one, a Georgia lieutenant informed his wife that he whipped his enslaved servant Joe “about four hundred lashes … I tore his back and legs all to pieces. I was mad enough to kill him.” Another officer actually did beat a recalcitrant slave to death [p26-27]. Such acts went unpunished, of course, and that they were so frankly and unremarkably reported in letters to loved ones speaks volumes about the routine cruelty of chattel slavery, while also contradicting modern fantasies that black men would willingly fight for such an ignoble cause. The white ex-Confederates who later hailed the heroic and loyal camp slave no doubt willingly erased from memory the harsh beatings that could characterize camp life; the formerly enslaved who survived likely never forgot.

Searching for Black Confederates is as much about disproving their existence as it is about the reasons some insist, against all evidence, that they were real. With feet planted firmly in the past as well as the present, Levin—who has both a talent for scholarship and a gifted pen—has written what is unquestionably the definitive treatment of this controversy, and along the way has made a significant contribution to the historiography. The next time somebody tries to sell you on “Black Confederates,” advise them to read this book first, and then get back to you!


I reviewed the Woodward book here: Review of: Marching Masters: Slavery, Race, and the Confederate Army During the Civil War, by Colin Edward Woodward

Featured

Review of: Wrestling With His Angel: The Political Life of Abraham Lincoln Vol. II, 1849-1856, and All the Powers of Earth: The Political Life of Abraham Lincoln Vol. III, 1856-1860, by Sidney Blumenthal

On November 6, 1860, Abraham Lincoln was elected the 16th president of the United States, although his name did not appear on the ballot in ten southern states. Just about six weeks later, South Carolina seceded. This information is communicated in only the final few of the more than six hundred pages contained in All the Powers of Earth: The Political Life of Abraham Lincoln Vol. III, 1856-1860, the ambitious third installment in Sidney Blumenthal’s projected five-volume series. But this book, just like the similarly thick ones that preceded it, is burdened by neither unnecessary paragraphs nor a single gratuitous sentence. Most noteworthy, Abraham Lincoln—the ostensible subject—is conspicuous by his absence in vast portions of this intricately detailed and extremely well-written narrative, which goes well beyond the boundaries of ordinary biography to deliver a much-needed re-evaluation of the tumultuous age that he sprang from, in order to account for how this unlikely figure came to dominate it. The surprising result is that through this unique approach, the reader will come to know and appreciate the nuance and complexity of the man and his times like never before.

In the standard textbooks of my school days, Lincoln seems to come out of nowhere. A homespun prairie lawyer who served a single, unremarkable term in the House of Representatives, he is thrust into national prominence when he debates Stephen A. Douglas in his ultimately unsuccessful campaign for the U.S. Senate, then somehow rebounds just two years later by skipping past Congress and into the White House. Douglas, once one of the most well-known and consequential figures of his day, slips into historical obscurity. Meanwhile, long-simmering sectional disputes between white men on both sides roar to life with Lincoln’s election, sparking secession by a south convinced that its constitutional rights and privileges are under assault. Slavery looms just vaguely on the periphery. Civil War ensues, an outgunned Confederacy falls, Lincoln is assassinated, slavery is abolished, national reconciliation follows, and African Americans are even more thoroughly erased from history than Stephen Douglas.

Of course, the historiography has come a long way since then. While fringe “Lost Cause” adherents still speak of states’ rights, the scholarly consensus has unequivocally established human chattel slavery as the central cause of the conflict, and has resurrected the essential role of African Americans—who comprised a full ten percent of the Union army—in putting down the rebellion. In recent decades, this has motivated historians to reexamine the prewar and postwar years through a more polished lens. That has enabled a more thorough exploration of an antebellum period too long cluttered with grievances of far less significance, such as the frictions of rural vs. urban, agriculture vs. industry, and tariffs vs. free trade. Such elements may indeed have exacerbated tensions, but without slavery there could have been no Civil War.

And yet … and yet with all the literature that has resulted from this more recent scholarship, much of it certainly superlative, students of the era cannot help but detect the shadows of missing bits and pieces, like the dark matter in the universe we know exists but struggle to identify. This is at least partially due to timelines that fail to properly chart root causes that far precede traditional antebellum chronologies, some of which look back no further than the Mexican War, a fact that underscores the lack of agreement on even a consistent “start date” for the antebellum era. Not surprisingly perhaps, this murkiness has also crept into the realm of Lincoln studies, to the disfavor of genres that should be complementary rather than competing.

In fact, the trajectory of Lincoln’s life and the antebellum era are inextricably conjoined, a reality that Sidney Blumenthal brilliantly captures with a revolutionary tactic that chronicles them as a single, intertwined narrative beginning with A Self-Made Man: The Political Life of Abraham Lincoln Vol. I, 1809–1849 (which I reviewed elsewhere). It is evident that at Lincoln’s birth the slave south already effectively controlled the government, not only by way of a string of chief executives who also happened to be Virginia plantation dynasts, but—of even greater consequence—through outsize representation obtained via the Constitution’s “Three-Fifths Clause.” But even then, there were signs that the slave power—pregnant with an exaggerated sense of its own self-importance, a conviction of moral superiority, and a ruthless will to dominate—possessed an unquenchable appetite to enlarge its extraordinary political power to steer the ship of state, frequently enabled by northern men of southern sympathies then disparaged as “doughfaces.” Lincoln was eleven at the time of the Missouri Compromise, twenty-three during the Nullification Crisis so closely identified with John C. Calhoun, twenty-seven when the first elements of the “gag rule” in the House so ardently opposed by John Quincy Adams were instituted, and thirty-seven at the start of both the Mexican War and his sole term as an Illinois Congressman, where he questioned the legitimacy of that conflict. That same year, Stephen A. Douglas, also of Illinois, was elected U.S. Senator.

Through it all, the author proves as adept a historian of the United States as he is a biographer of Lincoln—who sometimes goes missing for a chapter or more, summoned only when the account calls for him to make an appearance. Some critics have voiced frustration at Lincoln’s absence for extended portions of what is, after all, his own biography, but they seem to be missing the point. As Blumenthal demonstrates in this and subsequent volumes, it is not only impossible to study Lincoln without surveying the age in which he walked the earth, but equally impossible to analyze the causes of the Civil War absent an analysis of Lincoln, because he was such a critical figure along the way.

Wrestling With His Angel: The Political Life of Abraham Lincoln Vol. II, 1849-1856, picks up where A Self-Made Man leaves off, and that in turn is followed by All the Powers of Earth. All form a single unbroken narrative of politics and power, something that happens to fit with my growing affinity for political biography, as exemplified by David O. Stewart’s George Washington: The Political Rise of America’s Founding Father, Jon Meacham’s Thomas Jefferson: The Art of Power, and Franklin D. Roosevelt: A Political Life, by Robert Dallek. Blumenthal, of course, takes this not only to a whole new level, but to an entirely new dimension.

For more recent times, the best of the best in this genre appears in works by historian Rick Perlstein (author of Nixonland and Reaganland), who also happens to be the guy who recommended Blumenthal to me. In the pages of Perlstein’s Reaganland, Jimmy Carter occupies center stage far more than Ronald Reagan, since without Carter’s failed presidency there never could have been a President Reagan. Similarly, Blumenthal cedes a good deal of Lincoln’s spotlight to Stephen A. Douglas, Lincoln’s longtime rival and the most influential doughface of his time. Many have dubbed John C. Calhoun the true instigator of the process that led to Civil War a decade after his death. And while that reputation may not be undeserved, it might be overstated. Calhoun, a southerner who celebrated slavery, championed nullification, and normalized notions of secession, could indeed be credited with paving the road to disunion. But, as Blumenthal skillfully reveals, maniacally gripping the reins of the wagon that, in a confluence of unintended consequences, was to hurtle towards both secession and war was the undersized, racist, alcoholic, bombastic, narcissistic, ambitious, pro-slavery but pro-union northerner Stephen A. Douglas, the so-called “Little Giant.”

Like Calhoun, Douglas was self-serving and opportunistic, with a talent for constructing an ideological framework for issues that suited his purposes. But unlike Calhoun, Douglas was a northern man who, although he often served their interests, was never accepted nor entirely trusted by the southern elite he toadied to in his cyclical, unrequited hopes that they would back his presidential ambitions. Such support never materialized.

It may not have been clear at the time, and the history books tend to overlook it, but Blumenthal demonstrates that it was the rivalry between Douglas and Lincoln that truly defined the struggles and outcomes of the age. It was Douglas who—undeterred by the failed efforts of Henry Clay—shepherded through the Compromise of 1850, which included the Fugitive Slave Act that was such anathema to the north. More significantly, the 1854 Kansas-Nebraska Act that repealed the Missouri Compromise was Douglas’s brainchild, and Douglas continued to champion his doctrine of “popular sovereignty” even after Taney’s ruling in Dred Scott invalidated it. It was Douglas’s fantasy that he alone could unite the states of north and south, even as the process of fragmentation was well underway, a course he himself surely if inadvertently set in motion. Douglas tried to be everyone’s man, and in the end he was no one’s. Throughout all of this, over many years, Blumenthal argues, Lincoln—out of elective office but hardly a bystander—followed Douglas. Lincoln’s election brought secession, but if a sole agent were to be named for fashioning the circumstances that ignited the Civil War, that discredit would surely go to Douglas, not Lincoln.

These two volumes combined well exceed a thousand pages, not including copious notes and back matter, so no review can appropriately capture it all, except to say that collectively they represent a magnificent achievement that succeeds in treating the reader to what the living Lincoln was like while recreating the era that defined him. Indeed, including his first book, I have thus far read nearly sixteen hundred pages of Blumenthal’s Lincoln and my attention has never wavered. Only Robert Caro—with his Shakespearean multi-volume biography of Lyndon Johnson—has managed to hold my interest as long as Blumenthal. And I can’t wait for the next two in the series to hit the press! To date, more than fifteen thousand books have been published about Abraham Lincoln, so there are many to choose from. Still, these from Blumenthal are absolutely required reading.


I reviewed Blumenthal’s first volume, A Self-Made Man, here:  Review of: A Self-Made Man: The Political Life of Abraham Lincoln Vol. I, 1809–1849, by Sidney Blumenthal

I reviewed Rick Perlstein’s Reaganland here:  Review of: Reaganland: America’s Right Turn 1976-1980, by Rick Perlstein

Featured

Review of: The Patriarchs: The Origins of Inequality, by Angela Saini

“Down With the Patriarchy” is both a powerful rallying cry and a fashionable emblem broadcast in memes, coffee mugs, tee shirts—and even, paired with an expletive, sung aloud in a popular Taylor Swift anthem. But what exactly is the patriarchy? Is it, as feminists would have it, a reflection of an entrenched system of male domination that defines power relationships between men and women in arenas public and private? Or, as some on the right might style it, a “woke” whine of victimization that downplays the equality today’s women have achieved at home and at work? Regardless, is male dominance simply the natural order of things, born out of traditional gender roles in hunting and gathering, reaping and sowing, sword-wielding and childbearing? Or was it—and does it remain—an artificial institution imposed from above and stubbornly preserved? Do such patterns run deep into human history, or are they instead the relatively recent by-products of agriculture, of settled civilization, of states and empires? Did other lifeways once exist? And finally, perhaps most significantly, does it have to be this way?

A consideration of these and other related questions, both practical and existential, forms the basis of The Patriarchs: The Origins of Inequality, an extraordinary tour de force by Angela Saini marked by both a brilliant gift for analysis and an extremely talented pen. Saini, a science journalist and author of the groundbreaking, highly acclaimed Superior: The Return of Race Science, one-ups her own prior achievements by widening the lens on entrenched inequalities in human societies to look beyond race as a factor, a somewhat recent phenomenon in the greater scheme of things, to that of gender, which—at least on the face of it—seems far more ancient and deep-seated.

To that end, in The Patriarchs Saini takes the reader on a fascinating expedition to explore male-female relationships—then and now—ranging as far back as the nearly ten-thousand-year-old proto-city Çatalhöyük in present-day Turkey, where some have suggested that female deities were worshipped and matriarchy may have been the status quo, and flashing forward to the still ongoing protests in Iran, sparked by the death in custody of a 22-year-old woman detained for wearing an “improper” hijab. There are many stops in between, including the city-states of Classical Greece, which saw women controlled and even confined by their husbands in democratic Athens, yet celebrated for their strength and independence (of a sort) in the rigidly structured autocracy that defined the Spartan polis.

But most of the journey is contemporary and global in scope, from Seneca Falls, New York, where many Onondaga Native American women continue to enjoy a kind of gender equality that white American women could hardly imagine when they launched their bid for women’s rights in that locale in 1848, to the modern-day states of Kerala and Meghalaya in India, which still retain deeply rooted traditions of the matrilineal and the matriarchal, respectively, in a nation where arranged marriages remain surprisingly common. And to Afghanistan, where the recently reinstalled Taliban regime prohibits the education of girls and mandates the wearing of a burqa in public, and Ethiopia, where in many parts of the country female genital mutilation is the rule, not the exception. There are even interviews with European women who grew up in the formerly socialist eastern bloc, some of whom look back wistfully to a time marked by better economic security and far greater opportunities for women, despite the repression that otherwise characterized the state.

I’m a big fan of Saini’s previous work, but still I cracked the cover of her latest book with some degree of trepidation. This is, after all, such a loaded topic that it could, if mishandled, too easily turn to polemic.  So I carefully sniffed around for manifesto-disguised-as-thesis, for axes cleverly cloaked from grinding, for cherry-picked data, and for broad brushes. (Metaphors gleefully mixed!) Thankfully, there was none of that. Instead, she approaches this effort throughout as a scientist, digging deep, asking questions, and reporting answers that sometimes are not to her liking. You have to respect that. My background is history, a study that emphasizes complexity and nuance, and mandates both careful research and an analytical evaluation of relevant data. Both science and history demand critical thinking skills. In The Patriarchs, Saini demonstrates that she walks with great competence in each of these disciplines.

A case in point is her discussion of Çatalhöyük, an astonishing neolithic site first excavated by English archaeologist James Mellaart in the late 1950s that revealed notable hallmarks of settled civilization uncommon for its era. Based on what he identified as figurines of female deities, such as the famous Seated Woman of Çatalhöyük that dates back to 6000 BCE, Mellaart claimed that a “Mother Goddess” culture prevailed. The notion that goddesses once dominated a distant past was dramatically boosted by Lithuanian archaeologist and anthropologist Marija Gimbutas, who wrote widely on this topic, and argued as well that a peaceful, matriarchal society was characteristic to the neolithic settlements of Old Europe prior to being overrun by Indo-European marauders from the north who imposed a warlike patriarchy upon the subjugated population.

I squirmed a bit in my seat as I read this, knowing that the respective conclusions of both Mellaart and Gimbutas have since been largely discredited as wildly overdrawn, based upon more rigorous studies. But there was no need for such concerns, for in subsequent pages Saini herself points to current experts and the scholarly consensus to rebut at least some of the bolder assertions of these earlier archaeologists. It turns out that in both Çatalhöyük and Old Europe, while society was probably not hierarchical, it was likely more gender-neutral than matriarchal. The author should be commended for her exhaustive research. While reading of Indo-European invaders—something Gimbutas got right—my thoughts went instantly to David Anthony’s magnificent study, The Horse, the Wheel, and Language: How Bronze-Age Riders from the Eurasian Steppes Shaped the Modern World, which I read some years back. When I thumbed ahead to the “Notes,” I was delighted to find a citation for the Anthony book!

It is soon clear that in her search for the origins of inequality, Saini’s goal is to ask questions rather than to insist upon answers. Also increasingly evident is that even if it seems to have become more common in recent centuries, patriarchy is not the norm. No, it doesn’t have to be this way. Perhaps matriarchy did not characterize Çatalhöyük—and we really can’t be certain—but there is evidence for matriarchal societies elsewhere; some still flourish to this day. History and events in the current millennium demonstrate that there are choices, and societies can—and we can—choose equality rather than a condition where one group is dominated by another based upon race, caste, or gender.

With all of the author’s questions and her search for answers, however, it is the journey that is most enjoyable. In such an expansive work of science, history, and philosophy, the narrative never bogs down. And while the scope is vast, it is only a couple of hundred pages. I actually found myself wanting more.

If there is one area where I would caution Saini, it is in her treatment of ancient Greece. Yes, based upon the literature, Athenian women seem to have been stifled and Spartan women less inhibited, but of the hundreds of poleis that existed in the Classical period, we really only have relevant information for a few; surviving data is weighted heavily towards the elites of Athens and Sparta, and much of it is tarnished by editorializing on both sides that reflected the antipathy between these two bitter rivals. There is more to the story. Aspasia, the mistress of the Athenian statesman Pericles, was a powerful figure in her own right. Lysistrata, the splendid political satire created by the Athenian Aristophanes, smacks of a kind of ancient feminism as it has women on both sides of the Peloponnesian War denying sex to their men until a truce is called. This play could never have resonated if the female characters were wholly imagined. And while we can perhaps admire the status of a Spartan woman when juxtaposed with her Athenian counterpart, we must remember that their primary role in that rigid, militaristic society was to bear the sons of warriors.

But the station of a Spartan woman raises an interesting counterintuitive question that I had hoped Saini would explore. Why was it—and does it remain the case—that women seem to gain greater freedom in autocratic states than in democratic ones? It is certainly anachronistic to style fifth-century Sparta as totalitarian, but the structure of the state seems to have far more in common with the twentieth-century Soviet Union and the People’s Republic of China, where despite repression women achieved far greater equality than they did in Athens or, at least until very recently, in Europe and the United States. And I really wanted a chapter on China, where the crippling horror of foot-binding for girls was not abolished until 1912, and still lingered in places until the communist takeover mid-century. Mao was responsible for the wanton slaughter of millions, yet women attained a greater equality under his brutal regime than they had for the thousands of years that preceded him.

While she touches upon it, I also looked for a wider discussion of how conservative women can sometimes come not only to represent the greatest obstacle to women’s rights but to advance rather than impede the patriarchy. As an American, I find many painful reminders of that here, where in decades past the antifeminist Phyllis Schlafly nearly single-handedly derailed passage of the Equal Rights Amendment. Most recently, it was a coalition of Republican and Christian evangelical women who led the crusade that eventually succeeded in curbing abortion rights. But then, as I wished for another hundred pages to go over all this, Saini summed up the incongruity succinctly in a discussion of female genital mutilation in Africa, citing the resistance to change by an Ethiopian girl who asserted: “If our mothers should refuse to continue cutting us, we will cut ourselves.” [p191]

In the end, Saini’s strategy was sound. The Patriarchs boasts a manageable size and the kind of readability that might be sacrificed in a bulkier treatise. The author doesn’t try to say it all: only what is most significant. Also, both the length and the presentation lend appeal to a popular audience, while the research and extensive notes will suit an academic one, as well. That is an especially rare accomplishment these days!

Whatever preconceived notions the reader might have, based upon the title and its implications, Saini demonstrates again and again that it’s not her intention to prove a point, but rather to make you think. Here she succeeds wonderfully. And you get the impression that it is her intellectual curiosity that guides her life. Born in London of ethnic Indian parents and now residing in New York City, she is a highly educated woman with brown skin, feet that can step comfortably into milieus west and east, and an insightful mind that fully embraces the possibilities of the modern world. Thus, Saini is in so many ways ideally suited to address issues of racism and sexism. She is still quite young, and this is her fourth book. I suspect there will be many more. In the meantime, read this one. It will be well worth your time.

 

Note: This review was based upon an Uncorrected Page Proof edition

Note: I reviewed Saini’s previous book here: Review of: Superior: The Return of Race Science, by Angela Saini

Featured

Review of: Yippie Girl: Exploits in Protest and Defeating the FBI, by Judy Gumbo

The typical American family of 1968 sitting back to watch the nightly news on their nineteen-inch televisions could be excused for sometimes gripping their armrests as events unfolded before them—for most in living color, but for plenty of others still on the familiar black-and-white sets rapidly going extinct. (I was eleven: we had a color TV!) The first seven months of that year were especially tumultuous.

There was January’s spectacular Tet Offensive across South Vietnam, which, while ultimately unsuccessful, stunned a nation still mostly deluded by assurances from Lyndon Johnson’s White House that the war was going according to plan. Then in February, the South Carolina highway patrol opened fire on unarmed black Civil Rights protestors on the state university campus, leaving three dead and more than two dozen injured in what was popularly called the “Orangeburg Massacre.” In March, a shaken LBJ announced in a live broadcast that he would not seek reelection. In April, Martin Luther King Jr. was assassinated, sparking riots in cities across the country. Only two days later, a fierce firefight erupted between the Oakland police and Black Panther Party members Eldridge Cleaver and “Lil’ Bobby” Hutton, which left two officers injured, Hutton dead, and Cleaver in custody; some reports maintain that seventeen-year-old Hutton was executed by police after he surrendered. Later that same month, hundreds of antiwar students occupied buildings on Columbia University’s campus until the New York City police violently broke up the demonstration, beating and arresting protesters. In May, Catholic activists known as the Catonsville Nine removed draft files from a Maryland draft board and set them ablaze in the parking lot. In June, Robert F. Kennedy was assassinated. In July, what became known as the “Glenville Shootout” saw black militants engaged in an extended gunfight with police in Cleveland, Ohio that left seven dead.

In August, just days before the streets outside the arena hosting the Democratic National Convention deteriorated into violent battles between police and demonstrators that later set the stage for the famous trial of the “Chicago Seven,” a group of Yippies—members of the Youth International Party that specialized in pranks and street theatre—were placed under arrest by the Chicago police while in the process of nominating a pig named “Pigasus” for president. In addition to Pigasus, those taken into custody included Yippie organizer Jerry Rubin, folk singer Phil Ochs, and activist Stew Albert. Present but not detained was Judy Gumbo, Stew’s girlfriend and a feminist activist in her own right.

Known for their playful anarchy, the Yippies were dismissed by many leaders of the New Left as “Groucho Marxists,” but for some reason the FBI, convinced they were violent insurrectionists intent on the overthrow of the United States government, became obsessed with the group, placing them under intensive surveillance that lasted for years to come. A 1972 notation in Gumbo’s FBI files declared, without evidence, that she was “the most vicious, the most anti-American, the most anti-establishment, and the most dangerous to the internal security of the United States.” She was later to obtain copies of these files, which served as a kind of enormously valuable diary of events for her 2022 memoir, Yippie Girl: Exploits in Protest and Defeating the FBI, a well-written if sometimes uneven account of her role in and around an organization at the vanguard of the potent political radicalism that swept the country in the late sixties and early seventies.

Born Judy Clavir in Toronto, Canada, she grew up a so-called “red diaper baby,” the child of rigidly ideological pro-Soviet communists. She married young and briefly, to actor David Hemblen, then fled his unfaithfulness to start a new life in Berkeley, California, in the fall of 1967, in the heyday of the emerging counterculture, and soon fell in with activists who ran in the same circles as her new boyfriend, Stew Albert. Albert’s best friends were Black Panther Eldridge Cleaver and Yippie founder Jerry Rubin. She squirmed when Cleaver referred to her as “Mrs. Stew,” insisting upon her own identity, until one day Eldridge playfully dubbed her “Gumbo”—since “gumbo goes with stew.” Ever after she was known as Judy Gumbo.

Gumbo took a job writing copy for a local newspaper, while becoming more deeply immersed in activism as a full-fledged member of the Yippies. As such, those in her immediate orbit were some of the most consequential members of the antiwar and Black Power movements, which sometimes overlapped, including Abbie and Anita Hoffman, Nancy Kurshan, Paul Krassner, Phil Ochs, William Kunstler, David Dellinger, Timothy Leary, Kathleen Cleaver, and Bobby Seale. She describes the often-immature jockeying for leadership that occurred between rivals Abbie Hoffman and Jerry Rubin, which also underscored her frustration in general with ostensibly enlightened left-wing radicals who nevertheless casually asserted male dominance in every arena—and fueled her increasingly strident brand of feminism. She personalized the Yippie exhortation to “rise up and abandon the creeping meatball”—which means to conquer fear by turning it into an act of defiance and deliberately doing exactly what you most fear—by leaving her insecurities behind, as well as her reliance on other people, to grow into an assertive, take-no-prisoners, independent feminist woman with no regrets. How she achieves this is the journey motif of her life and this memoir.

Gumbo’s behind-the-scenes anecdotes, culled from years of close contact with such a wide assortment of sixties notables, are the most valuable part of Yippie Girl. There is no doubt that her ability to consult her FBI files—even if these contained wild exaggerations about her character and her activities—refreshed her memories of those days, more than a half century past, which lends authenticity to the book as a kind of primary source for life among Yippies, Panthers, and fellow revolutionaries of the time. And she successfully puts you in the front seat, with her, as she takes you on a tour of significant moments in the movement and in its immediate periphery in Berkeley, Chicago, and New York. Her style, if not elegant, is highly readable, an accomplishment that merits mention in a review of any author’s work.

The weakest part of the book is her unstated insistence on making herself the main character in every situation, which betrays an uncomfortable narcissism that the reader suspects had negative consequences in virtually all of her relationships with both allies and adversaries. Yes, it is her memoir. Yes, her significance in the movement deserves—and has to some degree been denied by history—the kind of notoriety accorded to those who after all became household names, like Abbie Hoffman and Jerry Rubin. But the reality is that she was never in a top leadership role. She was not arrested with Pigasus. She was not put on trial with the Chicago 7. You can detect in the narrative that she wishes she had been.

This aspect of her personality makes her a less sympathetic figure than she should be as a committed activist tirelessly promoting peace and equality while being unfairly hounded by the FBI. But she carries something else unpleasant around with her that is unnerving: an allegiance to her cause and herself that boasts a kind of ruthless naïveté that rejects correction when challenged either by reality or morality. She condemns Cleaver’s infidelity to his wife, but abandons Stew for a series of random affairs, most notably with a North Vietnamese diplomat who happens to be married. She personally eschews violence, but cheers the Capitol bombing by the Weathermen, domestic terrorists who splintered from Students for a Democratic Society (SDS).

To oppose the unjust U.S. intervention in Vietnam and decry the millions of lives lost across Southeast Asia was certainly an honorable cause, worthy of respect, then and now. But this red diaper baby never grew up: her vision of the just and righteous was distinguished by her admiration of oppressive, totalitarian regimes in the Soviet Union, North Korea, Cuba—and North Vietnam. As with too many in the antiwar movement, her opposition to Washington’s involvement in the Vietnam War strangely morphed into a distorted veneration for Hanoi. There may indeed have been much to condemn about the America of that era in the realm of militarism, imperialism, and inequality, but that hardly justified—then or now—championing communist dictatorships on the other side known for regimes marked by repression and sometimes even terror.

Gumbo visited most of these repressive states that she supported, including North Vietnam. She reveals that while there she settled into the seat of a Russian antiaircraft machine gun much like the one Jane Fonda later sat in. Fonda, branded a traitor by the right, later lamented that move, and publicly admitted it. Gumbo will have none of it: “I have never regretted looking through those gun sights,” she proudly asserts [p203]. She still celebrates the reunification of Vietnam, while ignoring its aftermath. Her stubborn allegiance to ideology over humanity, and her utter inability to evolve as a person, further point to her inherent narcissism. She is never wrong. She is always right. Just ask her, she’ll tell you so.

Yippie Girl also lacks a greater context that would make it more accessible to a wider audience. The author assumes the reader is well aware of the climate of extremism that often characterized the United States in the sixties and seventies—like the litany of news events of the first half of 1968 that opened this review—when in fact for most Americans today those days likely seem like accounts from another planet in another dimension. I would have loved to see Gumbo write a bigger book that wasn’t just about her and her community. At the same time, if you are a junkie for American political life back in a day that makes today’s polarization seem tame by comparison, when youth activism ruled, I would recommend you read Gumbo’s book. I suspect that whether you end up liking or detesting her in the end, she will still crave the attention.

NOTE: This book was obtained as part of an Early Reviewers program

Featured

Review of: The Reopening of the Western Mind: The Resurgence of Intellectual Life from the End of Antiquity to the Dawn of the Enlightenment, by Charles Freeman

Nearly two decades have passed since Charles Freeman published The Closing of the Western Mind: The Rise of Faith and the Fall of Reason, a brilliant if controversial examination of the intellectual totalitarianism of Christianity that dated to the dawn of its dominance of Rome and the successor states that followed the fragmentation of the empire in the West. Freeman argued persuasively that the early Christian church vehemently and often brutally rebuked the centuries-old classical tradition of philosophical enquiry and ultimately drove it to extinction with a singular intolerance of competing ideas crushed under the weight of a monolithic faith. Not only were pagan religions prohibited, but there would be virtually no provision for any dissent from official Christian doctrine, such that those who advanced even the most minor challenges to interpretation were branded heretics and sent into exile or put to death. That tragic state was to define medieval Europe for more than a millennium.

Now the renowned classical historian has returned with a follow-up epic, The Reopening of the Western Mind: The Resurgence of Intellectual Life from the End of Antiquity to the Dawn of the Enlightenment (a revised and repolished version of The Awakening: A History of the Western Mind AD 500-1700, previously published in the UK), which recounts the slow—some might brand it glacial—evolution of Western thought that restored legitimacy to independent examination and analysis, and that eventually led to a celebration, albeit a cautious one, of reason over blind faith. In the process, Freeman reminds us that quality, engaging narrative history has not gone extinct, demonstrating that it is possible to produce a work so well written that it is readable by a general audience while meeting the rigorous standards of scholarship demanded by academia. That this is no small achievement will be evident to anyone who—as I do—reads both popular and scholarly history and is struck by the stultifying prose that often typifies the academic. In contrast, here Freeman takes a skillful pen to reveal people, events, and occasionally obscure concepts, much of which may be unfamiliar to those who are not well versed in the medieval period.

Full disclosure: Charles Freeman and I began a long correspondence via email following my review of Closing. I was honored when he selected me as one of his readers for his drafts of Awakening, the earlier UK edition of this work, which he shared with me in 2018—but at the same time I approached this responsibility with some trepidation: given Freeman’s credentials and reputation, what if I found the work to be sub-standard? What if it was simply not a good book? How would I address that? As it was, these worries turned out to be misplaced. It is a magnificent book, and I am grateful to have read much of it as a work in progress, and then again after publication. I did submit several pages of critical commentary to assist the author, to the best of my limited abilities, in honing a better final product, and to that end I am proud to see my name appear in the “Acknowledgments.” But to be clear: I am an independent reviewer and did not receive compensation for this review.

The fall of Rome remains a subject of debate for historians. While traditional notions of sudden collapse given to pillaging Vandals leaping over city walls and fora engulfed in flames have long been revised, competing visions of a more gradual transition that better reflect the scholarship sometimes distort the historiography to minimize both the fall and what was actually lost. And what was lost was indeed dramatic and incalculable. If, to take just one example, sanitation can be said to be a mark of civilization, the Roman aqueducts and complex network of sewers that fell into disuse and disrepair meant that fresh water was no longer reliable, and sewage that bred pestilence was to be the norm for fifteen centuries to follow. It was not until the late nineteenth century that sanitation in Europe even approached Roman standards. So, whatever the timeline—rapid or gradual—there was indeed a marked collapse. Causes are far more elusive. But Gibbon’s largely discredited casting of Christianity as the villain that brought the empire down tends to raise hackles in those who suspect someone like Freeman of attempting to point those fingers once more. Freeman, however, has nothing to say about why Rome fell, only what followed. The loss of the pursuit of reason was to be as devastating for the intellectual health of the post-Roman world in the West as sanitation was to prove for its physical health. And here Freeman does squarely take aim at the institutional Christian church as the proximate cause for the subsequent consequences for Western thought. This is well-underscored in the bleak assessment that follows in one of the final chapters in The Closing of the Western Mind:

Christian thought that emerged in the early centuries often gave irrationality the status of a universal “truth” to the exclusion of those truths to be found through reason. So the uneducated was preferred to the educated and the miracle to the operation of natural laws … This reversal of traditional values became embedded in the Christian tradition … Intellectual self-confidence and curiosity, which lay at the heart of the Greek achievement, were recast as the dreaded sin of pride. Faith and obedience to the institutional authority of the church were more highly rated than the use of reasoned thought. The inevitable result was intellectual stagnation …

Reopening picks up where Closing leaves off, but the new work is marked by far greater optimism. Rather than dwell on what has been lost, Freeman puts focus not only upon the recovery of concepts long forgotten but also upon how rediscovery eventually sparked new, original thought, as the spiritual and later increasingly secular world danced warily around one another—with a burning heretic all too often staked between them on Europe’s fraught intellectual ballroom. Because the timeline is so long—encompassing twelve centuries—the author sidesteps what could have been a dull chronological recounting of this slow progression to narrow his lens upon select people, events, and ideas that collectively marked milestones along the way, gathered into thematic chapters that broaden the scope. This approach thus transcends what might have been otherwise parochial to brilliantly convey the panoramic.

There are many superlative chapters in Reopening, including the very first one, entitled “The Saving of the Texts 500-750.” Freeman seems to delight in detecting the bits and pieces of the classical universe that managed to survive not only vigorous attempts by early Christians to erase pagan thought but the unintended ravages of deterioration that is every archivist’s nightmare. Ironically, the sacking of cities in ancient Mesopotamia begat conflagrations that baked inscribed clay tablets, preserving them for millennia. No such luck for the Mediterranean world, where papyrus scrolls, the favored medium for texts, fell to war, natural disasters, deliberate destruction, as well as to entropy—a familiar byproduct of the second law of thermodynamics—which was not kind in prevailing environmental conditions. We are happily still discovering papyri preserved by the dry conditions in parts of Egypt—the oldest dating back to 2500 BCE—but it seems that the European climate doomed papyrus to a scant two hundred years before it was no more.

Absent printing presses or digital scans, texts were preserved by painstakingly copying them by hand, typically onto vellum, a kind of parchment made from animal skins with a long shelf life, most frequently in monasteries by monks for whom literacy was deemed essential. But what to save? The two giants of ancient Greek philosophy, Plato and Aristotle, were preserved, but the latter far more grudgingly. Fledgling concepts of empiricism in Aristotle made the medieval mind uncomfortable. Plato, on the other hand, who pioneered notions of imaginary higher powers and perfect forms, could be (albeit somewhat awkwardly) adapted to the prevailing faith in the Trinity, and thus elements of Plato were syncretized into Christian orthodoxy. Of course, as we celebrate what was saved it is difficult not to likewise mourn what was lost to us forever. Fortunately, the Arab world put a much higher premium on the preservation of classical texts—an especially eclectic collection that included not only metaphysics but geography, medicine, and mathematics. When centuries later—as Freeman highlights in Reopening—these works reached Europe, they were to be instrumental as tinder to the embers that were to spark first a revival and then a revolution in science and discovery.

My favorite chapter in Reopening is “Abelard and the Battle for Reason,” which chronicles the extraordinary story of the scholastic philosopher Peter Abelard (1079-1142)—who flirted with the secular and attempted to connect rationalism with theology—told against the flamboyant backdrop of Abelard’s tragic love affair with Héloïse, a tale that yet remains the stuff of popular culture. In a fit of pique, Héloïse’s uncle was to have Abelard castrated. The church attempted something similar, metaphorically, with Abelard’s teachings, which led to an order of excommunication (later lifted), but despite official condemnation Abelard left a dramatic mark on European thought that long lingered.

There is too much material in a volume this thick to cover competently in a review, but the reader will find much of it well worth the time. Of course, some will be drawn to certain chapters more than others. Art historians will no doubt be taken with the one entitled “The Flowering of the Florentine Renaissance,” which for me hearkened back to the best elements of Kenneth Clark’s Civilisation, showcasing not only the evolution of European architecture but the author’s own adulation for both the art and the engineering feat demonstrated by Brunelleschi’s dome, the extraordinary fifteenth-century adornment that crowns the Florence Cathedral. Still, Freeman does temper his praise for such achievements with juxtaposition to what once had been, as in a later chapter that recounts the process of relocating an ancient Egyptian obelisk weighing 331 tons that had been placed on the Vatican Hill by the Emperor Caligula, a feat seen as remarkable at the time. In a footnote, Freeman reminds us that: “One might talk of sixteenth-century technological miracles, but the obelisk had been successfully erected by the Egyptians, taken down by the Romans, brought by sea to Rome and then re-erected there—all the while remaining intact!”

If I were to find a fault with Reopening, it is that it does not, in my opinion, go far enough to emphasize the impact of the Columbian Experience on the reopening of the Western mind. There is a terrific chapter devoted to the topic, which explores how the discovery of the Americas and its exotic inhabitants compelled the European mind to examine other human societies whose existence had never before even been contemplated. While that is a valid avenue for analysis, it yet hardly takes into account just how earth-shattering 1492 turned out to be—arguably the most consequential milestone for human civilization (and the biosphere!) since the first cities appeared in Sumer—in a myriad of ways, not least the exchange of flora and fauna (and microbes) that accompanied it. But this significance was perhaps greatest for Europe, which had been a backwater, long eclipsed by China and the Arab Middle East. It was the Columbian Experience that reoriented the center of the world, so to speak, from the Mediterranean to the Atlantic, which was exploited to the fullest by the Europeans who prowled those seas and first bridged the continents. It is difficult to imagine the subsequent accomplishments—intellectual and otherwise—had Columbus not landed at San Salvador. But this remains just a quibble that does not detract from Freeman’s overall accomplishment.

Interest in the medieval world has perhaps waned over time, but that is, of course, a mistake: how we got from point A to point B is an important story, even if it has never been told before as well as Freeman has told it in Reopening. And it is not an easy story to tell. As the author acknowledges in a concluding chapter: “Bringing together the many different elements that led to the ‘reopening of the western mind’ is a challenge. It is important to stress just how bereft Europe was, economically and culturally, after the fall of the Roman empire compared to what it had been before …”

Those of us given to dystopian fiction, concerned with the fragility of republics and civilization, and perhaps wondering aloud in the midst of an ongoing global pandemic and the rise of authoritarianism what our descendants might recall of us if it all fell to collapse tomorrow, cannot help but be intrigued by how our ancestors coped—for better or for worse—after Rome was no more. If you want to learn more about that, there might be no better covers to crack than Freeman’s The Reopening of the Western Mind. I highly recommend it.

 

NOTE: Portions of this review also appear in my review of The Awakening: A History of the Western Mind AD 500-1700, by Charles Freeman, previously published in the UK, here: Review of: The Awakening: A History of the Western Mind AD 500-1700, by Charles Freeman

NOTE: My review of The Closing of the Western Mind: The Rise of Faith and the Fall of Reason, by Charles Freeman, is here:  Review of: The Closing of the Western Mind: The Rise of Faith and the Fall of Reason by Charles Freeman

Featured

Review of: Cloud Cuckoo Land, by Anthony Doerr

Ὦ ξένε, ὅστις εἶ, ἄνοιξον, ἵνα μάθῃς ἃ θαυμάζεις

“Stranger, whoever you are, open this to learn what will amaze you.”

This passage, in ancient Greek and in translation, is the key to Cloud Cuckoo Land, a big, ambitious, complicated novel by Anthony Doerr, the latest from the author of the magnificent, Pulitzer Prize-winning All the Light We Cannot See (2014). Classicists will recognize “Cloud Cuckoo Land” as borrowed from The Birds, the 414 BCE comedy by the Athenian satirist Aristophanes: a city in the sky constructed by birds that later became synonymous with any kind of fanciful world. In this case, Cloud Cuckoo Land serves as the purported title of a long-lost ancient work by Antonius Diogenes, rediscovered as a damaged but partially translatable codex in 2019, that relates the tale of Aethon, a hapless shepherd who transforms into a donkey, then into a fish, then into a crow, in a quest to reach that utopian city in the clouds. It serves as well as the literary glue that binds together the narrative and the central protagonists of Doerr’s novel.

There is the octogenarian Zeno, self-taught in classical Greek, who has translated the fragmentary codex and adapted it into a play that is to be performed by fifth graders in the public library located in Lakeport, Idaho in 2020.  Lurking in the vicinity is Seymour, an alienated teen with Asperger’s, flirting with eco-terrorism. And hundreds of years in the past, there is also the thirteen-year-old Anna, who has happened upon that same codex in Constantinople, on the eve of its fall to the Turks. Among the thousands of besiegers outside the city’s walls is Omeir, a harelipped youngster who with his team of oxen was conscripted to serve the Sultan in the cause of toppling the Byzantine capital. Finally, there is Konstance, fourteen years old, who has lived her entire life on the Argos, a twenty-second century spacecraft destined for a distant planet; she too comes to discover “Cloud Cuckoo Land.”

Alternating chapters, some short, others far longer, tell the stories of each protagonist, in real time or through flashbacks. For the long-lived Zeno, readers follow his hardscrabble youth, his struggle with his closeted homosexuality, his stint as a POW in the Korean War, and his long love affair with the language of the ancient Greeks. We observe how an uncertain and frequently bullied Seymour reacts to the destruction of wilderness and wildlife in his own geography. We watch the rebellious Anna abjure her work as a lowly seamstress to clandestinely translate the codex. We learn how the disfigured-at-birth Omeir is at first nearly left to die, then exiled along with his family because villagers believe he is a demon. We see Konstance, trapped in quarantine in what appears to be deep space, explore the old earth through an “atlas” in the ship’s library.

Cloud Cuckoo Land is by turns fascinating and captivating, but sometimes—unfortunately—also dull. There are not only the central protagonists to contend with, but also a number of secondary characters in each of their respective orbits, as well as the multiple timelines spanning centuries, so there is much to keep track of. I recall being so spellbound by All the Light We Cannot See that I read its entire 500-plus pages over a single weekend. This novel, much longer, did not hook me with a similar force. I found it a slow build: my enthusiasm tended to simmer rather than surge. Alas, I wanted to care about the characters far more than I did. Still, the second half of the novel is a much more exciting read than the first portion.

Science—in multiple disciplines—is often central to a Doerr novel. That was certainly the case in All the Light We Cannot See, as well as in his earlier work, About Grace. In Cloud Cuckoo Land, in contrast, science—while hardly absent—takes a backseat. The sci-fi in the Argos voyage is pretty cool, but hardly the stuff of Asimov or Heinlein. And Seymour’s science of climate catastrophe strikes as little more than an afterthought in the narrative.

Multiple individuals with lives on separate trajectories centuries apart, whose exploits resonate with larger and often overlapping themes, reminded me at first of another work with a cloud in its title: Cloud Atlas, by David Mitchell. But Cloud Cuckoo Land lacks the spectacular brilliance of that novel, which manages to take your very breath away. It also falls short of the depth and intricacy that power Doerr’s All the Light We Cannot See. And yet … and yet … I ended up really enjoying the book, even shedding a tear or two in its final pages. So there’s that. In the final analysis, Doerr is a talented writer, and if this is not his finest work, it remains well worth the read.

I have reviewed other novels by Anthony Doerr here:

Review of: All the Light We Cannot See, by Anthony Doerr

Review of: About Grace, by Anthony Doerr

Featured

Review of: The Sumerians: Lost Civilizations, by Paul Collins

Reading “The Epic of Gilgamesh” in its entirety rekindled a long dormant interest in the Sumerians, the ancient Mesopotamian people that my school textbooks once boldly proclaimed as inventors not only of the written word, but of civilization itself! One of the pleasures of having a fine home library stocked with eclectic works is that there is frequently a volume near at hand to suit such inclinations, and in this case I turned to a relatively recent acquisition, The Sumerians, a fascinating and extremely well-written—if decidedly controversial—contribution to the Lost Civilizations series, by Paul Collins.

“The Epic of Gilgamesh” is, of course, the world’s oldest literary work: the earliest records of the five poems that form the heart of the epic were carved into Sumerian clay tablets that date back to 2100 BCE, and relate the exploits of the eponymous Gilgamesh, an actual historic king of the Mesopotamian city state Uruk circa 2750 BCE who later became the stuff of heroic legend. Most famously, a portion of the epic recounts a flood narrative nearly identical to the one reported in Genesis, making it the earliest reference to the Near East flood myth held in common by the later Abrahamic religions.

Uruk was just one of a number of remarkable city states—along with Eridu, Ur, and Kish—that formed urban and agricultural hubs between the Tigris and Euphrates rivers in what is today southern Iraq, between approximately 3500 and 2000 BCE, at a time when the Persian Gulf extended much further north, putting these cities very near the coast. Some archaeologists also placed “Ur of the Chaldees,” the city in the Hebrew Bible noted as the birthplace of the Israelite patriarch Abraham, in this vicinity, reinforcing the Biblical flood connection. A common culture—boasting the earliest system of writing, which recorded in cuneiform script a language isolate unrelated to any other; advances in mathematics that utilized a sexagesimal system; and the invention of both the wheel and the plow—came to be attributed to these mysterious non-Semitic people, dubbed the Sumerians.

But who were the Sumerians? They were completely unknown, notes the author, until archaeologists stumbled upon the ruins of their forgotten cities about 150 years ago. Collins, currently Curator for the Ancient Near East at the Ashmolean Museum*, University of Oxford, fittingly opens his work with the baked clay artifact known as a “prism” inscribed with the so-called Sumerian King List, circa 1800 BCE, currently housed in the Ashmolean Museum. The opening passage of the book reproduces the first lines of the Sumerian King List: “After the kingship descended from heaven, the kingship was in Eridu. In Eridu, Alulim became king; He ruled for 28,800 years.” Heady stuff.

“It is not history as we would understand it,” argues Collins, “but a combination of myth, legend and historical information.” This serves as a perfect metaphor for Collins’s thesis, which is that after a century and a half of archaeology and scholarship, we know less about the Sumerians—if such a structured, well-defined common culture ever even existed—and far more about the sometimes-spurious conclusions and even outright fictions that successive generations of academics and observers have attached to these ancient peoples.

Thus, Collins raises two separate if perhaps related issues that both independently and in tandem spark controversy. The first is the question of whether the Sumerians ever existed as a distinct culture, or whether—as the author suggests—scholars may have mistakenly woven a misleading tapestry out of scraps and threads in the archaeological record representing a variety of inhabitants within a shared geography, with material cultures that while overlapping were never of a single fabric. The second is how deeply woven into that same tapestry are distortions—some intended and others inadvertent—tailored to interpretations fraught with the biases of excavators and researchers determined to locate the Sumerians as uber-ancestors central to the myth of Western Civilization that tends to dominate the historiography. And, of course, if there is merit to the former, was it entirely the product of the latter, or were other factors involved?

I personally lack both the expertise and the qualifications to weigh in on the first matter, especially given that the author’s credentials include not only an association with Oxford’s School of Archaeology, but also the chairmanship of the British Institute for the Study of Iraq. Still, I will note in this regard that he makes many thought-provoking and salient points. As to the second, Collins is quite persuasive, and here far less authority is required of the reader.

Nineteenth century explorers and archaeologists—as well as their early twentieth century successors—were often drawn to this Middle Eastern milieu in a quest for concordance between Biblical references and excavations, which bred distortions in outcomes and interpretation. At the same time, a conviction that race and civilization were inextricably linked—to be clear, the “white race” and “Western Civilization”—determined that what was perceived as “advanced” was ordained at the outset for association with “the West.” We know that the leading thinkers of the Renaissance rediscovered the Greeks and Romans as their cultural and intellectual forebears, with at least some measure of justification, but later far more tenuous links were drawn to ancient Egypt—and, of course, later still, to Babylon and Sumer. Misrepresentations, deliberate or not, were exacerbated by the fact that the standards of professionalism characteristic to today’s archaeology were either primitive or nonexistent.

None of this should be news to students of history who have observed how the latest historiography has frequently discredited interpretations long taken for granted—something I have witnessed firsthand as a dramatic work in progress in studies of the American Civil War in recent decades: notably, although slavery was central to the cause of secession and war, for more than a century African Americans were essentially erased from the textbooks and barely acknowledged other than at the very periphery of the conflict, in what was euphemistically constructed as a sectional struggle among white men, north and south. It was a lie, but a lie that sold very well for a very long time, and still clings to those invested in what has come to be called “Lost Cause” mythology.

And yet it’s surprising, as Collins underscores, that what should long have been second-guessed about Sumer remains integral to far too much of what persists as current thinking. Whether the Sumerians are indeed a distinct culture or not, should those peoples more than five millennia removed from us continue to be artificially attached to what we pronounce Western Civilization? Probably not. And while we certainly recognize today that race is an artificial construct that conveys no information of importance about a people, ancient or modern, we can guess with some confidence that those indigenous to southern Iraq in 3500 BCE probably did not have the pale skin of a native of, say, Norway. We can rightfully assert that the people we call the Sumerians were responsible for extraordinary achievements that were later passed down to other cultures that followed, but an attempt to draw some kind of line from Sumer to Enlightenment-age Europe is shaky, at best.

As such, Collins’s book gives focus to what we have come to believe about the Sumerians, and why we should challenge that. I previously read (and reviewed) Egypt by Christina Riggs, another book in the Lost Civilizations series, which is preoccupied with how ancient Egypt has resonated for those who walked in its shadows, from Roman tourists to Napoleon’s troops to modern admirers, even if that vision little resembles its historic basis. Collins takes a similar tack but devotes far more attention to parsing out in greater detail exactly what is really known about the Sumerians and what we tend to collectively assume that we know. Of course, Sumer is far less familiar to a wider audience, and it lacks the romantic appeal of Egypt—there is no imagined exotic beauty like Cleopatra, only the blur of the distant god-king Gilgamesh—so the Sumerians come up far more rarely in conversation, and provoke far less passion, one way or the other.

The Sumerians is an accessible read for the non-specialist, and there are plenty of illustrations to enhance the text. Like other authors in the Lost Civilizations series, Collins deserves much credit for articulating sometimes arcane material in a manner that suits both a scholarly and a popular audience, which is by no means an easy achievement. If you are looking for an outstanding introduction to these ancient people that is neither too esoteric nor dumbed-down, I highly recommend this volume.

*NOTE: I recently learned that Paul Collins has apparently left the Ashmolean Museum as of the end of October 2022, and is now associated with the Middle East Department, British Museum.

The Sumerian King List Prism at the Ashmolean Museum can be viewed online here: The Sumerian King List Prism

More about “The Epic of Gilgamesh” can be found in my review here: Review of: Gilgamesh: A New English Version, by Stephen Mitchell

I reviewed other volumes in the Lost Civilizations series here:

Review of: The Indus: Lost Civilizations, by Andrew Robinson

Review of Egypt: Lost Civilizations, by Christina Riggs

Review of: The Etruscans: Lost Civilizations, by Lucy Shipley

Featured

Review of: The Passenger and Stella Maris, by Cormac McCarthy

Imagine if God—or Gary Larson—had an enormous mayonnaise jar at his disposal and stuffed it chock full of the collective consciousnesses of the greatest modern philosophers, psychoanalysts, neuroscientists, mathematicians, physicists, quantum theoreticians, and cosmologists … then lightly dusted it with a smattering of existential theologians, eschatologists, dream researchers, and violin makers, before tossing in a handful of race car drivers, criminals, salvage divers, and performers from an old-time circus sideshow … and next layered it with literary geniuses, heavy on William Faulkner and Ernest Hemingway with perhaps a dash of Haruki Murakami and just a smidge of Dashiell Hammett … before finally tossing in Socrates, or at least Plato’s version of Socrates, who takes Plato along with him because—love him or hate him—you just can’t peel Plato away from Socrates. Now imagine that giant jar somehow being given a shake or two before being randomly dumped into the multiverse, so that all the blended yet still unique components poured out into our universe as well as into multiple other hypothetical universes. If such a thing were possible, the contents that spilled forth might approximate The Passenger and Stella Maris, the pair of novels by Cormac McCarthy that has so stunned readers and critics alike that there is yet no consensus on whether to pronounce these works garbage or magnificent—or, for that matter, magnificent garbage.

The eighty-nine-year-old McCarthy, perhaps America’s greatest living novelist, released these companion books in 2022 after a sixteen-year hiatus that followed publication of The Road, the 2006 postapocalyptic sensation that explored familiar Cormac McCarthy themes in a very different genre, employing literary techniques strikingly different from his previous works, and in the process finding a whole new audience. The same might be said, to some degree, of the novel that preceded it just a year earlier, No Country for Old Men, another clear break from his past that was after all a radical departure for readers of, say, The Border Trilogy, and his magnum opus, Blood Meridian, which to my mind is not only a superlative work but truly one of the finest novels of the twentieth century.

Full disclosure: I have read all of Cormac McCarthy’s novels, as well as a play and a screenplay that he authored. To suggest that I am a fan would be a vast understatement. My very first McCarthy novel was The Crossing, randomly plucked from a grocery store magazine rack while on a family vacation. That was 2008. I inhaled the book and soon set out to read his full body of work. The Crossing is actually the middle volume in The Border Trilogy, preceded by All the Pretty Horses and followed by Cities of the Plain, which collectively form a near-Shakespearean epic of the American southwest and the Mexican borderlands in the mid-twentieth century, which yet retain a stark primitivism barely removed from the milieu of Blood Meridian, set a full century earlier. The author’s style, in these sagas and beyond, has at times been compared by critics with both Faulkner and Hemingway, both favorably and unfavorably, but McCarthy’s voice is distinctive, and hardly derivative. There is indeed the rich vocabulary of a Faulkner or a Styron, which adds a richness to the quality of the prose even as it challenges readers to sometimes seek out the dictionary app on their phones. There is also a magnificent use of the objective correlative, made famous by Hemingway and later in portions of the works of Gabriel García Márquez, which evokes powerful emotions from inanimate objects. For McCarthy, this often manifests in the vast, seemingly otherworldly geography of the southwest. McCarthy also frequently makes use of Hemingway’s polysyndetic syntax, which adds emphasis to sentences through a series of conjunctions. Most noticeable for those new to Cormac McCarthy is his omission of most traditional punctuation, such as quotation marks, which often improves the flow of the narrative even as it sometimes leads to a certain confusion in long dialogues between two characters that span several pages.

The Passenger opens with the prologue of a Christmas day suicide that must be recited in the author’s voice as an underscore to the beauty of his prose:

It had snowed lightly in the night and her frozen hair was gold and crystalline and her eyes were frozen cold and hard as stones. One of her yellow boots had fallen off and stood in the snow beneath her. The shape of her coat lay dusted in the snow where she’d dropped it and she wore only a white dress and she hung among the bare gray poles of the winter trees with her head bowed and her hands turned slightly outward like those of certain ecumenical statues whose attitude asks that their history be considered. That the deep foundation of the world be considered where it has its being in the sorrow of her creatures. The hunter knelt and stogged his rifle upright in the snow beside him … He looked up into those cold enameled eyes glinting blue in the weak winter light. She had tied her dress with a red sash so that she’d be found. Some bit of color in the scrupulous desolation. On this Christmas day.

With a poignancy reminiscent of the funeral of Peyton Loftis, also a suicide, in the opening of William Styron’s Lie Down in Darkness, the reader here encounters the woman we later learn is Alicia Western, one of the two central protagonists in The Passenger and its companion volume, who much like Peyton in Styron’s novel haunts the narrative with chilling flashbacks. Ten years have passed when, on the very next page, we meet her brother Bobby, a salvage diver exploring a submerged plane wreck who happens upon clues that could put his life in jeopardy among those seeking something missing from that plane. Bobby is a brilliant intellect who could have been a physicist, but instead spends his life chasing down whatever provokes his greatest psychological fears. In this case, the terror of being deep underwater has driven him to salvage work in the oceans. Bobby is also a rugged and resourceful man’s man, a kind of Llewelyn Moss from No Country for Old Men, but with a much higher I.Q. Finally, Bobby, now thirty-seven years old, has never recovered from the death of his younger sister, with whom he had a close, passionate—and possibly incestuous—relationship.

Also integral to the plot is their now deceased father, a physicist who was once a key player in the Manhattan Project that produced the first atomic bombs that obliterated Hiroshima and Nagasaki. Their first names—Alicia and Bobby—seem to be an ironic echo of the “Alice and Bob” characters that are used as placeholders in science experiments, especially in physics.  Their surname, Western, could be a kind of doomed metaphor for the tragedy of mass murder on a scale never before imagined that has betrayed the promise of western civilization in the twentieth century and in its aftermath.

A real sense of doom, and a mounting paranoia, grips the narrative in general and Bobby in particular, in what appears to be a kind of mystery/thriller that meanders about, sometimes uncertainly. The cast of characters is extremely colorful, from a Vietnam veteran whose only regret among the many lives he brutally took while in-country is the elephants that he exploded with rockets from his gunship just for fun, to a small-time swindler with a wallet full of credit cards that don’t belong to him, and a bombshell trans woman with a heart of gold. Some of these folks are like the sorts that turn up in John Steinbeck’s Tortilla Flat, but on steroids, and more likely to suffer an unpredictable death.

But it is Alicia who steals the show in flashback fragments that reveal a stunningly beautiful young woman whose own brilliance in mathematics, physics, and music overshadows even Bobby. She seems to be schizophrenic, plagued by extremely well-defined hallucinations of bedside visitors who could be incarnates of walk-ons from an old-time circus sideshow, right out of central casting. The most prominent is the “Thalidomide Kid”—replete with the flippers most commonly identified with those deformities—who engages her as interlocutor with long-winded, fascinating, and often disturbing dialogue that can run to several pages. Alicia has been on meds, and has checked herself into institutions, but in the end, she becomes convinced both that her visitors are real and that she herself does not belong in this world. But is Alicia even human? There are passing hints that she could be an alien, or perhaps from another universe.

There’s much more, including an episode where “The Kid,” Alicia’s hallucination (?), takes a long walk on the beach with Bobby. This is surprising, if only because McCarthy has long pilloried the magical realism that frequently populates the novels of García Márquez or Haruki Murakami. Perhaps “The Kid” is no hallucination, after all? In any event, much like a Murakami novel—think 1Q84, for example—there are multiple plot lines in The Passenger that go nowhere, and the reader is left frustrated by the lack of resolution. And yet … and yet, the characters are so memorable, and the quality of the writing is so exceptional, that when the cover is finally closed it is closed without an ounce of regret for the experience. And at the same time, the reader demands more.

The “more” turns out to be Stella Maris, the companion volume that is absolutely essential to broadening your awareness of the plot of The Passenger. Stella Maris is a mental institution that Alicia—then a twenty-year-old dropout from a doctoral program in mathematics—has checked herself into one final time, in the very last year of her life, and so a full decade before the events recounted in The Passenger. She has no luggage, but forty thousand dollars in a plastic bag which she attempts to give to a receptionist. Bobby, in those days a race car driver, lies in a coma as the result of a crash. He is not expected to recover, but Alicia refuses to remove him from life support. The novel is told solely in transcript form through the psychiatric sessions of Alicia and a certain Dr. Cohen, but it is every bit a Socratic dialogue of science and philosophy and the existential meaning of life—not only Alicia’s life, but all of our lives, collectively. And finally, there is the dark journey to the eschatological. Alicia—and I suppose by extension Cormac McCarthy—doesn’t put much stock in a traditional, Judeo-Christian god, which has to be a myth, of course. At the same time, she has left atheism behind: there has to be something, in her view, even if she cannot identify it. But most terrifying, Alicia has a certainty that there lies somewhere an undiluted force of evil, something she terms the “Archatron,” that we all resist, even if there is a futility to that resistance.

I consider myself an intelligent and well-informed individual, but reading The Passenger, and especially Stella Maris, was immeasurably humbling. I felt much as I did the first time that I read Faulkner’s The Sound and the Fury, and even the second time that I read Gould’s Book of Fish, by Richard Flanagan. It is as if there are minds so much greater than mine that I cannot hope to comprehend all that they have to share, yet I can take full pleasure in immersing myself in their work. To borrow a line from Alicia, in her discussion of Oswald Spengler in Stella Maris, we might say also of Cormac McCarthy: “As with the general run of philosophers—if he is one—the most interesting thing was not his ideas but just the way his mind worked.”

Featured

Review of: The Gates of Europe: A History of Ukraine, by Serhii Plokhy

Still reeling from the pandemic, the world was rocked to its core on February 24, 2022, when Russian tanks rolled into Ukraine, an act of unprovoked aggression not seen in Europe since World War II that conjured up distressing historical parallels. If there were voices that previously denied the echo of Hitler’s Austrian Anschluss in Putin’s annexation of Crimea, or of German adventurism in the Sudetenland in Russian-sponsored separatism in the Donbas, there was no mistaking the similarity to the Nazi invasion of Poland in 1939. But it was Vladimir Putin’s challenge to the very legitimacy of Kyiv’s sovereignty—a shout-out to the Kremlin’s rising chorus of irredentism that declares Ukraine a wayward chunk of the “near abroad” that is rightly integral to Russia—that compels us to look much further back in history.

Putin’s claim, however dubious, raises a larger question: by what right can any nation claim self-determination? Is Ukraine really just a modern construct, an opportunistic product of the collapse of the USSR that, because it was historically a part of Russia, should be once again? Or, perhaps counter-intuitively, should western Russia instead be incorporated into Ukraine? Or—let’s stretch it a bit further—should much of modern Germany rightly belong to France? Or vice versa? From a contemporary vantage point, these are tantalizing musings that challenge the notions of shifting boundaries, the formation of nation states, fact-based if sometimes uncomfortable chronicles of history, the clash of ethnicities, and, most critically, actualities on the ground. Naturally, such speculation abruptly shifts from the purely academic to a stark reality at the barrel of a gun, as the history of Europe has grimly demonstrated over centuries past.

To learn more, I turned to the recently updated edition of The Gates of Europe: A History of Ukraine, by historian and Harvard professor Serhii Plokhy, a dense, well-researched, deep dive into the past that at once fully establishes Ukraine’s right to exist, while expertly placing it into the context of Europe’s past and present. For those like myself largely unacquainted with the layers of complexity and overlapping hegemonies that have long dominated the region, it turns out that there is much to cover. At the same time, the wealth of material that strikes as unfamiliar strongly and discouragingly underscores the western European bias in the classroom—which at least partially explains why even those Americans capable of locating Ukraine on a map prior to the invasion knew almost nothing of its history.

Survey courses in my high school covered Charlemagne’s 9th century empire that encompassed much of Europe to the west, including what is today France and Germany, but never mentioned Kievan Rus’—the cultural ancestor of modern Ukraine, Belarus, and Russia—that was in the 10th and 11th centuries the largest and by far the most powerful state on the continent, until it fragmented and then fell to Mongol invaders! To its east, the Grand Principality of Moscow, a 13th century Rus’ vassal state of the Mongols, formed the core of the later Russian Empire. In the 16th and 17th centuries, the Polish–Lithuanian Commonwealth was in its heyday among the largest and most populous states on the continent, but both Poland and Lithuania were to fall to partition by Russia, Prussia, and Austria, and effectively ceased to exist for more than a century. Also missing from maps, of course, were Italy and Germany, which did not even achieve statehood until the later 19th century. And the many nations of today’s southeastern Europe were then provinces of the Ottoman Empire. That is European history, complicated and nuanced, as history tends to be.

Plokhy’s erudite study restores Ukraine’s past from obscurity and reveals a people who, while enduring occupation and a series of partitions, never abandoned an aspiration to sovereignty that was not to be realized until the late 20th century. Once a dominant power, Ukraine was to be overrun by the Mongols, preyed upon for slave labor by the Crimean Khanate, and throughout the centuries sliced up into a variety of enclaves ruled by the Golden Horde, the Polish–Lithuanian Commonwealth, the Austrian Empire, the Tsardom of Russia, and finally the Soviet Union.

That long history was written with much blood and suffering inflicted by its various occupiers. In the last hundred years alone, that included Soviet campaigns of terror, ethnic cleansing, and deportations, as well as the catastrophic Great Famine of 1932–33—known as the “Holodomor”—a product of Stalin’s forced collectivization that led to the starvation deaths of nearly four million Ukrainians. Then there was World War II, which claimed another four million lives, including about a million Jews. The immediate postwar period was marked by more tumult and bloodshed. Stability and a somewhat better quality of life emerged under Nikita Khrushchev, who himself lived in Ukraine for many years. It was Khrushchev who transferred title of the Crimea to Ukraine in 1954. The final years under Soviet domination saw the Chernobyl nuclear disaster.

The structure of the USSR was manifested in political units known as Soviet Socialist Republics, which asserted a fictional autonomy subject to central control. Somewhat ironically, as time passed this enabled and strengthened nationalism within each of the respective SSRs. Ukraine (like Belarus) even held its own United Nations seat, although its UN votes were rubber-stamped by Moscow. Still, this further reinforced a sense of statehood, which was realized with the unexpected dissolution of the Soviet Union and Ukraine’s independence in 1991. In the years that followed, as Ukraine aspired to closer ties with the West, that statehood increasingly came under attack by Putin, who spoke in earnest of a “Greater Russia” that by all rights included Ukraine. Election meddling became common, but with the spectacular fall of the Russian-backed president in 2014, Putin annexed Crimea and fomented rebellion that sought to create breakaway “republics” in the Donbas of eastern Ukraine. This only intensified Kyiv’s desire for integration with the European Union and for NATO membership.

A vast country of forest and steppe, marked by fertile plains crisscrossed by rivers, Ukraine has long served as a strategic gateway between east and west, as emphasized in the book’s title. Elements of western, central, and eastern Europe all in some ways give definition to Ukrainian life and culture, and as such Ukraine remains inextricably as much a part of the west as the east. While Russia has left a huge imprint upon the nation’s DNA, it hardly informs the entirety of its national character. The Russian language continues to be widely spoken, and at least prior to the invasion many Ukrainians had Russian sympathies—if never a desire for annexation! For Ukrainians, stateless for too long, their own national identity ever remained unquestioned. Rather than threatening that identity, the Russian invasion has only bolstered it.

Today, Ukraine is the second largest European nation by area, after Russia. Far too often overlooked by both statesmen and talking heads, Ukraine would also be the world’s third largest nuclear power—and would have little to fear from the tanks of its former overlord—had it not given up its stockpile of nukes in a deal brokered by the United States, an important reminder to those who question America’s obligation to defend Ukraine.

As this review goes to press, Russia’s war—which Putin euphemistically terms a “special military operation”—is going very poorly, and despite energy supply shortages and threats of nuclear brinksmanship, the West stands firmly with Ukraine, which in the course of the conflict has been subjected to horrific war crimes by Russian invaders. However, as months pass, and both Europe and the United States endure the economic pain of inflation and rising fuel prices, as well as the ever-increasing odds of right-wing politicians gaining power on both sides of the Atlantic, it remains to be seen if this alliance will hold steady. As battlefield defeats mount, and men and materiel run short, Putin seems to be running out the clock in anticipation of that outcome. We can only hope it does not come to that.

While I learned a great deal from The Gates of Europe, and I would offer much acclaim to its scholarship, there are portions that can prove to be a slog for a nonacademic audience. Too much of the author’s chronicle reads like a textbook—pregnant with names and dates and events—and thus lacks the sweep of a grand thematic narrative that would inspire the reader and that the Ukrainian people he treats so richly deserve. At the same time, that does not diminish Plokhy’s achievement in turning out what is certainly the authoritative history of Ukraine. With Ukraine’s right to exist under assault once more, this volume serves as a powerful defense—the weapon of history—against any who might challenge its sovereignty. If you believe, as I do, that facts must triumph over propaganda and polemic, then I highly recommend that you turn to Plokhy to best refute Putin.


Featured

Review of: After the Apocalypse: America’s Role in the World Transformed, by Andrew Bacevich

I often suffer pangs of guilt when a volume received through an early reviewer program languishes on the shelf unread for an extended period. Such was the case with the “Advanced Reader’s Edition” of After the Apocalypse: America’s Role in the World Transformed, by Andrew Bacevich, that arrived in August 2021 and sat forsaken for an entire year until it finally fell off the top of my TBR (To-Be-Read) list and onto my lap. While hardly deliberate, my delay was no doubt neglectful. But sometimes neglect can foster unexpected opportunities for evaluation. More on that later.

First, a little about Andrew Bacevich. A West Point graduate and platoon leader in Vietnam in 1970–71, he went on to an army career that spanned twenty-three years, including the Gulf War, retiring with the rank of colonel. (It is said his early retirement was due to being passed over for promotion after taking responsibility for an accidental explosion at a camp he commanded in Kuwait.) He later became an academic, Professor Emeritus of International Relations and History at Boston University, and one-time director of its Center for International Relations (1998–2005). He is now president and co-founder of the bipartisan think tank the Quincy Institute for Responsible Statecraft. Deeply influenced by the theologian and ethicist Reinhold Niebuhr, Bacevich was once tagged as a conservative Catholic historian, but he defies simple categorization, most often serving as an unlikely voice in the wilderness decrying America’s “endless wars.” He has been a vocal, longtime critic of George W. Bush’s doctrine of preventive war, most prominently manifested in the Iraqi conflict, which he has rightly termed a “catastrophic failure.” He has also denounced the conceit of “American Exceptionalism,” and chillingly notes that the reliance on an all-volunteer military force translates into the ongoing, almost anonymous sacrifice of our men and women for a nation that largely has no skin in the game. His own son, a young army lieutenant, was killed in Iraq in 2007. I have previously read three other Bacevich works. As I noted in a review of one of these, his résumé attaches to Bacevich either enormous credibility or an axe to grind, or perhaps both. Still, as a scholar and gifted writer, he tends to be well worth the read.

The “apocalypse” central to the title of this book takes aim at the chaos that engulfed 2020, spawned by the sum total of the “toxic and divisive” Trump presidency, the increasing death toll of the pandemic, an economy in free fall, mass demonstrations by Black Lives Matter proponents seeking long-denied social justice, and rapidly spreading wildfires that dramatically underscored the looming catastrophe of global climate change. [p.1-3] Bacevich takes this armload of calamities as a flashing red signal that the country is not only headed in the wrong direction, but likely off a kind of cliff if we do not immediately take stock and change course. He draws odd parallels with the 1940 collapse of the French army under the Nazi onslaught, which—echoing French historian Marc Bloch—he attributes to “utter incompetence” and “a failure of leadership” at the very top. [p.xiv] This then serves as a head-scratching segue into a long-winded polemic on national security and foreign policy that recycles familiar Bacevich themes but offers little in the way of fresh analysis. This trajectory strikes as especially incongruent given that the specific litany of woes besetting the nation that populates his opening narrative has—rarely indeed for the United States—almost nothing to do with the military or foreign affairs.

If ever history was to manufacture an example of a failure of leadership, of course, it would be hard-pressed to come up with a better model than Donald Trump, who drowned out the noise of a series of mounting crises with a deafening roar of self-serving, hateful rhetoric directed at enemies real and imaginary, deliberately ignoring the threat of both coronavirus and climate change, while stoking racial tensions. Bacevich gives him his due, noting that his “ascent to the White House exposed gaping flaws in the American political system, his manifest contempt for the Constitution and the rule of law placing in jeopardy our democratic traditions.” [p.2] But while he hardly masks his contempt for Trump, Bacevich makes plain that there’s plenty of blame to go around for political elites in both parties, and he takes no prisoners, landing a series of blows on George W. Bush, Barack Obama, Hillary Clinton, Joe Biden, and a host of other members of the Washington establishment that he holds accountable for fostering and maintaining the global post-Cold War “American Empire” responsible for the “endless wars” that he has long condemned. He credits Trump for urging a retreat from alliances and engagements, but faults the selfish motives of an “America First” predicated on isolationism. Bacevich instead envisions a more positive role for the United States in the international arena—one with its sword permanently sheathed.

All this is heady stuff, and regardless of their politics many readers will find themselves nodding along as Bacevich makes his case, outlining the many wrongheaded policy endeavors championed by Republicans and Democrats alike for a wobbly superpower clinging to an outdated and increasingly irrelevant sense of national identity that fails to align with the global realities of the twenty-first century. But then, as Bacevich looks to the future for alternatives, as he seeks to map out on paper the next new world order, he stumbles, and stumbles badly, something only truly evident in retrospect, when his argument is viewed through the prism of the events that followed the release of After the Apocalypse in June 2021.

Bacevich has little to add here to his longstanding condemnation of the U.S. occupation of Afghanistan, which after two long decades of failed attempts at nation-building came to an end with our messy withdrawal in August 2021, just shortly after this book’s publication. President Biden was pilloried for the chaotic retreat, but while his administration could rightly be held to account for a failure to prepare for the worst, the elephant in that room in the Kabul airport where the ISIS-K suicide bomber blew himself up was certainly former president Trump, who brokered the deal to return Afghanistan to Taliban control. Biden, who plummeted in the polls due to outcomes he could do little to control, was disparaged much the same way Obama once was when he was held to blame for the subsequent turmoil in Iraq after effecting the withdrawal of U.S. forces agreed to by his predecessor, G.W. Bush. Once again, history rhymes. But the more salient point for those of us who share Bacevich’s anti-imperialism is that getting out is ever more difficult than going in.

But Bacevich has a great deal to say in After the Apocalypse about NATO, an alliance rooted in a past-tense Cold War stand-off that he pronounces counterproductive and obsolete. Bacevich disputes the long-held mythology of the so-called “West,” an artificial “sentiment” that has the United States and European nations bound together with common values of liberty, human rights, and democracy. Like Trump—who likely would have acted upon this had he been reelected—Bacevich calls for an end to US involvement with NATO. The United States and Europe have embarked on “divergent paths,” he argues, and that is as it should be. The Cold War is over. Relations with Russia and China are frosty, but entanglement in an alliance like NATO only fosters acrimony and fails to appropriately adapt our nation to the realities of the new millennium.

It is an interesting if academic argument that was abruptly crushed under the treads of Russian tanks in the premeditated invasion of Ukraine on February 24, 2022. If some denied the echo of Hitler’s 1938 Austrian Anschluss in Putin’s 2014 annexation of Crimea, there was no mistaking the similarity of unprovoked attacks on Kyiv and sister cities to the Nazi war machine’s march on Poland in 1939. And yes, when Biden and French President Emmanuel Macron stood together to unite that so-called West against Russian belligerence, the memory of France’s 1940 defeat was hardly out of mind. All of a sudden, NATO became less a theoretical construct and somewhat more of a safe haven against brutal militarism, wanton aggression, and the unapologetic war crimes that livestream on twenty-first century social media in images of streets littered with the bodies of civilians, many of them children. All of a sudden, NATO is pretty goddamned relevant.

In all this, you could rightly point to the wrong turns made after the dissolution of the USSR, to the failure of the West to allocate appropriate economic support for the heirs of the former Soviet Union, to the way a pattern of NATO expansion both isolated and antagonized Russia. But there remains no legitimate defense for Putin’s attempt to invade, besiege, and absorb a weaker neighbor—or at least a neighbor he perceived to be weaker, a misstep that could lead to his own undoing. Either way, the institution we call NATO turned out to be something to celebrate rather than deprecate. The fact that it is working exactly the way it was designed to work could turn out to be the real road map to the new world order that emerges in the aftermath of this crisis. We can only imagine the horrific alternatives had Trump won re-election: the U.S. out of NATO, Europe divided, Ukraine overrun and annexed, and perhaps even Putin feted at a White House dinner. So far, without firing a shot, NATO has not only saved Ukraine; arguably, it has saved the world as we know it, a world that extends well beyond whatever we might want to consider the “West.”

As much as I respect Bacevich and admire his scholarship, his informed appraisal of our current foreign policy realities has turned out to be entirely incorrect. Yes, the United States should rein in the American Empire. Yes, we should turn away from imperialist tendencies. Yes, we should focus our defense budget solely on defense, not aggression, resisting the urge to try to remake the world in our own image for either altruism or advantage. But at the same time, we must be mindful—as empires past have learned—that retreat can create vacuums, and we must be ever vigilant of what kinds of powers may fill those vacuums. Because we can grow and evolve into a better nation, a better people, but that evolution may not be contagious to our adversaries. Because getting out remains ever more difficult than going in.

Finally, a word about the use of the term “apocalypse,” a characterization that is bandied about a bit too frequently these days. 2020 was a pretty bad year, indeed, but it was hardly apocalyptic. Not even close. Despite the twin horrors of Trump and the pandemic, we have had other years that were far worse. Think 1814, when the British burned Washington and sent the president fleeing for his life. And 1862, with tens of thousands already lying dead on Civil War battlefields as the Union army suffered a series of reverses. And 1942, with the nation barely out of economic depression and Germany and Japan lined up against us. And 1968, marked by riots and assassinations, when it truly seemed that the nation was unraveling from within. Going forward, climate change may certainly breed apocalypse. So might a cornered Putin, equipped with an arsenal of nuclear weapons and diminishing options as Russian forces in the field teeter on collapse. But 2020 is already in the rear-view mirror. It will no doubt leave a mark upon us, but as we move on, it spins ever faster into our past. At the same time, predicting the future, even when armed with the best data, is fraught with unanticipated obstacles, and grand strategies almost always lead to failure. It remains our duty to study our history while we engage with our present. Apocalyptic or not, it’s all we’ve got …

I have reviewed other Bacevich books here:

Review of: Breach of Trust: How Americans Failed Their Soldiers and Their Country, by Andrew J. Bacevich

Review of: America’s War for the Greater Middle East: A Military History, by Andrew J. Bacevich


Featured

Review of: 1957: The Year That Launched the American Future, by Eric Burns

On October 4, 1957, the Soviet Union sent geopolitical shock waves across the planet with the launch of Sputnik 1, the first artificial Earth satellite. Sputnik was only twenty-three inches in diameter, transmitted radio signals for a mere twenty-one days, then burned up on reentry just three months after first achieving orbit, but it changed everything. Not only were the dynamics of the Cold War permanently altered by what came to be dubbed the “Space Race,” but the success of Sputnik ushered in a dramatic new era for developments in science and technology. I was not quite six months old.

America was later to win that race to the moon, but despite its fearsome specter as a diabolical power bent on world domination, the USSR turned out to be a kind of vast Potemkin Village that almost noiselessly went out of business at the close of 1991. The United States had pretty much lost interest in space travel by then, but that was just about the time that the next critical phase in the emerging digital age—widespread public access to personal computers and the internet—first wrought the enormous changes upon the landscape of American life that today might have Gen Z “zoomers” considering 1957 as something like a date out of ancient times.

And now, as this review goes to press—in yet one more recycle of the bon mot attributed to Mark Twain, “History Doesn’t Repeat Itself, but It Often Rhymes”—NASA has temporarily scrubbed the much-anticipated blastoff of lunar-bound Artemis I, but a real space race is again fiercely underway, although this time the rivals include not only Russia, but China and a whole host of billionaires, at least one of whom could potentially fit a template for a “James Bond” style villain. And while all this is going on, I recently registered for Medicare.

Sixty-five years later, there’s a lot to look back on. In 1957: The Year That Launched the American Future (2020), a fascinating, fast-paced chronicle composed of articulately rendered, thought-provoking chapter-length essays, author and journalist Eric Burns reminds us of what a pivotal year that proved to be, not only in kindling that first contest to dominate space, but in multiple other arenas of the social, political, and cultural, much of it only apparent in retrospect.

That year, while Sputnik stoked alarms that nuclear-armed Russians would annihilate the United States with bombs dropped from outer space, tabloid journalism reached what were then new levels of the outrageous in exploiting “The Mad Bomber of New York,” who turned out to be a pathetic little fellow whose series of explosives actually claimed not a single fatality. In another example of history’s unintended consequences, a congressional committee investigating illegal labor activities helped facilitate Jimmy Hoffa’s takeover of the Teamsters. A cloak of mystery was partially lifted from organized crime activities with a very public police raid at Apalachin that rounded up Mafia bosses by the score. The iconic ’57 Chevy ruled the road and cruised on newly constructed interstate highways that would revolutionize travel as well as wreak havoc on cityscapes. African Americans remained second-class citizens, but struggles for equality ignited a series of flashpoints. In September 1957, President Eisenhower federalized the Arkansas National Guard and sent Army troops to Little Rock to enforce desegregation. That same month, Congress passed the Civil Rights Act of 1957, watered-down yet still landmark legislation that paved the way for more substantial action ahead. Published that year were Jack Kerouac’s On the Road and Nevil Shute’s On the Beach. Michael Landon starred in I Was a Teenage Werewolf. Little Richard, who claimed to see Sputnik while performing in concert and took it as a message from God, abruptly walked off stage and abandoned rock music to preach the word of the Lord. But the nation’s number one hit was Elvis Presley’s All Shook Up; rock n’ roll was here to stay.

Burns’ commentary on all this and more is engaging and generally a delight to read, but 1957 is by no means a comprehensive history of that year. In fact, it is a stretch to term this book a history at all except in the sense that the events it describes occurred in the past. Instead, it is rather a subjective collection of somewhat loosely linked commentaries that spotlight specific events and emerging trends that the author identifies as formative for the nation we would become in the decades that followed. As such, the book succeeds due to Burns’ keen sense of how both key episodes as well as more subtle cultural waves influenced a country in transition from the conventional, consensus-driven postwar years to the radicalized, tumultuous times that lay just ahead.

His insight is most apparent in his cogent analysis of how Civil Rights advanced not only through lunch-counter sit-ins and a reaction that was marked by violent repression, but by cultural shifts among white Americans—and that rock n’ roll had at least some role in this evolution of outlooks. At the same time, his conservative roots are exposed in his treatment of On the Road and the rise of the “Beat generation”; Burns genuinely seems as baffled by their emergence as he is amazed that anyone could praise Kerouac’s literary talents. But, to his credit, he recognizes the impact the novel had upon a national audience that could no longer confidently boast of a certainty in its destiny. And it is Burns’ talent with a pen that captivates a markedly different audience, some sixty-five years later.

In the end, the author leaves us yearning for more. After all, other than references that border on the parenthetical to Richard Nixon, Robert F. Kennedy, and Dag Hammarskjöld, there is almost no discussion of national politics or international relations, essential elements in any study of a nation at what the author insists is a critical juncture. Even more problematic, very conspicuous in its absence is the missing chapter that should have been devoted to television. In 1950, 3.9 million TV sets were in fewer than ten percent of American homes. By 1957, that number had increased roughly tenfold to 38.9 million TVs in the homes of nearly eighty percent of the population! That year, I Love Lucy aired its final half-hour episode, but in addition to network news, families were glued to their black-and-white consoles watching Gunsmoke, Alfred Hitchcock, Lassie, You Bet Your Life, and Red Skelton. For the World War II generation, technology that brought motion pictures into their living rooms was something like miraculous. Nothing was more central to the life of the average American in 1957 than television, but Burns inexplicably ignores it.

Other than Sputnik, which clearly marked a turning point for science and exploration, it is a matter of some debate whether 1957 should be singled out for demarcation as the start of a new era. One could perhaps argue instead for the election of John F. Kennedy in 1960, or with even greater conviction, for the date of his assassination in 1963, as a true crossroads for the past and future United States. Still, if for no other reason than the conceit that this was my birth year, I am willing to embrace Burns’ thesis that 1957 represented a collective critical moment for us all. Either way, his book delivers an impressive tour of a time that seems increasingly more distant with the passing of each and every day.


Featured

Review of: Bogart, by A.M. Sperber & Eric Lax

Early in 2022, I saw Casablanca on the big screen for the first time, on the 80th anniversary of its premiere. Although over the years I have watched it in excess of two dozen times, this was a stunning, even mesmerizing experience for me, not least because I consider Casablanca the finest film of Old Hollywood—this over the objections of some of my film-geek friends who would lobby for Citizen Kane in its stead. Even so, most would concur with me that its star, Humphrey Bogart, was indeed the greatest actor of that era.

Attendance was sparse, diminished by a resurgence of COVID, but I sat transfixed in that nearly empty theater as Bogie’s distraught, drunken Rick Blaine famously raged that “Of all the gin joints in all the towns in all the world, she walks into mine!” He is, of course, lamenting his earlier unexpected encounter with old flame Ilsa Lund, splendidly portrayed with a sadness indelibly etched upon her beautiful countenance by Ingrid Bergman, who with Bogart led the credits of a magnificent ensemble cast that also included Paul Henreid, Claude Rains, Conrad Veidt, Sydney Greenstreet, and Peter Lorre. But Bogie remains the central object of that universe, the plot and the players in orbit about him. There’s no doubt that without Bogart, there could never have been a Casablanca as we know it. Such a movie might have been made, but it could hardly have achieved a greatness on this order of magnitude.

Bogie never actually uttered the signature line “Play it again, Sam,” so closely identified with the production (and later whimsically poached by Woody Allen for the title of his iconic 1972 comedy peppered with clips from Casablanca). And although the film won Academy Awards for Best Picture, Best Director, and Best Screenplay, Bogart was nominated but missed out on the Oscar, which instead went to Paul Lukas—does anyone still remember Paul Lukas?—for his role in Watch on the Rhine. This turns out to be a familiar story for Bogart, who struggled with a lifelong frustration at typecasting, miscasting, studio manipulation, lousy roles, inadequate compensation, missed opportunities, and repeated snubs—public recognition of his talent and star quality came only late in life and even then frequently eluded him, as on that Oscar night. He didn’t really expect to win, but we can yet only wonder at what Bogart must have been thinking . . . He was already forty-four years old on that disappointing evening when the Academy passed him over. There was no way he could have known that most of his greatest performances still lay ahead, that after multiple failed marriages (one still unraveling that very night) a young starlet he had only just met would come to be the love of his life and mother of his children, and that he would at last achieve not only the rare brand of stardom reserved for just a tiny slice of the top tier in his profession, but would go on to become a legend in his own lifetime and well beyond it: the epitome of the cool, tough, cynical guy who wears a thin veneer of apathy over an incorruptible moral center, women swooning over him as he stares down villains, an unlikely hero that every real man would seek to emulate.

My appreciation of Casablanca and its star in this grand cinema setting was enhanced by the fact that I was at the time reading Bogart (1997), by A.M. Sperber & Eric Lax, which is certainly the definitive biography of his life. I was also engaged in a self-appointed effort to watch as many key Bogie films in roughly chronological order as I could while reading the bio, which eventually turned out to be a total of twenty movies, from his first big break in The Petrified Forest (1936) to The Harder They Fall (1956), his final role prior to his tragic, untimely death at fifty-seven from esophageal cancer.

Bogie’s story is told brilliantly in this unusual collaboration by two authors who had never actually met. Ann Sperber, who wrote a celebrated biography of journalist Edward R. Murrow, spent seven years researching Bogart’s life and conducted nearly two hundred interviews with those who knew him most intimately before her sudden death in 1994. Biographer Eric Lax stepped in and shaped her draft manuscript into a coherent finished product that reads seamlessly, as if in a single voice. I frequently read biographies of American presidents not only to study the figure that is profiled, but because the very best ones serve double duty as chronicles of United States history, with the respective president as the focal point. I looked to the Bogart book for something similar, in this case a study of Old Hollywood with Bogie in the starring role. I was not to be disappointed.

Humphrey DeForest Bogart was born on Christmas Day 1899 in New York City to wealth and privilege, with a father who was a cardiopulmonary surgeon and a mother who was a commercial illustrator. Both parents were distant and unaffectionate. They had an apartment on the Upper West Side and a vast estate on Canandaigua Lake in upstate New York, where Bogie began his lifelong love affair with boating. Indifferent to higher education, he eventually flunked out of boarding school and joined the navy. There seems to have been nothing noteworthy about his early life.

His acting career began almost accidentally, and he spent several years on the stage before making his first full-length feature in 1930, Up the River, with his drinking buddy Spencer Tracy, who called him “Bogie.” He was already thirty years old. What followed were largely lackluster roles on both coasts, alternating between Broadway theaters and Hollywood studios. He was frequently broke, drank heavily, and his second marriage was crumbling. Then he won rave reviews as escaped murderer Duke Mantee in The Petrified Forest, playing opposite Leslie Howard on the stage. The studio bought the rights, but characteristically for Bogie, they did not want to cast him to reprise his role, looking instead for an established actor, with Edward G. Robinson at the top of the list. Then Howard, who had production rights, stepped in to demand Bogart get the part. The 1936 film adaptation of the play, which also featured a young Bette Davis, channeled Bogart’s dark and chillingly realistic portrayal of a psychopathic killer—in an era when gangsters like Dillinger and Pretty Boy Floyd dominated the headlines—and made Bogie a star.

But again he faced a series of let-downs. This was the era of the studio system, with actors used and abused by big shots like Jack Warner, who locked Bogart into a low-paid contract that tightly controlled his professional life, casting him repeatedly in virtually interchangeable gangster roles in a string of B-movies. It wasn’t until 1941, when he played Sam Spade in The Maltese Falcon—quintessential film noir as well as John Huston’s directorial debut—that Bogie joined the ranks of undisputed A-list stars and began the process of taking revenge on the studio system by commanding greater compensation and demanding greater control of his screen destiny. But in those days, despite his celebrity, that remained an uphill battle.

I began watching his films while reading the bio as a lark, but it turned out to be an essential assignment: you can’t read about Bogie without watching him. Many of the twenty that I screened I had seen before, some multiple times, but others were new to me. I was raised by my grandparents in the 1960s with a little help from a console TV in the living room and all of seven channels delivered via rooftop antenna. When cartoons, soaps, and prime time westerns and sitcoms weren’t broadcasting, the remaining airtime was devoted to movies. All kinds of movies, from the dreadful to the superlative and everything in-between, often on repeat. Much of it was classic Hollywood, and Bogart made the rounds. One of my grandfather’s favorite flicks was The Treasure of the Sierra Madre, and I can recall as a boy watching it with him multiple times. In general, he was a lousy parent, but I am grateful for that gift; it remains among my top Bogie films. We tend to most often think of Bogart as Rick Blaine or Philip Marlowe, but it is as Fred C. Dobbs in The Treasure of the Sierra Madre, Charlie Allnutt in The African Queen, and Captain Queeg in The Caine Mutiny that the full range of his talent is revealed.

It was hardly his finest role or his finest film, but it was while starring as Harry Morgan in To Have and Have Not (1944) that Bogie met and fell for his co-star, the gorgeous, statuesque, nineteen-year-old Lauren Bacall—twenty-five years younger than him—spawning one of Hollywood’s greatest on-screen, off-screen romances. They would be soulmates for the remainder of his life, and it was she who brought out the very best of him. Despite his tough guy screen persona, the real-life Bogie tended to be a brooding intellectual who played chess, was well-read, and had a deeply analytical mind. An expert sailor, he preferred boating on the open sea to carousing in bars, although he managed to do plenty of both. During crackdowns on alleged communist influence in Hollywood, Bogart and Bacall together took controversial and sometimes courageous stands against emerging blacklists and the House Un-American Activities Committee (HUAC). But he also had his flaws. He could be cheap. He could be a mean drunk. He sometimes wore a chip on his shoulder carved out of years of frustration at what was after all a very slow rise to the top of his profession.  But warts and all, far more of his peers loved him than not.

Bogart is a massive tome, and the first section is rather slow-going because Bogie’s early life was just so unremarkable. But it holds the reader’s interest because it is extremely well-written, and it goes on to succeed masterfully in spotlighting Bogart’s life against the rich fabric that forms the backdrop of that distant era of Old Hollywood before the curtains fell for all time.  If you are curious about either, I highly recommend this book. If you are too busy for that, at the very least carve out some hours of screen time and watch Bogie’s films. You will not regret the time spent. Although his name never gets dropped in the lyrics by Ray Davies for the familiar Kinks tune, if there were indeed Celluloid Heroes, the greatest among them was certainly Humphrey Bogart.


NOTE: These are Bogart films I screened while reading this book:

The Petrified Forest (1936)

Dead End (1937)

High Sierra (1941)

The Maltese Falcon (1941)

Across the Pacific (1942)

Casablanca (1942)

Passage to Marseille (1944)

To Have and Have Not (1944)

The Big Sleep (1946)

Dark Passage (1947)

Dead Reckoning (1947)

Treasure of the Sierra Madre (1948)

Key Largo (1948)

In a Lonely Place (1950)

The African Queen (1951)

Beat the Devil (1953)

The Caine Mutiny (1954)

Sabrina (1954)

The Desperate Hours (1955)

The Harder They Fall (1956)


Featured

Review of: A Self-Made Man: The Political Life of Abraham Lincoln Vol. I, 1809–1849, by Sidney Blumenthal

Historians consistently rank him at the top, tied with Washington for first place or simply declared America’s greatest president. His tenure was almost precisely synchronous with the nation’s most critical existential threat: his very election sparked secession, first shots were fired at Sumter a month after his inauguration, and the cannon were stilled at Appomattox a week before his murder. There were still armies in the field, but he was gone, replaced by one of the most sinister men ever to take the oath of office, leaving generations of his countrymen to wonder what might have transpired with all the nation’s painful unfinished business had he survived, from the trampled hopes for equality for African Americans to the promise of a truly “New South” that never emerged. A full century ago, decades after his death, he was reimagined as an enormous, seated marble man with the soulful gaze of fixed purpose, the central icon in his monument that provokes tears from the many visitors who stand in awe before him. When people think of Abraham Lincoln, that’s the image that usually springs to mind.

The seated figure rises to a height of nineteen feet; somebody calculated that if it stood up it would be some twenty-eight feet tall. The Lincoln that once walked the earth was not nearly that gargantuan, but he was nevertheless a giant in his time: physically, intellectually—and far too frequently overlooked—politically! He sometimes defies characterization because he was such a character, in so very many ways.

An autodidact gifted with a brilliant analytical mind, he was also a creature of great integrity loyal to a firm sense of a moral center that ever evolved when polished by new experiences and touched by unfamiliar ideas. A savvy politician, he understood how the world worked. He had unshakeable convictions, but he was tolerant of competing views. He had a pronounced sense of empathy for others, even and most especially his enemies. In company, he was a raconteur with a great sense of humor given to anecdotes often laced with self-deprecatory wit. (Lincoln, thought to be homely, when accused in debate of being two-faced, self-mockingly replied: “I leave it to my audience. If I had another face, do you think I’d wear this one?”) But despite his many admirable qualities, he was hardly flawless. He suffered with self-doubt, struggled with depression, stumbled through missteps, burned with ambition, and was capable of hosting a mean streak that loomed even as it was generally suppressed. More than anything else he had an outsize personality.

And Lincoln likewise left an outsize record of his life and times! So why has he generally posed such a challenge for biographers? Remarkably, some 15,000 books have been written about him—second, it is said, only to Jesus Christ—yet in this vast literature, the essence of Lincoln again and again somehow seems out of reach to his chroniclers. We know what he did and how he did it all too well, but portraying what the living Lincoln must have been like has remained frustratingly elusive in all too many narratives. For instance, David Herbert Donald’s highly acclaimed bio—considered by many the best single volume treatment of his life—is indeed impressive scholarship, yet leaves us with a Lincoln who is curiously dull and lifeless. Known for his uproarious banter, the guy who joked about being ugly for political advantage is glaringly absent in most works outside of Gore Vidal’s Lincoln, which superbly captures him but remains, alas, a novel, not a history.

All that changed with A Self-Made Man: The Political Life of Abraham Lincoln Vol. I, 1809–1849, by Sidney Blumenthal (2016), an epic, ambitious, magnificent contribution to the historiography that demonstrates not only that despite the thousands of pages written about him there still remains much to say about the man and his times, but even more significantly that it is possible to brilliantly recreate for readers what it must have been like to engage with the flesh and blood Lincoln. This is the first in a projected five-volume study (two subsequent volumes have been published to date) that—as the subtitle underscores—emphasizes the “political life” of Lincoln, another welcome contribution to a rapidly expanding genre focused upon politics and power, as showcased in such works as Jon Meacham’s Thomas Jefferson: The Art of Power, Robert Dallek’s Franklin D. Roosevelt: A Political Life, and George Washington: The Political Rise of America’s Founding Father, by David O. Stewart.

At first glance, this tactic might strike as surprising, since prior to his election as president in 1860 Lincoln could boast of little in the realm of public office beyond service in the Illinois state legislature and a single term in the US House of Representatives in the late 1840s. But, as Blumenthal’s deeply researched and well-written account reveals, politics defined Lincoln to his very core, inextricably manifested in his life and character from his youth onward, something too often disregarded by biographers of his early days. It turns out that Lincoln was every bit a political animal, and there is a trace of that in nearly every job he ever took, every personal relationship he ever formed, and every goal he ever chased.

This approach triggers a surprising epiphany for the student of Lincoln. It is as if an entirely new dimension of the man has been exposed for the first time that lends new meaning to words and actions previously treated superficially or—worse—misunderstood by other biographers. Early on, Blumenthal argues that Donald and others have frequently been misled by Lincoln’s politically crafted utterances that cast him as marked by passivity, too often taking him at his word when a careful eye on the circumstances demonstrates the exact opposite. In contrast, Lincoln, ever maneuvering, if quietly, could hardly be branded as passive [p9]. Given this perspective, the life and times of young Abe is transformed into something far richer and more colorful than the usual accounts of his law practice and domestic pursuits. In another context, I once snarkily exclaimed  “God save us from The Prairie Years” because I found Lincoln’s formative period—and not just Sandburg’s version of it—so uninteresting and unrelated to his later rise. Blumenthal has proved me wrong, and that sentiment deeply misplaced.

But Blumenthal not only succeeds in fleshing out a far more nuanced portrait of Lincoln—an impressive accomplishment on its own—but in the process boldly sets out to do nothing less than scrupulously detail the political history of the United States in the antebellum years from the Jackson-Calhoun nullification crisis onward. Ambitious is hardly an adequate descriptor for the elaborate narrative that results, a product of both prodigious research and a very talented pen. Scores of pages—indeed whole chapters—occur with literally no mention of Lincoln at all, a striking technique that is surprisingly successful; while Lincoln may appear conspicuous in his absence, he is nevertheless present, like the reader a studious observer of these tumultuous times even when he is not directly engaged, only making an appearance when the appropriate moment beckons. As such, A Self-Made Man is every bit as much a book of history as it is biography, a key element of the author’s unstated thesis: that it is impossible to truly get to know Lincoln—especially the political Lincoln—except in the context and complexity of his times, a critical emphasis not afforded in other studies.

And there is much to chronicle in these times. Some of this material is well known, even if until recently subject to faulty analysis.  The conventional view of the widespread division that characterized the antebellum period centered on a sometimes-paranoid south on the defensive, jealous of its privileges, in fear of a north encroaching upon its rights. But in keeping with the latest historiography, Blumenthal deftly highlights how it was that, in contrast, the slave south—which already wielded a disproportionate share of national political power due to the Constitution’s three-fifths clause that inflated its representation—not only stifled debate on  slavery but aggressively lobbied for its expansion. And just as a distinctly southern political ideology evolved its notion of the peculiar institution from the “wolf by the ear” necessary evil of Jefferson’s time to a vaunted hallmark of civilization that boasted benefit to master and servant, so too did it come to view the threat of separation less in dread than anticipation. The roots of all that an older Lincoln would witness severing the ancient “bonds of affection” of the then no longer united states were planted in these, his early years.

Other material is less familiar. Who knew how integral to Illinois politics—for a time—were the cunning Joseph Smith and his Mormon sect? Or that Smith’s path was once entangled with the budding career of Stephen A. Douglas? Meanwhile, the author sheds new light on the long rivalry between Lincoln and Douglas, which had deep roots that went back to the 1830s, decades before their celebrated clash on the national stage brought Lincoln to a prominence that finally eclipsed Douglas’s star.

Blumenthal’s insight also adeptly connects the present to the past, affording a greater relevance for today’s reader.  He suggests that the causes of the financial crisis of 2008 were not all that dissimilar to those that drove the Panic of 1837, but rather than mortgage-backed securities and a housing bubble, it was the monetization of human beings as slave property that leveraged enormous fortunes that vanished overnight when an oversupply of cotton sent market prices plummeting, which triggered British banks to call in loans on American debtors—a cotton bubble that burst spectacularly (p158-59). This point can hardly be overstated, since slavery was not only integral to the south’s economy, but by the eve of secession human property was to represent the largest single form of wealth in the nation, exceeding the combined value of all American railroads, banks, and factories. A cruel system that assigned values to men, women, and children like cattle had deep ramifications not only for masters who acted as “breeders” in the Chesapeake and markets in the deep south, but also for insurance companies in Hartford, textile mills in Lowell, and banks in London.

Although Blumenthal does not himself make this point, I could detect eerie if imperfect parallels to the elections of 2016 and 1844, with Lincoln seething as the perfect somehow became the enemy of the good. In that contest, Whig Henry Clay was up against Democrat James K. Polk. Both were slaveowners, but Clay opposed the expansion of slavery while Polk championed it. Antislavery purists in New York rejected Clay for the tiny Liberty Party, which by a slender margin tipped the election to Polk, who then boosted the slave power with Texas annexation, and served as principal author of the Mexican War that added vast territories to the nation, setting forces in motion that later spawned secession and Civil War. Lincoln was often prescient, but of course he could not know all that was to follow when, a year after Clay’s defeat, he bitterly denounced the “moral absolutism” that led to the “unintended tragic consequences” of Polk’s elevation to the White House (p303). To my mind, there was an echo of this in the 2016 disaster that saw Donald Trump prevail, a victory at least partially driven by those unwilling to support Hillary Clinton who—despite the stakes—threw away their votes on Jill Stein and Gary Johnson.

No review could properly summarize the wealth of the material contained here, nor overstate the quality of the presentation, which also suggests much promise for the volumes that follow. I must admit that at the outset I was reluctant to read yet another book about Lincoln, but A Self-Made Man was recommended to me by no less than historian Rick Perlstein (author of Nixonland), and like Perlstein, Blumenthal’s style is distinguished by animated prose bundled with a kind of uncontained energy that frequently delivers paragraphs given to an almost breathless exhale of ideas and people and events that expertly locates the reader at the very center of concepts and consequences. The result is something exceedingly rare for books of history or biography: a page-turner! Whether you are new to studies of Lincoln or a long-time devotee, this book should be required reading.


A review of one of Rick Perlstein’s books is here: Review of: Nixonland: The Rise of a President and the Fracturing of America, by Rick Perlstein

I reviewed the subsequent two volumes in Blumenthal’s Lincoln series here: Review of: Wrestling With His Angel: The Political Life of Abraham Lincoln Vol. II, 1849-1856, and All the Powers of Earth: The Political Life of Abraham Lincoln Vol. III, 1856-1860, by Sidney Blumenthal


Featured

Review of: Maladies of Empire: How Colonialism, Slavery, and War Transformed Medicine, by Jim Downs

As the COVID-19 pandemic swept the globe in 2020, it left in its wake the near-paralysis of many hospital systems, unprepared and unequipped for the waves of illness and death that suddenly overwhelmed their capacities for treatment—treatment that was, after all, at best only palliative, since for this deadly new virus there was neither a cure nor a clear route to prevention. Overnight, epidemiologists—scrambling for answers or even just clues—became the most critically significant members of the public health community, even if their informed voices were often shouted down by the shriller ones of media pundits and political hacks.

Meanwhile, data collection began in earnest and the number of data dashboards swelled. In the analytical process, the first stop was assessing the quality of the data and the disparities in how it was collected. Was it true, as some suggested, that a disproportionate number of African Americans were dying from COVID? At first, there was no way to know, since some states were not collecting data broken down by this kind of specific demographic. Data collection eventually became more standardized, more precise, and more reliable, serving as a key ingredient in combating the spread of this highly contagious virus, as well as one of the elements that guided the development of vaccines. Even so, dubious data and questionable studies too often took center stage both at political rallies and in the media circus that echoed a growing polarization that had one side denouncing masks, resisting vaccination, and touting sideshow magic bullets like Ivermectin. But talking heads and captive audiences aside, masks reduce infection, vaccines are effective, and dosing with Ivermectin is a scam. How do we know that? Data. Mostly due to data. Certainly, other key parts of the mix include scientists, medical professionals, case studies, and peer reviewed papers, but data—first collected and then analyzed—is the gold standard, not only for COVID but for all disease treatment and prevention.

But it wasn’t always that way.

In the beginning, there was no such thing as epidemiology. Disease causes and treatments were anecdotal, mystical, or speculative. Much of the progress in science and medicine that was the legacy of the classical world had long been lost to the west. The dawn of modern epidemiology rose above a horizon constructed of data painstakingly collected and compiled and subsequently analyzed. In fact, certain aspects of the origins of epidemiology ran concurrently with the evolution of statistical analysis. In the early days, as the reader comes to learn in this brilliant and groundbreaking 2021 work by historian Jim Downs, Maladies of Empire: How Colonialism, Slavery, and War Transformed Medicine, the bulk of the initial data was derived from unlikely and unwilling participants who existed at the very margins: the enslaved, the imprisoned, the war-wounded, and the destitute condemned to the squalor of public hospitals. Their identities are mostly forgotten, or were never recorded in the first place, yet collectively the data harvested from them provided the skeletal framework for the foundation of modern medicine.

In a remarkable achievement that could hardly be more relevant today, the author cleverly locates Maladies of Empire at the intersection of history and medicine, where data collection from unexpected and all too frequently wretched subjects comes to form the very basis of epidemiology itself. It is these early stories that send shudders through a modern audience. Nearly everyone is familiar with the wrenching 1787 diagram of the lower deck of the slave ship Brookes, where more than four hundred fifty enslaved human beings were packed like sardines for a months-long voyage, which became an emblem for the British antislavery movement. But, as Downs points out, few are aware that the sketch can be traced to the work of British naval surgeon Dr. Thomas Trotter, one of the first to recognize that poor ventilation in crowded conditions results in a lack of oxygen that breeds disease and death. His observations also led to a better understanding of how to prevent scurvy, a frequent cause of higher mortality rates among the seaborne citrus-deprived. Trotter himself was appalled by the conditions he encountered on the Brookes, and testified to this before the House of Commons. But that was hardly the case for many of his peers, and certainly not for the owners of slave ships, who looked past the moral dilemmas of a Trotter while remaining exceedingly grateful for his insights; after all, the goal was to keep larger quantities of their human cargo alive in order to turn greater profits. Dead slaves lack market value.

A little more than three decades prior to Trotter’s testimony, the critical need for ventilation was documented by another physician in the wake of the confinement of British soldiers in the infamous “Black Hole of Calcutta” after the 1756 fall of Fort William in Bengal, which resulted in the death by suffocation of the majority of the captives. Downs makes the point that one of the unintended consequences of colonialism was that for early actors in the medical arena it served to vastly extend the theater of observation of the disease-afflicted to a virtually global stage that hosted the byproducts of colonialism: war, subjugated peoples, the slave trade, military hospitals, and prisons. But it turns out that the starring roles belong less to the doctors and nurses who receive top billing in the history books than to the mostly uncredited bit players removed from the spotlight: the largely helpless and disadvantaged patients whose symptoms and outcomes were observed and cataloged, whose anonymous suffering translated into critical data that collectively advanced the emerging science of epidemiology.

Traditionally, history texts rarely showcased notable women, but one prominent exception was Florence Nightingale, frequently extolled for her role as a nurse during the Crimean War. As underscored in Maladies of Empire, however, Nightingale’s real if often overlooked legacy was as a kind of disease statistician, through her painstaking data collection and analysis—the very basis for an epidemiology generally credited to white men rather than to “women working in makeshift hospitals.” [p111] It was the poor outcomes for patients subjected to deplorable conditions in these makeshift military hospitals—which Nightingale assiduously observed and recorded—that drew attention to similarly appalling environments in civilian hospitals in England and the United States, leading to a studied analysis that eventually established systematic evidence for the causes, spread, and treatment of disease.

The conclusions these early epidemiologists reached were not always accurate. In fact, they were frequently wrong. But Downs emphasizes that what was significant was the development of the proper analytical framework. In these days prior to the revolutionary development of germ theory, notions on how to improve survival rates of the stricken put forward by Nightingale and others were controversial and often contradictory. Was the best course quarantine, a frequent resort? Or would improving sickbed conditions, as Nightingale advocated, lead to better outcomes? With the role of germs in contagion still unknown, evidence could be both inconclusive and inconsistent, and competing ideas could each be partly right. After all, regardless of how disease spread, cleaner and better ventilated facilities might lead to lower mortality rates. Nightingale stubbornly resisted germ theory even as it was widely adopted, but after it won her grudging acceptance, she continued to promote more sanitary hospital conditions to improve survival rates. Still, epidemiologists faced difficult challenges with diseases that did not conform to familiar patterns, such as cholera, spread by a tainted water supply, and yellow fever, a mosquito-borne disease.

In the early days, as noted, European observers collected data from slave ships, yet it never occurred to them that evidence drawn from black subjects might be inapplicable to the white population. But epidemiology took a surprisingly different course in the United States, where race has long proved to be a defining element. Of the more than six hundred thousand who lost their lives during the American Civil War, about two-thirds were felled not by bullets but by disease. The United States Sanitary Commission (USSC) was established in an attempt to ameliorate these dreadful outcomes, but its achievements on one hand were undermined on the other by an obsession with race, even going so far as sending out to “. . . military doctors a questionnaire, ‘The Physiological Status of the Negro,’ whose questions were based on the belief that Black soldiers were innately different from white soldiers . . . The questionnaire also distinguished gradations of color among Black soldiers, asking doctors to compare how ‘pure Negroes’ differed from people of ‘mixed races’ and to describe ‘the effects of amalgamation on the vital endurance and vigor of the offspring.’” With its imprimatur of governmental authority, the USSC officially championed scientific racism, with profound and long-term social, political, and economic consequences for African Americans. [p134-35]

Some of these notions can be traced back to the antebellum musings of Alabama surgeon Josiah Nott—made famous after the war when he correctly connected mosquitoes to the etiology of yellow fever—who asserted that blacks and whites were members of separate species whose mixed-race offspring he deemed “hybrids” who were “physiologically inferior.” Nott believed that all three of these distinct “types” responded differently to disease. [p124-25] His was but one manifestation of a once widespread pseudoscience that alleged black inferiority in order to justify first slavery and later second-class citizenship. Such ideas persisted for far too long, and although scientific racism still endures on the alt-right, it has been thoroughly discredited by actual scientists. It turns out that a larger percentage of African Americans did indeed die in the still-ongoing COVID pandemic, but this has been shown to be due to factors of socioeconomic status and lack of access to healthcare, not genetics.

Still, although deemed inferior, enslaved blacks also proved useful when convenient. The author argues that “… enslaved children were most likely used as the primary source of [smallpox] vaccine matter in the Civil War South,” despite the danger of infection in harvesting lymph from human subjects in order to vaccinate Confederate soldiers in the field. In yet one more reminder of the moral turpitude that defined the south’s “peculiar institution,” the subjects also included infants whose resulting scar or pit, Downs points out, “. . . would last a lifetime, indelibly marking a deliberate infection of war and bondage. Few, if any, knew that the scars and pit marks actually disclosed the infant’s first form of enslaved labor, an assignment that did not make it into the ledger books or the plantation records.” [p141-42]

Tragically, this episode was hardly an anomaly, and unethical medical practices involving blacks did not end with Appomattox. The infamous “Tuskegee Syphilis Study” that observed but failed to offer treatment to the nearly four hundred black men recruited without informed consent ran for forty years and was not terminated until 1972! One of the chief reasons for COVID vaccine hesitancy among African Americans has been identified as a distrust of a medical community that historically has either victimized or marginalized them.

Maladies of Empire is a well-written, highly readable book suitable to a scholarly as well as popular audience, and clearly represents a magnificent contribution to the historiography. But it is hardly only for students of history. Instead, it rightly belongs on the shelf of every medical professional practicing today—especially epidemiologists!

Featured

Review of: Chemistry for Breakfast: The Amazing Science of Everyday Life, by Mai Thi Nguyen-Kim

Is your morning coffee moving? Is there a particle party going on in your kitchen? What makes for a great-tasting gourmet meal? Does artificial flavoring really make a difference? Why does mixing soap with water get your dishes clean? Why do some say that “sitting is the new smoking?” How come one beer gives you a strong buzz but your friend can drink a bottle of wine without slurring her words?  When it comes to love, is the “right chemistry” just a metaphor? And would you dump your partner because he won’t use fluoridated toothpaste?

All this and much more makes for the delightful conversation packed into Chemistry for Breakfast: The Amazing Science of Everyday Life, by Mai Thi Nguyen-Kim, a fun, fascinating, and fast-moving slender volume that could very well turn you into a fan of—of all things—chemistry! This cool and quirky book is just the latest effort by the author—a real-life German chemist who hosts a YouTube channel and has delivered a TED Talk—to combat what she playfully dubs “chemism”: the notion that chemistry is dull and best left to the devices of boring, nerdy chem-geeks! One reason it works is that Nguyen-Kim is herself the antithesis of such stereotypes, coming off in both print and video as a hip, brilliant, and articulate young woman with a passion for science and for living in the moment.

I rarely pick up a science book, but when I do, I typically punch above my intellectual weight, challenging myself to reach beyond my facility with history and literature to dare to tangle with the intimidating realms of physics, biology, and the like. I often emerge somewhat bruised but with the benefit of new insights, as I did after my time with Sean Carroll’s The Particle at the End of the Universe and Bill Schopf’s Cradle of Life. So it was with a mix of eagerness and trepidation that I approached Chemistry for Breakfast.

But this proved to be a vastly different experience! Using her typical day as a backdrop—from her own body’s release of stress hormones when the alarm sounds to the way postprandial glasses of wine mess with the neurotransmitters of her guests—Nguyen-Kim demonstrates the omnipresence of chemistry in our very existence, and distills its complexity into bite-size concepts that are easy to process yet never dumbed down. Apparently, there is a particle party going on in your kitchen every morning, with all kinds of atoms moving at different rates in the coffee you’re sipping, the mug in your hand, and the steam rising above it. It’s all about temperature and molecular bonds. In a chapter whimsically entitled “Death by Toothpaste,” we find out how chemicals bond to produce sodium fluoride, the stuff of toothpaste, and why that not only makes for a potent weapon against cavities, but why the author’s best buddy might dump her boyfriend—because he thinks fluoride is poison! There’s much more to come—and it’s still only morning at Mai’s house …

As a reader, I found myself learning a lot about chemistry without studying chemistry, a remarkable achievement by the author, whose technique is so effective because it is so distinctive. She fields humorous anecdotes plucked from everyday existence, and her wit is infectious, so the “lessons” prove entertaining without turning silly. I love to cook, so I especially welcomed her return to the kitchen in a later chapter. Alas, I found out that while I can pride myself on my culinary expertise, it all really comes down to the way ingredients react with one another in a mixing bowl and on the hot stove. Oh, and it turns out that despite the fearmongering in some quarters, most artificial flavors are no better or worse than natural ones. Yes, you should read the label—but you have to know what those ingredients are before you judge them healthy or not.

Throughout the narrative, Nguyen-Kim conveys an attractive brand of approachability that makes you want to sit down and have a beer with her, but unfortunately she can’t drink: Mai, born of Vietnamese parents, has inherited a gene mutation common among a certain segment of Asians that interferes with the way the body processes alcohol, so she becomes overly intoxicated after just a few sips of any strong drink. She explains in detail why her “broken” ALDH2 enzyme simply will not break down the acetaldehyde produced as her body metabolizes alcohol, so the glass of wine that makes her guests a little tipsy instead gives her nausea and a rapid heartbeat, and sends a “weird, lobster-red tinge” to her face. Mai’s issue with alcohol reminded me of recent studies revealing that the reason some people of northern European ancestry always burn instead of tan at the beach is faulty genes that block the creation of melanin in response to sun exposure. This strongly underscores that while race is of course a myth that otherwise communicates nothing of importance about human beings, in the medical world genetics has the potential to serve as a powerful tool to explain and treat disease. As for Mai, given the overall health risks of alcohol consumption, she views her inability to drink as more of a blessing than a curse, and hopes to pass her broken gene on to her offspring!

The odds that I would ever deliberately set out to read a book about chemistry were never that favorable.  That I would do so and then rave about the experience seemed even more unlikely.  But here we are, along with my highest recommendations. Mai’s love of science is nothing less than contagious. If you read her work,  I can promise that not only will you learn a lot, but you will really enjoy the learning process. And that too, I suppose, is chemistry!

 

[Note: I read an Advance Reader’s Copy of this book as part of an early reviewer’s program]

Featured

Review of: The Lost Founding Father: John Quincy Adams and the Transformation of American Politics, by William J. Cooper

Until Jimmy Carter came along, there really was no rival to John Quincy Adams (1767-1848) as best ex-president, although perhaps William Howard Taft earns honorable mention for his later service as Chief Justice of the Supreme Court.  Carter—who at ninety-seven still walks among us as this review goes to press—has made his reputation as a humanitarian outside of government after what many view as a mostly failed single term in the White House. Adams, on the other hand, whose one term as the sixth President of the United States (1825-29) was likewise disappointing, managed to establish a memorable outsize official legacy when he returned to serve his country as a member of the House of Representatives from 1831 until his dramatic collapse at his desk and subsequent death inside the Capitol Building in 1848. Freshman Congressman Abraham Lincoln would be a pallbearer.

Like several of the Founders whose own later presidential years were troubled, including his own father, John Quincy had a far more distinguished and successful career prior to his time as Chief Executive. But quite remarkably, unlike these other men—John Adams, Jefferson, Madison—who lingered in mostly quiet retirement for decades beyond their respective tenures, John Quincy Adams could be said to have equaled or surpassed his accomplished pre-presidential service as diplomat, United States Senator, and Secretary of State when he returned as a simple Congressman from Massachusetts who was to become a giant of antislavery advocacy. Adams remains the only former president elected to the House, and until George W. Bush in 2001, he was the only man who could claim his own father as a fellow president.

Notably, the single unsatisfactory terms that he and his father served in the White House turned out to be bookends to a significant era in American history: John Adams was the first to run for president in a contested election (Washington had essentially been unopposed); his son’s tenure ended along with the Early Republic, shattered by the ascent of Jacksonian democracy. But if the Early Republic was no more, it marked only the beginning of another chapter in the extraordinary life of John Quincy Adams. And yet, for a figure who carved such indelible grooves in our nation’s history, present at the creation and active well into the crises of the antebellum period that not long after his death would threaten to annihilate the American experiment, it remains somewhat astonishing how utterly unfamiliar he remains to most citizens of the twenty-first century.

Prominent historian William J. Cooper seeks to remedy that with The Lost Founding Father: John Quincy Adams and the Transformation of American Politics (2017), an exhaustively researched, extremely well-written, if dense study that is likely to claim distinction as the definitive biography for some years to come. Cooper’s impressive work is old-fashioned narrative history at its best. John Quincy Adams is the main character, but his story is told amid the backdrop of the nation’s founding, its evolution as a young republic, and its descent to sectional crises over slavery, while many, at home and abroad, wondered at the likelihood of its survival. It is not only clever but entirely apt that in the book’s title the author dubs his subject the “Lost Founding Father.”

Some have called Benjamin Franklin the “grandfather of his country.” Likewise, John Quincy Adams could be said to be a sort of “grandson.” He not only witnessed the tumultuous era of the American Revolution and observed John Adams’ storied role as a principal Founder, but also accompanied his father on diplomatic missions to Europe while still a boy, and completed most of his early education there. Like Franklin, Jefferson, and his father, he spent many years abroad during periods of fast-moving events and dramatic developments on American soil that altered the nation and could prove jarring upon return. Unlike the others, his extended absence coincided with his formative years; John Quincy grew up not in New England but rather in France, the Netherlands, Russia, and Great Britain, and this came to deeply affect him.

A brooding intellectual with a brilliant mind who sought solitude over society, dedicated to principle above all else, including loyalty to party, the Adams that emerges in these pages was a socially awkward workaholic subject to depression, blessed with talents that ranged from the literary to languages to the deeply analytical, but lacking even the tiniest vestige of charisma. He strikes the reader as the least suitable person ever to aspire to or serve as president of the United States. A gifted writer, he began a diary when he was twelve years old that he continued almost without interruption until shortly before his death. He frequently expressed dismay at his inability to keep up with his ambitious goals for daily diary entries that often ran to considerable length.

There is much in the man that resembles his father, also a principled intellect, whom he much admired even while he suffered a sense of inadequacy in his shadow. Both men were stubborn in their ideals and tended to alienate those who might otherwise be allies. While each could be self-righteous, John Adams was also ever firmly self-confident in a way that his son could never match. Of course, in his defense, the younger man not only felt obligated to live up to a figure who was a titan in the public arena, but he also lacked a wife cut from the same cloth as his mother, with whom he had a sometimes-troubled relationship.

Modern historians have made much of the historic partnership that existed, mostly behind the scenes, between John and Abigail Adams; in every way except eighteenth-century mores she seems his equal. John Quincy, on the other hand, was wedded to Louisa Catherine, a sickly woman given to fainting spells and frequent migraines whose multiple miscarriages coupled with the loss of an infant daughter certainly triggered severe psychological trauma. A modern audience can’t help but wonder if her many maladies and histrionics were not psychosomatic. At any rate, John Quincy treated his wife and the other women he encountered with the patronizing male chauvinism typical of his times, so it is doubtful that, had he instead found an Abigail Adams at his side, he could have flourished in her orbit the way his father did.

Although Secretary of State John Quincy Adams was largely the force that drove the landmark “Monroe Doctrine” and other foreign policy achievements of the Monroe Administration, most who know of Adams tend to know of him only peripherally, through his legendary political confrontation with the far more celebrated Andrew Jackson. That conflict was forged in the election of 1824. The Federalist Party, scorned for threats of New England secession during the War of 1812, was essentially out of business. James Monroe was wrapping up his second term in what historians have called the “Era of Good Feelings” that ostensibly reflected a sense of national unity controlled by a single party, the Democratic-Republicans, but there were fissures, factions, local interests, and emerging coalitions beneath the surface. In the most contested election to date in the nation’s history, John Quincy, Andrew Jackson, Henry Clay, and William Crawford were the chief contenders for the highest office. While Jackson received a plurality, none received a majority of the electoral votes, so as specified in the Constitution the race was sent to the House for decision. Crawford had suffered a devastating stroke and was thus out of consideration. Adams and Clay tended to clash, but both were aligned on many national issues, and Jackson was rightly seen as a dangerous demagogue. Clay threw his support to Adams, who became president. Jackson was furious, even more so when Adams later named Clay Secretary of State, then seen as a sure steppingstone to the presidency; Jackson branded the arrangement a “Corrupt Bargain.” As it turned out, while Adams prevailed, his presidency was marked by frustration, his ambitious domestic goals stymied by Congress. In a run for reelection, he was dealt a humiliating defeat by Jackson, who headed the new Democratic Party. The politics of John Quincy Adams and the Early Republic went extinct.

While evaluating these two elections, it’s worth pausing here to emphasize John Quincy’s longtime objection to the nefarious if often overlooked impact of the three-fifths clause in the Constitution, which granted southern slaveholding states outsize political clout by counting an enslaved individual as three-fifths of a person for the purpose of representation. This was to prove significant, since the slave south claimed a disproportionate share of national political power when it came to advancing legislation or, for that matter, electing a president. He focused on this issue while Secretary of State, during the debate that swirled around the Missouri Compromise of 1820, concluding that:

The bargain in the Constitution between freedom and slavery had conveyed to the South far too much political influence, its base the notorious three-fifths clause, which immorally increased southern power in the nation … the past two decades had witnessed a southern domination that had ravaged the Union … he emphasized what he saw as the moral viciousness of that founding accord. It contradicted the fundamental justification of the American Revolution by subjecting slaves to oppression while privileging their masters with about a double representation.  [p174]

This was years before he was himself to fall victim to the infamous clause. As underscored by historian Alan Taylor in his recent work, American Republics (2021), the disputed election of 1824 would have been far less disputed without the three-fifths clause, since in that case Adams would have led Andrew Jackson in the Electoral College 83 to 77 votes, instead of putting Jackson in the lead 99 to 84. When Jackson prevailed in the next election in 1828, it was the south that cemented his victory. The days of Virginia planters in the White House may have passed, but the slave south clearly dominated national politics and often served as antebellum kingmaker for the White House.

In any case, Adams’ dreams of vindicating his father’s single term were dashed.  A lesser man would have gone off into the exile of retirement, but Adams was to come back—and come back stronger than ever as a political figure to be reckoned with, distinguished by his fierce antislavery activism. His abhorrence of human bondage ran deep, and long preceded his return to Congress. And because he kept such a detailed journal, we have insight into his most personal convictions.

Musing once more about the Missouri Compromise, he confided to his diary his belief that a war over slavery was surely on the horizon that would ultimately result in its elimination:  “If slavery be the destined sword in the hand of the destroying angel which is to sever the ties of this Union … the same sword will cut in sunder the bonds of slavery itself.” [p173] He also wrote of his conversations with the fellow cabinet secretary he most admired at the time, South Carolina’s John C. Calhoun, who clearly articulated the doctrine of white supremacy that defined the south. To Adams’ disappointment, Calhoun told him that southerners did not believe the Declaration’s guarantees of universal rights applied to blacks, and “Calhoun maintained that racial slavery guaranteed equality among whites because it placed all of them above blacks.” [p175]

These diary entries from 1820 came to foreshadow the more crisis-driven politics of the decades that followed, when Adams—his unhappy presidency long behind him—was the leading figure in Congress who stood against the south’s “peculiar institution” and southern domination of national politics. These were, of course, far more fraught times. He opposed both Texas annexation and the Mexican War, which he correctly viewed as a conflict designed to extend slavery. But he most famously led the opposition against the 1836 resolution known as the “gag rule” that prohibited House debate on petitions to abolish slavery, which incensed the north and spawned greater polarization. Adams was eventually successful, and the gag rule was repealed, but not until 1844.

It has long been my goal to read at least one biography of each American president, and I came to Cooper’s book with that objective in mind. I found my time with it a deeply satisfying experience, although I suspect that because it is so dense with detail it will find less appeal among a popular audience. Still, if you want to learn about this too often overlooked critical figure and at the same time gain a greater understanding of an important era in American history, I would highly recommend that you turn to The Lost Founding Father.

——————————————

Note: I reviewed the referenced Alan Taylor work here: Review of: American Republics: A Continental History of the United States, 1783-1850, by Alan Taylor

 

Featured

Review of: Marching Masters: Slavery, Race, and the Confederate Army During the Civil War, by Colin Edward Woodward

Early in the war … a Union squad closed in on a single ragged Confederate, and he obviously didn’t own any slaves. He couldn’t have much interest in the Constitution or anything else. And they asked him: “What are you fighting for, anyhow?” And he said: “I’m fighting because you’re down here.” Which is a pretty satisfactory answer.

That excerpt is from Ken Burns’ epic docuseries The Civil War (1990), Episode 1, “The Cause.” It was delivered by the avuncular Shelby Foote in his soft, reassuring—some might say mellifluous—cadence, the inflection decorated with a pronounced but gentle southern accent. As professor of history James M. Lundberg complains, Foote, author of a popular Civil War trilogy who was himself not a historian, “nearly negates Burns’ careful 15-minute portrait of slavery’s role in the coming of the war with a 15-second” anecdote. Elsewhere, Foote rejects the scholarly consensus that slavery was the central cause for secession and the conflict it spawned that would take well over 600,000 American lives.

While all but die-hard “Lost Cause” myth fanatics have relegated Foote’s ill-conceived dismissal of the centrality of slavery to the dustbin of history, the notion that southern soldiers fought solely for home and hearth has long persisted, even among historians. And on the face of it, it seems as if it should be true. After all, secession was the work of a narrow slice of the antebellum south, the slave-owning planter class, which comprised less than two percent of the population but dominated the political elite, furious that Lincoln’s election by “Free-Soil” Republicans would likely deny their demands to transplant their “peculiar institution” to the new territories acquired in the Mexican War. More critically, three-quarters of southerners owned no slaves at all, and nearly ninety percent of the remainder owned twenty or fewer. Most whites lived at the margins as yeoman farmers, although their skin color ensured a status markedly above that of blacks, free or enslaved. The Confederate army closely reflected that society: most rebel soldiers were not slaveowners. So slavery could not have been important to them … or could it?

The first to challenge the assumption that Civil War soldiers, north or south, were political agnostics was James M. McPherson in What They Fought For 1861-1865 (1995). Based on extensive research on letters written home from the front, McPherson argued that most of those in uniform were far more ideological than previously acknowledged. In a magnificent contribution to the historiography, Colin Edward Woodward goes much further in Marching Masters: Slavery, Race, and the Confederate Army During the Civil War (2014), presenting compelling evidence that not only were most gray-clad combatants well-informed about the issues at stake, but a prime motivating force for a majority was to preserve the institution of human chattel bondage and the white supremacy that defined the Confederacy.

Like McPherson, Woodward does a deep dive into the wealth of still extant letters from those at the front to make his case in a deeply researched and well-written narrative that reveals that the average rebel was surprisingly well-versed in the greater issues manifested in the debates that launched an independent Confederacy and justified the blood and treasure being spent to sustain it. And just as in secession, the central focus was upon preserving a society that had its foundation in chattel slavery and white supremacy. Some letters were penned by those who left enslaved human beings—many or just a few—back at home with their families when they marched off to fight, while most were written by poor dirt farmers who had no human property nor the immediate prospect of obtaining any.

But what is truly astonishing, as Woodward exposes in the narrative, is not only how frequently slavery and the appropriate status for African Americans are referenced in such correspondence, but how remarkably similar the language is, whether the soldier is the son of a wealthy planter or a yeoman farmer barely scraping by. In nearly every case, the righteousness of their cause is defined again and again not by the euphemism of “states’ rights” that became the rallying cry of the “Lost Cause” after the war, but by the sanctity of the institution of human bondage. More than once, letters resound with a disturbing yet familiar refrain asserting that the most fitting condition for blacks is as human property, something seen as mutually beneficial to the master as well as to the enslaved.

If the spectacle of those without slaves risking life and limb to sustain slavery, with both musket in hand and zealous declarations in letters home, provokes a kind of cognitive dissonance in modern ears, we need only be reminded of our own contemporaries in doublewides who might sound the most passionate defense of Wall Street banks. Have-nots in America often aspire to what is beyond their reach, for themselves or for their children. For poor southern whites of the time, in and out of the Confederate army, that aspiration turned out to be slave property.

One of the greatest sins of postwar reconciliation and the tenacity of the “Lost Cause” was the erasure of African Americans from history. In the myth-making that followed Appomattox, with human bondage extinct and its practice widely reviled, the Civil War was transformed into a sectional war of white brother against white brother, and blacks were relegated to roles as bit players. The centrality of slavery was excised from the record. In the literature, blacks were generally recalled as benign servants loyal to their masters, like the terrified Prissy in Gone with the Wind screeching “De Yankees is comin!” in distress rather than the celebration more likely characteristic of that moment in real time. That a half million of the enslaved fled to freedom in Union lines was lost to memory. Also forgotten was the fact that by the end of the war, fully ten percent of the Union Army consisted of black soldiers in the United States Colored Troops (USCT)—and these men played a significant role in the south’s defeat. Never mentioned was that Confederate soldiers routinely executed black men in blue uniforms who were wounded or attempting to surrender, not only in well-known encounters like Fort Pillow and the Battle of the Crater, but frequently and anonymously. As Woodward reminds us, this brand of murder was often unofficial, rarely acknowledged, and almost never condemned. Only recently have these aspects of Civil War history received the attention that is their due.

And yet, more remarkably, Marching Masters reveals that perhaps the deepest and most enduring erasure of African Americans was of the huge cohort that accompanied the Confederate army on its various campaigns throughout the war. Thousands and thousands of them. “Lost Cause” zealots have imagined great corps of “Black Confederates” who served as fighters fending off Yankee marauders, but if that is fantasy—and it certainly is—the massive numbers of blacks who served as laborers alongside white infantry were not only real but represented a significant reason why smaller forces of Confederates held out as well as they did against their often numerically superior northern opponents. We have long known that a greater percentage of southerners were able to join the military than their northern counterparts because slave labor at home in agriculture and industry freed up men to wield saber and musket, but Woodward uncovers the long-overlooked legions of the enslaved who traveled with the rebels performing the kind of labor that (mostly) fell on white enlisted men in northern armies.

A segment of these were also personal servants to the sons of planters, which sometimes provoked jealousy among the ranks. Certain letters home plead for just such a servile companion, sometimes arguing that the enslaved person would be less likely to flee to Union lines if he was to be a cook in an army camp instead! And there were indeed occasional tender, if somewhat perversely paternalistic, bonds between homesick soldiers and the enslaved, some of which found wistful expression in letters, some manifested in relationships with servants in the encampments. Many soldiers had deep attachments to the enslaved who had nurtured them as children in the bosom of their families; some of that was sincerely reciprocated. Woodward makes it clear that while certain generalities can be drawn, every individual—soldier or chattel—was a human being capable of a wide range of actions and emotions, from the cruel to the heartwarming. For better or for worse, all were creatures of their times and their circumstances. But, at the end of the day, white soldiers had something like free will; enslaved African Americans were subject to the will of others, sometimes for the better but more often for the worse.

And then there was impressment. One of the major issues relatively unexplored in the literature is the resistance of white soldiers in the Confederate army to performing menial labor—the same tasks routinely done by white soldiers in the Union army, who grumbled as all those in the ranks in every army were wont to do while nevertheless following orders. But southern boys were different. Nurtured in a society firmly grounded in white supremacy, with chattel property doomed to the most onerous toil, rebels not only typically looked down upon hard work but—as comes out in their letters—equated it with “slavery.” To cope with this and an overall shortage of manpower, legislation was passed in 1863 mandating impressment of the enslaved along with a commitment of compensation to owners. This was not well received, but it was enacted nonetheless, and thousands more blacks were sent to camps to do the work soldiers were not willing to do.

The numbers were staggering. When Lee invaded Pennsylvania, his army included 6,000 enslaved blacks—adding ten percent to the 60,000 infantry troops he led to Gettysburg! This of course does not include the runaways and free blacks his forces seized and enslaved after he crossed the state line. The point of all this, of course, is that slavery was not some ideological abstraction for the average rebel soldier in the ranks, something confined to the home front, whether your own family owned chattel property or not. Instead, the enslaved were with you in the field every day, not figuratively but in the flesh. With this in mind, any denial that slavery served as a critical motivation for Confederate troops rings decidedly off-key.

While slavery was the central cause of the war, it was certainly not the only cause. There were other tensions that included agriculture vs. industry, rural vs. urban, states’ rights vs. central government, tariffs, etc. But as historians have long concluded, none of these factors on their own could ever have led to Civil War. Likewise, southern soldiers fought for a variety of reasons. While plenty were volunteers, many were also drafted into the war effort. Like soldiers from ancient times to the present day, they fought because they were ordered to, because of their personal honor, because they did not want to appear cowardly in the eyes of their companions. And because much of the war was decided on southern soil, they also fought for their homeland, to defend their families, to preserve their independence. So Shelby Foote might have had a point. But what was that independence based upon? It was fully and openly based upon creating and sustaining a proud slave republic, as all the rhetoric in the lead-up to secession loudly underscored.

Marching Masters argues convincingly that the long-held belief that southern soldiers were indifferent to or unacquainted with the principles that guided the Confederate States of America is in itself a kind of myth that encourages us to not only forgive those who fought for a reprehensible cause but to put them on a kind of heroic pedestal. Many fought valiantly, many lost their lives, and many were indeed heroes, but we must not overlook the cause that defined that sacrifice.  In this, we must recall the speech delivered by the formerly enslaved Frederick Douglass on “Remembering the Civil War” with his plea against moral equivalency that is as relevant today as it was when he delivered it on Decoration Day in 1878: “There was a right side and a wrong side in the late war, which no sentiment ought to cause us to forget, and while today we should have malice toward none, and charity toward all, it is no part of our duty to confound right with wrong, or loyalty with treason.”

For all of the more than 60,000 books on the Civil War, there still remains a great deal to explore and much that has long been cloaked in myth for us to unravel. It is the duty not only of historians but for all citizens of our nation—a nation that was truly reborn in that tragic, bloody conflict—to set aside popular if erroneous notions of what led to that war, as well as what motivated its long-dead combatants to take up arms against one another. To that end, Woodward’s Marching Masters is a book that is not only highly recommended but is most certainly required reading.

 

——————————-

Transcript of The Civil War (1990) docuseries, Episode 1, “The Cause:” https://subslikescript.com/series/The_Civil_War-98769/season-1/episode-1-The_Cause

Comments by James M. Lundberg: https://www.theatlantic.com/national/archive/2011/06/civil-war-sentimentalism/240082/

Speech by Frederick Douglass, “Remembering the Civil War,” delivered on Decoration Day 1878: https://www.americanyawp.com/reader/reconstruction/frederick-douglass-on-remembering-the-civil-war-1877/

Featured

Review of: Sarah’s Long Walk: The Free Blacks of Boston and How Their Struggle for Equality Changed America, by Stephen Kendrick & Paul Kendrick

Several years ago, I published an article in a scholarly journal entitled “Strange Bedfellows: Nativism, Know-Nothings, African-Americans & School Desegregation in Antebellum Massachusetts,” which spotlighted the odd confluence of anti-Irish nativism and the struggle to desegregate Boston schools. The Know-Nothings—a populist, nativist coalition that contained elements that would later be folded into the emerging Republican Party—made a surprising sweep in the 1854 Massachusetts elections, fueled primarily by anti-Irish sentiment, as well as a pent-up popular rage against the elite status quo that had long dominated state politics. Suddenly, the governor, all forty senators, and all but three house representatives were Know-Nothings!

Perhaps more startling was that during their brief tenure, the Know-Nothing legislature enacted a host of progressive reforms, creating laws to protect workingmen, ending imprisonment for debt, strengthening women’s rights in property and marriage, and—most significantly—passing landmark legislation in 1855 that “prohibited the exclusion [from public schools] of children for either racial or religious reasons,” which effectively made Massachusetts the first state in the country to ban segregation in schools! Featured in the debate prior to passage of the desegregation bill is a quote from the record that is to today’s ears perhaps at once comic and cringeworthy, as one proponent of the new law sincerely voiced his regret “that Negroes living on the outskirts . . . were forced to go a long distance to [the segregated] Smith School. . . while . . . the ‘dirtiest Irish,’ were allowed to step from their houses into the nearest school.”

My article focused on Massachusetts politics and the bizarre incongruity of nativists unexpectedly delivering the long sought-after prize of desegregated schools to the African American community. It is also the story of the nearly forgotten black abolitionist and integrationist William Cooper Nell, a mild if charismatic figure who united disparate forces of blacks and whites in a long, stubborn, determined campaign to end Boston school segregation. But there are many other important stories of people and events leading to that moment that, due to space constraints, could not receive adequate treatment in my article.

Arguably the most significant, which my article references but does not dwell upon, centers upon a little black girl named Sarah Roberts. Her father, Benjamin R. Roberts, sued for equal protection rights under the state constitution because his daughter was barred from attending a school near her residence and was compelled to a long walk to the rundown and crowded Smith School instead. He was represented by Robert Morris, one of the first African American attorneys in the United States, and Charles Sumner, who would later serve as United States Senator. In April 1850, in Roberts v. The City of Boston, the state Supreme Court ruled against him, declaring that each locality could decide for itself whether to have or end segregation. This ruling was to serve as an unfortunate precedent for the ignominious “separate but equal” ruling in Plessy v. Ferguson some decades hence, and was also an obstacle Thurgood Marshall had to surmount when he successfully argued to have the Supreme Court strike down school segregation across the nation in 1954’s breakthrough Brown v. Board of Education case—just a little more than a century after the disappointing ruling in the Roberts case.

Father and son Stephen Kendrick and Paul Kendrick teamed up to tell the Roberts story and a good deal more in Sarah’s Long Walk: The Free Blacks of Boston and How Their Struggle for Equality Changed America, an extremely well-written, comprehensive, if occasionally slow-moving chronicle that recovers for the reader the vibrant, long overlooked black community that once peopled Boston in the years before the Civil War. In the process, the authors reveal how it was that while the state of Massachusetts offered the best overall quality of life in the nation for free blacks, it was also the home to the same stark, virulent racism characteristic of much of the north in the antebellum era, a deep-seated prejudice that manifested itself not only in segregated schools but also in a strict separation in other arenas such as transportation and theaters.

Doctrines of abolition were widely despised, north and south, and while abolitionists remained a minority in Massachusetts as well, it was perhaps the only state in the country where antislavery ideology achieved widespread legitimacy. But true history is all nuance, and those who might rail passionately against the inherent evil of holding humans as chattel property did not necessarily also advance notions of racial equality. That was indeed far less common. Moreover, it is too rarely underscored that the majority of northern “Freesoilers” who were later to become the most critical component of the Republican Party vehemently opposed the spread of slavery to the new territories acquired in the Mexican War while concomitantly despising blacks, free or enslaved.

At the same time, there was hardly unanimity in the free black community when it came to integration; some blacks welcomed separation. Still, as Sarah’s Long Walk relates, there were a number of significant African American leaders like Robert Morris and William Cooper Nell who, with their white abolitionist allies, played the long game and pursued compelling, nonviolent mechanisms to achieve both integration and equality, many of which presaged the tactics of Martin Luther King and other Civil Rights figures a full century later. For instance, rather than lose hope after the Roberts court decision, Nell doubled down on his efforts, this time with a new strategy—a taxpayer’s boycott of Boston that saw prominent blacks move out of the city to suburbs that featured integrated schools, depriving Boston of tax revenue.

The Kendricks open the narrative with a discussion of Thurgood Marshall’s efforts to overturn the Roberts precedent in Brown v. Board of Education, and then trace that back to the flesh and blood Boston inhabitants who made Roberts v. The City of Boston possible, revealing the free blacks who have too long been lost to history. Readers not familiar with this material will come across much that will surprise them between the covers of this fine book. The most glaring might be how thoroughly blacks were erased from our history, north and south, in the decades after Reconstruction. Until recently, how many growing up in Massachusetts knew anything at all about the thriving free black community in Boston, or similar ones elsewhere above the Mason-Dixon?

But most astonishing for many will be the fact that the separation of races that would become the new normal in the post-Civil War “Jim Crow” south had its roots fully nurtured in the north decades before Appomattox. Whites and their enslaved chattels shared lives intertwined in the antebellum south, while separation between whites and blacks was fiercely enforced in the north. Many African Americans in Massachusetts had fled bondage, or had family members who were runaways, and knew full well that southern slaveowners commonly traveled by rail accompanied by their enslaved servants, while free blacks in Boston were relegated to a separate car until the state prohibited racial segregation in mass transportation in 1842.

Sarah may not have been spared her long walk to school, but the efforts of integrationists eventually paid off when school segregation was prohibited by Massachusetts law just five years after Sarah’s father lost his case in court. Unfortunately, this battle had to be waged all over again in the 1970s, this time accompanied by episodes of violence, as Boston struggled to achieve educational equality through controversial busing mandates that in the long term generated far more ill will than sustainable results. Despite the elevation of Thurgood Marshall to the Supreme Court bench, and the election of the first African American president, more than one hundred fifty years after the Fourteenth Amendment became the law of the land, the Black Lives Matter (BLM) movement reminds us that there is still much work to be done to achieve anything like real equality in the United States.

For historians and educators, an even greater concern these days lies in the concerted efforts by some on the political right to erase the true story of African American history from public schools. As this review goes to press in Black History Month, February 2022, shameful acts are becoming law across a number of states: gaslighting legislation ostensibly designed to ban Critical Race Theory (CRT) that effectively prohibits educators from teaching their students the true history of slavery, Reconstruction, and Civil Rights. As of this morning, there are some one hundred thirteen other bills being advanced across the nation that could serve as potential gag orders in schools. How can we best combat that? One way is to loudly protest to state and federal officials, to insist that black history is also American history and should not be erased. The other is to freely share black history in your own networks. The best weapons in our collective arsenal for that are quality books like Sarah’s Long Walk.

 

My journal article, “Strange Bedfellows: Nativism, Know-Nothings, African-Americans & School Desegregation in Antebellum Massachusetts,” and related materials can be accessed by clicking here: Know-Nothings

For more about the Know-Nothings, I recommend this book which I reviewed here: Review of: The Know-Nothing Party in Massachusetts: The Rise and Fall of a People’s Movement, by John R. Mulkern

 

Featured

Review of: Into the Heart of the World: A Journey to the Center of the Earth, by David Whitehouse

A familiar trope in the Looney Tunes cartoons of my boyhood had Elmer Fudd or some other zany character digging a hole with such vigor and determination that they emerged on the other side of the world in China, greeted by one or more of the stereotypically racist Asian animated figures of the day.  In the 1964 Road Runner vehicle “War and Pieces,” Wile E. Coyote goes it one better, riding a rocket clear through the earth—presumably passing through its center—until he appears on the other side dangling upside down, only to then encounter a Chinese Road Runner embellished with a braided pigtail and conical hat who bangs a gong with such force that he is driven back through the tunnel to end up right where he started from. In an added flourish, the Chinese Road Runner then peeps his head out of the hole and beep-beep’s faux Chinese characters that turn into letters that spell “The End.”

There were healthy doses of both hilarious comedy and uncomfortable caricature here, but what really stuck in a kid’s mind was the notion that you could somehow burrow through the earth with a shovel or some explosive force, which it turns out is just as impossible in 2022 as it was in 1964. But if you hypothetically wanted to give it a go, you would have to start at China’s actual antipode in this hemisphere, which lies in Chile or Argentina, and then tunnel some 7,918 miles: twice the distance to the center of the earth, which lies at around 3,959 miles (6,371 km) below the surface.

So what about the center of the earth? Could we go there? After all, we did visit the moon, which at an average distance of 238,855 miles is far more remote. But of course what lies between the earth and its single satellite is mostly empty space, not the crust, mantle, outer core, and inner core of a rocky earth that is a blend of the solid and the molten. Okay, it’s a challenge, you grant, but how far have we actually made it in our effort to explore the planet’s interior? We must have made some headway, right? Well, it turns out that the answer is: not very much. A long, concerted effort at drilling that began in 1970 by the then Soviet Union resulted in a measly milestone of a mere 7.6 miles (12.3 km) at the Kola Superdeep Borehole near the Russian border with Norway; efforts were abandoned in 1994 because of higher-than-expected temperatures of 356 °F (180 °C). Will new technologies take us deeper one day at this site or another? Undoubtedly. But it likely will not be in the near future. After all, there’s another 3,951.4 miles to go and conditions will only grow more perilous at greater depths.
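For perspective, it is easy to check just how little headway that record represents. Here is a back-of-the-envelope calculation of my own, using only the figures quoted above:

% Fraction of the journey to the center of the earth reached at the Kola Superdeep Borehole
\[
\frac{7.6\ \text{miles drilled}}{3{,}959\ \text{miles to the center}} \approx 0.0019 \approx 0.2\%
\]

In other words, the deepest hole humanity has ever drilled covers roughly one five-hundredth of the trip Verne imagined.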

But we can dream, can’t we? Indeed. And it was Jules Verne who did so most famously when he imagined just such a trip in his classic 1864 science fiction novel, Journey to the Center of the Earth. Astrophysicist and journalist David Whitehouse cleverly models Into the Heart of the World: A Journey to the Center of the Earth, his grand exploration of earth’s interior, on Verne’s tale. The result is a well-written, highly accessible, and occasionally exciting work of popular science that relies on geology rather than fiction to transport the reader beneath the earth’s crust, through the layers below, and eventually to what the latest research allows us to theoretically conceive as the inner core at the planet’s center.

It is surprising just how few people today possess a basic understanding of the mechanics that power the forces of the earth.  But perhaps even more astonishing is how new—relatively—this science is. When I was a child watching Looney Tunes on our black-and-white television, my school textbooks admitted that although certain hypotheses had been suggested, the causes of sometimes catastrophic events such as earthquakes and volcanoes remained essentially unknown. All that changed effectively overnight—around the time my family got our first color TV—with the widespread acceptance by geologists of the theory of plate tectonics, constructed on the foundation of the much earlier hypothesis of German meteorologist and geophysicist Alfred Wegener, who in 1912 advanced the view of continents in motion known as “continental drift,” which was ridiculed in his time. By 1966, the long-dead Wegener was vindicated, and continental drift was upgraded to the more elegant model of plate tectonics that fully explained not only earthquakes and volcanoes, but mountain-building, seafloor spreading, and the whole host of other processes that power a dynamic earth.

Unlike some disciplines such as astrophysics, the basic concepts that make up earth science are hardly insurmountable for any individual of average intelligence, so for those who have no idea how plate tectonics works and are curious enough to want to learn, Into the Heart of the World is a wonderful starting point. Whitehouse can be credited with articulating complicated processes in an easy-to-follow narrative that consistently holds the reader’s interest and remains fully comprehensible to the non-scientist. I came to this book with more than a passing familiarity with plate tectonics, but I nevertheless added to my knowledge base and enjoyed the way the author united disparate topics into this single theme of a journey to the earth’s inner core.

If I have a complaint, and it is only a quibble tied to my own preferences, it is that Into the Heart of the World often devotes far more paragraphs to a history of “how we know what we know” than to a more detailed explanation of the science itself. The author is not to be faulted for what is integral to the structure of the work—after all, the cover does boast “A Remarkable Voyage of Scientific Discovery,” but it left me longing for more. Also, some readers may stumble over these backstories of people and events, eager instead to get to the fascinating essence of what drives the forces that shape our planet.

A running gag in more than one Bugs Bunny episode has the whacky rabbit inadvertently tunneling to the other side of the world, then admonishing himself that “I knew I shoulda taken that left turn at Albuquerque!” He doesn’t comment on what turn he took at his juncture with the center of the earth, but many kids who sat cross-legged in front of their TVs wondered what that trip might look like. For grownups who still wonder, I recommend Into the Heart of the World as your first stop.

 

[Note: this book has also been published under the alternate title, Journey to the Centre of the Earth: The Remarkable Voyage of Scientific Discovery into the Heart of Our World.]

[A link to the referenced 1964 Road Runner episode is here: War and Pieces]

Featured

Review of: Ancient Bones: Unearthing the Astonishing New Story of How We Became Human, by Madelaine Böhme, Rüdiger Braun, and Florian Breier

In southern Greece in 1944, German forces constructing a wartime bunker reportedly unearthed a single mandible that paleontologist Bruno von Freyberg incorrectly identified as an extinct Old-World monkey. A decades-later reexamination by another paleoanthropologist determined that the fossil instead belonged to an extinct species of great ape, which in 1972 was dubbed Graecopithecus freybergi and came to be more popularly known as "El Graeco." Another tooth was discovered in Bulgaria in 2012. Then, in 2017, an international team led by German paleontologist Madelaine Böhme conducted an analysis that came to the astonishing conclusion that the 7.2-million-year-old El Graeco in fact represents the oldest hominin—our oldest direct human ancestor! At the same time, Böhme challenged the scientific consensus that all humans are "Out-of-Africa" with her competing "North Side Story," which suggests Mediterranean ape ancestry instead.  Both of these notions remain widely disputed in the paleontological community.

In Ancient Bones: Unearthing the Astonishing New Story of How We Became Human, Böhme—with coauthors Rüdiger Braun and Florian Breier—advances this North Side Story with a vengeance, scorning the naysayers and intimating the presence of some wider conspiracy in the paleontological community to suppress findings that dispute the status quo. Böhme brings other ammunition to the table, including the so-called "Trachilos footprints," the 5.7-million-year-old potentially hominin footprints found on Crete, which—if fully substantiated—would make these some two million years older than the footprints of Australopithecus afarensis found in Tanzania. Perhaps these were made by El Graeco?! And then there's Böhme's own discovery of the 11.6-million-year-old Danuvius guggenmosi, an extinct species of great ape she uncovered near the town of Pforzen in southern Germany, which according to the author revolutionizes our understanding of the origins of bipedalism. Throughout, she positions herself as the lonely voice in the wilderness shouting truth to power.

I lack the scientific credentials to quarrel with Böhme's assertions, but I have studied paleoanthropology as a layman long enough both to follow her arguments and to understand why accepted authorities would be reluctant to embrace her somewhat outrageous claims, which are, after all, based on rather thin evidence. But for the uninitiated, some background to this discussion is in order:

While human evolution is in itself not controversial (for scientists, at least; Christian evangelicals are another story), the theoretical process of how we ended up as Homo sapiens sapiens, the only living members of genus Homo, based upon both molecular biology and fossil evidence, has long been open to spirited debate in the field, especially because new fossil finds occur with some frequency, and because the rules of somewhat secretive peer-reviewed scholarship that lead to publication in scientific journals often delay what should otherwise be breaking news.

Paleontologists have long been known to disagree vociferously with one another, sometimes spawning feuds that erupt in the public arena, such as the famous one in the 1970s between the esteemed, pedigreed Richard Leakey and Donald Johanson over Johanson's discovery and identification of the 3.2-million-year-old australopithecine "Lucy," which was eventually accepted by the scientific community over Leakey's objections.  At one time, it was said that all hominin fossils could be placed on a single, large table. Now there are far too many for that: Homo, australopithecines, and many that defy simple categorization. Also, at one time human evolution was envisioned as a direct progression from the primitive to the sophisticated, but today it is accepted that rather than a "tree," our own evolution can best be imagined as a bush, with many relatives—and many of those relatives not on a direct path to the humans who walk the earth today.

Another controversy has been between those who favored an "Out-of-Africa" origin for humanity and those who advanced what used to be called the multiregional hypothesis. Since all living Homo sapiens sapiens are very, very closely related to each other—even more closely related than chimpanzees that live in different parts of Africa today—multiregionalism smacked a bit of the illogical and has largely fallen out of favor. The scholarly consensus that Böhme takes head-on is that humans can clearly trace their ancestry back to Africa. Another point that should be made is that there are loud voices of white supremacist "race science" proponents outside the scientific community who, without any substantiation, vehemently oppose the "Out-of-Africa" origin theory for racist political purposes, as underscored in Angela Saini's brilliant recent book, Superior: The Return of Race Science. This is not to suggest that Böhme is racist or that her motives should be suspect—there is zero evidence that is the case—but the reader must be aware of the greater "noise" that circulates around this topic.

My most pointed criticism of Ancient Bones is that it is highly disorganized, meandering between science, polemic, and unexpected later chapters that read like a college textbook on human evolution.  It is often hard to know what to make of it.  And it is difficult for me to accept that there is a larger conspiracy in the paleoanthropological community to preserve "Out-of-Africa" against better evidence that few beyond Böhme and her allies have turned up. The author also makes a great deal of identifying singular features in both El Graeco and Danuvius that she insists must establish that her hypotheses are the only correct ones, but as those who are familiar with the work of noted paleoanthropologists John Hawks and Lee Berger are well aware, mosaics—primitive and more advanced characteristics occurring in the same hominin—are far more common than once suspected, and thus should give pause to those tempted to draw conclusions that the actual evidence does not unambiguously support.

As noted earlier, I am not a paleontologist or even a scientist, and thus I am far from qualified to peer-review Böhme's arguments or pronounce judgment on her work. But as a layman with some familiarity with the current scholarship, I remain unconvinced. She also left me uncomfortable with what appears to be a lack of respect for rival ideas and for those who fail to find concordance with her conclusions. More significantly, her book is poorly edited and too often lacks focus. Still, for those like myself who want to stay current with the latest twists and turns in the ever-developing story of human evolution, at least some portions of Ancient Bones might be worth a read.


[Note: I read an Advance Reader’s Copy (ARC) of this book obtained through an early reviewer’s group.]

[Note: I reviewed Superior: The Return of Race Science, by Angela Saini, here: Review of: Superior: The Return of Race Science, by Angela Saini]

[Note: I reviewed Almost Human: The Astonishing Tale of Homo naledi and the Discovery that Changed Our Human Story, by Lee Berger and John Hawks here: Review of: Almost Human: The Astonishing Tale of Homo naledi and the Discovery that Changed Our Human Story, by Lee Berger and John Hawks]


Featured

Review of: George Washington: The Political Rise of America’s Founding Father, by David O. Stewart

Is another biography of George Washington really necessary? A Google search reveals some nine hundred already exist, not to mention more than five thousand journal articles that chronicle some portion of his life. But the answer turns out to be a resounding yes, and David O. Stewart makes that case magnificently with his latest work, George Washington: The Political Rise of America’s Founding Father, an extremely well-written, insightful, and surprisingly innovative contribution to the historiography.

Many years ago, I recall reading the classic study, Washington: The Indispensable Man, by James Thomas Flexner, which looks beyond his achievements to put emphasis on his most extraordinary contribution, defined not by what he did but by what he deliberately did not do: seize power and rule as tyrant. This, of course, is no little thing, as seen in the pages of history from Caesar to Napoleon.  When told that Washington would resign his commission and surrender power to a civilian government, King George III—who no doubt would have had him hanged (or worse) had the war gone differently—famously declared that "If he does that, he will be the greatest man in the world." Washington demonstrated that greatness again when he voluntarily—you might say eagerly—stepped down after his tenure as President of the United States to retire to private life. Indispensable he was: it is difficult to imagine the course of the American experiment had another served in his place in either of those pivotal roles.

But there is more to Washington than that, and some of it is less than admirable. Notably, there was Washington's heroic fumble as a young Virginia officer leading colonial forces to warn away the French at what turned into the Battle of Jumonville Glen, which helped to spark the French & Indian War. Brash, headstrong, arrogant, thin-skinned, and ever given to an unshakable certitude that his judgment was the sole correct perspective in every matter, the young Washington distinguished himself for his courage and his integrity while at the same time routinely clashing with authority figures, including former mentors whom he frequently left exasperated by his demands for recognition.

Biographers tend to visit this period of his life and then fast-forward two decades to the moment when the esteemed if austere middle-aged Washington showed up to the Continental Congress resplendent in his military uniform, the near-unanimous choice to lead the Revolutionary Army in the struggle against Britain. But how did he get there? In most studies, it is not clear. But this is where Stewart shines!  The author, whose background is the law rather than academia—he was once a constitutional lawyer who clerked for Supreme Court Justice Lewis Powell, Jr.—has proved himself a brilliant historian in several fine works, including his groundbreaking reassessment of a key episode of the early post-Civil War era, Impeached: The Trial of President Andrew Johnson and the Fight for Lincoln's Legacy. And in Madison's Gift: Five Partnerships that Built America, Stewart's careful research, analytical skills, and intuitive approach successfully resurrected portions of James Madison's elusive personality that had been otherwise mostly lost to history.

This talent is on display here as well, as Stewart adeptly examines and interprets Washington's evolution from Jumonville Glen to Valley Forge. Washington's own personality is something of a conundrum for biographers, as he can seem to be simultaneously both selfless and self-centered.  The young Washington so frequently in turn infuriated and alienated peers and superiors alike that it may strike us as truly remarkable that this is the same individual who could later harness the talents and loyalty of both rival generals during the war and the outsize egos of fellow Founders as the new Republic took shape. Stewart demonstrates that Washington was the author of his own success in this arena, quietly in touch with his strengths and weaknesses while earning respect and cultivating goodwill over the years as he established himself as a key figure in the Commonwealth. Washington himself was not in this regard so much a changed man as a more mature man who taught himself to modify his demeanor and his behavior in the company of others for mutual advantage. This, too, is no small thing.

The subtitle of this book—The Political Rise of America's Founding Father—is thus hardly accidental: the book is the latest contribution to a rapidly expanding genre focused upon politics and power, showcased in such works as Jon Meacham's Thomas Jefferson: The Art of Power and Robert Dallek's Franklin D. Roosevelt: A Political Life. Collectively, these studies serve to underscore that politics is ever at the heart of leadership, and that great leaders are not born fully formed, but rather evolve and emerge.  George Washington is perhaps the most salient example of this phenomenon.

The elephant in the room of any examination of Washington—or, for that matter, of the other Virginia Founders who championed liberty and equality—is slavery. Like Jefferson and Madison and a host of others, Washington on various occasions decried the institution of enslaving human beings—while he himself held hundreds as chattel property. Washington is often credited with freeing the enslaved he held direct title to in his will, but that hardly absolves him of the sin of a lifetime of buying, selling, and maintaining an unpaid labor force for nothing less than his own personal gain, especially since he was aware of the moral blemish in doing so. Today's apologists often caution that it is unfair to judge those who walked the earth in the late eighteenth century by our own contemporary standards, but the reality is that these were Enlightenment-era men who in their own words declared slavery abhorrent while—like Jefferson with his famous "wolf by the ear" cop-out—making excuses to justify participating in and perpetuating a cruel inhumanity that served their own economic self-interests.  As biographer, Stewart adopts the strategy of touching upon this dimension of Washington's life only lightly in the course of the narrative, while devoting the second-to-last chapter to a frank and balanced discussion of the ambivalence that governed the thoughts and actions of the master of Mount Vernon. It is neither whitewash nor condemnation.

Stewart’s study is by no means hagiography, but the author clearly admires his subject. Washington gets a pass for his shortcomings at Jumonville, and he is hardly held to strict account for his role as an enslaver. Still, the result of Stewart’s research, analysis, and approach is the most readable and best single-volume account of Washington’s life to date.  This is a significant contribution to the scholarship that I suspect will long be deemed required reading.


I reviewed other works by David O. Stewart here:

Review of: Impeached: The Trial of President Andrew Johnson and the Fight for Lincoln’s Legacy, by David O. Stewart

Review of: Madison’s Gift: Five Partnerships that Built America, by David O. Stewart

My review of Dallek’s Franklin D. Roosevelt, referenced above, is here: Review of: Franklin D. Roosevelt: A Political Life, by Robert Dallek

Featured

Review of: American Republics: A Continental History of the United States, 1783-1850, by Alan Taylor


Conspicuous in their absence from my 1960s elementary education were African Americans and Native Americans. Enslaved blacks made an appearance in my textbooks, of course, but slavery as an institution was sketched out as little more than a vague and largely benign product of the times. Then there was a Civil War fought over white men's sectional grievances; there were dates, and battles, and generals, winners and losers. There was Lincoln's Emancipation Proclamation, then constitutional amendments that ended slavery and guaranteed equality. There was some bitterness but soon there was reconciliation, and we went on to finish building the transcontinental railroad.  There were the obligatory walk-on cameos by Frederick Douglass and Harriet Tubman, and later George Washington Carver, who had something to do with peanuts. For Native Americans, the record was even worse.  Our texts featured vignettes of Squanto, Pocahontas, Sacajawea, and Sitting Bull. The millions of Amerindians who once populated the country from coast to coast had been effectively erased.

Alan Taylor, Pulitzer Prize winning author and arguably the foremost living historian of early America, has devoted a lifetime to redressing those twin wrongs while restoring the nuanced complexity of our past that was utterly excised from the standard celebration of our national heritage that for so long dominated our historiography. In the process, in the eleven books he has published to date, he has also dramatically shifted the perspective and widened the lens from the familiar approach that more rigidly defines the boundaries of the geography and the established chapters in the history of the United States—a stunning collective achievement that reveals key peoples, critical elements, and greater themes often obscured by the traditional methodology.

I first encountered Taylor some years ago when I read his magnificent American Colonies: The Settling of North America, which restores the long overlooked multicultural and multinational participants who peopled the landscape, while at the same time enlarging the geographic scope beyond the English colonies that later comprised the United States to encompass the rest of the continent that was destined to become Canada and Mexico, as well as highlighting vital links to the West Indies. Later, in American Revolutions, Taylor identifies a series of social, economic and political revolutions of outsize significance over more than five decades that often go unnoticed in the shadows of the War of Independence, which receives all the attention.

Still, as Taylor underscores, it was the outcome of the latter struggle—in which white, former English colonists established a new nation—that was to have the most lasting and dire consequences for all those in their orbit who were not white, former English colonists, most especially blacks and Native Americans.  The defeated British had previously drawn boundaries that served as a brake on westward expansion and left more of that vast territory as a home to the indigenous. That brake was now off. Some decades later, Britain was to abolish slavery throughout its empire, which no longer included its former colonies. Thus the legacy of the American Revolution was the tragic irony that a Republic established to champion liberty and equality for white men would ultimately be constructed upon the backs of blacks doomed to chattel slavery, as well as the banishment or extermination of Native Americans. This theme dominates much of Taylor’s work.

In his latest book, American Republics: A Continental History of the United States, 1783-1850, which roughly spans the period from the Peace of Paris to California statehood, Taylor further explores this grim theme in a brilliant analysis of how the principles of white supremacy—present at the creation—impacted the subsequent course of United States history. Now this is, of course, uncomfortable stuff for many Americans, who might cringe at that very notion amid cries of revisionism that insist contemporary models and morality are being appropriated and unfairly leveraged against the past. But terminology is less important than outcomes: non-whites were not only foreclosed from participating as citizens in the new Republic, but also from enjoying the life, liberty and pursuit of happiness allegedly granted to their white counterparts. At the same time, southern states where slavery thrived wielded outsize political power that frequently plotted the nation’s destiny. As in his other works, Taylor is a master of identifying unintended consequences, and there are more than a few to go around in the insightful, deeply analytical, and well-written narrative that follows.

These days, it is almost de rigueur for historians to decry the failure of the Founders to resolve the contradictions of permitting human chattel slavery to coexist within what was declared to be a Republic based upon freedom and equality. In almost the same breath, however, many in the field still champion the spirit of compromise that has marked the nation's history. But if there is an original sin to underscore, it is less that slavery was allowed to endure than that it was codified within the very text of the Constitution of the United States by means of the infamous compromise that was the "three-fifths rule," which for the purposes of representation permitted each state to count enslaved African Americans as three-fifths of a person, thus inflating the political power of each state based upon its enslaved population. This might have benefited all states equally, but since slavery was to rapidly decline and all but disappear above what would be drawn as the Mason-Dixon Line, all the advantage flowed to the south, where eventually some states saw their enslaved populations outnumber their free white citizenry.

This was to prove dramatic, since the slave south claimed a disproportionate share of national political power when it came to advancing legislation or, for that matter, electing a president! Taylor notes that the disputed election of 1824 that went for decision to the House of Representatives would have been far less disputed without the three-fifths clause, since in that case John Quincy Adams would have led Andrew Jackson in the Electoral College 83 to 77 votes, instead of putting Jackson in the lead 99 to 84. [p253] When Jackson prevailed in the next election, it was the south that cemented his victory.

The scholarly consensus has established the centrality of slavery to the Civil War, but Taylor goes further, arguing that its significance extended long before secession: slavery was ever the central issue in American history, representing wealth, power, and political advantage. The revolutionary generation decried slavery on paper—slave masters Washington, Jefferson and Madison all pronounced it one form of abomination or another—but nevertheless failed to act against it, or even part with their own human property. Jefferson famously declared himself helpless, saying of the peculiar institution that “We have the wolf by the ear, and we can neither hold him, nor safely let him go,” but as slavery grew less profitable for Virginia in the upper south, Jefferson and his counterparts turned to breeding the enslaved for sale to the lower south, where the demand was great. Taylor points out that “In 1803 a male field hand sold for about $600 in South Carolina compared to $400 in Virginia: a $200 difference enticing to Virginia sellers and Carolina slave traders … Between 1790 and 1860, in one of the largest forced migrations in world history, slave traders and migrants herded over a million slaves from Virginia and Maryland to expand southern society …” [p159]  Data and statistics may obscure it, but these were after all living, breathing, sentient human beings who were frequently subjected to great brutalities while enriching those who held them as chattel property.

Jefferson and others of his ilk imagined that slavery would somehow fall out of favor at some distant date, but optimistically kicking the can down the road to future generations proved a fraught strategy: nothing but civil war could ever have ended it. As Taylor notes:

Contrary to the wishful thinking of many Patriots, slavery did not wither away after the American Revolution. Instead, it became more profitable and entrenched as the South expanded westward. From 698,600 in 1790, the number of enslaved people soared to nearly 4 million by 1860, when they comprised a third of the South’s population … In 1860, the monetary value of enslaved people exceeded that of all the nation’s banks, factories, and railroads combined. Masters would never part with so much valuable human property without a fight. [p196]

As bad as it was for enslaved blacks, in the end Native Americans fared far worse. It has been estimated that up to 90% of Amerindians died as a result of exposure to foreign pathogens within a century of the Columbian Exchange. The survivors faced a grim future competing for land and resources with rapacious settlers who were better armed and better organized. It may very well be that conflict between colonists and the indigenous was inevitable, but as Taylor emphasizes, the trajectory of the relationship became especially disastrous for the latter after the British retreat essentially removed all constraints on territorial expansion.

The stated goal of the American government was peaceful coexistence that emphasized native assimilation to “white civilization.” The Cherokees who once inhabited present-day Georgia actually attempted that, transitioning from hunting and gathering to agriculture, living in wooden houses, learning English, creating a written language. Many practiced Christianity. Some of the wealthiest worked plantations with enslaved human property. It was all for naught. With the discovery of gold in the vicinity, the Cherokees were stripped of their lands in the Indian Removal Act of 1830, championed by President Andrew Jackson, and marched at bayonet point over several months some 1200 miles to the far west. Thousands died in what has been dubbed the “Trail of Tears,” certainly one of the most shameful episodes of United States history. Sadly, rather than an exception, the fate of the Cherokees proved to be indicative of what lay in store for the rest of the indigenous as the new nation grew and the hunger for land exploded.

That hunger, of course, also fueled the Mexican War, launched on a pretext in yet another shameful episode that resulted in an enormous land grab that saw a weaker neighbor forced to cede one-third of its former domains. It was the determination of southern states to transplant plantation-based slavery to these new territories—and the fierce resistance to that by “Free-Soilers” in Lincoln’s Republican Party—that lit the fuse of secession and the bloody Civil War that it spawned.

If there are faults to this fine book, one is that there is simply too much material to capably cover in less than four hundred pages, despite the talented pen and brilliant analytical skills of Alan Taylor. The author devoted an entire volume—The Civil War of 1812—to the events surrounding the War of 1812, a conflict also central to a subsequent effort, The Internal Enemy. This kind of emphasis on a particular event or specific theme is typical of Taylor's work. In American Republics, he strays from that technique to attempt the kind of grand narrative survey favored by other chroniclers of the Republic, powering through decades of significance at sometimes dizzying speed, no doubt a delight for some readers yet disappointing to others long accustomed to the author's detailed focus on the more narrowly defined.

Characteristic of his remarkable perspicacity, Taylor identifies what other historians overlook, arguing in American Republics that the War of 1812 was only the most well-known struggle in a consequential if neglected era he calls the “Wars of the 1810s” that also saw the British retreat northward, the Spanish forsake Florida, and the dispossession of Native Americans accelerate. [p148] That could be a volume in itself.  Likewise, American culture and politics in the twelve years that separate Madison and Jackson is worthy of book-length treatment. There is so much more.

Another issue is balance—or a lack thereof. If the history of my childhood was written solely in the triumphs of white men, such accomplishments are wholly absent in American Republics, which reveals the long-suppressed saga of the once invisible victims of white supremacy.  It’s a true story, an important story—but it’s not the only story. Surely there are some achievements of the Republic worthy of recognition here?

As the culture wars heat to volcanic temperatures, such omissions only add tinder to the flames of those dedicated to the whitewash that promotes heritage over history.  Already the right has conjured an imaginary bugaboo in Critical Race Theory (CRT), with legislation in place or pending in a string of states that proscribes the teaching of CRT. These laws have nothing to do with Critical Race Theory, of course, but rather give cover to the dog whistles of those who would intimidate educators so they cannot teach the truth about slavery, about Reconstruction, about Civil Rights. These laws put grade-school teachers at risk of termination for incorporating factual elements of our past into their curriculum, effectively banning from the classroom the content of much of American Republics. This is very serious stuff: Alan Taylor is a distinguished professor at the University of Virginia, a state that recently saw the governor-elect ride to an unlikely victory astride a sort of anti-CRT Trojan Horse. Historians cannot afford any unforced errors in a game that scholars seem to be ceding to dogmatists. If the current trend continues, we may very well witness reprints of my childhood textbooks, with blacks and the indigenous once more consigned to the periphery.

I have read seven of Taylor’s books to date. Like the others, his most recent work represents a critical achievement for historical scholarship, as well as a powerful antidote to the propaganda that formerly tarnished studies of the American Experience. The United States was and remains a nation unique in the family of nations, replete with a fascinating history that is at once complicated, messy, and controversial. American history, at its most basic, is simply a story of how we got from then to now: it can only be properly understood and appreciated in the context of its entirety, warts and all. Anything less is a disservice to the discipline as well as to the audience. To that end, American Republics is required reading.


Note: I have reviewed other works by Alan Taylor here:

Review of: American Revolutions: A Continental History, 1750-1804, by Alan Taylor

Review of: The Internal Enemy: Slavery and the War in Virginia 1772-1832, by Alan Taylor

Review of: The Civil War of 1812: American Citizens, British Subjects, Irish Rebels, & Indian Allies by Alan Taylor

Review of: Thomas Jefferson’s Education, by Alan Taylor


Featured

Review of: Superior: The Return of Race Science, by Angela Saini

In what has to be the most shameful decision rendered in the long and otherwise distinguished career of Justice Oliver Wendell Holmes, in 1927 the Supreme Court ruled in Buck v. Bell to uphold a compulsory sterilization law in Virginia. The case centered on eighteen-year-old Carrie Buck, confined to the Virginia State Colony for Epileptics and Feebleminded, and Holmes wrote the majority opinion in the near-unanimous decision, famously concluding that "three generations of imbeciles are enough."

Similar laws prevailed in some thirty-two states, resulting in the forced sterilization of more than 60,000 Americans. Had Carrie lived in Massachusetts, she would have avoided this fate, but she likely would have been condemned to the Belchertown State School for the Feeble-Minded, which—like similar institutions of this era—had its foundation in the eugenics, racism, and Social Darwinism of the time, which argued that "defectives" of low moral character threatened the very health of the population by breeding others of their kind, raising fears that a kind of contagious degeneracy would permanently damage the otherwise worthy inhabitants of the nation. I have written elsewhere of the horror-show of inhumane conditions and patient abuse at the Belchertown State School, which did not finally close its doors until 1992.

Sterilization was only one chilling byproduct of "eugenics," a term coined by Francis Galton, a cousin of Charles Darwin, whose misunderstanding of the principles of Darwinian evolution led to his championing of scientific racism. Eugenics was also the driving force behind the 1924 immigration law that dramatically reduced the number of Jews, Italians, and East Europeans admitted to the United States. White supremacy not only consigned blacks and other people of color to the ranks of the "less developed" races, but specifically exalted those of northern and central European origin as the best and the brightest. This was all pseudoscience, of course, but it was quite widely accepted and "respectable" in its day.

Then, along came Hitler and the Holocaust, and more than six million Jews and other "undesirables" were systematically murdered in the name of racial purity. Eugenics was respectable no more. Most of us born in the decades that followed the almost unfathomable horror of that Nazi-sponsored genocide may have assumed that race science was finally discredited and had disappeared forever, relegated to a blood-spattered dustbin of history.  But, as Angela Saini reveals in her well-written, deeply researched, and sometimes startling book, Superior: The Return of Race Science, scientific racism not only never really went extinct, but has returned in our day with a kind of vengeance, fueling calls to action on the right for anti-immigration legislation.

Saini, a science journalist, broadcaster, and author with a pair of master's degrees, may be uniquely qualified to tell this story. Born in London of Indian parents, in a world seemingly obsessed with racial classification she relates how her background and brown complexion defy categorization; some may consider her Indian, or Asian—or even black. But of course in reality she could not be more British, even if for many her skin color sets her apart. The UK's legacy of empire and Kipling's "white man's burden" still loom large.

But Superior is not a screed, and it is not about Saini, but rather about how mistaken notions of race and the pseudoscience of scientific racism have not only persisted but are rapidly gaining ground with a new audience in a new era. To achieve this, the author conducted comprehensive research into the origins of eugenics, but even more significantly identified how the ideology of race science that fueled National Socialism and begat Auschwitz and Birkenau quietly if no less adamantly endured post-Nuremberg, cloaked in the less fiery rhetoric of pseudoscientific journals grasping at the periphery of legitimacy. Moreover, a modern revolution in paleogenetics and DNA research that should firmly refute such dangerous musings has instead been appropriated by a new generation of acolytes of scientific racism, serving both to undergird and to lend a false sense of authenticity to dangerous political tendencies on the right that long simmered and have now burst forth in the public arena.

Whatever some may believe, science has long established that race, for all intents and purposes, is a myth, a social construct that advances no important information about any given population. Regardless of superficial characteristics, all living humans—designated Homo sapiens sapiens—are biologically the same and by every other critical metric are essentially members of the same closely related population. In fact, various groups of chimpanzees in Central Africa demonstrate greater genetic diversity than all humans across the globe today. Modern humans likely evolved from a common ancestor in Africa, and thus all of humanity is out of Africa. It is just as likely that all humans once had dark skin, and that lighter skin, what we would term "white" or Caucasian, developed later as populations moved north and melanin—a pigment located in the outer skin layer called the epidermis—was reduced as an adaptation to cope with relatively weak solar radiation in far northern latitudes. The latest scholarship reveals that Europeans only developed their fairer complexion as recently as 8500 years ago!

The deepest and most glaring flaw in the race science that was foundational to Nazism is that it is actually a lack of diversity that often results in a less healthy population.  This is not only apparent in the hemophilia that plagued the closely related royal houses of the European monarchies, but on a more macro scale with genetic conditions more common to certain ethnic groups, such as sickle cell disease for those of African heritage, and Tay-Sachs disease among Ashkenazi Jews.

Counterintuitively, modern proponents of race science cherry-pick DNA data in an attempt to claim a superiority for whites that concomitantly assigns a lesser status to people of color, and these concepts are then repackaged to champion policies that limit immigration from certain parts of the world. Once anathema for all but those on the very fringes of the political spectrum, this dangerous rebirth of genetic pseudoscience is now given voice on right-wing media. Worse perhaps, the tendency of mainstream media to promote fairness in what has come to be dubbed "bothsiderism" sometimes offers an undeserved platform to those spinning racist dogma in the guise of scientific studies. Of course, social media has now transcended television as a messaging vehicle, and it is far better suited to spreading misinformation, especially in an era given to a mistrust of expertise, granting the unsupported a seat at the same table as credible, fact-based reality while urging the audience to do their own research and come to their own conclusions.

The United States was collectively shaken in 2017 when white supremacists wielding tiki torches marched at Charlottesville chanting "Jews will not replace us," and shaken once more when then-president Donald Trump subsequently asserted that there "were very fine people, on both sides." But there was far less outrage the following year when Trump both sounded a dog whistle and startled lawmakers as he wondered aloud why we should allow in more immigrants from Haiti and "shithole countries" in Africa instead of from places like Norway. (Unanswered, of course, is why a person would want to abandon the arguably higher quality of life in Norway to come to the U.S. …)  But the volume on such dog whistles has been turned up alarmingly as of late by popular Fox News host Tucker Carlson, who, in between fear-mongering messaging that casts the Black Lives Matter (BLM) movement and Critical Race Theory (CRT) as Marxist conspiracies that threaten the American way of life, openly promotes the paranoid alt-right terror of the "Great Replacement" theory, a staple of the white supremacist canon, declaring the Biden administration actively engaged in trying "to change the racial mix of the country … to reduce the political power of people whose ancestors lived here, and dramatically increase the proportion of Americans newly-arrived from the third world." Translation: people of color are trying to supplant white people. Carlson doesn't cite race science, but he did recently allow comments from his guest, the racist extremist social scientist Charles Murray, to go unchallenged: that "the cognitive demands" of some occupations mean "a whole lot of more white people qualify than Black people." Superior was published in 2019 but is chillingly prescient about the dangerous trajectory of both racism and race science on the right.

There is a lot of material between the covers of this book, but because Saini writes so well and speaks to the more arcane matters in language comprehensible to a wide audience, it is not a difficult read. Throughout, the research is impeccable and the analysis spot-on. Still, there are moments when Saini strays a bit, at one point seeming to speculate whether we should hold back on paleogenetic research lest the data be further perverted by proponents of scientific racism. That is, of course, the wrong approach: the best weapon against pseudoscience remains science itself.  Still, the warning bells she sounds here must be heeded. The twin threats of racism and the rebirth of race science in the mainstream are indeed clear and present dangers that must be confronted and combated at every turn.  The author's message is clear and perhaps more relevant now than at any time since the 1930s, another era when hate and racism served as by-products that informed an angry brand of populism that claimed legitimacy through race science.  We all know how that ended.

I have written of the Belchertown State School here:

https://stanprager.com/projects/belchertown-state-school-historic-preservation-proposal/

I reviewed Saini’s latest book here: Review of: The Patriarchs: The Origins of Inequality, by Angela Saini



Featured

Review of: Franklin D. Roosevelt: A Political Life, by Robert Dallek

When identifying the "greatest presidents," historians consistently rank Washington and Lincoln in the top two slots; the third spot almost always goes to Franklin Delano Roosevelt, who served as chief executive longer than any president before or since and shepherded the nation through the twin existential crises of economic depression and world war. FDR left an indelible legacy upon America that echoes loudly from his own day forward to our present and future.  Lionized by the left today—especially by its progressive wing—far more than he was in his own time, he remains vilified by the right, then and now. Today's right, which basks in the extreme and often eschews common sense, conflating Social Security with socialism, frequently casts him as villain. Yet his memory, be it applauded or heckled, is nevertheless of an iconic figure who forever changed the course of American history, for good or ill.

FDR has been widely chronicled, by such luminaries as James MacGregor Burns, William Leuchtenburg, Doris Kearns Goodwin, Jay Winik, Geoffrey C. Ward, and a host of others, including presidential biographer Robert Dallek, winner of the Bancroft Prize for Franklin D. Roosevelt and American Foreign Policy, 1932–1945. Dallek now revisits his subject with Franklin D. Roosevelt: A Political Life, the latest contribution to a rapidly expanding genre focused upon politics and power, showcased in such works as Jon Meacham’s Thomas Jefferson: The Art of Power, and most recently, in George Washington: The Political Rise of America’s Founding Father, by David O. Stewart.

A rough sketch of FDR's life is well known. Born to wealth and sheltered by privilege, at school he had difficulty forming friendships with peers. He practiced law for a time, but his passion turned to politics, which seemed ideally suited to the tall, handsome, and gregarious Franklin. To this end, he modeled himself on his famous cousin, President Theodore Roosevelt. He married T.R.'s favorite niece, Eleanor, and like Theodore eventually became Assistant Secretary of the Navy. Though unsuccessful as a vice-presidential candidate in the 1920 election, he still seemed assured of a bright political future until he was struck down by polio. His legs were paralyzed, but not his ambition. He never walked again, but equipped with heavy leg braces and impressive upper-body strength, he perfected a swinging gait that propelled him forward while leaning into an aide, which served, at least for brief periods, as a reasonable facsimile of walking. He made a remarkable political comeback as governor of New York in 1928, and won national attention for his public relief efforts, which proved essential in his even more remarkable bid to win the White House four years later. Reimagining government to cope with the consequences of economic devastation never before seen in the United States, then reimagining it again to construct a vast war machine to counter Hitler and Tojo, he bucked tradition to win reelection three times, then stunned the nation with his death by cerebral hemorrhage only a few months into the fourth term of one of the most consequential presidencies in American history.

That rough sketch translates into mountains of material for any biographer, so narrowing the lens to FDR's "political life" proves to be a sound strategy, one that underscores the route to his many achievements as well as the sometimes-shameful ways he juggled competing demands and realities. Among historians, even his most ardent admirers tend to question his judgment in the run-up to the disaster at Pearl Harbor, as well as his moral compass in exiling Japanese Americans to confinement camps, but as Dallek reveals again and again in this finely wrought study, these may simply be the most familiar instances of his shortcomings. If FDR is often recalled as smart and heroic—as he indeed deserves to be—there are yet plenty of salient examples where he proves himself to be neither. Eleanor Roosevelt once famously quipped that John F. Kennedy should show a little less profile and a little more courage, but there were certainly times this advice must have been just as suitable for her husband.  What is clear is that while he was genuinely a compassionate man capable of great empathy, FDR was at the same time at his very core driven by an almost limitless ambition that, reinforced by a conviction that he was always in the right, spawned an ever-evolving strategy to prevail that sometimes blurred the boundaries of the greater good he sought to impose. Shrewd, disciplined, and gifted with finely tuned political instincts, he knew how to balance demands, ideals, and realities to shape outcomes favorable to his goals. He was a man who knew how to wield power to deliver his vision of America, and the truth is, he could be quite ruthless in that pursuit. To his credit, much like Lincoln and Washington before him, his lasting achievements have tended to paper over flaws that might otherwise cling with greater prominence to his legacy.

I read portions of this volume during the 2020 election cycle and its aftermath, especially relevant given that the new President, Joe Biden—born just days after the Battle of Guadalcanal during FDR’s third term—had an oversize portrait of Roosevelt prominently hung in the Oval Office across from the Resolute Desk. But even more significantly, Biden the candidate was pilloried by progressives in the run-up to November as far too centrist, as a man who had abandoned the vision of Franklin Roosevelt. But if the left correctly recalls FDR as the most liberal president in American history, it also badly misremembers Roosevelt the man, who in his day very deftly navigated the politics of the center lane.

Dallek brilliantly restores for us the authentic FDR of his own era, unclouded by the mists of time that have begotten both a greater belligerence from the right and a distorted worship from the left.  This context is critical: when FDR first won election in 1932, the nation was reeling from its greatest crisis since the Civil War, the economy in a tailspin and his predecessor, Herbert Hoover, unwilling to use the power of the federal government to intervene while nearly a quarter of the nation's workforce was unemployed, at a time when a social safety net was nearly nonexistent.  People literally starved to death in the United States of America! This provoked radical tugs to the extreme left and extreme right. There was loud speculation that the Republic would not survive, with cries by some for Soviet-style communism and by others for a strongman akin to those spearheading an emerging fascism in Europe. It was into this arena that FDR was thrust. Beyond fringe radical calls for revolution or reaction, and despite his party's congressional majority, perhaps Roosevelt's greatest challenge after stabilizing the state was, like Lincoln before him, contending with the forces to the left and right within his own party. This, as Dallek details in a well-written, fast-moving narrative, was to be characteristic of much of his long tenure.

In spite of an alphabet soup of New Deal programs that sought to rescue both the sagging economy and the struggling citizen, for the liberal wing of the Democratic Party FDR never went far enough. For conservative Democrats, on the other hand, the power of the state was growing too large and there was far too much interference with market forces. But, as Dallek stresses repeatedly, Roosevelt struggled the most with forces on the left, especially populist demagogues like Huey Long and the antisemitic radio host Father Coughlin. And with the outbreak of World War II, the left was unforgiving when FDR seemed to abandon his commitment to the New Deal to focus on combating Germany and Japan. Today's democratic socialists may want to claim him as their own, but FDR was no socialist, seeking to reform capitalism rather than replace it, earning Coughlin's eventual enmity for being too friendly with bankers. At the same time, Republicans obstructed the president at every turn, calling him a would-be dictator. And most wealthy Americans branded him a traitor to his class. There was also an increasingly hostile Supreme Court, which was to ride roughshod over some of FDR's most cherished programs, including the National Industrial Recovery Act (NIRA), just one of several that were struck down as unconstitutional. We tend to recall the successes such as the Social Security Act that indelibly define FDR's legacy, yet he endured many losses as well. But while Roosevelt did not win every battle, as Dallek details, only a leader with FDR's political acumen could have succeeded so often while tackling so much amid a rising chorus of opposition on all sides during such a crisis-driven presidency. If the left in America tends to fail so frequently, it could be because it often fails to grasp the politics of the possible. In this realm, there has perhaps been no greater genius in the White House than Franklin Delano Roosevelt.

Fault can be found in Dallek's book. For one thing, in the body of the narrative he too often namedrops references to other notable Roosevelt chroniclers such as Doris Kearns Goodwin and William Leuchtenburg, which feels awkward given that the author is not some unknown seeking to establish credibility, but Robert Dallek himself, distinguished presidential biographer! And, less a flaw than a weakness, despite his skill with a pen, in these chapters the reader carefully observes FDR but never really gets to know him intimately. I have encountered this in other Dallek works. If you were, for instance, to juxtapose the Lyndon Johnson biographies of Robert Caro with those by Dallek, Caro's LBJ colorfully leaps off the page as the flesh-and-blood menacing figure who grasps you by the lapels and bends you to his will, while Dallek's LBJ remains off in the distance. Caro has that gift; Dallek does not.

Still, this is a fine book that marks a significant contribution to the literature. FDR was indeed a giant; there has never been anyone like him in the White House, nor are we likely to ever see a rival. Dallek succeeds in placing Roosevelt firmly in the context of his time, warts and all, so that we can better appreciate who he was and how he should be remembered.

Featured

Review of: Flight to Freedom, by Ellen Oppenheimer

When I first met Ellen Oppenheimer she was in her eighties, a spry woman with a contagious smile and a mischievous teenager's twinkle in her eyes, standing beside her now-late husband Marty on a housecall visit that I made on behalf of the computer services company that I own and operate.  But many, many years before that, when she was a very young child, she and her family fled the Nazis, literally one step ahead of a brutal killing machine that claimed too many of her relations, marked for death only because of certain strands of DNA that overlap with centuries of irrational hatred directed at Europeans of Jewish descent.

On subsequent visits, we got to know each other better, and she shared with me bits and pieces of the story of how her family cheated death and made it to America. In turn, I told her of my love of history—and the fact that while I was not raised as a Jew, we had in common some of those same Ashkenazi strands of DNA through my great-grandfather, who fled Russia on the heels of a pogrom. I also mentioned my book blog, and she asked that I bookmark the page on the browser of the custom computer they had purchased from me.

It was on a later housecall, as Marty’s health declined, that I detected shadows intruding on Ellen’s characteristic optimism, and the deep concern for him looming behind the stoic face she wore. But all at once her eyes brightened when she announced with visible pride that she had published a book called Flight to Freedom about her childhood escape from the Nazis. She urged me to buy it and to read it, and she genuinely wanted my opinion of her work.

I did not do so.

But she persisted. On later visits, the urging evolved into something of an admonishment. When I would arrive, she always greeted me with a big hug, as if she were more a grandmother than a client, but she was clearly disappointed that I—a book reviewer no less—had not yet read her book. For my part, my resistance was deliberate. I had too many memories of good friends who pestered me to see their bands playing at a local pub, only to discover that they were terrible. I liked Ellen: what if her book was awful?  I did not want to take the chance.

Then, during the pandemic, I saw Marty’s obituary. Covid had taken him. Ellen and Marty had moved out of state, but I still had Ellen’s email, so I reached out with condolences. We both had much to say to each other, but in the end, she asked once more: “Have you read my book yet?” So I broke down and ordered it on Amazon, then took it with me on a week-long birthday getaway to an Airbnb.

Flight to Freedom is a thin volume with a simple all-white cover stamped with only title and author. I brought it with me to a comfortable chair in a great room lined with windows that gave breathtaking views to waves lapping the shore in East Lyme, CT. I popped the cap on a cold IPA and cracked the cover of Ellen’s book. Once I began, all my earlier reluctance slipped away: I simply could not stop reading it.

In an extremely well-written account—especially for someone with virtually no literary background—the author transports the reader back to a time when an educated, affluent, middle-class German family was overnight set upon a road of potential extermination in the wake of the Nazi rise to power.  Few, of course, believed in 1933, when three-year-old Ellen's father Adolf was seized on a pretext and jailed, that a barbarity of such enormity could ever come to pass. But Grete, Ellen's mother—the true hero of Flight to Freedom—was far more prescient. In a compelling narrative with a pace that never slows down, we follow the brilliant and intrepid Grete as she more than once engineers Adolf's release from captivity and serves as the indefatigable engine of her family's escape from the Nazis, first to Paris, and then later, as war erupted, to Marseille and Oran and finally Casablanca—the iconic route of refugees etched on a map that is splashed across the screen in the classic film featuring Bogart and Bergman.

The last leg then was a Portuguese boat that finally delivered them to safety on Staten Island in 1942. In the passages of Flight to Freedom that describe that voyage, the author cannot disguise her disgust at the contempt displayed shipboard for the less fortunate by those who had purchased more expensive berths, when all were Jews who would of course have found a kind of hideous equality in Germany's death camps.  This was, tragically, the fate of much of Ellen's extended family, those who did not heed Grete's warnings of what might befall them, who simply could not believe that such horrors could lurk in their future. Throughout the tale, there is a kind of nuance and complexity one might expect to find in a book by a trained academic or a gifted novelist that instead is delightfully on display by a novice author. Her voice and her pen are both strong from start to finish in this powerful and stirring work.

As a reviewer, can I find some flaws? Of course I can. In the narrative, Ellen treats her childhood character simply as a bystander; the story is instead told primarily through Grete's eyes.  As such, the omniscient point of view often serves as the vehicle for the chronicle, with observations and emotions the author could not really know for certain. And sometimes the point of view shifts awkwardly. But these are quibbles. This is a fine book on so many levels, and the author deserves much praise for telling this story and telling it so well!

A few days after I read Flight to Freedom, I dug into my client database to come up with Ellen's phone number and rang her up. She was, as I anticipated, thrilled that I had finally read the book, and naturally welcomed my positive feedback. After we chatted for a while, I confessed that my only complaint, if it could be called a complaint, was that the child character of Ellen stood mute much of the time in the narrative, and I wondered why she did not relate more of her feelings as a key actor in the drama. With a firm voice she told me (and I am paraphrasing here): "Because it is my mother's story. It was she who saved our lives. She was the hero for our family."

I think Grete would be proud of her little girl for telling the story this way, so many decades later. And I’m proud to know Ellen, who shared it so beautifully. Buy this book. Read it. It is a story you need to experience, as well.


Featured

Review of: Behind Putin’s Curtain: Friendships and Misadventures Inside Russia, by Stephan Orth

I must admit that I knew nothing of the apparently widespread practice of “couchsurfing” before I read Stephan Orth’s quirky, sometimes comic, and utterly entertaining travelogue, Behind Putin’s Curtain: Friendships and Misadventures Inside Russia. For the uninitiated, couchsurfing is a global community said to comprise more than 14 million members in over 200,000 cities in virtually every country on the map. The purpose is to provide free if bare-bones lodging for travelers in exchange for forming new friendships and spawning new adventures. The term couchsurfing is an apt one, since frequently the visitor in fact beds down on a couch, although accommodations range from actual beds with sheets and pillowcases to blankets strewn on a kitchen floor—or, as Orth discovers to his amusement, a cot in a bathroom, just across from the toilet! Obviously, if your idea of a good time is a $2000/week Airbnb with a memory foam mattress and a breathtaking view, this is not for you, but if you are scraping together your loose change and want to see the world from the bottom up, couchsurfing offers an unusual alternative that will instantly plug you into the local culture by pairing you up with an authentic member of the community. Of course, authentic does not necessarily translate into typical. More on that later.

Orth, an acclaimed journalist from Germany, is no novice to couchsurfing, but rather a practiced aficionado, who has not only long relied upon it as a travel mechanism but has upped the ante by doing so in distant and out-of-the-ordinary spots like Iran, Saudi Arabia, and China, the subjects of his several best-selling books. This time he gives it a go in Russia: from Grozny in the North Caucasus, on to Volgograd and Saint Petersburg, then to Novosibirsk and the Altai Republic in Siberia, and finally Yakutsk and Vladivostok in the Far East. (Full disclosure: I never knew Yakutsk existed other than as a strategic corner of the board in the game of Risk.) All the while Orth proves a keen, non-judgmental observer of peoples and customs who navigates the mundane, the hazardous, and the zany with an enthusiasm instantly contagious to the reader. He’s a fine writer, with a style underscored by impeccable timing, comedic and otherwise, and passages often punctuated with wit and sometimes wicked irony. You can imagine him penning the narrative impatiently, eager to work through one paragraph to the next so he can detail another encounter, express another anecdote, or simply mock his circumstances once more, all while wearing a twinkle in his eye and a wry twist to his lips.

Couchsurfing may be routine for the author, but he wisely assumes this is not the case for his audience, so he introduces this fascinating milieu by detailing the process of booking a room. The very first one he describes turns out to be a hilarious online race down various rabbit holes, spread over a sequence of seventy-nine web pages, in which his utterly eccentric eventual host peppers him with bizarre, even existential observations and challenges potential guests to fill in various blanks, while warning them “that he follows the principle of ‘rational egoism’” and “doesn’t have ten dwarves cleaning up after guests.” [p7] Orth, unintimidated, responds with a wiseass retort and wins the invitation.

Perhaps the most delightful portions of this book are Orth’s profiles of his various hosts, who tend to run the full spectrum from the odd to the peculiar.  I say this absent any negative connotation that might otherwise be implied. After all, Einstein and Lincoln were both peculiar fellows. I only mean that the reader, eager to get a taste of local culture, should not mistake Orth’s bunkmates for typical representatives of their respective communities. This makes sense, of course, since regardless of nationality the average person is unlikely to welcome complete strangers into their homes as overnight guests for free. That said, most of his hosts come off as fascinating if unconventional folks you might love to hang out with, at least for a time. And they put as much trust in the author as he puts in them. One couple even briefly leaves Orth to babysit their toddler. Another host turns over the keys of his private dacha and leaves him unattended with his dog.

Of course, the self-deprecating Orth, who seems equally gifted as patient listener and engaging raconteur, could very well be the ideal guest in these circumstances. At the same time, he could also very well be a magnet for the outrageous and the bizarre, as witnessed by the madcap week-long car trip through Siberia he ends up taking with a wild and crazy chick named Nadya: it begins when they meet and bond over lamb soup and a spirited debate as to the best Queen album, survives a rental car catastrophe on a remote roadway, and winds up with the two of them horseback riding on the steppe. Throughout, with only a single exception, the two disagree about … well … absolutely everything, but still manage to have a good time. If you don’t literally laugh out loud while reading through this long episode, you should be banned for life from using the LOL emoji.

You would think that travel via couchsurfing could very well be dangerous—perhaps less so for Orth, who is well over six feet tall and a veteran couchsurfer, than for young, attractive women bedding down in unknown environs. But it turns out that such incidents, while not unknown, are very, very rare. The couchsurfing community is self-policing: guests and hosts rely on ratings and reviews not unlike those on Airbnb, which tends to minimize if not entirely eliminate creeps and psychos. Still, while 14 million people cannot be wrong, it’s not for everyone. Which leads me to note that the only fault I can find with this outstanding work is its title, Behind Putin’s Curtain, since it has little to do with Putin or the lives led by ordinary Russians: certainly the peeps that Orth runs with are anything but ordinary or typical! I have seen this book published elsewhere simply as Couchsurfing in Russia, which I think suits it far better. Other than that quibble, this is one of the best travel books that I have ever read, and I highly recommend it. And while I might be a little too far along in years to start experimenting with couchsurfing, I admire Orth’s spirit and I’m eager to read more of his adventures going forward.


[Note: the edition of this book that I read was an ARC (Advance Reader’s Copy), as part of an early reviewer’s program.]

Featured

Review of: Tim & Tigon, by Tim Cope

About five years ago, I read what I still consider to be the finest travel and adventure book I have ever come across, On the Trail of Genghis Khan: An Epic Journey Through the Land of the Nomads, by Tim Cope, a remarkable tale of an intrepid young Australian who in 2004 set out on a three-year mostly solo trek on horseback across the Eurasian steppe from Mongolia to Hungary—some 10,000 kilometers (about 6,200 miles)—roughly retracing routes followed by Genghis Khan and his steppe warriors. An extraordinary individual, Cope refused to carry a firearm, despite warnings of the potential predators, animal and human, that might menace an untested foreigner alone on the vast and often perilous steppe corridor, instead relying on his instincts, personality, and determination to succeed, regardless of the odds. Oh, and those odds seemed further stacked against him because, despite his outsize ambition, he was quite an inexperienced horseman—in fact, his only previous attempt on horseback, as a child, had left him with a broken arm! Nevertheless, his only companions for the bulk of the journey ahead would be three horses—and a dog named Tigon, foisted upon him against his will, that would become his best friend.

My 2016 review of On the Trail of Genghis Khan—which Cope featured on his website for a time—sparked an email correspondence between us, and shortly after publication he sent me an inscribed copy of his latest work, Tim & Tigon, stamped with Tigon’s footprints. I’m always a little nervous in these circumstances: what if the new book falls short? As it turned out, such concerns were misplaced; I enjoyed it so much I bought another copy to give as a gift!

In Kazakhstan, early in his journey, a herder named Aset connived to shift custody of a scrawny six-month-old puppy to Cope, insisting it would serve both as badly needed company during long periods of isolation as well as an ally to warn against wolves. The dog, a short-haired breed of hound known as a tazi, was named Tigon, which translates into something like “fast wind.” Tim was less than receptive, but Aset was persuasive: “In our country dogs choose their owners. Tigon is yours.” [p89] That initial grudging acceptance was to develop into a critical bond that was strengthened again and again during the many challenges that lay ahead. In fact, Tim’s connection with Tigon came to represent the author’s single most significant relationship in the course of this epic trek. Hence the title of this book.

Tim & Tigon defies simple categorization. On one level, it is a compact re-telling of On the Trail of Genghis Khan, but it’s not simply an abridged version of the earlier book. Styled as a Young Adult (YA) work, it has appeal to a much broader audience. And while it might be tempting to brand it as some kind of heartwarming boy and his dog tale, it is marked by a much greater complexity. Finally, as with the first book, it is bound to frustrate any librarian looking to shelve it properly: Is it memoir? Is it travel? Is it adventure? Is it survival? Is it a book about animals? It turns out to be about all of these and more.

As the title suggests, the emphasis this time falls upon the unique connection that develops between a once reluctant Tim and the dog that becomes nothing less than his full partner in the struggle to survive over thousands of miles of terrain marked by an often-hostile environment of extremes of heat and cold, conditions both difficult and dangerous, and numerous obstacles. But despite the top billing, neither Tim nor Tigon is the main character here. Instead, as the narrative comes to reveal again and again, the true stars of this magnificent odyssey are the land and its peoples, a sometimes-forbidding landscape that hosts remarkably resilient, enterprising, and surprisingly optimistic folks—clans, families, and individuals ever undaunted by highly challenging lifeways that have their roots in centuries-old customs.

Stalin effectively strangled their traditional nomadic ways in the former Soviet Union by enforcing borders that were unknown to their ancestors, but he never crushed their collective spirit. And long after the U.S.S.R. went out of business, these nomads still thrive, their orbits perhaps more circumscribed, their horses and camels supplemented—if not supplanted—by jeeps and motorbikes. They still make their homes in portable tents known as yurts, although these days many sport TV sets served by satellite and powered by generators. The overwhelming majority welcome the author into their humble camps, often with unexpected enthusiasm and outsize hospitality, generously offering him food and shelter and tending to his animals, even as many are themselves scraping by in conditions that can best be described as hardscrabble. The shared empathy between Cope and his hosts is marvelously palpable throughout the narrative, and it is this authenticity that distinguishes his work. It is clear that Tim is a great listener, and despite how alien he must have appeared upon arrival in these remote camps, he quickly establishes rapport with men, women, children, clan elders—the old and the young—and remarkably repeats this feat in Mongolia, in Kazakhstan, in Russia, and beyond. This turns out to be his finest achievement: his talents with a pen are evident, to be sure, but the story he relates would hardly be as impressive if not for that element.

When Tim’s amazing journey across the steppe ended in Hungary in 2007, joy mingled with a certain melancholy at the realization that he would have to leave Tigon behind when he returned home. But the obstacles of an out-of-reach price tag and a mandatory quarantine were eventually overcome, and a little more than a year later, Tigon joined Tim in Australia. Tigon went on to sire many puppies and lived to a ripe old age before, tragically, the dog that once braved perils large and small on the harsh landscapes of the Eurasian steppe fell before the wheels of a speeding car on the Australian macadam. Tim was devastated by his loss, so this book is also, of course, a tribute to Tigon. My signed copy is inscribed with the Kazakh saying that served as a kind of ongoing guidepost to their trek together: “Trust in fate … but always tie up the camel.” That made me smile, but that smile was tinged with sadness as I gazed upon Tigon’s footprint stamped just below it. Tigon is gone, but he left an indelible mark not only on Tim, who perhaps still grieves for him, but also upon every reader, young and old, who is touched by his story.

[I reviewed Tim Cope’s earlier book here: Review of: On the Trail of Genghis Khan: An Epic Journey Through the Land of the Nomads, by Tim Cope]


Featured

Review of: The Caucasus: An Introduction, by Thomas De Waal

Some would argue that the precise moment that marked the beginning of the eventual dissolution of the Soviet Union was February 20, 1988, when the regional soviet governing the Nagorno-Karabakh Oblast—an autonomous region of mostly ethnic Armenians within the Soviet Republic of Azerbaijan—voted to redraw the maps and attach Nagorno-Karabakh to the Soviet Republic of Armenia. Thus began a long, bloody, and as yet unresolved conflict in the Caucasus that has ravaged once proud cities and claimed many thousands of lives of combatants and civilians alike.  The U.S.S.R. went out of business on December 25, 1991, about midway through what has been dubbed the First Nagorno-Karabakh War, which ended on May 12, 1994 with an Armenian victory that established de facto—if internationally unrecognized—independence for the Republic of Artsakh (also known as the Nagorno-Karabakh Republic), but left much unsettled. The smoldering grievances that remained would come to spark future hostilities.

That day came last fall, when the long uneasy stalemate ended suddenly with an Azerbaijani offensive in the short-lived 2020 Nagorno-Karabakh War that had ruinous consequences for the Armenian side. Few Americans have ever heard of Nagorno-Karabakh, but I was far better informed because when the war broke out I happened to be reading The Caucasus: An Introduction, by Thomas De Waal, a well-written, insightful, and—as it turns out—powerfully relevant book that in its careful analysis of this particular region raises troubling questions about human behavior in similar socio-political environments elsewhere.

What is the Caucasus? A region best described as a corridor between the Black Sea on one side and the Caspian Sea on the other, bounded on the south by Turkey and Iran, and on the north by Russia and the Greater Caucasus mountain range that has long been seen as the natural border between Eastern Europe and Western Asia. Above those mountains in southern Russia is what is commonly referred to as the North Caucasus, which includes Dagestan and Chechnya. Beneath them lies Transcaucasia, comprised of the three tiny nations of Armenia, Azerbaijan, and Georgia, whose modern history began with the collapse of the Soviet Union and which are the focus of De Waal’s fascinating study. The history of the Caucasus is the story of peoples dominated by the great powers beyond their borders, and despite independence this remains true to this day: Russia invaded Georgia in 2008 to support separatist enclaves in Abkhazia and South Ossetia, in the first European war of the twenty-first century; Turkey provided military support to Azerbaijan in the 2020 Nagorno-Karabakh War.

At this point, some readers of this review will pause, intimidated by exotic place names in an unfamiliar geography. Fortunately, De Waal makes that part easy with a series of outstanding maps that put the past and the present into appropriate context. At the same time, the author eases our journey through an often-uncertain terrain by applying a talented pen to a dense but highly readable narrative that assumes no prior knowledge of the Caucasus. At first glance, this work has the look and feel of a textbook of sorts, but because De Waal has such a fine-tuned sense of the lands and the peoples he chronicles, there are times when the reader feels as if a skilled travel writer were escorting them through history and then delivering them to the brink of tomorrow. Throughout, breakout boxes lend a captivating sense of intimacy to places and events that, after all, host human beings who, like their counterparts in other troubled regions, live, laugh, and sometimes tragically perish because of their proximity to armed conflict that typically has little to do with them personally.

De Waal proves himself a strong researcher as well as an excellent observer, gifted with an analytical acumen that not only carefully scrutinizes the complexity of a region bordered by potentially menacing great powers and pregnant with territorial disputes, historic enmities, and religious division, but also identifies the tolerance and common ground in shared cultures enjoyed by its ordinary inhabitants when left to their own devices. More than once, the author bemoans the division driven by elites on all sides of competing causes that have swept up the common folk who have lived peacefully side-by-side for generations, igniting passions that led to brutality and even massacre. This is a tragic tale we have seen replayed elsewhere, with escalation to genocide among former neighbors in what was once Yugoslavia, for instance, and also in Rwanda. For all the bloodletting, it has not risen to that level in the Caucasus, but unfortunately spots like Nagorno-Karabakh have all the ingredients for some future catastrophe if wiser heads do not prevail.

I picked up this book quite randomly last summer en route from a Vermont Airbnb on my first visit to a brick-and-mortar bookstore since the start of the pandemic. A rare positive from quarantine has been a good deal of time to read and reflect. I am grateful that The Caucasus: An Introduction was in the fat stack of books that I consumed in that period. Place names and details are certain to fade, but I will long remember the greater themes De Waal explored here. If you are curious about the world, I would definitely recommend this book to you.

[Note: Thomas de Waal is a senior fellow with Carnegie Europe, specializing in Eastern Europe and the Caucasus region.]


Featured

Review of: A History of Crete, by Chris Moorey

Myth has it that before he became king of Athens, Theseus went to Crete and slew the Minotaur, a creature half-man and half-bull that roamed the labyrinth in Knossos. According to Homer’s Iliad, Idomeneus, King of Crete, was one of the top-ranked generals of the Greek alliance in the Trojan War.  But long before the legends and the literature, Crete hosted Europe’s most advanced early Bronze Age civilization—dubbed the Minoan—which was then overrun and absorbed by the Mycenean Greeks who are later said to have made war at Troy. Minoan Civilization flourished circa 3000-1450 BCE, when the Myceneans moved in. What remains of the Minoans are magnificent ruins of palace complexes, brilliantly rendered frescoes depicting dolphins, bull-leaping lads, and bare-breasted maidens, and a still undeciphered script known as Linear A. The deepest roots of Western Civilization run to the ancient Hellenes, so much so that some historians proclaim the Greeks the grandfathers of the modern West. If that is true, then the Minoans of Crete were the grandfathers of the Greeks.

Unfortunately, if you want to learn more about the Minoans, do not turn to A History of Crete, by former educator Chris Moorey, an ambitious if too often dull work that affords this landmark civilization a mere 22 pages. Of course, the author has every right to emphasize what he deems most relevant, but the reader also has a right to feel misled—especially as the jacket cover sports a bull-leaping scene from a Minoan fresco! And it isn’t only the Minoans that are bypassed; Moorey’s treatment of Crete’s glorious ancient past is at best superficial. After a promising start that touches on recent discoveries of Paleolithic hand-axes, he fast-forwards at a dizzying rate: Minoan Civilization ends on page 39; more than a thousand years of Greek dominance concludes on page 66, and Roman rule is over by page 84. Thus begins the long saga of Crete as a relative backwater, under the sway of distant colonial masters.

I am not certain what the author’s strategy was, but it appears that his goal was to divide Crete’s long history into equal segments, an awkward tactic akin to a biographer of Lincoln lending equal time to his rail-splitting and his presidency. At any rate, much of the story is simply not all that interesting the way Moorey tells it.  In fact, too much of it reads like an expanded Wikipedia entry, while sub-headings too frequently serve as unwelcome interruptions to a narrative that generally tends to be stilted and colorless. The result is a chronological report of facts about people and events, conspicuously absent the analysis and interpretation critical to a historical treatment.  Moreover, the author’s voice lacks enthusiasm and remains maddeningly neutral, whether the topic is tax collection or captive rebels impaled on hooks. As the chronicle plods across the many centuries, there is also a lack of connective tissue, so the reader never really gets a sense of what distinguishes the people of Crete from people anywhere else. What are their salient characteristics? What is the cultural glue that bonds them together? We never really find out.

To be fair, there is a lot of information here. And Moorey is not a bad writer, just an uninspired one. Could this be because the book is directed at a scholarly rather than a popular audience, and academic writing by its nature can often be stultifying? That’s one possibility.  But is it even a scholarly work? The endnotes are slim, and few point to primary sources.

A History of Crete is a broad survey that may serve as a useful reference for those seeking a concise study of the island’s past, but it seems like an opportunity missed.  In the final paragraph, the author concludes: “In spite of all difficulties, it is likely the spirit of Crete will survive.” What is this spirit of Crete he speaks of? Whatever it may be, the reader must look elsewhere to find out.

Featured

Review of: The Steppe and the Sea: Pearls in the Mongol Empire, by Thomas T. Allsen

In the aftermath of a clash in Turkistan in 1221, a woman held captive by Mongol soldiers admitted she had swallowed her pearls to safeguard them. She was immediately executed and eviscerated. When pearls were indeed recovered, Genghis Khan “ordered that they open the bellies of the slain” on the battlefield to look for more. [p23] Such was the value of pearls to the Mongol Empire.

As this review goes to press (5-12-21), the value of a single Bitcoin is about $56,000 U.S. dollars—dwarfing the price for an ounce of gold at a mere $1830—an astonishing number for a popular cybercurrency that few even accept for payment. Those ridiculing the rise of Bitcoin dismiss it as imaginary currency. But aren’t all currencies imaginary? The paper a dollar is printed on certainly is not worth much, but it can be exchanged for a buck because the United States government says so, subject to inflation of course. All else rises and falls on a market that declares a value, which varies from day to day. Then why, you might ask, in the rational world of the twenty-first century, are functionally worthless shiny objects like gold and diamonds (for non-industrial applications) worth anything at all? It’s a good question, but hardly a new one—since long before the days of Jericho and Troy, people have attached value to the pretty but otherwise useless. Circa 4200 BCE, spondylus shells were money of a sort in both Old Europe and the faraway Andes. Remarkably, cowries once served as the chief economic mechanism in the African slave trade; for centuries human beings were bought and sold as chattel in exchange for shells that once housed sea snails!

The point is that even the most frivolous item can be deemed of great worth if enough agree that it is valuable. With that in mind, it is hardly shocking to learn that pearls were treasured above all else by the Mongols during their heady days of empire. It may nevertheless seem surprising that this phenomenon would be worthy of a book-length treatment, but acclaimed educator, author, and historian Thomas T. Allsen makes a convincing case that it is in his final book prior to his passing in 2019, The Steppe and the Sea: Pearls in the Mongol Empire, which will likely remain the definitive work on this subject for some time to come.

The footprint of the Mongols and their significance to global human history have been vast if too often underemphasized, a casualty of the Eurocentric focus on so-called “Western Civilization.” Originally nomads that roamed the steppe, by the thirteenth and fourteenth centuries the Mongols had built the largest contiguous land empire in history, stretching from Eastern Europe to the Sea of Japan, encompassing parts or all of China, Southeast Asia, the Iranian plateau, and the Indian subcontinent. Ruthlessly effective warriors, they toppled numerous kingdoms in the fierce onslaughts that marked their famously brutal trails of conquest. Less well-known, as Allsen reveals, was their devotion to plunder, conducted with both a ferocious appetite and perhaps the greatest degree of organization ever seen in the sacking of cities. No spoils were prized more than pearls, acquired from looted state treasuries as well as from individuals such as that random unfortunate who was sliced open at Genghis Khan’s command. Pearls were more than simply booty; the Mongols were obsessed with them.

This is a story that turns out to be as fascinating as it is unexpected. The author’s approach is highly original, cogently marrying economics to political culture and state-building without losing sight of his central theme. In a well-written if decidedly academic narrative, Allsen focuses on the Mongol passion for pearls as symbols of wealth and status to explore a variety of related topics. One of the most entertaining examines the Yuan court, where pearls were the central element of wardrobe and fashion, and rank rigidly determined if and how they could be displayed. At the very top tier, naturally, were the emperor and his several wives, who were spectacularly identifiable in their extravagant ornamentation. The emperor’s consorts wore earrings of “matched tear-shaped pearls” said to be the size of hazelnuts, or, alternatively, pendant earrings with as many as sixty-five matched pearls attached to each pendant! More flamboyant was their elaborate headgear, notably the tall, unwieldy boot-shaped headdress called a boghta that was decorated with plumes and gems and—of course—many more pearls! [p52-53]

Beyond the spotlight on court life, the author widens his lens to explore broader arenas. The Mongols may have been the most fanatical about acquiring pearls, but they certainly were not the first to value them, nor the last; pearls remain among the “shiny objects” with no real function beyond adornment that command high prices to this day. Allsen provides a highly engaging short course for the reader as to where pearls come from and why the calcium carbonate that forms a gemstone in one oyster is—based upon shape, size, luster, and color—prized more than another. This is especially important because of the very paradox the book’s title underscores: it is remarkable that products from the sea became the most prized possession of a people of the steppe! There is also a compelling discussion of the transition from conquering nomad warrior to settled overlord that offers a studied debate on whether the “self-indulgent” habit of coveting consumer goods such as “fine textiles, precious metals, and gems, especially pearls” was the result of being corrupted by the sedentary “civilized” peoples they subjugated, or if such cravings were born in a more distant past. [p61]

While I enjoyed The Steppe and the Sea, especially the first half, which concludes with the disintegration of the Mongol Empire, this book is not for everyone. Academic writing imposes a certain stylistic rigidity that suits the scholarly audience it is intended for, but that tends to create barriers for the general reader. In this case accessibility is further diminished by Allsen’s rendering of Mongolian proper names into forms likely unfamiliar to those outside of his field: Genghis Khan is accurately rendered as Chinggis Qan, and Kublai Khan as Qubilai Qan, but this causes confusion that might have been mitigated by a parenthetical reference to the more common name. And the second part of the book, “Comparisons and Influence,” which looks beyond the Mongol realm, is slow going. A better tactic might have been to incorporate much of it into the previous narrative, strengthening connections and contrasts while improving readability. On the plus side, sound historical maps are included that proved a critical reference throughout the read.

The Mongol Empire is ancient history, but these days a wild pearl of high quality could still be worth as much as $100,000, although most range in price from $300 to $1500. It seems like civilization is still pretty immature when it comes to shiny objects. On the other hand, this morning an ounce of palladium—a precious metal valued for its use in catalytic converters and multi-layer ceramic capacitors rather than jewelry—was priced at almost $3000, some 62% more than an ounce of gold! So maybe there is hope for us, after all. I wish Dr. Allsen were still alive so I could reach out via email and find out his thoughts on the subject. Since that is impossible, I can only urge you to read his final book and consider how little human appetites have changed throughout the ages.

Featured

Review of: A Tidewater Morning, by William Styron

In April 1962, President John F. Kennedy hosted a remarkable dinner for more than four dozen Nobel Prize winners and assorted other luminaries drawn from the top echelons of the arts and sciences.  With his characteristic wit, JFK pronounced it “The most extraordinary collection of talent, of human knowledge, that has ever been gathered together at the White House with the possible exception of when Thomas Jefferson dined alone.” One of the least prominent guests that evening was the novelist William Styron, who attended with his wife Rose, and recalled his surprise at the invitation. Styron was not yet the critically acclaimed, Pulitzer Prize-winning literary icon he was to later become, but he was hardly an unknown figure, and it turns out that his most recent novel of American expatriates, Set This House on Fire, was the talk of the White House in the weeks leading up to the event. So he had the good fortune to dine not only with the President and First Lady, but with the likes of John Glenn, Linus Pauling, and Pearl Buck—and in the after-party forged a long-term intimate relationship with the Kennedy family.

My first Styron was The Confessions of Nat Turner, which I read as a teen. Its merits somewhat unfairly subsumed at the time by the controversy it sparked over race and remembrance, it remains a notable achievement, as well as a reminder that literature is not synonymous with history, nor should it be held to that account.  I found Set This House on Fire largely forgettable, but as an undergrad was utterly blown away when I read Lie Down in Darkness, his first novel and a true masterpiece that while yet indisputably original clearly evoked the Faulknerian southern gothic.  I went on to read anything by the author I could get my hands on. Also a creature of controversy upon publication, Sophie’s Choice, winner of the National Book Award for Fiction in 1980, remains in my opinion one of the finest novels of the twentieth century.

I thought I had read all of Styron’s fiction, so it was with a certain surprise that I learned from a friend who is both author and bibliophile of the existence of A Tidewater Morning, a collection of three novellas I had somehow overlooked.  I bought the book immediately, and packed it to take along for a pandemic retreat to a Vermont cabin in the woods, where I read it through in the course of the first day and a half of the getaway, parked in a comfortable chair on the porch sipping hot coffee in the morning and cold beer in late afternoon. Perhaps it was the fact that this was our first breakaway from months of quarantine isolation, or maybe it was the alcohol content of the IPA I was tossing down, but there was definitely a palpable emotional tug for me reading Styron again—works previously unknown to me no less—so many decades after my last encounter with his work, back when I was a much younger man than the one turning these pages. The effect was more pronounced, I suppose, because the semi-autobiographical stories in this collection look back to Styron’s own youth in the Virginia Tidewater in the 1930s and were written when he too was a much older man.

“Love Day,” the first tale of the collection, has him as a young Marine in April 1945, yet untested in combat, awaiting orders to join the invasion of Okinawa and wrestling with the ambivalence of chasing heroic destiny while privately entertaining “gut-heaving frights.” There’s much banter among the men awaiting their fate, but the story of real significance is told through flashbacks to an episode some years prior, he still a boy in the back seat of his father’s Oldsmobile, broken down on the side of the road.  War is looming—the very war he is about to join—although it was far from certain then, but the catastrophe of an unprepared America overrun by barbaric Japanese invaders is the near-future imagined in the Saturday Evening Post piece the boy is reading in the back of the stalled car. Simmering tempers flare when he lends voice to the prediction. His mother, stoic in her leg brace, slowly dying of a cancer known to all but unacknowledged, had earlier furiously rebuked him for mouthing a racist epithet and now upbraids him again for characterizing the Japanese as “slimy butchers,” while belittling the notion of a forthcoming war. Unexpectedly, his father—a mild, highly educated man quietly raging at his own inability to effect a simple car repair—lashes out at his wife, branding her “idiotic” and “a fool” for her naïve idealism, then crumbles under the weight of his words to beg her forgiveness.  It is a dramatic snapshot not only of a family in turmoil, but of a time and a place that has long faded from view. Only Styron’s talent with a pen could leave us with so much from what is, after all, only a few pages.

The third story is the title tale, “A Tidewater Morning,” which revisits the family to follow his mother’s final, agonizing days. It concludes with both the boy and his father experiencing twin if unrelated epiphanies. It’s a good read, but I found it a bit overwrought, lacking the subtlety characteristic of Styron’s prose.

Sandwiched between these two is my own favorite, “Shadrach,” the story of a 99-year-old former slave—sold away to an Alabama plantation in antebellum days—who shows up unexpectedly with the dying wish to be buried in the soil of the Dabney property where he was born. The problem is that the Dabney descendant currently living there is a struggling, dirt-poor fellow who could be a literary cousin of one of the Snopes often resident in Faulkner novels.  The law prohibits interring a black man on his property, and he likewise lacks the means to bury him elsewhere. On the surface, “Shadrach” appears to be a simple story, but on closer scrutiny it reveals itself to be a very complex one, peopled with multidimensional characters and layered with vigorous doses of both comedy and tragedy.

I highly recommend Styron to those who have not yet read him. For the uninitiated, (spoiler alert!) I will close this review with a worthy passage:

“Death ain’t nothin’ to be afraid about,” he blurted in a quick, choked voice … “Life is where you’ve got to be terrified!” he cried as the unplugged rage spilled forth. … Where in the goddamned hell am I goin’ to get the money to put him in the ground? … I ain’t got thirty-five-dollars! I ain’t got twenty-five dollars! I ain’t got five dollars!” … “And one other thing!” He stopped. Then suddenly his fury—or the harsher, wilder part of it—seemed to evaporate, sucked up into the moonlit night with its soft summery cricketing sounds and its scent of warm loam and honeysuckle. For an instant he looked shrunken, runtier than ever, so light and frail that he might blow away like a leaf, and he ran a nervous, trembling hand through his shock of tangled black hair. “I know, I know,” he said in a faint, unsteady voice edged with grief. “Poor old man, he couldn’t help it. He was a decent, pitiful old thing, probably never done anybody the slightest harm. I ain’t got a thing in the world against Shadrach. Poor old man.” …
“And anyway,” Trixie said, touching her husband’s hand, “he died on Dabney ground like he wanted to. Even if he’s got to be put away in a strange graveyard.”
“Well, he won’t know the difference,” said Mr. Dabney. “When you’re dead nobody knows the difference. Death ain’t much.” [p76-78]


NOTE: To learn more about JFK’s Nobel Dinner, check out this outstanding book, which contains a foreword by Rose Styron: Review of: Dinner in Camelot: The Night America’s Greatest Scientists, Writers, and Scholars Partied at the Kennedy White House, by Joseph A. Esposito

Featured

Review of: Africa: A Biography of the Continent, by John Reader

Africa. My youth largely knew of it only through the distorted lens of racist cartoons peopled with bone-in-their-nose cannibals, B-grade movies showcasing explorers in pith helmets who somehow always managed to stumble into quicksand, and of course Tarzan. It was still even then sometimes referred to as the “Dark Continent,” something that was supposed to mean dangerous and mysterious but also translated, for most of us, into the kind of blackness that was synonymous with race and skin color.

My interest in Africa came via the somewhat circuitous route of my study of the Civil War. The central cause of that conflict was, of course, human chattel slavery, and nearly all the enslaved were descendants of lives stolen from Africa. So, for me, a closer scrutiny of the continent was the logical next step. One of the benefits of a fine personal library is that there are hundreds of volumes sitting on shelves waiting for the moment I finally find them. Such was the case for Africa: A Biography of the Continent, by John Reader, which sat unattended but beckoning for some two decades until a random evening found a finger on the spine, and then the cover was open and the book was in my lap. I did not turn back.

With a literary flourish rarely present in nonfiction combined with the ambitious sweep of something like a novel of James Michener, Reader attempts nothing less than the epic as he boldly surveys the history of Africa from the tectonic activities that billions of years ago shaped the continent, to the evolution of the single human species that now populates the globe, to the rise and fall of empires, to colonialism and independence, and finally to the twin witness of the glorious and the horrific in the peaceful dismantling of South African apartheid and the Rwandan genocide. In nearly seven hundred pages of dense but highly readable text, the author succeeds magnificently, identifying the myriad differences in peoples and lifeways and environments while not neglecting the shared themes that then and now much of the continent holds in common.

Africa is the world’s second largest continent, and it hosts by far the largest number of sovereign nations: with the addition of South Sudan in 2011—twelve years after Reader’s book was published—there are now fifty-four, as well as a couple of disputed territories. But nearly all of these states are artificial constructs that are relics of European colonialism, lines on maps once penciled in by elite overlords in distant drawing rooms in places like London, Paris, Berlin, and Brussels, and those maps were heavily influenced by earlier incursions by the Spanish, Portuguese, and Dutch. Much of the poverty, instability, and often dreadful standards of living in Africa are the vestiges of these artificial borders, which mostly ignored prior states, tribes, clans, languages, religions, identities, lifeways. When their colonial masters, who had long raped the land for its resources and the people for their self-esteem, withdrew in the whirlwind decolonization era of 1956-1976—some at the strike of the pen, others at the point of the sword—the exploiters left little of value for nation-building to the exploited beyond the mockery of those boundaries. Whatever of the ancestral past was lost in the process was lost irrevocably. That is one of Reader’s themes. But there is so much more.

The focus is, as it should be, on sub-Saharan Africa; the continent’s northern portion is an extension of the Mediterranean world, marked by the storied legacies of ancient Greeks, Carthaginians, Romans, and the later Arab conquest. And Egypt, then and now, belongs more properly to the Middle East. But most of Africa’s vast geography stretches south of that, along the coasts and deep into the interior. Reader delivers “Big History” at its best, and the sub-Saharan offers up an immense arena for the drama it entails—from the fossil beds that begat Homo habilis in Tanzania’s Olduvai Gorge, to the South African diamond mines that spawned enormous wealth for a few on the backs of the suffering of a multitude, to today’s Maasai Mara game reserve in Kenya, which we learn is not, as we might suppose, a remnant of some ancient pristine habitat, but rather a breeding ground for the deadly sleeping sickness carried by the tsetse fly that turned once productive land into a place unsuitable for human habitation.

Perhaps the most remarkable theme in Reader’s book is population sustainability and migration. While Africa is the second largest of earth’s continents, it remains vastly underpopulated relative to its size. Given the harsh environment, limited resources, and prevalence of devastating disease, there is strong evidence that it has likely always been this way.  Slave-trading was, of course, an example of a kind of forced migration, but more typically Africa’s history has long been characterized by a voluntary movement of peoples away from the continent, to the Middle East, to Europe, to all the rest of the world.  Migration has always been—and remains today—subject to the dual factors of “push” and “pull,” but the push factor has dominated. That is perhaps the best explanation for what drove the migrations of archaic and anatomically modern humans out of Africa to populate the rest of the globe. The recently identified 210,000-year-old Homo sapiens skull in a cave in Greece reminds us that this has been going on a very long time. Homo erectus skulls found in Dmanisi, Georgia that date to 1.8 million years ago underscore just how long!

Slavery is, not unexpectedly, also a major theme for Reader, largely because of the impact of the Atlantic slave trade on Africa and how it forever transformed the lifeways of the people directly and indirectly affected by its pernicious hold—culturally, politically, and economically. The slavery that was a fact of life on the continent before the arrival of European traders closely resembled its ancient roots; certainly race and skin color had nothing to do with it. As noted, I came to study Africa via the Civil War and antebellum slavery. To this day, a favored logical fallacy advanced by “Lost Cause” apologists for the Confederate slave republic asks rhetorically: “But their own people sold them as slaves, didn’t they?” As if this contention—if it were indeed true—would somehow expiate or at least attenuate the sin of enslaving human beings. But is it true? Hardly. Captors of slaves taken in raids or in war by one tribe or one ethnicity would hardly consider them “their own people,” any more than the Vikings who for centuries took Slavs to feed the hungry slave markets of the Arab world would have considered them “their own people.” This is a painful reminder that such notions endure in the mindset of the deeply entrenched racism that still defines modern America—a racism derived from African chattel slavery to begin with. It reflects how outsiders might view Africa, but not how Africans view themselves.

The Atlantic slave trade left a mark on every African who was touched by it as buyer, seller or unfortunate victim. The insatiable thirst for cheap labor to work sugar (and later cotton) plantations in the Americas overnight turned human beings into Africa’s most valuable export. Traditions were trampled. An ever-increasing demand put pressure on delivering supply at any cost. Since Europeans tended to perish in Africa’s hostile environment of climate and disease, a whole new class of “middle-men” came to prominence. Slavery, which dominated trade relations, corrupted all it encountered and left scars from its legacy upon the continent that have yet to fully heal.

This review barely scratches the surface of the range of material Reader covers in this impressive work. It’s a big book, but there is not a wasted page or paragraph, and it neglects neither the diversity nor what is held in common by the land and its peoples. Are there flaws? The included maps are terrible, but for that the publisher should be faulted rather than the author. To compensate, I hung a map of modern Africa on the door of my study and kept a historical atlas as companion to the narrative. Other than that quibble, the author’s achievement is superlative. Rarely have I read something of this size and scope and walked away so impressed, both with how much I learned as well as the learning process itself. If you have any interest in Africa, this book is an essential read. Don’t miss it.

Featured

Review of: Hymns of the Republic: The Story of the Final Year of the American Civil War, by S.C. Gwynne

Some years ago, I had the pleasure to stay in a historic cabin on a property in Spotsylvania that still hosts extant Civil War trenches. Those who imagine great armies clad in blue and grey massed against each other with pennants aloft on open fields would not be wrong for the first years of the struggle, but those trenches better reflect the reality of the war as it ground to its slow, bloody conclusion in its final year. Those last months contained some of the greatest drama and most intense suffering of the entire conflict, yet often receive far less attention than deserved. A welcome redress to this neglect is Hymns of the Republic: The Story of the Final Year of the American Civil War, by journalist and historian S.C. Gwynne, which neatly marries literature to history and resurrects for us the kind of stirring narratives that once dominated the field.

Looking back, for all too many Civil War buffs it might seem that a certain Fourth of July in 1863—when in the east a battered Lee retreated from Gettysburg on the same day that Vicksburg fell in the west—marked the beginning of the end for the Confederacy.  But experts know that assessment is overdrawn. Certainly, the south had sustained severe body blows on both fronts, but the war yet remained undecided. Like the colonists four score and seven years prior to that day, these rebels did not need to “win” the war, only to avoid losing it. As it was, a full ninety-two weeks—nearly two years—lay ahead until Appomattox, some six hundred forty-six days of bloodshed and uncertainty for both sides, most of what truly mattered compressed into the last twelve months of the war. And, tragically, those trenches played a starring role.

Hymns of the Republic opens in March 1864, when Ulysses Grant—architect of the fall of Vicksburg that was by far the more significant victory on that Independence Day 1863—was brought east and given command of all Union armies. In the three years since Fort Sumter, the war had not gone well in the east, largely as the result of a series of less-than-competent northern generals who had squandered opportunities and been repeatedly driven to defeat or denied outright victory by the wily tactician, Robert E. Lee. The seat of the Confederacy at Richmond—only a tantalizing ninety-five miles from Washington—lay unmolested, while European powers toyed with the notion of granting the rebels recognition. The strategic narrative in the west was largely reversed, marked by a series of dramatic Union victories crafted by skilled generals, crowned by Grant’s brilliant campaign that saw Vicksburg fall and the Confederacy virtually cut in half. But all eyes had been on the east, to Lincoln’s great frustration. Now events in the west were largely settled, and Lincoln brought Grant east, confident that he had finally found the general who would defeat Lee and end the war. But while Lincoln’s instincts proved sound in the long term, misplaced optimism for an early close to the conflict soon evaporated. More than a year of blood and tears lay ahead.

Much of the tactical story is familiar—Grant Takes Command was the exact title of a Bruce Catton classic—but Gwynne updates the narrative with the benefit of the latest scholarship, which looks beyond not only the stereotypes of Grant and Lee, but also the very dynamics of more traditional treatments focused solely upon battles and leaders. Most prominently, he resurrects the African Americans who until somewhat recently were for too long conspicuously absent from much Civil War history, buried beneath layers of propaganda spun by unreconstructed Confederates who fashioned an alternate history of the war—the “Lost Cause” myth—that for too long dominated Civil War studies and still stubbornly persists both in right-wing politics and the curricula of some southern school systems to this day.  In the process, Gwynne restores African Americans to the central role in the struggle from which they have long been erased in the history books.

Erased. Remarkably, most Americans rarely thought of blacks at all in the context of the war until the film Glory (1989) and Ken Burns’ docuseries The Civil War (1990) came along. And there are still books—Joseph Wheelan’s Their Last Full Measure: The Final Days of the Civil War, published in 2015, springs to mind—that demote these key actors to bit parts. Yet, without enslaved African Americans there would have never been a Civil War. The centrality of slavery to secession has been just as incontrovertibly asserted by the scholarly consensus as it has been vehemently resisted by Lost Cause proponents who would strike out that uncomfortable reference and replace it with the euphemistic “States’ Rights,” neatly obscuring the fact that southern states seceded to champion and perpetuate the right to own dark-complected human beings as chattel property. Social media is replete with concocted fantasies of legions of “Black Confederates,” but the reality is that about a half million African Americans fled to Union lines, and so many enlisted to make war on their former masters that by the end of the war fully ten percent of the Union army was comprised of United States Colored Troops (USCT). Blacks knew what the war was about, and ultimately proved a force to be reckoned with that drove Union victory, even as a deeply racist north often proved less than grateful for their service.

Borrowing a page from the latest scholarship, Gwynne points to the prominence of African Americans throughout the war, but especially in its final months—marked both by remarkable heroism and a trail of tragedy. His story of the final year of the conflict commences with the massacre at Fort Pillow in April 1864 of hundreds of surrendering federal troops—the bulk of whom were uniformed blacks—by Confederates under the command of Nathan Bedford Forrest. The author gives Forrest a bit of a pass here—while the general was himself not on the field, he later bragged about the carnage—but Gwynne rightly puts focus on the long-term consequences, which were manifold.

The Civil War was the rare conflict in history not marred by wide-scale atrocities—except towards African Americans. Lee’s allegedly “gallant” forces in the Gettysburg campaign kidnapped blacks they encountered to send south into slavery, and while Fort Pillow might have been the most significant open slaughter of black soldiers by southerners, it was hardly the exception. Confederates were enraged to see blacks garbed in uniform and sporting a rifle, and thus they were frequently murdered once disarmed rather than taken prisoner like their white counterparts. Something like a replay of Fort Pillow occurred at the Battle of the Crater during the siege of Petersburg, although the circumstances were more ambiguous, as the blacks gunned down in what rebels termed a “turkey shoot” were not begging for their lives as at Pillow.  This was not far removed from official policy, of course: the Confederate government threatened to execute or sell into slavery captured black soldiers, and refused to consider them for prisoner exchange. This was a critical factor in the breakdown of the parole and exchange processes that had served as guiding principles throughout much of the war. The result, on both sides, was the horror of overcrowding and deplorable conditions in places like Georgia’s Andersonville and Camp Douglas in Chicago.

Meanwhile, Grant was hardly disappointed with the collapse of prisoner exchange. To his mind, anything that denied the south men or materiel would hasten the end of the war, which was his single-minded pursuit. Grant has long been subjected to calumnies that branded him “Grant the Butcher” because he seemed to throw lives away in hopeless attempts to dislodge a heavily fortified enemy. The most infamous example of this was Cold Harbor, which saw massive Union casualties. But Lee’s tactical victory there—it was to be his last of the war—further depleted his rapidly diminishing supply of men and arms, which simply could not be replaced. Grant had a strategic vision that set him apart from the rest. That Lee pushed on as the odds shrunk for any outcome other than ultimate defeat came to beget what Gwynne terms “the Lee paradox: the more the Confederates prolonged the war, the more the Confederacy was destroyed.” [p252] And that destruction was no unintended consequence, but a deliberate component of Grant’s grand strategy to prevent food, munitions, pack animals, and slave labor from supporting the enemy’s war effort. Gwynne finds fault with Sherman’s generalship, but his “march to the sea” certainly achieved what had been intended. And while the northern public was divided between those who would make peace with the rebels and those impatient with both Grant and Lincoln for an elusive victory, it was Sherman who delivered Atlanta and ensured the reelection of the president, something much in doubt even in Lincoln’s own mind.

There is far more contained within the covers of this fine work than any review could properly summarize. Much to his credit, the author does not neglect those often marginalized by history, devoting a well-deserved chapter to Clara Barton entitled “Battlefield Angel.” And the very last paragraph of the final chapter settles upon Juneteenth, when—far removed from the now quiet battlefields—the last of the enslaved finally learned they were free. Thus, the narrative ends as it began, with African Americans in the central role in the struggle too often denied to them in other accounts.  For those well-read in the most recent scholarship, there is little new in Hymns of the Republic, but the general audience will find much to surprise them, if only because a good deal of this material has long been overlooked. Perhaps Gwynne’s greatest achievement is in distilling a grand story from the latest historiography and presenting it as the kind of exciting read Civil War literature is meant to be. I highly recommend it.

 

I reviewed Their Last Full Measure: The Final Days of the Civil War, by Joseph Wheelan, here: Review of: Their Last Full Measure: The Final Days of the Civil War, by Joseph Wheelan

 

The definitive study of the massacre at Fort Pillow is River Run Red: The Fort Pillow Massacre in the American Civil War, by Andrew Ward, which I reviewed here: Review of: River Run Red: The Fort Pillow Massacre in the American Civil War, by Andrew Ward

 

Featured

Review of: A World on Edge: The End of the Great War and the Dawn of a New Age, by Daniel Schönpflug

A familiar construct for students of European history is what is known as “The Long Nineteenth Century,” a period bookended by the French Revolution and the start of the Great War.  The Great War.  That is what it used to be called, before it was diminished by its rechristening as World War I, to distinguish it from the even more horrific conflict that was to follow just two decades hence. It is the latter that in retrospect tends to overshadow the former. Some are even tempted to characterize one as simply a continuation of the other, but that is an oversimplification. There was in fact far more than semantics to that designation of “Great War,” and historians are correct to flag it as a definitive turning point, for by the time it was over Europe’s cherished notions of civilization—for better and for worse—lay in ruins, and her soil hosted not only the scars of vast, abandoned trenches, but the bones of millions who had once held dear, in their heads and in their hearts, the myths those notions turned out to be.

The war ended with a stopwatch of sorts. The Armistice that went into effect on November 11, 1918 at 11 a.m. Paris time marked the end of hostilities, a synchronized moment of collective European consciousness that it is said all who experienced it would recall for as long as they lived. Of course, something like 22 million souls—military and civilian—could not share that moment: they were the dead. Nearly three thousand died that very morning, as fighting continued right up to the final moments when the clock ran out.

What happened next? There is a tendency to fast forward because we know how it ends: the imperfect Peace of Versailles, the impotent League of Nations, economic depression, the rise of fascism and Nazism, American isolationism, Hitler invades Poland. In the process, so much is lost. Instead, Daniel Schönpflug artfully slows the pace with his well-written, highly original strain of microhistory, A World on Edge: The End of the Great War and the Dawn of a New Age.  The author, an internationally recognized scholar and adjunct professor of history at the Free University of Berlin, blends the careful analytical skills of a historian with a talented pen to turn out one of the finest works in this genre to date.

First, he presses the pause button.  That pause—the Armistice—is just a fragment of time, albeit one of great significance. But it is what follows that most concerns Schönpflug, who has a great drama to convey and does so through the voices of an eclectic array of characters from various walks of life across multiple geographies. When the action resumes, alternating and occasionally overlapping vignettes chronicle the postwar years from the unique, often unexpected vantage points of just over two dozen individuals—some very well known, others less so—who were to leave an imprint of larger or smaller consequence upon the changed world they walked upon.

There is Harry S Truman, who regrets that the military glory he aspired to as a boy has eluded him, yet is confident he has acquitted himself well, and cannot wait to return home to marry his sweetheart Bess and—ironically—vows he will never fire another shot as long as he lives. Former pacifist and deeply religious Medal of Honor recipient Sergeant Alvin York receives a hero’s welcome Truman could only dream of, but eschews offers of money and fame to return to his backwoods home in Tennessee, where he finds purpose by leveraging his celebrity to bring roads and schools to his community. Another heroic figure is Sergeant Henry Johnson, of the famed 369th Infantry known as the “Harlem Hellfighters,” who incurred no fewer than twenty-one combat injuries fending off the enemy while keeping a fellow soldier from capture, but because of his skin color returns to an America where he remains a second-class citizen, and does not receive the Medal of Honor he deserves until its posthumous award by President Barack Obama nearly a century later. James Reese Europe, the regimental band leader of the “Harlem Hellfighters,” who has been credited with introducing jazz to Europe, also returns home to an ugly twist of fate.

And there’s Käthe Kollwitz, an artist who lost a son in the war and finds herself in the uncertain environment of a defeated Germany engulfed in street battles between Reds and reactionaries, both flanks squeezing the center of a nascent democracy struggling to assert itself in the wake of the Kaiser’s abdication. One of the key members of that tenuous center is Matthias Erzberger, perhaps the most hated man in the country, who has the ill luck to be chosen as the official who formally accedes to Germany’s humiliating terms for Armistice, and as a result wears a target on his back for the rest of his life. At the same time, the former Kaiser’s son, Crown Prince Wilhelm von Preussen, is largely a forgotten figure who waits in exile for a call to destiny that never comes. Meanwhile in Paris, Marshal Ferdinand Foch lobbies for Germany to pay an even harsher price, as journalist Louise Weiss charts a new course for women in publishing and longs to be reunited with her lover, Milan Štefánik, an advocate for Czechoslovak sovereignty.

Others championing independence elsewhere include Nguyễn Tất Thành (later Hồ Chí Minh), polishing plates and politics while working as a dishwasher in Paris; Mohandas Gandhi, who barely survives the Spanish flu and now struggles to hold his followers to a regimen of nonviolent resistance in the face of increasingly violent British repression; T.E. Lawrence, ever more disillusioned by the failure of the victorious allies to live up to promises of Arab self-determination; and, Terence MacSwiney, who is willing to starve himself to death in the cause of Irish nationhood. No such lofty goals motivate assassin Soghomon Tehlirian, a survivor of the Armenian genocide, who only seeks revenge on the Turks; nor future Auschwitz commandant Rudolf Höss, who emerges from the war an eager and merciless recruit for right-wing paramilitary forces.

There are many more voices, including several from the realms of art, literature, and music such as George Grosz, Virginia Woolf, and Arnold Schönberg. The importance of the postwar evolution of the arts is underscored in quotations and illustrations that head up each chapter. Perhaps the most haunting is Paul Nash’s 1918 oil-on-canvas of a scarred landscape entitled—with a hint of either optimism or sarcasm—We Are Making a New World.  All the stories the voices convey are derived from their respective letters, diaries, and memoirs; only in the “Epilogue” does the reader learn that some of those accounts are clearly fabricated.

Many of my favorite characters in A World on Edge are ones that I had never heard of before, such as Moina Michael, who was so inspired by the sacrifice of those who perished in the Great War that she singlehandedly led a campaign to memorialize the dead with the poppy as her chosen emblem for the fallen, an enduring symbol to this very day. But I found no story more gripping than that of Marina Yurlova, a fourteen-year-old Cossack girl who became a child soldier in the Russian army, was so badly wounded she was hospitalized for a year, then entered combat once more during the ensuing civil war and was wounded again, this time by the Bolsheviks. Upon recovery, Yurlova embarked upon a precarious journey on foot through Siberia that lasted a month before she was able to flee Russia for Japan and eventually settle in the United States, where despite her injuries she became a dancer of some distinction.

I am a little embarrassed to admit that I received an advance reader’s edition (ARC) of A World on Edge as part of an early reviewer’s program way back in November 2018, but then let it linger in my to-be-read (TBR) pile until I finally got around to it near the end of June 2020.  I loved the book but did not take any notes for later reference. So, by the time I sat down to review it in January 2021, given the size of the cast and the complexity of their stories, I felt there was no way I could do justice to the author and his work without re-reading it—so I did, over just a couple of days! And that is the true beauty of this book: for all its many characters, competing storylines, and deeply profound, multilevel messaging, the grand saga remains a fast-paced, exciting read. Schönpflug’s technique of employing bit players to recount an epic tale succeeds so masterfully that the reader is hardly aware of what has been happening until the final pages are being turned. This is history, of course—this is indeed nonfiction—yet the result invites a favorable comparison to great literature, to a collection of short stories by Ernest Hemingway, or to a novel by André Brink. If European history is an interest, A World on Edge is not only a recommended read, but a required one.

Featured

Review of: Liar Temptress Soldier Spy: Four Women Undercover in the Civil War, by Karen Abbott

Women are conspicuously absent in most Civil War chronicles.  With a few notable exceptions—Clara Barton, Harriet Tubman, Mary Todd Lincoln—female figures largely appear in the literature as bit players, if they make an appearance at all. Author Karen Abbott seeks a welcome redress to this neglect with Liar Temptress Soldier Spy: Four Women Undercover in the Civil War, an exciting and extremely well-written, if deeply flawed, account of some ladies who made a significant contribution to the war effort, north and south.

The concept is sound enough. Abbott focuses on four very different women and relates their respective stories in alternating chapters. There is Belle Boyd, a teenage seductress with a lethal temper who serves as rebel spy and courier; Emma Edmonds, who puts on trousers to masquerade as Frank Thompson and joins the Union army; Rose O’Neal Greenhow, an attractive widow who romances northern politicians to obtain intel for the south; and, Elizabeth Van Lew, a prominent Richmond abolitionist who maintains a sophisticated espionage ring that infiltrates the inner circles of the Confederate government. Each of these is worthy of book-length treatment, but weaving their exploits together is an effective technique that makes for a readable and compelling narrative.

I had never heard of Karen Abbott—the pen name for Abbott Kahler—a journalist and highly acclaimed best-selling author dubbed the “pioneer of sizzle history” by USA Today.  She is certainly a gifted writer, and unlike all too many works of history, her prose is fast-moving and engaging.  I was swept along by her colorful recounting of the 1861 Battle of Bull Run, with flourishes such as: “Union troops fumbled backward and the Confederates rammed forward, a brutal and uneven dance, with soldiers felled like rotting trees.”  I got so carried away I almost made it through the following passage without stumbling:

Some Northern soldiers claimed that every angle, every viewpoint, offered a fresh horror. The rebels slashed throats from ear to ear. They sliced off heads and dropkicked them across the field. They carved off noses and ears and testicles and kept them as souvenirs. They propped the limp bodies of wounded soldiers against trees and practiced aiming for the heart. They wrested muskets and swords from the clenched hands of corpses. They plunged bayonets deep into the backsides of the maimed and the dead. They burned the bodies, collecting “Yankee shin-bones” to whittle into drumsticks, and skulls to use as steins. [p34]

Almost. But I have a master’s degree in history and have spent a lifetime studying the American Civil War, and I have never heard this account of such barbarism at Bull Run. So I paused and flipped to Abbott’s notes for the corresponding page at the back of the book, where with a whiff of insouciance she admits: “Throughout the war both the North and the South exaggerated the atrocities committed by the enemy, and it’s difficult to determine which incidents were real and which were apocryphal.” [p442] Which is another way of saying that her account is highly sensationalized, if not outright fabrication.

To my mind, Abbott commits an unpardonable sin here. A little research reveals that there were in fact a handful of allegations of brutality in the course of the battle, including the mutilation of corpses, but most of them anecdotal. There were several episodes of Confederate savagery later in the war, principally inflicted upon black soldiers in blue uniforms, but that is another story.  How many readers of a popular history would take her at her word about what transpired at Bull Run without question? How many, when confronted with stories of testicles taken as souvenirs, would think to consult her citations? Lively paragraphs like this may certainly make for “sizzle”—but where’s the history? Historical novels have their place—The Killer Angels, by Michael Shaara, and Gore Vidal’s Lincoln, are among my favorites—but that is not the same thing as history, which must abide by a strict allegiance to fact-based reporting, informed analysis, and documentation. Apparently, this author demonstrates little loyalty to such constraints.

I read on, but with far more skepticism. Abbott’s style is seductive, so it’s easy to keep going. But sins do continue to accumulate. I have a passing familiarity with three of the four main characters, but fact-checking remained essential. Certainly the best known and most consequential was Van Lew, a heroic figure who aided the escape of prisoners of war and provided key intelligence to Union forces in the field. Greenhow is often cited as her counterpart working for the southern cause. Belle Boyd, on the other hand, has become a creature of legend who turns up more frequently in fiction or film than in history texts. I had never heard of Emma Edmonds, but I came to find her story the most fascinating of them all.

It seems that the more documented the subject—Van Lew, for example—the closer Abbott’s portrait comes to reliable biography.  Beyond that, the imaginative seems to intrude, indeed dominate. The astonishing tale of Emma Edmonds has her not only impersonating a male Union soldier, but also variously posing as an Irish peddler and in blackface disguised as a contraband, engaged in thrilling espionage missions behind enemy lines! It rang of the stuff that Thomas Berger’s Little Big Man was made of. I was suitably sucked in, but also wary. And rightly so: Abbott’s version of Emma Edmonds’ life is based almost entirely on Edmonds’ own memoir, with little that corroborates it, but the author doesn’t bother to reveal that in the narrative. That Edmonds pretended to be a man in order to enlist seems plausible; her spy missions are perhaps only fantasy. We simply don’t know; a true historian would help us draw conclusions. Abbott seems content to let it play out as so much drama to tickle her audience.

But worst of all is when the time comes to reveal the fate of the luckless Confederate spy Greenhow, who drowns when her lifeboat capsizes with Union vessels bearing down on the steamer she abandoned; it is the moment where the superlative talent of Abbott’s pen collides with her concomitant disloyalty to scholarship:

She was sideways, upside down, somersaulting inside the wet darkness. She screamed noiselessly, the water rushing in. She tried to hold her breath—thirty seconds, sixty, ninety—before her mouth gave way and water filled it again. Tiny streams of bubbles escaped from her nostrils. A burning scythed through her chest. That bag of gold yanked like a noose around her neck. Her hair unspooled and leeched to her skin, twining around her neck. She tried to aim her arms up and her legs down, to push and pull, but every direction seemed the same. No moonlight skimmed along the surface, showing her the way; there was no light at all. [p389]

Entertaining, right? Outstanding writing, correct? Solid history—of course not! Imagining Greenhow’s final agonizing moments of life with a literary flourish may very well enrich the pages of a work of fiction, but it is nothing less than an outrage to a work of history.

This book was a fun read. Were it a novel I would likely give it high marks. But that is not how it is packaged. Emma Edmonds pretended to be a man to save the Union. Karen Abbott pretends to be a historian to sell books. Both make for great stories. But don’t confuse either with reliable history.

Featured

Review of: The Etruscans: Lost Civilizations, by Lucy Shipley

When I visited New York’s Metropolitan Museum of Art some years ago, the object I found most stunning was the “Monteleone Chariot,” a sixth-century Etruscan bronze chariot inlaid with ivory.  I stood staring at it, transfixed, long enough for my wife to shuffle her feet impatiently. Still I lingered, dwelling on every detail, especially the panels depicting episodes from the life of Homeric hero Achilles. By that time, I had read The Iliad more than once, and had long been immersed in studies of ancient Greece. How was it then, I wondered, that I could speak knowledgeably about Solon and Pisistratus, yet know so little about the Etruscans who crafted that chariot in the same century those notables walked the earth?

Long before anyone had heard of the Romans, city-states of Etruria dominated the Italian peninsula and, along with Carthage and a handful of Greek poleis, the central Mediterranean as well. Later, Rome would absorb, crush or colonize all of them. In the case of the Etruscans, it was to be a little of each. And somehow, somewhat incongruously, over the millennia Etruscan civilization—or at least what the living, breathing Etruscans would have recognized as such—has been lost to us. But not lost in the way we usually think of “lost civilizations,” like Teotihuacan, for instance, or the Indus Valley, where what remains are ruins of a vanished culture that disappeared from living memory, an undeciphered script, and even the uncertain ethnicity of its inhabitants. The Etruscans, on the other hand, were never forgotten, their alphabet can be read although their language largely defies translation, and their DNA lingers in at least some present-day Italians. Yet, by all accounts they are nevertheless lost, and tantalizingly so.

Such a conundrum breeds frustration, of course: Romans supplanted the Etruscans but hardly exterminated them. Moreover, unlike other civilizations deemed “lost to history,” the Etruscans appear in ancient texts going as far back as Hesiod. There are also hundreds of excavated tombs, rich with decorative art and grave goods, the latter top-heavy with Greek imports they clearly treasured.  So how can we know so much about the Etruscans and at the same time so little? Fortunately, Lucy Shipley, who holds a PhD in Etruscan archaeology, comes to a rescue of sorts with her well-written, delightful contribution to the scholarship, entitled simply The Etruscans, a volume in the digest-sized Lost Civilizations series published by Reaktion Books.

Most Etruscan studies are dominated by discussions of the ancient sources and—most prominently—the tombs, which are nothing short of magnificent. But where does that lead us? Herodotus references the Etruscans, as does Livy. But are the sources reliable? Rather dubious, as it turns out. Herodotus may be a dependable chronicler of the Hellenes, but anyone who has read his comically misguided account of Egyptian life and culture is aware how far he can stray from reality. And Roman authors such as Livy routinely trumpeted a decidedly negative perspective, most evident in disdainful memories of the unwelcome semi-legendary Etruscan kings who are said to have ruled Rome until the overthrow of “Tarquin the Proud” in 509 BCE.

Then there are the tombs. Attempts to extrapolate what ancient life was like from the art that decorates the tombs of the dead—awe inspiring as it may be—can present a distorted picture (pun fully intended!) that ignores all but the wealthiest elite slice of the population. Much as Egyptology’s one-time obsession with pyramids and lists of pharaohs tended to obscure the no less interesting lives of the non-royal—such as those of the workers who collected daily beer rations and left graffiti within the walls of pyramids they constructed—the emphasis on tombs that is standard to Etruscan studies reveals little of the lives of the vast majority of ordinary folks who peopled their world.

Shipley neatly sidesteps these traditional traps by refusing to be constrained by them. Instead, she relies on her training as an archaeologist to ask questions: what do we know about the Etruscans and how do we know it? And, perhaps more critically: what don’t we know and why don’t we know it? In the process, she brings a surprisingly fresh look to an enigmatic people in a highly readable narrative suitable to both academic and popular audiences. Arranged thematically rather than chronologically, the author selects a specific artifact or site for each chapter to serve as a visual trigger for the discussion.  Because Shipley is so talented with a pen, it is worth pausing to let her explain her methodology in her own words:

Why focus on the archaeology? Because it is the very materiality, the physicality, the toughness and durability of things and the way they insidiously slip and slide into every corner of our lives that makes them so compelling … We are continually making and remaking ourselves, with the help of things. I would argue that the past is no different in this respect. It’s through things that we can get at the people who made, used and ultimately discarded them—their projects of self-production are as wrapped up in stuff as our own. And always, wrapped up in these things, are fundamental questions about how we choose to be in the world, questions that structure our actions and reactions, questions that change and challenge how we think and what we feel. Questions and objects—the two mainstays of human experience.  [p19-20]

Shipley’s approach succeeds masterfully. Because many of these objects—critical artifacts for the archaeologist but often also spectacular works of art for the casual observer—are rendered in full color in this striking edition, the reader is instantly hooked: effortlessly chasing the author’s captivating prose down a host of intriguing rabbit holes in pursuit of answers to the questions she has mated with these objects.  Along the way, she showcases the latest scholarship with a concise treatment of a broad range of topics informed by the kind of multi-disciplinary research that defines twenty-first century historical inquiry.

This includes DNA studies of both cattle and human populations in an attempt to resolve the long debate over Etruscan origins. While Herodotus and legions of other ancient and modern detectives have long pointed to legendary migrations from Anatolia, it turns out that the Etruscans are likely autochthonous, speaking a pre-Indo-European language that may be related to the one spoken by Ötzi, the mummified iceman, thousands of years ago. Shipley also takes the time to explain how it is that we can read enough of the Etruscan alphabet to decipher proper names while remaining otherwise frustrated in efforts aimed at meaningful translation. Much that we identify as Roman was borrowed from Etruria, but as Rome assimilated the Etruscans over the centuries, their language was left behind. Later, Etruscan literature—like all too much of the classical world—fell victim to the zeal of early Christians in campaigns to purge any remnants of paganism. Most offensive in this regard were writings that described the practices of the “haruspex,” a specialist who sought to divine the future by examining the livers of sacrificial animals, an Etruscan ritual later integrated into Roman religious practices. Texts of haruspices appear prominently in the “hit lists” drawn up by Christian thinkers Tertullian and Arnobius.

My favorite chapter is entitled “Super Rich, Invisible Poor,” which highlights the inevitable distortion that results from the attention paid to the exquisite art and grave goods of the wealthy elite at the expense of the sizeable majority of the inhabitants of a dozen city-states comprised of numerous towns, villages and some larger cities with populations thought to number in the tens of thousands. Although, to be fair, this has hardly been deliberate: there remains a stark scarcity in the archaeological record of the teeming masses, so to speak. While it may smack of the cliché, the famous aphorism “Absence of evidence is not evidence of absence” should be triple underscored here! The Met’s Monteleone Chariot, originally part of an elaborate chariot burial, makes an appearance in this chapter, but perhaps far more fascinating is a look at the great complex of workshops at a site called Poggio Civitate, more than a hundred miles from Monteleone, where skilled craftspeople labored to produce a whole range of goods in the same century that chariot was fashioned. But what of those workers? There seemed to be no trace of them. You can clearly detect the author’s delight as she describes recent excavations that uncovered remains of a settlement that likely housed them. Shipley returns again and again to her stated objective of connecting the material culture to the living Etruscans who were once integral to it.

Another chapter worthy of superlatives is “Sex, Lives and Etruscans.” While it is tempting to impose modern notions of feminism on earlier peoples, Etruscan women do seem to have claimed lives of far greater independence than their classical contemporaries in Greece and Rome. And there are also compelling hints at an openness in sexuality—including wife-sharing—that horrified ancient observers who nevertheless thrilled in recounting licentious tales of wicked Etruscan behavior! Shipley describes tomb art that depicts overt sex acts with multiple partners, while letting the reader ponder whether legendary accounts of Etruscan profligacy are given to hyperbole or not.

In addition to beautiful illustrations and an engaging narrative, this volume also features a useful map, a chronology, recommended reading, and plenty of notes. It is rare that any author can so effectively tackle a topic so wide-ranging in such a compact format, so Shipley deserves special recognition for turning out such an outstanding work.  The Etruscans rightly belongs on the shelf of anyone eager to learn more about a people who certainly made a vital contribution to the history of western civilization.

Monteleone Chariot photo credit: Image is in the public domain. More about the Monteleone Chariot here: https://www.metmuseum.org/art/collection/search/247020

I reviewed other books in the Lost Civilizations series here:

Review of: The Indus: Lost Civilizations, by Andrew Robinson

Review of: Egypt: Lost Civilizations, by Christina Riggs

Featured

Review of: Reaganland: America’s Right Turn 1976-1980, by Rick Perlstein

In Hearts in Atlantis, Stephen King channels the fabled lost continent as metaphor for the glorious promise of the sixties that vanished so utterly that nary a trace remains. Atlantis sank, King declares bitterly in his fiction. He has a point. If you want to chart the actual moments those collective hopes and dreams were swamped by currents of reaction and finally submerged in the merciless wake of a new brand of unforgiving conservatism, you absolutely must turn to Reaganland: America’s Right Turn 1976-1980, Rick Perlstein’s brilliant, epic political history of an era too often overlooked, one that echoes in the America of 2020 with greater resonance than perhaps any other before or since. But be warned: you may need forearms even bigger than those of the sign-spinning guy in the Progressive commercial to handle this dense, massive 914-page tome that is nevertheless so readable and engaging that your wrists will tire before your interest flags.

Reaganland is a big book because it is actually several overlapping books. It is first and foremost the history of the United States at an existential crossroads. At the same time, it is a close account of the ill-fated presidency of Jimmy Carter. And, too, it is something of a “making of the president 1980.” This is truly ambitious stuff, and that Perlstein largely succeeds in pulling it off should earn him wide and lasting accolades both as a historian and an observer of the American experience.

Reaganland is the final volume in a series launched nearly two decades ago by Perlstein, a progressive historian, that chronicles the rise of the right in modern American politics. Before the Storm focused on Goldwater’s ascent under the banner of far-right conservatism. This was followed by Nixonland, which profiled a president who thrived on division and earned the author outsize critical acclaim; and, The Invisible Bridge, which revealed how Ronald Reagan—stridently unapologetic for the Vietnam debacle, for Nixon’s crimes, and for angry white reaction to Civil Rights—brought notions once the creature of the extreme right into the mainstream, and began to pave the road that would take him to the White House. Reaganland is written in the same captivating, breathless style Perlstein made famous in his earlier works, but he has clearly honed his craft: the narrative is more measured, less frenetic, and is crowned with a strong concluding chapter—something conspicuously absent in The Invisible Bridge.

The grand—and sometimes allied—causes of the Sixties were Civil Rights and opposition to the Vietnam War, but concomitant social and political revolutions spawned a myriad of others that included antipoverty efforts for the underprivileged, environmental activism, equal treatment for homosexuals and other marginalized groups such as Native Americans and Chicano farm workers, constitutional reform, consumer safety, and most especially equality for women, of which the right to terminate a pregnancy was only one component. The common theme was inclusion, equality, and cultural secularism. The antiwar movement came not only to dominate but virtually to overshadow all else, yet at the same time it served as a unifying factor that stitched together a kind of counterculture coat of many colors to oppose an often stubbornly unyielding status quo. When the war wound down, that fabric frayed. Those who once marched together now marched apart.

This fragmentation was not generally adversarial; groups once in alliance simply went their own ways, organically seeking to advance the causes dear to them. And there was much optimism. Vietnam was history. Civil Rights had made such strides, even if there remained so much unfinished business. Much of what had been counterculture appeared to have entered the mainstream. It seemed like so much was possible. At Woodstock, Grace Slick had declared that “It’s a new dawn,” and the equality and opportunity that assurance heralded actually seemed within reach. Yet, there were unseen, menacing clouds forming just beneath the horizon.

Few suspected that forces of reaction quietly gathering strength would one day unite to destroy the progress towards a more just society that seemed to lie just ahead. Perlstein’s genius in Reaganland lies in his meticulous identification of each of these disparate forces, revealing their respective origin stories and relating how they came to maximize strength in a collective embrace. The Equal Rights Amendment, riding on a wave of massive bipartisan public support, was but three states away from ratification when a bizarre woman named Phyllis Schlafly seemingly crawled out of the woodwork to mobilize legions of conservative women to oppose it. Gay people were on their way to greater social acceptance via local ordinances which one by one went down to defeat after former beauty queen and orange juice hawker Anita Bryant mounted what turned into a nationwide campaign of resistance. The landmark Roe v. Wade case that guaranteed a woman’s right to choose sparked the birth of a passionate right-to-life movement that soon became the central creature of the emerging Christian evangelical “Moral Majority,” which found easy alliance with those condemning gays and women’s lib. Most critically—in a key component that was to have lasting implications, as Perlstein deftly underscores—the Christian right also pioneered a political doctrine of “co-belligerency” that encouraged groups otherwise not aligned to make common cause against shared “enemies.” Sure, Catholics, Mormons and Jews were destined to burn in a fiery hell one day, reasoned evangelical Protestants, but in the meantime they could be enlisted as partners in a crusade to combat abortion, homosexuality and other miscellaneous signposts of moral decay besetting the nation.

That all this moral outrage could turn into a formidable political dynamic seems to have been largely unanticipated. But, as Perlstein reminds us, maybe it should not have been so surprising: candidate Jimmy Carter, himself deeply religious and well ahead in the 1976 race for the White House, saw a precipitous fifteen-point drop in the polls after an interview in Playboy where he admitted that he sometimes lusted in his heart. Perhaps the sun wasn’t quite ready to come up for that new dawn after all.

Of course, the left did not help matters, often ideologically unyielding in its demand to have it all rather than settle for some, as well as blind to unintended consequences. Nothing was to alienate white members of the national coalition to advance civil rights for African Americans more than busing, a flawed shortcut that ignored the greater imperative for federal aid to fund and rebuild decaying inner-city schools, de facto segregated by income inequality. Efforts to advance what was seen as a far too radical federal universal job guarantee ended up energizing opposition that denied victory to other avenues of reform. And there’s much more. Perlstein recounts the success of Ralph Nader’s crusade for automobile safety, which exposed carmakers for deliberately skimping on relatively inexpensive design modifications that could have saved countless lives in order to turn out even greater profits. Auto manufacturers were finally brought to heel. Consumer advocacy became a thing, with widespread public support and frequent industry acquiescence. But even Nader—not unaware of consequences, unintended or otherwise—advised caution when a protégé pressed a campaign to ban TV ads for sugary cereals that targeted children, predicting with some prescience that “if you take on the advertisers you will end up with so many regulators with their bones bleached in the desert.” [p245] Captains of industry Perlstein terms “Boardroom Jacobins” were stirred to collective action by what was perceived as regulatory overreach, and big business soon joined hands to beat all such efforts back.

Meanwhile, subsequent to Nixon’s fall and Ford’s loss to Carter in 1976, pundits—not for the last time—prematurely foretold the extinction of the Republican Party, leaving stalwart policy wonks on the right seemingly adrift, clinging to their opposition to the pending SALT II arms agreement and the Panama Canal Treaty, furiously wielding oars of obstruction yet still lacking a reliable vessel to stem the tide. Bitterly opposed to the prevailing wisdom that counseled moderation to ensure not only relevance but survival, they chafed at accommodation with the Ford-Kissinger-Rockefeller wing of the party that preached détente abroad and compromise at home. They looked around for a new champion … and once again found Ronald Reagan!

The former Bedtime for Bonzo co-star and corporate shill had launched his political career railing against communists concealed in every cupboard, as well as shrewdly exploiting populist rage at long-haired antiwar demonstrators. As governor of California he directed an especially violent crackdown known as “Bloody Thursday” on non-violent protesters at UC Berkeley’s People’s Park that resulted in one death and hundreds of injuries after overzealous police fired tear gas and shotguns loaded with buckshot at the crowd. In a comment that eerily presaged Trump’s “very fine people on both sides” remark, Reagan declared that “Once the dogs of war have been unleashed, you must expect … that people … will make mistakes on both sides.” But a year later he was even less apologetic, proclaiming that “If it takes a bloodbath, let’s get it over with.” This was their candidate, who—remarkably one would think—had nearly snatched the nomination away from Ford in ’76, and then went on to cheer party unity while campaigning for Ford with even less enthusiasm than Bernie Sanders exhibited for Hillary Clinton in 2016. Many hold Reagan at least partially responsible for Ford’s loss in the general election.

But Reagan’s neglect of Ford left him neatly positioned as the front-runner for 1980. As conservatives dug in, others of the party faithful recoiled in horror, fearing a repeat of the drubbing at the polls they took in 1964 with Barry “extremism in defense of liberty is no vice” Goldwater at the top of the ticket. And Reagan did seem extreme, perhaps more so than Goldwater. The sounds of sabers rattling nearly drowned out his words every time he mentioned the U.S.S.R. And he said lots of truly crazy things, both publicly and privately, once even wondering aloud over dinner with columnist Jack Germond whether “Ford had staged fake assassination attempts to win sympathy for his renomination.” Germond later recalled that “He was always a man with a very loose hold on the real world around him.” [p617] Germond had a good point: Reagan once asserted that “Fascism was really the basis for the New Deal,” touted the valuable recycling potential of nuclear waste, and insisted that “trees cause more pollution than automobiles do”—prompting some joker at a rally to decorate a tree with a sign that said “Chop me down before I kill again.”

But Reagan had a real talent with dog whistles, launching his campaign with a speech praising “states’ rights” at a county fair near Philadelphia, Mississippi, where three civil rights workers were murdered in 1964. He once boasted he “would have voted against the Civil Rights Act of 1964,” claimed “Jefferson Davis is a hero of mine,” and bemoaned the Voting Rights Act as “humiliating to the South.” A whiff of racism also clung to his disdain for Medicaid recipients as “a faceless mass, waiting for handouts,” and his recycling ad nauseam of his dubious anecdote of a “Chicago welfare queen” with twelve Social Security cards who bilked the government out of $150,000. Unreconstructed whites ate this red meat up. Nixon’s “southern strategy” reached new heights under Reagan.

But a white southerner who was not a racist was actually the president of the United States. Despite the book’s title, the central protagonist of Reaganland is Jimmy Carter, a man who arrived at the Oval Office buoyed by public confidence rarely seen in the modern era—and then spent four years on a rollercoaster of support that plummeted far more often than it climbed. At one point his approval rating was a staggering 77% … at another 28%—only four points above where Nixon’s stood when he resigned in disgrace. These days, as the nonagenarian Carter has established himself as the most impressive ex-president since John Quincy Adams, we tend to forget what a truly bad president he was. Not that he didn’t have good intentions, only that—like Woodrow Wilson six decades before him—he was unusually adept at using them to pave his way to hell. A technocrat with an arrogant certitude that he had all the answers, he arrived on the Beltway with little idea of how the world worked, a family in tow that seemed right out of central casting for a Beverly Hillbillies sequel. He often gravely lectured the public on what was really wrong with the country—and then seemed to lay blame upon Americans for outsize expectations. And he dithered, tacking this way and that, alienating both sides of the aisle in a feeble attempt to stand above the fray.

In fairness, he had a lot to deal with. Carter inherited a nation more socio-economically shaken than any since the 1930s. In 1969, the United States had proudly put a man on the moon. Only a few short years later, a country weaned on wallowing in American exceptionalism saw factories shuttered, runaway inflation, surging crime, cities on the verge of bankruptcy, and long lines just to gas up your car at an ever-skyrocketing cost. And that was before a nuclear power plant melted down, Iranians took fifty-two Americans hostage, and Soviet tanks rolled into Afghanistan. All this was further complicated by a new wave of media hype that saw the birth of the “bothersiderism” that gives equal weight to scandals legitimate or spurious—an unfortunate ingredient that remains baked into current reporting.

Perhaps the most impressive part of Reaganland is Perlstein’s superlative rendering of what America was like in the mid-70s. Stephen King’s horror is often so effective at least in part due to the fads, fast food, and pop music he uses as so many props in his novels. If that stuff is real, perhaps ghosts or killer cars could be real, as well. Likewise, Perlstein brings a gritty authenticity home by stepping beyond politics and policy to enrich the narrative with headlines of serial killers and plane crashes, of assassination and mass suicide, adroitly resurrecting the almost numbing sense of anxiety that informed the times. De Niro’s Taxi Driver rides again, and the reader winces through every page.

Carter certainly had his hands full, especially as the hostage crisis dragged on, but it hardly ranked up there with Truman’s Berlin Airlift or JFK’s Cuban missiles. There were indeed crises, but Carter seemed to manufacture even more—and to get in his own way most of the time. And his attempts to reassure consistently backfired, fueling even more national uncertainty. All this offered a perfect storm of opportunity for right-wing elements who discovered co-belligerency was not only a tactic but a way of life. Against all advice and all odds, Reagan—retaining his “very loose hold on the real world around him”—saw no contradiction in bringing his brand of conservatism to join forces with those maligning gays, opposing abortion, stonewalling the ERA, and boosting the Christian right. Corporate CEOs—Perlstein’s “Boardroom Jacobins”—already on the defensive, were more than ready to finance it. Carter, flailing, played right into their hands. Already the most right-of-center Democratic president of the twentieth century, he too shared that weird vision of the erosion of American morality. And Perlstein reminds us that the debacle of financial deregulation usually traced back to Reagan actually began on Carter’s watch, the seeds sown for the wage stagnation, growth of income inequality, and endless cycles of recession that have been de rigueur in American life ever since. Carter failed to make a good closing argument for why he should be re-elected, and the unthinkable occurred: Ronald Reagan became president of the United States. The result was that the middle-class dream that seemed so much in jeopardy under Carter was permanently crushed once Reagan’s regime of tax cuts, deregulation, and the supply-side approach George H.W. Bush rightly branded as “voodoo economics” became standard operating policy. Progressive reform sputtered and stalled. The little engine that FDR had ignited to manifest a social and economic miracle for America crashed and burned forever on the vanguard of Reaganomics.

Some readers might be intimidated by the size of Reaganland, but it’s a long book because it tells a long story, and it contains lots of moving parts. Perlstein succeeds magnificently because he demonstrates how all those parts fit together, replete with the nuance and complexity critical to historical analysis. Is it perfect? Of course not. I’m a political junkie, but there were certain segments on policy and legislative wrangling that seemed interminable. And if Perlstein had mentioned “Boardroom Jacobins” just one more time, I might have screamed. But these are quibbles. This is without doubt the author’s finest book, and I highly recommend it, as both an invaluable reference work and a cover-to-cover read.

In Hearts in Atlantis, Stephen King imagines the sixties as bookended by JFK’s 1963 assassination and John Lennon’s murder in 1980. Perlstein seems to follow that same school of thought, for the final page of Reaganland also wraps up with Lennon’s untimely death. In an afterword to his work of fiction, King muses: “Although it is difficult to believe, the sixties are not fictional; they actually happened.” If you are more partial to nonfiction and want the real story of how the sixties ended, of how Atlantis sank, you must read Reaganland.

[Note: this review goes to press just a few days before the most consequential presidential election in modern American history. This book and this review are reminders that elections do matter.]

I reviewed Perlstein’s previous books here:

Review of: The Invisible Bridge: The Fall of Nixon and the Rise of Reagan by Rick Perlstein

Review of: Nixonland: The Rise of a President and the Fracturing of America, by Rick Perlstein

Featured

Review of: The Awakening: A History of the Western Mind AD 500-1700, by Charles Freeman

Nearly two decades have passed since Charles Freeman published The Closing of the Western Mind: The Rise of Faith and the Fall of Reason, a brilliant if controversial examination of the intellectual totalitarianism of Christianity that dated to the dawn of its dominance of Rome and the successor states that followed the fragmentation of the empire in the West.  Freeman argues persuasively that the early Christian church vehemently and often brutally repudiated the centuries-old classical tradition of philosophical enquiry and ultimately drove it to extinction with a singular intolerance that crushed competing ideas under the weight of a monolithic faith. Not only were pagan religions prohibited, but there would be virtually no provision for any dissent from official Christian doctrine, such that those who advanced even the most minor challenges to interpretation were branded heretics and sent into exile or put to death. That tragic state was to define medieval Europe for more than a millennium.

Now the renowned classical historian has returned with a follow-up epic, The Awakening: A History of the Western Mind AD 500-1700, recently published in the UK (and slated for U.S. release, possibly with a different title), which recounts the slow—some might brand it glacial—evolution of Western thought that restored legitimacy to independent examination and analysis, and that eventually led to a celebration, albeit a cautious one, of reason over blind faith. In the process, Freeman reminds us that quality, engaging narrative history has not gone extinct, while demonstrating that it is possible to produce a work so well-written it is readable by a general audience while meeting the rigorous standards of scholarship demanded by academia. That this is no small achievement will be evident to anyone who—as I do—reads both popular and scholarly history and is struck by the stultifying prose that often typifies the academic. In contrast, here Freeman takes a skillful pen to reveal people, events and occasionally obscure concepts, much of which may be unfamiliar to those who are not well versed in the medieval period.

The fall of Rome remains a subject of debate for historians. While traditional notions of sudden collapse given to pillaging Vandals leaping over city walls and fora engulfed in flames have long been revised, competing visions of a more gradual transition that better reflect the scholarship sometimes distort the historiography to minimize both the fall and what was actually lost. And what was lost was indeed dramatic and incalculable. If, to take just one example, sanitation can be said to be a mark of civilization, the Roman aqueducts and complex network of sewers that fell into disuse and disrepair meant that fresh water was no longer reliable, and sewage that bred pestilence was to be the norm for fifteen centuries to follow. It was not until the late nineteenth century that sanitation in Europe even approached Roman standards. So, whatever the timeline—rapid or gradual—there was indeed a marked collapse. Causes are far more elusive.  But Gibbon’s largely discredited casting of Christianity as the villain that brought the empire down tends to raise hackles in those who suspect someone like Freeman of attempting to point those fingers once more. But Freeman has nothing to say about why Rome fell, only what followed. The loss of the pursuit of reason was to be as devastating for the intellectual health of the post-Roman world in the West as sanitation was to prove for its physical health. And here Freeman does squarely take aim at the institutional Christian church as the proximate cause of the subsequent consequences for Western thought. This is well underscored in the bleak assessment that follows in one of the final chapters of The Closing of the Western Mind:

Christian thought that emerged in the early centuries often gave irrationality the status of a universal “truth” to the exclusion of those truths to be found through reason. So the uneducated was preferred to the educated and the miracle to the operation of natural laws … This reversal of traditional values became embedded in the Christian tradi­tion … Intellectual self-confidence and curiosity, which lay at the heart of the Greek achievement, were recast as the dreaded sin of pride. Faith and obedience to the institutional authority of the church were more highly rated than the use of reasoned thought. The inevitable result was intellectual stagnation … [p322]

Awakening picks up where Closing leaves off as the author charts the “Reopening of the Western Mind” (this was the working title of his draft!), but the new work is marked by far greater optimism. Rather than dwell on what has been lost, Freeman puts focus not only upon the recovery of concepts long forgotten but how rediscovery eventually sparked new, original thought, as the spiritual and later increasingly secular world danced warily around one another—with a burning heretic all too often staked between them on Europe’s fraught intellectual ballroom floor. Because the timeline is so long—encompassing twelve centuries—the author sidesteps what could have been a dull chronological recounting of this slow progression to narrow his lens upon select people, events and ideas that collectively marked milestones along the way, gathered into thematic chapters that broaden the scope. This approach thus transcends what might have been otherwise parochial to brilliantly convey the panoramic.

There are many superlative chapters in Awakening, including the very first one, entitled “The Saving of the Texts 500-750.” Freeman seems to delight in detecting the bits and pieces of the classical universe that managed to survive not only vigorous attempts by early Christians to erase pagan thought but the unintended ravages of deterioration that is every archivist’s nightmare. Ironically, the sacking of cities in ancient Mesopotamia begat conflagrations that baked inscribed clay tablets, preserving them for millennia. No such luck for the Mediterranean world, where papyrus scrolls, the favored medium for texts, fell to war, natural disasters, deliberate destruction, as well as to entropy—a familiar byproduct of the second law of thermodynamics—which was not kind in prevailing environmental conditions. We are happily still discovering papyri preserved by the dry conditions in parts of Egypt—the oldest dating back to 2500 BCE—but it seems that the European climate doomed papyrus to a scant two hundred years before it was no more.

Absent printing presses or digital scans, texts were preserved by painstakingly copying them by hand, typically onto vellum, a kind of parchment made from animal skins with a long shelf life, most frequently in monasteries by monks for whom literacy was deemed essential. But what to save? The two giants of ancient Greek philosophy, Plato and Aristotle, were preserved, but the latter far more grudgingly. Fledgling concepts of empiricism in Aristotle made the medieval mind uncomfortable. Plato, on the other hand, who pioneered notions of imaginary higher powers and perfect forms, could be (albeit somewhat awkwardly) adapted to the prevailing faith in the Trinity, and thus elements of Plato were syncretized into Christian orthodoxy. Of course, as we celebrate what was saved it is difficult not to likewise mourn what was lost to us forever. Fortunately, the Arab world put a much higher premium on the preservation of classical texts—an especially eclectic collection that included not only metaphysics but geography, medicine and mathematics. When centuries later—as Freeman highlights in Awakening—these works reached Europe, they were to prove instrumental, tinder to the embers that sparked first a revival and then a revolution in science and discovery.

My favorite chapter in Awakening is “Abelard and the Battle for Reason,” which chronicles the extraordinary story of the scholastic philosopher Peter Abelard (1079-1142)—who flirted with the secular and attempted to connect rationalism with theology—told against the flamboyant backdrop of Abelard’s tragic love affair with Héloïse, a tale that yet remains the stuff of popular culture. In a fit of pique, Héloïse’s uncle was to have Abelard castrated. The church attempted something similar, metaphorically, with Abelard’s teachings, which led to an order of excommunication (later lifted), but despite official condemnation Abelard left a dramatic mark on European thought that long lingered.

There is too much material in a volume this thick to cover competently in a review, but the reader will find much of it well worth the time. Of course, some will be drawn to certain chapters more than others. Art historians will no doubt be taken with the one entitled “The Flowering of the Florentine Renaissance,” which for me hearkened back to the best elements of Kenneth Clark’s Civilisation, showcasing not only the evolution of European architecture but the author’s own adulation for both the art and the engineering feat demonstrated by Brunelleschi’s dome, the extraordinary fifteenth century adornment that crowns the Florence Cathedral. Of course, Freeman does temper his praise for such achievements with juxtaposition to what once had been, as in a later chapter that recounts the relocation of an ancient Egyptian obelisk weighing 331 tons that had been placed on the Vatican Hill by the Emperor Caligula, a feat seen as remarkable at the time. In a footnote, Freeman reminds us that: “One might talk of sixteenth-century technological miracles, but the obelisk had been successfully erected by the Egyptians, taken down by the Romans, brought by sea to Rome and then re-erected there—all the while remaining intact!” [p492n]

If I were to find a fault with Awakening, it is that it does not, in my opinion, go far enough to emphasize the impact of the Columbian Experience on the reopening of the Western mind.  There is a terrific chapter devoted to the topic, “Encountering the Peoples of the ‘Newe Founde Worldes,’” which explores how the discovery of the Americas and its exotic inhabitants compelled the European mind to examine other human societies whose existence had never before even been contemplated. While that is a valid avenue for analysis, it hardly takes into account just how earth-shattering 1492 turned out to be—arguably the most consequential milestone for human civilization (and the biosphere!) since the first cities appeared in Sumer—in a myriad of ways, not least the exchange of flora and fauna (and microbes) that accompanied it. But this significance was perhaps greatest for Europe, which had been a backwater, long eclipsed by China and the Arab Middle East.  It was the Columbian Experience that reoriented the center of the world, so to speak, from the Mediterranean to the Atlantic, which was exploited to the fullest by the Europeans who prowled those seas and first bridged the continents. It is difficult to imagine the subsequent accomplishments—intellectual and otherwise—had Columbus not landed at San Salvador. But this remains just a quibble that does not detract from Freeman’s overall accomplishment.

Full disclosure: Charles Freeman and I began a long correspondence via email following my review of Closing. I was honored when he selected me as one of his readers for his drafts of Awakening, which he shared with me in 2018, but at the same time I approached this responsibility with some trepidation: given Freeman’s credentials and reputation, what if I found the work to be sub-standard? What if it was simply not a good book? How would I address that? As it was, these worries turned out to be misplaced. It is a magnificent book and I am grateful to have read much of it as a work in progress, and then again after publication. I did submit several pages of critical commentary to assist the author, to the best of my limited abilities, in honing a better final product, and to that end I am proud to see my name appear in the “Acknowledgments.”

I do not usually talk about formats in book reviews, since the content is typically neither enhanced nor diminished by its presentation in either a leather-bound tome or a mass-market paperback or the digital ink of an e-book, but as a bibliophile I cannot help but offer high praise to this beautiful, illustrated edition of Awakening published by Head of Zeus, even accented by a ribbon marker. It has been some time since I have come across a volume this attractive without paying a premium for special editions from Folio Society or Easton Press, and in this case the exquisite art that supplements the text transcends the ornamental to enrich the narrative.

Interest in the medieval world has perhaps waned over time, but neglecting it is, of course, a mistake. How we got from point A to point B is an important story, and it has never been told as well as Freeman tells it in Awakening. And it is not an easy story to tell. As the author acknowledges in a concluding chapter: “Bringing together the many different elements that led to the ‘awakening of the western mind’ is a challenge. It is important to stress just how bereft Europe was, economically and culturally, after the fall of the Roman empire compared to what it had been before.” [p735]

Those of us given to dystopian fiction, concerned with the fragility of republics and civilization, and wondering aloud in the midst of a global pandemic and the rise of authoritarianism what our descendants might recall of us if it all fell to collapse tomorrow cannot help but be intrigued by how our ancestors coped—for better or for worse—after Rome was no more. If you want to learn more about that, there might be no better covers to crack than Freeman’s The Awakening. I highly recommend it.

NOTE: My review of Freeman’s earlier work appears here:

Review of: The Closing of the Western Mind: The Rise of Faith and the Fall of Reason by Charles Freeman

Featured

Review of: Seduction: Sex, Lies, and Stardom in Howard Hughes’s Hollywood, by Karina Longworth

“When people ask me if I went to film school, I tell them, ‘No, I went to films,’” Quentin Tarantino famously quipped. While I’m no iconic director, I too “went to films,” in a manner of speaking. I was raised by my grandmother in the 1960s—with a little help from a 19” console TV in the living room and seven channels delivered via rooftop antenna. When cartoons, soaps, or prime time westerns and sitcoms like Bonanza and Bewitched weren’t broadcasting, all the remaining airtime was filled with movies.  All kinds of movies: drama, screwball comedies, war movies, gangster movies, horror movies, sci-fi, musicals, love stories, murder mysteries—you name the genre, it ran. And ran. And ran. For untold hours and days and weeks and years.

Grandma—rest in peace—loved movies. Just loved them. All kinds of movies. But she didn’t have much of a discerning eye: for her, The Treasure of the Sierra Madre was no better or worse than Bedtime for Bonzo. At first, I didn’t know any better either, and whether I was four or fourteen I watched whatever was on, whenever she was watching. But I took a keen interest. The immersion paid dividends. My tastes evolved. One day I began calling them films instead of movies, and even turned into something of a “film geek,” arguing against the odds that Casablanca is a better picture than Citizen Kane, promoting Kubrick’s Paths of Glory over 2001, and shamelessly confessing to screening Tarantino’s Kill Bill I and II back-to-back more than a dozen times. In other words, I take films pretty seriously. So, when I noticed that Seduction: Sex, Lies, and Stardom in Howard Hughes’s Hollywood was up for grabs in an early reviewer program, I jumped at the opportunity. I was not to be disappointed.

In an extremely well-written and engaging narrative, film critic and journalist Karina Longworth has managed to turn out a remarkable history of Old Hollywood, in the guise of a kind of biography of Howard Hughes. In films, a “MacGuffin” is something insignificant or irrelevant in itself that serves as a device to trigger the plot. Examples include the “Letters of Transit” in Casablanca, the statuette in The Maltese Falcon, and the briefcase in Tarantino’s Pulp Fiction. Howard Hughes himself is the MacGuffin of sorts in Seduction, which is far less about him than his female victims and the peculiar nature of the studio system that enabled predators like Hughes and others who dominated the motion picture industry.

Howard Hughes was once one of the most famous men in America, known for his wealth and genius, a larger-than-life legend noted both for his exploits as an aviator and his flamboyance as a film producer given to extravagance and star-making. But by the time I was growing up, all that was in the distant past, and Hughes was little more than a specter in supermarket tabloids, an eccentric billionaire turned recluse. It was later said that he spent most days alone, sitting naked in a hotel room watching movies. Long unseen by the public, at his death he was nearly unrecognizable, skeletal and covered in bedsores. Director Martin Scorsese resurrected him for the big screen in his epic biopic The Aviator, headlined by Leonardo DiCaprio and a star-studded cast, which showcased Hughes as a heroic and brilliant iconoclast who in turn took on would-be censors, the Hollywood studio system, the aviation industry and anyone who might stand in the way of his quest for glory—all while courting a series of famed beauties. Just barely in frame was the mental instability, the emerging obsessive-compulsive disorder that later brought him down.

Longworth finds Hughes a much smaller and more despicable man, an amoral narcissist and manipulator who was seemingly incapable of empathy for other human beings. (Yes, there is indeed a palpable resemblance to a certain president!) While Hughes carefully crafted an image of a titan who dominated the twin arenas of flight and film, in Longworth’s portrayal he seems to crash more planes than he lands, and churns out more bombs than blockbusters. In the public eye, he was a great celebrity, but off-screen he comes off as an unctuous villain, a charlatan whose real skill set was self-promotion empowered by vast sums of money and a network of hangers-on. The author gives him his due by denying him top billing as the star of the show, rather giving scrutiny to those in his orbit, the females in supporting roles whom he in turn dominated, exploited and discarded. You can almost hear refrains of Carly Simon’s You’re So Vain interposed in the narrative, taunting the ghost of Hughes with the chorus: “You probably think this song is about you”—which by the way would make a great soundtrack if there’s ever a screen adaptation of the book.

If not Hughes, the real main character is Old Hollywood itself, and with a skillful pen, Longworth turns out a solid history—a decidedly feminist history—of the place and time that is nothing less than superlative. The author recreates for us the early days before the tinsel, when a sleepy little “dry” town no one had ever heard of almost overnight became the celluloid capital of the country. Pretty girls from all over America would flock there on a pilgrimage to fame; most disappointed, many despairing, more than a few dead. Nearly all were preyed upon by a legion of the contemptible, used and abused with a thin tissue of lies and promises that anchored them not only to the geography but to the predominantly male movers and shakers who ran the studio system that dominated everything else. This is a feminist history precisely because Longworth focuses on these women—more specifically ten women involved with Hughes—and through them brilliantly captures Hollywood’s golden age as manifested in both the glamorous and the tawdry.

Howard Hughes was not the only predator in Tinseltown, of course, but arguably its most depraved. If Hollywood power-brokers overpromised fame to hosts of young women just to bed them, for Hughes sex was not even always the principal motivation. It went way beyond that, often to twisted ends perhaps unclear to even Hughes himself. He indeed took many lovers, but those he didn’t sleep with were not exempt from his peculiar brand of exploitation. What really got Howard Hughes off was exerting power over women, controlling them, owning them. He virtually enslaved some of these women, stripping them of their individual freedom of will for months or even years with vague hints at eventual stardom, abetted by assorted handlers appointed to spy on them and report back to him. Even the era of “Me Too” lacks the appropriate vocabulary to describe his level of “creepy!”

One of the women he apparently did not take to bed was Jane Russell. Hughes cast the beautiful, voluptuous nineteen-year-old in The Outlaw, a film that took forever to produce and release largely due to his fetishistic obsession with Russell’s breasts—and the way these spilled out of her dress in a promotional poster that provoked the ire of censors. Longworth’s treatment of the way Russell unflappably endured her long association with Hughes—despite his relentless domination over her life and career—is just one of the many delightful highlights in Seduction.

The Outlaw, incidentally, was one of the movies I recall watching with Grandma back in the day.  Her notions of Hollywood had everything to do with the glamorous and the glorious, of handsome leading men and lovely leading ladies up on the silver screen. I can’t help wondering what she might think if she learned how those ladies were tormented by Hughes and other moguls of the time. I wish I could tell her about it, about this book. Alas, that’s not possible, but I can urge anyone interested in this era to read Seduction. If authors of film history could win an Academy Award, Longworth would have an Oscar on her mantle to mark this outstanding achievement.

Featured

Review of: Pox Americana: The Great Smallpox Epidemic of 1775-82, by Elizabeth A. Fenn

Imagine there’s a virus sweeping across the land claiming untold victims, the agent of the disease poorly understood, the population in terror of an unseen enemy that rages mercilessly through entire communities, leaving in its wake an exponential toll of dead. As this review goes to press amid an alarming spike in new Coronavirus cases, Americans don’t need to stretch their collective imagination very far to envisage that. But now look back nearly two and a half centuries and consider a still worse scenario: a war is on for the existential survival of our fledgling nation, a struggle compromised by mass attrition in the Continental Army due to another kind of virus, and the epidemic it spawns is characterized by symptoms and outcomes that are nothing less than nightmarish by any standard, then or now. For the culprit then was smallpox, one of the most dread diseases in human history.

This nearly forgotten chapter in America’s past left a deep impact on the course of the Revolution that has been long overshadowed by outsize events in the War of Independence and the birth of the Republic. This neglect has been masterfully redressed by Pox Americana: The Great Smallpox Epidemic of 1775-82, a brilliantly conceived and extremely well-written account by Pulitzer Prize-winning historian Elizabeth A. Fenn.  One of the advantages of having a fine personal library in your home is the delight of going to a random shelf and plucking off an edition that almost perfectly suits your current interests, a volume that has been sitting there unread for years or even decades, just waiting for your fingertips to locate it. Such was the case with my signed first edition of Pox Americana, a used bookstore find that turned out to be a serendipitous companion to my self-quarantine for Coronavirus, the great pandemic of our times.

As horrific as COVID-19 has been for us—as of this morning we are up to one hundred thirty-four thousand deaths and three million cases in the United States, a significant portion of the more than half million dead and nearly twelve million cases worldwide—smallpox, known as “Variola,” was far, far worse. In fact, almost unimaginably worse. Not only was it more than three times as contagious as Coronavirus, but rather than a mortality rate in the low single digits with COVID (the verdict’s not yet in), Variola on average claimed an astonishing thirty percent of its victims, who often suffered horribly in the course of the illness and into their death throes, while survivors were frequently left disfigured by extensive scarring, and many were left blind. Smallpox has a long history that dates back to at least the third century BCE, as evidenced in Egyptian mummies. There were reportedly still fifteen million cases a year as late as 1967. In between it claimed untold hundreds of millions of lives—some three hundred million in the twentieth century alone—until its ultimate eradication in 1980. There is perhaps some tragic irony that we are beset by Coronavirus on the fortieth anniversary of that milestone …
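A bit of back-of-the-envelope arithmetic on the figures just cited (my own rough calculation, not Fenn’s) makes the gulf between the two diseases plain:

\[
\mathrm{CFR}_{\mathrm{COVID}} \approx \frac{134{,}000\ \text{deaths}}{3{,}000{,}000\ \text{cases}} \approx 4.5\% \qquad \text{versus} \qquad \mathrm{CFR}_{\mathrm{Variola}} \approx 30\%
\]

And even that crude case fatality ratio likely overstates COVID’s true lethality, since many mild infections go uncounted, whereas the thirty percent figure for smallpox is attested across centuries of recorded outbreaks.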

I typically eschew long excerpts for reviews, but Variola was so horrifying and Fenn writes so well that I believe it would be a disservice to do other than let her describe it here:

Headache, backache, fever, vomiting, and general malaise all are among the initial signs of infection. The headache can be splitting; the backache, excruciating … The fever usually abates after the first day or two … But … relief is fleeting. By the fourth day … the fever creeps upward again, and the first smallpox sores appear in the mouth, throat, and nasal passages …The rash now moves quickly. Over a twenty-four-hour period, it extends itself from the mucous membranes to the surface of the skin. On some, it turns inward, hemorrhaging subcutaneously. These victims die early, bleeding from the gums, eyes, nose, and other orifices. In most cases, however, the rash turns outward, covering the victim in raised pustules that concentrate in precisely the places where they will cause the most physical pain and psychological anguish: The soles of the feet, the palms of the hands, the face, forearms, neck, and back are focal points of the eruption … If the pustules remain discrete—if they do not run together— the prognosis is good. But if they converge upon one another in a single oozing mass, it is not. This is called confluent smallpox … For some, as the rash progresses in the mouth and throat, drinking becomes difficult, and dehydration follows. Often, an odor peculiar to smallpox develops… Patients at this stage of the disease can be hard to recognize. If damage to the eyes occurs, it begins now … Scabs start to form after two weeks of suffering … In confluent or semiconfluent cases of the disease, scabbing can encrust most of the body, making any movement excruciating … [One observation of such afflicted Native Americans noted that] “They lye on their hard matts, the poxe breaking and mattering, and runing one into another, their skin cleaving … to the matts they lye on; when they turne them, a whole side will flea of[f] at once.” … Death, when it occurs, usually comes after ten to sixteen days of suffering. Thereafter, the risk drops significantly … and unsightly scars replace scabs and pustules … the usual course of the disease—from initial infection to the loss of all scabs—runs a little over a month. Patients remain contagious until the last scab falls off …  Most survivors bear … numerous scars, and some are blinded. But despite the consequences, those who live through the illness can count themselves fortunate. Immune for life, they need never fear smallpox again. [p16-20]

Smallpox was an unfortunate component of the siege of Boston by the British in 1775, but—as Fenn explains—it was far worse for Bostonians than for the Redcoats besieging them. This was because smallpox was a fact of life in eighteenth-century Europe—a series of outbreaks left about four hundred thousand people dead every year, and the disease accounted for perhaps a third of all blindness. As awful as that may seem, it meant that the vast majority of British soldiers had been exposed to the virus and were thus immune. Not so for the colonists, who not only had experienced fewer outbreaks but frequently lived in more rural settings at a greater distance from one another, which slowed exposure, leaving far fewer who could count on immunity to spare them. Nothing fuels the spread of a pestilence better than a crowded bottlenecked urban environment—such as Boston in 1775—except perhaps great encampments of susceptible men from disparate geographies suddenly crammed together, as was characteristic of the nascent Continental Army. To make matters worse, there was some credible evidence that the Brits at times engaged in a kind of embryonic biological warfare by deliberately sending known infected individuals back to the Colonial lines. All of this conspired to form a perfect storm for disaster.

Our late eighteenth-century forebears had a couple of things going for them that we lack today. First of all, while it was true that like COVID there was no cure for smallpox, there were ways to mitigate the spread and the severity that were far more effective than our masks and social distancing—or misguided calls to ingest hydroxychloroquine, for that matter.  Instead, their otherwise primitive medical toolkit did contain inoculation, an ancient technique that had only become known to the west in relatively recent times. Now, it is important to emphasize that inoculation—also known as “variolation”—is not comparable to vaccination, which did not come along until closer to the end of the century. Not for the squeamish, variolation instead involved deliberately inserting the live smallpox virus from scabs or pustules into superficial incisions in a healthy subject’s arm. The result was an actual case of smallpox, but generally a much milder one than if contracted from another infected person. Recovered, the survivor would walk away with permanent immunity. The downside was that some did not survive, and all remained contagious for the full course of the disease. This meant that the inoculated also had to be quarantined, no easy task in an army camp, for example.

The other thing they had going for them back then was a competent leader who took epidemics and how to contain them quite seriously—none other than George Washington himself. Washington was not president at the time, of course, but he was the commander of the Continental Army, and perhaps the most prominent man in the rebellious colonies. Like many of history’s notable figures, Washington was gifted not only with qualities such as courage, intelligence, and good sense, but also with luck. In this case, Washington’s good fortune was to contract—and survive—smallpox as a young man, granting him immunity. But it was likewise the good fortune of the emerging new nation to have Washington in command. Initially reluctant to advance inoculation—not because he doubted the science but rather because he feared it might accelerate the spread of smallpox—he soon concluded that only a systematic program of variolation could save the army, and the Revolution! Washington’s other gifts—for organization and discipline—set in motion mass inoculations and enforced isolation of those affected. Absent this effort, the War of Independence—ever a long shot—might well not have succeeded.

Fenn argues convincingly that the course of the war was significantly affected by Variola in several arenas, most prominently in its savaging of Continental forces during the disastrous invasion of Quebec, which culminated in Benedict Arnold’s battered forces being driven back to Fort Ticonderoga. And in the southern theater, enslaved blacks flocked to British lines, drawn by enticements to freedom, only to fall victim en masse to smallpox, and then tragically find themselves largely abandoned to suffering and death as the Brits retreated. There is a good deal more of this stuff, and many students of the American Revolution will find themselves wondering—as I did—why this fascinating perspective is so conspicuously absent from most treatments of this era.

Remarkably, despite the bounty of material, emphasis on the Revolution occupies only the first third of the book, leaving far more to explore as the virus travels to the west and southwest, and then on to Mexico, as well as to the Pacific northwest. As Fenn reminds us again and again, smallpox comes from where smallpox has been, and she painstakingly tracks hypothetical routes of the epidemic. Tragic bystanders in its path were frequently Native Americans, who typically manifested more severe symptoms and experienced greater rates of mortality. It has been estimated that perhaps ninety percent of pre-contact indigenous inhabitants of the Americas were exterminated by exposure to European diseases for which they had no immunity, and smallpox was one of the great vehicles of that annihilation. Variola proved to be especially lethal as a “virgin soil” epidemic, and Native Americans not unexpectedly suffered far greater casualties than other populations, dying on such a wide scale that entire tribes simply disappeared from history.

No review can properly capture all the ground that Fenn covers in this outstanding book, nor praise her achievement adequately. It is especially rare when a historian combines a highly original thesis with exhaustive research, keen analysis, and exceptional talent with a pen to deliver a magnificent work such as Pox Americana. And perhaps never has there been a moment when this book could find a greater relevance to readers than to Americans in 2020.

 

Featured

Review of: Lamarck’s Revenge: How Epigenetics is Revolutionizing Our Understanding of Evolution’s Past and Present, by Peter Ward

If you have studied evolution inside or outside of the classroom, you have no doubt encountered the figure of Jean-Baptiste Lamarck and the discredited notion of the inheritance of acquired characteristics attributed to him known as “Lamarckism.” This has most famously been represented in the example of giraffes straining to reach fruit on ever-higher branches, which results in the development of longer necks over succeeding generations. Never mind that Lamarck did not develop this concept—and while he echoed it, it remained only a tiny part of the greater body of his work—he was nevertheless doomed to have it cling to his legacy ever since. This is most regrettable, because Lamarck—who died three decades before Charles Darwin shook the spiritual and scientific world with his 1859 publication of On the Origin of Species—was a true pioneer in the field of evolutionary biology who recognized there were forces at work that put organisms on an ineluctable road to greater complexity. It was Darwin who identified this force as “natural selection,” and Lamarck was not only denied credit for his contributions to the field, but otherwise maligned and ridiculed.

But even if he did not invent the idea, what if Lamarck was right all along to believe, at least in part, that acquired characteristics can be passed along transgenerationally—perhaps not on the kind of macro scale manifested by giraffe necks, but in other more subtle yet no less critical components of the principles of evolution? That is the subject of Lamarck’s Revenge: How Epigenetics is Revolutionizing Our Understanding of Evolution’s Past and Present, by the noted paleontologist Peter Ward. The book’s cover naturally showcases a series of illustrated giraffes with ever-lengthening necks! Ward is an enthusiast for the relatively new, still developing—and controversial—science of epigenetics, which advances the hypothesis that certain circumstances can trigger markers that can be transmitted from parent to child, changing gene expression without altering the primary structure of the DNA itself. Let’s imagine a Holocaust survivor, for instance: can the trauma of Auschwitz cut so deep that the devastating psychological impact of that horrific experience will be passed on to his children, and his children’s children?

This is heady stuff, of course. We should pause for the uninitiated and explain the nature of Darwinian natural selection—the key mechanism of the Theory of Evolution—in its simplest terms. The key to survival for all organisms is adaptation. Random mutations occur over time, and if one of those mutations leaves an organism better adapted to its environment, that organism is more likely to reproduce and thus pass along its genes to its offspring. Over time, through “gradualism,” this can lead to the rise of new species. Complexity breeds complexity, and that is the road traveled by all organisms that has led from the simplest unicellular prokaryote—the 3.5-billion-year-old photosynthetic cyanobacteria—to modern Homo sapiens sapiens. This is, of course, a very, very long game; so long in fact that Darwin—who lived in a time when the age of the earth was vastly underestimated—fretted that there was not enough time for evolution as he envisioned it to occur. Advances in geology later determined that the earth was about 4.5 billion years old, which solved that problem, but still left other aspects of evolution unexplained by gradualism alone. The brilliant Stephen Jay Gould (along with Niles Eldredge) came along in 1972 and proposed that rather than through gradualism, most evolution more likely occurred through what they called “punctuated equilibrium,” often triggered by a catastrophic change in the environment. Debate has raged ever since, but it may well be that evolution is guided by forces of both gradualism and punctuated equilibrium. But could there still be other forces at work?

Transgenerational epigenetic inheritance represents another such force and is at the cutting edge of research in evolutionary biology today. But has the hypothesis of epigenetics been demonstrated to be truly plausible? And the answer to that is—maybe. In other words, there do seem to be studies that support transgenerational epigenetic inheritance, most famously—as detailed in Lamarck’s Revenge—in what has been dubbed the “Dutch Hunger Winter Syndrome,” in which children born during a famine were smaller than those born before it, and carried a later, greater risk of glucose intolerance, conditions then passed down to successive generations. On the other hand, the evidence for epigenetics has not been as firmly established as some proponents, such as Ward, might have us believe.

Lamarck’s Revenge is a very well-written and accessible scientific account of epigenetics for a popular audience, and while I have read enough evolutionary science to follow Ward’s arguments with some competence, I remain a layperson who can hardly endorse or counter his claims. The body of the narrative comprises Ward’s repeated examples of what he identifies as holes in traditional evolutionary biology that can only be explained by epigenetics. Is he right? I simply lack the expertise to say. I should note that I received this book as part of an “Early Reviewers” program, so I felt a responsibility to read it cover-to-cover, although my own interest lapsed as it moved beyond my own depth in the realm of evolutionary biology.

I should note that this is all breaking news, and as we appraise it we should be mindful of how those on the fringes of evangelicalism, categorically opposed to the science of human evolution, will cling to any debate over mechanisms in natural selection to proclaim it all a sham  sponsored by Satan—who has littered the earth with fossils to deceive us—to challenge the truth of the “Garden of Eden” related in the Book of Genesis.  Once dubbed “Creationists,” they have since rebranded themselves in association with the pseudoscience of so-called “Intelligent Design,” which somehow remains part of the curriculum at select accredited universities.  Science is self-correcting. These folks are not, so don’t ever let yourself be distracted by their fictional supernatural narrative. Evolution—whether through gradualism and/or punctuated equilibrium and/or epigenetics—remains central to both modern biology and modern medicine, and that is not the least bit controversial among scientific professionals. But if you want to find out more about the implications of epigenetics for human evolution, then I recommend that you pick up Lamarck’s Revenge and challenge yourself to learn more!

 

Note: While you are at it, if you want to learn more about 3.5-billion-year-old photosynthetic cyanobacteria, I highly recommend this: 

Review of: Cradle of Life: The Discovery of Earth’s Earliest Fossils, by J. William Schopf

Featured

Review of: Thomas Jefferson’s Education, by Alan Taylor

Here was buried Thomas Jefferson, Author of the Declaration of American Independence, of the Statute of Virginia for religious freedom & Father of the University of Virginia.

Thomas Jefferson wrote those very words and sketched out the obelisk they would be carved upon. For those who have studied him, that he not only composed his own epitaph but designed his own grave marker was—as we would say in contemporary parlance—just “so Jefferson.” His long life was marked by a catalog of achievements; these were intended to represent his proudest accomplishments. Much remarked upon is the conspicuous absence of his unhappy tenure as third President of the United States. Less noted is the omission of his time as Governor of Virginia during the Revolution, marred by his humiliating flight from Monticello just minutes ahead of British cavalry. Of the three that did make the final cut, his role as author of the Declaration has been much examined. The Virginia statute—seen as the critical antecedent to First Amendment guarantees of religious liberty—gets less press, but only because it is subsumed in a wider discussion of the Bill of Rights. But who really talks about Jefferson’s role as founder of the University of Virginia?

That is the ostensible focus of Thomas Jefferson’s Education, by Alan Taylor, perhaps the foremost living historian of the early Republic. But in this extremely well-written and insightful analysis, Taylor casts a much wider net, ensnaring a tangle of competing themes that not only trace the sometimes-fumbling transition of Virginia from colony to state, but speak to underlying vulnerabilities in economic and political philosophy that were to extend well beyond its borders to the southern portion of the new nation. Some of these elements were to have consequences that echoed down to the Civil War; indeed, they still echo to the present day.

Students of the American Civil War are often struck by the paradox of Virginia. How was it possible that this colony—so central to the Revolution and the founding of the Republic, the most populous and prominent, a place that boasted notable thinkers like Jefferson, Madison and Marshall, that indeed was home to four of the first five presidents of the new United States—could find itself on the eve of secession such a regressive backwater, soon doomed to serve as the capital of the Confederacy? It turns out that the sweet waters of the Commonwealth were increasingly poisoned by the institution of human chattel slavery, once decried by its greatest intellects, then declared indispensable, finally deemed righteous. This tragedy has been well-documented in Susan Dunn’s superlative Dominion of Memories: Jefferson, Madison & the Decline of Virginia, as well as Alan Taylor’s own Pulitzer Prize-winning work, The Internal Enemy: Slavery and the War in Virginia 1772-1832. What came to be euphemistically termed the “peculiar institution” polluted everything in its orbit, often invisibly except to the trained eye of the historian. This included, of course, higher education.

If the raison d’être of the Old Dominion was to protect and promote the interests of the wealthy planter elite that sat atop the pyramid of a slave society, then really how important was it for the scions of Virginia gentlemen to be educated beyond the rudimentary levels required to manage a plantation and move in polite society? And after all, wasn’t the “honor” of the up-and-coming young “masters” of far greater consequence than the aptitude to discourse in matters of rhetoric, logic or ethics? In Thomas Jefferson’s Education, Taylor takes us back to the nearly forgotten era of a colonial Virginia when the capital was located in “Tidewater” Williamsburg and rowdy students—wealthy, spoiled sons of the planter aristocracy with an inflated sense of honor—clashed with professors at the prestigious College of William & Mary who dared to attempt to impose discipline upon their bad behavior. A few short years later, Williamsburg was in shambles, a near ghost town, badly mauled by the British during the Revolution, the capital relocated north to “Piedmont” Richmond, William & Mary in steep decline. Thomas Jefferson’s determination over more than two decades to replace it with a secular institution devoted to the liberal arts that welcomed all white men, regardless of economic status, is the subject of this book. How he realized his dream with the foundation of the University of Virginia in the very sunset of his life, as well as the spectacular failure of that institution to turn out as he envisioned it, is the wickedly ironic element in the title of Thomas Jefferson’s Education.

The author is at his best when he reveals the unintended consequences of history. In his landmark study, American Revolutions: A Continental History, 1750-1804, Taylor underscores how American Independence—rightly heralded elsewhere as the dawn of representative democracy for the modern West—was at the same time to prove catastrophic for Native Americans and African Americans, whose fate would likely have been far more favorable had the colonies remained wedded to a British Crown that drew a line for westward expansion at the Appalachians, and later came to abolish slavery throughout the empire.  Likewise, there is the example of how the efforts of Jefferson and Madison—lauded for shaking off the vestiges of feudalism for the new nation by putting an end to institutions of primogeniture and entail that had formerly kept estates intact—expanded the rights of white Virginians while dooming countless numbers of the enslaved to be sold to distant geographies and forever separated from their families.

In Thomas Jefferson’s Education, the disestablishment of religion is the focal point for another unintended consequence. For Jefferson, an established church was anathema, and stripping the Anglican Church of its preferred status was central to his “Statute of Virginia for Religious Freedom” that was later enshrined in the First Amendment. But it turns out that religion and education were intertwined in colonial Virginia’s most prominent institution of higher learning, Williamsburg’s College of William & Mary, funded by the House of Burgesses, where professors were typically ordained Anglican clergymen. Moreover, tracts of land known as “glebes” that were formerly distributed by the colonial government for Anglican (later Episcopal) church rectors to farm or rent came under assault by evangelical churches allied with secular forces after the Revolution, in a movement that eventually resulted in confiscation. This put many local parishes—once critical sponsors of both education and poor relief—into a death spiral that begat still more unintended consequences that in some ways still resonate in the present-day politics and culture of the American South. As Taylor notes:

The move against church establishment decisively shifted public finance for Virginia. Prior to the revolution, the parish tax had been the greatest single tax levied on Virginians; its elimination cut the local tax burden by two thirds. Poor relief suffered as the new County overseers spent less per capita than had the old vestries. After 1790, per capita taxes, paid by free men in Virginia, were only a third of those in Massachusetts. Compared to northern states, Virginia favored individual autonomy over community obligation. Jefferson had hoped that Virginians would reinvest their tax savings from disestablishment by funding the public system of education for white children. Instead county elites decided to keep the money in their pockets and pose as champions of individual liberty. [p57-58]

For Jefferson, a creature of the Enlightenment, the sins of medievalism inherent to institutionalized religion were glaringly apparent, yet he was blinded to the positive contributions it could provide for the community. Jefferson also frequently perceived his own good intentions in the eyes of others who simply did not share them because they were either selfish or indifferent. Jefferson seemed to genuinely believe that an emphasis on individual liberty would in itself foster the public good, when in reality—then and now—many take such liberty as the license to simply advance their own interests. For all his brilliance, Jefferson was too often naïve when it came to the character of his countrymen.

Once near-universally revered, the legacy of Thomas Jefferson often triggers ambivalence for a modern audience and poses a singular challenge for historical analysis. A central Founder, Jefferson’s bold claim in the Declaration “that all men are created equal” defined both the struggle with Britain and the notion of “liberty” that not only came to characterize the Republic that eventually emerged, but gave echo with a deafening resonance to the French Revolution—and far beyond to legions of the oppressed yearning for the universal equality that Jefferson had asserted was their due. At the same time, over the course of his lifetime Jefferson owned hundreds of human beings as chattel property.  One of the enslaved almost certainly served as concubine to bear him several offspring who were also enslaved, and she almost certainly was the half-sister of Jefferson’s late wife.

The once-popular view that Jefferson did not intend to include African Americans in his definition of “all men” has been clearly refuted by historians. And Jefferson, like many of his elite peers of the Founding generation—Madison, Monroe, and Henry—decried the immorality of slavery as an institution while consenting to its persistence, to their own profit. Most came to find grounds to justify it, but not Jefferson: the younger Jefferson cautiously advocated for abolition, while the older Jefferson made excuses for why it could not be achieved in his lifetime—made manifest in his much-quoted “wolf by the ear” remark—but he never stopped believing it an existential wrong. As Joseph Ellis underscored in his superb study, American Sphinx, Jefferson frequently held more than one competing and contradictory view in his head simultaneously and was somehow immune to the cognitive dissonance such paradox might provoke in others.

It is what makes Jefferson such a fascinating study, not only because he was such a consequential figure for his time, but because the Republic then and now remains a creature of habitually irreconcilable contradictions remarkably emblematic of this man, one of its creators, who has carved out a symbolism that varies considerably from one audience to another. Jefferson, more than any of the other Founders, was responsible for the enduring national schizophrenia that pits federalism against localism, a central economic engine against entrepreneurialism, and the well-being of a community against personal liberties that would let you do as you please. Other elements have been, if not resolved, forced to the background, such as the industrial vs. the agricultural, and the military vs. the militia. Of course, slavery has been abolished, civil rights tentatively obtained, but the shadow of inequality stubbornly lingers, forced once more to the forefront by the murder of George Floyd; I myself participated in a “Black Lives Matter” protest on the day before this review was completed.

Perhaps much overlooked in the discussion but no less essential is the role of education in a democratic republic. Here too, Jefferson had much to offer and much to pass down to us, even if most of us have forgotten that it was his soft-spoken voice that pronounced it indispensable for the proper governance of both the state of Virginia and the new nation. That his ambition extended only to universal education for white males, excluding blacks and women, naturally strikes us as shortsighted, even repugnant, but it should not erase the fact that even this was a radical notion in its time. Rather than disparage Jefferson, who died nearly two centuries ago, we should perhaps condemn the inequality in education that persists in America today, where a tradition of community schools funded by property taxes meant that my experience growing up in a white, middle-class suburb in Fairfield, CT translated into an educational experience vastly superior to that of the people of color who attended the ancient crumbling edifices in the decaying urban environment of Bridgeport, less than three miles from my home. How can we talk about “Black Lives Matter” without talking about that?

The granite obelisk that marked Jefferson’s final resting place was chipped away at by souvenir hunters until it was relocated in order to preserve it. A joint resolution of Congress funded the replacement, erected in 1883, that visitors now encounter at Monticello. The original obelisk now incongruously sits in a quadrangle at the University of Missouri, perhaps as far removed from Jefferson’s grave as today’s diverse, co-ed institution of UVA at Charlottesville is from both the university he founded and the one he envisioned. We have to wonder whether Jefferson would be more surprised to learn that African Americans are enrolled at UVA—or that in 2020 they comprise less than seven percent of the undergraduate population. And what would he make of the white supremacists who rallied at Charlottesville in 2017 and those who stood against them? I suspect a resurrected Jefferson would be no less enigmatic than the one who walked the earth so long ago.

Alan Taylor has written a number of outstanding works—I’ve read five of them—and he has twice won the Pulitzer Prize for History. He is also, incidentally, the Thomas Jefferson Memorial Foundation Professor of History at the University of Virginia, so Thomas Jefferson’s Education is not only an exceptional contribution to the historiography but no doubt a project dear to his heart. While I continue to admire Jefferson even as I acknowledge his many flaws, I cannot help wondering how Taylor—who has so carefully scrutinized him—personally feels about Thomas Jefferson. I recall that in the afterword to his magnificent historical novel, Burr, Gore Vidal admits: “All in all, I think rather more highly of Jefferson than Burr does …”  If someone puts Alan Taylor on the spot, I suppose that could be as good an answer as any …

Note: I have reviewed other works by Alan Taylor here:

Review of: American Revolutions: A Continental History, 1750-1804, by Alan Taylor

Review of: The Internal Enemy: Slavery and the War in Virginia 1772-1832, by Alan Taylor

Review of: The Civil War of 1812: American Citizens, British Subjects, Irish Rebels, & Indian Allies by Alan Taylor

Review of: American Republics: A Continental History of the United States, 1783-1850, by Alan Taylor

My review of Susan Dunn’s excellent book, referenced above, is here:

Review of: Dominion of Memories: Jefferson, Madison & the Decline of Virginia by Susan Dunn

Featured

Review of: The Testaments: The Sequel to The Handmaid’s Tale, by Margaret Atwood

Nolite te bastardes carborundorum could very well be the Latin phrase most familiar to a majority of Americans. Roughly translated as “Don’t let the bastards grind you down,” it has been emblazoned on tee shirts and coffee mugs, trotted out as bumper sticker and email signature, and—most prominently—has become an iconic feminist rallying cry for women. That this famous slogan is not really Latin or any language at all, but instead a kind of schoolkid’s “mock Latin,” speaks to the colossal cultural impact of the novel where it first made its appearance in 1985, The Handmaid’s Tale, by Margaret Atwood, as well as the media then spawned, including the 1990 film featuring Natasha Richardson, and the acclaimed series still streaming on Hulu. Consult any random critic’s list of the finest examples in the literary sub-genre “dystopian novels,” and you will likely find The Handmaid’s Tale in the top five, along with such other classic masterpieces as Orwell’s 1984, Huxley’s Brave New World and Bradbury’s Fahrenheit 451, which is no small achievement for Atwood.

For anyone who has not been locked in a box for decades, The Handmaid’s Tale relates the chilling story of the not-too-distant-future nation of “Gilead,” a remnant of a fractured United States that has become a totalitarian theonomy demanding absolute obedience to divine law, especially the harsh strictures of the Old Testament. A crisis in fertility has led to elite couples relying on semi-enslaved “handmaids” who serve as surrogates to be impregnated and carry babies to term for them, which includes a bizarre ritual where the handmaid lies in the embrace of the barren wife while being penetrated by the “Commander.” The protagonist is known as “Offred”—or “Of Fred,” the name of this Commander—but once upon a time, before the overthrow of the U.S., she was an independent woman, a wife, a mother. It is Offred who one day happens upon Nolite te bastardes carborundorum scratched upon the wooden floor of her closet, presumably by the anonymous handmaid who preceded her.

Brilliantly structured as a kind of literary echo of Geoffrey Chaucer’s The Canterbury Tales, employing Biblical imagery—the eponymous “handmaid” based upon the Old Testament account of  Rachel and her handmaid Bilhah—and magnificently imagining a horrific near-future of a male-dominated society where all women are garbed in color-coded clothing to reflect their strictly assigned subservient roles, Atwood’s narrative achieves the almost impossible feat of imbuing what might otherwise smack of the fantastic with the highly persuasive badge of the authentic.

The 1990 film adaptation—which also starred Robert Duvall as the Commander and Faye Dunaway as his infertile wife Serena Joy—was largely faithful to the novel, while further fleshing out the character of Offred. But it has been the Hulu series, updated to reflect a near-contemporary pre-Gilead America replete with cell phones and technology—and soon to beget (pun fully intended!) a fourth season—which both embellished and enriched Atwood’s creation for a new generation and a far wider audience. And it has enjoyed broad resonance, at least partially due to its debut in early 2017, just months after the presidential election. The coalition of right-wing evangelicals, white supremacists, and neofascists that has come to coalesce around the Republican Party in the Age of Trump has not only brought new relevance to The Handmaid’s Tale, but has also seen its scarlet handmaid’s cloaks adopted by many women as the de rigueur uniform of protest in the era of “Me Too.” Meanwhile, the series—which is distinguished by an outstanding cast of fine ensemble actors, headlined by Elisabeth Moss as Offred—has proved enduringly terrifying for three full seasons, while largely maintaining its authenticity.

Re-enter Margaret Atwood with The Testaments: The Sequel to The Handmaid’s Tale, released thirty-four years after the original novel. As a fan of both the book and the series, I looked forward to reading it, though my anticipation was tempered by a degree of trepidation based upon my time-honored conviction that sequels are ill-advised and should generally be avoided. (If Godfather II was the rare exception in film, Thomas Berger’s The Return of Little Big Man certainly proved the rule for literature!)  Complicating matters, Atwood penned a sequel not to her own novel, but rather to the Hulu series, which brought back memories of Michael Crichton’s awkward The Lost World, written as a follow-up to Spielberg’s Jurassic Park movie rather than his own book.

My fears were not misplaced.

The action in The Testaments takes place in both Gilead and Atwood’s native Canada, which remains a bastion of freedom and democracy for those who can escape north. The timeframe is roughly fifteen years after the conclusion of Hulu’s Season Three. The narrative is told from the alternating perspectives of three separate protagonists, one of whom is Aunt Lydia, the outsize brown-clad villain of book and film known for both efficiency and brutality in her role as a “trainer” of handmaids. Aunt Lydia turns out to have both a surprising pre-Gilead backstory and a secret life as an “Aunt,” although there are no hints of these in any previous works. Still, I found the Lydia portion of the book most interesting, and perhaps the most plausible in a storyline that often flirts with the farfetched.

In order to sidestep spoilers, I cannot say much about the identities of the other two main characters, who are each subject to surprise “reveals” in the narrative—except that I personally was less surprised than was clearly intended. Oh yes, I get it: the butler did it … but I still have hundreds of pages ahead of me. But that was not the worst of it.

The beauty of the original novel and the series has remained a remarkably consistent authenticity, despite an extraordinary futuristic landscape.  The test of all fiction—but most especially in science-fiction, fantasy, and the dystopian—is: can you successfully suspend disbelief? For me, The Testaments fails this test again and again, most prominently when one of our “unrevealed” characters—an otherwise ordinary teenage girl—is put through something like a “light” version of La Femme Nikita training, and then in short order trades high school for a dangerous undercover mission without missing a beat! Moreover, her character is not well-drawn, and the words put in her mouth ring counterfeit. It seems evident that the eighty-year-old Atwood does not know very many sixteen-year-old girls, and culturally this one acts and sounds like she was raised thirty years ago and then catapulted decades into the future. Overall, the plot is contrived, the action inauthentic, the characters artificial.

This is certainly not vintage Atwood, although some may try to spin it that way. The Handmaid’s Tale was not a one-hit wonder: Atwood is a prolific, accomplished author and I have read other works—including The Penelopiad and The Year of the Flood—that underscore her reputation as a literary master. But not this time. In my disappointment, I was reminded of my experience with Khaled Hosseini, whose The Kite Runner was a superlative novel that showcased a panoply of complex themes and nuanced characters that remained with me long after I closed the cover. That was followed by A Thousand Splendid Suns, which though a bestseller was dramatically inferior to his earlier work, peopled with nearly one-dimensional caricatures assigned to be “good” or “evil” navigating a plot that smacked more of soap opera than subtlety.

The Testaments too has proved a runaway bestseller, but it is the critical acclaim that I find most astonishing, even scoring the highly prestigious 2019 Booker Prize—though I can’t bear to think of it sitting on the same shelf alongside … say … Richard Flanagan’s The Narrow Road to the Deep North, which took the prize in 2014. It is tough for me to review a novel so well-received that I find so weak and inconsequential, especially when juxtaposed with the rest of the author’s catalog. I keep holding out hope that someone else might take notice that the emperor really isn’t wearing any clothes, but the bottom line is that lots of people loved this book; I did not.

On the other hand, a close friend countered that fiction, like music, is highly subjective. But I take some issue with that. Perhaps you personally might not have enjoyed Faulkner’s The Sound and the Fury, or Hemingway’s A Farewell to Arms, for that matter, but you cannot make the case that these are bad books. I would argue that The Testaments is a pretty bad book, and I would not recommend it. But here, it seems, I remain a lone voice in the literary wilderness.

 

Featured

Review of: A Warning, by Anonymous

DISCLAIMER: The review that follows and the book that is its subject each include a fact-based timeline, political polemic, and inflammatory language, some or all of which may be highly offensive to certain individuals, especially those who identify with the MAGA movement or abjure critical thinking.  If you or someone you care about fits that description, is highly sensitive, or is unable to handle views that contradict your political narrative, you are urged to stop reading now and put this review aside.  Those who proceed further do so at their own risk, and this reviewer will hold himself blameless for any fits of rage, dangerous increases in blood pressure, or Rumpelstiltskin-like attempts to stomp the ground so hard that the reader sinks into a chasm, that may result from continuing beyond this point …

President Trump is facing a test to his presidency unlike any faced by a modern American leader. It’s not just that the special counsel looms large. Or that the country is bitterly divided over Mr. Trump’s leadership. Or even that his party might well lose the House to an opposition hellbent on his downfall. The dilemma—which he does not fully grasp—is that many of the senior officials in his own administration are working diligently from within to frustrate parts of his agenda and his worst inclinations. I would know. I am one of them.

That is the opening excerpt from an Op-Ed entitled “I Am Part of the Resistance Inside the Trump Administration” published in the New York Times on September 5, 2018, along with this note from the editors: “The Times is taking the rare step of publishing an anonymous Op-Ed essay. We have done so at the request of the author, a senior official in the Trump administration whose identity is known to us and whose job would be jeopardized by its disclosure.”

The Op-Ed was written on the eve of the mid-term elections, before the release of the Mueller report, the murder of Khashoggi, the shutdown of the Trump Foundation for what was described as “a shocking pattern of illegality,” the expulsion of most remaining adults-in-the-room including Mattis and Kelly and Rosenstein, and the “perfect call” with Volodymyr Zelensky that led to impeachment—which was just one shocking by-product of an erratic foreign policy of appeasement to Putin, ongoing saber-rattling with the Ayatollah and kissy-face with Kim Jong-un, the granting of dispensation to Mohammed bin Salman, and the green-lighting of Erdoğan to take out our Kurdish allies in Syria—not to mention the continuing crisis at home of kids in cages, and the ousting of any civil servant who dared contradict the President with a fact-based narrative. And there was so very much more that it is truly a blur. In September 2019, Trump doctored a map with a Sharpie and flashed it on television to prove he was right all along about the path of Hurricane Dorian. In October 2019, the President of the United States actually expressed interest in constructing an electrified moat filled with alligators along the Mexican border and shooting migrants in the legs to slow them down! Who even remembers that now?

Shortly after the moat full of alligators rose to a brief crest in the 24-hour cable news cycle and then sank beneath the tide of whatever came next that no one can really recall anymore, while we collectively held our breaths for the next wave of … well, who knows what? … A Warning, by Anonymous—the same “senior Trump administration official” who wrote that NYT editorial—was published. A Warning set a record for preorders and made the bestseller list, and while the staggering revelations by a senior insider that it contains would no doubt have thrust any other administration into a tailspin so severe that it could never have recovered, this book—much like the misadventures it chronicles—is essentially as forgotten to an overwhelmed amnesiac public as the moat full of alligators. The notion that “nothing matters” has become such a cliché precisely because—as the subsequent impeachment acquittal underscored—when it comes to Trump, nothing truly does matter anymore. Or really ever has.

The thesis of A Warning, which picks up where the author's Op-Ed left off, is twofold: 1) all hyperbole in left-leaning media aside, President Trump really is as he appears to the non-brainwashed observer: an unhinged, irrational, narcissistic, incompetent clown who, left to his own devices, would no doubt steer the clown car, with all of us aboard, right into the abyss; and 2) if not for the valiant efforts of the author and his or her furtive cohorts, working ceaselessly behind the scenes to curtail Trump's most dangerous instincts, we would likely already be acquainted with said abyss. "Anonymous" claims to be generally supportive of the administration's conservative agenda but fears what the President's unbalanced behavior could bring. While Trump rambles on paranoiacally about an imaginary "Deep State" plotting to undermine him, the author of A Warning rejects that notion while emphasizing what he/she terms the "Steady State": an unidentified alliance at the top tier of "glorified government babysitters" who quietly strive to "keep the wheels from coming off the White House wagon."

But apparently the axle nuts are getting looser every day, and those wheels are about to let go, as underscored in the very first chapter, aptly entitled "Collapse of the Steady State," where the author admits:

I was wrong about the “quiet resistance” inside the Trump Administration. Unelected bureaucrats and cabinet appointees were never going to steer Donald Trump in the right direction in the long run, or refine his malignant management style.  He is who he is. Americans should not take comfort in knowing whether there are so-called adults in the room. We are not bulwarks against the president and shouldn’t be counted upon to keep him in check. That is not our job. That is the job of the voters …

If the original Op-Ed was an attempt to reassure us that, while the President was often indeed as mindlessly dangerous as a runaway bull in the national china shop, there remained a significant presence of sane and rational actors to rein him in before too much of value was irreparably wrecked, A Warning goes much further, urging a broad coalition to defeat him in 2020, especially targeting those in the right lane who otherwise cheer the lower taxes, frantic deregulation, and ascent of ultraconservative Supreme Court justices that have been byproducts of Trumpism. But does such a cohort actually exist?

For Trump and a polarized America in 2020, there are essentially four audiences to play to: 1) those for whom Donald Trump represents an existential threat to our values of freedom and democracy in our sacred Republic; 2) those for whom Donald Trump is a savior for America sent by almighty God to restore our sacrosanct traditional values and lock up anyone who would even think about having an abortion; 3) those for whom Donald Trump is an absolutely offensive buffoon—of course—but the economy has been supercharged, so why not just let him do his job?; and 4) those for whom Donald Trump is the same as Joe Biden, and if Bernie Sanders were President we would all have free college and healthcare and everything else, and if you don't agree you should just die. A Warning makes a compelling argument, but I don't see it changing anyone's mind. Either the Emperor is wearing those new clothes or he isn't.

Each chapter of A Warning is headed by a quotation from a former president—Madison, Washington, Jefferson, Kennedy, Reagan, etc.—that speaks to an aspect of government or the character of its leadership. What follows are accounts of Trump's resistance to expertise, paranoid ramblings, irrational behavior, and "malignant management style" that stand as clear counterpoints to those ideals. At one point, the author reveals: "Behind closed doors his own top officials deride him as an 'idiot' and a 'moron' with the understanding of a 'fifth or sixth grader.'" [p63] This excerpt describing briefings with the President is a bit long but perhaps most illustrative:

Early on, briefers were told not to send lengthy documents. Trump wouldn’t read them. Nor should they bring summaries to the Oval Office. If they must bring paper, then PowerPoint was preferred because he is a visual learner. Okay, that’s fine, many thought to themselves, leaders like to absorb information in different ways. Then officials were told that PowerPoint decks needed to be slimmed down. The president couldn’t digest too many slides. He needed more images to keep his interest—and fewer words. Then they were told to cut back the overall message (on complicated issues such as military readiness or the federal budget) to just three main points. Eh, that was still too much … Forget the three points. Come in with one main point and repeat it—over and over again, even if the president inevitably goes off on tangents—until he gets it. Just keep steering the subject back to it. One point. Just that one point. Because you cannot focus the commander-in-chief’s attention on more than one goddamned thing over the course of the meeting, okay? [p29-30]

This is just one of many persuasive arguments that the President is unfit for office, but again: whom is it likely to persuade?

A couple of things struck me about this book that have little to do with its message. First, it is not well written. Not at all. It may have been deliberately dumbed down to target a less educated audience, but I don't think so. More likely, the author simply isn't a very talented writer. A Warning has a conversational style, and my guess is that it was dictated by someone who is not generally comfortable with a pen, then transcribed.

Second, the author attempts to use history to make his/her point: beyond the quotes from presidents, there are numerous references in the narrative that reach back to ancient Greece and Rome. But the effort is clumsy at best and at worst completely off the mark. At one point, when tracing the origins of the GOP, the author identifies it with "states' rights," which, while a core value of the modern Republican Party, was a hundred and fifty years ago closely associated with the rival Democrats. [p95] (In fact, one could argue that today's "Party of Lincoln" has little in common with Lincoln at all.) Elsewhere, there is an awkward tussle with fact-based history as the author struggles to mine democracy in ancient Greece for workable analogies with today's politics. The Athenian demagogue Cleon is cast as a cloak-wearing precursor to Trump "… who will sound familiar to readers … [as he] … inherited money from his father and leveraged it to launch a career in politics." The famous episode from Thucydides that has Cleon calling for the slaughter of the Mytilenean rebels is posited as a signpost to the decline and fall of Athenian democracy. The later massacre of the Melians is also referenced, as is the execution of Socrates, along with the wild claim that "the latter was an exclamation point on the death of Athenian democracy …" [p183-86] All of this is not only completely out of context but downright silly, and, as any historian of ancient Greece would point out, the radical democracy of Athens actually thrived for decades after the death of Socrates in 399 BCE, and even persisted well beyond the subjugation of the polis by Philip II of Macedon in 338 BCE.

But that the author is both a bad writer and a lousy historian to my mind only adds to his/her authenticity as a "senior Trump administration official." After all, we know that the cabinet is composed of second- and third-rate individuals, and the quality—especially since the shift to "acting" secretaries who don't require Senate confirmation—has been on a pronounced downward slope. Of course, the author's lack of talent hardly diminishes the tale that is told.

The reason A Warning lacks shock value to some degree is that we have heard much or all of this before, from multiple sources, some more respected than others. While it might be easy to dismiss such a schlocky work as Michael Wolff's Fire and Fury: Inside the Trump White House, the much-celebrated exposé of the administration that was frequently as long on bombshells as it was short on substantiation, it is far more difficult to ignore the chilling accounts of award-winning journalist Bob Woodward, whose 2018 book Fear: Trump in the White House identifies then-Secretary of Defense James Mattis as the source of the "fifth or sixth grader" quote. Woodward also reports then-Chief of Staff John Kelly describing the President as "unhinged," exclaiming: "He's an idiot. It's pointless to try to convince him of anything. He's gone off the rails. We're in Crazytown." Far more worrisome than such anecdotes is Woodward's revelation that then-Chief Economic Adviser Gary Cohn—alarmed that Trump was about to sign a document ending a key trade agreement with South Korea that also dovetailed with a security arrangement that would alert us to North Korean nuclear adventurism—simply stole the document off the President's desk! And the President never missed it …

Much of this material has been substantiated by insiders, and there is certainly plenty of evidence to suggest Trump is utterly incapable of serving as Chief Executive. But would anything convince his loyal acolytes of this? Apparently not, which is why A Warning both preached to the choir and otherwise fell on deaf ears. In February 2020, fifty-two Republican Senators voted to acquit Trump in his impeachment trial—and you can bet that most or all of these "august" legislators know exactly what Donald Trump is really like behind closed doors.

As this review goes to press, we are in the midst of a global pandemic that has hit the United States far harder than it should have, largely due to the ongoing incompetence of the President, who is, unsurprisingly, the very worst person to be in charge during what is surely the greatest threat to the nation since Pearl Harbor, perhaps since Fort Sumter. We need a Lincoln or an FDR or a JFK at the helm, and what we have is Basil Fawlty … although even that is unfair: Basil would have recognized that he was in over his head and sought help from Polly, who would have enlisted Manuel's assistance, and we would at least have a chance. Trump, being Trump, believes he has all the answers; and thousands more succumb to the virus as the days go by …

So, who is the author of A Warning? Who exactly is "Anonymous"? There has been some speculation, but if I had to assign authorship, I would put my money on Kellyanne Conway. One clue that narrows it down a bit is that the tone of the narrative hints at a female voice rather than a male one, although I could be mishearing that. More persuasive is the style, which sounds an awful lot like Kellyanne in conversation, albeit spouting utterances diametrically opposed to the outrageous defenses of the President she concocts for the media. Perhaps most compelling is the fact that Kellyanne has uncharacteristically outlasted most members of the administration, especially striking given that her husband, attorney George Conway, is a loud and prominent critic of the President who has long called for his removal from office. That Kellyanne has somehow managed to keep her job despite this suggests that she has something on Trump that guarantees her tenure, and makes me think she more than anyone inside that circus tent wants us to hear this warning of why the ringmaster must be denied four more years …

UPDATE 10-28-20: I was wrong … it wasn't Kellyanne Conway: https://milestaylor.medium.com/a-statement-a13bc5173ee9

Link to: NYT Op-Ed:  “I Am Part of the Resistance Inside the Trump Administration”

Link to: Review of: Fear: Trump in the White House, by Bob Woodward

Featured

Review of: America’s War for the Greater Middle East: A Military History, by Andrew J. Bacevich

“From the end of World War II until 1980, virtually no American soldiers were killed in action while serving in the greater Middle East. Since 1980, virtually no American soldiers have been killed anywhere else. What caused that shift?”

That stark question appears as a blurb on the back cover of my edition of America's War for the Greater Middle East: A Military History, Andrew J. Bacevich's ambitious, brilliantly conceived if flawed chronicle, which seeks to both answer that