“Stranger, whoever you are, open this to learn what will amaze you.”
This passage, in ancient Greek and in translation, is the key to Cloud Cuckoo Land, a big, ambitious, complicated novel by Anthony Doerr, the latest from the author of the magnificent, Pulitzer Prize-winning All the Light We Cannot See (2014). Classicists will recognize “Cloud Cuckoo Land” as borrowed from The Birds, the 414 BCE comedy by the Athenian satirist Aristophanes, in which birds construct a city in the sky that later became synonymous with any kind of fanciful world. In this case, Cloud Cuckoo Land serves as the purported title of a long-lost ancient work by Antonius Diogenes, rediscovered as a damaged but partially translatable codex in 2019, that relates the tale of Aethon, a hapless shepherd who transforms into a donkey, then into a fish, then into a crow, in a quest to reach that utopian city in the clouds. It serves as well as the literary glue that binds together the narrative and the central protagonists of Doerr’s novel.
There is the octogenarian Zeno, self-taught in classical Greek, who has translated the fragmentary codex and adapted it into a play that is to be performed by fifth graders in the public library of Lakeport, Idaho, in 2020. Lurking in the vicinity is Seymour, an alienated teen with Asperger’s, flirting with eco-terrorism. And hundreds of years in the past, there is also the thirteen-year-old Anna, who has happened upon that same codex in Constantinople, on the eve of its fall to the Turks. Among the thousands of besiegers outside the city’s walls is Omeir, a harelipped youngster who, with his team of oxen, was conscripted to serve the Sultan in the cause of toppling the Byzantine capital. Finally, there is Konstance, fourteen years old, who has lived her entire life on the Argos, a twenty-second century spacecraft destined for a distant planet; she too comes to discover “Cloud Cuckoo Land.”
Alternating chapters, some short, others far longer, tell the stories of each protagonist, in real time or through flashbacks. For the long-lived Zeno, readers follow his hardscrabble youth, his struggle with his closeted homosexuality, his stint as a POW in the Korean War, and his long love affair with the language of the ancient Greeks. We observe how an uncertain and frequently bullied Seymour reacts to the destruction of wilderness and wildlife in his own geography. We watch the rebellious Anna abjure her work as a lowly seamstress to clandestinely translate the codex. We learn how the disfigured-at-birth Omeir is at first nearly left to die, then exiled along with his family because villagers believe he is a demon. We see Konstance, trapped in quarantine in what appears to be deep space, explore the old earth through an “atlas” in the ship’s library.
Cloud Cuckoo Land is by turns fascinating and captivating, but sometimes—unfortunately—also dull. There are not only the central protagonists to contend with, but also a number of secondary characters in each of their respective orbits, as well as the multiple timelines spanning centuries, so there is much to keep track of. I recall being so spellbound by All the Light We Cannot See that I read its entire 500-plus pages over a single weekend. This novel, much longer, did not hook me with a similar force. I found it a slow build: my enthusiasm tended to simmer rather than surge. Alas, I wanted to care about the characters far more than I did. Still, the second half of the novel is a much more exciting read than the first portion.
Science—in multiple disciplines—is often central to a Doerr novel. That was certainly the case in All the Light We Cannot See, as well as in his earlier work, About Grace. In Cloud Cuckoo Land, in contrast, science—while hardly absent—takes a backseat. The sci-fi in the Argos voyage is pretty cool, but hardly the stuff of Asimov or Heinlein. And Seymour’s science of climate catastrophe comes across as little more than an afterthought in the narrative.
Multiple individuals with lives on separate trajectories centuries apart, whose exploits resonate with larger and often overlapping themes, reminded me at first of another work with a cloud in its title: Cloud Atlas, by David Mitchell. But Cloud Cuckoo Land lacks the spectacular brilliance of that novel, which manages to take your very breath away. It also falls short of the depth and intricacy that powers Doerr’s All the Light We Cannot See. And yet … and yet … I ended up really enjoying the book, even shedding a tear or two in its final pages. So there’s that. In the final analysis, Doerr is a talented writer, and if this is not his finest work, it remains well worth the read.
Reading “The Epic of Gilgamesh” in its entirety rekindled a long-dormant interest in the Sumerians, the ancient Mesopotamian people that my school textbooks once boldly proclaimed as inventors not only of the written word, but of civilization itself! One of the pleasures of having a fine home library stocked with eclectic works is that there is frequently a volume near at hand to suit such inclinations, and in this case I turned to a relatively recent acquisition, The Sumerians, a fascinating and extremely well-written—if decidedly controversial—contribution to the Lost Civilizations series, by Paul Collins.
“The Epic of Gilgamesh” is, of course, the world’s oldest literary work: the earliest records of the five poems that form the heart of the epic were carved into Sumerian clay tablets that date back to 2100 BCE, and they relate the exploits of the eponymous Gilgamesh, an actual historic king of the Mesopotamian city state Uruk circa 2750 BCE who later became the stuff of heroic legend. Most famously, a portion of the epic recounts a flood narrative nearly identical to the one reported in Genesis, making it the earliest reference to the Near East flood myth held in common by the later Abrahamic religions.
Uruk was just one of a number of remarkable city states—along with Eridu, Ur, and Kish—that formed urban and agricultural hubs between the Tigris and Euphrates rivers in what is today southern Iraq, between approximately 3500 and 2000 BCE, at a time when the Persian Gulf extended much further north, putting these cities very near the coast. Some archaeologists have also placed “Ur of the Chaldees,” the city in the Hebrew Bible noted as the birthplace of the Israelite patriarch Abraham, in this vicinity, reinforcing the Biblical flood connection. A common culture came to be attributed to these mysterious non-Semitic people, dubbed the Sumerians: one that boasted the earliest system of writing, which recorded in cuneiform script a language isolate unrelated to any other; advances in mathematics that utilized a sexagesimal system; and the invention of both the wheel and the plow.
But who were the Sumerians? They were completely unknown, notes the author, until archaeologists stumbled upon the ruins of their forgotten cities about 150 years ago. Collins, currently Curator for the Ancient Near East at the Ashmolean Museum*, University of Oxford, fittingly opens his work with the baked clay artifact known as a “prism,” inscribed with the so-called Sumerian King List, circa 1800 BCE, and housed in the Ashmolean. The opening passage of the book comprises the first lines of the Sumerian King List: “After the kingship descended from heaven, the kingship was in Eridu. In Eridu, Alulim became king; He ruled for 28,800 years.” Heady stuff.
“It is not history as we would understand it,” argues Collins, “but a combination of myth, legend and historical information.” This serves as a perfect metaphor for Collins’s thesis: after a century and a half of archaeology and scholarship, we know less about the Sumerians—if such a structured, well-defined common culture ever even existed—than about the sometimes-spurious conclusions and even outright fictions that successive generations of academics and observers have attached to these ancient peoples.
Thus, Collins raises two separate if perhaps related issues that both independently and in tandem spark controversy. The first is whether the Sumerians ever existed as a distinct culture, or whether—as the author suggests—scholars may have mistakenly woven a misleading tapestry out of scraps and threads in the archaeological record, remnants of a variety of inhabitants within a shared geography whose material cultures, while overlapping, were never of a single fabric. The second is how deeply that same tapestry is woven through with distortions—some intended and others inadvertent—tailored to interpretations fraught with the biases of excavators and researchers determined to cast the Sumerians as uber-ancestors central to the myth of Western Civilization that tends to dominate the historiography. And, of course, if there is merit to the former, was it entirely the product of the latter, or were other factors involved?
I personally lack both the expertise and the qualifications to weigh in on the first matter, especially given that the author’s credentials include not only an association with Oxford’s School of Archaeology but also the chairmanship of the British Institute for the Study of Iraq. Still, I will note in this regard that he makes many thought-provoking and salient points. As to the second, Collins is quite persuasive, and here deep expertise on the part of the reader is not nearly as necessary.
Nineteenth century explorers and archaeologists—as well as their early twentieth century successors—were often drawn to this Middle Eastern milieu in a quest for concordance between Biblical references and excavations, which bred distortions in outcomes and interpretation. At the same time, a conviction that race and civilization were inextricably linked—to be clear, the “white race” and “Western Civilization”—determined that what was perceived as “advanced” was ordained at the outset for association with “the West.” We know that the leading thinkers of the Renaissance rediscovered the Greeks and Romans as their cultural and intellectual forebears, with at least some measure of justification, but later far more tenuous links were drawn to ancient Egypt—and, of course, later still, to Babylon and Sumer. Misrepresentations, deliberate or not, were exacerbated by the fact that the standards of professionalism characteristic of today’s archaeology were either primitive or nonexistent.
None of this should be news to students of history who have observed how the latest historiography has frequently discredited interpretations long taken for granted—something I have witnessed firsthand as a dramatic work in progress in studies of the American Civil War in recent decades. Notably, although slavery was central to the cause of secession and war, for more than a century African Americans were essentially erased from the textbooks and barely acknowledged other than at the very periphery of the conflict, in what was euphemistically constructed as a sectional struggle among white men, north and south. It was a lie, but a lie that sold very well for a very long time, and still clings to those invested in what has come to be called “Lost Cause” mythology.
And yet it is surprising, as Collins underscores, that what should long have been second-guessed about Sumer remains integral to far too much of current thinking. Whether or not the Sumerians were indeed a distinct culture, should peoples more than five millennia removed from us continue to be artificially attached to what we pronounce Western Civilization? Probably not. And while we certainly recognize today that race is an artificial construct that conveys no information of importance about a people, ancient or modern, we can reasonably guess with some confidence that those indigenous to southern Iraq in 3500 BCE probably did not have the pale skin of a native of, say, Norway. We can rightfully assert that the people we call the Sumerians were responsible for extraordinary achievements later passed down to the cultures that followed, but an attempt to draw some kind of line from Sumer to Enlightenment-age Europe is shaky, at best.
As such, Collins’s book gives focus to what we have come to believe about the Sumerians, and why we should challenge that. I previously read (and reviewed) Egypt by Christina Riggs, another book in the Lost Civilizations series, which is preoccupied with how ancient Egypt has resonated for those who walked in its shadows, from Roman tourists to Napoleon’s troops to modern admirers, even if that vision little resembles its historic basis. Collins takes a similar tack but devotes far more attention to parsing out in greater detail exactly what is really known about the Sumerians and what we tend to collectively assume that we know. Of course, Sumer is far less familiar to a wider audience, and it lacks the romantic appeal of Egypt—there is no imagined exotic beauty like Cleopatra, only the blur of the distant god-king Gilgamesh—so the Sumerians come up far more rarely in conversation, and provoke far weaker feelings, one way or the other.
The Sumerians is an accessible read for the non-specialist, and there are plenty of illustrations to enhance the text. Like other authors in the Lost Civilizations series, Collins deserves much credit for articulating sometimes arcane material in a manner that suits both a scholarly and a popular audience, which is by no means an easy achievement. If you are looking for an outstanding introduction to these ancient people that is neither too esoteric nor dumbed-down, I highly recommend this volume.
*NOTE: I recently learned that Paul Collins has apparently left the Ashmolean Museum as of end October 2022, and is now associated with the Middle East Department, British Museum.
Imagine if God—or Gary Larson—had an enormous mayonnaise jar at his disposal and stuffed it chock full of the collective consciousnesses of the greatest modern philosophers, psychoanalysts, neuroscientists, mathematicians, physicists, quantum theoreticians, and cosmologists … then lightly dusted it with a smattering of existential theologians, eschatologists, dream researchers, and violin makers, before tossing in a handful of race car drivers, criminals, salvage divers, and performers from an old-time circus sideshow … and next layered it with literary geniuses, heavy on William Faulkner and Ernest Hemingway with perhaps a dash of Haruki Murakami and just a smidge of Dashiell Hammett … before finally tossing in Socrates, or at least Plato’s version of Socrates, who takes Plato along with him because—love him or hate him—you just can’t peel Plato away from Socrates. Now imagine that giant jar somehow being given a shake or two before being randomly dumped into the multiverse, so that all the blended yet still unique components poured out into our universe as well as into multiple other hypothetical universes. If such a thing were possible, the contents that spilled forth might approximate The Passenger and Stella Maris, the pair of novels by Cormac McCarthy that has so stunned readers and critics alike that there is yet no consensus whether to pronounce these works garbage or magnificent—or, for that matter, magnificent garbage.
The eighty-nine-year-old McCarthy, perhaps America’s greatest living novelist, released these companion books in 2022 after a sixteen-year hiatus that followed publication of The Road, the 2006 postapocalyptic sensation that explored familiar Cormac McCarthy themes in a very different genre, employing literary techniques strikingly different from his previous works, and in the process finding a whole new audience. The same might be said, to some degree, of the novel that preceded it just a year earlier, No Country for Old Men, another clear break from his past and a radical departure for readers of, say, The Border Trilogy or his magnum opus, Blood Meridian, which to my mind is not only a superlative work but truly one of the finest novels of the twentieth century.
Full disclosure: I have read all of Cormac McCarthy’s novels, as well as a play and a screenplay that he authored. To suggest that I am a fan would be a vast understatement. My very first McCarthy novel was The Crossing, randomly plucked from a grocery store magazine rack while on a family vacation. That was 2008. I inhaled the book and soon set out to read his full body of work. The Crossing is actually the middle volume in The Border Trilogy, preceded by All the Pretty Horses and followed by Cities of the Plain, which collectively form a near-Shakespearean epic of the American southwest and the Mexican borderlands in the mid-twentieth century, which yet retain a stark primitivism barely removed from the milieu of Blood Meridian, set a full century earlier. The author’s style, in these sagas and beyond, has at times been compared by critics with both Faulkner and Hemingway, both favorably and unfavorably, but McCarthy’s voice is distinctive, and hardly derivative. There is indeed the rich vocabulary of a Faulkner or a Styron, which adds a richness to the prose even as it challenges readers to sometimes seek out the dictionary app on their phones. There is also a magnificent use of the objective correlative, made famous by Hemingway and later in portions of the works of Gabriel Garcia Márquez, which evokes powerful emotions from inanimate objects. For McCarthy, this often manifests in the vast, seemingly otherworldly geography of the southwest. McCarthy also frequently makes use of Hemingway’s polysyndetic syntax, which adds emphasis to sentences through a series of conjunctions. Most noticeable for those new to Cormac McCarthy is his omission of most traditional punctuation, such as quotation marks, which often improves the flow of the narrative even as it sometimes lends a certain confusion to long dialogues between two characters that span several pages.
The Passenger opens with the prologue of a Christmas day suicide, which deserves to be quoted at length to underscore the beauty of the author’s prose:
It had snowed lightly in the night and her frozen hair was gold and crystalline and her eyes were frozen cold and hard as stones. One of her yellow boots had fallen off and stood in the snow beneath her. The shape of her coat lay dusted in the snow where she’d dropped it and she wore only a white dress and she hung among the bare gray poles of the winter trees with her head bowed and her hands turned slightly outward like those of certain ecumenical statues whose attitude asks that their history be considered. That the deep foundation of the world be considered where it has its being in the sorrow of her creatures. The hunter knelt and stogged his rifle upright in the snow beside him … He looked up into those cold enameled eyes glinting blue in the weak winter light. She had tied her dress with a red sash so that she’d be found. Some bit of color in the scrupulous desolation. On this Christmas day.
With a poignancy reminiscent of the funeral of Peyton Loftis, also a suicide, in the opening of William Styron’s Lie Down in Darkness, the reader here encounters the woman we later learn is Alicia Western, one of the two central protagonists in The Passenger and its companion volume, who much like Peyton in Styron’s novel haunts the narrative with chilling flashbacks. Ten years have passed when, on the very next page, we meet her brother Bobby, a salvage diver exploring a submerged plane wreck who happens upon clues that could put his life in jeopardy among those seeking something missing from that plane. Bobby is a brilliant intellect who could have been a physicist, but instead spends his life chasing down whatever provokes his greatest psychological fears. In this case, the terror of being deep underwater has driven him to salvage work in the oceans. Bobby is also a rugged and resourceful man’s man, a kind of Llewelyn Moss from No Country for Old Men, but with a much higher I.Q. Finally, Bobby, now thirty-seven years old, has never recovered from the death of his younger sister, with whom he had a close, passionate—and possibly incestuous—relationship.
Also integral to the plot is their now deceased father, a physicist who was once a key player in the Manhattan Project that produced the first atomic bombs that obliterated Hiroshima and Nagasaki. Their first names—Alicia and Bobby—seem to be an ironic echo of the “Alice and Bob” characters used as placeholders in thought experiments, especially in physics. Their surname, Western, could be a kind of doomed metaphor for the tragedy of mass murder on a scale never before imagined that has betrayed the promise of western civilization in the twentieth century and in its aftermath.
A real sense of doom, and a mounting paranoia, grips the narrative in general and Bobby in particular, in what appears to be a kind of mystery/thriller that meanders about, sometimes uncertainly. The cast of characters is extremely colorful, from a Vietnam veteran whose only regret from the many lives he brutally spent while in-country is the elephants that he exploded with rockets from his gunship just for fun, to a small-time swindler with a wallet full of credit cards that don’t belong to him, to a bombshell trans woman with a heart of gold. Some of these folks are like the sorts that turn up in John Steinbeck’s Tortilla Flat, but on steroids, and more likely to suffer an unpredictable death.
But it is Alicia who steals the show in flashback fragments that reveal a stunningly beautiful young woman whose own brilliance in mathematics, physics, and music overshadows even Bobby. She seems to be schizophrenic, plagued by extremely well-defined hallucinations of bedside visitors who could be incarnates of walk-ons from an old-time circus sideshow, right out of central casting. The most prominent is the “Thalidomide Kid”—replete with the flippers most commonly identified with those deformities—who engages her as interlocutor with long-winded, fascinating, and often disturbing dialogue that can run to several pages. Alicia has been on meds, and has checked herself into institutions, but in the end, she becomes convinced both that her visitors are real and that she herself does not belong in this world. But is Alicia even human? There are passing hints that she could be an alien, or perhaps from another universe.
There’s much more, including an episode where “The Kid,” Alicia’s hallucination (?), takes a long walk on the beach with Bobby. This is surprising, if only because McCarthy has long pilloried the magical realism that frequently populates the novels of Garcia Márquez or Haruki Murakami. Perhaps “The Kid” is no hallucination, after all? In any event, much like a Murakami novel—think 1Q84, for example—there are multiple plot lines in The Passenger that go nowhere, and the reader is left frustrated by the lack of resolution. And yet … and yet, the characters are so memorable, and the quality of the writing is so exceptional, that when the cover is finally closed, it is closed without an ounce of regret for the experience. And at the same time, the reader demands more.
The “more” turns out to be Stella Maris, the companion volume that is absolutely essential to broadening your awareness of the plot of The Passenger. Stella Maris is a mental institution that Alicia—then a twenty-year-old dropout from a doctoral program in mathematics—has checked herself into one final time, in the very last year of her life, and so a full decade before the events recounted in The Passenger. She has no luggage, but forty thousand dollars in a plastic bag which she attempts to give to a receptionist. Bobby, in those days a race car driver, lies in a coma as the result of a crash. He is not expected to recover, but Alicia refuses to remove him from life support. The entirety of the novel is told solely in transcript form through the psychiatric sessions between Alicia and a certain Dr. Cohen, but it is every bit a Socratic dialogue of science and philosophy and the existential meaning of life—not only Alicia’s life, but all of our lives, collectively. And finally, there is the dark journey to the eschatological. Alicia—and I suppose by extension Cormac McCarthy—doesn’t put much stock in a traditional, Judeo-Christian god, which has to be a myth, of course. At the same time, she has left atheism behind: there has to be something, in her view, even if she cannot identify it. But most terrifying, Alicia has a certainty that there lies somewhere an undiluted force of evil, something she terms the “Archatron,” that we all resist, even if there is a futility to that resistance.
I consider myself an intelligent and well-informed individual, but reading The Passenger, and especially Stella Maris, was immeasurably humbling. I felt much as I did the first time that I read Faulkner’s The Sound and the Fury, and even the second time that I read Gould’s Book of Fish, by Richard Flanagan: as if there are minds so much greater than mine that I cannot hope to comprehend all that they have to share, yet I can take full pleasure in immersing myself in their work. To borrow a line from Alicia, in her discussion of Oswald Spengler in Stella Maris, we might say also of Cormac McCarthy: “As with the general run of philosophers—if he is one—the most interesting thing was not his ideas but just the way his mind worked.”
Still reeling from the pandemic, the world was rocked to its core on February 24, 2022, when Russian tanks rolled into Ukraine, an act of unprovoked aggression not seen in Europe since World War II that conjured up distressing historical parallels. If there were voices that previously denied the echo of Hitler’s Austrian Anschluss in Putin’s annexation of Crimea, or of German adventurism in the Sudetenland in Russian-sponsored separatism in the Donbas, there was no mistaking the similarity to the Nazi invasion of Poland in 1939. But it was Vladimir Putin’s challenge to the very legitimacy of Kyiv’s sovereignty—a nod to the Kremlin’s rising chorus of irredentism that declares Ukraine a wayward chunk of the “near abroad” rightly integral to Russia—that compels us to look much further back in history.
Putin’s claim, however dubious, raises a larger question: by what right can any nation claim self-determination? Is Ukraine really just a modern construct, an opportunistic product of the collapse of the USSR that because it was historically a part of Russia should be once again? Or, perhaps counter-intuitively, should western Russia instead be incorporated into Ukraine? Or—let’s stretch it a bit further—should much of modern Germany rightly belong to France? Or vice versa? From a contemporary vantage point, these are tantalizing musings that challenge the notions of shifting boundaries, the formation of nation states, fact-based if sometimes uncomfortable chronicles of history, the clash of ethnicities, and, most critically, actualities on the ground. Naturally, such speculation abruptly shifts from the purely academic to a stark reality at the barrel of a gun, as the history of Europe has grimly demonstrated over centuries past.
To learn more, I turned to the recently updated edition of The Gates of Europe: A History of Ukraine, by historian and Harvard professor Serhii Plokhy, a dense, well-researched, deep dive into the past that at once fully establishes Ukraine’s right to exist, while expertly placing it into the context of Europe’s past and present. For those like myself largely unacquainted with the layers of complexity and overlapping hegemonies that have long dominated the region, it turns out that there is much to cover. At the same time, the wealth of unfamiliar material strongly and discouragingly underscores the western European bias of the classroom—which at least partially explains why even those Americans capable of locating Ukraine on a map prior to the invasion knew almost nothing of its history.
Survey courses in my high school covered Charlemagne’s 9th century empire that encompassed much of Europe to the west, including what is today France and Germany, but never mentioned Kievan Rus’—the cultural ancestor of modern Ukraine, Belarus and Russia—that was in the 10th and 11th centuries the largest and by far the most powerful state on the continent, until it fragmented and then fell to Mongol invaders! To its east, the Grand Principality of Moscow, a 13th century Rus’ vassal state of the Mongols, formed the core of the later Russian Empire. In the 16th and 17th centuries, the Polish–Lithuanian Commonwealth was in its heyday among the largest and most populous states on the continent, but both Poland and Lithuania were to fall to partition by Russia, Prussia, and Austria, and effectively ceased to exist for more than a century. Also missing from maps, of course, were Italy and Germany, which did not even achieve statehood until the later 19th century. And the many nations of today’s southeastern Europe were then provinces of the Ottoman Empire. That is European history, complicated and nuanced, as history tends to be.
Plokhy’s erudite study restores Ukraine’s past from obscurity and reveals a people who, while enduring occupation and a series of partitions, never abandoned an aspiration to sovereignty that was not to be realized until the late 20th century. Once a dominant power, Ukraine was to be overrun by the Mongols, preyed upon for slave labor by the Crimean Khanate, and throughout the centuries sliced up into a variety of enclaves ruled by the Golden Horde, the Polish–Lithuanian Commonwealth, the Austrian Empire, the Tsardom of Russia, and finally the Soviet Union.
That long history was written with much blood and suffering inflicted by its various occupiers. In the last hundred years alone, that included Soviet campaigns of terror, ethnic cleansing, and deportations, as well as the catastrophic Great Famine of 1932–33—known as the “Holodomor”—a product of Stalin’s forced collectivization that led to the starvation deaths of nearly four million Ukrainians. Then there was World War II, which claimed another four million lives, including about a million Jews. The immediate postwar period was marked by more tumult and bloodshed. Stability and a somewhat better quality of life emerged under Nikita Khrushchev, who himself lived for many years in Ukraine. It was Khrushchev who transferred title of the Crimea to Ukraine in 1954. The final years under Soviet domination saw the Chernobyl nuclear disaster.
The structure of the USSR was manifested in political units known as Soviet Socialist Republics, which asserted a fictional autonomy subject to central control. Somewhat ironically, as time passed this enabled and strengthened nationalism within each of the respective SSRs. Ukraine (like Belarus) even held its own United Nations seat, although its UN votes were rubber-stamped by Moscow. Still, this further reinforced a sense of statehood, which was realized in the unexpected dissolution of the Soviet Union and Ukraine’s independence in 1991. In the years that followed, as Ukraine aspired to closer ties with the West, that statehood increasingly came under attack by Putin, who spoke in earnest of a “Greater Russia” that by all rights included Ukraine. Election meddling became common, but with the spectacular fall of the Russian-backed president in 2014, Putin annexed Crimea and fomented rebellion that sought to create breakaway “republics” in the Donbas of eastern Ukraine. This only intensified the desire of Kyiv for integration with the European Union and for NATO membership.
A vast country of forest and steppe, marked by fertile plains crisscrossed by rivers, Ukraine has long served as a strategic gateway between the east and west, as emphasized in the book’s title. Elements of western, central, and eastern Europe all in some ways give definition to Ukrainian life and culture, and as such Ukraine remains inextricably as much a part of the west as the east. While Russia has left a huge imprint upon the nation’s DNA, it hardly informs the entirety of its national character. The Russian language continues to be widely spoken, and at least prior to the invasion many Ukrainians had Russian sympathies—if never a desire for annexation! For Ukrainians, stateless for too long, their own national identity ever remained unquestioned. Rather than threatening that identity, the Russian invasion has only bolstered it.
Today, Ukraine is the second largest nation in Europe, after Russia. Far too often overlooked by statesmen and talking heads alike, Ukraine would also be the world’s third largest nuclear power—and would have little to fear from the tanks of its former overlord—had it not given up its stockpile of nuclear weapons in a deal brokered by the United States, an important reminder to those who question America’s obligation to defend Ukraine.
As this review goes to press, Russia’s war—which Putin euphemistically terms a “special military operation”—is going very poorly, and despite energy supply shortages and threats of nuclear brinksmanship, the West stands firmly with Ukraine, which in the course of the conflict has been subjected to horrific war crimes by Russian invaders. However, as the months pass, as both Europe and the United States endure the economic pain of inflation and rising fuel prices, and as the odds increase of right-wing politicians gaining power on both sides of the Atlantic, it remains to be seen whether this alliance will hold steady. As battlefield defeats mount, and men and materiel run short, Putin seems to be running out the clock in anticipation of that outcome. We can only hope it does not come to that.
While I learned a great deal from The Gates of Europe, and I would offer much acclaim to its scholarship, there are portions that can prove a slog for a nonacademic audience. Too much of the author’s chronicle reads like a textbook—pregnant with names and dates and events—and thus lacks the sweep of a grand thematic narrative, one inspiring to the reader and deserving of the Ukrainian people he chronicles. At the same time, that does not diminish Plokhy’s achievement in turning out what is certainly the authoritative history of Ukraine. With Ukraine’s right to exist under assault once more, this volume serves as a powerful defense—the weapon of history—against any who might challenge its sovereignty. If you believe, as I do, that facts must triumph over propaganda and polemic, then I highly recommend that you turn to Plokhy to best refute Putin.
I often suffer pangs of guilt when a volume received through an early reviewer program languishes on the shelf unread for an extended period. Such was the case with the “Advanced Reader’s Edition” of After the Apocalypse: America’s Role in the World Transformed, by Andrew Bacevich, that arrived in August 2021 and sat forsaken for an entire year until it finally fell off the top of my TBR (To-Be-Read) list and onto my lap. While hardly deliberate, my delay was no doubt neglectful. But sometimes neglect can foster unexpected opportunities for evaluation. More on that later.
First, a little about Andrew Bacevich. A West Point graduate and platoon leader in Vietnam in 1970–71, he went on to an army career that spanned twenty-three years, including the Gulf War, retiring with the rank of colonel. (It is said his early retirement was due to being passed over for promotion after taking responsibility for an accidental explosion at a camp he commanded in Kuwait.) He later became an academic, Professor Emeritus of International Relations and History at Boston University, and one-time director of its Center for International Relations (1998–2005). He is now president and co-founder of the bipartisan think tank the Quincy Institute for Responsible Statecraft. Deeply influenced by the theologian and ethicist Reinhold Niebuhr, Bacevich was once tagged as a conservative Catholic historian, but he defies simple categorization, most often serving as an unlikely voice in the wilderness decrying America’s “endless wars.” He has been a vocal, longtime critic of George W. Bush’s doctrine of preventive war, most prominently manifested in the Iraq conflict, which he has rightly termed a “catastrophic failure.” He has also denounced the conceit of “American Exceptionalism,” and chillingly notes that reliance on an all-volunteer military force translates into the ongoing, almost anonymous sacrifice of our men and women for a nation that largely has no skin in the game. His own son, a young army lieutenant, was killed in Iraq in 2007. I have previously read three other Bacevich works. As I noted in a review of one of these, his resumé lends Bacevich either enormous credibility or an axe to grind, or perhaps both. Still, as a scholar and gifted writer, he tends to be well worth the read.
The “apocalypse” central to the title of this book takes aim at the chaos that engulfed 2020, spawned by the sum total of the “toxic and divisive” Trump presidency, the increasing death toll of the pandemic, an economy in free fall, mass demonstrations by Black Lives Matter proponents seeking long-denied social justice, and rapidly spreading wildfires that dramatically underscored the looming catastrophe of global climate change. [p.1-3] Bacevich takes this armload of calamities as a flashing red signal that the country is not only headed in the wrong direction, but likely off a kind of cliff if we do not immediately take stock and change course. He draws odd parallels with the 1940 collapse of the French army under the Nazi onslaught, which—echoing French historian Marc Bloch—he lays to “utter incompetence” and “a failure of leadership” at the very top. [p.xiv] This then serves as a head-scratching segue into a long-winded polemic on national security and foreign policy that recycles familiar Bacevich themes but offers little in the way of fresh analysis. This trajectory strikes one as especially incongruous given that the litany of woes besetting the nation in his opening narrative has—rarely indeed for the United States—almost nothing to do with the military or foreign affairs.
If ever history was to manufacture an example of a failure of leadership, of course, it would be hard-pressed to come up with a better model than Donald Trump, who drowned out the noise of a series of mounting crises with a deafening roar of self-serving, hateful rhetoric directed at enemies real and imaginary, deliberately ignoring the threat of both coronavirus and climate change, while stoking racial tensions. Bacevich gives him his due, noting that his “ascent to the White House exposed gaping flaws in the American political system, his manifest contempt for the Constitution and the rule of law placing in jeopardy our democratic traditions.” [p.2] But while he hardly masks his contempt for Trump, Bacevich makes plain that there’s plenty of blame to go around for political elites in both parties, and he takes no prisoners, landing a series of blows on George W. Bush, Barack Obama, Hillary Clinton, Joe Biden, and a host of other members of the Washington establishment that he holds accountable for fostering and maintaining the global post-Cold War “American Empire” responsible for the “endless wars” that he has long condemned. He credits Trump for urging a retreat from alliances and engagements, but faults the selfish motives of an “America First” predicated on isolationism. Bacevich instead envisions a more positive role for the United States in the international arena—one with its sword permanently sheathed.
All this is heady stuff, and regardless of their politics many readers will find themselves nodding along as Bacevich makes his case, outlining the many wrongheaded policy endeavors championed by Republicans and Democrats alike for a wobbly superpower clinging to an outdated and increasingly irrelevant sense of national identity that fails to align with the global realities of the twenty-first century. But then, as Bacevich looks to the future for alternatives, as he seeks to map out on paper the next new world order, he stumbles, and stumbles badly, something only truly evident in retrospect when viewing his arguments through the prism of the events that followed the release of After the Apocalypse in June 2021.
Bacevich has little to add here to his longstanding condemnation of the U.S. occupation of Afghanistan, which after two long decades of failed attempts at nation-building came to an end with our messy withdrawal in August 2021, just shortly after this book’s publication. President Biden was pilloried for the chaotic retreat, but while his administration could rightly be held to account for a failure to prepare for the worst, the elephant in that room in the Kabul airport where the ISIS-K suicide bomber blew himself up was certainly former president Trump, who brokered the deal to return Afghanistan to Taliban control. Biden, who plummeted in the polls due to outcomes he could do little to control, was disparaged much the same way Obama once was when he was held to blame for the subsequent turmoil in Iraq after effecting the withdrawal of U.S. forces agreed to by his predecessor, G.W. Bush. Once again, history rhymes. But the more salient point for those of us who share, as I do, Bacevich’s anti-imperialism, is that getting out is ever more difficult than going in.
But Bacevich has a great deal to say in After the Apocalypse about NATO, an alliance rooted in a past-tense Cold War stand-off that he pronounces counterproductive and obsolete. Bacevich disputes the long-held mythology of the so-called “West,” an artificial “sentiment” that has the United States and European nations bound together with common values of liberty, human rights, and democracy. Like Trump—who likely would have acted upon this had he been reelected—Bacevich calls for an end to US involvement with NATO. The United States and Europe have embarked on “divergent paths,” he argues, and that is as it should be. The Cold War is over. Relations with Russia and China are frosty, but entanglement in an alliance like NATO only fosters acrimony and fails to appropriately adapt our nation to the realities of the new millennium.
It is an interesting if academic argument, one abruptly crushed under the weight of the treads of Russian tanks in the premeditated invasion of Ukraine on February 24, 2022. If some denied the echo of Hitler’s 1938 Austrian Anschluss in Putin’s 2014 annexation of Crimea, there was no mistaking the similarity of the unprovoked attacks on Kyiv and its sister cities to the Nazi war machine’s march on Poland in 1939. And yes, when Biden and French President Emmanuel Macron stood together to unite that so-called West against Russian belligerence, the memory of France’s 1940 defeat was hardly out of mind. All of a sudden, NATO became less a theoretical construct and somewhat more of a safe haven against brutal militarism, wanton aggression, and unapologetic war crimes livestreamed on twenty-first-century social media: streets littered with the bodies of civilians, many of them children. All of a sudden, NATO is pretty goddamned relevant.
In all this, you could rightly point to the wrong turns made after the dissolution of the USSR, to the failure of the West to allocate appropriate economic support for the heirs of the former Soviet Union, to the way a pattern of NATO expansion both isolated and antagonized Russia. But there remains no legitimate defense for Putin’s attempt to invade, besiege, and absorb a weaker neighbor—or at least a neighbor he perceived to be weaker, a misstep that could lead to his own undoing. Either way, the institution we call NATO turned out to be something to celebrate rather than deprecate. The fact that it is working exactly the way it was designed to work could turn out to be the real road map to the new world order that emerges in the aftermath of this crisis. We can only imagine the horrific alternatives had Trump won re-election: the U.S. out of NATO, Europe divided, Ukraine overrun and annexed, and perhaps even Putin feted at a White House dinner. So far, without firing a shot, NATO has not only saved Ukraine; arguably, it has saved the world as we know it, a world that extends well beyond whatever we might want to consider the “West.”
As much as I respect Bacevich and admire his scholarship, his informed appraisal of our current foreign policy realities has turned out to be entirely incorrect. Yes, the United States should rein in the American Empire. Yes, we should turn away from imperialist tendencies. Yes, we should focus our defense budget solely on defense, not aggression, resisting the urge to try to remake the world in our own image for either altruism or advantage. But at the same time, we must be mindful—like other empires in the past—that retreat can create vacuums, and we must be ever vigilant of what kinds of powers may fill those vacuums. Because we can grow and evolve into a better nation, a better people, but that evolution may not be contagious to our adversaries. Because getting out remains ever more difficult than going in.
Finally, a word about the use of the term “apocalypse,” a characterization bandied about a bit too frequently these days. 2020 was a pretty bad year, indeed, but it was hardly apocalyptic. Not even close. Despite the twin horrors of Trump and the pandemic, we have had other years that were far worse. Think 1814, when the British burned Washington and sent the president fleeing for his life. And 1862, with tens of thousands already lying dead on Civil War battlefields as the Union army suffered a series of reverses. And 1942, with the nation barely out of economic depression and Germany and Japan lined up against us. And 1968, marked by riots and assassinations, when it truly seemed that the nation was unraveling from within. Going forward, climate change may certainly breed apocalypse. So might a cornered Putin, equipped with an arsenal of nuclear weapons and diminishing options as Russian forces in the field teeter on collapse. But 2020 is already in the rear-view mirror. It will no doubt leave a mark upon us, but as we move on, it spins ever faster into our past. At the same time, predicting the future, even when armed with the best data, is fraught with unanticipated obstacles, and grand strategies almost always lead to failure. It remains our duty to study our history while we engage with our present. Apocalyptic or not, it’s all we’ve got …
On October 4, 1957, the Soviet Union sent geopolitical shock waves across the planet with the launch of Sputnik 1, the first artificial Earth satellite. Sputnik was only twenty-three inches in diameter, transmitted radio signals for a mere twenty-one days, and burned up on reentry just three months after achieving orbit, but it changed everything. Not only were the dynamics of the Cold War permanently altered by what came to be dubbed the “Space Race,” but the success of Sputnik ushered in a dramatic new era for developments in science and technology. I was not quite six months old.
America would later win that race to the moon, but despite its fearsome specter as a diabolical power bent on world domination, the USSR turned out to be a kind of vast Potemkin village that almost noiselessly went out of business at the close of 1991. The United States had pretty much lost interest in space travel by then, but that was just about the time that the next critical phase of the emerging digital age—widespread public access to personal computers and the internet—first wrought the enormous changes upon the landscape of American life that today might have Gen Z “zoomers” regarding 1957 as something like a date out of ancient times.
And now, as this review goes to press—in yet one more recycling of the bon mot attributed to Mark Twain, “History doesn’t repeat itself, but it often rhymes”—NASA has temporarily scrubbed the much-anticipated blastoff of the lunar-bound Artemis I. But a real space race is again fiercely underway, although this time the rivals include not only Russia but China and a whole host of billionaires, at least one of whom could potentially fit the template for a “James Bond” style villain. And while all this is going on, I recently registered for Medicare.
Sixty-five years later, there’s a lot to look back on. In 1957: The Year That Launched the American Future (2020), a fascinating, fast-paced chronicle composed of articulate, thought-provoking chapter-length essays, author and journalist Eric Burns reminds us of what a pivotal year that proved to be, not only in kindling that first contest to dominate space, but in multiple other social, political, and cultural arenas, much of it apparent only in retrospect.
That year, while Sputnik stoked alarms that nuclear-armed Russians would annihilate the United States with bombs dropped from outer space, tabloid journalism reached then-new levels of the outrageous by exploiting “The Mad Bomber of New York,” who turned out to be a pathetic little fellow whose series of explosives claimed not a single fatality. In another example of history’s unintended consequences, a congressional committee investigating illegal labor activities helped facilitate Jimmy Hoffa’s takeover of the Teamsters. The cloak of mystery was partially lifted from organized crime with a very public police raid at Apalachin that rounded up Mafia bosses by the score. The iconic ’57 Chevy ruled the road and cruised on newly constructed interstate highways that would revolutionize travel as well as wreak havoc on cityscapes. African Americans remained second-class citizens, but struggles for equality ignited a series of flashpoints. In September 1957, President Eisenhower federalized the Arkansas National Guard and sent Army troops to Little Rock to enforce desegregation. That same month, Congress passed the Civil Rights Act of 1957, watered-down yet still landmark legislation that paved the way for more substantial action ahead. Published that year were Jack Kerouac’s On the Road and Nevil Shute’s On the Beach. Michael Landon starred in I Was a Teenage Werewolf. Little Richard, who claimed to see Sputnik while performing in concert and took it as a message from God, abruptly walked off stage and abandoned rock music to preach the word of the Lord. But the nation’s number one hit was Elvis Presley’s All Shook Up; rock n’ roll was here to stay.
Burns’ commentary on all this and more is engaging and generally a delight to read, but 1957 is by no means a comprehensive history of that year. In fact, it is a stretch to term this book a history at all except in the sense that the events it describes occurred in the past. It is rather a subjective collection of loosely linked commentaries that spotlight specific events and emerging trends the author identifies as formative for the nation we would become in the decades that followed. As such, the book succeeds due to Burns’ keen sense of how both key episodes and more subtle cultural waves influenced a country in transition from the conventional, consensus-driven postwar years to the radicalized, tumultuous times that lay just ahead.
His insight is most apparent in his cogent analysis of how Civil Rights advanced not only through lunch-counter sit-ins and a reaction marked by violent repression, but through cultural shifts among white Americans—and that rock n’ roll had at least some role in this evolution of outlooks. At the same time, his conservative roots are exposed in his treatment of On the Road and the rise of the “Beat generation”; Burns genuinely seems as baffled by their emergence as he is amazed that anyone could praise Kerouac’s literary talents. But, to his credit, he recognizes the impact the novel had upon a national audience that could no longer confidently boast of a certainty in its destiny. And it is Burns’ talent with a pen that captivates a markedly different audience, some sixty-five years later.
In the end, the author leaves us yearning for more. After all, other than references that border on the parenthetical to Richard Nixon, Robert F. Kennedy, and Dag Hammarskjöld, there is almost no discussion of national politics or international relations, essential elements in any study of a nation at what the author insists is a critical juncture. Even more problematic is the conspicuous absence of the chapter that should have been devoted to television. In 1950, 3.9 million TV sets stood in fewer than ten percent of American homes. By 1957, that number had increased roughly tenfold to 38.9 million TVs, in the homes of nearly eighty percent of the population! That year, I Love Lucy aired its final half-hour episode, but in addition to network news, families were glued to their black-and-white consoles watching Gunsmoke, Alfred Hitchcock, Lassie, You Bet Your Life, and Red Skelton. For the World War II generation, technology that brought motion pictures into their living rooms was something like miraculous. Nothing was more central to the life of the average American in 1957 than television, but Burns inexplicably ignores it.
Other than Sputnik, which clearly marked a turning point for science and exploration, it is a matter of some debate whether 1957 should be singled out as the start of a new era. One could perhaps argue instead for the election of John F. Kennedy in 1960, or with even greater conviction, for the date of his assassination in 1963, as a true crossroads of the past and future United States. Still, if for no other reason than the conceit that this was my birth year, I am willing to embrace Burns’ thesis that 1957 represented a collective critical moment for us all. Either way, his book delivers an impressive tour of a time that seems more distant with each passing day.
Early in 2022, I saw Casablanca on the big screen for the first time, the 80th anniversary of its premiere. Although over the years I have watched it in excess of two dozen times, this was a stunning, even mesmerizing experience for me, not least because I consider Casablanca the finest film of Old Hollywood—this over the objections of some of my film-geek friends who would lobby for Citizen Kane in its stead. Even so, most would concur with me that its star, Humphrey Bogart, was indeed the greatest actor of that era.
Attendance was sparse, diminished by a resurgence of COVID, but I sat transfixed in that nearly empty theater as Bogie’s distraught, drunken Rick Blaine famously raged that “Of all the gin joints in all the towns in all the world, she walks into mine!” He is, of course, lamenting his earlier unexpected encounter with old flame Ilsa Lund, splendidly portrayed with a sadness indelibly etched upon her beautiful countenance by Ingrid Bergman, who with Bogart led the credits of a magnificent ensemble cast that also included Paul Henreid, Claude Rains, Conrad Veidt, Sydney Greenstreet, and Peter Lorre. But Bogie remains the central object of that universe, the plot and the players in orbit about him. There’s no doubt that without Bogart, there could never have been a Casablanca as we know it. Such a movie might have been made, but it could hardly have achieved a greatness on this order of magnitude.
Bogie never actually uttered the signature line “Play it again, Sam,” so closely identified with the production (and later whimsically poached by Woody Allen for the title of his iconic 1972 comedy peppered with clips from Casablanca). And although the film won Academy Awards for Best Picture and Best Director, as well as in almost every other major category, Bogart was nominated but missed out on the Oscar, which instead went to Paul Lukas—does anyone still remember Paul Lukas?—for his role in Watch on the Rhine. This turns out to be a familiar story for Bogart, who struggled with a lifelong frustration at typecasting, miscasting, studio manipulation, lousy roles, inadequate compensation, missed opportunities, and repeated snubs—public recognition of his talent and star quality came only late in life and even then frequently eluded him, as on that Oscar night. He didn’t really expect to win, but we can yet only wonder at what Bogart must have been thinking . . . He was already forty-four years old on that disappointing evening when the Academy passed him over. There was no way he could have known that most of his greatest performances would lie ahead, that after multiple failed marriages (one still unraveling that very night) a young starlet he had only just met would come to be the love of his life and mother of his children, and that he would at last achieve not only the rare brand of stardom reserved for just a tiny slice of the top tier in his profession, but would go on to become a legend in his own lifetime and well beyond it: the epitome of the cool, tough, cynical guy who wears a thin veneer of apathy over an incorruptible moral center, women swooning over him as he stares down villains, an unlikely hero that every real man would seek to emulate.
My appreciation of Casablanca and its star in this grand cinema setting was enhanced by the fact that I was at the time reading Bogart (1997), by A.M. Sperber & Eric Lax, which is certainly the definitive biography of his life. I was also engaged in a self-appointed effort to watch as many key Bogie films in roughly chronological order as I could while reading the bio, which eventually turned out to be a total of twenty movies, from his first big break in The Petrified Forest (1936) to The Harder They Fall (1956), his final role prior to his tragic, untimely death at fifty-seven from esophageal cancer.
Bogie’s story is told brilliantly in this unusual collaboration by two authors who had never actually met. Ann Sperber, who wrote a celebrated biography of journalist Edward R. Murrow, spent seven years researching Bogart’s life and conducted nearly two hundred interviews with those who knew him most intimately before her sudden death in 1994. Biographer Eric Lax stepped in and shaped her draft manuscript into a coherent finished product that reads seamlessly like a single voice. I frequently read biographies of American presidents not only to study the figure that is profiled, but because the very best ones serve double duty as chronicles of United States history, the respective president as the focal point. I looked to the Bogart book for something similar, in this case a study of Old Hollywood with Bogie in the starring role. I was not to be disappointed.
Humphrey DeForest Bogart was born on Christmas Day 1899 in New York City to wealth and privilege, with a father who was a cardiopulmonary surgeon and a mother who was a commercial illustrator. Both parents were distant and unaffectionate. They had an apartment on the Upper West Side and a vast estate on Canandaigua Lake in upstate New York, where Bogie began his lifelong love affair with boating. Indifferent to higher education, he eventually flunked out of boarding school and joined the navy. There seems nothing noteworthy about his early life.
His acting career began almost accidentally, and he spent several years on the stage before making his first full-length feature in 1930, Up the River, with his drinking buddy Spencer Tracy, who called him “Bogie.” He was already thirty years old. What followed were largely lackluster roles on both coasts, alternating between Broadway theaters and Hollywood studios. He was frequently broke, drank heavily, and his second marriage was crumbling. Then he won rave reviews as escaped murderer Duke Mantee in The Petrified Forest, playing opposite Leslie Howard on the stage. The studio bought the rights, but characteristically for Bogie, they did not want to cast him to reprise his role, looking instead for an established actor, with Edward G. Robinson at the top of the list. Then Howard, who had production rights, stepped in to demand Bogart get the part. The 1936 film adaptation of the play, which also featured a young Bette Davis, channeled Bogart’s dark and chillingly realistic portrayal of a psychopathic killer—in an era when gangsters like Dillinger and Pretty Boy Floyd dominated the headlines—and made Bogie a star.
But again he faced a series of let-downs. This was the era of the studio system, with actors used and abused by big shots like Jack Warner, who locked Bogart into a low-paid contract that tightly controlled his professional life, casting him repeatedly in virtually interchangeable gangster roles in a string of B-movies. It wasn’t until 1941, when he played Sam Spade in The Maltese Falcon—quintessential film noir as well as John Huston’s directorial debut—that Bogie joined the ranks of undisputed A-list stars and began the process of taking revenge on the studio system by commanding greater compensation and demanding greater control of his screen destiny. But in those days, despite his celebrity, that remained an uphill battle.
I began watching his films while reading the bio as a lark, but it turned out to be an essential assignment: you can’t read about Bogie without watching him. Many of the twenty that I screened I had seen before, some multiple times, but others were new to me. I was raised by my grandparents in the 1960s with a little help from a console TV in the living room and all of seven channels delivered via rooftop antenna. When cartoons, soaps, and prime-time westerns and sitcoms weren’t broadcasting, the remaining airtime was devoted to movies. All kinds of movies, from the dreadful to the superlative and everything in-between, often on repeat. Much of it was classic Hollywood, and Bogart made the rounds. One of my grandfather’s favorite flicks was The Treasure of the Sierra Madre, and I can recall as a boy watching it with him multiple times. In general, he was a lousy parent, but I am grateful for that gift; it remains among my top Bogie films. We tend to most often think of Bogart as Rick Blaine or Philip Marlowe, but it is as Fred C. Dobbs in The Treasure of the Sierra Madre and Charlie Allnutt in The African Queen and Captain Queeg in The Caine Mutiny that the full range of his talent is revealed.
It was hardly his finest role or his finest film, but it was while starring as Harry Morgan in To Have and Have Not (1944) that Bogie met and fell for his co-star, the gorgeous, statuesque, nineteen-year-old Lauren Bacall—twenty-five years younger than him—spawning one of Hollywood’s greatest on-screen, off-screen romances. They would be soulmates for the remainder of his life, and it was she who brought out the very best of him. Despite his tough guy screen persona, the real-life Bogie tended to be a brooding intellectual who played chess, was well-read, and had a deeply analytical mind. An expert sailor, he preferred boating on the open sea to carousing in bars, although he managed to do plenty of both. During crackdowns on alleged communist influence in Hollywood, Bogart and Bacall together took controversial and sometimes courageous stands against emerging blacklists and the House Un-American Activities Committee (HUAC). But he also had his flaws. He could be cheap. He could be a mean drunk. He sometimes wore a chip on his shoulder carved out of years of frustration at what was after all a very slow rise to the top of his profession. But warts and all, far more of his peers loved him than not.
Bogart is a massive tome, and the first section is rather slow-going because Bogie’s early life was just so unremarkable. But it holds the reader’s interest because it is extremely well-written, and it goes on to succeed masterfully in spotlighting Bogart’s life against the rich fabric that forms the backdrop of that distant era of Old Hollywood before the curtains fell for all time. If you are curious about either, I highly recommend this book. If you are too busy for that, at the very least carve out some hours of screen time and watch Bogie’s films. You will not regret the time spent. Although his name never gets dropped in the lyrics by Ray Davies for the familiar Kinks tune, if there were indeed Celluloid Heroes, the greatest among them was certainly Humphrey Bogart.
NOTE: These are Bogart films I screened while reading this book:
Historians consistently rank him at the top, tied with Washington for first place or simply declared America’s greatest president. His tenure was almost precisely synchronous with the nation’s most critical existential threat: his very election sparked secession, the first shots were fired at Sumter a month after his inauguration, and the cannon were stilled at Appomattox a week before his murder. There were still armies in the field, but he was gone, replaced by one of the most sinister men ever to take the oath of office, leaving generations of his countrymen to wonder what might have transpired with all the nation’s painful unfinished business had he survived, from the trampled hopes for equality for African Americans to the promise of a truly “New South” that never emerged. A full century ago, decades after his death, he was reimagined as an enormous, seated marble man with the soulful gaze of fixed purpose, the central icon of his monument, one that provokes tears from the many visitors who stand in awe before him. When people think of Abraham Lincoln, that’s the image that usually springs to mind.
The seated figure rises to a height of nineteen feet; somebody calculated that if it stood up it would be some twenty-eight feet tall. The Lincoln that once walked the earth was not nearly that gargantuan, but he was nevertheless a giant in his time: physically, intellectually—and far too frequently overlooked—politically! He sometimes defies characterization because he was such a character, in so very many ways.
An autodidact gifted with a brilliant analytical mind, he was also a creature of great integrity, loyal to a firm sense of a moral center that ever evolved when polished by new experiences and touched by unfamiliar ideas. A savvy politician, he understood how the world worked. He had unshakeable convictions, but he was tolerant of competing views. He had a pronounced sense of empathy for others, even and most especially his enemies. In company, he was a raconteur with a great sense of humor, given to anecdotes often laced with self-deprecatory wit. (When accused in debate of being two-faced, Lincoln, widely thought to be homely, self-mockingly replied: “I leave it to my audience. If I had another face, do you think I’d wear this one?”) But despite his many admirable qualities, he was hardly flawless. He suffered from self-doubt, struggled with depression, stumbled through missteps, burned with ambition, and harbored a mean streak that, though generally suppressed, could loom large. More than anything else he had an outsize personality.
And Lincoln likewise left an outsize record of his life and times! So why has he generally posed such a challenge for biographers? Remarkably, some 15,000 books have been written about him—second, it is said, only to Jesus Christ—yet in this vast literature, the essence of Lincoln again and again somehow seems out of reach to his chroniclers. We know what he did and how he did it all too well, but portraying what the living Lincoln must have been like has remained frustratingly elusive in all too many narratives. For instance, David Herbert Donald’s highly acclaimed bio—considered by many the best single-volume treatment of his life—is indeed impressive scholarship, yet leaves us with a Lincoln who is curiously dull and lifeless. The man known for uproarious banter, who joked about being ugly for political advantage, is glaringly absent in most works outside of Gore Vidal’s Lincoln, which superbly captures him but remains, alas, a novel, not a history.
All that changed with A Self-Made Man: The Political Life of Abraham Lincoln Vol. I, 1809–1849, by Sidney Blumenthal (2016), an epic, ambitious, magnificent contribution to the historiography that demonstrates not only that, despite the thousands of pages written about him, there still remains much to say about the man and his times, but even more significantly that it is possible to brilliantly recreate for readers what it must have been like to engage with the flesh-and-blood Lincoln. This is the first in a projected four-volume study (two subsequent volumes have been published to date) that—as the subtitle underscores—emphasizes the “political life” of Lincoln, another welcome contribution to a rapidly expanding genre focused upon politics and power, as showcased in such works as Jon Meacham’s Thomas Jefferson: The Art of Power, Robert Dallek’s Franklin D. Roosevelt: A Political Life, and George Washington: The Political Rise of America’s Founding Father, by David O. Stewart.
At first glance, this focus might seem surprising, since prior to his election as president in 1860 Lincoln could boast of little in the realm of public office beyond service in the Illinois state legislature and a single term in the US House of Representatives in the late 1840s. But, as Blumenthal’s deeply researched and well-written account reveals, politics defined Lincoln to his very core, inextricably manifested in his life and character from his youth onward, something too often disregarded by biographers of his early days. It turns out that Lincoln was every bit a political animal, and there is a trace of that in nearly every job he ever took, every personal relationship he ever formed, and every goal he ever chased.
This approach triggers a surprising epiphany for the student of Lincoln. It is as if an entirely new dimension of the man has been exposed for the first time, one that lends new meaning to words and actions previously treated superficially or—worse—misunderstood by other biographers. Early on, Blumenthal argues that Donald and others have frequently been misled by Lincoln’s politically crafted utterances that cast him as marked by passivity, too often taking him at his word when a careful eye on the circumstances demonstrates the exact opposite. In contrast, Lincoln, ever maneuvering, if quietly, could hardly be branded as passive [p9]. Given this perspective, the life and times of young Abe are transformed into something far richer and more colorful than the usual accounts of his law practice and domestic pursuits. In another context, I once snarkily exclaimed “God save us from The Prairie Years” because I found Lincoln’s formative period—and not just Sandburg’s version of it—so uninteresting and unrelated to his later rise. Blumenthal has proved me wrong, and that sentiment deeply misplaced.
But Blumenthal not only succeeds in fleshing out a far more nuanced portrait of Lincoln—an impressive accomplishment on its own—but in the process boldly sets out to do nothing less than scrupulously detail the political history of the United States in the antebellum years from the Jackson-Calhoun nullification crisis onward. Ambitious is hardly an adequate descriptor for the elaborate narrative that results, a product of both prodigious research and a very talented pen. Scores of pages—indeed whole chapters—pass with literally no mention of Lincoln at all, a striking technique that is surprisingly successful; while Lincoln may appear conspicuous in his absence, he is nevertheless present, like the reader a studious observer of these tumultuous times even when he is not directly engaged, only making an appearance when the appropriate moment beckons. As such, A Self-Made Man is every bit as much a book of history as it is biography, a key element of the author’s unstated thesis: that it is impossible to truly get to know Lincoln—especially the political Lincoln—except in the context and complexity of his times, a critical emphasis not afforded in other studies.
And there is much to chronicle in these times. Some of this material is well known, even if until recently subject to faulty analysis. The conventional view of the widespread division that characterized the antebellum period centered on a sometimes-paranoid south on the defensive, jealous of its privileges, in fear of a north encroaching upon its rights. But in keeping with the latest historiography, Blumenthal deftly highlights how, in contrast, the slave south—which already wielded a disproportionate share of national political power due to the Constitution’s three-fifths clause that inflated its representation—not only stifled debate on slavery but aggressively lobbied for its expansion. And just as a distinctly southern political ideology evolved its notion of the peculiar institution from the “wolf by the ear” necessary evil of Jefferson’s time to a vaunted hallmark of civilization said to benefit master and servant alike, so too did it come to view the threat of separation less with dread than with anticipation. The roots of all that an older Lincoln would witness severing the ancient “bonds of affection” of the then no longer united states were planted in these, his early years.
Other material is less familiar. Who knew how integral to Illinois politics—for a time—was the cunning Joseph Smith and his Mormon sect? Or that Smith’s path was once entangled with the budding career of Stephen A. Douglas? Meanwhile, the author sheds new light on the long rivalry between Lincoln and Douglas, which had deep roots that went back to the 1830s, decades before their celebrated clash on the national stage brought Lincoln to a prominence that finally eclipsed Douglas’s star.
Blumenthal’s insight also adeptly connects the present to the past, affording a greater relevance for today’s reader. He suggests that the causes of the financial crisis of 2008 were not all that dissimilar to those that drove the Panic of 1837, but rather than mortgage-backed securities and a housing bubble, it was the monetization of human beings as slave property that leveraged enormous fortunes that vanished overnight when an oversupply of cotton sent market prices plummeting, which triggered British banks to call in loans on American debtors—a cotton bubble that burst spectacularly (p158-59). This point can hardly be overstated, since slavery was not only integral to the south’s economy, but by the eve of secession human property was to represent the largest single form of wealth in the nation, exceeding the combined value of all American railroads, banks, and factories. A cruel system that assigned values to men, women, and children like cattle had deep ramifications not only for masters who acted as “breeders” in the Chesapeake and markets in the deep south, but also for insurance companies in Hartford, textile mills in Lowell, and banks in London.
Although Blumenthal does not himself make this point, I could detect eerie if imperfect parallels between the elections of 2016 and 1844, with Lincoln seething as the perfect somehow became the enemy of the good. In that contest, Whig Henry Clay was up against Democrat James K. Polk. Both were slaveowners, but Clay opposed the expansion of slavery while Polk championed it. Antislavery purists in New York rejected Clay for the tiny Liberty Party, which by a slender margin tipped the election to Polk, who then boosted the slave power with Texas annexation and was the principal architect of the Mexican War that added vast territories to the nation, setting forces in motion that later spawned secession and Civil War. Lincoln was often prescient, but of course he could not know all that was to follow when, a year after Clay’s defeat, he bitterly denounced the “moral absolutism” that led to the “unintended tragic consequences” of Polk’s elevation to the White House (p303). To my mind, there was an echo of this in the 2016 disaster that saw Donald Trump prevail, a victory at least partially driven by those unwilling to support Hillary Clinton who—despite the stakes—threw away their votes on Jill Stein and Gary Johnson.
No review could properly summarize the wealth of the material contained here, nor overstate the quality of the presentation, which also suggests much promise for the volumes that follow. I must admit that at the outset I was reluctant to read yet another book about Lincoln, but A Self-Made Man was recommended to me by no less than historian Rick Perlstein (author of Nixonland), and like Perlstein’s, Blumenthal’s style is distinguished by animated prose bundled with a kind of uncontained energy that frequently delivers paragraphs given to an almost breathless exhale of ideas and people and events, expertly locating the reader at the very center of concepts and consequences. The result is something exceedingly rare for books of history or biography: a page-turner! Whether you are new to studies of Lincoln or a long-time devotee, consider this book required reading.
NOTE: A review of one of Rick Perlstein’s books is here:
As the COVID-19 pandemic swept the globe in 2020, it left in its wake the near-paralysis of many hospital systems, unprepared and unequipped for the waves of illness and death that suddenly overwhelmed their capacity for treatment, which was, after all, at best only palliative, since for this deadly new virus there was neither a cure nor a clear route to prevention. Overnight, epidemiologists—scrambling for answers or even just clues—became the most critically significant members of the public health community, even if their informed voices were often shouted down by the shriller ones of media pundits and political hacks.
Meanwhile, data collection began in earnest and the number of data dashboards swelled. In the analytical process, the first stop was identifying the quality of the data and the disparities in how data was collected. Was it true, as some suggested, that a disproportionate number of African Americans were dying from COVID? At first, there was no way to know since some states were not collecting data broken down by this kind of specific demographic. Data collection eventually became more standardized, more precise, and more reliable, serving as a key ingredient to combat the spread of this highly contagious virus, as well as one of the elements that guided the development of vaccines. Even so, dubious data and questionable studies too often took center stage both at political rallies and in the media circus that echoed a growing polarization that had one side denouncing masks, resisting vaccination, and touting sideshow magic bullets like Ivermectin. But talking heads and captive audiences aside, masks reduce infection, vaccines are effective, and dosing with Ivermectin is a scam. How do we know that? Data. Mostly due to data. Certainly, other key parts of the mix include scientists, medical professionals, case studies, and peer reviewed papers, but data—first collected and then analyzed—is the gold standard, not only for COVID but for all disease treatment and prevention.
But it wasn’t always that way.
In the beginning, there was no such thing as epidemiology. Disease causes and treatments were anecdotal, mystical, or speculative. Much of the progress in science and medicine that was the legacy of the classical world had long been lost to the west. The dawn of modern epidemiology rose above a horizon constructed of data painstakingly collected, compiled, and subsequently analyzed. In fact, certain aspects of the origins of epidemiology were to run concurrently with the evolution of statistical analysis. In the early days, as the reader comes to learn in this brilliant and groundbreaking 2021 work by historian Jim Downs, Maladies of Empire: How Colonialism, Slavery, and War Transformed Medicine, the bulk of the initial data was derived from unlikely and unwilling participants who existed at the very margins: the enslaved, the imprisoned, the war-wounded, and the destitute condemned to the squalor of public hospitals. Their identities are mostly forgotten, or were never recorded in the first place, yet collectively the data harvested from them was to provide the skeletal framework for the foundation of modern medicine.
In a remarkable achievement that could hardly be more relevant today, the author cleverly locates Maladies of Empire at the intersection of history and medicine, where data collection from unexpected and all too frequently wretched subjects comes to form the very basis of epidemiology itself. It is these early stories that send shudders through a modern audience. Nearly everyone is familiar with the wrenching 1787 diagram of the lower deck of the slave ship Brookes, where more than four hundred fifty enslaved human beings were packed like sardines for a months-long voyage, an image that became an emblem for the British antislavery movement. But, as Downs points out, few are aware that the sketch can be traced to the work of British naval surgeon Dr. Thomas Trotter, one of the first to recognize that poor ventilation in crowded conditions results in a lack of oxygen that breeds disease and death. His observations also led to a better understanding of how to prevent scurvy, a frequent cause of higher mortality rates among the seaborne citrus-deprived. Trotter himself was appalled by the conditions he encountered on the Brookes, and testified to this before the House of Commons. But that was hardly the case for many of his peers, and certainly not for the owners of slave ships, who looked past the moral dilemmas of a Trotter while remaining exceedingly grateful for his insights; after all, the goal was to keep more of their human cargo alive in order to turn greater profits. Dead slaves lack market value.
A little more than three decades prior to Trotter’s testimony, the critical need for ventilation was documented by another physician in the wake of the confinement of British soldiers in the infamous “Black Hole of Calcutta” following the fall of the city to the Nawab of Bengal in 1756, which resulted in the death by suffocation of the majority of the captives. Downs makes the point that one of the unintended consequences of colonialism was that, for early actors in the medical arena, it served to vastly extend the theater of observation of the disease-afflicted to a virtually global stage that hosted the byproducts of colonialism: war, subjugated peoples, the slave trade, military hospitals, and prisons. But it turns out that the starring roles belong less to the doctors and nurses who receive top billing in the history books than to the mostly uncredited bit players removed from the spotlight: the largely helpless and disadvantaged patients whose symptoms and outcomes were observed and cataloged, whose anonymous suffering translated into critical data that collectively advanced the emerging science of epidemiology.
Traditionally, history texts rarely showcased notable women, but one prominent exception was Florence Nightingale, frequently extolled for her role as a nurse during the Crimean War. But as underscored in Maladies of Empire, Nightingale’s real if often overlooked legacy was as a kind of disease statistician through her painstaking data collection and analysis—the very basis for epidemiology that was generally credited to white men rather than to “women working in makeshift hospitals.” [p111] But it was the poor outcomes for patients typically subjected to deplorable conditions in these makeshift military hospitals—which Nightingale assiduously observed and recorded—that drew attention to similarly appalling environments in civilian hospitals in England and the United States, which led to a studied analysis that eventually established systematic evidence for the causes, spread, and treatment of disease.
The conclusions these early epidemiologists reached were not always accurate. In fact, they were frequently wrong. But Downs emphasizes that what was significant was the development of the proper analytical framework. In these days prior to the revolutionary development of germ theory, notions on how to improve survival rates of the stricken put forward by Nightingale and others were controversial and often contradictory. Was the best course quarantine, a frequent resort? Or would improving sickbed conditions, as Nightingale advocated, lead to better outcomes? With the role of germs in contagion still unknown, the evidence could be both inconclusive and inconsistent, and competing ideas could each be partly right. After all, regardless of how disease spread, cleaner and better ventilated facilities might lead to lower mortality rates. Nightingale stubbornly resisted germ theory even as it was widely adopted, but after it won her grudging acceptance, she continued to promote more sanitary hospital conditions to improve survival rates. Still, epidemiologists faced difficult challenges with diseases that did not conform to familiar patterns, such as cholera, spread by a tainted water supply, and yellow fever, a mosquito-borne pathogen.
In the early days, as noted, European observers collected data from slave ships, and it never occurred to them that, because their human subjects were black, such evidence might not be applicable to the white population. But epidemiology took a surprisingly different course in the United States, where race has long proved to be a defining element. Of the more than six hundred thousand who lost their lives during the American Civil War, about two-thirds were felled not by bullets but by disease. The United States Sanitary Commission (USSC) was established in an attempt to ameliorate these dreadful outcomes, but its achievements on one hand were undermined on the other by an obsession with race, even going so far as sending out to “. . . military doctors a questionnaire, ‘The Physiological Status of the Negro,’ whose questions were based on the belief that Black soldiers were innately different from white soldiers . . . The questionnaire also distinguished gradations of color among Black soldiers, asking doctors to compare how ‘pure Negroes’ differed from people of ‘mixed races’ and to describe ‘the effects of amalgamation on the vital endurance and vigor of the offspring.’” With its imprimatur of governmental authority, the USSC officially championed scientific racism, with profound and long-term social, political, and economic consequences for African Americans. [p134-35]
Some of these notions can be traced back to the antebellum musings of Alabama surgeon Josiah Nott—made famous after the war when he correctly connected mosquitoes to the etiology of Yellow Fever—who asserted that blacks and whites were members of separate species whose mixed-race offspring he deemed “hybrids” who were “physiologically inferior.” Nott believed that all three of these distinct “types” responded differently to disease. [p124-25] His was but one manifestation of the once widespread pseudoscience of physiognomy that alleged black inferiority in order to justify first slavery and later second-class citizenship. Such ideas persisted for far too long, and although scientific racism still endures on the alt-right, it has been thoroughly discredited by actual scientists. It turns out that a larger percentage of African Americans did indeed succumb to death in the still ongoing COVID pandemic, but this has been shown to be due to factors of socioeconomic status and lack of access to healthcare, not genetics.
Still, although deemed inferior, enslaved blacks also proved useful when convenient. The author argues that “… enslaved children were most likely used as the primary source of [smallpox] vaccine matter in the Civil War South,” despite the danger of infection in harvesting lymph from human subjects in order to vaccinate Confederate soldiers in the field. In yet one more reminder of the moral turpitude that defined the south’s “peculiar institution,” the subjects also included infants whose resulting scar or pit, Downs points out, “. . . would last a lifetime, indelibly marking a deliberate infection of war and bondage. Few, if any, knew that the scars and pit marks actually disclosed the infant’s first form of enslaved labor, an assignment that did not make it into the ledger books or the plantation records.” [p141-42]
Tragically, this episode was hardly an anomaly, and unethical medical practices involving blacks did not end with Appomattox. The infamous “Tuskegee Syphilis Study” that observed but failed to offer treatment to the nearly four hundred black men recruited without informed consent ran for forty years and was not terminated until 1972! One of the chief reasons for COVID vaccine hesitancy among African Americans has been identified as a distrust of a medical community that historically has either victimized or marginalized them.
Maladies of Empire is a well-written, highly readable book suitable to a scholarly as well as popular audience, and clearly represents a magnificent contribution to the historiography. But it is hardly only for students of history. Instead, it rightly belongs on the shelf of every medical professional practicing today—especially epidemiologists!
Is your morning coffee moving? Is there a particle party going on in your kitchen? What makes for a great-tasting gourmet meal? Does artificial flavoring really make a difference? Why does mixing soap with water get your dishes clean? Why do some say that “sitting is the new smoking?” How come one beer gives you a strong buzz but your friend can drink a bottle of wine without slurring her words? When it comes to love, is the “right chemistry” just a metaphor? And would you dump your partner because he won’t use fluoridated toothpaste?
All this and much more makes for the delightful conversation packed into Chemistry for Breakfast: The Amazing Science of Everyday Life, by Mai Thi Nguyen-Kim, a fun, fascinating, and fast-moving slender volume that could very well turn you into a fan of—of all things—chemistry! This cool and quirky book is just the latest effort by the author—a real-life German chemist who hosts a YouTube channel and has delivered a TED Talk—to combat what she playfully dubs “chemism”: the notion that chemistry is dull and best left to the devices of boring, nerdy chem-geeks! One reason it works is that Nguyen-Kim is herself the antithesis of such stereotypes, coming off in both print and video as a hip, brilliant, and articulate young woman with a passion for science and for living in the moment.
I rarely pick up a science book, but when I do, I typically punch above my intellectual weight, challenging myself to reach beyond my facility with history and literature to dare to tangle with the intimidating realms of physics, biology, and the like. I often emerge somewhat bruised but with the benefit of new insights, as I did after my time with Sean Carroll’s The Particle at the End of the Universe and Bill Schopf’s Cradle of Life. So it was with a mix of eagerness and trepidation that I approached Chemistry for Breakfast.
But this proved to be a vastly different experience! Using her typical day as a backdrop—from her own body’s release of stress hormones when the alarm sounds to the way postprandial glasses of wine mess with the neurotransmitters of her guests—Nguyen-Kim demonstrates how thoroughly chemistry pervades our very existence, and distills its complexity into bite-size concepts that are easy to process yet never dumbed down. Apparently, there is a particle party going on in your kitchen every morning, with all kinds of atoms moving at different rates in the coffee you’re sipping, the mug in your hand, and the steam rising above it. It’s all about temperature and molecular bonds. In a chapter whimsically entitled “Death by Toothpaste,” we find out how chemicals bond to produce sodium fluoride, the stuff of toothpaste, and why that not only makes for a potent weapon against cavities but may also lead the author’s best buddy to dump her boyfriend—because he thinks fluoride is poison! There’s much more to come—and it’s still only morning at Mai’s house …
As a reader, I found myself learning a lot about chemistry without studying chemistry, a remarkable achievement by the author, whose technique is so effective because it is so distinctive. She fields humorous anecdotes plucked from everyday existence with infectious wit, so the “lessons” prove entertaining without turning silly. I love to cook, so I especially welcomed her return to the kitchen in a later chapter. Alas, I found out that while I can pride myself on my culinary expertise, it all really comes down to the way ingredients react with one another in a mixing bowl and on the hot stove. Oh, and it turns out that despite the fearmongering in some quarters, most artificial flavors are no better or worse than natural ones. Yes, you should read the label—but you have to know what those ingredients are before you judge them healthy or not.
Throughout the narrative, Nguyen-Kim conveys an attractive brand of approachability that makes you want to sit down and have a beer with her, but unfortunately she can’t drink: Mai, born of Vietnamese parents, has inherited a gene mutation, common among a segment of Asian populations, that interferes with the way the body processes alcohol, so she becomes overly intoxicated after just a few sips of any strong drink. She explains in detail why her “broken” ALDH2 enzyme simply will not break down the acetaldehyde in the glass of wine that makes her guests a little tipsy but gives her nausea, a rapid heartbeat, and a “weird, lobster-red tinge” to her face. Mai’s issue with alcohol reminded me of recent studies revealing that some people of northern European ancestry always burn instead of tan at the beach because of faulty genes that block the creation of melanin in response to sun exposure. This underscores that while race is of course a myth that otherwise communicates nothing of importance about human beings, in the medical world genetics has the potential to serve as a powerful tool to explain and treat disease. As for Mai, given the overall health risks of alcohol consumption, she views her inability to drink as more of a blessing than a curse, and hopes to pass her broken gene on to her offspring!
The odds that I would ever deliberately set out to read a book about chemistry were never that favorable. That I would do so and then rave about the experience seemed even more unlikely. But here we are, along with my highest recommendations. Mai’s love of science is nothing less than contagious. If you read her work, I can promise that not only will you learn a lot, but you will really enjoy the learning process. And that too, I suppose, is chemistry!
[Note: I read an Advance Reader’s Copy of this book as part of an early reviewer’s program]
Until Jimmy Carter came along, there really was no rival to John Quincy Adams (1767-1848) as best ex-president, although perhaps William Howard Taft earns honorable mention for his later service as Chief Justice of the Supreme Court. Carter—who at ninety-seven still walks among us as this review goes to press—has made his reputation as a humanitarian outside of government after what many view as a mostly failed single term in the White House. Adams, on the other hand, whose one term as the sixth President of the United States (1825-29) was likewise disappointing, managed to establish a memorable outsize official legacy when he returned to serve his country as a member of the House of Representatives from 1831 until his dramatic collapse at his desk and subsequent death inside the Capitol Building in 1848. Freshman Congressman Abraham Lincoln would be a pallbearer.
Like several of the Founders whose own later presidential years were troubled, including his own father, John Quincy had a far more distinguished and successful career prior to his time as Chief Executive. But quite remarkably, unlike these other men—John Adams, Jefferson, Madison—who lingered in mostly quiet retirement for decades beyond their respective tenures, John Quincy Adams went on to equal or surpass his accomplished pre-presidential service as diplomat, United States Senator, and Secretary of State, returning as a simple Congressman from Massachusetts who became a giant in antislavery advocacy. Adams remains the only former president elected to the House, and until George W. Bush in 2001, he was the only man who could claim his own father as a fellow president.
Notably, the single unsatisfactory terms that he and his father served in the White House turned out to be bookends to a significant era in American history: John Adams was the first to run for president in a contested election (Washington had essentially been unopposed); his son’s tenure ended along with the Early Republic, shattered by the ascent of Jacksonian democracy. But if the Early Republic was no more, it marked only the beginning of another chapter in the extraordinary life of John Quincy Adams. And yet, for a figure who carved such indelible grooves in our nation’s history, present at the creation and active well into the crises of the antebellum period that not long after his death would threaten to annihilate the American experiment, it is somewhat astonishing how utterly unfamiliar he remains to most citizens of the twenty-first century.
Prominent historian William J. Cooper seeks to remedy that with The Lost Founding Father: John Quincy Adams and the Transformation of American Politics (2017), an exhaustively researched, extremely well-written, if dense study that is likely to claim distinction as the definitive biography for some years to come. Cooper’s impressive work is old-fashioned narrative history at its best. John Quincy Adams is the main character, but his story is told amid the backdrop of the nation’s founding, its evolution as a young republic, and its descent to sectional crises over slavery, while many, at home and abroad, wondered at the likelihood of its survival. It is not only clever but entirely apt that in the book’s title the author dubs his subject the “Lost Founding Father.”
Some have called Benjamin Franklin the "grandfather of his country"; John Quincy Adams, likewise, could be called a sort of "grandson." He not only witnessed the tumultuous era of the American Revolution and observed John Adams' storied role as a principal Founder, but also accompanied his father on diplomatic missions to Europe while still a boy, and completed most of his early education there. Like Franklin, Jefferson, and his father, he spent many years abroad during periods of fast-moving events and dramatic developments on American soil that altered the nation and could prove jarring upon return. Unlike the others, his extended absence coincided with his formative years; John Quincy grew up not in New England but in France, the Netherlands, Russia, and Great Britain, and this came to affect him deeply.
A brooding intellectual with a brilliant mind who sought solitude over society, dedicated to principle above all else, including loyalty to party, the Adams who emerges in these pages was a socially awkward workaholic subject to depression, blessed with talents that ranged from the literary to the linguistic to the deeply analytical, but lacking even the tiniest vestige of charisma. He strikes the reader as the least suitable person ever to aspire to or serve as president of the United States. A gifted writer, he began a diary at twelve years old and continued it almost without interruption until shortly before his death, frequently expressing dismay at his inability to keep up with his ambitious goals for daily entries that often ran to considerable length.
There is much in the man that resembles his father, also a principled intellectual, whom he much admired even as he suffered a sense of inadequacy in his shadow. Both men were stubborn in their ideals and tended to alienate those who might otherwise have been allies. While each could be self-righteous, John Adams was also ever firmly self-confident in a way his son could never match. In the younger man's defense, he not only felt obligated to live up to a figure who was a titan in the public arena, but he also lacked a wife cut from the same cloth as Abigail, his mother, with whom his own relationship was sometimes troubled.
Modern historians have made much of the historic partnership that existed, mostly behind the scenes, between John and Abigail Adams; in every way except the constraints of eighteenth-century mores she seems his equal. John Quincy, by contrast, was wedded to Louisa Catherine, a sickly woman given to fainting spells and frequent migraines, whose multiple miscarriages, coupled with the loss of an infant daughter, certainly inflicted severe psychological trauma. A modern audience cannot help but wonder whether her many maladies and histrionics were psychosomatic. At any rate, John Quincy treated his wife and the other women he encountered with the patronizing male chauvinism typical of his times, so it is doubtful that, had he instead found an Abigail Adams at his side, he could have flourished in her orbit the way his father did.
Although Secretary of State John Quincy Adams was largely the driving force behind the landmark "Monroe Doctrine" and other foreign policy achievements of the Monroe Administration, most who know of Adams know him only peripherally, through his legendary political confrontation with the far more celebrated Andrew Jackson. That conflict was forged in the election of 1824. The Federalist Party, scorned for threats of New England secession during the War of 1812, was essentially out of business. James Monroe was wrapping up his second term in what historians have called the "Era of Good Feelings," which ostensibly reflected a sense of national unity under a single party, the Democratic-Republicans, though fissures, factions, local interests, and emerging coalitions churned beneath the surface. In the most contested election to date in the nation's history, John Quincy Adams, Andrew Jackson, Henry Clay, and William Crawford were the chief contenders for the highest office. Jackson received a plurality, but no candidate won a majority of the electoral votes, so as specified in the Constitution the race was sent to the House for decision. Crawford had suffered a devastating stroke and was out of consideration. Adams and Clay tended to clash, but both were aligned on many national issues, and Jackson was rightly seen as a dangerous demagogue. Clay threw his support to Adams, who became president. Jackson was furious, all the more so when Adams named Clay Secretary of State, an office then seen as a sure stepping-stone to the presidency; Jackson branded the arrangement a "Corrupt Bargain." As it turned out, while Adams prevailed, his presidency was marked by frustration, his ambitious domestic goals stymied by Congress. In his run for reelection, he was dealt a humiliating defeat by Jackson, who headed the new Democratic Party. The politics of John Quincy Adams and the Early Republic went extinct.
While evaluating these two elections, it is worth pausing to emphasize John Quincy's longtime objection to the nefarious if often overlooked impact of the three-fifths clause of the Constitution, which granted southern slaveholding states outsize political clout by counting each enslaved individual as three-fifths of a person for purposes of representation. This proved significant, since the slave south claimed a disproportionate share of national political power when it came to advancing legislation or, for that matter, electing a president. His focus on the issue sharpened while he was Secretary of State, during the debate that swirled around the Missouri Compromise of 1820, when he concluded that:
The bargain in the Constitution between freedom and slavery had conveyed to the South far too much political influence, its base the notorious three-fifths clause, which immorally increased southern power in the nation … the past two decades had witnessed a southern domination that had ravaged the Union … he emphasized what he saw as the moral viciousness of that founding accord. It contradicted the fundamental justification of the American Revolution by subjecting slaves to oppression while privileging their masters with about a double representation. [p174]
This was years before he was himself to fall victim to the infamous clause. As underscored by historian Alan Taylor in his recent work, American Republics (2021), the disputed election of 1824 would have been far less disputed without the three-fifths clause: absent it, Adams would have led Andrew Jackson in the Electoral College 83 votes to 77, instead of trailing him 99 to 84. When Jackson prevailed in the next election, in 1828, it was the south that cemented his victory. The days of Virginia planters in the White House may have passed, but the slave south clearly dominated national politics and often served as kingmaker in antebellum presidential contests.
In any case, Adams’ dreams of vindicating his father’s single term were dashed. A lesser man would have gone off into the exile of retirement, but Adams was to come back—and come back stronger than ever as a political figure to be reckoned with, distinguished by his fierce antislavery activism. His abhorrence of human bondage ran deep, and long preceded his return to Congress. And because he kept such a detailed journal, we have insight into his most personal convictions.
Musing once more about the Missouri Compromise, he confided to his diary his belief that a war over slavery was surely on the horizon that would ultimately result in its elimination: “If slavery be the destined sword in the hand of the destroying angel which is to sever the ties of this Union … the same sword will cut in sunder the bonds of slavery itself.” [p173] He also wrote of his conversations with the fellow cabinet secretary he most admired at the time, South Carolina’s John C. Calhoun, who clearly articulated the doctrine of white supremacy that defined the south. To Adams’ disappointment, Calhoun told him that southerners did not believe the Declaration’s guarantees of universal rights applied to blacks, and “Calhoun maintained that racial slavery guaranteed equality among whites because it placed all of them above blacks.” [p175]
These diary entries from 1820 foreshadowed the more crisis-driven politics of the decades that followed, when Adams, his unhappy presidency long behind him, was the leading figure in Congress standing against the south's "peculiar institution" and southern domination of national politics. These were, of course, far more fraught times. He opposed both Texas annexation and the Mexican War, which he correctly viewed as a conflict designed to extend slavery. But he most famously led the opposition to the 1836 resolution known as the "gag rule," which prohibited House debate on petitions to abolish slavery, incensed the north, and spawned greater polarization. Adams was eventually successful, and the gag rule was repealed, but not until 1844.
It has long been my goal to read at least one biography of each American president, and I came to Cooper's book with that objective in mind. I found my time with it deeply satisfying, although I suspect that because it is so dense with detail it will hold less appeal for a popular audience. Still, if you want to learn about this too often overlooked critical figure and at the same time gain a greater understanding of an important era in American history, I highly recommend that you turn to The Lost Founding Father.
Early in the war … a Union squad closed in on a single ragged Confederate, and he obviously didn't own any slaves. He couldn't have much interest in the Constitution or anything else. And they asked him: "What are you fighting for, anyhow?" And he said: "I'm fighting because you're down here." Which is a pretty satisfactory answer.
That excerpt is from Episode 1, "The Cause," of Ken Burns' epic docuseries The Civil War (1990). It was delivered by the avuncular Shelby Foote in his soft, reassuring (some might say mellifluous) cadence, the inflection decorated with a pronounced but gentle southern accent. As professor of history James M. Lundberg complains, Foote, author of a popular Civil War trilogy who was himself not a trained historian, "nearly negates Burns' careful 15-minute portrait of slavery's role in the coming of the war with a 15-second" anecdote. Elsewhere, Foote rejected the scholarly consensus that slavery was the central cause of secession and of the conflict it spawned, a war that would take well over 600,000 American lives.
While all but die-hard "Lost Cause" myth fanatics have relegated Foote's ill-conceived dismissal of the centrality of slavery to the dustbin of history, the notion that southern soldiers fought solely for home and hearth has long persisted, even among historians. And on its face, it seems as if it should be true. After all, secession was the work of a narrow slice of the antebellum south: the slave-owning planter class, which comprised less than two percent of the population but dominated the political elite, furious that Lincoln's election by "Free-Soil" Republicans would likely deny their demands to transplant their "peculiar institution" to the new territories acquired in the Mexican War. More critically, three-quarters of southerners owned no slaves at all, and nearly ninety percent of the remainder owned twenty or fewer. Most whites lived at the margins as yeoman farmers, although their skin color ensured a status markedly above that of blacks, free or enslaved. The Confederate army closely reflected that society: most rebel soldiers were not slaveowners. So slavery could not have been important to them … or could it?
The first to challenge the assumption that Civil War soldiers, north or south, were political agnostics was James M. McPherson in What They Fought For 1861-1865 (1995). Based on extensive research on letters written home from the front, McPherson argued that most of those in uniform were far more ideological than previously acknowledged. In a magnificent contribution to the historiography, Colin Edward Woodward goes much further in Marching Masters: Slavery, Race, and the Confederate Army During the Civil War (2014), presenting compelling evidence that not only were most gray-clad combatants well-informed about the issues at stake, but a prime motivating force for a majority was to preserve the institution of human chattel bondage and the white supremacy that defined the Confederacy.
Like McPherson, Woodward does a deep dive into the wealth of still-extant letters from those at the front. His deeply researched and well-written narrative reveals that the average rebel was surprisingly well-versed in the greater issues manifested in the debates that launched an independent Confederacy and justified the blood and treasure spent to sustain it. And just as with secession, the central focus was on preserving a society founded upon chattel slavery and white supremacy. Some letters were penned by those who left enslaved human beings, many or just a few, back home with their families when they marched off to fight, but most were written by poor dirt farmers who had no human property nor any immediate prospect of obtaining it.
But what is fully astonishing, as Woodward exposes in the narrative, is not only how frequently slavery and the appropriate status of African Americans are referenced in such correspondence, but how remarkably similar the language is, whether the soldier is the son of a wealthy planter or a yeoman farmer barely scraping by. In nearly every case, the righteousness of their cause is defined again and again not by the euphemism of "states' rights" that became the rallying cry of the "Lost Cause" after the war, but by the sanctity of the institution of human bondage. More than once, letters resound with a disturbing yet familiar refrain: that the most fitting condition for blacks is as human property, something seen as mutually beneficial to the master as well as to the enslaved.
If the spectacle of men without slaves risking life and limb to sustain slavery, with musket in hand and zealous declarations in letters home, provokes a kind of cognitive dissonance in modern ears, we need only be reminded of our own contemporaries in doublewides who sound the most passionate defense of Wall Street banks. Have-nots in America often aspire to what is beyond their reach, for themselves or for their children. For poor southern whites of the time, in and out of the Confederate army, that aspiration was slave property.
One of the greatest sins of postwar reconciliation and the tenacity of the "Lost Cause" was the erasure of African Americans from history. In the myth-making that followed Appomattox, with human bondage extinct and its practice widely reviled, the Civil War was transformed into a sectional war of white brother against white brother, and blacks were relegated to roles as bit players. The centrality of slavery was excised from the record. In the literature, blacks were generally recalled as benign servants loyal to their masters, like the terrified Prissy in Gone with the Wind screeching "De Yankees is comin!" in distress rather than with the celebration more likely characteristic of that moment in real time. That a half million of the enslaved fled to freedom in Union lines was lost to memory. Also forgotten was the fact that by the end of the war fully ten percent of the Union Army was comprised of black soldiers in the United States Colored Troops (USCT), and these men played a significant role in the south's defeat. Never mentioned was that Confederate soldiers routinely executed black men in blue uniforms who were wounded or attempting to surrender, not only in well-known encounters like Fort Pillow and the Battle of the Crater, but frequently and anonymously. As Woodward reminds us, this brand of murder was often unofficial, rarely acknowledged, and almost never condemned. Only recently have these aspects of Civil War history received the attention that is their due.
And yet, more remarkably, Marching Masters reveals that perhaps the deepest and most enduring erasure of African Americans was of the huge cohort that accompanied the Confederate army on its various campaigns throughout the war. Thousands and thousands of them. "Lost Cause" zealots have imagined great corps of "Black Confederates" who served as fighters fending off Yankee marauders, but if that is fantasy—and it certainly is—the massive numbers of blacks who served as laborers alongside white infantry were not only real but represented a significant reason why smaller forces of Confederates held out as well as they did against their often numerically superior northern opponents. We have long known that a greater percentage of southerners were able to join the military than their northern counterparts because slave labor at home in agriculture and industry freed up men to wield saber and musket, but Woodward uncovers the long-overlooked legions of the enslaved who traveled with the rebels performing the kind of labor that (mostly) fell on white enlisted men in northern armies.
A segment of these were also personal servants to the sons of planters, which sometimes provoked jealousy in the ranks. Certain letters home plead for just such a servile companion, sometimes arguing that the enslaved person would be less likely to flee to Union lines if he were a cook in an army camp instead! And there were indeed occasional tender, if perversely paternalistic, bonds between the homesick soldier and the enslaved, some of which found wistful expression in letters, some manifested in relationships with servants in the encampments. Many soldiers had deep attachments to the enslaved who had nurtured them as children in the bosom of their families, and some of that affection was sincerely reciprocated. Woodward makes it clear that while certain generalities can be drawn, every individual, soldier or chattel, was a human being capable of a wide range of actions and emotions, from the cruel to the heartwarming. For better or for worse, all were creatures of their times and their circumstances. But at the end of the day, white soldiers had something like free will; enslaved African Americans were subject to the will of others, sometimes for the better but more often for the worse.
And then there was impressment. One of the major issues relatively unexplored in the literature is the resistance of white soldiers in the Confederate army to performing menial labor, the same tasks routinely done by white soldiers in the Union army, who grumbled as those in the ranks of every army are wont to do while nevertheless following orders. But southern boys were different. Nurtured in a society firmly grounded in white supremacy, with chattel property doomed to the most onerous toil, rebels not only typically looked down upon hard work but, as comes out in their letters, equated it with "slavery." To cope with this, and with an overall shortage of manpower, legislation was passed in 1863 mandating impressment of the enslaved along with a commitment of compensation to owners. The measure was not well received, but it was enacted nonetheless, and thousands more blacks were sent to camps to do the work soldiers were unwilling to do.
The numbers were staggering. When Lee invaded Pennsylvania, his army included 6,000 enslaved blacks, an additional ten percent on top of the 60,000 infantry troops he led to Gettysburg! This of course does not include the runaways and free blacks his forces seized and enslaved after crossing the state line. The point of all this is that slavery was not some ideological abstraction for the average rebel soldier in the ranks, something confined to the home front whether or not your own family owned chattel property. The enslaved were with you in the field every day, not figuratively but in the flesh. With this in mind, denials that slavery served as a critical motivation for Confederate troops ring decidedly off-key.
While slavery was the central cause of the war, it was certainly not the only cause. There were other tensions that included agriculture vs. industry, rural vs. urban, states’ rights vs. central government, tariffs, etc. But as historians have long concluded, none of these factors on their own could ever have led to Civil War. Likewise, southern soldiers fought for a variety of reasons. While plenty were volunteers, many were also drafted into the war effort. Like soldiers from ancient times to the present day, they fought because they were ordered to, because of their personal honor, because they did not want to appear cowardly in the eyes of their companions. And because much of the war was decided on southern soil, they also fought for their homeland, to defend their families, to preserve their independence. So Shelby Foote might have had a point. But what was that independence based upon? It was fully and openly based upon creating and sustaining a proud slave republic, as all the rhetoric in the lead-up to secession loudly underscored.
Marching Masters argues convincingly that the long-held belief that southern soldiers were indifferent to or unacquainted with the principles that guided the Confederate States of America is itself a kind of myth, one that encourages us not only to forgive those who fought for a reprehensible cause but to place them on a kind of heroic pedestal. Many fought valiantly, many lost their lives, and many were indeed heroes, but we must not overlook the cause that defined that sacrifice. In this, we should recall the speech on remembering the Civil War delivered by the formerly enslaved Frederick Douglass on Decoration Day in 1878, a plea against moral equivalency that is as relevant today as when he delivered it: "There was a right side and a wrong side in the late war, which no sentiment ought to cause us to forget, and while today we should have malice toward none, and charity toward all, it is no part of our duty to confound right with wrong, or loyalty with treason."
For all of the more than 60,000 books on the Civil War, there remains a great deal to explore and much long cloaked in myth for us to unravel. It is the duty not only of historians but of all citizens of our nation, a nation truly reborn in that tragic, bloody conflict, to set aside popular if erroneous notions of what led to that war, as well as what motivated its long-dead combatants to take up arms against one another. To that end, Woodward's Marching Masters is a book that is not only highly recommended but most certainly required reading.
Several years ago, I published an article in a scholarly journal entitled "Strange Bedfellows: Nativism, Know-Nothings, African-Americans & School Desegregation in Antebellum Massachusetts," which spotlighted the odd confluence of anti-Irish nativism and the struggle to desegregate Boston schools. The Know-Nothings, a populist, nativist coalition containing elements that would later be folded into the emerging Republican Party, made a surprising sweep of the 1854 Massachusetts elections, fueled primarily by anti-Irish sentiment as well as pent-up popular rage against the elite status quo that had long dominated state politics. Suddenly, the governor, all forty senators, and all but three members of the house were Know-Nothings!
Perhaps more startling was that during their brief tenure, the Know-Nothing legislature enacted a host of progressive reforms, creating laws to protect workingmen, ending imprisonment for debt, strengthening women's rights in property and marriage, and—most significantly—passing landmark legislation in 1855 that "prohibited the exclusion [from public schools] of children for either racial or religious reasons," which effectively made Massachusetts the first state in the country to ban segregation in schools! Featured in the debate prior to passage of the desegregation bill is a quote from the record that today reads as at once comic and cringeworthy, as one proponent of the new law sincerely voiced his regret "that Negroes living on the outskirts . . . were forced to go a long distance to [the segregated] Smith School. . . while . . . the 'dirtiest Irish,' were allowed to step from their houses into the nearest school."
My article focused on Massachusetts politics and the bizarre incongruity of nativists unexpectedly delivering the long-sought prize of desegregated schools to the African American community. It is also the story of the nearly forgotten black abolitionist and integrationist William Cooper Nell, a mild yet charismatic figure who united disparate forces of blacks and whites in a long, stubborn, determined campaign to end Boston school segregation. But there are many other important stories of the people and events that led to that moment which, due to space constraints, could not receive adequate treatment in my article.
Arguably the most significant of these, which my article references but does not dwell upon, centers on a little black girl named Sarah Roberts. Her father, Benjamin R. Roberts, sued for equal protection under the state constitution because his daughter was barred from attending a school near her residence and was instead compelled to make a long walk to the rundown and crowded Smith School. He was represented by Robert Morris, one of the first African American attorneys in the United States, and Charles Sumner, who would later serve as United States Senator. In April 1850, in Roberts v. The City of Boston, the Massachusetts Supreme Judicial Court ruled against him, declaring that each locality could decide for itself whether to maintain or end segregation. The ruling served as an unfortunate precedent for the ignominious "separate but equal" doctrine of Plessy v. Ferguson some decades hence, and it was an obstacle Thurgood Marshall had to surmount when he successfully argued to have the Supreme Court strike down school segregation across the nation in 1954's breakthrough Brown v. Board of Education, just over a century after the disappointing ruling in the Roberts case.
Father and son Stephen Kendrick and Paul Kendrick teamed up to tell the Roberts story and a good deal more in Sarah’s Long Walk: The Free Blacks of Boston and How Their Struggle for Equality Changed America, an extremely well-written, comprehensive, if occasionally slow-moving chronicle that recovers for the reader the vibrant, long overlooked black community that once peopled Boston in the years before the Civil War. In the process, the authors reveal how it was that while the state of Massachusetts offered the best overall quality of life in the nation for free blacks, it was also the home to the same stark, virulent racism characteristic of much of the north in the antebellum era, a deep-seated prejudice that manifested itself not only in segregated schools but also in a strict separation in other arenas such as transportation and theaters.
Doctrines of abolition were widely despised, north and south, and while abolitionists remained a minority even in Massachusetts, it was perhaps the only state in the country where antislavery ideology achieved widespread legitimacy. But true history is all nuance, and those who railed passionately against the inherent evil of holding humans as chattel property did not necessarily also advance notions of racial equality; that was far less common. Moreover, it is too rarely underscored that the majority of northern "Free-Soilers" who would later become the most critical component of the Republican Party vehemently opposed the spread of slavery to the new territories acquired in the Mexican War while concomitantly despising blacks, free or enslaved.
At the same time, there was hardly unanimity in the free black community when it came to integration; some blacks welcomed separation. Still, as Sarah's Long Walk relates, a number of significant African American leaders like Robert Morris and William Cooper Nell, along with their white abolitionist allies, played the long game and pursued compelling, nonviolent mechanisms to achieve both integration and equality, many of which presaged the tactics of Martin Luther King and other Civil Rights figures a full century later. For instance, rather than lose hope after the Roberts decision, Nell redoubled his efforts with a new strategy: a taxpayers' boycott that saw prominent blacks move out of Boston to suburbs with integrated schools, depriving the city of tax revenue.
The Kendricks open the narrative with a discussion of Thurgood Marshall's efforts to overturn the Roberts precedent in Brown v. Board of Education, and then trace that story back to the flesh-and-blood Boston inhabitants who made Roberts v. The City of Boston possible, revealing the free blacks who have too long been lost to history. Readers unfamiliar with this material will find much between the covers of this fine book that will surprise them. The most glaring surprise might be how thoroughly, in the decades after Reconstruction, blacks were erased from our history, north and south. Until recently, how many growing up in Massachusetts knew anything at all about the thriving free black community in Boston, or similar ones elsewhere above the Mason-Dixon line?
But most astonishing for many will be the fact that the separation of the races that would become the new normal in the post-Civil War "Jim Crow" south had its roots fully nurtured in the north decades before Appomattox. Whites and their enslaved chattels shared intertwined lives in the antebellum south, while separation between whites and blacks was fiercely enforced in the north. Many African Americans in Massachusetts had fled bondage, or had family members who were runaways, and knew full well that southern slaveowners commonly traveled by rail accompanied by their enslaved servants, while free blacks in Boston were relegated to a separate car until the state prohibited racial segregation in mass transportation in 1842.
Sarah may not have been spared her long walk to school, but the efforts of integrationists eventually paid off when school segregation was prohibited by Massachusetts law just five years after Sarah’s father lost his case in court. Unfortunately, this battle had to be waged all over again in the 1970s, this time accompanied by episodes of violence, as Boston struggled to achieve educational equality through controversial busing mandates that in the long term generated far more ill will than sustainable results. Despite the elevation of Thurgood Marshall to the Supreme Court bench, and the election of the first African American president, more than one hundred fifty years after the Fourteenth Amendment became the law of the land, the Black Lives Matter (BLM) movement reminds us that there is still much work to be done to achieve anything like real equality in the United States.
For historians and educators, an even greater concern these days lies in the concerted efforts by some on the political right to erase the true story of African American history from public schools. As this review goes to press in Black History Month, February 2022, shameful acts are becoming law across a number of states that by means of gaslighting legislation ostensibly designed to ban Critical Race Theory (CRT) effectively prohibit educators from teaching their students the true history of slavery, Reconstruction, and Civil Rights. As of this morning, there are some one hundred thirteen other bills being advanced across the nation that could serve as potential gag orders in schools. How can we best combat that? One way is to loudly protest to state and federal officials, to insist that black history is also American history and should not be erased. The other is to freely share black history in your own networks. The best weapons for that in our collective arsenal are quality books like Sarah’s Long Walk.
My journal article, “Strange Bedfellows: Nativism, Know-Nothings, African-Americans & School Desegregation in Antebellum Massachusetts,” and related materials can be accessed by clicking here: Know-Nothings
A familiar trope in the Looney Tunes cartoons of my boyhood had Elmer Fudd or some other zany character digging a hole with such vigor and determination that they emerged on the other side of the world in China, greeted by one or more of the stereotypically racist Asian animated figures of the day. In the 1964 Road Runner vehicle “War and Pieces,” Wile E. Coyote goes one better, riding a rocket clear through the earth—presumably passing through its center—until he appears on the other side dangling upside down, only to then encounter a Chinese Road Runner embellished with a braided pigtail and conical hat who bangs a gong with such force that he is driven back through the tunnel to end up right where he started. In an added flourish, the Chinese Road Runner then peeps his head out of the hole and beep-beep’s faux Chinese characters that turn into letters that spell “The End.”
There were healthy doses of both hilarious comedy and uncomfortable caricature here, but what really stuck in a kid’s mind was the notion that you could somehow burrow through the earth with a shovel or some explosive force, which it turns out is just as impossible in 2022 as it was in 1964. But if you hypothetically wanted to give it a go, you would have to start at China’s actual antipode in this hemisphere, which lies in Chile or Argentina, and then tunnel some 7,918 miles: twice the roughly 3,959 miles (6,371 km) that separate the surface from the center of the earth, which you would pass through along the way.
So what about the center of the earth? Could we go there? After all, we did visit the moon, which, at an average of 238,855 miles away, is far more distant. But of course what lies between the earth and its single satellite is mostly empty space, not the crust, mantle, outer core, and inner core of a rocky earth that is a blend of the solid and the molten. Okay, it’s a challenge, you grant, but how far have we actually made it in our effort to explore our planet’s interior? We must have made some headway, right? Well, it turns out that the answer is: not very much. A long, concerted effort at drilling that began in 1970 by the then Soviet Union resulted in a measly milestone of a mere 7.6 miles (12.3 km) at the Kola Superdeep Borehole near the Russian border with Norway; efforts were abandoned in 1994 because of higher-than-expected temperatures of 356 °F (180 °C). Will new technologies take us deeper one day at this site or another? Undoubtedly. But it likely will not be in the near future. After all, there’s another 3,951.4 miles to go and conditions will only grow more perilous at greater depths.
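The arithmetic behind these figures is simple enough to check for yourself. Here is a minimal back-of-the-envelope sketch using only the numbers cited above (mean earth radius of about 3,959 miles and the Kola borehole’s 7.6-mile depth); the variable names are my own:

```python
# Back-of-the-envelope check of the distances cited in the review.
EARTH_RADIUS_MILES = 3_959   # ~6,371 km from the surface to the center
KOLA_DEPTH_MILES = 7.6       # deepest borehole ever drilled (abandoned 1994)

# A tunnel to the antipode would pass through the full diameter.
tunnel_length = 2 * EARTH_RADIUS_MILES            # 7,918 miles

# How much farther the Kola borehole would have had to go to reach the center.
remaining = EARTH_RADIUS_MILES - KOLA_DEPTH_MILES  # 3,951.4 miles

# The fraction of the way to the center we have actually drilled.
fraction_reached = KOLA_DEPTH_MILES / EARTH_RADIUS_MILES

print(f"Antipodal tunnel: {tunnel_length:,} miles")
print(f"Distance left to the center: {remaining:,.1f} miles")
print(f"Fraction of the way there: {fraction_reached:.2%}")
```

Run it and the punchline is plain: a half-century of drilling has taken us less than one-fifth of one percent of the way down.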
But we can dream, can’t we? Indeed. And it was Jules Verne who did so most famously when he imagined just such a trip in his classic 1864 science fiction novel, Journey to the Center of the Earth. Astrophysicist and journalist David Whitehouse cleverly models his grand exploration of earth’s interior, Into the Heart of the World: A Journey to the Center of the Earth, on Verne’s tale. The result is a well-written, highly accessible, and occasionally exciting work of popular science that relies on geology rather than fiction to transport the reader beneath the earth’s crust, through the layers below, and eventually to the inner core at the planet’s center, as the latest research allows us to theoretically conceive it.
It is surprising just how few people today possess a basic understanding of the mechanics that power the forces of the earth. But perhaps even more astonishing is how new—relatively—this science is. When I was a child watching Looney Tunes on our black-and-white television, my school textbooks admitted that although certain hypotheses had been suggested, the causes of sometimes catastrophic events such as earthquakes and volcanoes remained essentially unknown. All that changed effectively overnight—around the time my family got our first color TV—with the widespread acceptance by geologists of the theory of plate tectonics, constructed on the foundation of the much earlier hypothesis of German meteorologist and geophysicist Alfred Wegener, who in 1912 advanced the view of continents in motion known as “continental drift,” which was ridiculed in his time. By 1966, the long-dead Wegener was vindicated, and continental drift was upgraded to the more elegant model of plate tectonics that fully explained not only earthquakes and volcanoes, but mountain-building, seafloor spreading, and the whole host of other processes that power a dynamic earth.
Unlike those of some disciplines such as astrophysics, the basic concepts that make up earth science are hardly insurmountable for anyone of average intelligence, so for those who have no idea how plate tectonics works and are curious enough to want to learn, Into the Heart of the World is a wonderful starting point. Whitehouse can be credited with articulating complicated processes in an easy-to-follow narrative that consistently holds the reader’s interest and remains fully comprehensible to the non-scientist. I came to this book with more than a passing familiarity with plate tectonics, but I nevertheless added to my knowledge base and enjoyed the way the author united disparate topics into this single theme of a journey to the earth’s inner core.
If I have a complaint, it is only a quibble tied to my own preferences: Into the Heart of the World often devotes far more paragraphs to a history of “how we know what we know” than to a more detailed explanation of the science itself. The author is not to be faulted for what is integral to the structure of the work—after all, the cover does boast “A Remarkable Voyage of Scientific Discovery”—but it left me longing for more. Also, some readers may stumble over these backstories of people and events, eager instead to get to the fascinating essence of what drives the forces that shape our planet.
A running gag in more than one Bugs Bunny episode has the wacky rabbit inadvertently tunneling to the other side of the world, then admonishing himself that “I knew I shoulda taken that left turn at Albuquerque!” He doesn’t comment on what turn he took at the juncture with the center of the earth, but many kids who sat cross-legged in front of their TVs wondered what that trip might look like. For grownups who still wonder, I recommend Into the Heart of the World as your first stop.
[Note: this book has also been published under the alternate title, Journey to the Centre of the Earth: The Remarkable Voyage of Scientific Discovery into the Heart of Our World.]
[A link to the referenced 1964 Road Runner episode is here: War and Pieces]
In southern Greece in 1944, German forces constructing a wartime bunker reportedly unearthed a single mandible that paleontologist Bruno von Freyberg incorrectly identified as an extinct Old-World monkey. A decades-later reexamination by another paleoanthropologist determined that the tooth instead belonged to a 7.2-million-year-old extinct species of great ape which in 1972 was dubbed Graecopithecus freybergi and came to be more popularly known as “El Graeco.” Another tooth was discovered in Bulgaria in 2012. Then, in 2017, an international team led by German paleontologist Madelaine Böhme conducted an analysis that came to the astonishing conclusion that El Graeco in fact represents the oldest hominin—our oldest direct human ancestor! At the same time, Böhme challenged the scientific consensus that all humans are “Out-of-Africa” with her competing “North Side Story” that suggests Mediterranean ape ancestry instead. Both of these notions remain widely disputed in the paleontological community.
In Ancient Bones: Unearthing the Astonishing New Story of How We Became Human, Böhme—with coauthors Rüdiger Braun and Florian Breier—advances this North Side Story with a vengeance, scorning the naysayers and intimating the presence of some wider conspiracy in the paleontological community to suppress findings that dispute the status quo. Böhme brings other ammunition to the table, including the so-called “Trachilos footprints,” the 5.7-million-year-old potentially hominin footprints found on Crete, which—if fully substantiated—would make them more than 2.5 million years earlier than the footprints of Australopithecus afarensis found in Tanzania. Perhaps these were made by El Graeco?! And then there’s Böhme’s own discovery of the 11.6-million-year-old Danuvius guggenmosi, an extinct species of great ape she uncovered near the town of Pforzen in southern Germany, which according to the author revolutionizes the origins of bipedalism. Throughout, she positions herself as the lonely voice in the wilderness shouting truth to power.
I lack the scientific credentials to quarrel with Böhme’s assertions, but I have studied paleoanthropology as a layman long enough to both follow her arguments and to understand why accepted authorities would be reluctant to embrace her somewhat outrageous claims that are after all based on rather thin evidence. But for the uninitiated, some background to this discussion is in order:
While human evolution is in itself not controversial (for scientists, at least; Christian evangelicals are another story), the theoretical process of how we ended up as Homo sapiens sapiens, the only living members of genus Homo, based upon both molecular biology and fossil evidence, has long been open to spirited debate in the field, especially because new fossil finds occur with some frequency, and the rules of somewhat secretive peer-reviewed scholarship that lead to publication in scientific journals often delay what should otherwise be breaking news.
Paleontologists have long been known to disagree vociferously with one another, sometimes spawning feuds that erupt in the public arena, such as the famous one in the 1970s between the esteemed, pedigreed Richard Leakey and Donald Johanson over Johanson’s discovery and identification of the 3.2-million-year-old Australopithecine “Lucy,” which was eventually accepted by the scientific community over Leakey’s objections. At one time, it was said that all hominin fossils could be placed on one single, large table. Now there are far too many for that: Homo, Australopithecine, and many that defy simple categorization. Also, at one time human evolution was envisioned as a direct progression from the primitive to the sophisticated, but today it is accepted that rather than a “tree” our own evolution can best be imagined as a bush, with many relatives—and many of those relatives not on a direct path to the humans that walk the earth today.
Another controversy has been between those who favored an “Out-of-Africa” origin for humanity, and those who advanced what used to be called the multi-regional hypothesis. Since all living Homo sapiens sapiens are very, very closely related to each other—even more closely related than chimpanzees that live in different parts of Africa today—multiregionalism smacked a bit of the illogical and has largely fallen out of favor. The scholarly consensus that Böhme takes head on is that humans can clearly trace their ancestry back to Africa. Another point that should be made is that there are loud voices of white supremacist “race science” proponents outside of the scientific community who, without any substantiation, vehemently oppose the “Out-of-Africa” origin theory for racist political purposes, as underscored in Angela Saini’s brilliant recent book, Superior: The Return of Race Science. This is not to suggest that Böhme is racist or that her motives should be suspect—there is zero evidence that is the case—but the reader must be aware of the greater “noise” that circulates around this topic.
My most pointed criticism of Ancient Bones is that it is highly disorganized, meandering between science and polemic and unexpected later chapters that read like a college textbook on human evolution. It is often hard to know what to make of it. And it’s difficult for me to accept that there is a larger conspiracy in the paleoanthropological community to preserve “Out-of-Africa” against better evidence that few beyond Böhme and her allies have turned up. The author also makes a great deal of identifying singular features in both El Graeco and Danuvius that she insists must establish that her hypotheses are the only correct ones, but as those who are familiar with the work of noted paleoanthropologists John Hawks and Lee Berger are well aware, mosaics—primitive and more advanced characteristics occurring in the same hominin—are far more common than once suspected, and thus should give pause to those tempted to draw conclusions that actual evidence does not unambiguously support.
As noted earlier, I am not a paleontologist or even a scientist, and thus I am far from qualified to peer-review Böhme’s arguments or pronounce judgment on her work. But as a layman with some familiarity with the current scholarship, I remain unconvinced. She also left me uncomfortable with what appears to be a lack of respect for rival ideas and for those who fail to find concordance with her conclusions. More significantly, her book is poorly edited and too often lacks focus. Still, for those like myself who want to stay current with the latest twists-and-turns in the ever-developing story of human evolution, at least some portions of Ancient Bones might be worth a read.
[Note: I read an Advance Reader’s Copy (ARC) of this book obtained through an early reviewer’s group.]
Is another biography of George Washington really necessary? A Google search reveals some nine hundred already exist, not to mention more than five thousand journal articles that chronicle some portion of his life. But the answer turns out to be a resounding yes, and David O. Stewart makes that case magnificently with his latest work, George Washington: The Political Rise of America’s Founding Father, an extremely well-written, insightful, and surprisingly innovative contribution to the historiography.
Many years ago, I recall reading the classic study, Washington: The Indispensable Man, by James Thomas Flexner, which looks beyond his achievements to put emphasis on his most extraordinary contribution, defined not by what he did but what he deliberately did not do: seize power and rule as tyrant. This, of course, is no little thing, as seen in the pages of history from Caesar to Napoleon. When told he would resign his commission and surrender power to a civilian government, King George III—who no doubt would have had him hanged (or worse) had the war gone differently—famously declared that “If he does that, he will be the greatest man in the world.” Washington demonstrated that greatness again when he voluntarily—you might say eagerly—stepped down after his tenure as President of the United States to retire to private life. Indispensable he was: it is difficult to imagine the course of the American experiment had another served in his place in either of those pivotal roles.
But there is more to Washington than that, and some of it is less than admirable. Notably, there was Washington’s heroic fumble as a young Virginia officer leading colonial forces to warn away the French at what turned into the Battle of Jumonville Glen and helped to spark the French & Indian War. Brash, headstrong, arrogant, thin-skinned, and ever given to an unshakable certitude that his judgment was the sole correct perspective in every matter, the young Washington distinguished himself for his courage and his integrity while at the same time routinely clashing with authority figures, including former mentors that he frequently left exasperated by his demands for recognition.
Biographers tend to visit this period of his life and then fast-forward two decades ahead to the moment when the esteemed if austere middle-aged Washington showed up to the Continental Congress resplendent in his military uniform, the near-unanimous choice to lead the Revolutionary Army in the struggle against Britain. But how did he get here? In most studies, it is not clear. But this is where Stewart shines! The author, whose background is the law rather than academia—he was once a constitutional lawyer who clerked for Supreme Court Justice Lewis Powell, Jr.—has proved himself a brilliant historian in several fine works, including his groundbreaking reassessment of a key episode of the early post-Civil War era, Impeached: The Trial of President Andrew Johnson and the Fight for Lincoln’s Legacy. And in Madison’s Gift: Five Partnerships that Built America, Stewart’s careful research, analytical skills, and intuitive approach successfully resurrected portions of James Madison’s elusive personality that had been otherwise mostly lost to history.
This talent is on display here, as well, as Stewart adeptly examines and interprets Washington’s evolution from Jumonville Glen to Valley Forge. Washington’s own personality is something of a conundrum for biographers, as he can seem to be simultaneously both selfless and self-centered. The young Washington so frequently in turn infuriated and alienated peers and superiors alike that it may strike us as fully remarkable that this is the same individual who could later harness the talents and loyalty of both rival generals during the war and the outsize egos of fellow Founders as the new Republic took shape. Stewart demonstrates that Washington was the author of his own success in this arena, quietly in touch with his strengths and weaknesses while earning respect and cultivating goodwill over the years as he established himself as a key figure in the Commonwealth. Washington himself was not in this regard a changed man as much as he was a more mature man who taught himself to modify his demeanor and his behavior in the company of others for mutual advantage. This too, is no small thing.
The subtitle of this book—The Political Rise of America’s Founding Father—is thus hardly accidental, the latest contribution to a rapidly expanding genre focused upon politics and power, showcased in such works as Jon Meacham’s Thomas Jefferson: The Art of Power, and Robert Dallek’s Franklin D. Roosevelt: A Political Life. Collectively, these studies serve to underscore that politics is ever at the heart of leadership, as well as that great leaders are not born fully formed, but rather evolve and emerge. George Washington perhaps personifies the most salient example of this phenomenon.
The elephant in the room of any examination of Washington—or the other Virginia Founders who championed liberty and equality for that matter—is slavery. Like Jefferson and Madison and a host of others, Washington on various occasions decried the institution of enslaving human beings—while he himself held hundreds as chattel property. Washington is often credited with freeing the enslaved he held direct title to in his will, but that hardly absolves him of the sin of a lifetime of buying, selling, and maintaining an unpaid labor force for nothing less than his own personal gain, especially since he was aware of the moral blemish in doing so. Today’s apologists often caution that it is unfair to judge those who walked the earth in the late eighteenth century by our own contemporary standards, but the reality is that these were Enlightenment-era men that in their own words declared slavery abhorrent while—like Jefferson with his famous “wolf by the ear” cop-out—making excuses to justify participating in and perpetuating a cruel inhumanity that served their own economic self-interests. As biographer, Stewart’s strategy for this dimension of Washington’s life is to deal with it only sparingly in the course of the narrative, while devoting the second-to-last chapter to a frank and balanced discussion of the ambivalence that governed the thoughts and actions of the master of Mount Vernon. It is neither whitewash nor condemnation.
Stewart’s study is by no means hagiography, but the author clearly admires his subject. Washington gets a pass for his shortcomings at Jumonville, and he is hardly held to strict account for his role as an enslaver. Still, the result of Stewart’s research, analysis, and approach is the most readable and best single-volume account of Washington’s life to date. This is a significant contribution to the scholarship that I suspect will long be deemed required reading.
Conspicuous in their absence from my 1960s elementary education were African Americans and Native Americans. Enslaved blacks made an appearance in my textbooks, of course, but slavery as an institution was sketched out as little more than a vague and largely benign product of the times. Then there was a Civil War fought over white men’s sectional grievances; there were dates, and battles, and generals, winners and losers. There was Lincoln’s Emancipation Proclamation, then constitutional amendments that ended slavery and guaranteed equality. There was some bitterness but soon there was reconciliation, and we went on to finish building the transcontinental railroad. There were the obligatory walk-on cameos by Frederick Douglass and Harriet Tubman, and later George Washington Carver, who had something to do with peanuts. For Native Americans, the record was even worse. Our texts featured vignettes of Squanto, Pocahontas, Sacajawea, and Sitting Bull. The millions of Amerindians that once populated the country from coast to coast had been effectively erased.
Alan Taylor, Pulitzer Prize winning author and arguably the foremost living historian of early America, has devoted a lifetime to redressing those twin wrongs while restoring the nuanced complexity of our past that was utterly excised from the standard celebration of our national heritage that for so long dominated our historiography. In the process, in the eleven books he has published to date, he has also dramatically shifted the perspective and widened the lens from the familiar approach that more rigidly defines the boundaries of the geography and the established chapters in the history of the United States—a stunning collective achievement that reveals key peoples, critical elements, and greater themes often obscured by the traditional methodology.
I first encountered Taylor some years ago when I read his magnificent American Colonies: The Settling of North America, which restores the long overlooked multicultural and multinational participants who peopled the landscape, while at the same time enlarging the geographic scope beyond the English colonies that later comprised the United States to encompass the rest of the continent that was destined to become Canada and Mexico, as well as highlighting vital links to the West Indies. Later, in American Revolutions, Taylor identifies a series of social, economic and political revolutions of outsize significance over more than five decades that often go unnoticed in the shadows of the War of Independence, which receives all the attention.
Still, as Taylor underscores, it was the outcome of the latter struggle—in which white, former English colonists established a new nation—that was to have the most lasting and dire consequences for all those in their orbit who were not white, former English colonists, most especially blacks and Native Americans. The defeated British had previously drawn boundaries that served as a brake on westward expansion and left more of that vast territory as a home to the indigenous. That brake was now off. Some decades later, Britain was to abolish slavery throughout its empire, which no longer included its former colonies. Thus the legacy of the American Revolution was the tragic irony that a Republic established to champion liberty and equality for white men would ultimately be constructed upon the backs of blacks doomed to chattel slavery, as well as the banishment or extermination of Native Americans. This theme dominates much of Taylor’s work.
In his latest book, American Republics: A Continental History of the United States, 1783-1850, which roughly spans the period from the Peace of Paris to California statehood, Taylor further explores this grim theme in a brilliant analysis of how the principles of white supremacy—present at the creation—impacted the subsequent course of United States history. Now this is, of course, uncomfortable stuff for many Americans, who might cringe at that very notion amid cries of revisionism that insist contemporary models and morality are being appropriated and unfairly leveraged against the past. But terminology is less important than outcomes: non-whites were not only foreclosed from participating as citizens in the new Republic, but also from enjoying the life, liberty and pursuit of happiness allegedly granted to their white counterparts. At the same time, southern states where slavery thrived wielded outsize political power that frequently plotted the nation’s destiny. As in his other works, Taylor is a master of identifying unintended consequences, and there are more than a few to go around in the insightful, deeply analytical, and well-written narrative that follows.
These days, it is almost de rigueur for historians to decry the failure of the Founders to resolve the contradictions of permitting human chattel slavery to coexist within what was declared to be a Republic based upon freedom and equality. In almost the same breath, however, many in the field still champion the spirit of compromise that has marked the nation’s history. But if there is an original sin to underscore, it is less that slavery was allowed to endure than that it was codified within the very text of the Constitution of the United States by means of the infamous compromise that was the “three-fifths rule,” which for the purposes of representation permitted each state to count enslaved African Americans as three-fifths of a person, thus inflating the political power of each state based upon its enslaved population. This might have benefited all states equally, but since slavery was to rapidly decline and all but disappear above what would be drawn as the Mason-Dixon line, all the advantage flowed to the south, where some states eventually saw their enslaved populations outnumber their free white citizenry.
This was to prove dramatic, since the slave south claimed a disproportionate share of national political power when it came to advancing legislation or, for that matter, electing a president! Taylor notes that the disputed election of 1824 that went for decision to the House of Representatives would have been far less disputed without the three-fifths clause, since in that case John Quincy Adams would have led Andrew Jackson in the Electoral College 83 to 77 votes, instead of putting Jackson in the lead 99 to 84. [p253] When Jackson prevailed in the next election, it was the south that cemented his victory.
The scholarly consensus has established the centrality of slavery to the Civil War, but Taylor goes further, arguing that its significance extended long before secession: slavery was ever the central issue in American history, representing wealth, power, and political advantage. The revolutionary generation decried slavery on paper—slave masters Washington, Jefferson and Madison all pronounced it one form of abomination or another—but nevertheless failed to act against it, or even part with their own human property. Jefferson famously declared himself helpless, saying of the peculiar institution that “We have the wolf by the ear, and we can neither hold him, nor safely let him go,” but as slavery grew less profitable for Virginia in the upper south, Jefferson and his counterparts turned to breeding the enslaved for sale to the lower south, where the demand was great. Taylor points out that “In 1803 a male field hand sold for about $600 in South Carolina compared to $400 in Virginia: a $200 difference enticing to Virginia sellers and Carolina slave traders … Between 1790 and 1860, in one of the largest forced migrations in world history, slave traders and migrants herded over a million slaves from Virginia and Maryland to expand southern society …” [p159] Data and statistics may obscure it, but these were after all living, breathing, sentient human beings who were frequently subjected to great brutalities while enriching those who held them as chattel property.
Jefferson and others of his ilk imagined that slavery would somehow fall out of favor at some distant date, but optimistically kicking the can down the road to future generations proved a fraught strategy: nothing but civil war could ever have ended it. As Taylor notes:
Contrary to the wishful thinking of many Patriots, slavery did not wither away after the American Revolution. Instead, it became more profitable and entrenched as the South expanded westward. From 698,600 in 1790, the number of enslaved people soared to nearly 4 million by 1860, when they comprised a third of the South’s population … In 1860, the monetary value of enslaved people exceeded that of all the nation’s banks, factories, and railroads combined. Masters would never part with so much valuable human property without a fight. [p196]
As bad as it was for enslaved blacks, in the end Native Americans fared far worse. It has been estimated that up to 90% of Amerindians died as a result of exposure to foreign pathogens within a century of the Columbian Experience. The survivors faced a grim future competing for land and resources with rapacious settlers who were better armed and better organized. It may very well be that conflict between colonists and the indigenous was inevitable, but as Taylor emphasizes, the trajectory of the relationship became especially disastrous for the latter after British retreat essentially removed all constraints on territorial expansion.
The stated goal of the American government was peaceful coexistence that emphasized native assimilation to “white civilization.” The Cherokees who once inhabited present-day Georgia actually attempted that, transitioning from hunting and gathering to agriculture, living in wooden houses, learning English, creating a written language. Many practiced Christianity. Some of the wealthiest worked plantations with enslaved human property. It was all for naught. With the discovery of gold in the vicinity, the Cherokees were stripped of their lands in the Indian Removal Act of 1830, championed by President Andrew Jackson, and marched at bayonet point over several months some 1200 miles to the far west. Thousands died in what has been dubbed the “Trail of Tears,” certainly one of the most shameful episodes of United States history. Sadly, rather than an exception, the fate of the Cherokees proved to be indicative of what lay in store for the rest of the indigenous as the new nation grew and the hunger for land exploded.
That hunger, of course, also fueled the Mexican War, launched on a pretext in yet another shameful episode that resulted in an enormous land grab that saw a weaker neighbor forced to cede one-third of its former domains. It was the determination of southern states to transplant plantation-based slavery to these new territories—and the fierce resistance to that by “Free-Soilers” in Lincoln’s Republican Party—that lit the fuse of secession and the bloody Civil War that it spawned.
If there are faults to this fine book, one is that there is simply too much material to capably cover in less than four hundred pages, despite the talented pen and brilliant analytical skills of Alan Taylor. The author devoted an entire volume—The Civil War of 1812—to the events surrounding the War of 1812, a conflict also central to a subsequent effort, The Internal Enemy. This kind of emphasis on a particular event or specific theme is typical of Taylor’s work. In American Republics, he strays from that technique to attempt the kind of grand narrative survey seen from other chroniclers of the Republic, powering through decades of significance at sometimes dizzying speeds, no doubt a delight for some readers yet disappointing to others long accustomed to the author’s detailed focus on the more narrowly defined.
With characteristic perspicacity, Taylor identifies what other historians overlook, arguing in American Republics that the War of 1812 was only the most well-known struggle in a consequential if neglected era he calls the “Wars of the 1810s,” which also saw the British retreat northward, the Spanish forsake Florida, and the dispossession of Native Americans accelerate. [p148] That could be a volume in itself. Likewise, American culture and politics in the twelve years that separate Madison and Jackson are worthy of book-length treatment. There is so much more.
Another issue is balance—or a lack thereof. If the history of my childhood was written solely in the triumphs of white men, such accomplishments are wholly absent in American Republics, which reveals the long-suppressed saga of the once invisible victims of white supremacy. It’s a true story, an important story—but it’s not the only story. Surely there are some achievements of the Republic worthy of recognition here?
As the culture wars heat to volcanic temperatures, such omissions only add tinder to the flames of those dedicated to the whitewash that promotes heritage over history. Already the right has conjured an imaginary bugaboo in Critical Race Theory (CRT), with legislation in place or pending in a string of states that proscribes its teaching. These laws have nothing to do with Critical Race Theory, of course, but rather give cover to the dog whistles of those who would intimidate educators so they cannot teach the truth about slavery, about Reconstruction, about Civil Rights. These laws put grade-school teachers at risk of termination for incorporating factual elements of our past into their curriculum, effectively banning from the classroom the content of much of American Republics. This is very serious stuff: Alan Taylor is a distinguished professor at the University of Virginia, a state that saw the governor-elect recently ride to an unlikely victory astride a sort of anti-CRT Trojan Horse. Historians cannot afford any unforced errors in a game that scholars seem to be ceding to dogmatists. If the current trend continues, we may very well witness reprints of my childhood textbooks, with blacks and the indigenous once more consigned to the periphery.
I have read seven of Taylor’s books to date. Like the others, his most recent work represents a critical achievement for historical scholarship, as well as a powerful antidote to the propaganda that formerly tarnished studies of the American Experience. The United States was and remains a nation unique in the family of nations, replete with a fascinating history that is at once complicated, messy, and controversial. American history, at its most basic, is simply a story of how we got from then to now: it can only be properly understood and appreciated in the context of its entirety, warts and all. Anything less is a disservice to the discipline as well as to the audience. To that end, American Republics is required reading.
Note: I have reviewed other works by Alan Taylor here:
In what has to be the most shameful decision rendered in the long and otherwise distinguished career of Justice Oliver Wendell Holmes, the Supreme Court ruled in 1927 in Buck v. Bell to uphold a compulsory sterilization law in Virginia. The case centered on eighteen-year-old Carrie Buck, confined to the Virginia State Colony for Epileptics and Feebleminded, and Holmes wrote the majority opinion in the near-unanimous decision, famously concluding that “three generations of imbeciles are enough.”
Similar laws prevailed in some thirty-two states, resulting in the forced sterilization of more than 60,000 Americans. Had Carrie lived in Massachusetts, she would have avoided this fate, but she likely would have been condemned to the Belchertown State School for the Feeble-Minded. Like similar institutions of its era, Belchertown had its foundation in the eugenics, racism, and Social Darwinism of the time, which held that “defectives” of low moral character threatened the very health of the population by breeding others of their kind, stoking fears that a kind of contagious degeneracy would permanently damage the otherwise worthy inhabitants of the nation. I have written elsewhere of the horror show of inhumane conditions and patient abuse at the Belchertown State School, which did not finally close its doors until 1992.
Sterilization was only one chilling byproduct of “eugenics,” a term coined by Francis Galton, a cousin of Charles Darwin; Galton’s misunderstanding of the principles of Darwinian evolution led him to champion scientific racism. Eugenics was also the driving force behind the 1924 immigration law that dramatically reduced the number of Jews, Italians, and East Europeans admitted to the United States. White supremacy not only consigned blacks and other people of color to the ranks of the “less developed” races but specifically exalted those of northern and central European origin as the best and the brightest. This was all pseudoscience, of course, but it was widely accepted and “respectable” in its day.
Then along came Hitler and the Holocaust, and more than six million Jews and other “undesirables” were systematically murdered in the name of racial purity. Eugenics was respectable no more. Most of us born in the decades that followed the almost unfathomable horror of that Nazi-sponsored genocide may have assumed that race science was finally discredited and had disappeared forever, relegated to a blood-spattered dustbin of history. But, as Angela Saini reveals in her well-written, deeply researched, and sometimes startling book, Superior: The Return of Race Science, scientific racism never really went extinct; it has returned in our day with a vengeance, fueling calls on the right for anti-immigration legislation.
Saini, a science journalist, broadcaster, and author with a pair of master’s degrees, may be uniquely qualified to tell this story. Born in London to Indian parents, she relates how, in a world seemingly obsessed with racial classification, her background and brown complexion defy easy categorization; some may consider her Indian, or Asian, or even black. But of course in reality she could not be more British, even if for many her skin color sets her apart. The UK’s legacy of empire and Kipling’s “white man’s burden” still loom large.
But Superior is not a screed, and it is not about Saini; it is about how mistaken notions of race and the pseudoscience of scientific racism have not only persisted but are rapidly gaining ground with a new audience in a new era. The author conducted comprehensive research into the origins of eugenics, but more significantly she traces how the ideology of race science that fueled National Socialism and begat Auschwitz and Birkenau endured post-Nuremberg, quietly but no less adamantly, cloaked in the less fiery rhetoric of pseudoscientific journals grasping at the periphery of legitimacy. Moreover, a modern revolution in paleogenetics and DNA research that should firmly refute such dangerous musings has instead been co-opted by a new generation of acolytes of scientific racism, lending a false sense of authenticity to dangerous political tendencies on the right that long simmered and have now burst forth in the public arena.
Whatever some may believe, science has long established that race is, for all intents and purposes, a myth, a social construct that conveys no meaningful biological information about any given population. Regardless of superficial characteristics, all living humans—designated Homo sapiens sapiens—are members of the same closely related population, biologically alike by every critical metric. In fact, various groups of chimpanzees in Central Africa demonstrate greater genetic diversity than all humans across the globe today. Modern humans evolved from a common ancestor in Africa, and thus all of humanity is out of Africa. It is likely that all humans once had dark skin, and that lighter skin, what we would term “white” or Caucasian, developed later as populations moved north and melanin—a pigment located in the outer skin layer called the epidermis—was reduced as an adaptation to relatively weak solar radiation in far northern latitudes. The latest scholarship reveals that Europeans only developed their fairer complexion as recently as 8,500 years ago!
The deepest and most glaring flaw in the race science that was foundational to Nazism is that it is often a lack of diversity that results in a less healthy population. This is apparent not only in the hemophilia that plagued the closely related royal houses of Europe, but on a more macro scale in genetic conditions more common to certain ethnic groups, such as sickle cell disease among those of African heritage and Tay-Sachs disease among Ashkenazi Jews.
Counterintuitively, modern proponents of race science cherry-pick DNA data to promote superiority for whites, concomitantly assigning a lesser status to people of color, and these concepts are then repackaged to champion policies that limit immigration from certain parts of the world. Once anathema to all but those on the very fringes of the political spectrum, this dangerous rebirth of genetic pseudoscience is now given voice on right-wing media. Worse perhaps, the tendency of mainstream media to promote fairness in what has come to be dubbed “bothsiderism” sometimes offers an undeserved platform to those spinning racist dogma in the guise of scientific studies. Of course, social media has now surpassed television as a messaging vehicle, and it is far better suited to spreading misinformation, especially in an era given to a mistrust of expertise; it grants the unsupported a seat at the same table as credible, fact-based reality, urging the audience to do their own research and come to their own conclusions.
The United States was collectively shaken in 2017 when white supremacists wielding tiki torches marched at Charlottesville chanting “Jews will not replace us,” and shaken once more when then-president Donald Trump subsequently asserted that there “were very fine people, on both sides.” But there was far less outrage the following year when Trump both sounded a dog whistle and startled lawmakers as he wondered aloud why we should admit more immigrants from Haiti and “shithole countries” in Africa instead of from places like Norway. (Unanswered, of course, is why a person would want to abandon the arguably higher quality of life in Norway to come to the U.S. …) But the volume on such dog whistles has been turned up alarmingly of late by popular Fox News host Tucker Carlson, who, in between fear-mongering messaging that casts the Black Lives Matter (BLM) movement and Critical Race Theory (CRT) as Marxist conspiracies that threaten the American way of life, openly promotes the paranoid alt-right “Great Replacement” theory, a staple of the white supremacist canon, declaring the Biden administration actively engaged in trying “to change the racial mix of the country … to reduce the political power of people whose ancestors lived here, and dramatically increase the proportion of Americans newly-arrived from the third world.” Translation: people of color are trying to supplant white people. Carlson doesn’t cite race science, but he did recently allow comments by his guest, the racist extremist social scientist Charles Murray, to go unchallenged: that “the cognitive demands” of some occupations mean “a whole lot of more white people qualify than Black people.” Superior was published in 2019 but is chillingly prescient about the dangerous trajectory of both racism and race science on the right.
There is a lot of material between the covers of this book, but because Saini writes so well and renders the more arcane matters in language comprehensible to a wide audience, it is not a difficult read. Throughout, the research is impeccable and the analysis spot-on. Still, there are moments when Saini strays a bit, at one point seeming to speculate whether we should hold back on paleogenetic research lest the data be further perverted by proponents of scientific racism. That is, of course, the wrong approach: the best weapon against pseudoscience remains science itself. Even so, the warning bells she sounds must be heeded. The twin threats of racism and the return of race science to the mainstream are indeed clear and present dangers that must be confronted and combated at every turn. The author’s message is clear and perhaps more relevant now than at any time since the 1930s, another era when hate and racism informed an angry brand of populism that claimed legitimacy through race science. We all know how that ended.
I have written of the Belchertown State School here:
When identifying the “greatest presidents,” historians consistently rank Washington and Lincoln in the top two slots; the third spot almost always goes to Franklin Delano Roosevelt, who served as chief executive longer than anyone before or since and shepherded the nation through the twin existential crises of economic depression and world war. FDR left an indelible legacy upon America that echoes loudly from his day to our own. Lionized by today’s left, especially its progressive wing, far more than he was in his own time, he remains vilified by the right, then and now; today’s right, which basks in the extreme and often eschews common sense, conflates Social Security with socialism and frequently casts him as villain. Yet his memory, be it applauded or heckled, is of an iconic figure who forever changed the course of American history, for good or ill.
FDR has been widely chronicled, by such luminaries as James MacGregor Burns, William Leuchtenburg, Doris Kearns Goodwin, Jay Winik, Geoffrey C. Ward, and a host of others, including presidential biographer Robert Dallek, winner of the Bancroft Prize for Franklin D. Roosevelt and American Foreign Policy, 1932–1945. Dallek now revisits his subject with Franklin D. Roosevelt: A Political Life, the latest contribution to a rapidly expanding genre focused upon politics and power, showcased in such works as Jon Meacham’s Thomas Jefferson: The Art of Power, and most recently, in George Washington: The Political Rise of America’s Founding Father, by David O. Stewart.
A rough sketch of FDR’s life is well known. Born to wealth and sheltered by privilege, at school he had difficulty forming friendships with peers. He practiced law for a time, but his passion turned to politics, which seemed ideally suited to the tall, handsome, and gregarious Franklin. To this end, he modeled himself on his famous cousin, President Theodore Roosevelt. He married T.R.’s favorite niece, Eleanor, and like Theodore eventually became Assistant Secretary of the Navy. Unsuccessful as a vice-presidential candidate in the 1920 election, his political future still seemed assured until he was struck down by polio. His legs were paralyzed, but not his ambition. He never walked again, but equipped with heavy leg braces and impressive upper-body strength, he perfected a swinging gait, leaning into an aide, that propelled him forward and served, at least for brief periods, as a reasonable facsimile of walking. He made a remarkable political comeback, winning election as governor of New York in 1928, and won national attention for his public relief efforts, which proved essential to his even more remarkable bid for the White House four years later. Reimagining government to cope with economic devastation never before seen in the United States, then reimagining it again to construct a vast war machine to counter Hitler and Tojo, he bucked tradition to win reelection three times, then stunned the nation with his death by cerebral hemorrhage only a few months into the fourth term of one of the most consequential presidencies in American history.
That rough sketch translates into mountains of material for any biographer, so narrowing the lens to FDR’s “political life” proves to be a sound strategy that underscores the route to his many achievements as well as the sometimes-shameful ways he juggled competing demands and realities. Among historians, even his most ardent admirers tend to question his judgment in the run-up to the disaster at Pearl Harbor, as well as his moral compass in exiling Japanese Americans to confinement camps, but as Dallek reveals again and again in this finely wrought study, these may simply be the most familiar instances of his shortcomings. If FDR is often recalled as smart and heroic—as he indeed deserves to be—there are yet plenty of salient examples where he proves himself to be neither. Eleanor Roosevelt once famously quipped that John F. Kennedy should show a little less profile and a little more courage, but there were certainly times this advice must have been just as applicable to her husband. What is clear is that while he was genuinely a compassionate man capable of great empathy, FDR was at the same time at his very core driven by an almost limitless ambition that, reinforced by a conviction that he was always in the right, spawned an ever-evolving strategy to prevail that sometimes blurred the boundaries of the greater good he sought to impose. Shrewd, disciplined, and gifted with finely tuned political instincts, he knew how to balance demands, ideals, and realities to shape outcomes favorable to his goals. He was a man who knew how to wield power to deliver his vision of America, and the truth is, he could be quite ruthless in that pursuit. To his credit, much like Lincoln and Washington before him, his lasting achievements have tended to paper over flaws that might otherwise cling with greater prominence to his legacy.
I read portions of this volume during the 2020 election cycle and its aftermath, especially relevant given that the new President, Joe Biden—born just days after the Battle of Guadalcanal during FDR’s third term—had an oversize portrait of Roosevelt prominently hung in the Oval Office across from the Resolute Desk. But even more significantly, Biden the candidate was pilloried by progressives in the run-up to November as far too centrist, as a man who had abandoned the vision of Franklin Roosevelt. But if the left correctly recalls FDR as the most liberal president in American history, it also badly misremembers Roosevelt the man, who in his day very deftly navigated the politics of the center lane.
Dallek brilliantly restores for us the authentic FDR of his own era, unclouded by the mists of time that have begotten both a greater belligerence from the right and a distorted worship from the left. This context is critical: when FDR first won election in 1932, the nation was reeling from its greatest crisis since the Civil War, the economy in a tailspin and his predecessor, Herbert Hoover, unwilling to use the power of the federal government to intervene while nearly a quarter of the nation’s workforce was unemployed, at a time when a social safety net was nearly nonexistent. People literally starved to death in the United States of America! This provoked radical tugs to the extreme left and extreme right. There was loud speculation that the Republic would not survive, with cries by some for Soviet-style communism and by others for a strongman akin to those spearheading an emerging fascism in Europe. It was into this arena that FDR was thrust. Beyond fringe calls for revolution or reaction, and despite his party’s congressional majority, Roosevelt’s greatest challenge after stabilizing the state was, like Lincoln’s before him, contending with the forces to his left and right within his own party. This, as Dallek details in a well-written, fast-moving narrative, was to be characteristic of much of his long tenure.
In spite of an alphabet soup of New Deal programs that sought to rescue both the sagging economy and the struggling citizen, for the liberal wing of the Democratic Party FDR never went far enough. For conservative Democrats, on the other hand, the power of the state was growing too large and there was far too much interference with market forces. But, as Dallek stresses repeatedly, Roosevelt struggled the most with forces on the left, especially populist demagogues like Huey Long and the antisemitic radio host Father Coughlin. And with the outbreak of World War II, the left was unforgiving when FDR seemed to abandon his commitment to the New Deal to focus on combating Germany and Japan. Today’s democratic socialists may want to claim him as their own, but FDR was no socialist, seeking to reform capitalism rather than replace it, earning Coughlin’s eventual enmity for being too friendly with bankers. At the same time, Republicans obstructed the president at every turn, calling him a would-be dictator. And most wealthy Americans branded him a traitor to his class. There was also an increasingly hostile Supreme Court, which was to ride roughshod over some of FDR’s most cherished programs, including the National Industrial Recovery Act (NIRA), just one of several struck down as unconstitutional. We tend to recall the successes such as the Social Security Act that indelibly define FDR’s legacy, yet he endured many losses as well. But while Roosevelt did not win every battle, as Dallek details, only a leader with FDR’s political acumen could have succeeded so often while tackling so much amid a rising chorus of opposition on all sides during such a crisis-driven presidency. If the left in America tends to fail so frequently, it could be because it often fails to grasp the politics of the possible. In this realm, there has perhaps been no greater genius in the White House than Franklin Delano Roosevelt.
Fault can be found in Dallek’s book. For one thing, in the body of the narrative he too often name-drops other notable Roosevelt chroniclers such as Doris Kearns Goodwin and William Leuchtenburg, which feels awkward given that the author is not some unknown seeking to establish credibility, but Robert Dallek himself, distinguished presidential biographer! And, less a flaw than a weakness, despite his skill with a pen, in these chapters the reader carefully observes FDR but never really gets to know him intimately. I have encountered this in other Dallek works. Juxtapose, for instance, the Lyndon Johnson biographies of Robert Caro with those by Dallek: Caro’s LBJ colorfully leaps off the page as the flesh-and-blood, menacing figure who grasps you by the lapels and bends you to his will, while Dallek’s LBJ remains off in the distance. Caro has that gift; Dallek does not.
Still, this is a fine book that marks a significant contribution to the literature. FDR was indeed a giant; there has never been anyone like him in the White House, nor are we likely to ever see a rival. Dallek succeeds in placing Roosevelt firmly in the context of his time, warts and all, so that we can better appreciate who he was and how he should be remembered.
When I first met Ellen Oppenheimer she was in her eighties, a spry woman with a contagious smile and a mischievous teenager’s twinkle in her eyes, standing beside her now-late husband Marty during a house call I made on behalf of the computer services company that I own and operate. But many, many years before that, when she was a very young child, she and her family fled the Nazis, literally one step ahead of a brutal killing machine that claimed too many of her relations, marked for death only because of certain strands of DNA that overlap with centuries of irrational hatred directed at Europeans of Jewish descent.
On subsequent visits, we got to know each other better, and she shared with me bits and pieces of the story of how her family cheated death and made it to America. In turn, I told her of my love of history—and the fact that while I was not raised as a Jew, we had in common some of those same Ashkenazi strands of DNA through my great-grandfather, who fled Russia on the heels of a pogrom. I also mentioned my book blog, and she asked that I bookmark the page on the browser of the custom computer they had purchased from me.
It was on a later house call, as Marty’s health declined, that I detected shadows intruding on Ellen’s characteristic optimism, and the deep concern for him looming behind the stoic face she wore. But all at once her eyes brightened when she announced with visible pride that she had published a book called Flight to Freedom about her childhood escape from the Nazis. She urged me to buy it and read it, and she genuinely wanted my opinion of her work.
I did not do so.
But she persisted. On later visits, the urging evolved to something of an admonishment. When I would arrive, she always greeted me with a big hug, as if she was more a grandmother than a client, but she was clearly disappointed that I—a book reviewer no less—had not yet read her book. For my part, my resistance was deliberate. I had too many memories of good friends who pestered me to see their bands playing at a local pub, only to discover that they were terrible. I liked Ellen: what if her book was awful? I did not want to take the chance.
Then, during the pandemic, I saw Marty’s obituary. Covid had taken him. Ellen and Marty had moved out of state, but I still had Ellen’s email, so I reached out with condolences. We both had much to say to each other, but in the end, she asked once more: “Have you read my book yet?” So I broke down and ordered it on Amazon, then took it with me on a week-long birthday getaway to an Airbnb.
Flight to Freedom is a thin volume with a simple all-white cover stamped with only title and author. I brought it with me to a comfortable chair in a great room lined with windows that gave breathtaking views to waves lapping the shore in East Lyme, CT. I popped the cap on a cold IPA and cracked the cover of Ellen’s book. Once I began, all my earlier reluctance slipped away: I simply could not stop reading it.
In an extremely well-written account—especially for someone with virtually no literary background—the author transports the reader back to a time when an educated, affluent, middle-class German family was overnight set upon a road toward potential extermination in the wake of the Nazi rise to power. Few, of course, believed in 1933, when three-year-old Ellen’s father Adolf was seized on a pretext and jailed, that a barbarity of such enormity could ever come to pass. But Grete, Ellen’s mother—the true hero of Flight to Freedom—was far more prescient. In a compelling narrative that never slows its pace, we follow the brilliant and intrepid Grete as she more than once engineers Adolf’s release from captivity and serves as the indefatigable engine of her family’s escape from the Nazis, first to Paris, and then, as war erupted, to Marseille and Oran and finally Casablanca—the iconic refugee route etched on the map splashed across the screen in the classic film featuring Bogart and Bergman.
The last leg was a Portuguese boat that finally delivered them to safety on Staten Island in 1942. In the passages of Flight to Freedom that describe that voyage, the author cannot disguise her disgust at the contempt displayed shipboard for the less fortunate by those who had purchased more expensive berths, when all were Jews who would of course have found a kind of hideous equality in Germany’s death camps. This was, tragically, the fate of much of Ellen’s extended family: those who did not heed Grete’s warnings, who simply could not believe that such horrors could lurk in their future. Throughout the tale, there is a kind of nuance and complexity one might expect from a trained academic or a gifted novelist that is instead delightfully on display from a novice author. Her voice and her pen are both strong from start to finish in this powerful and stirring work.
As a reviewer, can I find some flaws? Of course I can. In the narrative, Ellen treats her childhood character simply as a bystander; the story is instead told primarily through Grete’s eyes. As such, the omniscient point of view often serves as vehicle to the chronicle, with observations and emotions the author could not really know for certain. And sometimes, the point of view shifts awkwardly. But these are quibbles. This is a fine book on so many levels, and the author deserves much praise for telling this story and telling it so well!
A few days after I read Flight to Freedom, I dug into my client database to come up with Ellen’s phone number and rang her up. She was, as I anticipated, thrilled that I had finally read the book, and naturally welcomed my positive feedback. After we chatted for a while, I confessed that my only complaint, if it could be called a complaint, was that the child character of Ellen stood mute much of the time in the narrative, and I wondered why she did not relate more of her feelings as a key actor in the drama. With a firm voice she told me (and I am paraphrasing here): “Because it is my mother’s story. It was she who saved our lives. She was the hero for our family.”
I think Grete would be proud of her little girl for telling the story this way, so many decades later. And I’m proud to know Ellen, who shared it so beautifully. Buy this book. Read it. It is a story you need to experience, as well.
I must admit that I knew nothing of the apparently widespread practice of “couchsurfing” before I read Stephan Orth’s quirky, sometimes comic, and utterly entertaining travelogue, Behind Putin’s Curtain: Friendships and Misadventures Inside Russia. For the uninitiated, couchsurfing is a global community said to comprise more than 14 million members in over 200,000 cities, spanning virtually every country on the map. The purpose is to provide free if bare-bones lodging for travelers in exchange for forming new friendships and spawning new adventures. The term couchsurfing is an apt one, since the visitor frequently does bed down on a couch, although accommodations range from actual beds with sheets and pillowcases to blankets strewn on a kitchen floor—or, as Orth discovers to his amusement, a cot in a bathroom, just across from the toilet! Obviously, if your idea of a good time is a $2000/week Airbnb with a memory foam mattress and a breathtaking view, this is not for you, but if you are scraping together your loose change and want to see the world from the bottom up, couchsurfing offers an unusual alternative that will instantly plug you into the local culture by pairing you with an authentic member of the community. Of course, authentic does not necessarily translate into typical. More on that later.
Orth, an acclaimed journalist from Germany, is no novice to couchsurfing, but rather a practiced aficionado, who has not only long relied upon it as a travel mechanism but has upped the ante by doing so in distant and out-of-the-ordinary spots like Iran, Saudi Arabia, and China, the subjects of his several best-selling books. This time he gives it a go in Russia: from Grozny in the North Caucasus, on to Volgograd and Saint Petersburg, then to Novosibirsk and the Altai Republic in Siberia, and finally Yakutsk and Vladivostok in the Far East. (Full disclosure: I never knew Yakutsk existed other than as a strategic corner of the board in the game of Risk.) All the while Orth proves a keen, non-judgmental observer of peoples and customs who navigates the mundane, the hazardous, and the zany with an enthusiasm instantly contagious to the reader. He’s a fine writer, with a style underscored by impeccable timing, comedic and otherwise, and passages often punctuated with wit and sometimes wicked irony. You can imagine him penning the narrative impatiently, eager to work through one paragraph to the next so he can detail another encounter, express another anecdote, or simply mock his circumstances once more, all while wearing a twinkle in his eye and a wry twist to his lips.
Couchsurfing may be routine for the author, but he wisely assumes this is not the case for his audience, so he introduces this fascinating milieu by detailing the process of booking a room. The very first one he describes turns out to be a hilarious online race down various rabbit holes over a sequence of seventy-nine web pages where his utterly eccentric eventual host peppers him with bizarre, even existential observations, and challenges potential guests to fill in various blanks while warning them “that he follows the principle of ‘rational egoism’” and “doesn’t have ten dwarves cleaning up after guests.” [p7] Orth, unintimidated, responds with a wiseass retort and wins the invitation.
Perhaps the most delightful portions of this book are Orth’s profiles of his various hosts, who tend to run the full spectrum from the odd to the peculiar. I say this absent any negative connotation that might otherwise be implied. After all, Einstein and Lincoln were both peculiar fellows. I only mean that the reader, eager to get a taste of local culture, should not mistake Orth’s bunkmates for typical representatives of their respective communities. This makes sense, of course, since regardless of nationality the average person is unlikely to welcome complete strangers into their homes as overnight guests for free. That said, most of his hosts come off as fascinating if unconventional folks you might love to hang out with, at least for a time. And they put as much trust in the author as he puts in them. One couple even briefly leaves Orth to babysit their toddler. Another host turns over the keys of his private dacha and leaves him unattended with his dog.
Of course, the self-deprecating Orth, who seems equally gifted as patient listener and engaging raconteur, could very well be the ideal guest in these circumstances. At the same time, he could also very well be a magnet for the outrageous and the bizarre, as witnessed by the madcap week-long car trip through Siberia he ends up taking with a wild and crazy chick named Nadya. The trip begins when they meet and bond over lamb soup and a spirited debate as to the best Queen album, survives a rental car catastrophe on a remote roadway, and winds up with the two of them horseback riding on the steppe. Throughout, with only a single exception, the two disagree about … well … absolutely everything, but still manage to have a good time. If you don’t literally laugh out loud while reading through this long episode, you should be banned for life from using the LOL emoji.
You would think that travel via couchsurfing could very well be dangerous—perhaps less for Orth, who is well over six feet tall and a veteran couchsurfer—but certainly for young, attractive women bedding down in unknown environs. But it turns out that such incidents, while not unknown, are very, very rare. The couchsurfing community is self-policing: guests and hosts rely on ratings and reviews not unlike those on Airbnb, which tends to minimize if not entirely eliminate creeps and psychos. Still, while 14 million people cannot be wrong, it’s not for everyone. Which leads me to note that the only fault I can find with this outstanding work is its title, Behind Putin’s Curtain, since it has little to do with Putin or the lives led by ordinary Russians: certainly the peeps that Orth runs with are anything but ordinary or typical! I have seen this book published elsewhere simply as Couchsurfing in Russia, which I think suits it far better. Other than that quibble, this is one of the best travel books that I have ever read, and I highly recommend it. And while I might be a little too far along in years to start experimenting with couchsurfing, I admire Orth’s spirit and I’m eager to read more of his adventures going forward.
[Note: the edition of this book that I read was an ARC (Advance Reader’s Copy), as part of an early reviewer’s program.]
About five years ago, I read what I still consider to be the finest travel and adventure book I have ever come across, On the Trail of Genghis Khan: An Epic Journey Through the Land of the Nomads, by Tim Cope, a remarkable tale of an intrepid young Australian who in 2004 set out on a three-year mostly solo trek on horseback across the Eurasian steppe from Mongolia to Hungary—some 10,000 kilometers (about 6,200 miles)—roughly retracing routes followed by Genghis Khan and his steppe warriors. An extraordinary individual, Cope refused to carry a firearm, despite warnings of the predators, animal and human, that might menace an untested foreigner alone on the vast and often perilous steppe corridor, relying instead on his instincts, personality, and determination to succeed, regardless of the odds. Oh, and those odds seem further stacked against him because, despite his outsize ambition, he is quite an inexperienced horseman—in fact, his only previous attempt at riding as a child left him with a broken arm! Nevertheless, his only companions for the bulk of the journey ahead would be three horses—and a dog named Tigon foisted upon him against his will that would become his best friend.
My 2016 review of On the Trail of Genghis Khan—which Cope featured on his website for a time—sparked an email correspondence between us, and shortly after publication he sent me an inscribed copy of his latest work, Tim & Tigon, stamped with Tigon’s footprints. I’m always a little nervous in these circumstances: what if the new book falls short? As it turned out, such concerns were misplaced; I enjoyed it so much I bought another copy to give as a gift!
In Kazakhstan, early in his journey, a herder named Aset connived to shift custody of a scrawny six-month-old puppy to Cope, insisting it would serve both as badly needed company during long periods of isolation as well as an ally to warn against wolves. The dog, a short-haired breed of hound known as a tazi, was named Tigon, which translates into something like “fast wind.” Tim was less than receptive, but Aset was persuasive: “In our country dogs choose their owners. Tigon is yours.” [p89] That initial grudging acceptance was to develop into a critical bond that was strengthened again and again during the many challenges that lay ahead. In fact, Tim’s connection with Tigon came to represent the author’s single most significant relationship in the course of this epic trek. Hence the title of this book.
Tim & Tigon defies simple categorization. On one level, it is a compact re-telling of On the Trail of Genghis Khan, but it’s not simply an abridged version of the earlier book. Styled as a Young Adult (YA) work, it has appeal to a much broader audience. And while it might be tempting to brand it as some kind of heartwarming boy and his dog tale, it is marked by a much greater complexity. Finally, as with the first book, it is bound to frustrate any librarian looking to shelve it properly: Is it memoir? Is it travel? Is it adventure? Is it survival? Is it a book about animals? It turns out to be about all of these and more.
As the title suggests, the emphasis this time falls upon the unique connection that develops between a once-reluctant Tim and the dog that becomes nothing less than his full partner in the struggle to survive thousands of miles of often-hostile terrain marked by extremes of heat and cold, conditions both difficult and dangerous, and numerous obstacles. But despite the top billing, neither Tim nor Tigon is the main character here. Instead, as the narrative comes to reveal again and again, the true stars of this magnificent odyssey are the land and its peoples, a sometimes-forbidding landscape that hosts remarkably resilient, enterprising, and surprisingly optimistic folks—clans, families and individuals that are ever undaunted by highly challenging lifeways that have their roots in centuries-old customs.
Stalin effectively strangled their traditional nomadic ways in the former Soviet Union by enforcing borders that were unknown to their ancestors, but he never crushed their collective spirit. And long after the U.S.S.R. went out of business, these nomads still thrive, their orbits perhaps more circumscribed, their horses and camels supplemented—if not supplanted—by jeeps and motorbikes. They still make their homes in portable tents known as yurts, although these days many sport TV sets served by satellite and powered by generators. The overwhelming majority welcome the author into their humble camps, often with unexpected enthusiasm and outsize hospitality, generously offering him food and shelter and tending to his animals, even as many are themselves scraping by in conditions that can best be described as hardscrabble. The shared empathy between Cope and his hosts is marvelously palpable throughout the narrative, and it is this authenticity that distinguishes his work. It is clear that Tim is a great listener, and despite how alien he must have appeared upon arrival in these remote camps, he quickly establishes rapport with men, women, children, clan elders—the old and the young—and remarkably repeats this feat in Mongolia, in Kazakhstan, in Russia, and beyond. This turns out to be his finest achievement: his talents with a pen are evident, to be sure, but the story he relates would hardly be as impressive if not for that element.
When Tim’s amazing journey across the steppe ended in Hungary in 2007, joy mingled with a certain melancholy at the realization that he would have to leave Tigon behind when he returned home. But the obstacles of an out-of-reach price tag and a mandatory quarantine were eventually overcome, and a little more than a year later, Tigon joined Tim in Australia. Tigon went on to sire many puppies and lived to a ripe old age before, tragically, the dog that once braved perils large and small on the harsh landscapes of the Eurasian steppe fell before the wheels of a speeding car on the Australian macadam. Tim was devastated by his loss, so this book is also, of course, a tribute to Tigon. My signed copy is inscribed with the Kazakh saying that served as a kind of ongoing guidepost to their trek together: “Trust in fate … but always tie up the camel.” That made me smile, but that smile was tinged with sadness as I gazed upon Tigon’s footprint stamped just below it. Tigon is gone, but he left an indelible mark not only on Tim, who perhaps still grieves for him, but also upon every reader, young and old, who is touched by his story.
Some would argue that the precise moment that marked the beginning of the eventual dissolution of the Soviet Union was February 20, 1988, when the regional soviet governing the Nagorno-Karabakh Oblast—an autonomous region of mostly ethnic Armenians within the Soviet Republic of Azerbaijan—voted to redraw the maps and attach Nagorno-Karabakh to the Soviet Republic of Armenia. Thus began a long, bloody, and yet unresolved conflict in the Caucasus that has ravaged once proud cities and claimed many thousands of lives of combatants and civilians alike. The U.S.S.R. went out of business on December 25, 1991, about midway through what has been dubbed the First Nagorno-Karabakh War, which ended on May 12, 1994, an Armenian victory that established de facto—if internationally unrecognized—independence for the Republic of Artsakh (also known as the Nagorno-Karabakh Republic), but left much unsettled. Smoldering grievances that remained would come to spark future hostilities.
That day came last fall, when the long uneasy stalemate ended suddenly with an Azerbaijani offensive in the short-lived 2020 Nagorno-Karabakh War that had ruinous consequences for the Armenian side. Few Americans have ever heard of Nagorno-Karabakh, but I was far better informed because when the war broke out I happened to be reading The Caucasus: An Introduction, by Thomas De Waal, a well-written, insightful, and—as it turns out—powerfully relevant book that in its careful analysis of this particular region raises troubling questions about human behavior in similar socio-political environments elsewhere.
What is the Caucasus? A region best described as a corridor between the Black Sea on one side and the Caspian Sea on the other, bounded on the south by Turkey and Iran, and on the north by Russia and the Greater Caucasus mountain range that has long been seen as the natural border between Eastern Europe and Western Asia. Above those mountains in southern Russia is what is commonly referred to as the North Caucasus, which includes Dagestan and Chechnya. Beneath them lies Transcaucasia, comprising the three tiny nations of Armenia, Azerbaijan, and Georgia, whose modern history began with the collapse of the Soviet Union and which are the focus of De Waal’s fascinating study. The history of the Caucasus is the story of peoples dominated by the great powers beyond their borders, and despite independence this remains true to this day: Russia invaded Georgia in 2008 to support separatist enclaves in Abkhazia and South Ossetia, in the first European war of the twenty-first century; Turkey provided military support to Azerbaijan in the 2020 Nagorno-Karabakh War.
At this point, some readers of this review will pause, intimidated by exotic place names in an unfamiliar geography. Fortunately, De Waal makes that part easy with a series of outstanding maps that puts the past and the present into appropriate context. At the same time, the author eases our journey through an often-uncertain terrain by applying a talented pen to a dense, but highly readable narrative that assumes no prior knowledge of the Caucasus. At first glance, this work has the look and feel of a textbook of sorts, but because De Waal has such a fine-tuned sense of the lands and the peoples he chronicles, there are times when the reader feels as if a skilled travel writer was escorting them through history and then delivering them to the brink of tomorrow. Throughout, breakout boxes lend a captivating sense of intimacy to places and events that after all host human beings who like their counterparts in other troubled regions live, laugh, and sometimes tragically perish because of their proximity to armed conflict that typically has little to do with them personally.
De Waal proves himself a strong researcher, as well as an excellent observer highly gifted with an analytical acumen that not only carefully scrutinizes the complexity of a region bordered by potentially menacing great powers, and pregnant with territorial disputes, historic enmities, and religious division, but identifies the tolerance and common ground in shared cultures enjoyed by its ordinary inhabitants if left to their own devices. More than once, the author bemoans the division driven by elites on all sides of competing causes that have swept up the common folk who have lived peacefully side-by-side for generations, igniting passions that led to brutality and even massacre. This is a tragic tale we have seen replayed elsewhere, with escalation to genocide among former neighbors in what was once Yugoslavia, for instance, and also in Rwanda. For all the bloodletting, it has not risen to that level in the Caucasus, but unfortunately spots like Nagorno-Karabakh have all the ingredients for some future catastrophe if wiser heads do not prevail.
I picked up this book quite randomly last summer en route from a Vermont Airbnb on my first visit to a brick-and-mortar bookstore since the start of the pandemic. A rare positive from quarantine has been a good deal of time to read and reflect. I am grateful that The Caucasus: An Introduction was in the fat stack of books that I consumed in that period. Place names and details are certain to fade, but I will long remember the greater themes De Waal explored here. If you are curious about the world, I would definitely recommend this book to you.
[Note: Thomas de Waal is a senior fellow with Carnegie Europe, specializing in Eastern Europe and the Caucasus region.]
Myth has it that before he became king of Athens, Theseus went to Crete and slew the Minotaur, a creature half-man and half-bull that roamed the labyrinth in Knossos. According to Homer’s Iliad, Idomeneus, King of Crete, was one of the top-ranked generals of the Greek alliance in the Trojan War. But long before the legends and the literature, Crete hosted Europe’s most advanced early Bronze Age civilization—dubbed the Minoan—which was then overrun and absorbed by the Mycenean Greeks who are later said to have made war at Troy. Minoan Civilization flourished circa 3000–1450 BCE, when the Myceneans moved in. What remains of the Minoans are magnificent ruins of palace complexes, brilliantly rendered frescoes depicting dolphins, bull-leaping lads, and bare-breasted maidens, and a still-undeciphered script known as Linear A. The deepest roots of Western Civilization run to the ancient Hellenes, so much so that some historians proclaim the Greeks the grandfathers of the modern West. If that is true, then the Minoans of Crete were the grandfathers of the Greeks.
Unfortunately, if you want to learn more about the Minoans, do not turn to A History of Crete, by former educator Chris Moorey, an ambitious if too often dull work that affords this landmark civilization a mere 22 pages. Of course, the author has every right to emphasize what he deems most relevant, but the reader also has a right to feel misled—especially as the jacket cover sports a bull-leaping scene from a Minoan fresco! And it isn’t only the Minoans that are bypassed; Moorey’s treatment of Crete’s glorious ancient past is at best superficial. After a promising start that touches on recent discoveries of Paleolithic hand-axes, he fast-forwards at a dizzying rate: Minoan Civilization ends on page 39; more than a thousand years of Greek dominance concludes on page 66, and Roman rule is over by page 84. Thus begins the long saga of Crete as a relative backwater, under the sway of distant colonial masters.
I am not certain what the author’s strategy was, but it appears that his goal was to divide Crete’s long history into equal segments, an awkward tactic akin to a biographer of Lincoln lending equal time to his rail-splitting and his presidency. At any rate, much of the story is simply not all that interesting the way Moorey tells it. In fact, too much of it reads like an expanded Wikipedia entry, while sub-headings too frequently serve as unwelcome interruptions to a narrative that generally tends to be stilted and colorless. The result is a chronological report of facts about people and events, conspicuously missing the analysis and interpretation critical to a historical treatment. Moreover, the author’s voice lacks enthusiasm and remains maddeningly neutral, whether the topic is tax collection or captive rebels impaled on hooks. As the chronicle plods across the many centuries, there is also a lack of connective tissue, so the reader never really gets a sense of what distinguishes the people of Crete from people anywhere else. What are their salient characteristics? What is the cultural glue that bonds them together? We never really find out.
To be fair, there is a lot of information here. And Moorey is not a bad writer, just an uninspired one. Could this be because the book is directed at a scholarly rather than a popular audience, and academic writing by its nature can often be stultifying? That’s one possibility. But is it even a scholarly work? The endnotes are slim, and few point to primary sources.
A History of Crete is a broad survey that may serve as a useful reference for those seeking a concise study of the island’s past, but it seems like an opportunity missed. In the final paragraph, the author concludes: “In spite of all difficulties, it is likely the spirit of Crete will survive.” What is this spirit of Crete he speaks of? Whatever it may be, the reader must look elsewhere to find out.
In the aftermath of a clash in Turkistan in 1221, a woman held captive by Mongol soldiers admitted she had swallowed her pearls to safeguard them. She was immediately executed and eviscerated. When pearls were indeed recovered, Genghis Khan “ordered that they open the bellies of the slain” on the battlefield to look for more. [p23] Such was the consequence of pearls for the Mongol Empire.
As this review goes to press (5-12-21), the value of a single Bitcoin is about $56,000 U.S. dollars—dwarfing the price for an ounce of gold at a mere $1830—an astonishing number for a popular cryptocurrency that few even accept for payment. Those ridiculing the rise of Bitcoin dismiss it as imaginary currency. But aren’t all currencies imaginary? The paper a dollar is printed on certainly is not worth much, but it can be exchanged for a buck because the United States government says so, subject to inflation of course. All else rises and falls on a market that declares a value, which varies from day to day. Then why, you might ask, in the rational world of the twenty-first century, are functionally worthless shiny objects like gold and diamonds (for non-industrial applications) worth anything at all? It’s a good question, but hardly a new one—since long before the days of Jericho and Troy, people have attached value to the pretty but otherwise useless. Circa 4200 BCE, spondylus shells were money of a sort in both Old Europe and the faraway Andes. Remarkably, cowries once served as the chief economic mechanism in the African slave trade; for centuries human beings were bought and sold as chattel in exchange for shells that once housed sea snails!
The point is that even the most frivolous item can be deemed of great worth if enough agree that it is valuable. With that in mind, it is hardly shocking to learn that pearls were treasured above all else by the Mongols during their heady days of empire. It may nevertheless seem surprising that this phenomenon would be worthy of a book-length treatment, but acclaimed educator, author and historian Thomas T. Allsen makes a convincing case that it is in his final book prior to his passing in 2019, The Steppe and the Sea: Pearls in the Mongol Empire, which will likely remain the definitive work on this subject for some time to come.
The oversize footprint of the Mongols and their significance to global human history has been vast if too often underemphasized, a casualty of the Eurocentric focus on so-called “Western Civilization.” Originally nomads who roamed the steppe, by the thirteenth and fourteenth centuries the Mongols had forged the largest contiguous land empire in history, stretching from Eastern Europe to the Sea of Japan, encompassing parts or all of China, Southeast Asia, the Iranian plateau, and the Indian subcontinent. Ruthlessly effective warriors, they toppled numerous kingdoms in the fierce onslaughts that marked their famously brutal trails of conquest. Less well-known, as Allsen reveals, was their devotion to plunder, conducted with both a ferocious appetite and perhaps the greatest degree of organization ever seen in the sacking of cities. No spoils were prized more than pearls, acquired from looted state treasuries as well as individuals such as that random unfortunate who was sliced open at Genghis Khan’s command. Pearls were more than simply booty; the Mongols were obsessed with them.
This is a story that turns out to be as fascinating as it is unexpected. The author’s approach is highly original, cogently marrying economics to political culture and state-building without losing sight of his central theme. In a well-written if decidedly academic narrative, Allsen focuses on the Mongol passion for pearls as symbols of wealth and status to explore a variety of related topics. One of the most entertaining examines the Yuan court, where pearls were the central element for wardrobe and fashion, and rank rigidly determined if and how these could be displayed. At the very top tier, naturally, was the emperor and his several wives, who were spectacularly identifiable in their extravagant ornamentation. The emperor’s consorts wore earrings of “matched tear-shaped pearls” said to be the size of hazelnuts, or alternately, pendant earrings with as many as sixty-five matched pearls attached to each pendant! More flamboyant was their elaborate headgear, notably the tall, unwieldy boot-shaped headdress called a boghta that was decorated with plumes and gems and—of course—many more pearls! [p52-53]
Beyond the spotlight on court life, the author widens his lens to explore broader arenas. The Mongols may have been the most fanatical about acquiring pearls, but they certainly were not the first to value them, nor the last; pearls remain among the “shiny objects” with no real function beyond adornment that command high prices to this day. Allsen provides a highly engaging short course for the reader as to where pearls come from and why the calcium carbonate that forms a gemstone in one oyster is—based upon shape, size, luster, and color—prized more than another. This is especially important because of the very paradox the book’s title underscores: it is remarkable that products from the sea became the most prized possession for a people of the steppe! There is also a compelling discussion of the transition from conquering nomad warrior to settled overlord that offers a studied debate on whether the “self-indulgent” habit of coveting consumer goods such as “fine textiles, precious metals, and gems, especially pearls” was the result of being corrupted by the sedentary “civilized” they subjected, or if such cravings were born in a more distant past. [p61]
While I enjoyed The Steppe and the Sea, especially the first half, which concludes with the disintegration of the Mongol Empire, this book is not for everyone. Academic writing imposes a certain stylistic rigidity that suits the scholarly audience it is intended for, but that tends to create barriers for the general reader. In this case accessibility is further diminished by Allsen’s translation of Mongolian proper names into ones likely unfamiliar to those outside of his field: Genghis Khan is accurately rendered as Chinggis Qan, and Kublai Khan as Qubilai Qan, but this causes confusion that might have been mitigated by a parenthetical reference to the more common name. And the second part of the book, “Comparisons and Influence,” which looks beyond the Mongol realm, is slow going. It seemed like a better tactic might have been to incorporate much of it into the previous narrative, strengthening connections and contrasts while improving readability. On the plus side, sound historical maps are included that proved a critical reference throughout the read.
The Mongol Empire is ancient history, but these days a wild pearl of high quality could still be worth as much as $100,000, although most range in price from $300 to $1500. It seems like civilization is still pretty immature when it comes to shiny objects. On the other hand, this morning, an ounce of palladium—a precious metal valued for its use in catalytic converters and multi-layer ceramic capacitors rather than jewelry—was priced at almost $3000, some 62% more than an ounce of gold! So maybe there is hope for us, after all. I wish Dr. Allsen was still alive so I could reach out via email and find out his thoughts on the subject. Since that is impossible, I can only urge you to read his final book and consider how little human appetites have changed throughout the ages.
In April 1962, President John F. Kennedy hosted a remarkable dinner for more than four dozen Nobel Prize winners and assorted other luminaries drawn from the top echelons of the arts and sciences. With his characteristic wit, JFK pronounced it “The most extraordinary collection of talent, of human knowledge, that has ever been gathered together at the White House with the possible exception of when Thomas Jefferson dined alone.” One of the least prominent guests that evening was the novelist William Styron, who attended with his wife Rose, and recalled his surprise at the invitation. Styron was not yet the critically acclaimed, Pulitzer Prize-winning literary icon he was later to become, but he was hardly an unknown figure, and it turns out that his most recent novel of American expatriates, Set This House on Fire, was the talk of the White House in the weeks leading up to the event. So he had the good fortune to dine not only with the President and First Lady, but with the likes of John Glenn, Linus Pauling, and Pearl Buck—and in the after-party forged a long-term intimate relationship with the Kennedy family.
My first Styron was The Confessions of Nat Turner, which I read as a teen. Its merits somewhat unfairly subsumed at the time by the controversy it sparked over race and remembrance, it remains a notable achievement, as well as a reminder that literature is not synonymous with history, nor should it be held to that account. I found Set This House on Fire largely forgettable, but as an undergrad was utterly blown away when I read Lie Down in Darkness, his first novel and a true masterpiece that while yet indisputably original clearly evoked the Faulknerian southern gothic. I went on to read anything by the author I could get my hands on. Also a creature of controversy upon publication, Sophie’s Choice, winner of the National Book Award for Fiction in 1980, remains in my opinion one of the finest novels of the twentieth century.
I thought I had read all of Styron’s fiction, so it was with a certain surprise that I learned from a friend who is both author and bibliophile of the existence of A Tidewater Morning, a collection of three novellas I had somehow overlooked. I bought the book immediately, and packed it to take along for a pandemic retreat to a Vermont cabin in the woods, where I read it through in the course of the first day and a half of the getaway, parked in a comfortable chair on the porch sipping hot coffee in the morning and cold beer in late afternoon. Perhaps it was the fact that this was our first breakaway from months of quarantine isolation, or maybe it was the alcohol content of the IPA I was tossing down, but there was definitely a palpable emotional tug for me reading Styron again—works previously unknown to me no less—so many decades after my last encounter with his work, back when I was a much younger man than the one turning these pages. The effect was more pronounced, I suppose, because the semi-autobiographical stories in this collection look back to Styron’s own youth in the Virginia Tidewater in the 1930s and were written when he too was a much older man.
“Love Day,” the first tale of the collection, has him as a young Marine in April 1945, yet untested in combat, awaiting orders to join the invasion of Okinawa and wrestling the ambivalence of chasing heroic destiny while privately entertaining “gut-heaving frights.” There’s much banter among the men awaiting their fate, but the story of real significance is told through flashbacks to an episode some years prior, he still a boy in the back seat of his father’s Oldsmobile, broken down on the side of the road. War is looming—the very war he is about to join—although it was far from certain then, but the catastrophe of an unprepared America overrun by barbaric Japanese invaders is the near-future imagined in the Saturday Evening Post piece the boy is reading in the back of the stalled car. Simmering tempers flare when he lends voice to the prediction. His mother, stoic in her leg brace, slowly dying of a cancer known to all but unacknowledged, had earlier furiously rebuked him for mouthing a racist epithet and now upbraided him again for characterizing the Japanese as “slimy butchers,” while belittling the notion of a forthcoming war. Unexpectedly, his father—a mild, highly educated man quietly raging at his own inability to effect a simple car repair—lashes out at his wife, branding her “idiotic” and “a fool” for her naïve idealism, then crumbles under the weight of his words to beg her forgiveness. It is a dramatic snapshot not only of a moment of a family in turmoil, but of a time and a place that has long faded from view. Only Styron’s talent with a pen could leave us with so much from what is, after all, only a few pages.
The third story is the title tale, “A Tidewater Morning,” which revisits the family to follow his mother’s final, agonizing days. It concludes with both the boy and his father experiencing twin, if unrelated, epiphanies. It’s a good read, but I found it a bit overwrought, lacking the subtlety characteristic of Styron’s prose.
Sandwiched between these two is my own favorite, “Shadrach,” the story of a 99-year-old former slave—sold away to an Alabama plantation in antebellum days—who shows up unannounced with the dying wish to be buried in the soil of the Dabney property where he was born. The problem is that the Dabney descendant currently living there is a struggling, dirt-poor fellow who could be a literary cousin of the Snopeses who populate Faulkner’s novels. The law prohibits interring a black man on his property, and he likewise lacks the means to bury him elsewhere. On the surface, “Shadrach” appears to be a simple story, but on closer scrutiny it reveals itself to be a very complex one, peopled with multidimensional characters and layered with vigorous doses of both comedy and tragedy.
I highly recommend Styron to those who have not yet read him. For the uninitiated, (spoiler alert!) I will close this review with a worthy passage:
“Death ain’t nothin’ to be afraid about,” he blurted in a quick, choked voice … “Life is where you’ve got to be terrified!” he cried as the unplugged rage spilled forth. … Where in the goddamned hell am I goin’ to get the money to put him in the ground? … I ain’t got thirty-five dollars! I ain’t got twenty-five dollars! I ain’t got five dollars!” … “And one other thing!” He stopped. Then suddenly his fury—or the harsher, wilder part of it—seemed to evaporate, sucked up into the moonlit night with its soft summery cricketing sounds and its scent of warm loam and honeysuckle. For an instant he looked shrunken, runtier than ever, so light and frail that he might blow away like a leaf, and he ran a nervous, trembling hand through his shock of tangled black hair. “I know, I know,” he said in a faint, unsteady voice edged with grief. “Poor old man, he couldn’t help it. He was a decent, pitiful old thing, probably never done anybody the slightest harm. I ain’t got a thing in the world against Shadrach. Poor old man.” …
“And anyway,” Trixie said, touching her husband’s hand, “he died on Dabney ground like he wanted to. Even if he’s got to be put away in a strange graveyard.”
“Well, he won’t know the difference,” said Mr. Dabney. “When you’re dead nobody knows the difference. Death ain’t much.” [p76-78]
Africa. My youth largely knew of it only through the distorted lens of racist cartoons peopled with bone-in-their-nose cannibals, B-grade movies showcasing explorers in pith helmets who somehow always managed to stumble into quicksand, and of course Tarzan. It was still even then sometimes referred to as the “Dark Continent,” something that was supposed to mean dangerous and mysterious but also translated, for most of us, into the kind of blackness that was synonymous with race and skin color.
My interest in Africa came via the somewhat circuitous route of my study of the Civil War. The central cause of that conflict was, of course, human chattel slavery, and nearly all the enslaved were descendants of lives stolen from Africa. So, for me, a closer scrutiny of the continent was the logical next step. One of the benefits of a fine personal library is that there are hundreds of volumes sitting on shelves waiting for me to find the right moment for them. Such was the case for Africa: A Biography of the Continent, by John Reader, which sat unattended but beckoning for some two decades until a random evening found a finger on the spine, and then the cover was open and the book was in my lap. I did not turn back.
With a literary flourish rarely present in nonfiction combined with the ambitious sweep of something like a novel of James Michener, Reader attempts nothing less than the epic as he boldly surveys the history of Africa from the tectonic activities that billions of years ago shaped the continent, to the evolution of the single human species that now populates the globe, to the rise and fall of empires, to colonialism and independence, and finally to the twin witness of the glorious and the horrific in the peaceful dismantling of South African apartheid and the Rwandan genocide. In nearly seven hundred pages of dense but highly readable text, the author succeeds magnificently, identifying the myriad differences in peoples and lifeways and environments while not neglecting the shared themes that then and now much of the continent holds in common.
Africa is the world’s second largest continent, and it hosts by far the largest number of sovereign nations: with the addition of South Sudan in 2011—twelve years after Reader’s book was published—there are now fifty-four, as well as a couple of disputed territories. But nearly all of these states are artificial constructs, relics of European colonialism, lines on maps once penciled in by elite overlords in distant drawing rooms in places like London, Paris, Berlin, and Brussels—maps heavily influenced by earlier incursions by the Spanish, Portuguese, and Dutch. Much of the poverty, instability, and often dreadful standards of living in Africa is a vestige of these artificial borders, which mostly ignored prior states, tribes, clans, languages, religions, identities, lifeways. When their colonial masters, who had long raped the land for its resources and the people for their self-esteem, withdrew in the whirlwind decolonization era of 1956-1976—some at the strike of the pen, others at the point of the sword—the exploiters left little of value for nation-building to the exploited beyond the mockery of those boundaries. Whatever of the ancestral past was lost in the process was lost irrevocably. That is one of Reader’s themes. But there is so much more.
The focus is, as it should be, on sub-Saharan Africa; the continent’s northern portion is an extension of the Mediterranean world, marked by the storied legacies of ancient Greeks, Carthaginians, Romans, and the later Arab conquest. And Egypt, then and now, belongs more properly to the Middle East. But most of Africa’s vast geography stretches south of that, along the coasts and deep into the interior. Reader delivers “Big History” at its best, and sub-Saharan Africa offers up an immense arena for the drama that it entails—from the fossil beds that begat Homo habilis in Tanzania’s Olduvai Gorge, to the South African diamond mines that spawned enormous wealth for a few on the backs of the suffering of a multitude, to today’s Maasai Mara game reserve in Kenya that we learn is not as we would suppose a remnant of some ancient pristine habitat, but rather a breeding ground for the deadly sleeping sickness carried by the tsetse fly that turned once productive land into a place unsuitable for human habitation.
Perhaps the most remarkable theme in Reader’s book is population sustainability and migration. While Africa is the second largest of earth’s continents, it remains vastly underpopulated relative to its size. Given the harsh environment, limited resources, and prevalence of devastating disease, there is strong evidence that it has likely always been this way. Slave-trading was, of course, a kind of forced migration, but more typically Africa’s history has long been characterized by a voluntary movement of peoples away from the continent, to the Middle East, to Europe, to all the rest of the world. Migration has always been—and remains today—subject to the dual factors of “push” and “pull,” but the push factor has dominated. That is perhaps the best explanation for what drove the migrations of archaic and anatomically modern humans out of Africa to populate the rest of the globe. The recently identified 210,000-year-old Homo sapiens skull in a cave in Greece reminds us that this has been going on a very long time. Homo erectus skulls found in Dmanisi, Georgia, that date back 1.8 million years underscore just how long!
Slavery is, not unexpectedly, also a major theme for Reader, largely because of the impact of the Atlantic slave trade on Africa and how it forever transformed the lifeways of the people directly and indirectly affected by its pernicious hold—culturally, politically, and economically. The slavery that was a fact of life on the continent before the arrival of European traders closely resembled its ancient roots; certainly race and skin color had nothing to do with it. As noted, I came to study Africa via the Civil War and antebellum slavery. To this day, a favored logical fallacy advanced by “Lost Cause” apologists for the Confederate slave republic asks rhetorically “But their own people sold them as slaves, didn’t they?” As if this contention—if it were indeed true—would somehow expiate or at least attenuate the sin of enslaving human beings. But is it true? Hardly. Captors of slaves taken in raids or in war by one tribe or one ethnicity would hardly consider them “their own people,” any more than the Vikings who for centuries took Slavs to feed the hungry slave markets of the Arab world would have considered them “their own people.” The question is a painful reminder that such notions endure in the mindset of the deeply entrenched racism that still defines modern America—a racism derived from African chattel slavery to begin with. It reflects how outsiders might view Africa, but not how Africans view themselves.
The Atlantic slave trade left a mark on every African who was touched by it as buyer, seller or unfortunate victim. The insatiable thirst for cheap labor to work sugar (and later cotton) plantations in the Americas overnight turned human beings into Africa’s most valuable export. Traditions were trampled. An ever-increasing demand put pressure on delivering supply at any cost. Since Europeans tended to perish in Africa’s hostile environment of climate and disease, a whole new class of “middle-men” came to prominence. Slavery, which dominated trade relations, corrupted all it encountered and left scars from its legacy upon the continent that have yet to fully heal.
This review barely scratches the surface of the range of material Reader covers in this impressive work. It’s a big book, but there is not a wasted page or paragraph, and it neglects neither the diversity of the land and its peoples nor what they hold in common. Are there flaws? The included maps are terrible, but for that the publisher should be faulted rather than the author. To compensate, I hung a map of modern Africa on the door of my study and kept a historical atlas as companion to the narrative. Other than that quibble, the author’s achievement is superlative. Rarely have I read something of this size and scope and walked away so impressed, both with how much I learned and with the learning process itself. If you have any interest in Africa, this book is an essential read. Don’t miss it.
Some years ago, I had the pleasure of staying in a historic cabin on a property in Spotsylvania that still hosts extant Civil War trenches. Those who imagine great armies clad in blue and grey massed against each other with pennants aloft on open fields would not be wrong for the first years of the struggle, but those trenches better reflect the reality of the war as it ground to its slow, bloody conclusion in its final year. Those last months contained some of the greatest drama and most intense suffering of the entire conflict, yet often receive far less attention than they deserve. A welcome redress to this neglect is Hymns of the Republic: The Story of the Final Year of the American Civil War, by journalist and historian S.C. Gwynne, which neatly marries literature to history and resurrects for us the kind of stirring narrative that once dominated the field.
Looking back, for all too many Civil War buffs it might seem that a certain Fourth of July in 1863—when in the east a battered Lee retreated from Gettysburg on the same day that Vicksburg fell in the west—marked the beginning of the end for the Confederacy. But experts know that assessment is overdrawn. Certainly, the south had sustained severe body blows on both fronts, but the war yet remained undecided. Like the colonists four score and seven years prior to that day, these rebels did not need to “win” the war, only to avoid losing it. As it was, a full ninety-two weeks—nearly two years—lay ahead until Appomattox, some six hundred forty-six days of bloodshed and uncertainty for both sides, most of what truly mattered compressed into the last twelve months of the war. And, tragically, those trenches played a starring role.
Hymns of the Republic opens in March 1864, when Ulysses Grant—architect of the fall of Vicksburg that was by far the more significant victory on that Independence Day 1863—was brought east and given command of all Union Armies. In the three years since Fort Sumter, the war had not gone well in the east, largely as the result of a series of less-than-competent northern generals who had squandered opportunities and been repeatedly driven to defeat or denied outright victory by the wily tactician, Robert E. Lee. The seat of the Confederacy at Richmond—only a tantalizing ninety-five miles from Washington—lay unmolested, while European powers toyed with the notion of granting the Confederacy recognition. The strategic narrative in the west was largely reversed, marked by a series of dramatic Union victories crafted by skilled generals, crowned by Grant’s brilliant campaign that saw Vicksburg fall and the Confederacy virtually cut in half. But all eyes had been on the east, to Lincoln’s great frustration. Now events in the west were largely settled, and Lincoln brought Grant east, confident that he had finally found the general who would defeat Lee and end the war. But while Lincoln’s instincts proved sound in the long term, misplaced optimism for an early close to the conflict soon evaporated. More than a year of blood and tears lay ahead.
The battle tactics are largely a familiar story—Grant Takes Command was the exact title of a Bruce Catton classic—but Gwynne updates the narrative with the benefit of the latest scholarship, which looks beyond not only the stereotypes of Grant and Lee but also the very dynamics of more traditional treatments focused solely upon battles and leaders. Most prominently, he resurrects the African Americans who until somewhat recently were conspicuously absent from much Civil War history, buried beneath layers of propaganda spun by unreconstructed Confederates who fashioned an alternate history of the war—the “Lost Cause” myth—that for too long dominated Civil War studies and still stubbornly persists both in right-wing politics and in the curricula of some southern school systems to this day. In the process, Gwynne restores African Americans to their place as central players in a struggle from which the history books have long erased them.
Erased. Remarkably, most Americans rarely thought of blacks at all in the context of the war until the film Glory (1989) and Ken Burns’ docuseries The Civil War (1990) came along. And there are still books—Joseph Wheelan’s Their Last Full Measure: The Final Days of the Civil War, published in 2015, springs to mind—that demote these key actors to bit parts. Yet, without enslaved African Americans there would never have been a Civil War. The centrality of slavery to secession has been just as incontrovertibly asserted by the scholarly consensus as it has been vehemently resisted by Lost Cause proponents who would strike out that uncomfortable reference and replace it with the euphemistic “States’ Rights,” neatly obscuring the fact that southern states seceded to champion and perpetuate the right to own dark-complected human beings as chattel property. Social media is replete with concocted fantasies of legions of “Black Confederates,” but the reality is that about a half million African Americans fled to Union lines, and so many enlisted to make war on their former masters that by the end of the war fully ten percent of the Union army was composed of United States Colored Troops (USCT). Blacks knew what the war was about, and ultimately proved a force to be reckoned with that drove Union victory, even as a deeply racist north often proved less than grateful for their service.
Borrowing a page from the latest scholarship, Gwynne points to the prominence of African Americans throughout the war, but especially in its final months—marked both by remarkable heroism and a trail of tragedy. His story of the final year of the conflict commences with the massacre at Fort Pillow in April 1864 of hundreds of surrendering federal troops—the bulk of whom were uniformed blacks—by Confederates under the command of Nathan Bedford Forrest. The author gives Forrest a bit of a pass here—while the general was himself not on the field, he later bragged about the carnage—but Gwynne rightly puts focus on the long-term consequences, which were manifold.
The Civil War was the rare conflict in history not marred by wide-scale atrocities—except towards African Americans. Lee’s allegedly “gallant” forces in the Gettysburg campaign kidnapped blacks they encountered to send south into slavery, and while Fort Pillow might have been the most significant open slaughter of black soldiers by southerners, it was hardly the exception. Confederates were enraged to see blacks garbed in uniform and sporting a rifle, and thus black soldiers were frequently murdered once disarmed rather than taken prisoner like their white counterparts. Something like a replay of Fort Pillow occurred at the Battle of the Crater during the siege of Petersburg, although the circumstances were more ambiguous, as the blacks gunned down in what rebels termed a “turkey shoot” were not begging for their lives as at Pillow. This was not far removed from official policy, of course: the Confederate government threatened to execute or sell into slavery captured black soldiers, and refused to consider them for prisoner exchange. This was a critical factor that led to the breakdown of the parole and exchange processes that had served as guiding principles throughout much of the war. The result, on both sides, was the horror of overcrowding and deplorable conditions in places like Georgia’s Andersonville and Camp Douglas in Chicago.
Meanwhile, Grant was hardly disappointed with the collapse of prisoner exchange. To his mind, anything that denied the south men or materiel would hasten the end of the war, which was his single-minded pursuit. Grant has long been subjected to calumnies that branded him “Grant the Butcher” because he seemed to throw lives away in hopeless attempts to dislodge a heavily fortified enemy. The most infamous example of this was Cold Harbor, which saw massive Union casualties. But Lee’s tactical victory there—it was to be his last of the war—further depleted his rapidly diminishing supply of men and arms, which simply could not be replaced. Grant had a strategic vision that set him apart from the rest. That Lee pushed on as the odds shrank for any outcome other than ultimate defeat begat what Gwynne terms “the Lee paradox: the more the Confederates prolonged the war, the more the Confederacy was destroyed.” [p252] And that destruction was no unintended consequence, but a deliberate component of Grant’s grand strategy to prevent food, munitions, pack animals, and slave labor from supporting the enemy’s war effort. Gwynne finds fault with Sherman’s generalship, but his “march to the sea” certainly achieved what had been intended. And with a northern public divided between those who would make peace with the rebels and those impatient with both Grant and Lincoln for an elusive victory, it was Sherman who delivered Atlanta and ensured the reelection of the president, something much in doubt even in Lincoln’s own mind.
There is far more contained within the covers of this fine work than any review could properly summarize. Much to his credit, the author does not neglect those often marginalized by history, devoting a well-deserved chapter to Clara Barton entitled “Battlefield Angel.” And the very last paragraph of the final chapter settles upon Juneteenth, when—far removed from the now quiet battlefields—the last of the enslaved finally learned they were free. Thus, the narrative ends as it began, with African Americans in the central role too often denied them in other accounts. For those well-read in the most recent scholarship, there is little new in Hymns of the Republic, but the general audience will find much to surprise them, if only because a good deal of this material has long been overlooked. Perhaps Gwynne’s greatest achievement is in distilling a grand story from the latest historiography and presenting it as the kind of exciting read Civil War literature is meant to be. I highly recommend it.
A familiar construct for students of European history is what is known as “The Long Nineteenth Century,” a period bookended by the French Revolution and the start of the Great War. The Great War. That is what it used to be called, before it was diminished by its rechristening as World War I, to distinguish it from the even more horrific conflict that was to follow just two decades hence. It is the latter that in retrospect tends to overshadow the former. Some are even tempted to characterize one as simply a continuation of the other, but that is an oversimplification. There was in fact far more than semantics to that designation of “Great War,” and historians are correct to flag it as a definitive turning point, for by the time it was over Europe’s cherished notions of civilization—for better and for worse—lay in ruins, and her soil hosted not only the scars of vast, abandoned trenches, but the bones of millions who had held dear, in their heads and their hearts, the myths those notions turned out to be.
The war ended with a stopwatch of sorts. The Armistice that went into effect on November 11, 1918 at 11AM Paris time marked the end of hostilities, a synchronized moment of collective European consciousness that, it is said, all who experienced it would recall for as long as they lived. Of course, something like 22 million souls—military and civilian—could not share that moment: they were the dead. Nearly three thousand died that very morning, as fighting continued right up to the final moments when the clock ran out.
What happened next? There is a tendency to fast forward because we know how it ends: the imperfect Peace of Versailles, the impotent League of Nations, economic depression, the rise of fascism and Nazism, American isolationism, Hitler invades Poland. In the process, so much is lost. Instead, Daniel Schönpflug artfully slows the pace with his well-written, highly original strain of microhistory, A World on Edge: The End of the Great War and the Dawn of a New Age. The author, an internationally recognized scholar and adjunct professor of history at the Free University of Berlin, blends the careful analytical skills of a historian with a talented pen to turn out one of the finest works in this genre to date.
First, he presses the pause button. That pause—the Armistice—is just a fragment of time, albeit one of great significance. But it is what follows that most concerns Schönpflug, who has a great drama to convey and does so through the voices of an eclectic array of characters from various walks of life across multiple geographies. When the action resumes, alternating and occasionally overlapping vignettes chronicle the postwar years from the unique, often unexpected vantage points of just over two dozen individuals—some very well known, others less so—who were to leave an imprint of larger or smaller consequence upon the changed world they walked upon.
There is Harry S Truman, who regrets that the military glory he aspired to as a boy has eluded him, yet is confident he has acquitted himself well, and cannot wait to return home to marry his sweetheart Bess and—ironically—vows he will never fire another shot as long as he lives. Former pacifist and deeply religious Medal of Honor recipient Sergeant Alvin York receives a hero’s welcome Truman could only dream of, but eschews offers of money and fame to return to his backwoods home in Tennessee, where he finds purpose by leveraging his celebrity to bring roads and schools to his community. Another heroic figure is Sergeant Henry Johnson, of the famed 369th Infantry known as the “Harlem Hellfighters,” who incurred no fewer than twenty-one combat injuries fending off the enemy while keeping a fellow soldier from capture, but because of his skin color returns to an America where he remains a second-class citizen, one who does not receive the Medal of Honor he deserves until its posthumous award by President Barack Obama nearly a century later. James Reese Europe, the regimental band leader of the “Harlem Hellfighters,” who has been credited with introducing jazz to Europe, also returns home to an ugly twist of fate.
And there’s Käthe Kollwitz, an artist who lost a son in the war and finds herself in the uncertain environment of a defeated Germany engulfed in street battles between Reds and reactionaries, both flanks squeezing the center of a nascent democracy struggling to assert itself in the wake of the Kaiser’s abdication. One of the key members of that tenuous center is Matthias Erzberger, perhaps the most hated man in the country, who had the ill luck to be chosen as the official who formally accedes to Germany’s humiliating terms for Armistice, and as a result wears a target on his back for the rest of his life. At the same time, the former Kaiser’s son, Crown Prince Wilhelm von Preussen, is largely a forgotten figure who waits in exile for a call to destiny that never comes. Meanwhile in Paris, Marshal Ferdinand Foch lobbies for Germany to pay an even harsher price, as journalist Louise Weiss charts a new course for women in publishing and longs to be reunited with her lover, Milan Štefánik, an advocate for Czechoslovak sovereignty.
Others championing independence elsewhere include Nguyễn Tất Thành (later Hồ Chí Minh), polishing plates and politics while working as a dishwasher in Paris; Mohandas Gandhi, who barely survives the Spanish flu and now struggles to hold his followers to a regimen of nonviolent resistance in the face of increasingly violent British repression; T.E. Lawrence, increasingly disillusioned by the failure of the victorious allies to live up to promises of Arab self-determination; and, Terence MacSwiney, who is willing to starve himself to death in the cause of Irish nationhood. No such lofty goals motivate assassin Soghomon Tehlirian, a survivor of the Armenian genocide, who only seeks revenge on the Turks; nor future Auschwitz commandant Rudolf Höss, who emerges from the war an eager and merciless recruit for right-wing paramilitary forces.
There are many more voices, including several from the realms of art, literature, and music such as George Grosz, Virginia Woolf, and Arnold Schönberg. The importance of the postwar evolution of the arts is underscored in quotations and illustrations that head up each chapter. Perhaps the most haunting is Paul Nash’s 1918 oil-on-canvas of a scarred landscape entitled—with a hint of either optimism or sarcasm—We Are Making a New World. All the stories the voices convey are derived from their respective letters, diaries, and memoirs; only in the “Epilogue” does the reader learn that some of those accounts are clearly fabricated.
Many of my favorite characters in A World on Edge are ones I had never heard of before, such as Moina Michael, who was so inspired by the sacrifice of those who perished in the Great War that she singlehandedly led a campaign to memorialize the dead with the poppy as her chosen emblem for the fallen, an enduring symbol to this very day. But I found no story more gripping than that of Marina Yurlova, a fourteen-year-old Cossack girl who became a child soldier in the Russian army, was so badly wounded she was hospitalized for a year, then entered combat once more during the ensuing civil war and was wounded again, this time by the Bolsheviks. Upon recovery, Yurlova embarked upon a precarious journey on foot through Siberia that lasted a month before she was able to flee Russia for Japan and eventually settle in the United States, where despite her injuries she became a dancer of some distinction.
I am a little embarrassed to admit that I received an advance reader’s edition (ARC) of A World on Edge as part of an early reviewer’s program way back in November 2018, but then let it linger in my to-be-read (TBR) pile until I finally got around to it near the end of June 2020. I loved the book but did not take any notes for later reference. So, by the time I sat down to review it in January 2021, given the size of the cast and the complexity of their stories, I felt there was no way I could do justice to the author and his work without re-reading it—so I did, over just a couple of days! And that is the true beauty of this book: for all its many characters, competing storylines, and what turns out to be multilevel, deeply profound messaging, it remains, for the grand saga that it is, a fast-paced, exciting read. Schönpflug’s technique of employing bit players to recount an epic tale succeeds so masterfully that the reader is hardly aware of what has been happening until the final pages are being turned. This is history, of course, indeed nonfiction, yet the result invites a favorable comparison to great literature, to a collection of short stories by Ernest Hemingway, or to a novel by André Brink. If European history is an interest, A World on Edge is not only a recommended read, but a required one.
Women are conspicuously absent in most Civil War chronicles. With a few notable exceptions—Clara Barton, Harriet Tubman, Mary Todd Lincoln—female figures largely appear in the literature as bit players, if they make an appearance at all. Author Karen Abbott seeks a welcome redress to this neglect with Liar Temptress Soldier Spy: Four Women Undercover in the Civil War, an exciting and extremely well-written, if deeply flawed, account of some women who made a significant contribution to the war effort, north and south.
The concept is sound enough. Abbott focuses on four very different women and relates their respective stories in alternating chapters. There is Belle Boyd, a teenage seductress with a lethal temper who serves as rebel spy and courier; Emma Edmonds, who puts on trousers to masquerade as Frank Thompson and joins the Union army; Rose O’Neal Greenhow, an attractive widow who romances northern politicians to obtain intel for the south; and, Elizabeth Van Lew, a prominent Richmond abolitionist who maintains a sophisticated espionage ring that infiltrates the inner circles of the Confederate government. Each of these is worthy of book-length treatment, but weaving their exploits together is an effective technique that makes for a readable and compelling narrative.
I had never heard of Karen Abbott—the pen name for Abbott Kahler—a journalist and highly acclaimed best-selling author dubbed the “pioneer of sizzle history” by USA Today. She is certainly a gifted writer, and unlike all too many works of history, her prose is fast-moving and engaging. I was swept along by her colorful recounting of the 1861 Battle of Bull Run, with flourishes such as: “Union troops fumbled backward and the Confederates rammed forward, a brutal and uneven dance, with soldiers felled like rotting trees.” I got so carried away I almost made it through the following passage without stumbling:
Some Northern soldiers claimed that every angle, every viewpoint, offered a fresh horror. The rebels slashed throats from ear to ear. They sliced off heads and dropkicked them across the field. They carved off noses and ears and testicles and kept them as souvenirs. They propped the limp bodies of wounded soldiers against trees and practiced aiming for the heart. They wrested muskets and swords from the clenched hands of corpses. They plunged bayonets deep into the backsides of the maimed and the dead. They burned the bodies, collecting “Yankee shin-bones” to whittle into drumsticks, and skulls to use as steins. [p34]
Almost. But I have a master’s degree in history and have spent a lifetime studying the American Civil War, and I have never heard this account of such barbarism at Bull Run. So I paused and flipped to Abbott’s notes for the corresponding page at the back of the book, where with a whiff of insouciance she admits: “Throughout the war both the North and the South exaggerated the atrocities committed by the enemy, and it’s difficult to determine which incidents were real and which were apocryphal.” [p442] Which is another way of saying that her account is highly sensationalized, if not an outright fabrication.
To my mind, Abbott commits an unpardonable sin here. A little research reveals that there were in fact a handful of allegations of brutality in the course of the battle, including the mutilation of corpses, but most of them anecdotal. There were several episodes of Confederate savagery later in the war, principally inflicted upon black soldiers in blue uniforms, but that is another story. How many readers of a popular history would, without question, take her at her word about what transpired at Bull Run? How many, when confronted with stories of testicles taken as souvenirs, would think to consult her citations? Lively paragraphs like this may certainly make for “sizzle”—but where’s the history? Historical novels have their place—The Killer Angels, by Michael Shaara, and Gore Vidal’s Lincoln, are among my favorites—but that is not the same thing as history, which must abide by a strict allegiance to fact-based reporting, informed analysis, and documentation. This author apparently demonstrates little loyalty to such constraints.
I read on, but with far more skepticism. Abbott’s style is seductive, so it’s easy to keep going. But sins do continue to accumulate. I have a passing familiarity with three of the four main characters, but fact-checking remained essential. Certainly the best known and most consequential was Van Lew, a heroic figure who aided the escape of prisoners of war and provided key intelligence to Union forces in the field. Greenhow is often cited as her counterpart working for the southern cause. Belle Boyd, on the other hand, has become a creature of legend who turns up more frequently in fiction or film than in history texts. I had never heard of Emma Edmonds, but I came to find her story the most fascinating of them all.
It seems that the more documented the subject—Van Lew, for example—the closer Abbott’s portrait comes to reliable biography. Beyond that, the imaginative seems to intrude, indeed dominate. The astonishing tale of Emma Edmonds has her not only impersonating a male Union soldier, but also variously posing as an Irish peddler and, in blackface, as a contraband, engaged in thrilling espionage missions behind enemy lines! It had the ring of the stuff that Thomas Berger’s Little Big Man was made of. I was suitably sucked in, but also wary. And rightly so: Abbott’s version of Emma Edmonds’ life is based almost entirely on Edmonds’ own memoir, with little to corroborate it, but the author doesn’t bother to reveal that in the narrative. That Edmonds pretended to be a man in order to enlist seems plausible; her spy missions were perhaps only fantasy. We simply don’t know; a true historian would help us draw conclusions. Abbott seems content to let it play out as so much drama to tickle her audience.
But worst of all comes the moment when Abbott reveals the fate of the luckless Confederate spy Greenhow, who drowns when her lifeboat capsizes with Union vessels bearing down on the steamer she has abandoned. Here the superlative talent of Abbott’s pen collides with her concomitant disloyalty to scholarship:
She was sideways, upside down, somersaulting inside the wet darkness. She screamed noiselessly, the water rushing in. She tried to hold her breath—thirty seconds, sixty, ninety—before her mouth gave way and water filled it again. Tiny streams of bubbles escaped from her nostrils. A burning scythed through her chest. That bag of gold yanked like a noose around her neck. Her hair unspooled and leeched to her skin, twining around her neck. She tried to aim her arms up and her legs down, to push and pull, but every direction seemed the same. No moonlight skimmed along the surface, showing her the way; there was no light at all. [p389]
Entertaining, right? Outstanding writing, correct? Solid history—of course not! Imagining Greenhow’s final agonizing moments of life with a literary flourish may very well enrich the pages of a work of fiction, but it is nothing less than an outrage in a work of history.
This book was a fun read. Were it a novel I would likely give it high marks. But that is not how it is packaged. Emma Edmonds pretended to be a man to save the Union. Karen Abbott pretends to be a historian to sell books. Both make for great stories. But don’t confuse either with reliable history.
When I visited New York’s Metropolitan Museum of Art some years ago, the object I found most stunning was the “Monteleone Chariot,” a sixth-century BCE Etruscan bronze chariot inlaid with ivory. I stood staring at it, transfixed, long enough for my wife to shuffle her feet impatiently. Still I lingered, dwelling on every detail, especially the panels depicting episodes from the life of the Homeric hero Achilles. By that time, I had read The Iliad more than once, and had long been immersed in studies of ancient Greece. How was it then, I wondered, that I could speak knowledgeably about Solon and Pisistratus, yet know so little about the Etruscans who crafted that chariot in the same century those notables walked the earth?
Long before anyone had heard of the Romans, city-states of Etruria dominated the Italian peninsula—and, along with Carthage and a handful of Greek poleis, the central Mediterranean as well. Later, Rome would absorb, crush or colonize all of them. In the case of the Etruscans, it was to be a little of each. And somehow, somewhat incongruously, over the millennia Etruscan civilization—or at least what the living, breathing Etruscans would have recognized as such—has been lost to us. But not lost in the way we usually think of “lost civilizations,” like Teotihuacan, for instance, or the Indus Valley, where what remains are the ruins of a vanished culture that disappeared from living memory, an undeciphered script, and even the uncertain ethnicity of its inhabitants. The Etruscans, on the other hand, were never forgotten, their alphabet can be read although their language largely defies translation, and their DNA lingers in at least some present-day Italians. Yet, by all accounts, they are nevertheless lost, and tantalizingly so.
Such a conundrum breeds frustration, of course: Romans supplanted the Etruscans but hardly exterminated them. Moreover, unlike other civilizations deemed “lost to history,” the Etruscans appear in ancient texts going as far back as Hesiod. There are also hundreds of excavated tombs, rich with decorative art and grave goods, the latter top-heavy with Greek imports they clearly treasured. So how can we know so much about the Etruscans and at the same time so little? Fortunately, Lucy Shipley, who holds a PhD in Etruscan archaeology, comes to a rescue of sorts with her well-written, delightful contribution to the scholarship, entitled simply The Etruscans, a volume in the digest-sized Lost Civilization series published by Reaktion Books.
Most Etruscan studies are dominated by discussions of the ancient sources and—most prominently—the tombs, which are nothing short of magnificent. But where does that lead us? Herodotus references the Etruscans, as does Livy. But are the sources reliable? Rather dubious, as it turns out. Herodotus may be a dependable chronicler of the Hellenes, but anyone who has read his comically misguided account of Egyptian life and culture is aware how far he can stray from reality. And Roman authors such as Livy routinely trumpeted a decidedly negative perspective, most evident in disdainful memories of the unwelcome, semi-legendary Etruscan kings who are said to have ruled Rome until the overthrow of “Tarquin the Proud” in 509 BCE.
Then there are the tombs. Attempts to extrapolate what ancient life was like from the art that decorates the tombs of the dead—awe-inspiring as it may be—can present a distorted picture (pun fully intended!) that ignores all but the wealthiest elite slice of the population. Much like Egyptology’s one-time obsession with pyramids and pharaonic king lists tended to obscure the no less interesting lives of the non-royal—such as those of the workers who collected daily beer rations and left graffiti within the walls of the pyramids they constructed—the emphasis on tombs that is standard to Etruscan studies reveals little of the lives of the vast majority of ordinary folks who peopled their world.
Shipley neatly sidesteps these traditional traps by refusing to be constrained by them. Instead, she relies on her training as an archaeologist to ask questions: what do we know about the Etruscans and how do we know it? And, perhaps more critically: what don’t we know and why don’t we know it? In the process, she brings a surprisingly fresh look to an enigmatic people in a highly readable narrative suitable to both academic and popular audiences. Arranged thematically rather than chronologically, the book pairs each chapter with a specific artifact or site that serves as a visual trigger for the discussion. Because Shipley is so talented with a pen, it is worth pausing to let her explain her methodology in her own words:
Why focus on the archaeology? Because it is the very materiality, the physicality, the toughness and durability of things and the way they insidiously slip and slide into every corner of our lives that makes them so compelling … We are continually making and remaking ourselves, with the help of things. I would argue that the past is no different in this respect. It’s through things that we can get at the people who made, used and ultimately discarded them—their projects of self-production are as wrapped up in stuff as our own. And always, wrapped up in these things, are fundamental questions about how we choose to be in the world, questions that structure our actions and reactions, questions that change and challenge how we think and what we feel. Questions and objects—the two mainstays of human experience. [p19-20]
Shipley’s approach succeeds masterfully. Because many of these objects—critical artifacts for the archaeologist but often also spectacular works of art for the casual observer—are rendered in full color in this striking edition, the reader is instantly hooked: effortlessly chasing the author’s captivating prose down a host of intriguing rabbit holes in pursuit of answers to the questions she has mated with these objects. Along the way, she showcases the latest scholarship with a concise treatment of a broad range of topics informed by the kind of multi-disciplinary research that defines twenty-first century historical inquiry.
This includes DNA studies of both cattle and human populations in an attempt to resolve the long debate over Etruscan origins. While Herodotus and legions of other ancient and modern detectives have long pointed to legendary migrations from Anatolia, it turns out that the Etruscans are likely autochthonous, speaking a pre-Indo-European language that may be related to the one spoken by Ötzi, the mummified iceman, thousands of years ago. Shipley also takes the time to explain how it is that we can read enough of the Etruscan alphabet to decipher proper names while remaining otherwise frustrated in efforts aimed at meaningful translation. Much that we identify as Roman was borrowed from Etruria, but as Rome assimilated the Etruscans over the centuries, their language was left behind. Later, Etruscan literature—like all too much of the classical world—fell victim to the zeal of early Christians in campaigns to purge any remnants of paganism. Most offensive in this regard were writings that described the practices of the “haruspex,” a specialist who sought to divine the future by examining the livers of sacrificial animals, an Etruscan ritual later integrated into Roman religious practices. Texts of haruspices appear prominently in the “hit lists” drawn up by Christian thinkers Tertullian and Arnobius.
My favorite chapter is entitled “Super Rich, Invisible Poor,” which highlights the inevitable distortion that results from the attention paid to the exquisite art and grave goods of the wealthy elite at the expense of the sizeable majority of the inhabitants of a dozen city-states comprising numerous towns, villages and some larger cities with populations thought to number in the tens of thousands. Although, to be fair, this has hardly been deliberate: there remains a stark scarcity in the archaeological record of the teeming masses, so to speak. While it may smack of cliché, the famous aphorism “Absence of evidence is not evidence of absence” should be triple underscored here! The Met’s Monteleone Chariot, originally part of an elaborate chariot burial, makes an appearance in this chapter, but perhaps far more fascinating is a look at the great complex of workshops at a site called Poggio Civitate, more than a hundred miles from Monteleone, where skilled craftspeople labored to produce a whole range of goods in the same century that chariot was fashioned. But what of those workers? There seemed to be no trace of them. You can clearly detect the author’s delight as she describes recent excavations that uncovered remains of a settlement that likely housed them. Shipley returns again and again to her stated objective of connecting the material culture to the living Etruscans who were once integral to it.
Another chapter worthy of superlatives is “Sex, Lives and Etruscans.” While it is tempting to impose modern notions of feminism on earlier peoples, Etruscan women do seem to have claimed lives of far greater independence than their classical contemporaries in Greece and Rome. And there are also compelling hints at an openness in sexuality—including wife-sharing—that horrified ancient observers who nevertheless thrilled in recounting licentious tales of wicked Etruscan behavior! Shipley describes tomb art that depicts overt sex acts with multiple partners, while letting the reader ponder whether legendary accounts of Etruscan profligacy are given to hyperbole or not.
In addition to beautiful illustrations and an engaging narrative, this volume also features a useful map, a chronology, recommended reading, and plenty of notes. It is rare that any author can so effectively tackle a topic so wide-ranging in such a compact format, so Shipley deserves special recognition for turning out such an outstanding work. The Etruscans rightly belongs on the shelf of anyone eager to learn more about a people who certainly made a vital contribution to the history of western civilization.
In Hearts in Atlantis, Stephen King channels the fabled lost continent as a metaphor for the glorious promise of the sixties that vanished so utterly that nary a trace remains. Atlantis sank, King declares bitterly in his fiction. He has a point. If you want to chart the actual moments those collective hopes and dreams were swamped by currents of reaction and finally submerged in the merciless wake of a new brand of unforgiving conservatism, you absolutely must turn to Reaganland: America’s Right Turn 1976-1980, Rick Perlstein’s brilliant, epic political history of an era too often overlooked, one that echoes in the America of 2020 with far greater resonance than perhaps any before or since. But be warned: you may need forearms even bigger than those of the sign-spinning guy in the Progressive commercial to handle this dense, massive 914-page tome that is nevertheless so readable and engaging that your wrists will tire before your interest flags.
Reaganland is a big book because it is actually several overlapping books. It is first and foremost the history of the United States at an existential crossroads. At the same time, it is a close account of the ill-fated presidency of Jimmy Carter. And, too, it is something of a “making of the president 1980.” This is truly ambitious stuff, and that Perlstein largely succeeds in pulling it off should earn him wide and lasting accolades both as a historian and an observer of the American experience.
Reaganland is the final volume in a series launched nearly two decades ago by Perlstein, a progressive historian, that chronicles the rise of the right in modern American politics. Before the Storm focused on Goldwater’s ascent upon the banner of far-right conservatism. This was followed by Nixonland, which profiled a president who thrived on division and earned the author outsize critical acclaim; and, The Invisible Bridge, which revealed how Ronald Reagan—stridently unapologetic for the Vietnam debacle, for Nixon’s crimes, and for angry white reaction to Civil Rights—brought notions once the creature of the extreme right into the mainstream, and began to pave the road that would take him to the White House. Reaganland is written in the same captivating, breathless style Perlstein made famous in his earlier works, but he has clearly honed his craft: the narrative is more measured, less frenetic, and is crowned with a strong concluding chapter—something conspicuously absent in The Invisible Bridge.
The grand—and sometimes allied—causes of the Sixties were Civil Rights and opposition to the Vietnam War, but the concomitant social and political revolutions spawned a myriad of others that included antipoverty efforts for the underprivileged, environmental activism, equal treatment for homosexuals and other marginalized groups such as Native Americans and Chicano farm workers, constitutional reform, consumer safety, and most especially equality for women, of which the right to terminate a pregnancy was only one component. The common themes were inclusion, equality, and cultural secularism. The antiwar movement came to not only dominate but virtually overshadow all else, yet at the same time it served as a unifying factor that stitched together a kind of counterculture coat of many colors to oppose an often stubbornly unyielding status quo. When the war wound down, that fabric frayed. Those who once marched together now marched apart.
This fragmentation was not generally adversarial; groups once in alliance simply went their own ways, organically seeking to advance the causes dear to them. And there was much optimism. Vietnam was history. Civil Rights had made such strides, even if there remained so much unfinished business. Much of what had been counterculture appeared to have entered the mainstream. It seemed like so much was possible. At Woodstock, Grace Slick had declared that “It’s a new dawn,” and the equality and opportunity that assurance heralded actually seemed within reach. Yet, there were unseen, menacing clouds forming just beneath the horizon.
Few suspected that forces of reaction quietly gathering strength would one day unite to destroy the progress towards a more just society that seemed to lie just ahead. Perlstein’s genius in Reaganland lies in his meticulous identification of each of these disparate forces, revealing their respective origin stories and relating how they came to maximize strength in a collective embrace. The Equal Rights Amendment, riding on a wave of massive bipartisan public support, was but three states away from ratification when a bizarre woman named Phyllis Schlafly seemingly crawled out of the woodwork to mobilize legions of conservative women to oppose it. Gay people were on their way to greater social acceptance via local ordinances which one by one went down to defeat after former beauty queen and orange juice hawker Anita Bryant mounted what turned into a nationwide campaign of resistance. The landmark Roe v. Wade case that guaranteed a woman’s right to choose sparked the birth of a passionate right-to-life movement that soon became the central creature of the emerging Christian evangelical “Moral Majority,” which found easy alliance with those condemning gays and women’s lib. Most critically—in a key component that was to have lasting implications, as Perlstein deftly underscores—the Christian right also pioneered a political doctrine of “co-belligerency” that encouraged groups otherwise not aligned to make common cause against shared “enemies.” Sure, Catholics, Mormons and Jews were destined to burn in a fiery hell one day, reasoned evangelical Protestants, but in the meantime they could be enlisted as partners in a crusade to combat abortion, homosexuality and other miscellaneous signposts of moral decay besetting the nation.
That all this moral outrage could turn into a formidable political dynamic seems to have been largely unanticipated. But, as Perlstein reminds us, maybe it should not have been so surprising: candidate Jimmy Carter, himself deeply religious and well ahead in the 1976 race for the White House, saw a precipitous fifteen-point drop in the polls after an interview in Playboy where he admitted that he sometimes lusted in his heart. Perhaps the sun wasn’t quite ready to come up for that new dawn after all.
Of course, the left did not help matters, often ideologically unyielding in its demand to have it all rather than settle for some, as well as blind to unintended consequences. Nothing was to alienate white members of the national coalition to advance civil rights for African Americans more than busing, a flawed shortcut that ignored the greater imperative for federal aid to fund and rebuild decaying inner-city schools, de facto segregated by income inequality. Efforts to advance what was seen as a far too radical federal universal job guarantee ended up energizing opposition that denied victory to other avenues of reform. And there’s much more. Perlstein recounts the success of Ralph Nader’s crusade for automobile safety, which exposed carmakers for deliberately skimping on relatively inexpensive design modifications that could have saved countless lives in order to turn out even greater profits. Auto manufacturers were finally brought to heel. Consumer advocacy became a thing, with widespread public support and frequent industry acquiescence. But even Nader—not unaware of consequences, unintended or otherwise—advised caution when a protégé pressed a campaign to ban TV ads for sugary cereals that targeted children, predicting with some prescience that “if you take on the advertisers you will end up with so many regulators with their bones bleached in the desert.” [p245] Captains of industry Perlstein terms “Boardroom Jacobins” were stirred to collective action by what was perceived as regulatory overreach, and big business soon joined hands to beat all such efforts back.
Meanwhile, subsequent to Nixon’s fall and Ford’s defeat to Carter in 1976, pundits—not for the last time—prematurely foretold the extinction of the Republican Party, leaving stalwart policy wonks on the right seemingly adrift, clinging to their opposition to the pending Salt II arms agreement and the Panama Canal Treaty, furiously wielding oars of obstruction but yet still lacking a reliable vessel to stem the tide. Bitterly opposed to the prevailing wisdom that counseled moderation to ensure not only relevance but survival, they chafed at accommodation with the Ford-Kissinger-Rockefeller wing of the party that preached détente abroad and compromise at home. They looked around for a new champion … and once again found Ronald Reagan!
The former Bedtime for Bonzo co-star and corporate shill had launched his political career railing against communists concealed in every cupboard, as well as shrewdly exploiting populist rage at long-haired antiwar demonstrators. As governor of California he directed an especially violent crackdown known as “Bloody Thursday” on non-violent protesters at UC Berkeley’s People’s Park that resulted in one death and hundreds of injuries after overzealous police fired tear gas and shotguns loaded with buckshot at the crowd. In a comment that eerily presaged Trump’s “very fine people on both sides” remark, Reagan declared that “Once the dogs of war have been unleashed, you must expect … that people … will make mistakes on both sides.” But a year later he was even less apologetic, proclaiming that “If it takes a bloodbath, let’s get it over with.” This was their candidate, who—remarkably one would think—had nearly snatched the nomination away from Ford in ’76, and then went on to cheer party unity while campaigning for Ford with even less enthusiasm than Bernie Sanders exhibited for Hillary Clinton in 2016. Many hold Reagan at least partially responsible for Ford’s loss in the general election.
But Reagan’s neglect of Ford left him neatly positioned as the front-runner for 1980. As conservatives dug in, others of the party faithful recoiled in horror, fearing a repeat of the drubbing at the polls they took in 1964 with Barry “extremism in defense of liberty is no vice” Goldwater at the top of the ticket. And Reagan did seem extreme, perhaps more so than Goldwater. The sounds of sabers rattling nearly drowned out his words every time he mentioned the U.S.S.R. And he said lots of truly crazy things, both publicly and privately, once even wondering aloud over dinner with columnist Jack Germond whether “Ford had staged fake assassination attempts to win sympathy for his renomination.” Germond later recalled that “He was always a man with a very loose hold on the real world around him.” [p617] Germond had a good point: Reagan once asserted that “Fascism was really the basis for the New Deal,” boosted the valuable recycling potential of nuclear waste, and insisted that “trees cause more pollution than automobiles do”—prompting some joker at a rally to decorate a tree with a sign that said “Chop me down before I kill again.”
But Reagan had a real talent with dog whistles, launching his campaign with a speech praising “states’ rights” at a county fair near Philadelphia, Mississippi, where three civil rights workers were murdered in 1964. He once boasted he “would have voted against the Civil Rights Act of 1964,” claimed “Jefferson Davis is a hero of mine,” and bemoaned the Voting Rights Act as “humiliating to the South.” A whiff of racism also clung to his disdain for Medicaid recipients as a “a faceless mass, waiting for handouts,” and his recycling ad nauseum of his dubious anecdote of a “Chicago welfare queen” with twelve social security cards who bilked the government out of $150,000. Unreconstructed whites ate this red meat up. Nixon’s “southern strategy” reached new heights under Reagan.
But a white southerner who was not a racist was actually the president of the United States. Despite the book’s title, the central protagonist of Reaganland is Jimmy Carter, a man who arrived at the Oval Office buoyed by public confidence rarely seen in the modern era—and then spent four years on a rollercoaster of support that plummeted far more often than it climbed. At one point his approval rating was a staggering 77% … at another, 28%—only four points above where Nixon’s stood when he resigned in disgrace. These days, as the nonagenarian Carter has established himself as the most impressive ex-president since John Quincy Adams, we tend to forget what a truly bad president he was. Not that he didn’t have good intentions, only that—like Woodrow Wilson six decades before him—he was unusually adept at using them to pave his way to hell. A technocrat with an arrogant certitude that he had all the answers, he arrived on the Beltway with little idea of how the world worked, a family in tow that seemed right out of central casting for a Beverly Hillbillies sequel. He often gravely lectured the public on what was really wrong with the country—and then seemed to lay blame upon Americans for outsize expectations. And he dithered, tacking this way and that, alienating both sides of the aisle in a feeble attempt to stand above the fray.
In fairness, he had a lot to deal with. Carter inherited a nation more socio-economically shaken than any since the 1930s. In 1969, the United States had proudly put a man on the moon. Only a few short years later, a country weaned on American exceptionalism saw factories shuttered, runaway inflation, surging crime, cities on the verge of bankruptcy, and long lines just to gas up your car at an ever-skyrocketing cost. And that was before a nuclear power plant melted down, Iranians took fifty-two Americans hostage, and Soviet tanks rolled into Afghanistan. All this was further complicated by a new wave of media hype that saw the birth of the “both-siderism” that gives equal weight to scandals legitimate or spurious—an unfortunate ingredient that remains baked into current reporting.
Perhaps the most impressive part of Reaganland is Perlstein’s superlative rendering of what America was like in the mid-70s. Stephen King’s horror is often so effective at least in part due to the fads, fast food, and pop music he uses as so many props in his novels. If that stuff is real, perhaps ghosts or killer cars could be real, as well. Likewise, Perlstein brings a gritty authenticity home by stepping beyond politics and policy to enrich the narrative with headlines of serial killers and plane crashes, of assassination and mass suicide, adroitly resurrecting the almost numbing sense of anxiety that informed the times. De Niro’s Taxi Driver rides again, and the reader winces through every page.
Carter certainly had his hands full, especially as the hostage crisis dragged on, but it hardly ranked up there with Truman’s Berlin Airlift or JFK’s Cuban missiles. There were indeed crises, but Carter seemed to manufacture even more—and to get in his own way most of the time. And his attempts to reassure consistently backfired, fueling even more national uncertainty. All this offered a perfect storm of opportunity for right-wing elements who discovered co-belligerency was not only a tactic but a way of life. Against all advice and all odds, Reagan—retaining his “very loose hold on the real world around him”—saw no contradiction in bringing his brand of conservatism to join forces with those maligning gays, opposing abortion, stonewalling the ERA, and boosting the Christian right. Corporate CEOs—Perlstein’s “Boardroom Jacobins”—already on the defensive, were more than ready to finance it. Carter, flailing, played right into their hands. Already the most right-of-center Democratic president of the twentieth century, he too shared that weird vision of the erosion of American morality. And Perlstein reminds us that the debacle of financial deregulation usually traced back to Reagan actually began on Carter’s watch, the seeds sown for the wage stagnation, growth of income inequality, and endless cycles of recession that have been de rigueur in American life ever since. Carter failed to make a good closing argument for why he should be re-elected, and the unthinkable occurred: Ronald Reagan became president of the United States. The result was that the middle-class dream that seemed so much in jeopardy under Carter was permanently crushed once Reagan’s regime of tax cuts, deregulation, and the supply-side approach George H.W. Bush rightly branded as “voodoo economics” became standard operating policy. Progressive reform sputtered and stalled.
The little engine that FDR had ignited to manifest a social and economic miracle for America crashed and burned forever on the vanguard of Reaganomics.
Some readers might be intimidated by the size of Reaganland, but it’s a long book because it tells a long story, and it contains lots of moving parts. Perlstein succeeds magnificently because he demonstrates how all those parts fit together, replete with the nuance and complexity critical to historical analysis. Is it perfect? Of course not. I’m a political junkie, but there were certain segments on policy and legislative wrangling that seemed interminable. And if Perlstein mentioned “Boardroom Jacobins” just one more time, I might have screamed. But these are quibbles. This is without doubt the author’s finest book, and I highly recommend it, as both an invaluable reference work and a cover-to-cover read.
In Hearts in Atlantis, Stephen King imagines the sixties as bookended by JFK’s 1963 assassination and John Lennon’s murder in 1980. Perlstein seems to follow that same school of thought, for the final page of Reaganland also wraps up with Lennon’s untimely death. In an afterword to his work of fiction, King muses: “Although it is difficult to believe, the sixties are not fictional; they actually happened.” If you are more partial to nonfiction and want the real story of how the sixties ended, of how Atlantis sank, you must read Reaganland.
[Note: this review goes to press just a few days before the most consequential presidential election in modern American history. This book and this review are reminders that elections do matter.]
Nearly two decades have passed since Charles Freeman published The Closing of the Western Mind: The Rise of Faith and the Fall of Reason, a brilliant if controversial examination of the intellectual totalitarianism of Christianity that dated to the dawn of its dominance of Rome and the successor states that followed the fragmentation of the empire in the West. Freeman argues persuasively that the early Christian church vehemently and often brutally rebuked the centuries-old classical tradition of philosophical enquiry and ultimately drove it to extinction with a singular intolerance of competing ideas crushed under the weight of a monolithic faith. Not only were pagan religions prohibited, but there would be virtually no provision for any dissent from official Christian doctrine, such that those who advanced even the most minor challenges to interpretation were branded heretics and sent into exile or put to death. That tragic state was to define medieval Europe for more than a millennium.
Now the renowned classical historian has returned with a follow-up epic, The Awakening: A History of the Western Mind AD 500-1700, recently published in the UK (and slated for U.S. release, possibly with a different title) which recounts the slow—some might brand it glacial—evolution of Western thought that restored legitimacy to independent examination and analysis, that eventually led to a celebration, albeit a cautious one, of reason over blind faith. In the process, Freeman reminds us that quality, engaging narrative history has not gone extinct, while demonstrating that it is possible to produce a work that is so well-written it is readable by a general audience while meeting the rigorous standards of scholarship demanded by academia. That this is no small achievement will be evident to anyone who—as I do—reads both popular and scholarly history and is struck by the stultifying prose that often typifies the academic. In contrast, here Freeman takes a skillful pen to reveal people, events and occasionally obscure concepts, much of which may be unfamiliar to those who are not well versed in the medieval period.
The fall of Rome remains a subject of debate for historians. While traditional notions of sudden collapse given to pillaging Vandals leaping over city walls and fora engulfed in flames have long been revised, competing visions of a more gradual transition that better reflect the scholarship sometimes distort the historiography to minimize both the fall and what was actually lost. And what was lost was indeed dramatic and incalculable. If, to take just one example, sanitation can be said to be a mark of civilization, the Roman aqueducts and complex network of sewers that fell into disuse and disrepair meant that fresh water was no longer reliable, and sewage that bred pestilence was to be the norm for fifteen centuries to follow. It was not until the late nineteenth century that sanitation in Europe even approached Roman standards. So, whatever the timeline—rapid or gradual—there was indeed a marked collapse. Causes are far more elusive. Gibbon’s largely discredited casting of Christianity as the villain that brought the empire down tends to raise hackles among those who suspect someone like Freeman of attempting to point those fingers once more. But Freeman has nothing to say about why Rome fell, only about what followed. The loss of the pursuit of reason was to be as devastating for the intellectual health of the post-Roman world in the West as sanitation was to prove for its physical health. And here Freeman does squarely take aim at the institutional Christian church as the proximate cause for the subsequent consequences for Western thought. This is well underscored in the bleak assessment that follows in one of the final chapters of The Closing of the Western Mind:
Christian thought that emerged in the early centuries often gave irrationality the status of a universal “truth” to the exclusion of those truths to be found through reason. So the uneducated was preferred to the educated and the miracle to the operation of natural laws … This reversal of traditional values became embedded in the Christian tradition … Intellectual self-confidence and curiosity, which lay at the heart of the Greek achievement, were recast as the dreaded sin of pride. Faith and obedience to the institutional authority of the church were more highly rated than the use of reasoned thought. The inevitable result was intellectual stagnation … [p322]
Awakening picks up where Closing leaves off as the author charts the “Reopening of the Western Mind” (this was the working title of his draft!), but the new work is marked by far greater optimism. Rather than dwell on what has been lost, Freeman puts focus not only upon the recovery of concepts long forgotten but upon how rediscovery eventually sparked new, original thought, as the spiritual and later increasingly secular world danced warily around one another—with a burning heretic all too often staked between them on the floor of Europe’s fraught intellectual ballroom. Because the timeline is so long—encompassing twelve centuries—the author sidesteps what could have been a dull chronological recounting of this slow progression, instead narrowing his lens upon select people, events, and ideas that collectively marked milestones along the way, organized into thematic chapters that broaden the scope. This approach thus transcends what might have been otherwise parochial to brilliantly convey the panoramic.
There are many superlative chapters in Awakening, including the very first one, entitled “The Saving of the Texts 500-750.” Freeman seems to delight in detecting the bits and pieces of the classical universe that managed to survive not only vigorous attempts by early Christians to erase pagan thought but the unintended ravages of deterioration that is every archivist’s nightmare. Ironically, the sacking of cities in ancient Mesopotamia begat conflagrations that baked inscribed clay tablets, preserving them for millennia. No such luck for the Mediterranean world, where papyrus scrolls, the favored medium for texts, fell to war, natural disasters, deliberate destruction, as well as to entropy—a familiar byproduct of the second law of thermodynamics—which was not kind in prevailing environmental conditions. We are happily still discovering papyri preserved by the dry conditions in parts of Egypt—the oldest dating back to 2500 BCE—but it seems that the European climate doomed papyrus to a scant two hundred years before it was no more.
Absent printing presses or digital scans, texts were preserved by painstakingly copying them by hand, typically onto vellum, a kind of parchment made from animal skins with a long shelf life, most frequently in monasteries by monks for whom literacy was deemed essential. But what to save? The two giants of ancient Greek philosophy, Plato and Aristotle, were preserved, but the latter far more grudgingly. Fledgling concepts of empiricism in Aristotle made the medieval mind uncomfortable. Plato, on the other hand, who pioneered notions of imaginary higher powers and perfect forms, could be (albeit somewhat awkwardly) adapted to the prevailing faith in the Trinity, and thus elements of Plato were syncretized into Christian orthodoxy. Of course, as we celebrate what was saved it is difficult not to likewise mourn what was lost to us forever. Fortunately, the Arab world put a much higher premium on the preservation of classical texts—an especially eclectic collection that included not only metaphysics but geography, medicine and mathematics. When centuries later—as Freeman highlights in Awakening—these works reached Europe, they were to be instrumental as tinder to the embers that were to spark first a revival and then a revolution in science and discovery.
My favorite chapter in Awakening is “Abelard and the Battle for Reason,” which chronicles the extraordinary story of the scholastic philosopher Peter Abelard (1079-1142)—who flirted with the secular and attempted to connect rationalism with theology—told against the flamboyant backdrop of Abelard’s tragic love affair with Héloïse, a tale that yet remains the stuff of popular culture. In a fit of pique, Héloïse’s uncle was to have Abelard castrated. The church attempted something similar, metaphorically, with Abelard’s teachings, which led to an order of excommunication (later lifted), but despite official condemnation Abelard left a dramatic mark on European thought that long lingered.
There is too much material in a volume this thick to cover competently in a review, but the reader will find much of it well worth the time. Of course, some will be drawn to certain chapters more than others. Art historians will no doubt be taken with the one entitled “The Flowering of the Florentine Renaissance,” which for me hearkened back to the best elements of Kenneth Clark’s Civilisation, showcasing not only the evolution of European architecture but the author’s own adulation for both the art and the engineering feat demonstrated by Brunelleschi’s dome, the extraordinary fifteenth century adornment that crowns the Florence Cathedral. Of course, Freeman does temper his praise for such achievements with juxtaposition to what once had been, as in a later chapter that recounts the process of relocating an ancient Egyptian obelisk weighing 331 tons that had been placed on the Vatican Hill by the Emperor Caligula, which was seen as remarkable at the time. In a footnote, Freeman reminds us that: “One might talk of sixteenth-century technological miracles, but the obelisk had been successfully erected by the Egyptians, taken down by the Romans, brought by sea to Rome and then re-erected there—all the while remaining intact!” [p492n]
If I were to find a fault with Awakening, it is that it does not, in my opinion, go far enough to emphasize the impact of the Columbian Experience on the reopening of the Western mind. There is a terrific chapter devoted to the topic, “Encountering the Peoples of the ‘Newe Founde Worldes,’” which explores how the discovery of the Americas and its exotic inhabitants compelled the European mind to examine other human societies whose existence had never before even been contemplated. While that is a valid avenue for analysis, it yet hardly takes into account just how earth-shattering 1492 turned out to be—arguably the most consequential milestone for human civilization (and the biosphere!) since the first cities appeared in Sumer—in a myriad of ways, not least the exchange of flora and fauna (and microbes) that accompanied it. But this significance was perhaps greatest for Europe, which had been a backwater, long eclipsed by China and the Arab Middle East. It was the Columbian Experience that reoriented the center of the world, so to speak, from the Mediterranean to the Atlantic, which was exploited to the fullest by the Europeans who prowled those seas and first bridged the continents. It is difficult to imagine the subsequent accomplishments—intellectual and otherwise—had Columbus not landed at San Salvador. But this remains just a quibble that does not detract from Freeman’s overall accomplishment.
Full disclosure: Charles Freeman and I began a long correspondence via email following my review of Closing. I was honored when he selected me as one of his readers for his drafts of Awakening, which he shared with me in 2018, but at the same time I approached this responsibility with some trepidation: given Freeman’s credentials and reputation, what if I found the work to be sub-standard? What if it was simply not a good book? How would I address that? As it was, these worries turned out to be misplaced. It is a magnificent book and I am grateful to have read much of it as a work in progress, and then again after publication. I did submit several pages of critical commentary to assist the author, to the best of my limited abilities, in honing a better final product, and to that end I am proud to see my name appear in the “Acknowledgments.”
I do not usually talk about formats in book reviews, since the content is typically neither enhanced nor diminished by its presentation in either a leather-bound tome or a mass-market paperback or the digital ink of an e-book, but as a bibliophile I cannot help but offer high praise to this beautiful, illustrated edition of Awakening published by Head of Zeus, even accented by a ribbon marker. It has been some time since I have come across a volume this attractive without paying a premium for special editions from Folio Society or Easton Press, and in this case the exquisite art that supplements the text transcends the ornamental to enrich the narrative.
Interest in the medieval world has perhaps waned over time. But that is, of course, a mistake. How we got from point A to point B is an important story, even if it has never been told before as well as Freeman has told it in Awakening. And it is not an easy story to tell. As the author acknowledges in a concluding chapter: “Bringing together the many different elements that led to the ‘awakening of the western mind’ is a challenge. It is important to stress just how bereft Europe was, economically and culturally, after the fall of the Roman empire compared to what it had been before.” [p735]
Those of us given to dystopian fiction, concerned with the fragility of republics and civilization, and wondering aloud in the midst of a global pandemic and the rise of authoritarianism what our descendants might recall of us if it all fell to collapse tomorrow cannot help but be intrigued by how our ancestors coped—for better or for worse—after Rome was no more. If you want to learn more about that, there might be no better covers to crack than Freeman’s The Awakening. I highly recommend it.
NOTE: My review of Freeman’s earlier work appears here:
“When people ask me if I went to film school, I tell them, ‘No, I went to films,’” Quentin Tarantino famously quipped. While I’m no iconic director, I too “went to films,” in a manner of speaking. I was raised by my grandmother in the 1960s—with a little help from a 19” console TV in the living room and seven channels delivered via rooftop antenna. When cartoons, soaps, or prime time westerns and sitcoms like Bonanza and Bewitched weren’t broadcasting, all the remaining airtime was filled with movies. All kinds of movies: drama, screwball comedies, war movies, gangster movies, horror movies, sci-fi, musicals, love stories, murder mysteries—you name the genre, it ran. And ran. And ran. For untold hours and days and weeks and years.
Grandma—rest in peace—loved movies. Just loved them. All kinds of movies. But she didn’t have much of a discerning eye: for her, The Treasure of the Sierra Madre was no better or worse than Bedtime for Bonzo. At first, I didn’t know any better either, and whether I was four or fourteen I watched whatever was on, whenever she was watching. But I took a keen interest. The immersion paid dividends. My tastes evolved. One day I began calling them films instead of movies, and even turned into something of a “film geek,” arguing against the odds that Casablanca is a better picture than Citizen Kane, promoting Kubrick’s Paths of Glory over 2001, and shamelessly confessing to screening Tarantino’s Kill Bill I and II back-to-back more than a dozen times. In other words, I take films pretty seriously. So, when I noticed that Seduction: Sex, Lies, and Stardom in Howard Hughes’s Hollywood was up for grabs in an early reviewer program, I jumped at the opportunity. I was not to be disappointed.
In an extremely well-written and engaging narrative, film critic and journalist Karina Longworth has managed to turn out a remarkable history of Old Hollywood, in the guise of a kind of biography of Howard Hughes. In films, a “MacGuffin” is something insignificant or irrelevant in itself that serves as a device to trigger the plot. Examples include the “Letters of Transit” in Casablanca, the statuette in The Maltese Falcon, and the briefcase in Tarantino’s Pulp Fiction. Howard Hughes himself is the MacGuffin of sorts in Seduction, which is far less about him than his female victims and the peculiar nature of the studio system that enabled predators like Hughes and others who dominated the motion picture industry.
Howard Hughes was once one of the most famous men in America, known for his wealth and genius, a larger-than-life legend noted both for his exploits as aviator and flamboyance as a film producer given to extravagance and star-making. But by the time I was growing up, all that was in the distant past, and Hughes was little more than a specter in supermarket tabloids, an eccentric billionaire turned recluse. It was later said that he spent most days alone, sitting naked in a hotel room watching movies. Long unseen by the public, at his death he was nearly unrecognizable, skeletal and covered in bedsores. Director Martin Scorsese resurrected him for the big screen in his epic biopic “The Aviator,” headlined by Leonardo DiCaprio and a star-studded cast, which showcased Hughes as a heroic and brilliant iconoclast who in turn took on would-be censors, the Hollywood studio system, the aviation industry and anyone who might stand in the way of his quest for glory—all while courting a series of famed beauties. Just barely in frame was the mental instability, the emerging Obsessive-Compulsive Disorder that later brought him down.
Longworth finds Hughes a much smaller and more despicable man, an amoral narcissist and manipulator who was seemingly incapable of empathy for other human beings. (Yes, there is indeed a palpable resemblance to a certain president!) While Hughes carefully crafted an image of a titan who dominated the twin arenas of flight and film, in Longworth’s portrayal he seems to crash more planes than he lands, and churns out more bombs than blockbusters. In the public eye, he was a great celebrity, but off-screen he comes off as an unctuous villain, a charlatan whose real skill set was self-promotion empowered by vast sums of money and a network of hangers-on. The author gives him his due by denying him top billing as the star of the show, rather giving scrutiny to those in his orbit, the females in supporting roles whom he in turn dominated, exploited and discarded. You can almost hear refrains of Carly Simon’s You’re So Vain interposed in the narrative, taunting the ghost of Hughes with the chorus: “You probably think this song is about you”—which by the way would make a great soundtrack if there’s ever a screen adaptation of the book.
If not Hughes, the real main character is Old Hollywood itself, and with a skillful pen, Longworth turns out a solid history—a decidedly feminist history—of the place and time that is nothing less than superlative. The author recreates for us the early days before the tinsel, when a sleepy little “dry” town no one had ever heard of almost overnight became the celluloid capital of the country. Pretty girls from all over America would flock there on a pilgrimage to fame; most disappointed, many despairing, more than a few dead. Nearly all were preyed upon by a legion of the contemptible, used and abused with a thin tissue of lies and promises that anchored them not only to the geography but to the predominantly male movers and shakers who ran the studio system that dominated everything else. This is a feminist history precisely because Longworth focuses on these women—more specifically ten women involved with Hughes—and through them brilliantly captures Hollywood’s golden age as manifested in both the glamorous and the tawdry.
Howard Hughes was not the only predator in Tinseltown, of course, but arguably its most depraved. If Hollywood power-brokers overpromised fame to hosts of young women just to bed them, for Hughes sex was not even always the principal motivation. It went way beyond that, often to twisted ends perhaps unclear to even Hughes himself. He indeed took many lovers, but those he didn’t sleep with were not exempt from his peculiar brand of exploitation. What really got Howard Hughes off was exerting power over women, controlling them, owning them. He virtually enslaved some of these women, stripping them of their individual freedom of will for months or even years with vague hints at eventual stardom, abetted by assorted handlers appointed to spy on them and report back to him. Even the era of “Me Too” lacks the appropriate vocabulary to describe his level of “creepy!”
One of the women he apparently did not take to bed was Jane Russell. Hughes cast the beautiful, voluptuous nineteen-year-old in The Outlaw, a film that took forever to produce and release largely due to his fetishistic obsession with Russell’s breasts—and the way these spilled out of her dress in a promotional poster that provoked the ire of censors. Longworth’s treatment of the way Russell unflappably endured her long association with Hughes—despite his relentless domination over her life and career—is just one of the many delightful highlights in Seduction.
The Outlaw, incidentally, was one of the movies I recall watching with Grandma back in the day. Her notions of Hollywood had everything to do with the glamorous and the glorious, of handsome leading men and lovely leading ladies up on the silver screen. I can’t help wondering what she might think if she learned how those ladies were tormented by Hughes and other moguls of the time. I wish I could tell her about it, about this book. Alas, that’s not possible, but I can urge anyone interested in this era to read Seduction. If authors of film history could win an Academy Award, Longworth would have an Oscar on her mantle to mark this outstanding achievement.
Imagine there’s a virus sweeping across the land claiming untold victims, the agent of the disease poorly understood, the population in terror of an unseen enemy that rages mercilessly through entire communities, leaving in its wake an exponential toll of victims. As this review goes to press amid an alarming spike in new Coronavirus cases, Americans don’t need to stretch their collective imagination very far to envisage that at all. But now look back nearly two and a half centuries and consider an even worse scenario: a war is on for the existential survival of our fledgling nation, a struggle compromised by mass attrition in the Continental Army due to another kind of virus, and the epidemic it spawns is characterized by symptoms and outcomes that are nothing less than nightmarish by any standard, then or now. For the culprit then was smallpox, one of the most dreaded diseases in human history.
This nearly forgotten chapter in America’s past left a deep impact on the course of the Revolution that has been long overshadowed by outsize events in the War of Independence and the birth of the Republic. This neglect has been masterfully redressed by Pox Americana: The Great Smallpox Epidemic of 1775-82, a brilliantly conceived and extremely well-written account by Pulitzer Prize-winning historian Elizabeth A. Fenn. One of the advantages of having a fine personal library in your home is the delight of going to a random shelf and plucking off an edition that almost perfectly suits your current interests, a volume that has been sitting there unread for years or even decades, just waiting for your fingertips to locate it. Such was the case with my signed first edition of Pox Americana, a used bookstore find that turned out to be a serendipitous companion to my self-quarantine for Coronavirus, the great pandemic of our times.
As horrific as COVID-19 has been for us—as of this morning we are up to one hundred thirty-four thousand deaths and three million cases in the United States, a significant portion of the more than half million dead and nearly twelve million cases worldwide—smallpox, known as “Variola,” was far, far worse. In fact, almost unimaginably worse. Not only was it more than three times as contagious as Coronavirus, but rather than a mortality rate that ranges in the low single digits with COVID (the verdict’s not yet in), variola on average claimed an astonishing thirty percent of its victims, who often suffered horribly in the course of the illness and into their death throes, while survivors were frequently left disfigured by extensive scarring, and many were left blind. Smallpox has a long history that dates back to at least the third century BCE, as evidenced in Egyptian mummies. There were reportedly still fifteen million cases a year as late as 1967. In between it claimed untold hundreds of millions of lives over the years—some three hundred million in the twentieth century alone—until its ultimate eradication in 1980. There is perhaps some tragic irony that we are beset by Coronavirus on the fortieth anniversary of that milestone …
I typically eschew long excerpts for reviews, but Variola was so horrifying and Fenn writes so well that I believe it would be a disservice to do other than let her describe it here:
Headache, backache, fever, vomiting, and general malaise all are among the initial signs of infection. The headache can be splitting; the backache, excruciating … The fever usually abates after the first day or two … But … relief is fleeting. By the fourth day … the fever creeps upward again, and the first smallpox sores appear in the mouth, throat, and nasal passages … The rash now moves quickly. Over a twenty-four-hour period, it extends itself from the mucous membranes to the surface of the skin. On some, it turns inward, hemorrhaging subcutaneously. These victims die early, bleeding from the gums, eyes, nose, and other orifices. In most cases, however, the rash turns outward, covering the victim in raised pustules that concentrate in precisely the places where they will cause the most physical pain and psychological anguish: The soles of the feet, the palms of the hands, the face, forearms, neck, and back are focal points of the eruption … If the pustules remain discrete—if they do not run together—the prognosis is good. But if they converge upon one another in a single oozing mass, it is not. This is called confluent smallpox … For some, as the rash progresses in the mouth and throat, drinking becomes difficult, and dehydration follows. Often, an odor peculiar to smallpox develops … Patients at this stage of the disease can be hard to recognize. If damage to the eyes occurs, it begins now … Scabs start to form after two weeks of suffering … In confluent or semiconfluent cases of the disease, scabbing can encrust most of the body, making any movement excruciating … [One observation of such afflicted Native Americans noted that] “They lye on their hard matts, the poxe breaking and mattering, and runing one into another, their skin cleaving … to the matts they lye on; when they turne them, a whole side will flea of[f] at once.” … Death, when it occurs, usually comes after ten to sixteen days of suffering.
Thereafter, the risk drops significantly … and unsightly scars replace scabs and pustules … the usual course of the disease—from initial infection to the loss of all scabs—runs a little over a month. Patients remain contagious until the last scab falls off … Most survivors bear … numerous scars, and some are blinded. But despite the consequences, those who live through the illness can count themselves fortunate. Immune for life, they need never fear smallpox again. [p16-20]
Smallpox was an unfortunate component of the siege of Boston by the British in 1775, but—as Fenn explains—it was far worse for Bostonians than the Redcoats besieging them. This was because smallpox was a fact of life in eighteenth century Europe—a series of outbreaks left about four hundred thousand people dead every year, and about a third of the survivors were blinded. As awful as that may seem, it meant that the vast majority of British soldiers had been exposed to the virus and were thus immune. Not so for the colonists, who not only had experienced fewer outbreaks but frequently lived in more rural settings at a greater distance from one another, which slowed exposure, leaving a far smaller number who could count on immunity to spare them. Nothing fuels the spread of a pestilence better than a crowded bottlenecked urban environment—such as Boston in 1775—except perhaps great encampments of susceptible men from disparate geographies suddenly crammed together, as was characteristic of the nascent Continental Army. To make matters worse, there was some credible evidence that the Brits at times engaged in a kind of embryonic biological warfare by deliberately sending known infected individuals back to the Colonial lines. All of this conspired to form a perfect storm for disaster.
Our late eighteenth-century forebears had a couple of things going for them that we lack today. First of all, while it was true that like COVID there was no cure for smallpox, there were ways to mitigate the spread and the severity that were far more effective than our masks and social distancing—or misguided calls to ingest hydroxychloroquine, for that matter. Instead, their otherwise primitive medical toolkit did contain inoculation, an ancient technique that had only become known to the West in relatively recent times. Now, it is important to emphasize that inoculation—also known as “variolation”—is not comparable to vaccination, which did not come along until closer to the end of the century. Not for the squeamish, variolation instead involved deliberately inserting the live smallpox virus from scabs or pustules into superficial incisions in a healthy subject’s arm. The result was an actual case of smallpox, but generally a much milder one than if contracted from another infected person. Recovered, the survivor would walk away with permanent immunity. The downside was that some did not survive, and all remained contagious for the full course of the disease. This meant that the inoculated also had to be quarantined, no easy task in an army camp, for example.
The other thing they had going for them back then was a competent leader who took epidemics and how to contain them quite seriously—none other than George Washington himself. Washington was not president at the time, of course, but he was the commander of the Continental Army, and perhaps the most prominent man in the rebellious colonies. Like many of history’s notable figures, Washington was not only gifted with qualities such as courage, intelligence, and good sense, but also luck. In this case, Washington’s good fortune was to contract—and survive—smallpox as a young man, granting him immunity. But it was likewise the good fortune of the emerging new nation to have Washington in command. Initially reluctant to advance inoculation—not because he doubted the science but rather because he feared it might accelerate the spread of smallpox—he soon concluded that only a systematic program of variolation could save the army, and the Revolution! Washington’s other gifts—for organization and discipline—set in motion mass inoculations and enforced isolation of those affected. Absent this effort, it is likely that the War of Independence—ever a long shot—might not have succeeded.
Fenn argues convincingly that the course of the war was significantly affected by Variola in several arenas, most prominently in its savaging of Continental forces during the disastrous invasion of Quebec, which culminated in Benedict Arnold’s battered forces being driven back to Fort Ticonderoga. And in the southern theater, enslaved blacks flocked to British lines, drawn by enticements to freedom, only to fall victim en masse to smallpox, and then tragically find themselves largely abandoned to suffering and death as the Brits retreated. There is a good deal more of this stuff, and many students of the American Revolution will find themselves wondering—as I did—why this fascinating perspective is so conspicuously absent in most treatments of this era.
Remarkably, despite the bounty of material, the Revolution occupies only the first third of the book, leaving far more to explore as the virus travels to the west and southwest, and then on to Mexico, as well as to the Pacific northwest. As Fenn reminds us again and again, smallpox comes from where smallpox has been, and she painstakingly tracks hypothetical routes of the epidemic. Tragic bystanders in its path were frequently Native Americans, who typically manifested more severe symptoms and experienced greater rates of mortality. It has been estimated that perhaps ninety percent of the pre-contact indigenous inhabitants of the Americas were exterminated by exposure to European diseases for which they had no immunity, and smallpox was one of the great vehicles of that annihilation. Variola proved especially lethal as a “virgin soil” epidemic, and Native Americans not unexpectedly suffered far greater casualties than other populations, resulting in death on such a wide scale that entire tribes simply disappeared from history.
No review can properly capture all the ground that Fenn covers in this outstanding book, nor praise her achievement adequately. It is especially rare when a historian combines a highly original thesis with exhaustive research, keen analysis, and exceptional talent with a pen to deliver a magnificent work such as Pox Americana. And perhaps never has there been a moment when this book could find a greater relevance to readers than to Americans in 2020.
If you have studied evolution inside or outside of the classroom, you have no doubt encountered the figure of Jean-Baptiste Lamarck and the discredited notion of the inheritance of acquired characteristics attributed to him, known as “Lamarckism.” This has most famously been represented by the example of giraffes straining to reach fruit on ever-higher branches, resulting in the development of longer necks over succeeding generations. Never mind that Lamarck did not originate this concept—and that, while he echoed it, it remained only a tiny part of the greater body of his work—he was nevertheless doomed to have it cling to his legacy ever since. This is most regrettable, because Lamarck—who died three decades before Charles Darwin shook the spiritual and scientific world with his 1859 publication of On the Origin of Species—was a true pioneer of evolutionary biology who recognized that there were forces at work putting organisms on an ineluctable road to greater complexity. It was Darwin who identified this force as “natural selection,” yet Lamarck was not only denied credit for his contributions to the field but otherwise maligned and ridiculed.
But even if he did not invent the idea, what if Lamarck was right all along to believe, at least in part, that acquired characteristics can be passed along transgenerationally after all—perhaps not on the kind of macro scale manifested by giraffe necks, but in other more subtle yet no less critical components of the evolutionary process? That is the subject of Lamarck’s Revenge: How Epigenetics Is Revolutionizing Our Understanding of Evolution’s Past and Present, by the noted paleontologist Peter Ward. The book’s cover naturally showcases a series of illustrated giraffes with ever-lengthening necks! Ward is an enthusiast for the relatively new, still developing—and controversial—science of epigenetics, which advances the hypothesis that certain circumstances can trigger markers that are transmitted from parent to child, changing gene expression without altering the primary structure of the DNA itself. Imagine a Holocaust survivor, for instance: can the trauma of Auschwitz cut so deep that the devastating psychological impact of that horrific experience is passed on to his children, and to his children’s children?
This is heady stuff, of course. We should pause for the uninitiated and explain the nature of Darwinian natural selection—the key mechanism of the theory of evolution—in its simplest terms. The key to survival for all organisms is adaptation. Random mutations occur over time, and if one of those mutations leaves an organism better adapted to its environment, that organism is more likely to survive and reproduce, and thus to pass along its genes to its offspring. Over time, through “gradualism,” this can lead to the rise of new species. Complexity breeds complexity, and that is the road traveled by all organisms, one that has led from the simplest unicellular prokaryotes—such as the 3.5-billion-year-old photosynthetic cyanobacteria—to modern Homo sapiens sapiens. This is, of course, a very, very long game; so long, in fact, that Darwin—who lived in a time when the age of the earth was vastly underestimated—fretted that there was not enough time for evolution as he envisioned it to occur. Advances in geology later determined that the earth is about 4.5 billion years old, which solved that problem but still left other aspects of evolution unexplained by gradualism alone. The brilliant Stephen Jay Gould (along with Niles Eldredge) proposed in 1972 that, rather than through gradualism, most evolution more likely occurred through what they called “punctuated equilibrium,” often triggered by a catastrophic change in the environment. Debate has raged ever since, but it may well be that evolution is guided by forces of both gradualism and punctuated equilibrium. But could there still be other forces at work?
Transgenerational epigenetic inheritance represents another such force, and it sits at the cutting edge of research in evolutionary biology today. But has the hypothesis of epigenetics been demonstrated to be truly plausible? The answer is—maybe. There do seem to be studies that support transgenerational epigenetic inheritance, most famously—as detailed in Lamarck’s Revenge—the so-called “Dutch Hunger Winter Syndrome,” in which children born during the famine were smaller than those born before it and carried a later, greater risk of glucose intolerance, conditions then passed down to successive generations. On the other hand, the evidence for epigenetics has not been as firmly established as some proponents, such as Ward, might have us believe.
Lamarck’s Revenge is a very well-written and accessible scientific account of epigenetics for a popular audience, and while I have read enough evolutionary science to follow Ward’s arguments with some competence, I remain a layperson who can hardly endorse or refute his claims. The body of the narrative consists of Ward’s repeated examples of what he identifies as holes in traditional evolutionary biology that can only be explained by epigenetics. Is he right? I simply lack the expertise to say. I should note that I received this book as part of an “Early Reviewers” program, so I felt a responsibility to read it cover-to-cover, although my interest flagged as the book moved beyond my depth in the realm of evolutionary biology.
I should note that this is all breaking news, and as we appraise it we should be mindful of how those on the fringes of evangelicalism, categorically opposed to the science of human evolution, will cling to any debate over mechanisms in natural selection to proclaim it all a sham sponsored by Satan—who has littered the earth with fossils to deceive us—to challenge the truth of the “Garden of Eden” related in the Book of Genesis. Once dubbed “Creationists,” they have since rebranded themselves in association with the pseudoscience of so-called “Intelligent Design,” which somehow remains part of the curriculum at select accredited universities. Science is self-correcting. These folks are not, so don’t ever let yourself be distracted by their fictional supernatural narrative. Evolution—whether through gradualism and/or punctuated equilibrium and/or epigenetics—remains central to both modern biology and modern medicine, and that is not the least bit controversial among scientific professionals. But if you want to find out more about the implications of epigenetics for human evolution, then I recommend that you pick up Lamarck’s Revenge and challenge yourself to learn more!
Note: While you are at it, if you want to learn more about 3.5-billion-year-old photosynthetic cyanobacteria, I highly recommend this:
Here was buried Thomas Jefferson, Author of the Declaration of American Independence, of the Statute of Virginia for religious freedom & Father of the University of Virginia.
Thomas Jefferson wrote those very words and sketched out the obelisk they would be carved upon. For those who have studied him, that he not only composed his own epitaph but designed his own grave marker was—as we would say in contemporary parlance—just “so Jefferson.” His long life was marked by a catalog of achievements; these were intended to represent his proudest accomplishments. Much remarked upon is the conspicuous absence of his unhappy tenure as third President of the United States. Less noted is the omission of his time as Governor of Virginia during the Revolution, marred by his humiliating flight from Monticello just minutes ahead of British cavalry. Of the three that did make the final cut, his role as author of the Declaration has been much examined. The Virginia statute—seen as the critical antecedent to First Amendment guarantees of religious liberty—gets less press, but only because it is subsumed in a wider discussion of the Bill of Rights. But who really talks about Jefferson’s role as founder of the University of Virginia?
That is the ostensible focus of Thomas Jefferson’s Education, by Alan Taylor, perhaps the foremost living historian of the early Republic. But in this extremely well-written and insightful analysis, Taylor casts a much wider net that ensnares a tangle of competing themes that not only traces the sometimes-fumbling transition of Virginia from colony to state, but speaks to underlying vulnerabilities in economic and political philosophy that were to extend well beyond its borders to the southern portion of the new nation. Some of these elements were to have consequences that echoed down to the Civil War; indeed, still echo to the present day.
Students of the American Civil War are often struck by the paradox of Virginia. How was it possible that this colony—so central to the Revolution and the founding of the Republic, the most populous and prominent, a place that boasted notable thinkers like Jefferson, Madison and Marshall, that indeed was home to four of the first five presidents of the new United States—could find itself on the eve of secession such a regressive backwater, soon doomed to serve as the capital of the Confederacy? It turns out that the sweet waters of the Commonwealth were increasingly poisoned by the institution of human chattel slavery, once decried by its greatest intellects, then declared indispensable, finally deemed righteous. This tragedy has been well-documented in Susan Dunn’s superlative Dominion of Memories: Jefferson, Madison & the Decline of Virginia, as well as Alan Taylor’s own Pulitzer Prize-winning work, The Internal Enemy: Slavery and the War in Virginia 1772-1832. What came to be euphemistically termed the “peculiar institution” polluted everything in its orbit, often invisibly except to the trained eye of the historian. This included, of course, higher education.
If the raison d’être of the Old Dominion was to protect and promote the interests of the wealthy planter elite that sat atop the pyramid of a slave society, then how important was it, really, for the scions of Virginia gentlemen to be educated beyond the rudimentary levels required to manage a plantation and move in polite society? And after all, wasn’t the “honor” of the up-and-coming young “masters” of far greater consequence than the aptitude to discourse in matters of rhetoric, logic, or ethics? In Thomas Jefferson’s Education, Taylor takes us back to the nearly forgotten era of a colonial Virginia when the capital was located in “Tidewater” Williamsburg and rowdy students—wealthy, spoiled sons of the planter aristocracy with an inflated sense of honor—clashed with professors at the prestigious College of William & Mary who dared attempt to impose discipline upon their bad behavior. A few short years later, Williamsburg was in shambles, a near ghost town badly mauled by the British during the Revolution, the capital relocated north to “Piedmont” Richmond, William & Mary in steep decline. Thomas Jefferson’s determination over more than two decades to replace it with a secular institution devoted to the liberal arts that welcomed all white men, regardless of economic status, is the subject of this book. How he realized his dream with the foundation of the University of Virginia in the very sunset of his life, and how spectacularly that institution failed to turn out as he envisioned it, is the wickedly ironic element in the title of Thomas Jefferson’s Education.
The author is at his best when he reveals the unintended consequences of history. In his landmark study, American Revolutions: A Continental History, 1750-1804, Taylor underscores how American Independence—rightly heralded elsewhere as the dawn of representative democracy for the modern West—was at the same time to prove catastrophic for Native Americans and African Americans, whose fate would likely have been far more favorable had the colonies remained wedded to a British Crown that drew a line for westward expansion at the Appalachians, and later came to abolish slavery throughout the empire. Likewise, there is the example of how the efforts of Jefferson and Madison—lauded for shaking off the vestiges of feudalism for the new nation by putting an end to institutions of primogeniture and entail that had formerly kept estates intact—expanded the rights of white Virginians while dooming countless numbers of the enslaved to be sold to distant geographies and forever separated from their families.
In Thomas Jefferson’s Education, the disestablishment of religion is the focal point for another unintended consequence. For Jefferson, an established church was anathema, and stripping the Anglican Church of its preferred status was central to his “Statute of Virginia for Religious Freedom,” later enshrined in the First Amendment. But it turns out that religion and education were intertwined in colonial Virginia’s most prominent institution of higher learning, Williamsburg’s College of William & Mary, funded by the House of Burgesses, where professors were typically ordained Anglican clergymen. Moreover, tracts of land known as “glebes,” formerly distributed by the colonial government for Anglican (later Episcopal) church rectors to farm or rent, came under assault after the Revolution by evangelical churches allied with secular forces, in a movement that eventually resulted in confiscation. This put many local parishes—once critical sponsors of both education and poor relief—into a death spiral that begat still more unintended consequences, some of which resonate in the politics and culture of the American South to the present day. As Taylor notes:
The move against church establishment decisively shifted public finance for Virginia. Prior to the revolution, the parish tax had been the greatest single tax levied on Virginians; its elimination cut the local tax burden by two thirds. Poor relief suffered as the new County overseers spent less per capita than had the old vestries. After 1790, per capita taxes, paid by free men in Virginia, were only a third of those in Massachusetts. Compared to northern states, Virginia favored individual autonomy over community obligation. Jefferson had hoped that Virginians would reinvest their tax savings from disestablishment by funding the public system of education for white children. Instead county elites decided to keep the money in their pockets and pose as champions of individual liberty. [p57-58]
For Jefferson, a creature of the Enlightenment, the sins of medievalism inherent to institutionalized religion were glaringly apparent, yet he was blinded to the positive contributions it could provide for the community. Jefferson also frequently perceived his own good intentions in the eyes of others who simply did not share them because they were either selfish or indifferent. Jefferson seemed to genuinely believe that an emphasis on individual liberty would in itself foster the public good, when in reality—then and now—many take such liberty as the license to simply advance their own interests. For all his brilliance, Jefferson was too often naïve when it came to the character of his countrymen.
Once near-universally revered, the legacy of Thomas Jefferson often triggers ambivalence in a modern audience and poses a singular challenge for historical analysis. A central Founder, Jefferson made the bold claim in the Declaration “that all men are created equal,” a claim that defined both the struggle with Britain and the notion of “liberty” that not only came to characterize the Republic that eventually emerged, but echoed with deafening resonance through the French Revolution—and far beyond, to legions of the oppressed yearning for the universal equality that Jefferson had asserted was their due. At the same time, over the course of his lifetime Jefferson owned hundreds of human beings as chattel property. One of the enslaved almost certainly served as his concubine and bore him several offspring who were also enslaved; she was, moreover, almost certainly the half-sister of Jefferson’s late wife.
The once popular view that imagined that Jefferson did not intend to include African Americans in his definition of “all men” has been clearly refuted by historians. And Jefferson, like many of his elite peers of the Founding generation—Madison, Monroe, and Henry—decried the immorality of slavery as an institution while consenting to its persistence, to their own profit. Most came to find grounds to justify it, but not Jefferson: the younger Jefferson cautiously advocated for abolition, while the older Jefferson made excuses for why it could not be achieved in his lifetime—made manifest in his much-quoted “wolf by the ear” remark—but he never stopped believing it an existential wrong. As Joseph Ellis underscored in his superb study, American Sphinx, Jefferson frequently held more than one competing and contradictory view in his head simultaneously and was somehow immune to the cognitive dissonance such paradox might provoke in others.
It is what makes Jefferson such a fascinating study, not only because he was such a consequential figure for his time, but because the Republic then and now remains a creature of habitually irreconcilable contradictions remarkably emblematic of this man, one of its creators, who has carved out a symbolism that varies considerably from one audience to another. Jefferson, more than any of the other Founders, was responsible for the enduring national schizophrenia that pits federalism against localism, a central economic engine against entrepreneurialism, and the well-being of a community against personal liberties that would let you do as you please. Other elements have been, if not resolved, forced to the background, such as the industrial vs. the agricultural, and the military vs. the militia. Of course, slavery has been abolished, civil rights tentatively obtained, but the shadow of inequality stubbornly lingers, forced once more to the forefront by the murder of George Floyd; I myself participated in a “Black Lives Matter” protest on the day before this review was completed.
Perhaps much overlooked in the discussion but no less essential is the role of education in a democratic republic. Here too, Jefferson had much to offer and much to pass down to us, even if most of us have forgotten that it was his soft-spoken voice that pronounced it indispensable for the proper governance of both the state of Virginia and the new nation. That his ambition extended only to white, male universal education that excluded blacks and women naturally strikes us as shortsighted, even repugnant, but should not erase the fact that even this was a radical notion in its time. Rather than disparage Jefferson, who died nearly two centuries ago, we should perhaps condemn the inequality in education that persists in America today, where a tradition of community schools funded by property taxes meant that my experience growing up in a white, middle class suburb in Fairfield, CT translated into an educational experience vastly superior to that of the people of color who attended the ancient crumbling edifices in the decaying urban environment of Bridgeport less than three miles from my home. How can we talk about “Black Lives Matter” without talking about that?
The granite obelisk that marked Jefferson’s final resting place was chipped away at by souvenir hunters until it was relocated in order to preserve it. A joint resolution of Congress funded the replacement, erected in 1883, that visitors now encounter at Monticello. The original obelisk now incongruously sits in a quadrangle at the University of Missouri, perhaps as far removed from Jefferson’s grave as today’s diverse, co-ed UVA at Charlottesville is from both the university he founded and the one he envisioned. We have to wonder whether Jefferson would be more surprised to learn that African Americans are enrolled at UVA—or that in 2020 they comprise less than seven percent of the undergraduate population. And what would he make of the white supremacists who rallied at Charlottesville in 2017, and of those who stood against them? I suspect a resurrected Jefferson would be no less enigmatic than the one who walked the earth so long ago.
Alan Taylor has written a number of outstanding works—I’ve read five of them—and he has twice won the Pulitzer Prize for History. He is also, incidentally, the Thomas Jefferson Memorial Foundation Professor of History at the University of Virginia, so Thomas Jefferson’s Education is not only an exceptional contribution to the historiography but no doubt a project dear to his heart. While I continue to admire Jefferson even as I acknowledge his many flaws, I cannot help wondering how Taylor—who has so carefully scrutinized him—personally feels about Thomas Jefferson. I recall that in the afterword to his magnificent historical novel, Burr, Gore Vidal admits: “All in all, I think rather more highly of Jefferson than Burr does …” If someone puts Alan Taylor on the spot, I suppose that could be as good an answer as any …
Note: I have reviewed other works by Alan Taylor here:
Nolite te bastardes carborundorum could very well be the Latin phrase most familiar to a majority of Americans. Roughly translated as “Don’t let the bastards grind you down,” it has been emblazoned on tee shirts and coffee mugs, trotted out as bumper sticker and email signature, and—most prominently—has become an iconic feminist rallying cry. That this famous slogan is not really Latin, or any language at all, but instead a kind of schoolkid’s “mock Latin,” speaks to the colossal cultural impact of the novel in which it first appeared in 1985—The Handmaid’s Tale, by Margaret Atwood—as well as of the media it spawned, including the 1990 film featuring Natasha Richardson and the acclaimed series still streaming on Hulu. Consult any random critic’s list of the finest examples of the literary sub-genre of dystopian novels, and you will likely find The Handmaid’s Tale in the top five, along with such other classic masterpieces as Orwell’s 1984, Huxley’s Brave New World, and Bradbury’s Fahrenheit 451—no small achievement for Atwood.
For anyone who has not been locked in a box for decades, The Handmaid’s Tale relates the chilling story of the not-too-distant-future nation of “Gilead,” a remnant of a fractured United States that has become a totalitarian theonomy that demands absolute obedience to divine law, especially the harsh strictures of the Old Testament. A crisis in fertility has led to elite couples relying on semi-enslaved “handmaids” who serve as surrogates to be impregnated and carry babies to term for them, which includes a bizarre ritual where the handmaid lies in the embrace of the barren wife while being penetrated by the “Commander.” The protagonist is known as “Offred”—or “Of Fred,” the name of this Commander—but once upon a time, before the overthrow of the U.S., she was an independent woman, a wife, a mother. It is Offred who one day happens upon Nolite te bastardes carborundorum scratched upon the wooden floor of her closet, presumably by the anonymous handmaid who preceded her.
Brilliantly structured as a kind of literary echo of Geoffrey Chaucer’s The Canterbury Tales, employing Biblical imagery—the eponymous “handmaid” based upon the Old Testament account of Rachel and her handmaid Bilhah—and magnificently imagining a horrific near-future of a male-dominated society where all women are garbed in color-coded clothing to reflect their strictly assigned subservient roles, Atwood’s narrative achieves the almost impossible feat of imbuing what might otherwise smack of the fantastic with the highly persuasive badge of the authentic.
The 1990 film adaptation—which also starred Robert Duvall as the Commander and Faye Dunaway as his infertile wife Serena Joy—was largely faithful to the novel, while further fleshing out the character of Offred. But it has been the Hulu series, updated to reflect a near-contemporary pre-Gilead America replete with cell phones and technology—and soon to beget (pun fully intended!) a fourth season—which both embellished and enriched Atwood’s creation for a new generation and a far wider audience. And it has enjoyed broad resonance, at least partially due to its debut in early 2017, just months after the presidential election. The coalition of right-wing evangelicals, white supremacists, and neofascists that has come to coalesce around the Republican Party in the Age of Trump has not only brought new relevance to The Handmaid’s Tale, but has also seen its scarlet handmaid’s cloaks adopted by many women as the de rigueur uniform of protest in the era of “Me Too.” Meanwhile, the series—which is distinguished by an outstanding cast of fine ensemble actors, headlined by Elisabeth Moss as Offred—has proved enduringly terrifying for three full seasons, while largely maintaining its authenticity.
Re-enter Margaret Atwood with The Testaments: The Sequel to The Handmaid’s Tale, released thirty-four years after the original novel. As a fan of both the book and the series, I looked forward to reading it, though my anticipation was tempered by a degree of trepidation based upon my time-honored conviction that sequels are ill-advised and should generally be avoided. (If Godfather II was the rare exception in film, Thomas Berger’s The Return of Little Big Man certainly proved the rule for literature!) Complicating matters, Atwood penned a sequel not to her own novel, but rather to the Hulu series, which brought back memories of Michael Crichton’s awkward The Lost World, written as a follow-up to Spielberg’s Jurassic Park movie rather than his own book.
My fears were not misplaced.
The action in The Testaments takes place in both Gilead and in Atwood’s native Canada, which remains a bastion of freedom and democracy for those who can escape north. The timeframe is roughly fifteen years after the conclusion of Hulu’s Season Three. The narrative is told from the alternating perspectives of three separate protagonists, one of whom is Aunt Lydia, the outsize brown-clad villain of book and film known for both efficiency and brutality in her role as a “trainer” of handmaids. Aunt Lydia turns out to have both a surprising pre-Gilead backstory as well as a secret life as an “Aunt,” although there are no hints of these in any previous works. Still, I found the Lydia portion of the book most interesting, and perhaps the more plausible in a storyline that often flirts with the farfetched.
In order to sidestep spoilers, I cannot say much about the identities of the other two main characters, who are each subject to surprise “reveals” in the narrative—except that I personally was less surprised than was clearly intended. Oh yes, I get it: the butler did it … but I still have hundreds of pages ahead of me. But that was not the worst of it.
The beauty of the original novel and the series has been their remarkably consistent authenticity, despite an extraordinary futuristic landscape. The test of all fiction—but most especially of science fiction, fantasy, and the dystopian—is: can you successfully suspend disbelief? For me, The Testaments fails this test again and again, most prominently when one of our “unrevealed” characters—an otherwise ordinary teenage girl—is put through something like a “light” version of La Femme Nikita training, and then in short order trades high school for a dangerous undercover mission without missing a beat! Moreover, her character is not well drawn, and the words put in her mouth ring counterfeit. It seems evident that the eighty-year-old Atwood does not know very many sixteen-year-old girls; culturally, this one acts and sounds as if she were raised thirty years ago and then catapulted decades into the future. Overall, the plot is contrived, the action inauthentic, the characters artificial.
This is certainly not vintage Atwood, although some may try to spin it that way. The Handmaid’s Tale was not a one-hit wonder: Atwood is a prolific, accomplished author and I have read other works—including The Penelopiad and The Year of the Flood—that underscore her reputation as a literary master. But not this time. In my disappointment, I was reminded of my experience with Khaled Hosseini, whose The Kite Runner was a superlative novel that showcased a panoply of complex themes and nuanced characters that remained with me long after I closed the cover. That was followed by A Thousand Splendid Suns, which, though a bestseller, was dramatically inferior to his earlier work, peopled with nearly one-dimensional caricatures assigned to be “good” or “evil,” navigating a plot that smacked more of soap opera than subtlety.
The Testaments too has proved a runaway bestseller, but it is the critical acclaim that I find most astonishing, even scoring the highly prestigious 2019 Booker Prize—though I can’t bear to think of it sitting on the same shelf alongside … say … Richard Flanagan’s The Narrow Road to the Deep North, which took the prize in 2014. It is tough for me to review a novel so well-received that I find so weak and inconsequential, especially when juxtaposed with the rest of the author’s catalog. I keep holding out hope that someone else might take notice that the emperor really isn’t wearing any clothes, but the bottom line is that lots of people loved this book; I did not.
On the other hand, a close friend countered that fiction, like music, is highly subjective. But I take some issue with that. Perhaps you personally might not have enjoyed Faulkner’s The Sound and the Fury, or Hemingway’s A Farewell to Arms, for that matter, but you cannot make the case that these are bad books. I would argue that The Testaments is a pretty bad book, and I would not recommend it. But here, it seems, I remain a lone voice in the literary wilderness.
DISCLAIMER: The review that follows and the book that is its subject each include a fact-based timeline, political polemic, and inflammatory language, some or all of which may be highly offensive to certain individuals, especially those who identify with the MAGA movement or abjure critical thinking. If you or someone you care about fits that description, is highly sensitive, or is unable to handle views that contradict your political narrative, you are urged to stop reading now and put this review aside. Those who proceed further do so at their own risk, and this reviewer will hold himself blameless for any fits of rage, dangerous increases in blood pressure, or Rumpelstiltskin-like attempts to stomp the ground so hard that the reader sinks into a chasm, that may result from continuing beyond this point …
President Trump is facing a test to his presidency unlike any faced by a modern American leader. It’s not just that the special counsel looms large. Or that the country is bitterly divided over Mr. Trump’s leadership. Or even that his party might well lose the House to an opposition hellbent on his downfall. The dilemma—which he does not fully grasp—is that many of the senior officials in his own administration are working diligently from within to frustrate parts of his agenda and his worst inclinations. I would know. I am one of them.
That is the opening excerpt from an Op-Ed entitled “I Am Part of the Resistance Inside the Trump Administration” published in the New York Times on September 5, 2018, along with this note from the editors: “The Times is taking the rare step of publishing an anonymous Op-Ed essay. We have done so at the request of the author, a senior official in the Trump administration whose identity is known to us and whose job would be jeopardized by its disclosure.”
The Op-Ed was written on the eve of the mid-term elections, before the release of the Mueller report, the murder of Khashoggi, the shutdown of the Trump Foundation for what was described as “a shocking pattern of illegality,” the expulsion of most remaining adults-in-the-room including Mattis and Kelly and Rosenstein, the “perfect call” with Volodymyr Zelensky that led to impeachment—which was just one shocking by-product of an erratic foreign policy of appeasement to Putin, ongoing saber-rattling with the Ayatollah and kissy-face with Kim Jong-un, the granting of dispensation to Mohammed bin Salman, and the green-lighting of Erdoğan to take out our Kurdish allies in Syria, not to mention the continuing crisis at home of kids in cages, and the ousting of any civil servant who dared contradict the President with a fact-based narrative. And there was so very much more that it is truly a blur. In September 2019, Trump doctored a map with a Sharpie and flashed it on television to prove he was right all along about the path of Hurricane Dorian. In October 2019, the President of the United States actually expressed interest in constructing an electrified moat filled with alligators along the Mexican border and shooting migrants in the legs to slow them down! Who even remembers that now?
Shortly after the moat full of alligators rose to a brief crest in the 24-hour cable news cycle and then sank beneath the weight of the tide of whatever was next that no one can really recall anymore, while we collectively held our breath for the next wave of … well, who knows what? … A Warning, by Anonymous—the same “senior Trump administration official” who was author of that NYT editorial—was published. A Warning set a record for preorders and made the bestseller list, and while the staggering revelations by a senior insider that it contains would have no doubt thrust any other administration into a tailspin so severe that it could never have recovered, this book—much like the misadventures it chronicles—is essentially as forgotten to an overwhelmed amnesiac public as the moat full of alligators. The notion that “nothing matters” has become such a cliché precisely because—as the subsequent impeachment acquittal underscored—when it comes to Trump, nothing truly does matter anymore. Or really ever has.
The thesis of A Warning—which picks up where the author’s editorial left off—is that 1) all hyperbole on left-leaning media aside, President Trump really is as he appears to the non-brainwashed observer: an unhinged, irrational, narcissistic, incompetent clown who left to his own devices would no doubt steer the clown car with all of us aboard right into the abyss; and 2) if not for the valiant efforts of the author and his or her furtive cohorts, working ceaselessly behind the scenes to curtail Trump’s most dangerous instincts, we would likely already be acquainted with said abyss. “Anonymous” claims that he/she is generally supportive of the administration’s conservative right-wing agenda, but fears what the President’s unbalanced behavior could bring. While Trump rambles on paranoiacally about the so-called imaginary “Deep State” plotting to undermine him, the author of A Warning refutes the notion of said “Deep State” while emphasizing what he/she terms the “Steady State,” an unidentified alliance at the top tier of “glorified government babysitters” who quietly strive to “keep the wheels from coming off the White House wagon.”
But apparently the axle nuts are getting looser every day, and those wheels are about to let go, as underscored in the very first chapter, aptly entitled “Collapse of the Steady State,” where the author admits that:
I was wrong about the “quiet resistance” inside the Trump Administration. Unelected bureaucrats and cabinet appointees were never going to steer Donald Trump in the right direction in the long run, or refine his malignant management style. He is who he is. Americans should not take comfort in knowing whether there are so-called adults in the room. We are not bulwarks against the president and shouldn’t be counted upon to keep him in check. That is not our job. That is the job of the voters …
If the original editorial was an attempt to reassure us that while the President was often indeed as mindlessly dangerous as a runaway bull amok in the national china shop, there was yet a significant presence of others sane and rational to rein him in before too much of value was irreparably wrecked, A Warning goes much further, urging a broad coalition to defeat him in 2020, especially targeting those in the right lane who otherwise cheer the lower taxes, frantic deregulation, and the ascent of ultraconservative Supreme Court justices that have been a byproduct of Trumpism. But does such a cohort actually exist?
For Trump and a polarized America in 2020, there are essentially four audiences to play to: 1) Donald Trump represents an existential threat to our values of freedom and democracy in our sacred Republic; 2) Donald Trump is a savior for America sent by the almighty God to restore our sacrosanct traditional values and lock up anyone who would even think about having an abortion; 3) Donald Trump is an absolutely offensive buffoon—of course—but the economy has been supercharged so why don’t they just let him do his job?; and, 4) Donald Trump is the same as Joe Biden, and if Bernie Sanders were President we’d all have free college and healthcare and everything else and if you don’t agree you should just die. A Warning makes a compelling argument, but I don’t see it changing anyone’s mind. Either the Emperor is wearing those new clothes or he isn’t.
Each chapter of A Warning is headed by a quotation from a former president—Madison, Washington, Jefferson, Kennedy, Reagan, etc.—that speaks to an aspect of government or the character of its leadership. What then follows are accounts of Trump’s resistance to expertise, paranoid ramblings, irrational behavior, and “malignant management style” that clearly stand as counterpoints to these ideals. At one point, the author reveals that: “Behind closed doors his own top officials deride him as an ‘idiot’ and a ‘moron’ with the understanding of a ‘fifth or sixth grader.’” [p63] This excerpt describing briefings with the President is a bit longish but perhaps most illustrative:
Early on, briefers were told not to send lengthy documents. Trump wouldn’t read them. Nor should they bring summaries to the Oval Office. If they must bring paper, then PowerPoint was preferred because he is a visual learner. Okay, that’s fine, many thought to themselves, leaders like to absorb information in different ways. Then officials were told that PowerPoint decks needed to be slimmed down. The president couldn’t digest too many slides. He needed more images to keep his interest—and fewer words. Then they were told to cut back the overall message (on complicated issues such as military readiness or the federal budget) to just three main points. Eh, that was still too much … Forget the three points. Come in with one main point and repeat it—over and over again, even if the president inevitably goes off on tangents—until he gets it. Just keep steering the subject back to it. One point. Just that one point. Because you cannot focus the commander-in-chief’s attention on more than one goddamned thing over the course of the meeting, okay? [p29-30]
This is just one of many persuasive arguments that the President is unfit for office, but again: whom is it likely to persuade?
A couple of things struck me about this book that have little to do with its message. First of all, it is not well-written. Not at all. It may be that it was deliberately dumbed-down to target a less educated audience, but I don’t think so. More likely, the author simply isn’t a very talented writer. A Warning has a conversational style, and my guess is that it was dictated and transcribed by someone who is not generally comfortable with a pen.
Second, the author attempts to use history to make his/her point—beyond quotes from presidents, there are also numerous references in the narrative that reach back to ancient Greece and Rome. But the effort is clumsy, at best, and at worst just completely off the mark. At one point, when tracing the origins of the GOP, the author identifies it with “states’ rights,” which while a core value of the modern Republican Party was a hundred fifty years ago closely associated with rival Democrats. [p95] (In fact, one could argue that today’s “Party of Lincoln” has little in common with Lincoln at all.) Elsewhere, there is an awkward tussle with fact-based history as the author struggles to mine democracy in ancient Greece for workable analogies with today’s politics. Athenian demagogue Cleon is cast as a cloak-wearing precursor to Trump “… who will sound familiar to readers … [as he] … inherited money from his father and leveraged it to launch a career in politics.” The famous episode from Thucydides that has Cleon calling for the slaughter of the Mytilenean rebels is posited as an alleged signpost to the decline and fall of Athenian democracy. The later massacre of the Melians is also referenced, as is the execution of Socrates, along with a wild claim that “the latter was an exclamation point on the death of Athenian democracy …” [p183-86] All this is not only completely out of context but downright silly, and—as any historian of ancient Greece would point out—the radical democracy of Athens actually thrived for decades after the death of Socrates in 399 BCE, and even persisted well beyond the subjugation of the polis by Philip II in 338 BCE.
But that the author is both a bad writer and a lousy historian to my mind just adds to his/her authenticity as a “senior Trump administration official.” After all, we know that the cabinet is composed of second- and third-rate individuals, and the quality—especially as we have made the shift to “acting” secretaries who don’t require Senate approval—has seen a pronounced decline. Of course, the author’s lack of talent hardly diminishes the tale that is told.
The reason A Warning lacks shock-value to some degree is because we have heard much or all of this before, from multiple sources, some more respected than others. While it might be easy to dismiss such schlocky work as Michael Wolff’s Fire and Fury: Inside the Trump White House, the much-celebrated exposé of the administration that was frequently as long on bombshells as it was short on substantiation, it is far more difficult to ignore the chilling accounts from award-winning journalist Bob Woodward, whose 2018 book Fear: Trump in the White House identifies then-Secretary of Defense James Mattis as the source of the “fifth or sixth grader” quote. Woodward also reports then-Chief of Staff John Kelly describing the President as “unhinged”—exclaiming: “He’s an idiot. It’s pointless to try to convince him of anything. He’s gone off the rails. We’re in Crazytown.” Far more worrisome than such anecdotes is Woodward’s revelation that then-Chief Economic Adviser Gary Cohn—alarmed that Trump was about to sign a document ending a key trade agreement with South Korea that also dovetailed with a security arrangement that would alert us to North Korean nuclear adventurism—simply stole the document off the President’s desk! And the President never missed it …
Much of this material has been substantiated by insiders, and there is certainly plenty of evidence to suggest Trump is utterly incapable of serving as Chief Executive. But would anything convince his loyal acolytes of this? Apparently not, which is why A Warning both preached to the chorus and otherwise fell on deaf ears. In February 2020, fifty-two Republican Senators voted to acquit Trump in his impeachment trial—and you can bet that most or all of these “august” legislators know exactly what Donald Trump is really like behind closed doors.
As this review goes to press, we are in the midst of a global pandemic that has hit the United States far harder than it should have, largely due to the ongoing incompetence of the President, who is, not surprisingly, the very worst person to be in charge during what is surely the greatest threat to the nation since Pearl Harbor, perhaps since Fort Sumter. We need a Lincoln or an FDR or a JFK at the helm, and what we have is Basil Fawlty … although even that is unfair: Basil would have recognized that he was in over his head and sought Polly’s help, and she would have enlisted Manuel’s assistance, and we would at least have a chance. Trump, being Trump, believes he has all the answers; and thousands more succumb to the virus as the days go by …
So, who is the author of A Warning? Who exactly is “Anonymous”? There has been some speculation, but if I had to assign authorship, I would put my money on Kellyanne Conway. One clue that narrows it down a bit is that the tone in the narrative hints at a female voice rather than a male one, although I could be mishearing that. More persuasive is the style, which sounds an awful lot like Kellyanne in conversation, albeit spouting utterances diametrically opposed to those outrageous defenses of the President she concocts for the media. Perhaps most compelling is the fact that Kellyanne has uncharacteristically outlasted most members of the administration, especially striking in light of the fact that her husband, attorney George Conway, is a loud and prominent critic of the President who has long called for his removal from office. That Kellyanne has managed to somehow keep her job despite this suggests that she has something on Trump that guarantees her tenure, and makes me think she more than anyone inside that circus tent wants us to hear this warning of why the ringmaster must be denied four more years …
“From the end of World War II until 1980, virtually no American soldiers were killed in action while serving in the greater Middle East. Since 1980, virtually no American soldiers have been killed anywhere else. What caused that shift?”
That stark question appears as a blurb on the back cover of my edition of America’s War for the Greater Middle East: A Military History, Andrew J. Bacevich’s ambitious, brilliantly conceived if flawed chronicle, which seeks both to answer that question and to place it in its appropriate context. It is, of course, quite the tall order: how is it that a geography ever on the periphery of an American foreign policy that for decades could best be described as benign neglect came to not only dominate our national attention but be identified as central to our strategic interests? And how is it that, as this review goes to press—nearly four years after the publication of Bacevich’s book—the longest war in America’s history endures beyond its eighteenth year … in Afghanistan, of all places?!
The short answer, I would posit, is oil. Bacevich is older than me, and I wasn’t yet driving at the time in 1969 when he notes dropping three bucks to fill up the tank of his new Mustang at 29.9 cents a gallon. But I was on the road just a few years later, and I recall sitting in long lines at the pump for fuel priced nearly ten times that, as well as the random guy who threatened to shoot a certain long-haired teenager for trying to cut line, and that same teen later learning how to siphon gas from parked cars. It was a time.
That tumultuous time stemmed, of course, from the 1973 oil embargo placed on the United States by the Arab members of OPEC (Organization of the Petroleum Exporting Countries) in retaliation for its support of Israel during the Yom Kippur War. Because he has styled his book “A Military History,” the author does not dwell on the gasoline shortage that so shook American self-confidence in the early 1970s, nor on the related and still unresolved Israeli-Palestinian conflict that remains as central to the theme of Middle East unrest as slavery was to the American Civil War. Instead, after a brief “Prologue,” Bacevich rapidly shifts focus to the Iran hostage crisis and the 1980 debacle that was Operation Eagle Claw, the aborted mission to rescue those hostages that resulted in those first American casualties referenced in that jacket blurb. The author’s decision not to accord oil and Israel their respective fundamental significance in far greater detail proves to be a weakness that tends to undermine an otherwise well-researched and well-written narrative history.
The author certainly has both the credentials and the skills worthy of the task before him. Andrew Bacevich is a career army officer, veteran of the Vietnam and Persian Gulf wars, who retired with the rank of colonel. He is also a noted historian and award-winning author, someone who has described himself as a “Catholic conservative,” but defies traditional labels of parties and politics. He is a pronounced critic of American military interventionism, George W. Bush’s advocacy for so-called “preventive wars,” and especially of the U.S. invasion of Iraq. In a kind of tragic irony, his own son, an army officer, was killed in combat in Iraq. I have read two of his previous books: Breach of Trust: How Americans Failed Their Soldiers and Their Country, and The Limits of Power: The End of American Exceptionalism, both magnificent treatises that reflect Bacevich’s ideological opposition to spending American lives needlessly in endless wars. But treatises don’t always translate well into narrative history—in fact the two are, and should remain, entirely separate channels—and Bacevich’s tendency to blur those boundaries here comes to weaken America’s War for the Greater Middle East.
The author points to repeated epic fails in Middle East policy that take us down all the wrong roads, while experts in and out of government shake their heads in bewilderment, yet one administration after another nevertheless presses on stubbornly. Bacevich is at his best when he underscores a series of unintended consequences on a road paved with occasional good intentions that not only exacerbate bad decision-making but cement unnecessary obligations to fickle, illusory allies that then put up almost insurmountable roadblocks to disentanglement. Two salient and substantial examples are: the poorly-conceived U.S. support for rebels opposed to the Soviet-friendly regime in Afghanistan that was to spark Soviet intervention in 1979; and, subsequent U.S. backing for the Islamic fundamentalist Mujahideen that was to later spawn Al-Qaeda.
There is much more to come—more perhaps intended and incompetent rather than unintended—and much of that is either utterly unknown or long forgotten for most Americans, including the 1983 suicide bombing of the Marine compound in Beirut that killed 241 Americans but somehow failed to tarnish the “Teflon” presidency of Ronald Reagan, who retreated while euphemistically “redeploying.” From the vantage point of Washington, the greater enemy remained the Ayatollah, and all efforts were made to enable the brutal despot Saddam Hussein in his opportunistic war upon Iran, a decision that was to fuel Middle East instability for decades and lead to two future US conflicts with our former ally. And Reagan was still President and still all-Teflon in 1988 when the US, through either negligence or spite, shot down Iran Air Flight 655, a commercial airliner with 290 souls aboard, over the Persian Gulf. George H. W. Bush led a coalition to liberate Kuwait from our erstwhile ally Iraq, but then left a wounded, isolated and still dangerous Saddam to plague our future. But, of course, it was under George W. Bush that the tragedy that was 9-11 was hijacked and turned into a fanciful “War on Terror” that ultimately was to embolden Islamic fundamentalism, served as a pretext for an illegal invasion of Iraq that strengthened Iran and utterly destabilized the region, and later bred ISIL to terrorize multiple corridors of the Middle East. You can indeed draw almost a straight line from the Afghan Mujahideen of 1979 to ISIL suicide bombers today.
Bacevich is masterful with a pen, and his history is so well-written that there are literally no dry spots. The problem I found was with the tone, which while legitimately critical of American missteps is often needlessly arrogant, eye-rolling, even snarky—all of which detracts from the primary message, which is indeed spot-on. My politics often align closely with those of MSNBC host Rachel Maddow, but I simply cannot watch her show: I find her breathless exhalations and intimations of “How-could-anyone-be-so-stupid?” and “We-told-you-so” coupled with lip-curling grimaces intolerable. Bacevich is not that bad here by any means, but there is certainly a whiff of it that puts me off. Moreover, while he makes a cogent case for why just about every policy we put in place was wrong-headed, I would have much welcomed the author’s alternative recipes. Bacevich is a brilliant man: I truly wanted to know what he would have done differently if he were sitting behind the Resolute Desk instead of Carter or Reagan or Bush or any of the others.
Bacevich does deserve much credit for his far more panoramic view of what he rightly calls the “Greater Middle East,” as he widens the lens to focus upon the often neglected yet certainly related periphery of the Balkans and the Muslim population in the former Yugoslavia subjected to ethnic cleansing. Few mention Eastern Europe in the same breath as the Middle East, but for some five hundred years much of that geography was integral to the same Ottoman Empire that ruled over present-day Syria and Iraq. There is a common history that cannot be ignored. But just as I was disappointed elsewhere that Bacevich failed to highlight the background noise of the Israeli-Palestinian conflict that truly informs every conversation about Middle East affairs, in this case little was made of the bond between post-Soviet Russia and Slavs of “Greater Serbia,” which not only deeply influenced the Balkan Civil Wars but soured emerging US-Russian relations in its aftermath and resounded across the Islamic landscape. Likewise, the narrative swerves to take a peek at “Black Hawk Down” in Mogadishu, but the long history of ties between East Africa and Arabia remains unexplored.
America’s War for the Greater Middle East is divided into three parts: the first takes the reader to the conclusion of the Persian Gulf War (which Bacevich brands the “Second Gulf War”), and the second wraps up on the eve of 9-11. But it is the last part, dominated by the Iraq War, that strikes a markedly different, more somber tone, perhaps coincidental to Bacevich’s own deeply personal loss, perhaps not. Alas, none of the sections are large enough to bear the weight of the material.
Rarely would I lobby for any book to be longer, but in this case the 370 pages in my edition—plus the copious notes and excellent maps—is simply not enough. The topic not only deserves but demands more. This book should either be three times longer or, better still, should be a three-volume series. A more comprehensive historical background—including the echo of the greater Ottoman heritage and the Russo-British grapple for Central Asia—of this entire milieu is requisite for getting a grasp upon how we got here. The Israeli-Palestinian conflict demands more focus. As does the Shia-Sunni division. And the relationships between Arab and non-Arab states, as well as the ties that transcend the regional to extend to Africa and Europe and beyond. There is no hope of a better grasp of all that has gone wrong with American entanglement in the Middle East without all of that and much more.
Given all these reservations, the reader of this review might be surprised that I nevertheless recommend this book. Warts and all, there is no other work out there that connects the dots of America’s involvement in the Middle East as well as it does, even as it cries for more depth, for more complexity. I would likely be less critical of this book if my admiration for Bacevich were less pronounced and my expectations for his work were not so high. Even if America’s War for the Greater Middle East falls short, it deserves to be on your reading list.
Can you imagine a President of the United States who blatantly ignores its conventions, ridicules its established order and appeals beyond these directly to the electorate, pledging to elevate the interests of the average citizen over those of the elite, whom he brands as corrupt, while scorning the courts, financial institutions, and any who stand in his way, polarizing the nation while he yet shamelessly exploits a partisan press and rewards his supporters with government jobs and favors? No, it’s not who you think, but it does at least partially explain why the current occupant of the White House often appears with a portrait of Andrew Jackson as a backdrop, a painting that he directed be displayed prominently in the Oval Office.
Jackson once loomed large in our collective cultural memory, but I suspect that memory is now a bit fuzzy for most Americans, who when pressed might at best tentatively identify him as the grim-looking fellow on the face of the twenty-dollar bill. Of course, Jackson has hardly been forgotten by historians, who have long recognized his centrality as the most consequential president of the antebellum era, although their assessments of him have seen a marked rise and fall over time. Once lionized as a giant in the emergence of a more democratic polity and a more egalitarian nation, a critical reexamination in the more recent historiography has revealed substantial “warts,” not only underscored by his leading role in “The Indian Removal Act” of 1830 that led to the deaths of thousands of Cherokees in the so-called “Trail of Tears,” but also in the ill-effects of the long echo of his “spoils system,” the dangerous naivety of his economic strategies including the “Bank War” that led to the Panic of 1837, as well as other forceful if misguided policies that some have argued set irrevocable forces in motion that later resulted in Civil War.
Andrew Jackson has been the subject of hundreds of biographies and related works. A prominent chapter has frequently been devoted to the Bank War, long framed as a flamboyant clash of wills between Jackson, who loathed banks, and the shrewd if hapless Nicholas Biddle, president of the Second Bank of the United States. A famous game of cat and mouse prevailed, as the standard tale has been told, with Jackson ultimately victorious, the bank abolished and Biddle sent packing in surprising and ignominious defeat.
The story is so familiar, and has received so much attention in the literature, that it might seem unlikely anything new could be said of it. There is, then, something of real genius in the astute reexamination showcased in the recently published monograph, The Bank War and the Partisan Press: Newspapers, Financial Institutions, and the Post Office in Jacksonian America, by Stephen W. Campbell. In this brilliant if not always easily accessible book, Campbell—a historian and lecturer at Cal Poly Pomona—challenges the orthodox narrative that puts Jackson and Biddle front-and-center, widening the lens to encompass the nuance and complexity that informs a long overlooked and far more intricate, multilayered confluence of people and events on both sides. The Bank War was indeed a great drama, but it turns out that there were many more essential players than Jackson and Biddle, and much more at stake than simply re-chartering the bank. As the subtitle suggests, Campbell notes that integral to the Bank War were common threads that ran between post offices, branch banks, and newspapers in what was indeed such a tangled weave that much went unnoticed or disregarded by historians prone to focus on the larger tapestry.
Today we might bemoan certain cable news propaganda vehicles that eschew reporting in favor of distorting, yet at its worst this phenomenon bears almost no resemblance to the partisan press of Jackson’s day, when there was little expectation of any kind of objectivity. In fact, valuable contracts for printing government documents were doled out to the politically simpatico, who were expected to promote the official line. Meanwhile, the Second Bank of the United States, through its branches, had powerful financial incentives at hand to entice its allies in the press to champion its point of view.
Then there was the post office, which to us perhaps smacks of the anachronistic and irrelevant. Yet, its importance to early nineteenth century Americans cannot be overstated, since it effectively served as the sole vehicle for personal, business, and official communication. But it was not only first-class mail that passed through post offices, but also newspapers, so branches could—and did—act as a kind of local valve for what sort of media could be passed across the counter. It was after all Jackson’s Postmaster General, former newspaper editor Amos Kendall, who famously permitted southern postmasters to refuse to distribute abolitionist tracts, another spark that was to fan antebellum sectional flames. Odd as it may seem now, Postmaster General was the single most valuable cabinet office in that era because of the vast patronage it controlled. Through its direct and indirect influence over the press, the White House clearly stacked the deck against poor Biddle, who despite vast resources could not hope to compete in the arena of what today we might term “messaging.”
While little of this material is in itself new or groundbreaking, Campbell deserves much credit for being the first to astutely connect all the dots of these seemingly unrelated elements to the Bank War. But he goes further, articulately probing the economic realities of American life in the 1830s and deftly fitting the financial institutions of the day into the larger picture. The way banks and the economy functioned then would be almost unrecognizable to modern students of finance. Campbell peels back the fascinating if arcane layers of antebellum banking that other historians of the period have long neglected.
For the world of academia, The Bank War and the Partisan Press is a magnificent achievement, but alas much of it may remain unknown to the wider public because it is not always easily accessible to the general reader. This is not Campbell’s fault: he is after all quite skillful with a pen. But this was originally a thesis expanded into a book, so the strictures of academic writing sometimes weigh heavily on the account. Also problematic, perhaps, is that the text is somewhat rigidly compartmentalized, so that each sub-topic is exhaustively explored by chapter, rather than more seamlessly woven into the narrative. These are mere quibbles to a scholarly audience and hardly detract from the finished product, but I would like to see Campbell revisit this theme one day in another title designed to reach more readers of popular history. In the meantime, if you are a student of Jacksonian America, this is an essential read that receives my highest recommendation.
Did you know that the single greatest president in America’s first half-century was James Monroe? Even more than that, did you know that the most significant Founder of the fledgling Republic was James Monroe? That Monroe’s long-overlooked accomplishments and contributions dwarfed those of Washington, Jefferson and Madison and all the rest? That Monroe was a towering figure in both establishing and leading the new nation? I didn’t either, but that is the boast of The Last Founding Father: James Monroe and a Nation’s Call to Greatness, by Harlow Giles Unger.
Should you suspect that I am unfairly exaggerating the author’s bold claim, look no further than page two of the “Prologue” to learn that while Washington may have won American independence, his legacy was little more than a “fragile little nation” and his “… three successors—John Adams, Thomas Jefferson, and James Madison—were mere caretaker presidents who left the nation bankrupt, its people deeply divided, its borders under attack, its capital city in ashes.” It was, apparently, left to the heroic, brilliant, and larger-than-life character of James Monroe to step in and make America great, as summarized by Unger:
Monroe’s presidency made poor men rich, turned political allies into friends, and united a divided people … Political parties dissolved and disappeared. Americans of all political persuasions rallied around him under a single “Star Spangled Banner.” He created an era never seen before or since in American history … that propelled the nation and its people to greatness.
That’s from page three. I might have closed the cover after that burst of hyperbole, which better channels the ending of a Disney movie than a historian’s measured analysis. But then I checked the dust jacket bio to find that Unger is “A former Distinguished Visiting Fellow in American History at George Washington’s Mount Vernon … a veteran journalist, broadcaster, educator and historian … the author of sixteen books, including four other biographies of America’s Founding Fathers.” Perhaps I was misjudging him? So, I read on …
Spoiler alert: it does not get any better.
Presidential biography is a favorite of mine, and I have read more than a couple of dozen. For the uninitiated, the genre tends to diverge along three paths: the laudatory, the condemnatory and the analytical. While closer to the first category, The Last Founding Father really fits into none of these classifications. In fact, one might argue that it is less biography than hagiography, for the author is so consumed with awe by his subject that the latter is simply incapable of transgression in any arena. When I was a child, I could do no wrong in my grandmother’s eyes. If I did go astray, she would redefine right and wrong to suit the circumstances, so I always landed on the positive side of the equation. Unger offers similar dispensation for Monroe throughout this work.
Unger’s inflated reverence for Monroe should not diminish his subject’s importance to the early Republic, only compel us to examine the man and his legacy with a more critical eye. The list of “Founding Fathers”—a term only coined by Woodrow Wilson in 1916—is somewhat arbitrary, and Monroe does not even always make the cut. The essential seven that all historians agree upon are: John Adams, Benjamin Franklin, Alexander Hamilton, John Jay, Thomas Jefferson, James Madison, and George Washington. Other lists are broader, and many also include Monroe, who was after all not only the fifth President of the United States, but also U.S. Senator, Ambassador to France and England, Secretary of State, and Secretary of War—at one point even holding the latter two cabinet positions simultaneously. Monroe’s tenure in the White House has famously been dubbed the “Era of Good Feelings,” but only school kids—and Unger, apparently—believe that this is because suddenly faction disappeared, and both rival politics and personalities gave way to a mythical fellowship. In fact, historians have long recognized that this period was characterized by the one-party rule of the Democratic-Republican Party that dominated after the disintegration of the Federalist Party, which had flirted with treason and been discredited in its opposition to the War of 1812. But Monroe’s Democratic-Republicans represented far more of a coalition of loose factions than the powerful central force that the party had been under the stewardship of Jefferson and Madison before him. The fissures unacknowledged by Unger were brewing all along, later made manifest in the Second Party System of Clay and Jackson.
Most studies of Monroe reveal a man of great personal courage with stalwart dedication to principle and service to his country. Few—Unger is the exception—credit him with the kind of intellectual brilliance seen in peers like Jefferson, Madison and Hamilton. Like Hamilton—who indeed once challenged him to a duel—Monroe seems to have possessed an outsize ego and a prickly sense of honor that was easily slighted if not subject to the praise and recognition he felt certain he rightly deserved, such as sole credit for the Louisiana Purchase! Nearly a decade earlier than that milestone, Monroe had served as ambassador to France but was later recalled by Washington, who found him too easily flattered and otherwise lacking in the traits essential to upholding American diplomatic interests. Monroe was stung by this, but in his long future in government service he was in turn to have fallings-out with both Jefferson and his old friend Madison, unable to tolerate differences in opinion and bristling in his perception of being ever snubbed by not being elevated to the prominence he felt due him. Like Jefferson’s, Madison’s presidency proved to be a disappointing chapter in a life marked by great achievements. But while the War of 1812 was hardly Madison’s finest hour, and Monroe indeed played a pivotal role during the existential crisis of the burning of Washington and its aftermath, Madison was hardly the bewildered, sniveling coward Unger portrays in his account, so incapacitated by events that Monroe had to heroically swoop in to serve as acting president and single-handedly rescue the Republic.
The many flaws in this biography are unfortunate, because Unger writes very well and citations are abundant, lending the book the style and form of a solid history. On a closer look, however, the reader will find that the excerpts from primary sources that populate the narrative are often focused on superficial topics, such as food served at events, room furnishings, or styles of dress. And Unger seems to sport a weirdly singular crush on Monroe’s wife, Eliza, whom he describes as “beautiful” more than a dozen times in the text—and that before I gave up counting! Attractive or not, as First Lady she seems to have come off as cold and imperious, with aristocratic airs that she no doubt accumulated during her times abroad with her husband, when they lived often in a grand style that was well beyond their means. Oddly, far more paragraphs are devoted to descriptions of Eliza’s clothing and social activities, and her many debilitating illnesses, real or imagined, than to Monroe’s eight years in the White House.
A greater complaint is that for a book published as recently as 2009, conspicuous in their absence are the less privileged people who walked the earth in Monroe’s time, Native Americans and most especially the enslaved African Americans kept as chattel property by elite Virginia planters like Monroe—as well as Jefferson, Madison and Washington—something that manifestly flies in the face of recent historiographical trends. Although Monroe owned hundreds of human beings over the course of his lifetime, the reader would hardly know it from turning the pages of The Last Founding Father, where the enslaved are mentioned in passing if mentioned at all, such as: “Although Monroe had to sell some slaves to rescue [his brother] Joseph from bankruptcy, he held to the belief that brotherly ties were indissoluble …” [p207] Long before the more famous Nat Turner Revolt, there was Gabriel’s Rebellion, and Monroe was Governor of Virginia when it was repressed and twenty-five blacks were hanged in retribution. The slightly more than two pages given to this episode lack critical analysis but credit Monroe with promptly calling out the militia to put down the uprising [p140-142]. Such a cursory treatment of the inherent contradictions between the institution of chattel slavery and the ideals of the new Republic is an inexcusable blemish on any work of a twenty-first century historian. Since there is much in the literature about the incongruity of Monroe the plantation master—much like Jefferson—at times decrying while yet sustaining the peculiar institution, we can only conclude that Unger deliberately passed over this material lest it cast some aspersion upon the adoring portrait that this volume advances.
It pains me to write a bad review of any book. After all, the author typically labors mightily to generate the product, while I can read it—or not—at my leisure. But I am passionate about both historical studies and the rigors of scholarship, which should apply even more scrupulously to someone such as Harlow Giles Unger, who not only possesses appropriate credentials but has written widely in the field. He owes the student of history far more than this book, which does no real service to the reader—nor to James Monroe, whose achievements it overstates while failing to contextualize his role as a key figure in the early Republic with the nuance and complexity that his legacy deserves.
Astronaut William Anders began: “For all the people on Earth the crew of Apollo 8 has a message we would like to send you:
In the beginning God created the heaven and the earth.
And the earth was without form, and void; and darkness was upon the face of the deep.
And the Spirit of God moved upon the face of the waters. And God said, Let there be light: and there was light.
And God saw the light, that it was good: and God divided the light from the darkness.”
On Christmas Eve fifty-one years ago, millions in the United States and around the globe—including this then-eleven-year-old boy—gathered breathlessly around their TVs to watch the first live broadcast from space, an extraordinary transmission beamed back to earth from more than two hundred thousand miles away from an American spacecraft in orbit around the moon. The largest television audience to that date was treated to remarkable photographs of the forbidding moonscape, but far more awe-inspiring and humbling were the images they viewed of their very own living planet, appearing so tiny and so remote from such a great distance. The three astronauts closed out the broadcast by reading passages from the biblical book of Genesis. Lunar Module Pilot Bill Anders was followed by Command Module Pilot Jim Lovell, and then Commander Frank Borman, who added: “And from the crew of Apollo 8, we close with good night, good luck, a Merry Christmas, and God bless all of you – all of you on the good Earth.”
While this episode remains a heartwarming moment that celebrates both the universality of the human endeavor as well as the singularity of this accomplishment, it should not obscure the reality of what was really happening on that blue planet viewed from afar, of the wars and famines and cruelty and disasters that did not take a pause while space travelers read aloud from an ancient book that itself bore witness to its own share of wars and famines and cruelty and disasters. Nor should it fail to remind us that these representatives of the earth blasted off from a badly fractured landscape at home.
The claim that America on this Christmas Eve of 2019 has never been this divided is at once refuted by a glance back to 1968, replete with acts of terror, campus unrest, cities in flames, mass demonstrations, political assassinations, and violence in the streets—the perfect storm of the increasingly unpopular war in Vietnam and the revolution of rising expectations among long-disenfranchised blacks frustrated by the pace of change. If there was a kind of unifying force that remained to serve as some sort of glue amid the chaos and dissonance of a splintered national polity it had to be the space program and its race for the moon. The actual moon landing was not until the following year, but 1968 closed with the remarkable Apollo 8 mission, the first manned spacecraft to orbit the moon, made more dramatic by that live Christmas Eve audio-video transmission from space that included those readings from Genesis, and later forever enshrined in our collective consciousness by the iconic photo “Earthrise” that depicts the earth rising over the moon’s horizon, snapped by astronaut Bill Anders, that is said to have inspired the environmental movement.
Martin W. Sandler revisits this existential moment that briefly comforted a troubled nation with the oversize and lavishly illustrated Apollo 8: The Mission That Changed Everything, directed at a young adult (YA) audience but suitable for all. I have read and reviewed Sandler before. The author has a talent for clear, concise writing that while targeting a younger readership does not dumb down the topic, an otherwise frequent tarnish to this genre of nonfiction. I obtained this book as part of an Early Reviewer program and my copy was an Advanced Reader’s Copy (ARC) with black and white images, but the published edition is full-color and worth the purchase if only for the magnificent color photographs, though these are nicely enhanced by a well-written narrative that encompasses the totality of this highly significant space mission and its ramifications back home. The only caution I would add is that I have detected glaring historical errors in some of Sandler’s other works. I did not stumble upon any here, but then I am hardly an expert on the space program. Thus, the reader should trust but verify!
Some—at the time and since—have objected to the astronauts’ choice of verses from Genesis, as if there was an attempt to impose religion from the beyond, or to celebrate the Judeo-Christian experience at the expense of others. We should not be so hard on them; they were simply seeking some kind of universal message to inspire us all. That they may have failed to please everyone may only underscore how diverse we are even as we transcend the myth of race to acknowledge that we all share the very same DNA, the same hopes and dreams and fears and needs and especially the desire to love and be loved. Astronaut Bill Anders himself returned from space as an atheist, awed by his place in the vast universe. I am not a religious person: I celebrate Christmas as a time for peace and love and Santa Claus. But I can still, like the astronauts on Apollo 8 fifty-one years ago, wish my readers a good night, good luck, and a Merry Christmas to all of you on the good Earth.
Some years ago, I had the pleasure of reading the masterpiece Birdsong, by Sebastian Faulks, which motivated me to pick up a couple of his other novels for later consumption, including Engleby. One day, I randomly plucked it off the shelf and turned to the first page. Honestly, it was not easy to put down. Also, to be even more honest, there were times that I really wanted to.
As a reviewer, it sounds somewhat awkward or even unseemly to resort to a term like “creepy” to describe a novel, but that would most accurately describe the subtle if sustained punch in the gut I experienced while reading this one, propelled by a growing revulsion for the central character. As the narrative unfolds, that character—the eponymous Mike Engleby—is a working-class Brit on scholarship to “an ancient” university in the early 1970s. He comes across as a bit of an oddball, but for those of us who lived through this era that was neither especially unusual nor undesirable, given that to be an iconoclast in those days was often seen as a virtue. But the reader cannot help but experience an emerging disquiet as Engleby develops an infatuation that veers to obsession that then turns more ominously to the outright stalking of his bright and beautiful classmate Jennifer Arkland. Along the way, there are flashbacks to the bitter poverty of Engleby’s youth, the regular beatings by his father, the quotidian brutality of his life at public school where he is condemned to the unfortunate nickname “Toilet” and subjected to an ongoing torment that stretches the limits of endurance to cruelty—the cumulative effect of which, it becomes clear, shapes him into a bully, a thief, a drug dealer, an opportunist. Flash forward again and Jennifer has disappeared, never found, presumed murdered.
Did Engleby murder her? Could he be a serial killer? Is he mere weirdo or sociopath? That’s for you to find out: I don’t believe in folding spoilers into reviews. But the narrative is laced with plenty of clues, scattered within an interior monologue that invites an uncertain sympathy for a protagonist who at best provokes unease, at worst repulsion. Yet, it is the genius of the author to tempt the reader to veer from repugnance to empathy, against all odds, even if this shift may prove temporary. And the reader, like it or not, is ensnared in an uncomfortable fascination with this very same well-crafted interior monologue, a kind of labyrinth pregnant with Engleby’s barely suppressed anxiety, which he overcompensates for with visions of grandeur and a disdainful arrogance for all others in his orbit—except perhaps, that is, for Jennifer Arkland. And then that anxiety grows contagious as the reader begins to question the reliability of the narrator! Are the things revealed by Engleby’s inner thoughts real or imagined? Is Faulks himself, acting as both wizard and jester, simply mocking us from behind the curtain?
The last time I found myself as deeply unsettled by a work of fiction, it was Perfume, by Patrick Süskind, the unlikely tale of an eighteenth-century serial killer, but that novel was tempered with a pronounced sense of the ironic if not especially comedic. Not so with this one: there’s nothing even a little bit funny about Engleby. For his part, Faulks proves himself a true artist of the written word, his pen taking full command of his character and his audience alike. I recommend it, even if it may keep you up at night.
The most consequential figure of what historians dub Europe’s “long nineteenth century” (1789-1914)—from the start of the French Revolution to the outbreak of World War I—came to virtually define the first part of that era while setting forces into motion that shaped all that was to follow. Over the course of a single decade, Napoleon Bonaparte controlled not only much of the territory on the continent, but the entirety of its destiny. When he fell from power, the peace that was crafted in his wake largely held for a full century. The Europe that was obliterated by the catastrophe of the Great War that followed was the Europe both made and unmade by Napoleon. And even well beyond that, in the nearly two centuries since he walked the earth, no other individual—not Bismarck, not Stalin, not Churchill, not even Hitler—has emerged in the West, for ill or for good, to rival his significance or challenge his legacy. Yet for most, these days Napoleon is, if not exactly a forgotten character, a much overlooked one, a rarely referenced ghost of a distant past whose specter though perhaps unnoticed nevertheless still haunts the twenty-first century capitals of Paris, London, Rome, Berlin, Warsaw and Moscow.
An outstanding remedy to our collective negligence is Napoleon: A Life, by Adam Zamoyski, a noted historian and author with a long resume who masterfully resurrects the outsize character that was the living man and places him in the context of his times. At nearly seven hundred pages, at first glance this hefty tome might seem intimidating, but Zamoyski writes so well that there are few sluggish spots in a fast-moving, highly accessible narrative that will likely take its place in the historiography as the definitive single-volume biography. And this is surely the treatment his subject deserves.
There could perhaps not have been a more unlikely individual to command the world stage and change the course of history than Napoleon Bonaparte, born to a family of minor Italian nobility of quite modest means on Corsica in 1769, somewhat ironically in the same year that the Republic of Genoa ceded the island to France. It may be a minor point but it certainly adds to that irony that the future Emperor of France apparently always spoke French with an atrocious accent, which—knowing the conceit of those native to the language—could only have rankled those in his orbit, both friend and foe. Yet, this is just one of the many, many contradictions that cling to Napoleon’s person. As a child, he was sent to a religious school in France, and later attended a military academy, which led to his commission as a second lieutenant in the artillery.
It was the outbreak of the French Revolution a few short years later that catapulted him onto the world stage in a bizarre trajectory that saw him first as a fervent Corsican nationalist seeking the island’s independence from France, then a pro-republican pamphleteer allied with Robespierre, and then artillery commander at the Siege of Toulon, where he first demonstrated his military genius. He was wounded but survived to be promoted to brigadier general at the age of twenty-four and later placed in command in Italy, where he led the army to victory in virtually every battle, while taking time out to crush a Royalist rebellion in Paris. He also survived his association with Robespierre. Proving himself as gifted in the partisan arena as he was on the battlefield, he adroitly commandeered the dangerous and ever-shifting political ground of revolutionary France to engineer a coup and make himself dictator, euphemistically styled as First Consul of what was now a republic in little more than name only. He was just thirty years old. Within five years, he was Emperor of France in a retooled monarchy that both resembled and served as counterpoint to the ancien régime that revolution had swept away.
The rare general with talents equally exceptional in the tactical and the strategic, Napoleon managed both on and off the battlefield to defeat a succession of great power coalitions aligned against him until he commanded much of Europe directly or through his proxies, while crippling British trade through his “continental system” that controlled key ports. Like Alexander two millennia before him, Napoleon was brilliant, courageous, opportunistic and lucky—all the ingredients necessary for unparalleled triumph on such a grand scale. Unlike Alexander, he outlived his conquests to try to remake his realm, in his case by spreading liberal reforms, stamping out feudalism, promoting meritocracy and codifying laws. But he also lived to fall from power and to fall hard. At the risk of stretching the metaphor, the ancient Greeks invented the term hubris to describe the tragedy in the excessive pride personified by men just such as Napoleon. Whereas Alexander looked to Achilles and the Olympic pantheon, Napoleon looked only to his own “star,” which he fully relied upon to guarantee his success in every endeavor. And one day that star dimmed. He famously overreached with the ill-conceived invasion of Russia that turned to debacle, but it was more than that. For all his genius, he ruled the French Empire like a medieval lord—or a crime boss—placing members of his extended family or his cronies, most of whom lacked competence or even loyalty, on the thrones of the puppet states that served him. His dramatic rise was met with an equally dramatic fall, and he ended his days in exile on a remote island in the South Atlantic, slowly succumbing to what was likely stomach cancer at the age of fifty-one.
Of course, you could learn all of this from the prevailing literature—there are literally thousands of books that chronicle Napoleon—but Zamoyski’s rare achievement is to capture the essential nature of his subject, something that too often eludes biographers. The Napoleon he conjures for us is a basket of contradictions: at once kind, despotic, magnanimous, ruthless, noble, petty, confident, insecure, charismatic, and socially awkward. Zamoyski does not stoop to play psychoanalyst, but the Napoleon that emerges from the narrative often smacks of a narcissist and depressive who frequently rode waves of highs and lows. If nothing else, he was certainly a very peculiar man who was repellent to some just as others were somehow drawn to him irresistibly, a paradox perhaps captured best in this passage recounting the recollections of those who knew him as a young man:
He was out of his depth, not so much socially as in terms of simple human communication: he showed a curious lack of empathy which meant that he did not know what to say to people, and therefore either said nothing or something inappropriate. His gracelessness, unkempt appearance, and poor French … did not help … He could sit through a comedy … and remain impassive while the whole house laughed, and then laugh raucously at odd moments … [He once told] a tasteless joke about one of his men having his testicles shot off at Toulon, and laughing uproariously while all around sat horrified. Yet there was something about his manner that some found unaccountably attractive. [p92]
Zamoyski does not pass judgment on Napoleon, but deftly brings color, form and substance to his sketches of him so that the reader is rewarded with a genuine sense of familiarity with the living man, an accomplishment that cannot be overstated. If there is a flaw, it is that the work is skimpy on the historical backdrop, on the prequel to Napoleon; those not already well-schooled in the milieu of late eighteenth century Europe may be at a disadvantage. But this is perhaps a quibble, for to supply that backdrop competently would have further swelled the size of the book and risked an unwieldy text. On the other hand, there is a welcome supply of many fine maps, as well as copious notes.
Napoleon’s ambition left thousands of dead in his wake, and he left his mark far beyond the Europe he transformed. Modern Egyptology was born out of Napoleon’s military campaign in Egypt; the famous “Rosetta Stone” was among the spoils of war, although it ultimately ended up in British rather than French hands. Napoleon was the force behind the Louisiana Purchase, which effectively doubled the size of the nascent United States. It was the impressment of American seamen during the Napoleonic Wars that was a leading casus belli in the War of 1812, and it was British exhaustion at the conclusion of that conflict that spared the young republic a harsher price for peace. Look closely and you will find Napoleon’s fingerprints nearly everywhere—and you will see them in far greater detail if you treat yourself to Zamoyski’s magnificent biography, which surely does justice to his legacy.
[CORRECTION: the podcast version of this review misidentifies the location of Napoleon’s death as on an island in the Pacific rather than in the South Atlantic, which has been corrected in the written text above.]
While browsing a bookstore sometime in 1982, I picked up a thick hardcover entitled The Years of Lyndon Johnson: The Path to Power, by Robert A. Caro. I had never heard of Caro, but the jacket flap told of his winning the 1975 Pulitzer Prize for biography for his very first book, The Power Broker: Robert Moses and the Fall of New York. I had never heard of Moses either, but in the days before smartphones and Google might let me dig a little deeper, that accolade spoke directly to the author’s reputation. I did—and still do—like to browse bookstores and to read books about American presidents. The twenty bucks I shelled out to buy that book was probably most of the cash I had in my wallet that afternoon, something else that was and remains characteristic of me to this day: given a choice between buying lunch or a new book, I will almost always choose the latter. I mean, I can wait until dinner …
That volume of The Path to Power is 768 pages of small print, not including notes and back matter, of mostly dense material, but Caro’s voice is so commanding that I found myself both absorbed and obsessed. For those who have not read him, it is difficult to describe Caro’s style, which exists somewhere at the confluence of incisive reporting and towering epic, a kind of literary salad that blends the best of Edward R. Murrow and Robert Penn Warren—seasoned with a dash or two of Thucydides—that the reader is driven to devour.
There are great presidential biographers out there—think Robert Remini, David McCullough, Joseph Ellis, Jon Meacham—yet Caro is in a league all his own. And unlike the others, he has not been prolific, devoting the decades since the publication of The Path to Power to just three books, all part of his The Years of Lyndon Johnson saga, one of which—Master of the Senate—is a landmark synthesis of history and biography and politics that won him a second Pulitzer Prize in 2003. Another ten years passed before the release of The Passage of Power, which only just follows LBJ into his first months in the White House. Now an octogenarian still doggedly at work on what is to be the final book in the series, Caro has broken precedent by releasing a slim volume that is a study of the author rather than his subjects.
This latest book, Working: Researching, Interviewing, Writing, is less a memoir than a profile of what Caro has set out to do and how he has approached the process, as neatly summarized by the subtitle. Surprisingly, Caro is not a historian, but instead started off as a journalist who won the respect of an old-fashioned hardboiled editor when his diligence in the field turned up info vital to a storyline. The editor, who had barely acknowledged him before, advised: “Turn every page. Never assume anything. Turn every goddamned page.” That has been his mantra ever since.
Caro is fascinated by power and those who wield it, and especially by the ways power can be obtained and exercised outside of ordinary channels. For instance, his first subject— “master builder” Robert Moses—was never elected to any office, yet at one point simultaneously held twelve official titles and used his accumulated authority to preside over the utter and lasting reshaping of New York City and its suburbs. In his research on LBJ, by turning “every page,” Caro encountered an obscure reference that led him to learn that Lyndon Johnson’s political rise and own personal wealth was closely linked to a long-secret relationship with the principals of Brown & Root, a construction company that built roads and dams and was later enriched by government contracts sent their way by Johnson; in turn, their largesse was to overflow LBJ’s campaign coffers. The rest is—quite literally—history.
A silent partner in Caro’s award-winning achievements has long been his wife Ina, who has quietly devoted her life to aiding his research and managing the household so that he could concentrate entirely on his book projects. In Working, Caro reveals that Ina once sold their home—without telling him—in order to ensure their financial solvency. Another time, when he announced they were moving to the Texas Hill Country for three years to continue his research on LBJ, Ina cracked: “Why can’t you do a biography of Napoleon?” But she went along, without complaint. And Caro makes it clear that Ina was no mere admin or assistant: she often sat across from him at long library tables and turned over half of those “goddamned pages” herself.
By my own calculation, I have read nearly three thousand pages of Robert Caro in his four volumes on Lyndon Johnson. I eagerly and impatiently await the final book. I did not know what to expect from Working, which is closer to memoir than autobiography but truly defies categorization. Most great writers are incapable of talking about themselves without something like bitterness or bravado. Hemingway certainly couldn’t do it. Steinbeck—think Travels with Charley—was better at it, but he tended to conflate fiction and nonfiction along the way. Caro would have none of that. His work has always had a singular focus that has been about the unvarnished facts, about the warts and all, about the inconvenient truths that swirl about the lives of his subjects, and he delivers no more and certainly nothing less when he turns the lens on himself.
Working would be a party favor if written by anyone but Robert Caro. But because he is a magnificent writer gifted with extraordinary insight, it is a kind of minor masterpiece packaged in an undersized edition that is an easy read of less than two hundred pages. If there is a fault, it is the odd inclusion of an interview with The Paris Review from 2016 that is not only superfluous but distracting; I would urge skipping it. But that’s a quibble. Even if you have never heard of Robert Caro yet are fascinated with history and how solid research serves as the foundation to analysis, interpretation and an ever-evolving historiography, you should read this. If you have read Caro’s other books, of course, then you must read this one!
In August 1831, in Virginia’s Southampton County, a literate, highly intelligent if eccentric enslaved man—consumed with such an outsize religious fervor that he was nicknamed “The Prophet” by those in his orbit—led what was to become the largest slave uprising in American history. Nat Turner’s Rebellion turned out to be a brief but bloody affair that resulted in the largely indiscriminate slaughter of dozens of whites—men, women, children, even infants—before it was put down. The failed revolt itself was and remains far less important than its repercussions and the dramatic echoes that resounded decades later during the secession crisis. Rarely would any historian of the American Civil War cite Nat Turner as a direct cause of the conflict—after all, the rebellion took place three decades prior to Fort Sumter—but it is almost always part of the conversation. Turner’s uprising not only reinforced but validated a deep-simmering paranoia of southern whites—who, like ancient Spartans vastly outnumbered by helots, often found themselves a minority amid a larger enslaved population—and spawned a host of reactionary legislation in Virginia and throughout much of the south that outlawed teaching blacks to read and write, and prohibited religious gatherings without a white minister present. And while for those below the Mason-Dixon line it underscored the perils of their peculiar institution, at a time when abolitionism was in its infancy it also served to remind at least some of their northern brethren that the morally questionable practice of owning other human beings was part of the fabric of southern life. Indeed, one could argue that the true dawn of what we conceive of as the antebellum era began with Nat Turner.
For such a pivotal event in the nation’s past, the historiography has been somewhat scant. There is the controversial “confession” that Turner dictated to lawyer Thomas Ruffin Gray in the days between his capture, trial and hanging, which some take at face value and others dispute. But in the intervening years, surprisingly few scholars have carefully scrutinized the rebellion and its legacy, which remains far better known to a wider audience from William Styron’s Pulitzer Prize-winning novel The Confessions of Nat Turner than from the analytical authority of credentialed historians.
A welcome remedy can be found in The Land Shall be Deluged in Blood: A New History of the Nat Turner Revolt, a brilliant if uneven treatment of the uprising and its aftermath by Patrick H. Breen, first published in 2016, that likely will serve as the academic gold standard for some time to come. While giving a respectful nod to the existing historiography—which has tended to breed competing narratives that pronounce Turner hero or villain or madman—Breen, an Associate Professor of History at Providence College, instead went all in by conducting an impressive amount of highly original research that locates the revolt within the greater sphere of the changing nature of the institution of slavery in southeastern Virginia in the early 1830s, which as a labor mechanism was in fact in a slow but pronounced decline. Nat Turner and his uprising certainly did not occur in a vacuum, but prior to Breen’s keen analysis, the rebellion was generally interpreted out of its critical context, which thus distorted conclusions that often pronounced it an anomaly nurtured by a passionate if deranged figure. For the modern historian, of course, this is not all that shocking, since the uncomfortable dynamics found in the relationships of the enslaved with wider communities of whites and other blacks (both free and enslaved) have until recent times typically been afforded only superficial attention or been entirely overlooked. It is nevertheless surprising—given the notoriety of the Turner revolt—that until Breen there was such a lack of scholarly focus in this arena.
The book has eight chapters but there are three clear divisions that follow a distinct if sometimes awkward chronology. The first part traces the start and course of the rebellion and presents the full cast of characters of conspirators and victims. The second is devoted to subsequent events, including both the extrajudicial murder by whites of blacks swept up in the initial hysteria spawned by the revolt, as well as the carefully orchestrated trials and executions of many of the participants. The final and shortest section concerns the fate of Nat Turner himself, who evaded capture for two months—long after many of his accomplices had been tried and hanged.
The general reader may find the first part slow-going. The story of the revolt should be an exciting read, especially given the passion of prophecy that consumed Turner and the violence that it begat with its slaughter of innocents by an unlikely band of recruits whose motives were ambiguous. Instead, the prose at times is so dispassionate that the drama slips away. In my opinion, this is less Breen’s fault—he is, after all, a talented writer—than the stultifying structure of academic writing that burdens the field, the unfortunate reason why most best-selling works of history are not written by historians. But I would encourage the discouraged to press on, because the effort is intellectually rewarding; the author has deftly stripped away myth and legend to separate fact from the surmise and invention pregnant in other accounts. If there can be such a thing as a definitive study of the Nat Turner rebellion, Breen has delivered it.
It is clear from the character of the narrative that follows that Breen’s true passion lies in the aftermath of the revolt, where he serves as revisionist to what has long been taken for granted as settled history. This is as it should be, because it was the repercussions of the rebellion and the way it was remembered (north and south) in the thirty years leading up to secession that were always of far greater importance to history than the uprising itself. And it is unfortunately this echo—much of which has been unsubstantiated—which has tainted later scholarship. The central notion that prevailed, which Breen challenges, is that the reaction to Nat Turner was a widespread bloodbath of African Americans by unruly mobs who suspected that all blacks were complicit, or who were simply driven by revenge. The other, also disputed by Breen, is that whatever trust might have once existed between white masters and the enslaved had forever evaporated, the former ever in fear that the latter were secretly plotting a repeat of the Turner episode. Finally, Breen takes issue with the view of many historians that the authorial voice in Turner’s “confession” is unreliable because it was dictated to a white man who was guided by his own agenda when he published it.
Breen refutes the first by lending scrutiny to the empirical evidence in the extant records of the enslaved population. A little general background for the uninitiated here: the enslaved were treated as taxable chattel property in the antebellum era, so meticulous records were kept and a good deal of that survives. Many slave-owners insured their human “property,” often through insurance companies based in the north. If an enslaved person was convicted of a capital crime, the state compensated the slave-owner for the executed offender. Breen, as a good historian, simply reviewed the records to determine if prevailing views of the rebellion’s aftermath were accurate or exaggerated. What he learned was that there was indeed much hyperbole in reports of widespread massacres of African Americans. Yes, certain individuals and militias did commit atrocities by murdering blacks, and sometimes torturing them first. But the numbers were vastly overstated. And local officials quickly put a stop to this, motivated perhaps far less by ethical concerns than by an effort to protect valuable “property” from the extrajudicial depredations of the mob, since the owners of those murdered would not then be duly compensated. Breen should be commended for his careful research—which demonstrates that long-accepted reports of mass murder are simply unsupported by the records—yet it seems astonishing that those who came before him failed to follow the same road of due diligence that he traveled. This should underscore to all budding historians out there that much solid historical work remains ahead, even and especially in otherwise familiar areas like this one where what turns out to be a flawed analysis has long been taken for granted as the scholarly consensus.
This business of assigning value to chattel human property is uncomfortable stuff for modern students of this era, but as those who have read The Price for Their Pound of Flesh, Daina Ramey Berry’s outstanding treatment of the topic, will know, it is absolutely essential to understanding how slavery operated in the antebellum south. The Land Shall be Deluged in Blood steps beyond the specifics of Nat Turner to offer a wider perspective in this vein, as well. The enslaved were often subject to the arbitrary sanctions of their masters, but those accused of capital crimes were technically granted a kind of due process of law. Breen points out that special courts of “Oyer and Terminer” that lacked juries—the same kind that convicted and hanged those accused of witchcraft in Salem—were ordained in Virginia to judge such cases. Initially enacted to expedite the trial process of the enslaved, the courts—captained by five magistrates who were typically wealthy slave-owners, and which duly supplied defense attorneys to the accused—came to have the opposite effect, convicting only about a third of those brought before them. [p108] Much of the reason for these results seems connected to an effort to limit the state’s cost of compensating the owners of those sent to the gallows for their crimes.
It turns out that these same courts also had a tempering effect on the trials of those accused of taking part in the rebellion. But this time, it wasn’t only about the money. Breen argues convincingly that the elite magistrates who controlled the trial process also created and marketed to the wider community a reassuring narrative that the uprising was a small affair involving only a small number of the misguided. In the end, eighteen were executed, more than a dozen were transported and there were even some acquittals. Thus, state liability was limited, and the peculiar institution was protected.
That reassurance seems to have been effective: freedom of movement for the enslaved subsequent to the revolt was not as constrained as some have maintained, as evidenced by the fact that Nat Turner was discovered in hiding and betrayed by other enslaved individuals who were hardly prohibited from wandering alone after dark. By the time Nat Turner was captured and executed, the rebellion was almost already history. As to the veracity of Turner’s “confessions” to Gray, Breen makes a compelling argument in support of Turner’s words as recorded, but that will likely remain both controversial and open to interpretation. So too will the person of Nat Turner. The horror of human chattel slavery might urge us to cheer Nat and his accomplices in their revolt, while the murder of babies in the course of events can’t help but give us pause. Likewise, we might harshly judge those white slave-owners who dared to judge them. But, of course, that is not the strict business of historians, who must sift through the nuance and complexity of people and events to get to the bottom of what really happened, warts and all.
I first learned of The Land Shall be Deluged in Blood when I sat enthralled by Breen’s presentation of his research at the Civil War Institute (CWI) 2019 Summer Conference at Gettysburg College, and I purchased a copy at the college bookstore. While I have some quibbles with the style and arrangement of the book, especially with the strict adherence to chronology that in part weakens the narrative flow, the author has made an invaluable contribution to the historiography with what is surely the authoritative account of the Nat Turner Rebellion. This is and should be required reading for all students of the antebellum era.
NOTE: My review of The Price for Their Pound of Flesh is here:
There’s an abiding irony to the fact that the United Nations, formed in the wake of a catastrophic global war to keep the peace, instead gave sanction to the first and most significant multinational armed conflict since World War II, not even five full years after Japan’s capitulation. It never would have happened had Stalin not ordered Soviet delegates to boycott that Security Council session in protest over the seating of Chiang Kai-shek’s government-in-exile on Taiwan instead of Mao’s de facto People’s Republic of China. It might never have happened if United States President Truman had not been under enormous political pressure due to a hysterical campaign of right-wing outrage known as “Who Lost China” born out of Mao’s surprise victory in 1949, the same year that the Cold War grew much hotter when the Soviets successfully tested an atomic bomb, and fears of global communist domination were magnified. It probably never would have found the support of so many other nations if the memories of appeasement of Hitler were not still so fresh and compelling.
“It”—of course—was the Korean War, which took place on a wide swath of East Asian geography and remains unresolved to this very day. Historically, the Korean peninsula hosted at various times both competing kingdoms and a unitary state but was always dominated by its more powerful neighbors: China, Russia and Japan. In 1910, Japan annexed Korea, and an especially brutal occupation ensued. Following the Japanese defeat, the peninsula was divided at the 38th parallel into two zones administered in the north by the Soviet Union and in the south by the United States. Cold War politics enabled the creation of two separate states in the two zones, each hostile to the other. In June 1950, the Soviet-backed communist regime in the north invaded the pro-western capitalist state in the south, which spawned a UN resolution to intervene and launched the Korean War. At first South Korea fared poorly, but an American-led multinational coalition eventually pushed communist forces back across the 38th parallel. The fateful decision was then made by the Truman Administration to pursue the enemy and expand full-scale combat operations into North Korea. This brought China into the war, and a long bloody struggle to stalemate ensued. Like a weird Twilight Zone loop, more than sixty-six years later a state of war still exists on the peninsula, and Kim Jong-un—the erratic supreme leader of a now nuclear-armed North Korea who regularly taunts the United States—is the grandson of supreme leader Kim Il-sung, whose invasion of the south sparked the conflict!
The origins, history and consequences of the Korean War make for a fascinating story that—especially given both its scope and its dramatic contemporary echo—has received far less attention in the literature than it deserves. Unfortunately, Michael Pembroke’s recent attempt, Korea: Where the American Century Began, contributes almost nothing worthwhile to the historiography. This is a shame, because Pembroke—a self-styled historian who currently serves as a judge of the Supreme Court of New South Wales, Australia—is a talented writer who seems to have conducted significant research for this work. Alas, he squanders it all on what turns out to be little more than a lengthy philippic that serves as a multilayered condemnation of the United States.
As the subtitle suggests, Pembroke’s bitter polemic is directed not only at US intervention in Korea, but at the subsequent muscular but misguided American foreign policy that has begotten a series of often pointless wars at a terrible cost in blood and treasure not only for the United States but also for the allies and adversaries in her orbit. Many—including this reviewer—might be in rough agreement with a good portion of that assessment. But the author sacrifices all credibility with a narrative that repeatedly acts as apologist for Mao, Kim Il-sung and even Stalin! For Pembroke, Truman takes on the outsize stature of a bloodthirsty monster who is not satisfied with the hundreds of thousands he vaporized at Hiroshima and Nagasaki, but is willing and even eager to sacrifice millions more in order to achieve his nefarious goal of global domination. Stalin and Mao, on the other hand, simply had their reasons, and were often misunderstood. Left unexplained is why, invested with that motivation and given that the United States in that era had overwhelming strategic nuclear and conventional superiority, Truman and his successors chose not to deploy that capability to pave a dramatic sanguinary road to hegemony.
To my mind, America’s war in Korea was a calamitous misstep, further exacerbated by the escalation that ensued with the crossing of the 38th parallel after achieving the initial objective of driving communist forces from the south. And one could make a good argument that none of the seemingly endless conflicts the United States has engaged in since that time was worth the life of a single American serviceman or woman. Yet, it is a hideous distortion to unfavorably juxtapose America—warts and all—with the endemic mass murder of Stalin’s Soviet Union. History, as I have often noted, is a matter of complexity and nuance, a perspective that seems utterly alien to Michael Pembroke in a book that is neither a history nor an analysis but simply an almost breathless diatribe that reduces characters to caricature and events to a bizarre comic book style of exposing villainy—but in this case all the villains happen to be American.
Because I received this book as part of an early reviewer’s program, I felt an obligation to plod through it to the very last page. In other circumstances, I would have abandoned it far, far earlier. As a reviewer, rarely would I suggest that a work has absolutely no value to a reader, but here I will make an exception: the best-case scenario for this book is for it to go out of print.
The best book I ever read about Theodore Roosevelt was actually about a river, with T.R. in a supporting role. By lending focus to just a single episode in the colorful drama of his remarkable life in The River of Doubt, Candice Millard’s insight and gifted prose delivered a superlative study of the existential Roosevelt that has often eluded biographers, while recounting the little-known challenge of his sunset years that nearly broke him.
Millard brings a similar technique to her third and most recent effort, Hero of the Empire: The Boer War, a Daring Escape and the Making of Winston Churchill. With pen dipped in the inkwells of careful scholarship as well as great storytelling, the author adroitly marries history and literature to deliver an unexpectedly original and fascinating tale that reads like something from Robert Louis Stevenson. If there are similarities to her earlier work, there is also a twist, with the storied figures in nearly inverse circumstances. Rather than the late-in-life challenge that nearly does the central character in, this is the chronicle of a young man’s extraordinary adventure that was to launch his long celebrity.
Not that Churchill was ever really anonymous. But first: is it even possible to imagine a young Churchill? Think of the man and what comes to mind is the steely but beefy, even rotund British leader who was already all of sixty-five years old when he became Prime Minister at the onset of World War II, after many decades both in and out of power. (And he was to live yet another two decades after Hitler’s defeat, again both in and out of power!) But the Churchill of Hero of the Empire is a slight fellow in his early twenties with an outsize ego and seemingly boundless ambition who talks too much and annoys most of those in his orbit. Yet, even then, he was hardly unknown, born into the upper echelons of the aristocracy, scion of a famous father who committed a kind of political suicide before his own early death, and of the celebrated and sometimes notorious American beauty Jennie Jerome, a brilliant iconoclast legendary for her many lovers. Before the action unfolds in Hero of the Empire, the twenty-four-year-old Winston had already traveled much of the world, had a brief career as an army officer, served as a war correspondent, published two books, and made an unsuccessful run for Parliament.
Anticipating what would become known as the Second Boer War and determined to be in the thick of the fray, in 1899 Churchill obtained credentials as a journalist and set off for Cape Town, then on to Ladysmith amid fierce hostilities. Journalist or not, when his train came under Boer attack, he took the lead and mounted a heroic defense that, although it ultimately ended with his capture, is credited with saving countless lives of those aboard, most of whom were in uniform. His time as prisoner of war and his bold escape are the central focus of the narrative.
Telling this story as well as Millard does might well be achievement enough, but this book succeeds far beyond that because the author not only brings a singular authenticity to her portrait of Churchill, but also to the wider canvas of the milieu that was England, the British empire, and the Boer republics at the turn of the century. This is especially impressive because Millard comes to her craft not as a trained historian but with a master’s degree in literature, although there is no lack of citations to underscore the meticulous research that is the foundation of her work.
Millard’s account of Churchill’s escape from prison in Pretoria is no less than thrilling, tracing his footsteps as he wandered alone in unknown territory, stowed away on freight trains, and even concealed himself for a time in the bowels of a mine. Eventually he made it to safety, hundreds of miles away at what was then Portuguese East Africa. The British public followed Churchill’s exploits with great excitement, and at war’s end he returned home to wide acclaim. His next attempt at Parliament met with success; his long career in politics and public service had begun.
What would any Churchill book be without the anecdotes born of his eccentricities? Hero of the Empire has its share, especially as it recounts his captivity, where he demonstrated that regardless of his circumstances he was and ever would be a creature of the elite. So it was that as P.O.W. Churchill nevertheless regularly indulged in fine wines, traced troop movements on wall-size maps, and was only missed after his audacious escape because the local barber he had hired refused to be turned away by fellow prisoners when the time came for his regularly scheduled haircut!
Churchill has fallen out of favor with large portions of our modern audience. His racism, his imperialism, and his misogyny are all somewhat cringeworthy nearly one hundred fifty years after his birth. And it is not all political correctness: many of his views were well out of step with others more enlightened in his own era. At the same time, warts and all, Churchill was indeed a great man. It is impossible to imagine England under the siege of the Nazi war machine without Churchill cheering the Brits on, collaborating with FDR, demanding the sacrifice of the nation, and issuing his clarion call to “Never, never, never give in.” The character, the determination, the heroism, the steadfastness of that iconic figure are already manifest in the form of that spindly young overconfident fellow brought back to life for us once more in the pages of this fine book. There are indeed too few characters like Winston Churchill to animate our history, and far too few writers like Candice Millard to deliver such readable accounts of past times.
From the start of the Civil War, enslaved African Americans sensed the opportunity for freedom as Union forces seized territory at the outer margins of seceded states. Initially, there was the odd phenomenon of officers in blue uniforms turning over escapees to their slave masters. But all that changed in 1861 at Fort Monroe, at the southern tip of the Virginia Peninsula, when the famously chameleonlike General Benjamin Butler refused to return the three enslaved men who fled to his lines. Butler himself, at least at this stage of his life, could not have cared less about blacks, slave or free, but reasoning that the Fugitive Slave Act no longer applied to the seceded states, and observing that every enslaved person serving as support behind Confederate lines freed up a white soldier to fire upon Union ranks, Butler ruled that such escapees be treated as “contrabands” of war and confiscated. Contraband was an unfortunate term that equated the enslaved with property instead of people, but it nevertheless stuck—but then so too did Butler’s policy, which only a few months later was enshrined by Congress in the Confiscation Act of 1861.
What began as a trickle to Butler’s fort turned into a veritable flood that eventually was to bring something like a half-million formerly enslaved people to seek shelter with the Union army over the next four years. About one-fifth of these would later serve, often heroically, as soldiers in the United States Colored Troops (USCT), but what about the other roughly four hundred thousand? What became of them? If their fate never occurred to you before, it is because the story of this huge, largely anonymous population has remained conspicuously absent from much of the vast historiography of the Civil War—at least until Amy Murrell Taylor’s brilliant, groundbreaking recent book, Embattled Freedom: Journeys through the Civil War’s Slave Refugee Camps.
Fleeing to Union lines was only possible if the army was in your vicinity, which put this option out of reach to much of the south’s enslaved population. That approximately one-seventh of the Confederacy’s enslaved population of 3.5 million fled to the surmised safety of Union lines when this limited opportunity knocked gives the lie to the notion that the “peculiar institution” was benign and that the majority of the enslaved were satisfied with their lot—a sadly resurgent fiction promoted by “Lost Cause” apologists that has again found an unfortunate home within contemporary political discourse. These 500,000 men, women and children—and yes, Taylor learned, there were indeed significant numbers of children—were of course not “contrabands” but refugees, as that term was understood both then and now. And they fled, usually in great peril, with little more than the rags on their backs, to what may have been a promise of freedom but also an unknown future fraught with difficulty.
What would become of them? It turns out that rather than a single shared outcome there was a variety of experiences that depended upon geography, the fortunes of war, and the arbitrary rule of local commanders. Neither the Union army nor the civilian north was prepared for the phenomenon of hundreds of thousands of black refugees, and the result was often not favorable to those who were the most vulnerable. At the dawn of the war, abolitionists still comprised only a tiny minority in the United States. Most of the north remained deeply racist, and those championing “free soil” generally had little concern for the welfare of African Americans on either side of the Mason-Dixon line. This reality informed policy, which even when well-intentioned tended to be patronizing, and was in fact frequently ignored. Embattled Freedom describes how orders were issued mandating both payments and provisions for refugees, who if physically capable were expected to provide the kind of support to the army as paid laborers that they might otherwise have given to the Confederate effort as slaves. But in practice, they were rarely paid, their wages euphemistically diverted to the “general welfare,” or simply stolen by dishonest opportunists. And military necessity trumped all: there was a war on, blood was being shed, and the existential future of the nation was at stake. Refugees would ever remain a lower priority, at the mercy of the corrupt or the indifferent. Rarely consulted, refugees had decisions made for them that often proved less than ideal. The author treats us to a number of examples of this, but perhaps the most ironic is the campaign by well-meaning missionaries to equip refugee shelters with windows, when their occupants assiduously eschewed these for the sake of privacy and security.
Then there was the case of the Emancipation Proclamation, which freed the enslaved in Confederate-controlled territory, but paradoxically did not apply to areas controlled by the Union army. Only a rather obscure directive that would cashier any soldier returning a person to slavery served as an unlikely safety net for refugees. More significantly, there was the border state of Kentucky, which when it opted not to join the Confederacy became the largest slave state in the Union, something that endured until the Thirteenth Amendment was ratified, well beyond the end of the war. Refugee camps in Kentucky were ringed by slaveowners; wandering outside of camp could result in capture and enslavement that could be nearly impossible to dispute by a black person in a state where slavery was both legal and widespread.
Elsewhere, refugees ever lived at risk in what can only be described as uncertain sanctuaries. Camps evolved into “freedman’s villages”—replete with churches, schools, stores and tidy public squares—that sprang up at the edges of Confederate territory occupied by Union troops, but long-term security was tenuous, dependent entirely on these garrisons. If the army was redeployed, refugees were suddenly thrust into great danger and forced to flee once more lest they be captured and returned to slavery by roving bands of locals. It is well documented that Confederates habitually executed USCT troops wounded or seeking surrender. Less familiar perhaps was the devastation visited upon these undefended villages by rebels and their partisan allies enraged at the formerly enslaved living in freedom in their midst. Hunger often accompanied the refugee, even in the best of circumstances; a camp or village razed and burned could portend starvation.
The end of the war and abolition seemed to suggest a new beginning, but optimism was short-lived. Lincoln’s untimely death sent Andrew Johnson to the White House. The new president was deeply hostile to African Americans, and ensuing years saw pardons issued to former CSA political and military elites, property returned to once dislodged slave masters, and refugees terrorized and murdered, ultimately driven off the lands that once hosted thriving freedman’s villages. Where can you see a freedman’s village today? You can’t: they were all plowed under, sometimes along with the bones of occupants less than willing to be displaced.
Embattled Freedom is an especially valuable resource because it contains not only a panoramic view of the refugee experience but an expertly narrowed lens that zooms in upon a handful of individuals that Taylor’s careful research has redeemed from obscurity. Especially fascinating is the saga of Edward and Emma Whitehurst, an enslaved couple that had managed over time to stockpile a surprisingly large savings through Edward’s side work, in a unique arrangement with his owner. Fleeing slavery, the entrepreneurial Whitehursts turned their nest egg into a highly successful and profitable store at a refugee camp in Virginia—only to one day lose it all to retreating Union forces desperate for supplies. There is also the inspiring story of Eliza Bogan of Helena, Arkansas, who as a refugee left the harsh existence of picking cotton behind, only to endure one obstacle after another in her pursuit of life as a free woman in uncertain circumstances. There are other stories, as well. These personal studies not only enrich a well-written narrative, but ever engage the reader well beyond the typical scholarly work.
A week after I finished reading Embattled Freedom, I sat in the audience during Amy Taylor’s presentation at the Civil War Institute Summer Conference 2019 at Gettysburg College, which highlighted both her passion and her scholarship. During the Q&A, I asked what surprised her most during her research. Hard-pressed to answer, she finally settled on the number of children that turned up in the refugee population. I would suggest that as a topic for her next book. In the meantime, drop everything and read Embattled Freedom. You will not regret it.
A few years ago, I had the honor of being selected for a key role on a team engaged in scanning, transcribing and digitizing a trove of recently rediscovered letters, diaries and narratives of the Massachusetts 31st Infantry, which turned up more than a century after they were compiled by their regimental historian but left unpublished. In a lifetime of studying the American Civil War, soldiers’ letters were hardly new to me, of course, but I found myself surprisingly emotional as I became one of the very first in many decades to glimpse the sometimes-hidden hearts of these long-dead souls. And there was something else: rather than the random excerpt, often highlighted for its dramatic impact, that makes a familiar appearance in the pages of history books, these materials represent continuous strands of communication from nearly two dozen individuals, some stretching over a three-year period. The stories they tell run the gamut from the mundane to the comedic to the horrific, but collectively the nature and the personalities of the storytellers emerge to reveal an authenticity of experience too frequently lost in grand narratives about the war. A careful read of a man’s letters home over several years often unexpectedly exposes truths that the correspondent omitted or deliberately distorted.
This overarching point is subtly but expertly made again and again in historian Peter S. Carmichael’s magnificent work, The War for the Common Soldier: How Men Thought, Fought and Survived in Civil War Armies, certainly one of the most significant recent contributions to the historiography. As primary sources, surviving letters from the front are critical and invaluable, but even more critical is their interpretation, which can go astray when letters are taken at face value, plucked out of context, or when the reader is seduced by the words of a man who wants his wife or mother—or especially himself—to believe that he is courageous or confident or committed to his cause when only some or none of that may be true.
In a dense but highly readable account that brings a surprisingly fresh perspective to a frequently overlooked aspect of Civil War studies, Carmichael defies the generalizations of soldiers north and south that tend to predominate in the literature, reminding the reader that a tendency to oversimplification distorts the reality on the ground. Some 2.75 million men fought on both sides in the Civil War. These were living, breathing human beings, not simply the statistical figures fed into databases to produce the broad generalities pervasive in many narratives. At the same time, he does not fail to locate and identify the commonalities in the rank and file across multiple arenas, but his skillful approach to this end is guided by the nuance and complexity that are the mark of a great historian.
Carmichael’s well-written chronicle explores almost all aspects of a soldier’s life in camp, on the march and in battle, but that nuance is made most manifest in the chapter entitled “Desertion and Military Justice.” Accepted wisdom has long held that bounty jumpers constituted the majority of those shot for desertion over the course of the war, and perhaps with some justification. But while the numbers underscore that there were plenty who likely fit that profile, Carmichael’s research demonstrates that such a broad brush obscures a reality that saw men on both sides leaving the lines and returning, frequently more than once, and typically with little or no penalty. This was especially common among Confederates, who usually fled not out of cowardice or convenience but rather to aid starving families back home desperate for survival. And there was, in many cases, a fine line between AWOL and desertion. It is surprising how often luck or simply the vagaries of enforcement separated the men made to sit on their own coffins, eyes bandaged, while the firing squad formed up from those merely docked a month’s pay. It does seem that Lincoln’s moral compass was more finely attuned to the circumstances of the soldier missing from his company—even if this caused friction among the Union brass—than was the case on the other side, for by percentage far more men clad in gray were put to death than those in blue, and some of these were mass executions before the lines. What is clear is that on both sides, the common soldier—even the veteran accustomed to the gore and slaughter of battle—was deeply disturbed when compelled to witness the cold-blooded killing of a fellow soldier, even if he thought the man got his just deserts.
A review such as this cannot possibly touch upon all of the themes Carmichael surveys in this outstanding study, but I was especially drawn to his treatment of the phenomenon of malingering, which instantly found a familiar face in Cpl. Joshua W. Hawkes, one of my men from the 31st, who bragged in letters to his mother about his health while he served away from the cannon fire as part of the occupation army in New Orleans, even taking swipes at those pretending to be ill to avoid duty. Yet later, on the very eve of combat, he fell victim first to “diarrhoea” and then to a bewildering set of ever-shifting complaints that kept him confined to a hospital bed for months until he was eventually discharged for disability. I read this man’s letters in isolation, of course, but Carmichael’s impressive research demonstrates not only that this soldier’s manufactured symptoms put him in the company of thousands of other “shirkers,” but also underscores how difficult it was for doctors equipped with the primitive diagnostic tools of mid-nineteenth-century medicine to distinguish the truly afflicted from those talented at feigning illness to avoid combat or earn a discharge. As a result, some men who genuinely suffered were sent back to come under enemy fire, while others who were quite healthy succeeded in dodging the same.
Some years after my project with the 31st, I was given access to a private collection of unpublished letters from George W. Gould, a Massachusetts private killed at the bloody battle of Cold Harbor in 1864. I transcribed his correspondence and created a website for public access to honor him, and I visit his grave in Paxton, MA several times a year. When I placed a flag on his grave to commemorate Memorial Day 2019, I found myself in somber reflection on not only the sacrifice of Private Gould, but also the vast territory covered in The War for the Common Soldier, because although his name appears nowhere in the narrative, this book is surely about George W. Gould and every man who marched alongside him, as well as every man he marched against with musket held high. Pvt. George W. Gould and Cpl. Joshua W. Hawkes are just two of the millions who either gasped their last breaths on Civil War battlefields or drank beer at memorials in the decades that followed. If you want to understand that terrible war, you should indeed visit battlefields and explore the latest historiography, but you should also pause to read Carmichael’s superlative work. The truth is that you will never comprehend the Civil War until you come to understand the Civil War soldier. Some books should be required reading. This is one of them.
[REVIEW ADDENDUM: Some years back, I had the great honor of being selected for a key role on a team engaged in scanning, transcribing and digitizing a trove of recently rediscovered letters, diaries and narratives of the Massachusetts 31st Infantry—a regiment that first served with Benjamin Butler as an occupying force in New Orleans, and later as part of the Red River campaign under Nathaniel Banks—which turned up in the archives of the Lyman & Merrie Wood Museum of Springfield History more than a century after they were compiled by their regimental historian but left unpublished due to his untimely death. These materials can be accessed at: https://31massinf.wordpress.com
I found Carmichael’s treatment of malingerers especially fascinating, because it related to my own work with the Massachusetts 31st and Cpl. Joshua W. Hawkes, who in letters to his mother made dozens of references to his generally good health during the first portion of his service, where he thrived as part of the occupying force under Benjamin Butler in New Orleans. In one missive from the autumn of 1862 [letter 10/18/62], he even bragged about how quickly he recovered from the “ague” while taking a swipe at those who pretended to be ill, noting that while he was “back to duty now there is so much playing off sick I do not wish any such name.” Ironically then, in April 1863, on the eve of what would have been his first foray into combat, [letter 04/17/63] Hawkes was beset with “diarrhoea” [SIC] which eventually led to his return to New Orleans, this time to the St. James Hospital, where a bewildering set of ever-shifting complaints kept him confined—but not incapable of eating fairly well, such as “an egg in the morning, a piece of toasted bread each meal and a little claret wine,” [letter 6/4/63] and occasionally exploring the city when granted a pass—until he eventually succeeded in gaining a discharge for disability in July 1863. In one of his more histrionic letters to mother, he proclaims:
“I am perhaps disposed to magnify my ails, but when I have seen men brought in here who had been forced to march with diarrhoea [SIC] … coming here too weak to walk and living but a week or two, then I have thought it was not best to beg to be sent away to the exposures of an army on active duty in the field. They can call me a coward, a shirk, what they choose, but I think it a duty to take care of my health not only for myself but on my mother’s account, what do you think of this logic?” [letter 06/04/63]
Apparently, this “logic” served Hawkes well, since he was sent home without ever coming under enemy fire and lived on until 1890!
Some years after my project with the 31st, I was given access to a private collection of the unpublished letters of Pvt. George W. Gould, who was killed at the bloody battle of Cold Harbor in 1864. He has come to serve as my “adopted” Civil War soldier, so by honoring him I likewise honor all of those who have made the ultimate sacrifice. I scanned and transcribed his letters and created a website to honor him, which can be accessed at: https://resurrectinglostvoices.com
I have attached this addendum not because these particular soldiers who fell or survived have a greater or lesser import than any of the other hundreds of thousands who served in the American Civil War, but rather to add meaningful context, and to underscore the essential point of Carmichael’s wonderful book, which is that you must read far more deeply into what these men had to say in their letters home if you really want to try to understand the war at all.]
Tasmanian author Richard Flanagan has written seven novels, one of which—Gould’s Book of Fish—I would rank among the very finest of twenty-first century literature to date. I primarily read books of history, biography and science these days, but I do stray into the realm of fiction from time to time. When I happen upon a writer whose literary output not only consistently transcends the best published fiction of its day, but is so iconic that it comes to define its own genre—Cormac McCarthy and Haruki Murakami also come to mind—I latch on to that novelist and set out to read their full body of work. With Wanting, I have now completed all of Flanagan’s novels, and it turns out that I saved one of the very best for the very last.
There is irony here, because I long resisted reading Wanting, based upon its off-putting description on Flanagan’s Wikipedia page—“Wanting tells two parallel stories: about the novelist Charles Dickens in England, and Mathinna, an Aboriginal orphan adopted by Sir John Franklin, the colonial governor of Van Diemen’s Land, and his wife, Lady Jane Franklin”—which struck me as a formula for fictional disaster! It turns out that I could not have been more wrong.
While several of Flanagan’s novels include characters from history, it would not be accurate to tag these as historical fiction, the way that category is generally understood. But then, the author’s work often defies classification. Flanagan is all about redefining genres—or creating new ones. Think Gabriel Garcia Marquez, John Irving, André Brink: Richard Flanagan truly belongs in that league.
The real Sir John Franklin did indeed serve as Lieutenant Governor of Van Diemen’s Land (today’s Tasmania), but he is better remembered as the arctic explorer who met a tragic end in 1847 in a disastrous attempt to chart the Northwest Passage, when his ships became icebound, resulting in his death as well as that of his entire crew. The legend of the lost expedition he commanded, and the true fate of his crew, have been the subject of much speculation right down to the present day, and Franklin has often been lionized for his heroism. But the John Franklin of Wanting is not merely less heroic; he is a grotesque, self-absorbed, disturbing individual. Franklin and his equally narcissistic wife, Lady Jane—desperate for a child of her own—ignore prevailing taboos to adopt Mathinna, also a historic figure, one of the few full-blooded aborigines still remaining on the island after a sustained reign of terror by colonial settlers and a succession of epidemics had reduced their numbers to near extinction. What at first glance smacks of altruism masks more questionable desires in each of the Franklins—their brand of “wanting”—that Mathinna comes to fulfill, or fails to fulfill. The tragedy of Mathinna is brilliantly revealed through the nuance and complexity of a masterfully written narrative that subtly draws the reader in to expose a series of horrors hidden among the mundane, ever chilling yet never stooping to the gratuitous.
As if these characters and themes were not sufficiently complicated for any work of fiction, the novel contains an equally compelling parallel tale, told in alternating chapters, of author Charles Dickens in London, some ten thousand miles away. The connection of the Franklins to Dickens came through a visit by Lady Jane to the famed novelist, seeking his support. In the years after her husband was lost to the Arctic, Lady Jane devoted her life both to memorializing him and to sponsoring expeditions to locate him, in the feeble hope that he survived. Then evidence emerged that Franklin was in fact dead, hinting that in their last gasps he and the crew had resorted to cannibalism to survive. Franklin’s widow will have none of it, and she enlists the aid of England’s most celebrated figure to defend Franklin’s honor against such horrid innuendo. Dickens, a Victorian rags-to-riches miracle who is both brilliant and wildly successful yet morose and dissatisfied, haunted by the death of a favored child and locked in a loveless marriage, is plagued by his own sort of “wanting.” The intersection of his deepening well of discontent and Lady Jane’s determination to restore her husband’s reputation serves as the linchpin of the novel, spawning new purpose in Dickens even as Lady Jane basks in anticipation of the martyred explorer’s vindication. Dickens is far more intelligent and far more accomplished than either of the hapless Franklins, but despite his genius and outsize public persona he shares a similar unmistakable shallowness of nature. In Flanagan’s Wanting, Dickens struggles to exist outside of the characters in his novels, and then takes it upon himself to produce, direct and cast himself in a stage role that permits him to stand before an audience as the heroic, romantic figure he longs to be.
Fiction reviews should largely avoid spoilers so I will leave it here, but history buffs will certainly google the main characters to learn what really happened. It won’t be giving much away to note that six years after Wanting was published in 2008, the wreck of the HMS Erebus—one of Franklin’s ships—was discovered, and two years after that his second ship was found, the HMS Terror, said to be in pristine condition. Even prior to that, evidence that cannibalism was in fact part of the crew’s final days was substantiated, contradicting both Lady Jane and the ardent defense mounted by Dickens. I will withhold the fate of poor Mathinna, other than to note that her gripping story—in the novel and in real life—will likely shadow the reader long after the last page of this book is turned.
I believe that every fiction review should include a snippet of the author’s own pen for those unfamiliar with their style and talent. This bit concerns a minor character—if any of Flanagan’s characters can be said to be minor ones—an aging actress in Dickens’ London:
On the night she had received the news of Louisa’s death, leaving her the only surviving member of her family, Mrs Ternan had stifled her weeping with a pillow so her daughters would not hear her heart breaking and would never suspect what she now knew: that every death of those you love is the death also of so many shared memories and understanding, of a now irretrievable part of your own life; that every death is another irrevocable step in your own dying, and it ends not with the ovation of a full house, but the creak and crack and dust of the empty theatre. [p90]
That powerful excerpt is just a tiny sample of Flanagan’s superlative prose. Wanting ranks among his finest novels, which in addition to Gould’s Book of Fish should also include Death of a River Guide and The Narrow Road to the Deep North, although there is not a bad one in the catalog. For the uninitiated who would like to experience Flanagan’s art, Wanting is a great place to start. Perhaps you may find yourself, like this reviewer, going on to read them all.
As a reader, some of my most serendipitous finds have been plucked off the shelves of used bookshops. Such was the case some years ago with Cradle of Life: The Discovery of Earth’s Earliest Fossils, by J. William Schopf, a fascinating account of how the author in 1965 was the first to discover Precambrian microfossils of prokaryotic life in stromatolitic sediments in Australia’s Apex chert dated to 3.5 billion years ago, the oldest confirmed evidence for life on earth at the time. My 2017 review of Cradle of Life—nearly twenty years after it was first published—sparked an email exchange with Bill Schopf that later led to his sending me a signed edition of his most recent book, Life in Deep Time: Darwin’s “Missing” Fossil Record. He did not ask me to read and review it, but naturally I did.
In this work, Schopf—an unusually modest man of outsize accomplishment—typically credits good fortune rather than his own estimable talents, often emphasizing the centrality of teamwork in the pursuit of sound science, and frequently paying tribute to the notion that each discovery and its discoverers are, after all, “standing on the shoulders of the giants” who preceded them. A young grad student when he first got into the game, the author, now seventy-seven, remains the most significant living survivor of those paleobiologists who devoted decades to identifying and substantiating traces of the most ancient forms of life on the planet. He feels the clock ticking, and is thus strongly motivated by a desire to leave a record of the journey that led to such consequential discoveries now that most of his peers have passed on.
The result is Life in Deep Time, a curious book—actually something of a blend of three different kinds of books—that succeeds more often than not in its efforts, even if at times it can be an uphill climb for the general reader. It is first and foremost a memoir that dwells for a surprisingly long time on the author’s youth and upbringing, which can be awkward at times because of his decision to employ a third-person limited literary technique in the narrative, so that it is “Bill wondered about …” rather than “I wondered about …” Early on, the reader might grow a bit impatient as Bill negotiates high school, often under the disapproving glare of his father, an admirable man who nevertheless sets impossibly high standards for his son and is quite difficult to please. Yet, even then Schopf is ever the optimist, always grateful for that which goes his way, and treating that which does not as a valuable learning experience. Rather than being scarred from the travails of enduring a demanding parent, he seems to sit in awe of a father who sets challenges that are always another chalk-mark higher than Bill can grasp. Such circumstances for another might leave that child a substance abuser or a ne’er-do-well, but it simply inspires Bill Schopf to be the best-of-the-best, fully absent an uncontainable ego or an axe to grind.
Beyond memoir, the second focal point of the book recounts Schopf’s scientific achievements, while paying tribute to those he worked with, many of whom are little known or entirely unknown outside of the paleobiology community. Science, the author repeatedly underscores, is a team effort. While the ever-modest Schopf does not dodge the recognition he clearly deserves for his key contributions to the field, he makes certain that credit gets appropriately shared among mentors and colleagues and even assistants.
Schopf’s work has spawned controversy that sometimes spilled over into the public arena. In the first case, there was pushback on his remarkable find of those 3.5-billion-year-old microfossils. Peer-reviewed science upheld his claim, although a prominent rival paleobiologist continued to dispute it. In the second, Schopf was brought in by NASA in 1996 to evaluate the extraordinary if premature announcement that life had been identified in a Martian meteorite, which was trumpeted by scientists, politicians and the media. Schopf was skeptical, and subsequent careful research proved him correct. The author’s well-written examination of these controversies is both coherent and enlightening, although blemished a bit by the continued use of that third-person limited literary technique, which feels especially awkward as he answers his critics through the narrative.
Schopf’s greatest triumph was certainly his discovery of those ancient fossils in Australia’s Apex chert, detailed in Cradle of Life and revisited in Life in Deep Time. Modern science has established that the earth is a little more than 4.5 billion years old, but in the mid-nineteenth century, when Charles Darwin devised his theory of evolution, no one could be sure what the true age of the planet was, although most scientists knew it was far older than the six thousand years that theologians claimed. In his groundbreaking 1859 treatise, On the Origin of Species, Darwin estimated that the erosion of England’s Sussex Weald must have taken some 300 million years, but he was taken to task on this by the famed Lord Kelvin, who publicly scolded that the earth could not possibly be older than 100 million years. Whatever the actual number, Darwin was deeply troubled, because the process of natural selection that he envisioned would require much, much longer for higher life forms to evolve. In the century that followed Darwin, greater scientific sophistication established the true age of the earth with increasing precision, but identifying the planet’s earliest life forms proved far more elusive. This is because traces of these unicellular organisms lacking a membrane-bound nucleus—the prokaryotes that include Archaea and Bacteria—can be maddeningly difficult to identify, and inorganic remains with strikingly similar characteristics often masquerade as fossils. A famous false positive in this arena set paleobiology back for many decades. As a result, even as late as 1965, Schopf’s find of 3.5-billion-year-old microfossils of prokaryotic life proved controversial, although it eventually gained full acceptance in the scientific community.
The science behind all this is remarkably complex, and that is the third focus in Life in Deep Time, a welcome addition for those comfortable with textbooks on paleobiology, but often inaccessible to the general reader. I am trained in history rather than science, so I found some challenging moments in Cradle of Life that had me re-reading a paragraph or two, but much of it was indeed comprehensible to me as a non-scientist, which is not always the case with the final section of Life in Deep Time, which casually includes sentences such as this one:
“By this time, Bill had gained sufficient knowledge of the chemistry of kerogen, the coaly carbonaceous matter of which ancient microscopic fossils are composed, that he imagined that if the dominating polycyclic aromatic ring structures of the fossil kerogen were irradiated with an appropriate wavelength of laser light, they too would fluoresce and produce the images he sought.” [p186]
Material like this is certainly not impenetrable for an educated reader, but long discourses in this vein can lose a wider audience not schooled in paleobiology. Perhaps this content, although critical to scientists reading the book, might have been better placed in the appendix so as not to lose the flow of an otherwise engaging narrative.
While portions of Life in Deep Time may be difficult to navigate for the general reader, I would nevertheless recommend it. Bill Schopf is a remarkable man, a great scientist and a fine writer. The various threads of the tale he relates here add up to a storied saga of the evidence-based search for the earliest life on the planet, as well as that of the distinguished if often otherwise anonymous men and women who were responsible for marking one of the greatest milestones in recent scientific history. The voice of Bill Schopf is a humble yet commanding one: it deserves to be heard.
Apparently, Sigmund Freud spent the final year of his long and productive life as a refugee from the Nazi menace, in a house in London that is now a museum to his legacy. On the great exile’s preserved desk still sit a good number of statuettes from ancient cultures that he collected, including, on one corner, a carved stone baboon—known as the “Baboon of Thoth”—symbolic of that ancient Egyptian deity identified with both writing and wisdom. “Freud’s housekeeper recalled that he often stroked the smooth head of the stone baboon, like a favourite pet.” [p13] This anecdote serves as an introduction to Egypt, by Christina Riggs, a 2017 addition to the wonderful Lost Civilizations series that also features volumes devoted to the Etruscans, the Persians, and the Goths.
I was so taken by one of these—The Indus, by Andrew Robinson—that I put the others on a birthday list later fulfilled by my wonderful wife, so I now own the remainder of the set, each one destined to sit in queue in my ever-lengthening TBR until its time arrives. Egypt came up first. But it turns out that Riggs’ book stands apart from the others because it is not at all a history of Egyptian civilization, but rather a studied essay on the numerous ways that ancient Egypt came to be understood by subsequent cultures, its historical record manipulated and frequently distorted to support forced interpretations that suited its various interpreters. The toolkit deployed to construct these sometimes elaborate visions—visions that reflected far more kindly upon the later civilizations than they accurately represented the ancient one that inspired them—included Egypt’s monumental architecture, its tomb painting, its mummified dead, its hieroglyphs, even abstract and unfounded notions of race and superiority, as well as, of course, objets d’art like the “Baboon of Thoth.”
Riggs, whose background is in art and archaeology, writes well and presents a series of articulate arguments to support her examination of all the ways Egypt has echoed down through the ages. It is often overlooked that to the first-century Roman tourists who scribbled graffiti on tombs in the Nile valley, the pyramids of Giza were more ancient by half a millennium than those long-dead Romans are to us today! So it is a very long echo indeed. Alas, for all of Riggs’ talent, I made a poor audience for her narrative. I opened the cover yearning to learn more about Egypt, not more about how we recall it. I might not have made the mistake had I noticed at the outset how her title—which lacks the definite article—differs from the others in the series. There is The Indus, The Barbarians, The Etruscans. Riggs’ edition is simply Egypt. That should have been a clue! But that is, as we say on the street, “my bad,” not the author’s. Despite this, I found enough to hold my interest, to finish the book, and to recommend it—but only to those with a far greater interest in art history and interpretation than I possess.
I have reviewed other volumes in the Lost Civilizations series here:
A small island called “Bermeja” in the Gulf of Mexico, first charted in 1539, was—after an extensive search of the coordinates—found to be a “phantom” that never actually existed at that latitude, or anywhere else for that matter. It turns out that this kind of thing is not unusual: countless phantom islands, some the stuff of great legend, appeared on charts dating back well beyond the so-called “Age of Discovery” to the very earliest maps of antiquity. What is unusual about Bermeja is that its nonexistence was only determined in 2009, after it had appeared on maps for almost five hundred years!
The reader first encounters Bermeja in the “Introduction” to The Phantom Atlas: The Greatest Myths, Lies and Blunders on Maps, by Edward Brooke-Hitching, a delightful, beautifully illustrated volume that is marked by both the eclectic and the eccentric. But the island that never was also later gets its due in its own chapter, along with a wonderful, detailed map of its alleged location. This is just one of nearly sixty such chapters that explore the mythical and the fantastical, ranging from the famous and near-famous—such as the Lost Continent of Atlantis and the Kingdom of Prester John—to the utterly obscure, like Bermeja, and the near-obscure, like the island of Wak-Wak. While the latter, also known as Waq-Waq in some accounts, apparently existed only in the imagination of the author of one of the tales in One Thousand and One Nights, it nevertheless made it into the charts courtesy of Muhammad al-Idrisi, a respected twelfth-century Arab cartographer.
But The Phantom Atlas is not just about islands. There are mythical lands, like El Dorado and the Lost City of the Kalahari; cartographic blunders, such as mapping California and Korea as islands; even persistent wrong-headed notions like the Flat Earth. There is also a highly entertaining chapter devoted to the outlandish beings that populate the 1493 “Nuremberg Chronicle Map,” featuring such wild and weird creatures as the “six-handed man,” hairy women known as “Gorgades,” the four-eyed Ethiopian “Nistyi,” and the dog-headed “Cynocephali.” That at least some audiences once entertained the notion that such inhabitants thrived in various corners of the globe is a reminder that the exotic characters invented by Jonathan Swift for Gulliver’s Travels were not so outrageous after all.
One of the longer and most fascinating chapters, entitled “Earthly Paradise,” relates the many attempts to fix the Biblical Garden of Eden to a physical, mapped location. The author places that into the context of a wider concept that extends far beyond the People of the Book to a universal longing that he suggests is neatly conjured up with the Welsh word “Hiraeth,” which he loosely defines as “an overwhelming feeling of grief and longing for one’s people and land of the past, a kind of amplified spiritual homesickness for a place one has never been to.” [p92] It is charming prose like that which marks Brooke-Hitching as a talented writer and distinguishes this volume from so many other atlases that are often simply a collection of maps mated with text to serve as a kind of obligatory device to fill out the pages. In happy contrast, there are enchanting stories attached to these maps, and the author is a master raconteur. But the maps and other illustrations, nearly all in full color, clearly steal the show in The Phantom Atlas.
Because I obtained this book as part of an Early Reviewers program, I felt an obligation to read it cover-to-cover, but that is hardly necessary. A better strategy is to simply pick up the book and let it open to any page at random, then feast your eyes on the maps and pause to read the narrative—if you can take your eyes off the maps! From al-Idrisi’s 1154 map of Wak-Wak, to Ortelius’s 1598 map of the Tartar Kingdom, to a 1939 map of Antarctica featuring Morrell’s Island—which of course does not really exist—you are guaranteed to never grow bored with the visual content or the chronicles.
There are, it should be noted, a couple of drawbacks in arrangement and design, but these are to be laid at the feet of the publisher, not the author. First, the book is organized alphabetically—from the Strait of Anian to the Phantom Lands of the Zeno—rather than grouped thematically, which would no doubt have been the more sensible editorial alternative. More critically, while the volume is somewhat oversize, the pages are hardly large enough to do the maps full justice, even with the best reading glasses. Perhaps the cost was prohibitive, but given the quality of the art, this book well deserves treatment in a much grander coffee-table-size edition. Still, despite these quibbles, fans of both cartography and the mysteries of history will find themselves drawn to this fine book.
The phantom island of Bermeja, featured in an 1846 map.
When I was growing up in the 1960s, the Civil War was often dubbed a struggle of “brother against brother,” uttered with a smack of wonderment that a nation united by so many commonalities could have come apart like that only one short century prior, taking more than six hundred thousand lives in the process. Then, as the centrality of slavery came to be properly emphasized, both historiography and sentiment shifted. Certainly, there were plenty of families divided by war—perhaps most famousl