In October, The New York Times Magazine presented its readers with an unexpected question: “Could You Kill a Baby Hitler?” The response to the online poll was closely divided, with 42 percent of respondents saying they would indeed take the opportunity to kill Hitler when he was a baby, if provided with a time machine. Another 30 percent said no, and the remainder were uncertain. But the response to the question on Twitter, where the baby-Hitler poll became a momentary sensation, was pretty much unanimous: mockery of the entire idea, and of The New York Times for asking such a futile and unanswerable question. (“Can I use another baby as a weapon?” asked one sarcastic tweet, while others suggested that it would be more humane simply to rewrite the Treaty of Versailles instead.) Clearly, the idea of changing history to eliminate Hitler and everything he made possible—Nazism, World War II, the Holocaust—has a deep appeal to our imagination; and just as clearly, the notion is seen as a not quite respectable fantasy.
Just this blend of fascination and condescension has long greeted writers who take the premise a step further, and try to imagine an alternative history in which World War II played out differently. What would a world without Hitler have looked like? Or, what if Hitler had triumphed, leaving Germany in control of a world empire? When proposed by historians, such experiments in “alternate” or “virtual” or “counterfactual” history are usually disdained as, at best, a busman’s holiday from serious research. At worst, they are a betrayal of the historian’s calling, which is to explore what actually did happen, not to speculate about what didn’t. Novelists have more leeway to imagine alternate realities, but traditionally, their alternative World War II tales have been regarded as mere genre fiction—pulp sci-fi or mysteries.
Still, none of this resistance has prevented writers from engaging in such thought experiments. According to Gavriel D. Rosenfeld, whose study The World Hitler Never Made (2005) analyzes what he calls “allohistorical” World War II stories, “well over one hundred” of them were published between the 1930s and the end of the twentieth century—not just novels, but TV shows, comic books, and video games. And such alternative worlds are increasingly making their way into literary fiction. No longer the exclusive province of mystery writers like Robert Harris (Fatherland) and Len Deighton (SS–GB), they have also been featured in the work of Philip Roth (The Plot Against America) and Michael Chabon (The Yiddish Policemen’s Union).
Academic historians, too, are more willing to risk speculating on what might have been, as evidenced by books like Virtual History: Alternatives and Counterfactuals, edited by Niall Ferguson, and What If?: The World's Foremost Military Historians Imagine What Might Have Been, edited by Robert Cowley. World War II counterfactuals are especially popular in such anthologies: Ferguson and Andrew Roberts portray a world in which the D-Day invasion failed, while Michael Burleigh imagines a Nazi Europe following a German victory over the Soviet Union.
No book better demonstrates the changing status of alternative World War II histories than the masterpiece of the genre, The Man in the High Castle by Philip K. Dick. Dick himself, once considered an eccentric cult writer, is now in the Library of America, and The Man in the High Castle—published in 1962 and winner of the Hugo Award for science fiction the following year—is widely considered his greatest book. This fall, Amazon, which has emerged as an important producer of original TV programming, released a ten-episode series based on the novel, which attracted a great deal of media attention and brought Dick's story—or something resembling it—to a wide new audience. There was even a scandal, without which no PR campaign is complete: Amazon was forced to remove its ads from the New York City subway, after decorating an entire car with American flags bearing Nazi and Japanese symbols.
The fact that the governor of New York, Andrew Cuomo, thought it necessary to take down these ads is a backhanded compliment to their power—and to the power of Dick’s premise. The Man in the High Castle imagines that, following an Allied defeat in World War II, the United States is partitioned between Nazi Germany, which controls the East Coast, and Imperial Japan, which rules the West Coast. Both the TV series and the novel follow a small group of characters in San Francisco—American subjects and Japanese rulers—as they adjust to or resist the new Axis order in the year 1962.
The crucial difference between the two versions lies in which part of this process they choose to emphasize. For Dick, the drama is in the adjustment, and so it is mainly internal—the story of how (white) Americans, so used to independence and supremacy, learn to think of themselves as subservient and second-rate. This quiet drama unfolds in the stories of Robert Childan, a dealer in collectible Americana, and Frank Frink, a frustrated would-be artist and jewelry designer.
Childan is overjoyed when he is invited to the home of a Japanese couple, thinking he has at last been admitted to terms of equality with the occupier. But the dinner turns out to be a fiasco, and he is humiliatingly reminded of his inferiority. Similarly, Frank tries to market his original jewelry, only to find that there is no market for it. The Japanese are interested in pilfering the American past for quaint artifacts, not in supporting living American artists. It is these kinds of slights, not outright acts of cruelty, that characterize the Japanese occupation as Dick imagines it. “We are defeated and our defeats are like this, so tenuous, so delicate, that we’re hardly able to perceive them,” Childan reflects. “What more proof could be presented, as to the Japanese fitness to rule?”
By contrast, Frank Spotnitz, the producer who adapted the series for television, focuses on resistance, which means that the action is mainly external. His version of The Man in the High Castle is a story of outlaws who try to bring down the occupying regime through a series of daring exploits. Of course, the difference is largely owed to the strengths of the two genres; unlike fiction, television must always show something happening. But the result is that the TV version of The Man in the High Castle is much more conventional and melodramatic than the book it is based on, even as the two share many characters and scenes.
The television series focuses on Juliana Crain (Alexa Davalos), a young woman whose sister, Trudy, is involved in an underground resistance movement. The main work of this movement is the distribution of samizdat newsreels, one of which Juliana watches in the first episode. Though we see it only in glimpses, it seems to portray footage from the “true” version of World War II, in which the Allies triumph. This makes it a morale-booster for the defeated and occupied America of the show’s world. But there is also an implication, which grows stronger in the season’s final episodes, that the films themselves have some kind of power to overthrow the regime and change the course of history. Indeed, if the show is renewed for a second season, we will likely hear more about the mysterious “Heisenberg device” that the Nazis are said to have developed, which seems to allow for travel between alternate realities.
For most of this ten-episode season, however, the newsreels function as a classic Hitchcockian MacGuffin. It doesn’t matter what’s in them, only that every character in the story is after them. When Trudy is shot by the Japanese police in the first episode, she hands off the film to Juliana, thus involving her willy-nilly in the resistance. Juliana undertakes a trip to the “neutral zone,” a strip of territory in the former mountain states that serves as a buffer between the German and Japanese empires. Here she encounters Joe Blake (Luke Kleintank), a handsome young truck driver who also claims to be in the resistance—but who is actually, the viewer knows, an undercover Nazi spy, sent from New York to recover the film for Hitler.
Meanwhile, back in San Francisco, Juliana’s boyfriend, Frank Frink (Rupert Evans), is dragged in by the Japanese police and questioned about her whereabouts. Because Frank is part Jewish, he is in danger from the Japanese government, which has adopted Nazi-style racial laws. In one of the series’ most powerful, if manipulative, scenes, Frank’s sister and her two children are held hostage in order to force him to give up his secret. When he refuses to do so, they are gassed, in a chamber made to look like an ordinary waiting room.
This loss drives Frank to seek revenge: getting his hands on an antique Colt pistol, he plans to assassinate the visiting Japanese crown prince as he gives a speech. Before Frank can fire his gun, however, another shot rings out and the prince falls down wounded—a scene cleverly designed to evoke memories of the Kennedy assassination, with the mysterious multiple plotters who were alleged to have been involved. Now Frank and Juliana are both on the run; they must figure out a way to recover the film, help the resistance, and escape San Francisco before the Japanese police capture them.
As with most TV thrillers, however, the details of the plot do not matter much in The Man in the High Castle. There are the usual improbabilities and illogicalities, with complication piled on complication—in addition to the Japanese and the SS, the lovers must contend with psychopathic bounty hunters and Yakuza gangsters. Lest we lose sight of the grimness during all this excitement, the actors remain permanently doleful and soft-spoken, and the art direction conspires in setting the depressive mood. Every bold color has apparently disappeared from occupied America, and every single character is dressed in a tastefully muted shade of brown or blue. This palette is so insistently maintained that it becomes an affectation, though it pays off in the final scene of the last episode, when color is suddenly restored in a striking coup de théâtre.
These attempts to externalize the mood of defeat only underscore how well Dick establishes that mood in his novel. Almost none of the action of the TV series can be found in the book: no Yakuza, no crown prince, no resistance movement. There are not even any newsreels. Instead, in the book, appropriately, there is a book—a novel within the novel called The Grasshopper Lies Heavy, which narrates an alternate reality in which the Allies won the war. This book is not a clandestine item, however; it is freely distributed, at least within the Pacific states, and many characters end up reading it.
Juliana—who in the novel is not Frank’s girlfriend but his ex-wife—becomes so obsessed with it that she travels to Wyoming to meet its author, Hawthorne Abendsen, who is known as the man in the high castle because he lives in a mountain retreat. (In the TV series, the identity of the titular “man” remains a mystery.)
Every alternate history has a "crux," a moment when history takes a different path to a new outcome. In The Man in the High Castle, the crux is the attempted assassination of president-elect Franklin Roosevelt, which took place on February 15, 1933. In reality, the assassin, Giuseppe Zangara, missed Roosevelt and instead shot Anton Cermak, the mayor of Chicago, who died a few weeks later. In Dick's story, Roosevelt is killed, leading Vice President John Nance Garner to take office. Garner fails to pull the US out of the Depression or to arm it for war, and so the country is unable to resist the Axis invaders in the 1940s.
The Grasshopper Lies Heavy, the novel within the novel, imagines this history being righted: Roosevelt survives assassination and the US wins the war. However, the history Abendsen imagines is not “our” history; it is yet another alternate world, in which Roosevelt serves two terms and is succeeded as president by Rexford Tugwell, one of the members of the New Deal’s “brain trust.” It is Tugwell who then wins World War II, since he has the foresight to order the US fleet out of Pearl Harbor before it is attacked—something that Roosevelt, in reality, failed to do.
By placing a book about an alternate reality at the center of his own book about an alternate reality, Dick creates a wilderness-of-mirrors effect. The history Abendsen imagined is no more “correct” than the history Dick himself imagines. But by proliferating the possible courses of events like this, Dick casts a kind of doubt on the history we recognize as true. We live in a world where America did win World War II, and to that extent we see it as having taken the right course. Good triumphed over evil and the world remained livable.
But ours is also a world in which the Holocaust took place and the atom bomb was dropped on Hiroshima and Nagasaki. The happy ending of World War II was a forty-four-year cold war in which the world stood constantly at the brink of nuclear annihilation. Perhaps, Dick leads us to wonder, we see true history as good history only because it is the one we are used to. Seen from another perspective, we ourselves inhabit what might be called “bad history”—a reality in which things occurred that ought not, by any standard of sanity and justice, to have taken place.
The fragility of the boundary between good history and bad history—and the impossibility, sometimes, of telling them apart—is the real subject of the best alternate World War II stories. One of the most popular, Fatherland by Robert Harris, is a police procedural set in an alternative history in which Hitler managed to conquer the Soviet Union, though he did not invade the United States. In sketching the geopolitics of this world, Harris suggests that it would not actually be much different from our own: instead of a cold war between the US and the Soviet Union, there is a cold war between the US and a nuclear-armed Nazi Germany. (Indeed, there is even a President Kennedy—except it is not John Kennedy, but his father, the appeaser Joe.)
To the characters in Fatherland, life in a Nazi empire is maddeningly normal, in part because nobody is officially aware of the Holocaust, which remains cloaked in “night and fog” just as it was in the actual Third Reich. The plot of Fatherland involves the hero, a cynical detective named Xavier March, trying to solve a murder that turns out to be linked to the extermination of the Jews. In revealing the culprit, Harris suggests, March will publicize the Holocaust, and somehow the course of history will change for the better.
In the real world, however, the Holocaust was known about almost from the moment it began—and yet it still took place. The form of the detective novel inevitably suggests that solving a mystery will bring justice to the world: the murderer is captured, the crime avenged. But real history is the opposite of a detective novel, in that it exposes the hollowness of such retribution. The Nuremberg Trials recorded Nazi crimes, but they didn’t “solve” them; justice can write history, but not rewrite it.
It is when we recognize this limitation of historical writing that we feel the need for counterhistories. Walter Benjamin expressed something like this in his last work, “Theses on the Philosophy of History.” “To articulate the past historically,” Benjamin writes, “does not mean to recognize it ‘the way it really was.’ …It means to seize hold of a memory as it flashes up at a moment of danger.” Benjamin was writing in 1940, a moment of supreme danger for Europe, and for himself as a German Jew. With Hitler triumphant in France, and with America and Russia not yet in the war, it was easy to recognize the true history of 1940 as “bad history”—as if events had pivoted into an intolerably evil dimension.
That is why, in this “moment of danger,” Benjamin clung to the desperate, metaphysical faith that he had described years earlier in his essay “The Task of the Translator”:
One might…speak of an unforgettable life or moment even if all men had forgotten it. If the nature of such a life or moment required that it be unforgotten, that predicate would imply not a falsehood but merely a claim unfulfilled by men, and probably also a reference to a realm in which it is fulfilled: God’s remembrance.
When history fails to fulfill the just claims of human beings—and when doesn’t it?—it is up to the writer, Benjamin implies, to remember or invent the world that should have been, and in so doing to play a divine role.
This is exactly what Hawthorne Abendsen does in The Man in the High Castle. Indeed, there is a strong suggestion, in the book’s final scene, that Abendsen dwells in our reality, rather than the alternate reality of the rest of the novel. (He may even be a pseudonym for Philip K. Dick.) In that case, when Juliana accosts him, she is a kind of ghost—“a daemon, a little chthonic spirit,” as Abendsen puts it—come from her bad world to demand justice from our good one. This places Abendsen, and Dick, in the position of God, the God we all cry out to from our bad world to demand rectification and redemption. But Abendsen can do nothing for Juliana, just as the writer can do nothing to amend real history. The continuing popularity of World War II counterhistories is a measure of how much we wish—or fear—that things had been different.
by Nicholas Stargardt
Basic Books, 704 pp., $35.00
by Michael Neiberg
Basic Books, 310 pp., $29.99
by Jonathan Schneer
Basic Books, 323 pp., $29.99
In 1943, Germans who enjoyed a joke envisaged two panzer-grenadiers sitting on a bridgehead in Russia in 1999, puzzling over an incomprehensible word they have come across in a book: PEACE. No one in their bunker understands it. The platoon sergeant shrugs his shoulders. Their lieutenant shakes his head, and the next day at headquarters asks the battalion commander what it means. This august figure consults a new dictionary and finds the definition: “Peace, way of life unfit for human beings, abolished in 1939.”
This sort of gallows humor, says Nicholas Stargardt in his book The German War, was a significant element in a formula for durchhalten—"holding out"—tried and tested between 1914 and 1918, and deployed again in the extraordinary fashion the German people displayed between 1942 and 1945, Hitler's years of eclipse, during which around 90 percent of all those who perished in the global conflict met their fates.
The journalist Ursula von Kardorff, no admirer of the Nazis, retreated to the countryside for several days in November 1943, amid the first concentrated RAF bombing of Berlin. But she returned to the city, and to her job, fortified by a surge of determination to resist the attackers: “I feel a wild vitality welling up within me, mixed with defiance—the opposite of resignation.” The bombing, she thought, far from breaking the spirit of the German people, was welding unity: “If the English believe they can undermine morale, then that’s a miscalculation.”
Far too many books are written about the leading Nazis, personalities of awesome banality. The proper object for study must be the German people. How could it be that one of the most educated societies in Europe, the nation of Thomas Mann, inheritors of centuries of high culture and scientific achievement, fell prey to the designs of such gangsters as Hitler, Himmler, and Goebbels, and remained so even when it became plain that the outcome must be an epic catastrophe?
Nicholas Stargardt, an Oxford professor of modern history, draws on diaries, letters, and contemporary documents to paint a huge social canvas of Germans at war, soldiers and civilians, men and women of all ages. There are many unexpected vignettes, such as that concerning Kurt Gerstein, a disinfection expert and SS officer, who visited the Belzec and Treblinka extermination facilities in August 1942.
Gerstein found himself sharing a compartment on the night train back to Berlin with a Swedish diplomat, and risked telling this man what he had seen. A devout Protestant himself, once back in the capital he also reported the gassing of Jews to Otto Dibelius, the liberal Protestant bishop of Berlin, and to Dibelius's Catholic counterpart, Konrad Count von Preysing. Nothing came of his revelations, any more than when he told his own father, a retired judge. Gerstein reproached this parent with a scorn that suggested generational role reversal:
When a man has spent his professional life in the service of the law something must have happened inside him during these last few years…. You said: Hard times demand tough methods!—No! No maxim of that kind is adequate to justify what has happened.
Bishop Clemens August, Count von Galen, of Münster was another of those who became lost in a moral maze. He protested vigorously and courageously against the Nazis' first extermination program, directed against mentally handicapped patients. He deplored the ethics of seeking retaliation in kind for Allied bombing of German cities. But he offered impassioned support for the invasion of Russia, which he and other bishops characterized as a "crusade" against "Godless Bolshevism." After Germany's defeat, the bishop offered thanks to his country's
Christian soldiers…who in good conscience of doing right have risked their lives for the nation and Fatherland and who even in the hubbub of war kept their hearts and hands clean of hatred, plundering and unjust acts of violence…. The soldier’s death stands in honour and value next to the martyr’s death.
In seeking to understand Europe during the Nazi era, it is essential to recall that in 1933 the Russian Revolution lay only sixteen years in the past; Stalin was sustaining the slaughter of innocents initiated by Lenin. Terror of Bolshevism—well-merited terror—was a phenomenon common to the bourgeoisie of the entire continent. Only a small, enlightened minority acknowledged from the outset that fascism posed an equal menace. Czechoslovakia, Poland, and Yugoslavia had existed as independent states for less than two decades, and it seemed hard to regard their frontiers as beyond dispute.
Many of Hitler’s people nursed grievances about their loss of national territory under the Versailles Treaty and about the sufferings, real and imagined, of German minorities in Eastern Europe. As Stargardt notes laconically, “only German rights mattered.” In 1939 there was no popular enthusiasm for war, but this blossomed during the first year of the conflict: “Victory was sweet because it seemed astonishingly easy.”
When Hitler's July 1940 peace offer to Churchill was rejected, many Germans saw this as evidence of their enemies' intransigence. William Shirer's Berlin house cleaner grumbled: "Why didn't the British accept the Führer's offer?" The author identifies inter-generational support for the June 1941 onslaught upon Stalin's people:
What bound fathers and sons together was more than shared experience…. The sons had to achieve what their fathers had failed to do. They had to break the cycle of repetition, which condemned each generation to fight in Russia.
Aryans must prevail over Slavs, they believed, or be destroyed by them.
As the war intensified, then turned against Germany in the wake of Stalingrad, a different mood overtook the nation. First came shock at the discovery that the Wehrmacht was not invincible; revelation of the hollow mockery of Goering’s pronouncement that if a single enemy aircraft bombed the Reich, he would call himself Meier. There followed a national closing of ranks, a stiffening of sinews, that astonished the Nazis themselves.
Liselotte Purper, a Hamburger, contemplated Germany’s bomb-battered cities and professed herself filled with impotent rage, not toward Hitler but instead against the “global criminal conspiracy [moved by] such a bottomless hatred, such a fanatical will to destroy as there never has been in the world. They know not what they do!” Even most of those who considered themselves anti-Nazis, says Stargardt, could not bring themselves to wish for Germany’s defeat, though “their sense of profound vulnerability grew.”
Intermingled with expressions of outraged victimhood such as that of Frau Purper was the fact that enough Germans knew what they had done to other nations, especially to Russians and Jews, to conclude that if they lost the war, retribution must follow that seemed likely to be annihilatory. Stargardt writes:
Neither Nazism nor the war itself could be rejected, because Germans envisaged their own defeat in existential terms. The worse their war went, the more obviously “defensive” it became. Far from leading to collapse, successive crises acted as catalysts of radical transformation…. Major disasters like Stalingrad and Hamburg did indeed lead to a catastrophic fall in the regime’s popularity, but they did not in themselves call patriotic commitment into question.
If the Germans had thrown in the towel in 1943 or 1944, on any terms or more plausibly on none, they could have spared themselves the worst consequences of Hitler, and the more than two million German deaths that took place in the last year of the war, at Russian hands or from Allied bombing that attained a crescendo at Dresden, Chemnitz, Leipzig.
Stargardt does not address at length the moral failure of the army, the one force in Germany capable of overthrowing the Nazis: only a small minority of officers participated in the ineffectual July 1944 bomb plot against Hitler. He probably takes the view that this issue has been exhaustively addressed in many other books, as indeed it has. But the author of any work of this kind faces an intractable dilemma—one that afflicted even the third and last volume of Richard Evans's monumental study of the Third Reich: how far is it necessary to retrace familiar historical narrative for the sake of completeness? Stargardt addresses extensively and well the active or passive complicity of vast numbers of Germans in the Holocaust, but we have known about this for years.
It is an interesting semantic point that in the later war years Nazi rhetoric constantly deployed the word “fanatical” with approbation, when instructing the German people on the conduct expected of them. Anglo-Saxons, of course, reflexively recoil from fanaticism: no Englishman would have applauded it as an ideal, even in the darkest days of 1940.
Since 1945, much ink has been expended on arguing the case that the Allies were wrong to insist upon Germany's unconditional surrender, because this persuaded anti-Nazis that they had no choice but to fight on. Yet even most of the plotters who tried to kill Hitler in July 1944, including Colonel Claus von Stauffenberg himself, were not liberals but right-wing nationalists with amazing delusions about the prospect of preserving Germany's 1939 frontiers in a negotiation with the Allies.
One of Woodrow Wilson’s biggest mistakes was to insist that Germany should be granted an armistice in November 1918, rather than being obliged to surrender. This, together with the fact that the Kaiser’s country emerged from the conflict structurally intact, created the basis for later Nazi claims of the “stab in the back,” the pretense that Germany had not really been defeated.
At the end of World War II, amid the absolute ruin of the Reich, there could be no such delusion. The Western Allies had compromised the virtue of their cause by joining with Stalin’s bloodstained tyranny, and by allowing Russia to bear most of the blood sacrifice for destroying Nazism. But once they did this, it is hard to imagine how any negotiated peace could have been offered or achieved, to which Moscow would have agreed to be a party, or to which the Americans and British should have been.
Nicholas Stargardt notes that in the twenty-first century, Germans have become hugely interested in their own wartime past, though favoring TV programs and books created by their compatriots rather than by British or American authors:
The victim narrative has been most prominent, as interviewers have concentrated on unearthing the buried memories of civilians who experienced the fire-bombing of German cities by the RAF and the USAAF, the epic flight ahead of the Red Army and the killing and rape which so often followed…. Groups of self-designated “war children” formed and everywhere commentators reached for terms like “trauma” and “collective trauma”…[which] tends to emphasise the passivity and innocence of the victims.
More than a decade ago, I myself interviewed for a book a German woman who, in 1945, had suffered many humiliations at the hands of the Red Army, alongside her daughter. She said: “It was so terrible, going through all that, when we knew we had done nothing wrong.” She allowed knowledge that she had killed no Jews, raped no Russian women, to banish any sense of personal shame or guilt, both at the time and since. Stargardt observes that only the extreme right in Germany today seeks to establish a direct moral equivalence between the Holocaust and Allied bombing. Yet it seems depressing enough that many modern Germans wish to regard their parents and grandparents as kindred to the peoples of the nations they invaded and ravaged, all alike victims of Hitler rather than his accomplices.
The publishers describe this book as “the definitive portrait of Nazi Germany during World War II.” No portrait of anything or anyone is “definitive” and any publicist who suggests otherwise should be sent for pulping. Stargardt’s judgments seem impeccably sensible, but they are scarcely original: his furrows are well ploughed. The author nonetheless tells his bleak story fluently and well, and illustrates it with a host of telling and often unfamiliar anecdotes.
The historian Sir Michael Howard argues wisely that counterfactuals are not the proper business of historians, but it is sometimes salutary to consider “what ifs?” Had Hitler conquered Britain, some of us suspect that under occupation its people would have behaved a little, but not much, better than did the French. The aristocracy and commercial classes would have collaborated wholesale.
Likewise Stargardt describes how one German doctor in 1939 tipped off the families of mentally handicapped patients in his sanatorium that their loved ones were destined for extermination, and urged their removal. Few took advantage of his warning. Are we sure, absolutely sure, that in the same circumstances American or British people would have displayed greater compassion? The likely answer, like so many answers to so many questions of this kind, may be uncomfortable for humanitarians.
Most Germans found the first decade of the Nazi era deeply gratifying. When Hitler’s grand vision went wrong for his own people—several years after it began to impose ghastly horrors on the rest of the world—all but a small minority of his people were too busy feeling sorry for themselves to spare sympathy for his victims abroad and at home, least of all Jews. As for the Wehrmacht, no student of its mid-twentieth century record could with a straight face describe its leaders as officers and gentlemen.
A popular delusion exists that while the victorious Allies messed up the end of World War I by making a botched treaty at Versailles, they somehow did better after 1945. It is certainly true that continuing American strategic and economic engagement, exemplified by NATO and the Marshall Plan, led to a much better outcome for Western Europe. But it is hard to propose congratulations all around, when 90 million hapless East Europeans merely exchanged Nazi tyranny for the Soviet variety; upward of half a million Germans perished in the 1945–1946 refugee flights and ethnic cleansing of minorities in the East; and a murderous civil war in Greece persisted until 1949.
It is very hard to bring a vast global conflict to a tidy conclusion. Protracted piecemeal diplomacy achieved more than did “Big Three” summits. In July 1945, the Allies nonetheless conducted the last such wartime meeting at Potsdam. Winston Churchill, impatient to meet the new US president, was the prime mover for it, and harbored his accustomed delusion that his own physical presence could extract more from the Russians and Americans than Britain’s shrunken national status could secure.
Before the conference began in the mock-Tudor Cecilienhof Palace built for the Kaiser’s son during World War I, the British and American delegations engaged in some half-awed, half-appalled tourism amid the wreckage of Berlin. Their Russian hosts were entirely accommodating, indulging even the generals’ and officials’ quest for souvenirs from Hitler’s bunker. Churchill gazed without animosity upon the Germans foraging amid the rubble: “My hate died with their surrender,” he wrote later. “I was much moved by their desolation.”
But when the serious business of the conference began, Churchill was seen to little advantage. He was old and tired, and had failed to do his homework. He lapsed into rambling monologues, and allowed himself to be charmed by Stalin. So poor was his instinct for the new age that, when news was passed to him of the successful atomic bomb test at Alamogordo, he expressed enthusiasm for the prospect of brandishing this threat to bring the Russians to heel in Eastern Europe.
Michael Neiberg’s account of Potsdam dwells at length on the clumsy cramming process to which Harry Truman was subjected, following Roosevelt’s death on April 12. This was handicapped by the disappearance of the records of earlier wartime summits at Tehran, Cairo, and Yalta, so that officials were obliged to brief the new president on American positions from their own imperfect and often contradictory memories.
The book quotes extensively the remarks of Joseph Davies, a former US ambassador in Moscow and one of the most disastrous Americans ever to represent his country abroad. Davies wrote in earnest of Stalin in 1938: “A child would like to sit on his lap and a dog would sidle up to him.” He played a marginal role on the US team at Potsdam, and the author treats both his character and contribution with less contempt than they deserve.
The “Big Three” met thirteen times at Potsdam, their foreign ministers twelve. On the last four occasions, following the July 26 announcement of Labour’s triumph in the British election, Churchill was supplanted by Clement Attlee, Foreign Secretary Anthony Eden by Ernest Bevin. This operation of democracy baffled the Russians, who had taken it for granted that Churchill had power enough to fix the result. It emphasized the Soviet impression of the Western delegations as fumbling novices, of Stalin alone as the assured master of his own nation’s destinies.
Yet Truman did as well as could have been expected of him at Potsdam, given the huge handicaps under which he labored, not least having the inadequate James Byrnes as his secretary of state. He conducted himself with dignity and firmness, making plain America’s intention to remain engaged in Europe, rather than to retreat into isolationism as in 1919. He set off home, after a mere three weeks abroad, satisfied that he had achieved the two foremost US policy objectives: to make the Russians biddable members of the new United Nations, and to persuade them to attack the Japanese in Manchuria.
The Americans failed to perceive that Stalin, rather than requiring any inducements to join the war against Japan, was bent upon doing so, to secure territorial booty on the Pacific coast. He cared little one way or another about the UN, a mere talking shop. What he wanted, and got, out of Potsdam was to stage a protracted victory parade, and to make plain to the Western Allies that Poland was now a Soviet imperial dependency. The British and Americans reluctantly agreed to cede eastern Poland to Russia along the so-called Curzon Line, so that Stalin kept the Polish turf he had secured from Hitler in the August 1939 Nazi–Soviet Pact.
But what choice was there? As Stalin mockingly demanded at the conference table: “Did your army liberate Poland, Mr. Churchill? Did your army liberate Poland, Mr. President?” The Red Army held Eastern Europe, and could be dislodged only by force of arms. In May 1945, Churchill had caused his chiefs of staff to draw up a detailed plan for Operation Unthinkable, to dislodge the Russians from Poland using forty-two Anglo-American divisions—and the remains of Hitler’s Wehrmacht. Naturally Washington rejected this crazy notion out of hand, but the fascinating planning document still exists, and it is a little disappointing that Neiberg does not mention it.
His problem, in this entirely unexceptionable narrative, is to make a case that Potsdam changed or decided anything. The conference was a fascinating piece of theater, wherein Stalin toyed effortlessly with Churchill and Truman. All the strategic and atomic secrets that they cradled out of sight with the glee of schoolboys were known to him through American and British traitors.
The author, a professor at the US Army War College, describes how on the way home, aboard the cruiser USS Augusta, Truman and Byrnes hit the bourbon together “as they celebrated their success at Potsdam.” The new president later developed into one of the most distinguished statesmen in American history, his high quality especially manifested in the June 1950 Korean crisis. In July 1945, he impressed Europeans as a solid, decent, and trustworthy US leader. But the only participant in the conference entitled to rejoice at getting everything he wanted was Joseph Stalin. World War II achieved its real conclusion not at Potsdam, but with the 1989 fall of the Berlin Wall, and the subsequent collapse of Stalin’s empire.
Many Americans, both at the time and since, have viewed Britain’s wartime governance through a Churchillian prism. It is certainly true that the prime minister dominated his nation’s affairs more than did Roosevelt those of the United States. But Jonathan Schneer’s book, Ministers at War, provides a corrective, emphasizing the important parts played by the War Cabinet, a supporting cast of five to eight members, to whom Churchill had the good sense to delegate many matters in which he was uninterested, and which he knew himself unqualified to arbitrate.
Sir John Anderson, the career civil servant who served as lord president of the council and latterly as chancellor of the exchequer, was mocked by Lord Beaverbrook and Brendan Bracken as “Pomposo,” and indeed Anderson was an arid, lofty figure. But he was an exceptionally able administrator, unafraid of standing up to his leader.
Ernest Bevin, minister of labor, was born into poverty in 1881, worked as a Bristol docker, then rose to become the most powerful trade unionist in Europe, leading the five million members of the Transport and General Workers Union. Bevin, a passionate anti-Communist, was perhaps the only man in Britain with the moral and political authority to sustain working-class support for what might otherwise have been deemed a “Tory war.” There were wartime strikes in plenty, but this rough, tough, chunky man with great hands likened to bunches of bananas earned the respect as well as affection of all those who served with him.
Except, perhaps, Sir Stafford Cripps. The two men hated each other. Cripps was an upper-middle-class Marxist ascetic who had been a highly successful lawyer before entering Labour politics in 1928. He served in 1940–1941 as British ambassador in Moscow, but Stalin vastly preferred the company of Lord Beaverbrook, the capitalist red in tooth and claw, to that of Cripps the would-be champion of the proletariat. Schneer reminds us that, extraordinary as it now seems, at a low point of Churchill’s fortunes as prime minister in 1942, for a time it seemed as if Cripps might supplant him—especially to Cripps.
The prime minister detested the man: “he has all of the virtues I dislike, and none of the vices I admire,” but felt obliged to admit him to the War Cabinet in February 1942 as Lord Privy Seal and leader of the Commons. Cripps’s stock fell, however, when he led a mission to India, in a vain attempt to persuade its nationalists to postpone independence until the war ended. On his return, he made repeated foolish demands—for instance, for an immediate general election, and for the creation of a three-man “military directorate” to oversee the chiefs of staff, one of whom would be himself.
He antagonized a host of humble British people by proclaiming his opposition to their favorite sports—boxing, horse and dog racing. Cripps was a clever, honest man bereft of wisdom. He kept threatening to resign, but fatally delayed doing so until after the November El Alamein offensive, the success of which made Churchill politically invulnerable.
Clement Attlee, leader of the Labour Party and deputy prime minister, played a thankless role, subject to many indignities at his chief’s hands. But Attlee gained the respect of all who worked with him for his patience, loyalty, modesty, and hard work in support of the war effort, especially chairing Cabinet committees. It is good that he secures his due from this book by Schneer, as does the villainous Lord Beaverbrook.
“The Beaver,” a press baron to whom Churchill felt closer than any other man after the death of Lord Birkenhead, commanded the prime minister’s fascination for his wit and wealth. He enjoyed a brief moment of glory as minister of aircraft production in 1940–1941, hastening Spitfire and Hurricane production through the Battle of Britain. But many historians believe that his achievement was greater as a self-publicist and impresario than as an industrial manager.
Thereafter, Beaverbrook drifted in and out of office as the mood suited him, indulged by Churchill in repeated acts of treachery, especially his noisy campaign for a premature second front in France. Schneer writes: “Beaverbrook did not so much crave the top position—although occasionally he thought he could fill it better than anyone else—as crave excitement.” Beaverbrook was a clever but unprincipled man, unworthy of Churchill’s intimacy.
Schneer concludes: “It is useful to remember that Churchill’s colleagues did not treat him with the reverence he so often receives today.” If the War Cabinet had bowed and scraped rather than argued, its members would have been undeserving of credit. The author salutes these “giants as they really were, harnessed together to a common purpose, but often pulling in opposite directions.” He justly concludes that Churchill’s choice of such men for his War Cabinet, and skillful management of them in office, constituted an important element of his own greatness as national leader in his finest years. Whatever the limits of Britain’s military contribution to victory, it could boast the most impressive machine for managing the war effort of any belligerent nation.
SPQR: A History of Ancient Rome
by Mary Beard
Liveright, 606 pp., $35.00

Dynasty: The Rise and Fall of the House of Caesar
by Tom Holland

by Donatien Grau
Paris: Gallimard, 407 pp., €32.00
The empire of ancient Rome spanned the entire Mediterranean world. It included two of the world’s great monotheist religions, Judaism and Christianity, and it provided the environment for the creation of a third, Islam. Historians from antiquity to the present have struggled to comprehend how a small Italian town grew from modest beginnings into a republic and then, after a succession of civil wars, into a great empire. Edward Gibbon was not the only one to recognize that the market for Roman history was huge. It still is, not least because of its colorful and larger-than-life rulers but above all because it embraced so many different and yet interconnected peoples. From the Atlantic to the Euphrates, from the Rhine and the Danube to the edge of the Sahara, Rome transformed and refashioned the cultures it absorbed, and we live today with the aftermath of its conquests.
Rome’s achievement was as paradoxical as it was immense. It seems to have happened without any design or master plan. Gibbon was the first to see that this global transformation could be explained neither by listing dates and sources nor by appealing to divine intervention. The antiquarians who preceded Gibbon not only failed to explain Rome’s rise but failed to perceive, as he conspicuously did, that Roman history had all the ingredients for a great work of literature. Gibbon set the gold standard for literary history, which not even Johann Gustav Droysen on Alexander the Great or Francis Parkman on France and England in America could match. His success was arguably due as much to his great theme as to his tireless industry in composing his work. The three books under review prove that the appetite for Roman history continues unabated to this day.
Anglophone readers have every reason to rejoice that Gibbon, the first and greatest of modern Roman historians, wrote in their language. Theodor Mommsen, who won the Nobel Prize in Literature for his history of ancient Rome, written in German, knew perfectly well that he was no Gibbon. He steadfastly refused to bring his Roman history into the imperial period, where he would have had to compete with his admired eighteenth-century English predecessor. Apart from Ronald Syme’s The Roman Revolution of 1939, which distilled the irony and insight of Tacitus’s Latin into lapidary English prose, no histories of Rome in English have achieved Gibbon’s unique combination of deep scholarship and literary style.
Yet by an astonishing coincidence two contemporary English authors who write often and well about ancient Rome, Mary Beard and Tom Holland, have simultaneously produced readable histories of Rome. It would be patronizing and wrong to speak of their work as popularization, but there can be little doubt that both writers are deservedly popular. Between them they have done more to promote classical studies than all the professors who try to reach thousands through the electronic programs currently known as massive open online courses (MOOCs).
The new books by Beard and Holland overlap most closely in their treatment of the end of the Roman Republic and the first century of the empire, but they also look backward as far as Romulus and Remus. Both show the experience of the two writers in communicating with a general audience by beginning in the middle of the narrative, to engage the reader’s attention, and then circling back to fill in what came before. Beard starts with Cicero’s exposure in 63 BC of the conspiracy of Catiline, and Holland starts in 40 AD with Caligula sitting on a beach on the coast of France looking out toward Britain. These opening pages draw the reader inexorably into the complex web that the authors are spinning.
But the books could not be more different. Beard expressly calls SPQR “a history of ancient Rome,” and her opening sentence bluntly asserts, “Ancient Rome is important.” Her title is the standard ancient abbreviation for Senatus Populusque Romanus, “the Senate and People of Rome,” and as she points out, it still adorns manhole covers and rubbish bins in Rome today. No one could doubt that what she has written has contemporary relevance. Her history evokes a past that visibly impinges upon the present, as modern travelers in Europe, the Balkans, Anatolia, North Africa, and the Near East are constantly made aware.
By the time Beard has finished, she has explored not only archaic, republican, and imperial Rome, but the eastern and western provinces over which it eventually won control. She deploys an immense range of ancient sources, in both Greek and Latin, and an equally wide range of material objects, from pots and coins to inscriptions, sculptures, reliefs, and temples. She moves with ease and mastery through archaeology, numismatics, and philology, as well as a mass of written documents on stone and papyrus.
Not unreasonably Beard brings her history to a close with the conferral of Roman citizenship by the emperor Caracalla in 212 AD upon virtually everyone who lived within the confines of the Roman Empire. What historians have traditionally called the Crisis of the Third Century was just about to begin. This brought the devastating replacement of the Parthians—an Iranian empire that had, since the first century BC, fought occasionally with the Romans—by the Sassanian Persians, who would soon invade Syria. The crisis also included barbarian invasions from the north and a great plague. The conversion of Constantine to Christianity was still a century away. Beard could not have covered those tumultuous times without writing another large volume, but she rightly looks ahead to Constantine just as she looks back to Romulus.
Holland’s book is not like this. His title, Dynasty, tells us at once, with the aid of a subtitle, The Rise and Fall of the House of Caesar, that this is a story rather than a work of history. It reads like a novel about historical events and personalities that will be familiar to most readers from Robert Graves, but it is not fiction. It reproduces, with marmoreal grandeur, what Holland has learned directly from ancient sources, above all Tacitus and Suetonius, about the court intrigues, sexual scandals, and monstrous personalities that dominated the Julio-Claudian age—the period of the first five Roman emperors—Augustus, Tiberius, Caligula, Claudius, and Nero. The frightful eccentricities of the last of the Julio-Claudians included murdering his mother and presiding over a vast conflagration at Rome that has been thought to have wiped out many of the Christians in the city.
Holland’s novelistic approach enhances a story that he has not invented. This means that his account is gripping and occasionally eloquent, but sometimes the larger historical setting vanishes as he concentrates on vivid personalities at the expense of the vast empire within which all the domestic horrors were taking place. The Gibbonian miracle had been the felicitous union, in a single writer, of a thoughtful historian and a memorable narrator, but this was possible because Gibbon brought an uncommonly large vision to his scholarly and literary gifts. He famously called his work The Decline and Fall of the Roman Empire, whereas Holland seems to like single-word titles—Dynasty for the new one on the Julio-Claudians and Rubicon for an earlier one on Julius Caesar. This seems to be part of a current fashion, to judge from the work of another expert writer on Rome in a novelistic style, Robert Harris, who shows a similar predilection for single-word titles: Imperium, Conspirata, and now his forthcoming Dictator.1
By contrast, in SPQR—not a single word, of course, though admirably concise—Beard spreads out the uncertainties and inconsistencies that every historian must face in sorting out what really happened in the past. She has no hesitation in breaking the continuity of her account by jumping backward and forward to illuminate her argument and by wandering freely across the entire Mediterranean world to provide glimpses of provincial life. She is not telling a story.
Near the end of her book, in a close-up for which she draws on personal knowledge of the site, she suddenly transports her reader to the monuments and history of the city of Aphrodisias in modern Turkey—a city named for the goddess of love that, in the Christian empire, would become Stauropolis, “the city of the cross.” Splicing of this kind is indispensable in writing good history, and Beard gives her readers a master class in historical analysis, with due attention to the reliability of sources, the corruption of traditions, politically motivated myth-making, and the mysterious process by which perceptions of the past determine the course of subsequent events.
Beard begins simply enough by declaring that her account of the Senate and people of Rome will begin in the year 63 BC, the year of Catiline’s great conspiracy to overthrow the Roman Republic, a plot that Cicero prided himself on exposing. She even asserts, “Roman history, as we know it, started here.” Why this should be is not at all obvious to me. Although 63 is not a bad place to start an account of the collapse of the Roman Republic, it must be said that a thoughtful eyewitness, Asinius Pollio, who wrote an influential, though now lost, account of the end of the republic, opted to begin in 60, when Pompey and Caesar became allies. This was famously the year with which the great modern historian of Rome, Ronald Syme, began his classic history, The Roman Revolution, and it was Pollio’s example that inspired him to do so.
By starting with 63, instead of 60, Beard must have known that she was repudiating the date that Syme and Pollio had adopted. She does not address this issue, but unexpectedly in the middle of her book she gives a reference to the first poem in Book Two of Horace’s Odes, where the year 60 is named as the launchpad of civil war. It was precisely in this poem that Horace celebrated the audacity of Asinius Pollio in writing a history about inflammatory events that were so recent the embers were still glowing.
To my eyes Pollio rightly marked the beginning of the civil war that brought down the Roman Republic, and it would have made more sense to start here. But even had Beard begun with this date, she would still have had to provide background from centuries before in order to give her readers the necessary perspective to understand what was going on. Beard is an experienced scholar, teacher, and communicator, and she enriches her history by preventing it from becoming a more or less chronological register of events. Her many years in front of students, colleagues, and television cameras have accustomed her to convey a wealth of information and ideas in a chatty style that no one should mistake for a lack of substance, erudition, or insight.
Beard’s relatively brief account of the Julio-Claudians is more than supplemented by the detailed narrative that Holland has provided in Dynasty. His story, though essentially centered upon Rome and its court, provides many lubricious details for which Beard has no space. Apart from the outrageous conduct of Caligula, whom professional historians scrupulously call Gaius, it is Nero who dominates the final years of the Julio-Claudian dynasty that descended from Augustus. This paranoid emperor, who loved to act and sing on stage, felt himself at heart more a Greek than a Roman, and he proceeded relentlessly, after a few tranquil years at the start, to commit crime and engage in depraved acts until his suicide in 68. Yet his reign left its mark through the magnificent Latin literature of his own time and subsequently in the retrospective literature of Western Europe down to the present.
In a wide-ranging book that is more about the perception of Nero after his death than the character of the man in his lifetime, a talented French writer, Donatien Grau, interrogates the sources for the emperor’s reign not only from Nero’s own time but from many centuries after. His book begins, as it should, with a review of the Latin masterpieces that Neronian writers, such as Seneca the philosopher, Petronius the novelist (author of the Satyricon), and Lucan the epic poet (author of the Pharsalia), have left behind. They were writing in the very years when Nero presented himself with increasing flamboyance as a Hellene, performing on stage and competing in the Olympic games.
Grau subtly creates an illuminating counterpoint between the undoubted achievements of Neronian culture and the delusions of the emperor himself. In this respect he can offer interpretations that neither Beard nor Holland attempts to provide, and he does so with an engagingly Gallic rhetoric that serves to highlight the differences between the ways Roman history is practiced on the two sides of the Channel. Grau, for example, questions Syme’s total confidence in the veracity of Tacitus by observing that in Roman studies reactions to ancient claims of accuracy and good faith have been “absolutely contradictory.”
What emerges above all from a comparison of the Nero of Beard, Holland, and Grau is that none of them really tries to get at Nero himself, beyond the caricature and criminality that appear so often in the ancient sources. Since we actually possess several letters from Nero and one long speech, it might have been useful to consider what the man reveals in lines that he may have composed himself.
We know from Tacitus that Seneca sometimes served as a ghostwriter for Nero’s speeches, and he may also have served in that capacity for letters and administrative communications. But a major speech at Corinth, delivered after Seneca’s suicide (which Nero himself had demanded) and composed in pretentiously florid Greek, seems obviously to transmit the emperor’s authentic voice across two millennia. Its discovery in modern times on an inscription from Akraiphia in Boeotia, north of Athens, was first made known in 1888, as Grau is aware, by the great French epigraphist Maurice Holleaux, who immediately recognized the highly personal tone of the emperor’s Greek: “le style précieux et sentimental à faux, l’emphase égoïste [the precious and falsely sentimental style, the emphatic egotism].”
Eighteen lines of text present Nero in 67 AD at Corinth, at the time of the Olympic competition nearby, when the emperor granted freedom to Greece, or rather, as it was then known, the province of Achaea. Nero was obviously very pleased with what he was doing, and his training in a style of Greek that was often described as Asian served him well. Nero’s generosity had no future, because only a few years later the emperor Vespasian revoked Nero’s gift and restored the Greeks to their prior provincial status. But the speech itself furnishes a unique glimpse into a brief moment of triumph and self-satisfaction near the pathetic end of a monarch who reportedly declared as he was dying, “What an artist dies in me!” Here is Nero to his beloved Hellenes:
For you, men of Greece, it is an unexpected gift which, even though nothing from my generous nature is unhoped-for, I grant to you—such a great gift that you would have been incapable of requesting it. All Greeks inhabiting Achaea and what is now known as the Peloponnesus, receive freedom with no taxation—something which none of you ever possessed in your most fortunate of times, for you were subject to others or to yourselves. Would that Greece were still at its peak as I grant you this gift, in order that more people might enjoy this favor of mine. For this reason I blame Time for exhausting prematurely the size of my favor. But even now it is not out of pity for you but out of goodwill that I bestow this benefaction, and I give it in exchange to your gods whose forethought for me on land and sea I have always experienced, because they granted me the opportunity of conferring such benefits. Other leaders have liberated cities, only Nero a province.
This glimpse into the emperor’s unbridled megalomania is far more precious than any attempt to deduce his character from the ancient authors who wrote about him. It is not part of later reportage or a novelistic invention, as Holland clearly recognized when he chose to cite a brief excerpt from it in his account of Nero’s Greek tour. It is a raw historical document, almost without parallel. Only the surviving text of a rambling speech by the emperor Claudius to the Senate is comparable in its immediacy, but not in its extravagant language. What Gibbon would have done with Nero’s speech if it had been known to him is hard to imagine, because in this case reality itself goes far beyond any irony.
It is of course natural to wonder what the Greeks themselves might have made of this imperial flattery of their gods and their culture through the medium of their own language at its most artificial. But the sober Plutarch, writing a decade or two after Nero’s great gesture, leaves us in no doubt that, however ridiculous Nero may have appeared at Corinth, the Greeks genuinely appreciated him as an emperor who admired their ancient traditions. Plutarch declared that for all Nero’s crimes the Hellenic peoples owed him some measure of gratitude for his goodwill toward them, and a century later Philostratus, the biographer of the legendary miracle-worker Apollonius of Tyana, said that Nero showed unusual wisdom in freeing the Greeks.
Mary Beard observes that after Nero’s death several pretenders to the imperial throne arose in the eastern Mediterranean world by claiming to be the still-living Nero. Beard astutely remarks of these so-called “false Neros” that their deception “suggests that in some areas of the Roman world Nero was fondly remembered: no one seeks power by pretending to be an emperor universally hated.” This was a strange fate for the last of the Julio-Claudians, whose memory was so detested generally that his name was systematically gouged out in most of the inscriptions in which it appeared.
Over the centuries after Nero’s death the greatest example of his megalomania undoubtedly remained the fire at Rome in 64, in which, according to Tacitus, Christians were crucified and burned alive. The authority of Tacitus has conferred upon this horror a degree of credibility that has even led historians to assume that the fiery deaths of Christians at Rome were but part of a more general policy of persecution launched by Nero. Although few now believe that the emperor promulgated some kind of institutum against the Christians, most historians, including Beard, Holland, Grau, and myself, still believe that Christians died, as Tacitus says they did, in the fire of 64.
But even this apparently solid testimony for early Christian persecution has now been forcefully challenged. Our view of Neronian Rome and early Christianity would be dramatically altered if the crucified and flaming Christians in 64 turned out to be mythical, as the Princeton historian Brent Shaw now claims they are. His recent and carefully reasoned article in support of this view rests essentially upon a conviction that it would be anachronistic to refer to Christians in 64, since he questions whether they were then identified as such. Therefore he believes that Tacitus’s version of the fire derives from a fiction, Christian or otherwise, that was devised and disseminated at some point between 64 and the time when he was writing, more than five decades later.2
Shaw’s argument is well made and persuasive at many points, but I still find it hard to believe that there were no Christians in Neronian Rome, when, at least according to the Acts of the Apostles, they were already known under that name at Antioch in the 60s. Suetonius, who was a contemporary of Tacitus and, like him, more than half a century removed from the events he was writing about, even believed that the name of Christ, whom he calls Chrestus, was known at Rome in the 40s when Claudius expelled the Jews from the city. But this may be no more than a vestige of reports that Jesus’s first followers were Jews. Nevertheless it is both important and humbling to recognize that the history with which we have all grown up can change in the twinkling of an eye when a scholar as acute and deeply read as Shaw detects cracks in an edifice we thought we knew well.
Beard is absolutely correct in her opening manifesto that Roman history is important. The world she evokes, through its material culture as much as its textual sources, is a world in which we are, as Grau insists, deeply rooted. Holland conveys its excitement and its fascination in a way that no scholarly tinkering with details can possibly diminish. All three books testify to the enduring appeal of Roman history, but in different ways. Gibbon’s theme for his great work remains as indestructible, varied, instructive, and relevant as it was in the eighteenth century. Yet when it is addressed anew, in the light of discoveries that constantly emerge from every corner of Rome’s ancient empire, Roman history itself subtly changes. That in turn means that all of us who read it and write it change too.
In October 1951 the prime minister decided to call a general election. He wrote to the leader of the opposition informally (“My dear Churchill”) to let him know, before Parliament was dissolved, and polling day took place less than three weeks later. Clement Attlee was displeased by the outcome, with some reason: the Labour Party he had previously led to victory in two elections won a slightly larger popular vote than the Conservatives, but the vagaries of the electoral system gave the Tories a majority of parliamentary seats. And so Winston Churchill returned to 10 Downing Street, shortly before his seventy-seventh birthday.
To look back at that election and to compare it with this year’s is to see the complete transformation of British politics in my lifetime. Even though we now have a Conservative government with a parliamentary majority for the first time in eighteen years, there was another startling outcome in Scotland, quite unimagined as late as last summer. A generation ago the Scottish National Party (SNP) was a tiny fringe group advocating Scottish independence, and at the last election in 2010 it won a mere six seats. This May it won fifty-six of fifty-nine seats in Scotland.
Another fringe group, as it had long seemed, the right-wing Europhobic United Kingdom Independence Party (UKIP), gained a dramatic 13 percent of the popular vote. Thanks to the capricious electoral system, this gave it a solitary MP, but the indirect effect of the UKIP vote was very important in drawing votes away from Labour.
One unhappy change is that the prime minister no longer chooses to call an election as Attlee did. The decision has now been taken out of the prime minister’s hands by the deplorable Fixed-term Parliaments Act. This was passed when the coalition between Conservatives and Liberal Democrats was arranged five years ago, as a way of yoking the two together, but it not only weakened the power of Parliament, it meant that we have known for years that there would be an election this May 7. The effect was to paralyze the government for months before polling day, and to give us the unending electoral cycle that afflicts American politics.
But if there was no shock about the date, there was about the result. In 1951 Labour hoped to win again, and barely lost. This year, there was an almost universal assumption that no party would win an outright majority of parliamentary seats, and that we were in for the kind of wrangling between parties that follows a Belgian or Israeli election. Labour seriously hoped to win at least the largest number of seats, allowing it to form a coalition or minority government, and was encouraged in this belief by the polls, which right until the end suggested that the two larger parties, Conservatives and Labour, were neck and neck, with the lead regularly alternating: only ten days before election day a headline in the Financial Times read “Poll Gives Labour a 3-Point Lead.”
Polling stations here close at 10 PM. As the hour struck ten on that night of Thursday, May 7, hopes were high at the headquarters of Ed Miliband, the Labour leader. Then just after ten came the news from the exit polls: David Cameron and the Tories were well ahead of Labour and on their way to a clear victory. Several people expressed incredulity. Paddy Ashdown, the former Liberal Democrat leader, said that he would “eat my hat” if these figures were true, while a well-known pollster writhed with embarrassment as he wondered whether the exit polls could be accurate.
As the night wore on, the exit polls proved more than accurate, and every previous poll turned out to be wrong. Nuneaton in Warwickshire (of which more later) lies in the very middle of England, a marginal seat held by the Tories and one of the seats that Labour had to win to have any chance at all of forming a government. At 1:30 AM the Tories held Nuneaton, having doubled their majority, and the game was up for Labour. There was worse to come. Had he won, Miliband would have made Ed Balls his chancellor of the exchequer and Douglas Alexander his foreign secretary. At 8:30 on Friday morning the news that Balls had lost his seat in Yorkshire was greeted by raucous cheers on the trading floor of Credit Suisse, and no doubt other London banks.
And then Alexander also lost his seat near Glasgow to the SNP candidate, Mhairi Black, a twenty-year-old student who hasn’t yet completed her degree. By the time the last votes were counted, that “3-Point Lead” over the Tories, if it ever existed, had turned into a 6.5-point lead for the Conservatives, who had 36.9 percent of the popular vote to Labour’s 30.4. Labour had in fact slightly increased its overall vote from five years before, and increased it more than the Tories increased theirs, but that’s small comfort. The only thing that counts is which party wins the most votes in each parliamentary constituency. The Tories won 331 seats, Labour 232.
As Walter Bagehot, the Victorian writer who is the godfather of political theorizing and punditry, would have said, there’s no arguing with the brute fact of a parliamentary majority. Labour won twenty-six fewer seats than in 2010. Miliband acknowledged the scale of the defeat by resigning as party leader. So did Nick Clegg, the leader of the Liberal Democrats, whose party had suffered a worse disaster.
In the 1983 election, the newly formed Social Democratic Party, led by Roy Jenkins and Shirley Williams and others who had bolted from the Labour Party as it swerved left, formed an electoral alliance with the old Liberal Party. This Alliance won more than 25 percent of the popular vote in that election, but the cruel inequities of the “Westminster” (though also Capitol Hill) electoral system—“first past the post,” or simple plurality in individual districts—meant that, with a quarter of the vote, the Alliance won only about one seat in twenty-eight.
But they persevered, amalgamated as the Liberal Democrats, and five years ago in 2010, when no party won an absolute majority, the “Lib Dems” with fifty-seven seats held the parliamentary balance, and Nick Clegg decided to go into coalition government with the Tories. In his rather weird memoir, A Journey (2010), Tony Blair says that the Lib Dems’ trouble was that they always preferred being critics to being actors. In May 2010 Clegg took to the stage, accepted the burden of office and the burden of responsibility that goes with it, and he has now suffered most grievously, as his party’s vote plummeted and it won only eight seats.
As for Cameron, he had pulled off a remarkable victory, against the polls and pundits and literally against the odds. Few of us can claim to have foreseen this. Although I had a strong hunch the Tories would do better than the 285 seats that the bookmakers estimated the day before the election, I didn’t think they would win a clear victory. I cannot remember an election in which the betting odds against the winner were so high, up to 7 to 1 for anyone who backed an outright Tory victory on the day before the election. So a little humility is in order, certainly from the pollsters. Just why were they so wrong?
The first excuse advanced is the “shy Tories”—people who were reluctant to say that they might vote Conservative but then did so with ballot paper and pencil in hand. This is far from new, and far from only British. In the crucial Italian election of 1948, when it looked for a moment as though the Communists might win, the Christian Democrats used the clever unofficial slogan, “In the polling booth, Stalin cannot see you but God can.”
In American politics it became known as the “Bradley effect,” after the 1982 election for governor of California, when all the polls showed that the African-American mayor of Los Angeles, Tom Bradley, would win easily, but on the day he lost. Likewise the hard-right French National Front has habitually done better in actual elections than polls have suggested, as Likud did in the recent Israeli election, although that may have been partly the result of Benjamin Netanyahu’s lurid warning that Arabs were voting “in droves.”
But there is more to it. This election might have been about the economy, or the National Health Service, or Europe, or immigration. All those played their part, but a year ago we never imagined the reality—the election was above all about Scotland. On May 7, we witnessed the implosion of our traditional political culture, and perhaps the imminent dissolution of the United Kingdom.
When Attlee and Labour contested with Churchill’s Tories, popular engagement in politics was high, there was an almost purely two-party system, and those two parties were truly national, both enjoying strong support throughout the United Kingdom from Caithness to Cornwall. In 1951, Tories and Labour divided almost 97 percent of the popular vote between them; by 2010 that had fallen to 65 percent and it was only a little more this year. That makes prediction harder than ever.
We have had periods when third parties held the parliamentary balance, as the Irish party did in favor of the Liberals after 1910, or when there was true three-party politics, as in the 1920s, with Labour coming up to overtake the Liberals. But if, as A.J.P. Taylor said of that time, the British electoral system “was ill adapted to cope with three parties,” how much less adapted is it to cope with electing a Commons where eleven parties are now represented, five of which campaign throughout the whole country, and six more of which are from the “Celtic fringe,” Scotland, Wales, and Northern Ireland?
In that 1951 election, Tories and Labour won thirty-five seats each in Scotland. Four years later in the 1955 election, hard as it is to believe as I write this, the Tories actually won a majority of the seats in Scotland, and even managed to win seven of fifteen seats in the great industrial city of Glasgow. To cut short a fascinating story, the Tories dwindled away in Scotland, a decline accelerated by the Tory government of the 1980s and Margaret Thatcher’s open disdain for Scottish municipal socialism, expressed in her wonderfully tactless “Sermon on the Mound” to the General Assembly of the Church of Scotland in Edinburgh, in which she quoted Saint Paul’s words that “if a man will not work he shall not eat,” and in general told the Scots to get their act together.
Maybe she thought she had nothing to lose, but Labour had. Labour had come to dominate Scotland politically, but had been spooked by the intermittent successes of the SNP, which won eleven seats in 1974 and then won a surprising by-election in 1988, the same year as Thatcher’s sermon. Labour responded by stoking the fire of Scottish resentment and making a “Claim of Right”: “The sovereign right of the Scottish people to determine the form of government best suited to their needs.” The Labour politician Robin Cook went further, saying that under the Tory government, “to all intents and purposes Scotland is an occupied country.” Those words would come back to haunt his party.
In 1997, Tony Blair and Labour won a landslide victory, and the Tories were obliterated in Scotland: within a little over forty years they had gone from winning a majority of Scottish seats to winning none at all. The Labour government created a devolved assembly and executive at Edinburgh, carefully designed so that no one party, in particular the SNP, could ever win an absolute majority there. The SNP then proceeded to win an absolute majority.
When Cameron became prime minister, he was maneuvered by Alex Salmond, the former leader of the SNP, into holding a referendum on Scottish independence on what Salmond thought favorable terms. It took place last September. Spooked in turn by polls suggesting that “Yes” might win, Cameron made reckless last-minute promises to the Scots, and then, as soon as “No” had won a clear victory, said that there would have to be compensation for England in the form of “EVEL,” the unhappy acronym meaning that in Parliament there should be only English votes for English laws. This enraged many Scots, and produced a tidal wave of support for the SNP, which swept away the other parties this May.
Arguably the most prominent figure in this election was Nicola Sturgeon, leader of the SNP. There were several television debates, their odd character shaped by Cameron’s determination not to meet Miliband and Clegg face to face. So first we had Cameron and Miliband interviewed separately (and contemptuously) by Jeremy Paxman of the BBC. Then later, those two and Clegg appeared together in Leeds, but taking questions from an audience rather than arguing with each other.
Between those we had a debate with seven party leaders, but not Cameron. Sturgeon was the star of that show, which is all the odder since she was not even herself a parliamentary candidate. She may have impressed some people with her aggressive and intransigent language about an independent Scotland, but all unseen she was having another effect. The rise of the SNP has been compared with the struggle for Home Rule in Ireland in the 1880s, when the Irish Parliamentary Party was led by Charles Parnell, another formidable figure. Listening to Sturgeon I was reminded of what the historian Robert Ensor, who was a young radical journalist during the later part of the Home Rule drama, wrote much later:
Parnell had made a worse mistake than that. All through his career, in practising oderint dum metuant [the phrase that Cicero borrowed and Caligula relished, “Let them hate us so long as they fear us”] towards the English politicians, he had forgotten that there was an England behind them. He had never tired of saying that he held himself responsible to his countrymen only, and did not in the least care what the English thought or said about him; his whole attitude expressed a deliberate hatred towards their nation, which was not unnaturally returned.
That comparison is in one way unfair to Sturgeon, who likes to say that she bears no animosity toward the people of England. But she expresses vituperative animosity toward the Tories. During the debates, she claimed to be the real voice of the left, and she engaged in a kind of banter with Miliband, with him saying that he would never form any coalition or do any kind of deal with the SNP, and her saying, in effect, oh yes you will. Since her strategy clearly did depend on Labour’s being the largest party after the election, it has now gone badly awry, though that hasn’t tempered her aggrieved rhetoric.
For all his haughty disdain, Parnell never claimed any right to dictate British politics beyond his demand for Irish autonomy. But Sturgeon has not only insisted that the Tory government must not “defy the democratic will of the people of Scotland”; she says that she has a “mandate on a scale unprecedented for any political party, not just in Scotland but right across the UK.” This is megalomania, or to be more polite, simply delusional. A great European democracy of 63 million people has just held a general election, in which her party won 4.7 percent of the popular vote. A mandate across the whole country?
If we knew early on that the SNP was going to triumph in Scotland, it was harder until the very end of the campaign to perceive the reaction to this in England, and what its effect would be. It was instructive to compare two writers, the playwright David Hare and John Harris of The Guardian. A week before the election, Hare told us that “round here we’re all talking ABC: Anyone But Cameron,” and then, after a good deal more self-righteousness, “The obligation of any patriot at this election is, by guile or otherwise, either to unseat or reject their Tory candidate.”
What was so fascinating were the first words, “round here,” which is to say Hampstead, the leafy and very expensive borough in north London. Hare says he lives there “because it’s delicious watching American bankers go broke,” but if his personal acquaintance includes few of those bankers, I doubt it includes many proletarians either. I was reminded of Pauline Kael saying after the 1972 presidential election:
I live in a rather special world. I only know one person who voted for Nixon. Where they are I don’t know. They’re outside my ken. But sometimes when I’m in a theater I can feel them.
By contrast, Harris calls his column “Anywhere but Westminster” (or Hampstead, as it might be). He is a Labour supporter who comes from a northern working-class family, he doesn’t live in London, and he goes around the country talking to ordinary people, whom he doesn’t patronize whatever their politics. To follow his fascinating election diary, partly in video form, was to be given a hint about the strong performance of the UKIP and the success of the Tories.
It was long assumed that UKIP was a repository for disaffected, reactionary middle-class Tories, but it has now cut heavily into the Labour vote by appealing to another disaffected group, the white working class. UKIP may only have won one seat but, not having run second anywhere in 2010, it came second in no fewer than 120 seats this time, forty-four of them won by Labour. In the Yorkshire steel city of Sheffield, Labour managed to hold four of the five seats; and in three of those four, UKIP ran second.
But there was another indirect effect. One week before the election, Harris reported from Nuneaton (“home town of George ‘Middlemarch’ Eliot,” The Guardian helpfully explained), the seat that Labour had to win. There he found “one message”:
The Scots are always getting one over on the English, and some climactic Caledonian heist is now a very real prospect. Throughout the day, the same refrains repeatedly come back from people we meet: “They wanted self-rule for their country, now they want to poke their bloody noses in ours…Nicola Sturgeon’s after as much money as possible for Scotland, and I think they have a pretty good deal already…If the Scottish get in with Labour, we’re done for.”
On the eve of the election, a television crew from Channel 4 News was in Carlisle, in the far northwest of England, hard by Hadrian’s Wall. They found just the same as Harris had. Two people stopped in the street both said that they had voted Labour all their lives but were voting Tory this time, because they were frightened of the “Scot Nats.”
This is indeed Nicola Sturgeon’s election, with two great victories to her credit: her own party has swept Scotland; and she has helped to win the national election for the Tories.
None of the parties can contemplate the next few years with serenity. In his hour of victory Cameron ought to be chastened by the knowledge that fewer than four votes in ten were cast for his party, and less than a quarter of the whole electorate voted Tory. The overall Tory vote increased by only 0.8 percent since the last election, while Labour’s increased by 1.4 percent. But his party will not be chastened at all. The trouble with a parliamentary majority of twelve is that it leaves the opposition parties powerless but gives huge power to the prime minister’s own backbench MPs, who will always be threatening to rebel.
Many of the Tory MPs are excitedly scenting the raw meat of right-wing government. George Osborne will continue as chancellor, and continue his austerity policies, even though they have so far had little success in reducing the deficit as he promised, let alone in addressing the underlying weakness of the British economy, with its low investment, low skills, low wages, and low productivity. Theresa May will continue as home secretary, with plans for new powers of surveillance by the state. Michael Gove is now justice secretary (he is not a lawyer and has no legal training), and intends to repeal the Human Rights Act of 1998.
Triumphant though he is, Cameron faces a problem. He has promised to hold a referendum on whether the United Kingdom should leave the European Union, after negotiating new terms for British membership. Angela Merkel and other European leaders want the British to stay, but it’s doubtful whether they can offer much beyond cosmetic changes, and there are five dozen Tory MPs who are determined to leave come what may.
The prostrate Lib Dems have been replaced as the third-largest party in the Commons by the SNP, whose MPs cheekily tried to crowd out Labour from the Opposition benches on the first day the new Parliament met. But Sturgeon—or Salmond, who has now returned to the Commons—has a tricky hand to play. Short of full independence, the SNP want enhanced devolution, with full fiscal powers to raise and spend money in Scotland. Since that must mean an end to the present large subsidy Scotland receives from England, Tory MPs are increasingly inclined to let them have what they want, and see how they get on.
For Labour, the immediate outlook is very bleak, and the party is in turmoil, with Miliband’s departure followed by the resignation of the leader of the Scottish Labour party, and the withdrawal of the favorite candidate to succeed as leader. The worst danger is for Labour to return to the internal feuding that so damaged the party in the 1930s, 1950s, and 1980s, and indeed, the last result had barely been declared before loud voices from his own party were heard denouncing Miliband. Peter Mandelson, Tony Blair’s old consigliere and one of the architects of “New Labour,” was everywhere, demanding that Labour should shed any foolish leftist notions and return to Blairism. And Blair’s own name appeared above a column in The Observer, lamenting the defeat more in sorrow than in anger.
Doubtless Miliband was a poor leader who ran an inept campaign, but those who praise New Labour forget the legacy he was dealing with. Perhaps the turning point of the campaign came when the audience in Leeds rounded on Miliband, asking him to admit that the last Labour government overspent in the golden years before the economy crashed in 2008, and public finances with it—the Blair government, with Gordon Brown as chancellor, and Miliband as one of Brown’s henchmen.
And Blair seems quite unaware of just how low his repute has fallen, with the vast dark cloud of Iraq hanging over him. Robert Harris, now a novelist but once a political journalist, wrote derisively after the election:
When Miliband took over in September 2010 he declared: “The era of New Labour has passed. A new generation has taken over.” Well, yes, it certainly had passed—if by New Labour one means the ability to win elections.
Although Harris was fiercely critical of the Iraq war—see his roman à clef, The Ghost Writer (2007), with a thinly disguised Blair as the central character—he suggests that “Blair needs to be rehabilitated before the process of renewal can begin,” and that he should “make some sort of mea culpa” for the war. But since most people now recognize that the war was a shameful crime as well as a catastrophic blunder, waged on consciously mendacious claims, there is no apology Blair can offer that is neither hypocritical nor self-incriminating.
That same cloud hangs over so many American politicians, not least Hillary Clinton and Jeb Bush. Far from those responsible for that great crime being rehabilitated by current politics, a generation may have to pass before we can recover.
Imperium: A Novel of Ancient Rome
by Robert Harris
Simon and Schuster, 305 pp., $26.00
In 79 BCE, Pompey the Great—Republican Rome’s home-grown answer to the Greek Alexander—was upstaged by some elephants. He was celebrating his military victories in Africa with a triumphal procession through the streets of Rome. This was the nearest thing to heaven for a Roman general. Almost literally; for the triumph involved not only a shameless parade of spoils, captured weapons, looted artworks, and exotic prisoners, it also allowed the general to dress up for the day in the costume of Jupiter Optimus Maximus (“Best and Greatest”), patron god of the city.
This particular triumph was irregular in a number of ways. First of all, Pompey was far too young for any such honor, being at most twenty-six years old. “He got a triumph before he grew a beard,” as one ancient commentator later summed up the precocious celebration—vividly, if not wholly accurately. “Murderous teenager” was how an elderly adversary described Pompey nearer the time, though admittedly this had been provoked by an insult about age from the young man himself. Pompey had inquired whether his gray-beard opponent was just visiting town on a short break from the underworld.
Second, it was said to be only by barefaced chutzpah that he had squeezed permission to hold the ceremony out of Lucius Cornelius Sulla, warlord, conservative ideologue, and (at that time) dictator of Rome. When Sulla, who was then in his late fifties, seemed set to refuse, Pompey again displayed that combination of youthful arrogance and canny prescience: “You should bear in mind that more people worship the rising sun than the setting sun.” Sulla gave in.
Third, instead of the usual team of four horses, Pompey chose to ride in a triumphal chariot drawn by elephants. These animals had been rounded up during the African campaigns and the big-game hunting trip which Pompey apparently enjoyed on his vacation before sailing home. But they did more than remind spectators of the far-flung foreign territories over which the boy-general had been victorious. They cast an even stronger divine light over the whole show. For it was in a chariot drawn by elephants that the god Bacchus (or Dionysos) was believed to have returned to the West after his conquest of India. Pompey was, in other words, going beyond the traditional identification of the general as “Jupiter for the day”; he was presenting himself as the new Bacchus.
But as so often happens when mortals choose to imitate the gods, the elephants did not prove as easy for Pompey to handle as they had been for Bacchus. When the triumphal procession came to pass through a gateway on its route up to the Capitoline hill, the lumbering animals proved too big to fit and promptly became stuck. Pompey went into reverse, drew them back, and attempted the maneuver a second time—but still without success. He had no choice but to give up, unhitch them, and hang around while the standard four horses were put in their place.
Most modern historians have taken this as a classic case of an ambitious Roman getting above himself. And they have interpreted the gleeful recounting of Pompey’s discomfiture by a number of ancient writers as typically Roman moralizing rhetoric. The message is: look at the red-faced humiliation that may result from such self-promotion and from claiming the attributes of the gods. Yet a few historians have recently seen a rather more sophisticated public relations exercise at work. The whole scene was, they have suggested, carefully stage-managed, to demonstrate to the assembled spectators that Pompey had literally outgrown the traditional constraints imposed by the city and the norms of Roman political life. Pompey was now bigger than Rome.
I remain unconvinced that any such clever scheme by Pompey’s spin doctors is to be detected here. But whichever interpretation we choose, the story calls to mind a side of Pompey and a stage in his career which, from HBO’s Rome to more sober works of history, is often forgotten. Pompey’s image in the modern world tends to be defined by his later years. Politically sidelined by the 50s BCE and overshadowed by the even more dramatic rise of Julius Caesar, he was eventually drawn out of semiretirement to uphold Republican liberty in the face of Caesar’s looming autocracy. Vain and vacillating, he made a poor job of it and was himself crushed by the new “rising sun.” There are few more ignominious deaths in the history of Rome than Pompey’s in 48 BCE. Routed by Caesar’s forces at the Battle of Pharsalus, a pathetic fugitive, he was decapitated on the shores of Egypt by the eunuch henchman of a princeling (in fact the brother of the famous Cleopatra) who was keen to ingratiate himself with the victorious Caesar. In fact, on Caesar’s arrival in Egypt he was presented with Pompey’s head and wept—but according to the poet Lucan, they were crocodile tears.
This sad and seedy final scene of a generally disappointing last act now overshadows Pompey’s biography. His power and glamour in the 70s and 60s BCE are hard to take seriously; his pretensions seem faintly ridiculous. His famous portrait bust from Rome (now in the Ny Carlsberg Glyptotek, in Copenhagen) raises more laughs than it does admiration: a fat, middle-aged Roman, with piggy eyes and flabby cheeks, topped incongruously with a head of hair imitating the distinctive style of the dashing young Alexander the Great. Only a century after Pompey’s death, that obsessive polymath (and moralizing bore) Pliny the Elder could play this mismatch between Pompey’s early glory and his later fall for laughs. Describing one of the most exquisitely expensive (or grossly vulgar) objects displayed in one of Pompey’s later triumphal celebrations—a large portrait head of the general made entirely out of pearls—Pliny cannot resist pointing to the obvious irony: how horribly appropriate that a man who would suffer Pompey’s eventual fate should have put this head on show like this; what an omen for the future.
In fact Pompey’s early career was anything but ridiculous. And despite his eventual role as the well-meaning if ineffectual defender of traditional Republican liberty against the threat of one-man rule, there is a good case for seeing him, rather than Caesar, as the first Roman emperor—in all but name, at least. He dominated the political process at Rome for two decades, he conquered more territory than any Roman general before him and most after (the comparison with Alexander the Great was not purely self-serving), and on several occasions he was granted by a desperate, or grateful, Roman people more or less autocratic power. In 67 BCE, for example, when piracy was rampant throughout the Roman world (Julius Caesar was only one of many young nobles to be kidnapped on the lawless seas and ransomed for a fat sum), Pompey was given an extraordinary military command which made him effectively master of the whole Mediterranean. It was not long before—in the East, even if never in Rome itself—you could find his head on the coinage, cities named after him, religious cults in his honor.
Pompey’s problem was that he lived too long. Like many Romans, he has had the bad luck to enter the popular imagination already old, and the successes of his youth have been capped by the pompous follies of his later years.
Another victim of this treatment is his exact contemporary (they were both born in 106 BCE) Marcus Tullius Cicero. Cicero rose determinedly to the consulship, the highest elected office in the Roman state, without any advantages of aristocratic birth, hereditary political connections, or even the military expertise that gave Pompey a fast track to power. He was one of the “newest” of all the so-called “new men” at Rome, talking his way to the top through a series of high-profile legal cases, which brought with them a wealth of influential political friends. His consulship in 63 BCE was to be his finest hour: he uncovered (or, to follow the suspicions of some skeptical modern historians, invented) a terrorist plot, hatched under the leadership of a bankrupt aristocrat, Lucius Sergius Catilina—and so, as he was ever after to boast, saved the city of Rome from internal destruction, much as Pompey had saved it from the pirates.
But from that point on it was, for him too, downhill all the way. Not only was he likewise eclipsed by the emergence of Julius Caesar, but the success of his consulship quickly turned sour. In quashing the terrorists, Cicero had sheltered behind an “Emergency Powers Act” of dubious legality, and before long he was (briefly) sent into exile on the charge of putting Roman citizens to death without a proper trial. On his return he found himself marginal to the power politics of a looming civil war, an uncomfortably irrelevant figure, forever looking back to his great moment of glory and mouthing political slogans that were as outdated as they were honorable. An “alliance of all decent men,” fine as it sounded, was hardly a realistic solution to the troubles of a city torn between anarchy and autocracy.
In the end, the rhetorical skill that had underpinned his rise to power brought about his downfall and murder. For after the assassination of Caesar, Cicero delivered a series of blistering tirades, some of the cleverest exercises in invective in the history of the West, against Mark Antony, Caesar’s principal lieutenant. It was a brave and simultaneously self-destructive gesture. As soon as Antony had a chance, in 43 BCE, he had Cicero put to death; his tongue and hands were pinned to the rostra in Rome. The story goes that Fulvia, Antony’s wife, took the final vengeance, stabbing the tongue repeatedly with her long gold hairpins.
If Cicero’s later years have fared even worse than Pompey’s in the judgment of history, that is partly because of his voluminous writings, dating mostly to that period, which still survive. Literary giant he may have been, the greatest Roman rhetorician ever, and one of the most influential voices in the introduction of Greek philosophy and theory to the Latin West. But his day-to-day private correspondence (published shortly after his death) and his essays and speeches cruelly document the pretensions and pomposities of a man who has not fully grasped how far his own influence has been eroded.
This is never clearer than in the run-up to the civil war between Caesar and the “senatorial” forces under Pompey in 49 BCE. Cicero, always uncomfortable if forced to move too far outside Italy, had been unwillingly sent off to govern the province of Cilicia, in modern Turkey. While he was there, he had, however, scored some kind of military victory in a skirmish against a posse of troublesome natives and had stormed the (otherwise unheard of) town of Pindenissum. This was not far from where Alexander the Great had passed in his victorious march eastward almost three hundred years earlier—and Cicero, with a degree of self-irony (one hopes), compared his own achievements to Alexander’s.
Returning to Rome, he decided to request a triumphal procession to celebrate his success, almost at the very moment when Caesar was crossing the Rubicon and invading his homeland. Cicero’s letters of this period are an almost poignant testimony to his unrealism and self-obsession. As the Roman world collapsed into civil war, and the Roman elite were leaving the city to join up with one or other of the rival camps, Cicero was still calculating how many votes he needed to secure his triumph. Rather than face the realities of the new conflict, he lingered outside the city (triumphing generals were not allowed to reenter Rome until the very day of their celebration), the laurel wreaths decorating his rods of office, or fasces, wilting by the day.
In Robert Harris’s new Roman novel, Imperium, all this is yet to come; and presumably it will, in the later volumes of what is a projected trilogy on the life of Cicero. This first episode concentrates instead on the rise to power of its hero, a sharp-talking young lawyer with a bit of a social and political conscience—in a city where the precocious Pompey is the key force to be reckoned with. Told in the voice of Cicero’s slave Tiro (in real life, his favorite secretary and confidant, as well as the author of a biography of his master, now lost), it takes the story from the twenty-something-year-old Cicero’s training in oratory to the moment in 64 BCE when he tops the poll in the elections for the next year’s consulship. Harris has already tried his hand at recreating the ancient world. After his best-selling books Enigma and Fatherland—a fantasy about the Nazi conquest of England—his Pompeii (which appeared in 2003) was a clever new take on the eruption of Vesuvius, combining reflections on world geopolitics and modern imperialism with a surprisingly infectious enthusiasm for the details of Roman aqueducts and water supply. Imperium—whose title translates as the military and political “power” of Roman officeholders—has even more bite and appeal than that.
Harris is an expert storyteller, with a sharp eye for evocative detail and an ability to inject suspense into some at first sight unpromising material. An exploding volcano, after all, is one thing; whether Marcus Tullius Cicero is going to achieve the consulship or Pompey will be granted special powers under the Lex Gabinia quite another. Imperium can be read with pleasure by readers of all sorts, whether they know anything about the Roman Republic or not: from the depiction of the Senate house, with pigeons in its rafters dropping excrement onto the Roman bigwigs seated below, to the sparring but affectionate relationship between Cicero and his well-connected and shrewd wife Terentia (though those who already know that a divorce is set to follow in a future volume and that the elderly Cicero will eventually make a fool of himself by marrying an heiress young enough to be his granddaughter may read the sparring less affectionately).
My own particular favorite walk-on part (at least from a Roman historical point of view) is the junior senator Marcus Lollius Palicanus, a bluff, energetic henchman of Pompey—who, unlike his more sophisticated senatorial colleagues, sees no reason to conceal Pompey’s superhuman ambitions. When Cicero and Tiro go to call on Palicanus, they walk into his front hall to be confronted by a bust of Pompey resplendent in the outfit of Alexander the Great (“I suppose it makes a change from the Three Graces,” muses Cicero). Moving into Palicanus’ small study, they find it completely dominated by “a huge wall-painting of a laureled Pompey, this time dressed as Jupiter, complete with lightning bolts shooting from his fingers.” “Do you like it?” asks Palicanus. “It is remarkable,” says Cicero—while he and Tiro try not to catch each other’s eyes for fear of collapsing in laughter. This is a brilliantly simple little exchange, which manages to say a good deal about the complex cultural world of the first century BCE: its competing levels of sophistication, mistaken metaphors, crude literalizations. In this case, however, it was actually the clever and urbane Cicero and his slave buddy who in their superior irony had got it wrong—for Pompey in a sense was Alexander and Jupiter rolled into one.
In Imperium, even more engagingly than in Pompeii, Harris manages to exploit the potential of historical fiction both to bridge and at the same time to open up the gap between the Roman world (or any past culture, for that matter) and our own. “Authenticity” and “accuracy” are not the main routes to success in this genre, even though Harris comes out reasonably well on that score. (I would not, however, trust him on plural Latin nouns: it should be Laenates not Laeni. Nor on the byways of Roman constitutional law: the financial qualification for senatorial status at this period was exactly the same as that for equestrian status, 400,000 sesterces, not a million.) Harris, like all the best historical novelists, offers something more than technical accuracy or well-observed re-creation. He is constantly playing with (rather than pandering to) his readers’ desire to look into the Roman world and find themselves. He challenges us both to enjoy and to resist his marvelously inventive, if sometimes glaringly implausible, parallels between Cicero’s situation and our own modern world. Part of the fun of the book is our wary engagement with him as a hugely entertaining, deeply insightful, and, simultaneously, treacherous guide to the history he conjures up. Part of his skill is to set us on our guard against his own seductive narrative.
Wry glances at modern political life, which Harris knows better than most, from years spent as a political journalist and commentator in the UK, enliven almost every page. Cicero’s teacher of rhetoric hammers home the mantra “Delivery, delivery, delivery” as insistently as Tony Blair with his “Education, education, education.” His election campaigns are here conducted on the model of a mainstream British political party, with energetic and well-planned canvassing in the regions (“hands shaken, stories listened to, bores endured,…local worthies smoothed and flattered,” while Cicero delivers exactly the same rousing speech in each dreary town hall). The convenient birth of baby Marcus to Terentia is hailed as an electoral advantage much along the lines of the infant Leo Blair (“suggestive of a virile candidate”); and even while his wife is still pregnant Cicero stoops to some half-serious banter about arranging the ancient equivalent of photo opportunities for the kid. At the end of the book, relaxing after his final victory at the polls, he urges his brother Quintus to think of a “third way” between the two extremes of the horribly plutocratic Marcus Licinius Crassus and the radical, even revolutionary, Julius Caesar: a crafty allusion to the “Third Way” slogan of modern centrist politicians (as well as to the idea that Cicero might in due course find it just as tricky a position to occupy as Clinton, Blair, and Schröder have found it).
One episode in particular is systematically recast by Harris in distinctive modern terms: that is, the Roman reactions to the menace of the pirates running amok in the Mediterranean and the decision in 67 BCE under the Lex Gabinia (“The Gabinian Law,” named after the junior magistrate who proposed it) to grant Pompey almost limitless power, as well as a vast budget and more than 120,000 troops, to deal with them. Harris sees these events in the light of modern terrorism, its political manipulation by interest groups at home, and the threat to traditional civil liberties and democratic values that some responses to terrorism can raise: while the self-serving Pompey insisted that “the existing national security system…was clearly inadequate to the challenge,” traditionalists in the Senate urged that “ancient liberties were not to be flung aside merely because of some passing scare about pirates.”
In fact, the Lex Gabinia has a large part in Imperium. This was, at the time, a highly contested piece of Roman legislation. In handing over power to a single man, it struck at the traditional system of checks and balances that lay at the heart of the Roman Republican constitution. Conservatives, not unreasonably, objected that Pompey was becoming little short of a king, and the law was only finally passed in the face of riotous opposition. Yet, turning point in Roman history though it was, it has never before escaped from the hands of academics to become the stuff of popular fiction. Harris uses the controversy in part to explore the dynamics of the relationship between his Cicero, a slippery and brilliant political tactician, and his Pompey, endowed with immense power and ambition but surprisingly few political wiles. In this fictional version, it is Cicero who steers the Gabinian bill through the assemblies in Pompey’s favor, by a clever combination of appealing to historical precedent and persuading a reluctant Pompey that he would be more likely to be called on to save the city if he at least pretended not to be so eager to do so. And it is, of course, Cicero who ghostwrites Pompey’s nauseatingly self-effacing speeches (“the idea of appearing modest appealed to Pompey’s vanity”). In the end Cicero suffers the indignity of many others who sell their words to others, then or now. As Pompey leaves one of the debates, rather too pleased with his own performance, he turns to Cicero and asks, “Did you like the line about my heart remaining among the hearths and temples of Rome forever?” Cicero can only mutter under his breath, “Naturally I did, you great booby—I wrote it!”
But it is in the run-up to the passage of the Gabinian law that Harris offers his most glaring modern parallel. In Imperium, what provokes panic in Rome is not the omnipresence of the pirates across the Mediterranean (people had been living with that for years), but a particularly daring pirate strike on the port of Ostia, almost at the very center of the Roman world, only a few miles downstream from the forum of Rome itself. Ships and granaries are destroyed, a few hundred people killed, a couple of politicians taken hostage. There is much talk in Rome of security breaches, organized conspiracy, unconventional weapons (“poison-tipped arrows, and Greek fire”), proportionate response, and the problems of dealing with a new type of enemy not tied to state or government. “I do not believe we should negotiate with such people, as it will only encourage them,” thunders Pompey (before Cicero has started coaching him in how to get his own way).
The sting in the tail is obvious. Even as Cicero begins to plan how to achieve for Pompey the vast powers he craves, he sees the “pirate menace” for what it is: a weapon in Pompey’s rise to dominance. The panic is being fanned and pointless security measures being put in place precisely to heighten fear. (“This is absurd,” said Cicero as he watched the early-warning devices installed. “As if any sane pirate would dream of sailing twenty miles up an open river to attack a defended city!”) The aim is to make acceptance of the Lex Gabinia easier, for—so the public was to think—only a supreme commander could hope to bring security to the terrified homeland once more. In the end Cicero must go along with the lie.
When, after the law was passed, Pompey did manage to clear the Mediterranean of pirates in seven short weeks, most of Rome was delighted, including the now collusive Cicero: “Whatever Pompey’s faults, no one disputed that he was a brilliant soldier.” Harris leaves it to Terentia waspishly to restate Cicero’s earlier doubts: if they really had been swept away so quickly, “perhaps they had not been quite the menace that Cicero and his friends had made them out to be in the first place!” Cicero did not take kindly to his own bad faith being revealed: “The mood in the house during the following days was as fragile as Neapolitan glass.”
Whether Harris in fact believes the parallel between pirates and modern terrorism to be a good one is not the main point (though he did recently repeat it in The New York Times). Most readers will not anyway take it straight. The enjoyment of Imperium, as of other historical fiction of this quality, comes from watching Harris manipulate a modern parallel and push it with verve and style up to and beyond the bounds of plausibility. This is a genre in which the clever reader is always deemed to know better than the author. We know that the home life of Roman politicians was far from the almost suburban domesticity conjured up by Harris (in which, apart from Tiro, the slaves are almost invisible—and Cicero’s confidant Atticus is, implausibly, living in a “perfect bachelor setup”). We know that this Cicero’s “Labour Front Bench”–style reactions to political crisis would have been baffling in the Roman world itself. Harris’s inspiration comes more from the party structure and ritualized competition of the House of Commons than from the violent bloodbath of the late Republican Senate and assemblies. The pleasure comes from watching the elegant literary gymnastics with which he tries to pull the parallel off.
There is, of course, also something satisfying about the way Harris takes cover in the first century BCE to discuss some of the rawest issues of contemporary political life: not just the threat of global terrorism, but the manipulation of public reaction and the complicity of politicians who should know better. It has been one of the most traditional uses of the ancient world at least since Shakespeare decided to discuss political morality under the alibi of Julius Caesar; and it is one of the worthiest reasons to continue to support the study of ancient history and literature today.