How MPs resign: The Chiltern Hundreds and the Manor of Northstead

 

Two parliamentary by-elections took place yesterday in Stoke-on-Trent and Copeland. During the count one of the resigning MPs tweeted:

 

This refers to the convention by which MPs cannot just give up their seats. Instead they must apply for an ‘office of profit’ under the Crown which disqualifies them from sitting in the House of Commons. Jamie Reed raises an interesting point – here I will answer the question as well as provide some background on the mysterious process by which an MP can leave Parliament voluntarily.

It was in 1624 that the House of Commons resolved that a man, after he is duly chosen, “cannot relinquish”. There are a number of ways an MP can be removed – death, expulsion by resolution of the House, or the election being declared invalid. However, if a member wishes to leave, this is only possible by rendering themselves ineligible for office.

It had long been accepted that a person who held a paid position from the Crown had a conflict of interest – necessary criticism of the Government and the Crown could easily be curtailed if an MP was also an object of patronage. In 1680 this was confirmed in another Commons resolution which stated that no member could “accept of any Office, or Place of Profit, from the Crown” while remaining an MP.

During Queen Anne’s reign a 1707 Act of Parliament (6 Anne c.7) made the position clear:

If any person being chosen a Member of the House of Commons, shall accept of any Office of Profit from the Crown, during such Time as he shall continue a Member, his Election shall be, and is hereby declared to be void, and a new Writ shall issue for a new Election, as if such Person so accepting was naturally dead. Provided nevertheless, that such Person shall be capable of being again elected, as if his Place had not become void as aforesaid.

Initially a number of empty positions, which conferred no real advantage, were used for the purpose of allowing MPs to stand down. This could be so that they could stand for re-election to test the feelings of their electors, to change constituencies, or to be returned after being appointed a Minister (a requirement until 1919).

The ‘offices of profit’ included stewardships of the Manors of:

  • Old Shoreham, Sussex (last used for the purpose in 1799)
  • East Hendred, Berkshire (1840)
  • Poynings, Sussex (1843)
  • Hempholme, Yorkshire (1865)

Now only two of these stewardships are used for the purpose: the Stewardship of the Chiltern Hundreds, Buckinghamshire (first used around 1750); and the Manor of Northstead in Yorkshire (first used in 1844). By the 1600s there were no duties or revenues associated with either position – their only use was as a legal fiction.

Now, when an MP wishes to resign he has to apply for one of these offices of profit. The Chancellor of the Exchequer signs a warrant of appointment which disqualifies the MP and enables a writ for a by-election to be issued. The two offices are used alternately, which allows more than one MP to step down at the same time.

The holder of the office holds it until the next appointment. So, the answer to Jamie Reed’s question is – he is still Steward of Northstead and he can enjoy the title for the time being. The next resignee will get the Chiltern Hundreds, and only when another MP wishes to step down will he lose the stewardship.


The Age of Champions: Roger Federer and the older tennis elite


Roger Federer is a phenomenal player. He holds more Grand Slam titles than any other man in the history of tennis. His latest victory at the Australian Open was his 18th. Pete Sampras and Rafael Nadal lag behind on 14 each. Federer’s most recent triumph was remarkable not merely for adding to his already dominating total, but due to his age.

Winning the Australian Open men’s singles at the age of 35, Roger Federer has justly been acclaimed as one of the oldest champions of a tennis Grand Slam tournament. In an era of increasing power and speed, with the punishing schedule of competitions exacting a toll of exhaustion and injury on players of all ages, male winners over the age of 30 are seen as something of a rarity. (While not ignoring the achievements of female players, not least the still-unstoppable force of the Williams sisters, here I will be considering the men’s game. I shall also be concentrating on singles.)

Federer is far from being the oldest champion. In 1909 the 41-year-old Arthur Gore triumphed in the Gentlemen’s Singles at Wimbledon. Indeed, he competed in his final tournament aged 52, far beyond the usual retirement age in the modern game. But that was a different time, when the players were amateurs and the game far slower. Most tennis records therefore start with the Open Era in 1968, when professional players became eligible to play at the major tournaments for the first time.

Even then, Roger Federer is not the oldest Grand Slam winner. That honour belongs to the extraordinary Ken Rosewall, who not only won the Australian Open in 1972 aged 37, but two years later finished runner-up at both Wimbledon and the US Open. He competed into his 40s.

Are these just physical and mental anomalies in a sport dominated by younger players, or do champions need time to mature?

According to The Economist (‘Senior Slammers: Roger Federer and Serena Williams defy age at the Australian Open’, 29 January 2017), older players are back with a vengeance. It suggests that Federer’s victory is the culmination of a growing trend for players to continue to excel into their 30s. We are told that the average age of a top-100 ATP player is currently 28.6 (in 1990 it was 24.6). But while this tells us that professional players are retiring later, it does not help us to establish whether older players have a better chance at succeeding at the highest level.

To test this, I have analysed the ages of winners of the four Grand Slams from the start of the Open Era to the present (including Federer’s recent win in Australia) (Chart 1). This seems to suggest that players such as Rosewall and Federer are anomalies, and that throughout the period winners have been predominantly aged in their 20s. Teenagers and older players are equally unusual.


Chart 1

 

Analysing this data further, we see that in all the main measures of averages (mean, median, and mode) the average age of Grand Slam winners is 24. Looking at winners arranged by age there is a definite curve showing a small number from age 17 with a peak at age 24 (14% of the total); there is then a tailing off with players aged over 32 accounting for only 4%. Even when we discount players who won multiple tournaments in a particular year the same general pattern remains (Chart 2).


Chart 2
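The three measures of average used above can be sketched in a few lines of Python. The ages below are an invented sample for illustration only, not the actual Grand Slam data behind the charts.

```python
# Computing the three averages discussed above: mean, median, and mode.
# NOTE: these ages are a made-up illustrative sample, not the real data.
from statistics import mean, median, mode

ages = [21, 22, 24, 24, 24, 25, 26, 28, 31, 35]

print(mean(ages))    # arithmetic mean: 26.0
print(median(ages))  # middle value of the sorted list: 24.5
print(mode(ages))    # most frequent value: 24
```

In the real data all three measures happen to coincide at 24, which is what makes the peak of the distribution so pronounced.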

 

The picture changes, however, if we take our original data and apply a 5-year moving average to it. A moving average smooths out year-to-year fluctuations and lets us see how successive subsets of the overall data behave (Chart 3). This gives us quite a different picture. While the general grouping throughout the Open Era has still been solidly around victors in their mid-20s, there is a sharp decline in the early 1970s, accounted for largely by the ageing Ken Rosewall being succeeded by a younger generation of players. A more gradual and consistent rise is also visible from the mid-2000s to the present, with the average age increasing from 23 to 28.


Chart 3
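A simple moving average of the kind used for Chart 3 can be sketched as follows. The series here is a made-up run of winners' ages, purely to show the mechanics; each output value is the mean of a 5-year sliding window.

```python
# Sketch of a simple (unweighted) moving average, as applied to the
# winners' ages for Chart 3. The input series here is illustrative only.
def moving_average(series, window=5):
    """Return the mean of each consecutive `window`-length slice."""
    return [sum(series[i:i + window]) / window
            for i in range(len(series) - window + 1)]

ages_by_year = [24, 25, 23, 26, 27, 28, 29, 30]
print(moving_average(ages_by_year))  # [25.0, 25.8, 26.6, 28.0]
```

Note that the smoothed series is shorter than the original by `window - 1` points, which is why a moving-average chart starts a few years into the data.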

 

This does not explain how and why older players are successful. Escaping injury and retaining a high level of fitness are key, but the accumulation of stamina and experience must have a role to play. It is natural for mental resilience to build up over time and for a player to build up an intimate acquaintance with their major opponents’ games and styles. Whatever the reasons, the trend towards later success announced by The Economist appears well founded.

It remains to be seen whether Roger Federer’s Indian summer will continue, but if this year’s Australian Open is anything to go by, we are likely to be rewarded with more interesting tennis from our older players.

 

Book Review: Anthony Grafton, The Footnote: A curious history (1997)


Footnotes are an integral part of the historical writing process. No scholar can hope to have his work published academically if he does not accurately, if not lavishly, cite his sources. Even when writing for a popular audience, most contemporary historians will add a dusting of references to the most interesting quotations from their original material.

Readers may be either impressed or bored by footnotes, but few have probably given much thought to the origins and history of this phenomenon. Anthony Grafton, a professor at Princeton University, has supplied this deficiency. It may well be asked why such a study is necessary. Students and academics use footnotes because they need to demonstrate their professionalism – they are a practical tool rather than a thing to be loved. But that does not mean that their story is itself uninteresting.

Grafton takes us on a journey around the western world, highlighting curious episodes in the history of the footnote, whether as a vehicle for bibliographical citation or for commentary on a text. From the outset he defines what a footnote is, and what its purpose is, to the modern historian:

Historians perform two complementary tasks. They must examine all the sources relevant to the solution of a problem and construct a new narrative or argument from them. The footnote proves that both tasks have been carried out. It identifies both the primary evidence that guarantees the story’s novelty in substance and the secondary works that do not undermine its novelty in form and thesis. By doing so, moreover, it identifies the work of history in question as the creation of a professional. (pp. 4-5)

From this description the reader is equipped to understand how the authors of the past moved towards or away from this goal. We are shown how Edward Gibbon, in The Decline and Fall of the Roman Empire, reserved his most acerbic comments for his footnotes. Leopold von Ranke, the celebrated father of ‘scientific history’, was addicted to archives, gaining access to the greatest private collections long before the institution of state archives; though even he, we learn, vacillated between overscrupulousness and abandon in his citation of the sources.

Grafton also shows us the early interest of antiquarians in documenting their facts, together with the controversies that periodically ensued. Ralph Brooke, York Herald, criticized his colleague William Camden for not being accurate in his citation – he advised Camden, formerly headmaster of Winchester, to return to his “inferior province of boy-beating”.

Others took their commentaries to bizarre lengths. Pierre Bayle’s Historical and Critical Dictionary (1696) was required reading for the European intelligentsia for a century. Bayle could count Voltaire and Winckelmann amongst his dedicated readers. Yet this work consists mainly of footnotes, and even footnotes to footnotes. The original text became dwarfed by Bayle’s own commentary.

Grafton’s path is sometimes hard to follow. He does not present a strict chronological account of the development of the footnote. We begin in the late-eighteenth century, go forward to the 1800s, back to the Enlightenment, and end up in the seventeenth century. This is an understandable treatment of the subject matter. There is no single teleological history of the footnote and it would be disingenuous to claim there could be. Different authors, generations, and schools dealt with footnotes in their own way until very recently. What is more difficult to keep up with is the dotting around between periods, and the hope that in moving backwards in time we might finally reach his opinion on how the footnote originated. Incidentally, Grafton never seems to settle upon this.

The Footnote is entertaining and engagingly written, with humour and irony even extending to the copious and lavish footnotes which adorn the book. But it is more than a piece of mere frippery. It is an important contribution to the hitherto scattered scholarship on the history of the footnote.

 

Electing a President: the origin and reality of the Electoral College

The White House, Washington DC

 

To outsiders such as myself, the system of US presidential elections seems bafflingly complex. (Though as a British citizen it might be said the system by which the Prime Minister is appointed is just as bizarre.) Like many others I have been following the latest contest between Donald Trump and Hillary Clinton. Meanwhile I have been trying to get to grips with the method of electing a president: the Electoral College. At a time when the voice of the people is so highly prized, this indirect election which takes place every four years in America does not seem to sit well with the self-proclaimed land of liberty. The system, though, is an interesting one, with purpose even if imperfect in practice.

 

For the benefit of non-Americans, this is how the system works. In each state, voters choose the presidential candidate they support; the candidate gaining a simple majority of votes in that state wins it, and the state (with certain exceptions) appoints a number of electors pledged to that candidate. These electors collectively form the Electoral College, and it is their votes taken together that determine who will be the next president.

The number of Electoral College members equals the number of senators plus members of the House of Representatives that state elects. Each state has two senators, while the number of congressmen depends upon population. So, a small state like Vermont has three members of the Electoral College, while the densely-populated state of New York has 29. Currently there is a total of 538 votes available and so the candidate who gains at least 270 of these becomes President of the United States.
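The arithmetic above can be sketched in a few lines of Python. The House seat counts used here are the figures implied in the text (Vermont with one seat, New York with twenty-seven), and the function name is my own for illustration.

```python
# Sketch of the allocation described above:
# electoral votes = 2 senators + number of House seats.
def electoral_votes(house_seats: int) -> int:
    return 2 + house_seats

print(electoral_votes(1))   # Vermont: 1 House seat  -> 3 votes
print(electoral_votes(27))  # New York: 27 House seats -> 29 votes

total = 538                 # 435 House + 100 Senate + 3 for Washington DC
majority = total // 2 + 1   # smallest strict majority
print(majority)             # 270 votes needed to win
```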

This contains certain flaws, but it is instructive to look at how such a system arose in the first place. Following the independence of the United States and the agreement that the country should be led by a president, the founders of the new nation had to determine how he should be selected.

At the meeting of the Constitutional Convention in 1787 the delegates considered various options. The initial plan was to elect the president by a vote of Congress, or instead by the legislatures of the states. This was rejected as it would make the president subservient to the legislature – the delegates preferred to keep the executive power of the president separate. The other main alternative was direct popular election, letting the people choose. This was far too radical a possibility for the late eighteenth century: it was feared the people would not be able to understand the issues sufficiently to make a discriminating choice.

Accordingly, a compromise was required and in August 1787 the Committee on Postponed Matters was constituted to consider this issue among others. Within four days a solution had been found: the Electoral College. Each state would appoint electors based on the number of its elected federal representatives, and each elector would have two votes. When these were combined, the leading candidate would be elected President; the runner-up would become Vice-President. This meant that the system was not dominated by either the legislature or the tyranny of the majority.

Initially it seemed a sensible compromise, but very soon it began to show its flaws. The founders assumed that the states would select the best leader objectively. In the first election none of the candidates had a party affiliation, and George Washington (President, 1789-1797) always remained non-partisan. However, by the second election (1792) parties had begun to emerge, and after the deadlocked election of 1800 the Twelfth Amendment (ratified in 1804) required electors to cast their two votes separately, one for President and the other for Vice-President.

This dealt with the immediate problem but over the following decades other weaknesses emerged. The electorate became better educated. Smaller states were deemed to be overrepresented. The national popular vote did not always tally with the outcome of the election. Improved technology has meant that votes can be counted more quickly, easily, and accurately than at any time in the past.

Although in terms of direct democracy the Electoral College can be deemed a failure – George W Bush became President in 2000 with half a million fewer votes nationally than his opponent Al Gore – it does have its positive attributes. For instance, the relative equality of the states means that even small ones are given attention and valued; if they were not so electorally important they might well be ignored.

There have been moves to reform the electoral system dating back several decades, though none has ever been enacted. A more recent movement aims to get enough states to agree to abide by the national popular vote, though this is a work in progress. Perhaps the ultimate obstacle is that the Electoral College is written into the US Constitution; matters that are so entrenched are hard to change. But it is not my place to criticize the system. The Electoral College arose from a particular set of historical circumstances. These have changed; the system has not. But so long as the American people support it and believe that it works in their interests and not against them, the Electoral College will be here to stay.

 

Boris Johnson’s “Titanic success” – a lesson in allusion


At the Spectator magazine’s annual awards for Parliamentarian of the Year, the Foreign Secretary Boris Johnson, on receiving the Comeback of the Year award, spoke about Brexit. He claimed that “Brexit means Brexit and we are going to make a Titanic success of it”. The former Chancellor, George Osborne, presenting the award, quipped: “It sank”. Johnson quickly sought to clarify his meaning, insisting that Britain would make “a colossal” success of Brexit.

Since uttering the words “Titanic success”, Boris Johnson has returned to his usual role of gaffe-ridden bonhomie. Mr Osborne and the media have assumed that Johnson was referring to RMS Titanic, the ill-fated liner that sank on its maiden voyage in 1912 in spite of claims of its invulnerability.

As one of the leading advocates of Brexit, Johnson likening it to a notorious disaster would be bizarre and illogical. Accordingly, I would suggest that was not his intention. His comment was misinterpreted because his cultural references are on a different level.

Anyone who has read Johnson’s journalism or heard his speeches will know of his deep love of the Classical world. It is well known that his hero is the Athenian statesman Pericles. He peppers his output with references to Greek and Roman culture, accurately or not. It has been claimed this is part of a deeply eccentric self-defence mechanism. But it is an important part of his public psyche.

An Oxford Classics graduate, Boris Johnson has long been the Government’s classicist-in-residence. So it is unsurprising that he should refer to the “Titanic success” of Brexit. For I would suggest he was not referring to the marine disaster, but to Greek mythology. In this context, before Zeus and the Olympian gods came to rule the universe they were preceded by an older race of deities, the Titans.

Because of this there is a meaning of “titan” as “a person of gigantic stature or strength, physical or intellectual, a ‘giant’; sometimes, one who belongs to the race of ‘giants’ as distinct from the Olympians or ‘gods’” – so says the Oxford English Dictionary. This is clearly a more positive reading of the word and is far more likely to be the allusion that was intended.

Thomas Carlyle, in his history of Frederick the Great, noted that “the figure of Napoleon was titanic”, while Byron in writing of Rome spoke of “the skeleton of her Titanic form”. These authors, writing long before the ship was ever contemplated, could speak of things “Titanic” that were enormous and powerful. It is rather that tradition that Johnson is part of. But it should be noted that Byron wrote of Rome as a ruin and Napoleon was undone by ὕβρις (as Johnson might put it).

Students of the Classics will also be aware that the Titans engaged in a 10-year war with the Olympian gods, at the end of which they were vanquished and imprisoned in Tartarus, the deepest part of the Underworld. One hopes that this is an oversight in Johnson’s analogy making rather than a prediction of the fate of Brexit and its adherents.

Book review: David Lough, No More Champagne: Churchill and His Money (2015)


Winston Churchill has a well-deserved reputation as a bon vivant. From the iconic cigar to the John Bull physique to the legendary consumption of Pol Roger champagne, he was a man who enjoyed life. This is fully documented in the vast biographical literature on Churchill. What is perhaps less well known is that for the majority of his long life his finances were out of control, veering from bursts of fortune to overwhelming debt. In this original study of Churchill and his money David Lough shows us the constant preoccupation the statesman had with personal finance.

Churchill was born into a background of high social status but great financial uncertainty. His father, Lord Randolph Churchill, was a younger son of the Duke of Marlborough, a radical Tory and sometime Chancellor of the Exchequer. Throughout his life he constantly struggled to maintain an income that would meet the enormous expenditure of himself and his family. His early plan of marrying a wealthy American heiress had unravelled when his father-in-law’s magic financial touch disappeared along with his fortune. Consequently, much of Randolph’s correspondence with his son at school and in the army concerned Winston’s consistent and unrepentant overspending of his allowance.

Writing was Churchill’s salvation. An author and orator of genuine persuasion and poetic inspiration, he was only set upon this path professionally by his mother’s social connections. The Daily Graphic was owned by a friend of hers and in 1895 she arranged for Churchill to write a series of war dispatches. Two years later she used her influence at the Daily Telegraph to get him work as a war correspondent on the North West Frontier. A year after this his first book was published. This was a craft that was to sustain him – with periodic ‘retirements’ to avoid tax – for the rest of his professional life. It culminated in his history of The Second World War, which earned him the Nobel Prize in Literature and a fortune in cash, not to mention a succession of all-expenses-paid foreign holidays awarded by his publishers.

His true vocation, however, was as a politician. First elected an MP in 1900, he would retire from the House of Commons more than half a century later. When he entered parliament members received no salary, but it was not long before he began to climb the ministerial ladder. By 1910 he was Home Secretary at the age of 36, with a salary of £5,000 (the rough equivalent of £450,000 in today’s prices) and would serve in various roles, including First Lord of the Admiralty, and ultimately Prime Minister. However, this masked Churchill’s great inability to keep his spending under control.

He bought the Chartwell estate in 1922 but this proved such a burden that after the war a group of businessmen bought the property from Churchill. His family retained the right to live there for a relatively modest rent until he and his wife died, after which time Chartwell was presented to the National Trust.

In 1926, Churchill addressed a memo to his wife in which he ordered that ‘no more champagne is to be bought’ and that ‘cigars must be reduced to four a day’. This was a time when he was Chancellor of the Exchequer, again on a salary of £5,000 (equivalent to £250,000). In addition to his spending on material comforts, Churchill gambled on a massive scale (fortunately mainly only on vacation, but still losing an average of £40,000 a year), owned racehorses, and was a compulsive speculator on the stock market. What the modern media would make of a Chancellor with such chaotic personal finances can only be imagined, but Churchill benefitted from an age in which the private life of politicians was still taboo to newspapermen.

Coming from an aristocratic family of steadily decaying financial stability, Churchill knew the difficulties of maintaining an estate. Following the war, the Queen is reputed to have suggested making Churchill Duke of London, an honour he refused. It is generally believed that this was a gesture of humility and a wish to remain a commoner. Lough, however, suggests his reluctance was due to the lack of an estate to support the dukedom, since none was being offered. This is but one example of how money matters can serve as a key to understanding Churchill’s decisions.

Churchill led something of a charmed life. Having been given his break as a writer by his mother, he was only sustained and possibly kept from bankruptcy by a succession of wealthy patrons: Sir Ernest Cassel, a neighbour of the Churchills and sometime financial advisor to Edward VII; Sir Henry Strakosch, banker and chairman of The Economist; and Lord Camrose, owner of the Telegraph. All these magnates loaned (without expectation of repayment) or gifted Churchill enormous sums of money to bail him out of tight financial corners, due to their personal affection for him or firm belief in his political talents.

Alongside a host of biographies of varying length and quality, several authors have examined other aspects of Churchill’s life. Books on Churchill and architecture, and Churchill as painter, have just been published. But this is the first systematic study of Churchill and his money. Lough is the ideal candidate to write such a book: a history graduate, his professional background is in financial markets and wealth management. In spite of a tight grip on the figures, Lough presents an attractive narrative, weaving financial information around the ups and downs of Churchill’s personal and professional life. For a first book this is an extraordinary achievement, and perhaps promises future studies of the finances of the famous.

Britain and Europe II: The Hanseatic League

800px-Hans_Holbein_der_Jüngere_-_Der_Kaufmann_Georg_Gisze_-_Google_Art_Project

Hans Holbein the Younger, Portrait of Georg Gisze

Yesterday, I wrote about Britain’s leading but interdependent role in the Napoleonic Wars as an illustration of the fact that its greatest moments have not been self-generated but the result of meaningful international collaboration. Today, I would like to consider Britain’s trading networks in the light of the greatest European mercantile organisation prior to the EU, the Hanseatic League.

 

Many Brexiteers have claimed that although they object to the integration of laws and policy that has become a leading characteristic of the European Union, they broadly endorse its free trade aims. They heartily, though mistakenly, state that Britain’s involvement with Europe was a positive one so long as it was all about trade and not about politics. But it may be helpful to consider whether it has ever been possible to have secure trading alliances without making political compromises for the greater good.

In the Middle Ages, communities of merchants operating in northern European cities began to work with one another. Beginning with the three independent cities of Bremen, Hamburg, and Lübeck, the network known as the Hanseatic League eventually stretched as far as Poland, Estonia, and England. The Hanse communities were independent of each other but capable of collective action. They had a diplomatic system and even had the military capacity to enforce their terms of trade.

In London there were German merchants trading a century before the Norman Conquest and over time they obtained more privileges for the trade they brought to England. They had a headquarters fronting the Thames, called the Steelyard, and other outposts at the (then) major ports of Boston and King’s Lynn. The English used the Hanse to export wool to the Continent and to import much-sought-after German wine, a much more efficient means of accessing European markets than native merchants could have achieved alone.

The Hanseatic merchants not only spread trade but were a vehicle for cultural dissemination. A young German artist, Hans Holbein, painted some of his first significant portraits in England with Hanseatic merchants as his sitters. Partly due to this, he soon came to the attention of the English elite and in time came to produce the distinctive image of Henry VIII that is still so recognisable.

In spite of this, in time groups of English merchants, thinking they could do a better job, gradually drove the Hanseatic merchants out. During the reign of Elizabeth I, the Society of Merchant Adventurers wanted to conduct trade on their own terms and in 1598 put pressure on the Queen to order them to leave the country. As one of the merchants sadly reported back to the League ‘we at last, because it could not be otherwise, with gloom in our hearts, went out of the door, and the door was shut behind us. May God have compassion.’

Typically, the merchants’ absence soon became conspicuous and a few years later they were allowed back; but things were never the same again.

The British continued to find a use for the Hanseatic League, even once they had tired of their mercantile services. During the Napoleonic Wars, the French emperor forbade European nations from trading with Britain or communicating with them diplomatically. The Hanseatic League, however, still retained a network of embassies, and these were vital in enabling the British to communicate with their allies on the Continent.

What is the lesson of this story? It is that when Britain opens itself to trade, it is a better place. Where money, goods, people, and culture can move freely, the result can only be beneficial. We also learn that domestic arrogance can soon be regretted, and irreparably so; had the Merchant Adventurers not driven out the Hanseatic merchants, what good might have been achieved? Without the League’s international diplomatic system there might have been no victory for Britain over Napoleon.

It is a mistake to cast aside the patterns of trade that have worked so well, a mistake to close Britain off from the freedom of movement of goods, services, and people, a mistake to assume that bureaucratic networks are inherently defective. Britain should embrace the European Union as once the Hanseatic League was embraced. It must not make the same arrogant mistakes that drove away our partners in trade.