
The Economic History of Indonesia

Jeroen Touwen, Leiden University, Netherlands

Introduction

In recent decades, Indonesia has been viewed as one of Southeast Asia’s successful, high-performing, newly industrializing economies, following the trail of the Asian tigers (Hong Kong, Singapore, South Korea, and Taiwan) (see Table 1). Although Indonesia’s economy grew with impressive speed during the 1980s and 1990s, it ran into considerable trouble after the financial crisis of 1997, which led to significant political reforms. Today Indonesia’s economy is recovering, but it is difficult to say when all its problems will be solved. Even though Indonesia can still be considered part of the developing world, it has a rich and varied past, in the economic as well as the cultural and political sense.

Basic Facts

Indonesia is situated in Southeast Asia and consists of a large archipelago of more than 13,000 islands between the Indian Ocean and the Pacific Ocean. The largest islands are Java, Kalimantan (the southern part of the island of Borneo), Sumatra, Sulawesi, and Papua (formerly Irian Jaya, the western part of New Guinea). Indonesia’s total land area measures 1.9 million square kilometers (750,000 square miles). This is three times the area of Texas, almost eight times the area of the United Kingdom, and roughly fifty times the area of the Netherlands. Indonesia has a tropical climate, but since there are large stretches of lowland and numerous mountainous areas, the climate varies from hot and humid to more moderate in the highlands. Apart from fertile land suitable for agriculture, Indonesia is rich in a range of natural resources, from petroleum, natural gas, and coal to metals such as tin, bauxite, nickel, copper, gold, and silver. Indonesia’s population is about 230 million (2002), of which the largest share (roughly 60 percent) lives on Java.

Table 1

Indonesia’s Gross Domestic Product per Capita

Compared with Several Other Asian Countries (in 1990 dollars)

Year     Indonesia   Philippines   Thailand   Japan
1900     745         1,033         812        1,180
1913     904         1,066         835        1,385
1950     840         1,070         817        1,926
1973     1,504       1,959         1,874      11,439
1990     2,516       2,199         4,645      18,789
2000     3,041       2,385         6,335      20,084

Source: Angus Maddison, The World Economy: A Millennial Perspective, Paris: OECD Development Centre Studies 2001, 206, 214-215. For year 2000: University of Groningen and the Conference Board, GGDC Total Economy Database, 2003, http://www.eco.rug.nl/ggdc.

Important Aspects of Indonesian Economic History

“Missed Opportunities”

Anne Booth has characterized the economic history of Indonesia with the somewhat melancholy phrase “a history of missed opportunities” (Booth 1998). One may compare this with J. Pluvier’s history of Southeast Asia in the twentieth century, which is entitled A Century of Unfulfilled Expectations (Breda 1999). The missed opportunities refer to the fact that, despite its rich natural resources and great variety of cultural traditions, the Indonesian economy has underperformed during long periods of its history. A more cyclical view would lead one to speak of several ‘reversals of fortune.’ Several times the Indonesian economy seemed to promise a continuation of favorable economic development and ongoing modernization (for example, Java in the late nineteenth century, Indonesia in the late 1930s or in the early 1990s). But for various reasons Indonesia time and again suffered severe setbacks that prevented further expansion. These setbacks often originated in the internal institutional or political spheres (either after independence or in colonial times), although external shocks such as the 1930s Depression also took their toll on the vulnerable export economy.

“Unity in Diversity”

In addition, one often reads about “unity in diversity.” This is not only a political slogan repeated at various times by the Indonesian government itself; it also captures the heterogeneity of this very large and diverse country. Logically, the political problems that arise from such a heterogeneous nation state have had their (negative) effects on the development of the national economy. The most striking contrast is between densely populated Java, which has a long tradition of dominating the sparsely populated Outer Islands politically and economically, and those Outer Islands themselves. But within Java and within the various Outer Islands one also encounters a rich cultural diversity. Economic differences between the islands persist. Nevertheless, for centuries, flourishing and enterprising interregional trade has fostered regional integration within the archipelago.

Economic Development and State Formation

State formation can be viewed as a condition for an emerging national economy. This process essentially started in Indonesia in the nineteenth century, when the Dutch colonized an area largely similar to present-day Indonesia. Colonial Indonesia was called ‘the Netherlands Indies.’ The term ‘(Dutch) East Indies’ was mainly used in the seventeenth and eighteenth centuries and included trading posts outside the Indonesian archipelago.

Although Indonesian national historiography sometimes refers to a presumed 350 years of colonial domination, it is an exaggeration to interpret the arrival of the Dutch in Bantam in 1596 as the starting point of Dutch colonization. It is more reasonable to say that colonization started in 1830, when the Java War (1825-1830) ended and the Dutch began to impose a bureaucratic, centralizing polity on Java without further restraint. From the mid-nineteenth century onward, Dutch colonization did shape the borders of the Indonesian nation state, even though it also built weaknesses into that state: ethnic segmentation of economic roles, unequal spatial distribution of power, and a political system that was largely based on oppression and violence. This, among other things, repeatedly led to political trouble, before and after independence. Indonesia ceased being a colony on 17 August 1945, when Sukarno and Hatta proclaimed independence, although full independence was acknowledged by the Netherlands only after four years of violent conflict, on 27 December 1949.

The Evolution of Methodological Approaches to Indonesian Economic History

The economic history of Indonesia covers a range of topics, from the dynamic exports of raw materials and the dualist economy, in which both Western and Indonesian entrepreneurs participated, to the strong degree of regional variation in the economy. While Dutch historians traditionally focused on the colonial era (inspired by the rich colonial archives), from the 1960s and 1970s onward an increasing number of scholars (including many Indonesians, as well as Australian and American scholars) began to study post-war Indonesian developments in connection with the colonial past. In the course of the 1990s attention gradually shifted from the identification and exploration of new research themes towards synthesis and attempts to link economic development with broader historical issues. In 1998 the excellent first book-length survey of Indonesia’s modern economic history was published (Booth 1998). The stress on synthesis and lessons is also present in a new textbook on the modern economic history of Indonesia (Dick et al. 2002). This highly recommended textbook juxtaposes three themes: globalization, economic integration, and state formation. Globalization affected the Indonesian archipelago even before the arrival of the Dutch; the centralized, military-bureaucratic state of Soeharto’s New Order (1966-1998) coincided with only the most recent wave of globalization. A national economy emerged gradually from the 1930s as the Outer Islands (a collective name for all islands outside Java and Madura) reoriented towards industrializing Java.

Two research traditions have become especially important in the study of Indonesian economic history during the past decade. One is a highly quantitative approach, culminating in reconstructions of Indonesia’s national income and national accounts over a long period of time, from the late nineteenth century up to today (Van der Eng 1992, 2001). The other research tradition highlights the institutional framework of economic development in Indonesia, both as a colonial legacy and as it has evolved since independence. There is a growing appreciation among scholars that these two approaches complement each other.

A Chronological Survey of Indonesian Economic History

The precolonial economy

There were several influential kingdoms in the Indonesian archipelago during the pre-colonial era (e.g. Srivijaya, Mataram, Majapahit) (see further Reid 1988, 1993; Ricklefs 1993). Much debate centers on whether this heyday of indigenous Asian trade was effectively disrupted by the arrival of western traders in the early sixteenth century.

Sixteenth and seventeenth centuries

Current research on pre-colonial economic history focuses on the dynamics of early-modern trade and pays specific attention to the role of different ethnic groups, such as the Arabs, the Chinese, and the various indigenous groups of traders and entrepreneurs. From the sixteenth to the nineteenth century the western colonizers had only a weak grip on a limited number of spots in the Indonesian archipelago. As a consequence, much of the economic history of these islands escapes the attention of the economic historian. Most data on economic matters were handed down by western observers with a limited view. A large part of the area remained engaged in its own economic activities, including subsistence agriculture (the yields of which were not necessarily meager) and local and regional trade.

An older research literature has extensively covered the role of the Dutch in the Indonesian archipelago, which began in 1596 when the first expedition of Dutch sailing ships arrived in Bantam. In the seventeenth and eighteenth centuries the Dutch overseas trade in the Far East, which focused on high-value goods, was in the hands of the powerful Dutch East India Company (in full: the United East Indies Trading Company, or Vereenigde Oost-Indische Compagnie [VOC], 1602-1795). However, the region was still fragmented and the Dutch presence was concentrated in only a limited number of trading posts.

During the eighteenth century, coffee and sugar became the most important products and Java became the most important area. The VOC gradually took over power from the Javanese rulers and held a firm grip on the productive parts of Java. The VOC was also actively engaged in the intra-Asian trade. For example, cotton from Bengal was sold in the pepper growing areas. The VOC was a successful enterprise and made large dividend payments to its shareholders. Corruption, lack of investment capital, and increasing competition from England led to its demise and in 1799 the VOC came to an end (Gaastra 2002, Jacobs 2000).

The nineteenth century

In the nineteenth century a process of more intensive colonization started, predominantly in Java, where the Cultivation System (1830-1870) was based (Elson 1994; Fasseur 1975).

During the Napoleonic era the VOC trading posts in the archipelago had been under British rule, but in 1814 they came under Dutch authority again. During the Java War (1825-1830), Dutch rule on Java was challenged by an uprising led by the Javanese prince Diponegoro. Repressing this revolt and establishing firm rule in Java raised colonial expenses, which in turn led to a stronger emphasis on economic exploitation of the colony. The Cultivation System, initiated by Johannes van den Bosch, was a state-governed system for the production of agricultural products such as sugar and coffee. In return for a fixed compensation (the planting wage), the Javanese were forced to cultivate export crops. Supervisors, such as civil servants and Javanese district heads, were paid generous ‘cultivation percentages’ in order to stimulate production. The export crops were consigned to a Dutch state-owned trading firm (the Nederlandsche Handel-Maatschappij, NHM, established in 1824) and sold profitably abroad.

Although the profits (‘batig slot’) accruing to the Dutch state over the period 1830-1870 were considerable, several reasons can be given for the shift to a more liberal system: (a) the emergence of a new liberal political ideology; (b) the gradual demise of the Cultivation System during the 1840s and 1850s, as internal reforms became necessary; and (c) the growth of private (European) entrepreneurship with the know-how and interest to exploit natural resources, which removed the need for government management (Van Zanden and Van Riel 2000: 226).

Table 2

Financial Results of Government Cultivation, 1840-1849 (‘Cultivation System’) (in thousands of guilders in current values)

Product            1840-1844   1845-1849
Coffee             40,278      24,549
Sugar              8,218       4,136
Indigo             7,836       7,726
Pepper, Tea        647         1,725
Total net profits  39,341      35,057

Source: Fasseur 1975: 20.

Table 3

Estimates of Total Profits (‘batig slot’) during the Cultivation System,

1831/40 – 1861/70 (in millions of guilders)

                                              1831/40   1841/50   1851/60   1861/70
Gross revenues of sale of colonial products   227.0     473.9     652.7     641.8
Costs of transport etc. (NHM)                 88.0      165.4     138.7     114.7
Sum of expenses                               59.2      175.1     275.3     276.6
Total net profits*                            150.6     215.6     289.4     276.7

Source: Van Zanden and Van Riel 2000: 223.

* Recalculated by Van Zanden and Van Riel to include subsidies for the NHM and other costs that in fact benefited the Dutch economy.

The heyday of the colonial export economy (1900-1942)

After 1870, private enterprise was promoted, but the export of raw materials gained decisive momentum after 1900. Sugar, coffee, pepper and tobacco, the old export products, were increasingly supplemented by highly profitable exports of petroleum, rubber, copra, palm oil and fibers. The Outer Islands supplied an increasing share of these foreign exports, which were accompanied by intensifying internal trade within the archipelago and generated an increasing flow of foreign imports. Agricultural exports were cultivated both on large-scale European agricultural plantations (usually called agricultural estates) and by indigenous smallholders. When the exploitation of oil became profitable in the late nineteenth century, petroleum earned a respectable position in the total export package. In the early twentieth century, the production of oil was increasingly concentrated in the hands of the Koninklijke/Shell Group.


Figure 1

Foreign Exports from the Netherlands-Indies, 1870-1940

(in millions of guilders, current values)

Source: Trade statistics

The momentum of profitable exports led to a broad expansion of economic activity in the Indonesian archipelago. Integration with the world market also furthered internal economic integration as the road system, the railroad system (in Java and Sumatra) and the port system were improved. In shipping, an important contribution was made by the KPM (Koninklijke Paketvaart-Maatschappij, Royal Packet Boat Company), which served economic integration as well as imperialist expansion. Subsidized shipping lines into remote corners of the vast archipelago carried off export goods (forest products), supplied import goods, and transported civil servants and military personnel.

The Depression of the 1930s hit the export economy severely. The sugar industry in Java collapsed and never really recovered from the crisis. For some products, such as rubber and copra, production was stepped up to compensate for lower prices; for this reason indigenous rubber producers evaded the international restriction agreements. The Depression precipitated the introduction of protectionist measures, which ended the liberal period that had started in 1870. Various import restrictions were introduced, making the economy more self-sufficient (for example in the production of rice) and stimulating domestic integration. Because of the strong Dutch guilder (the Netherlands adhered to the gold standard until 1936), economic recovery took relatively long. The outbreak of World War II disrupted international trade, and the Japanese occupation (1942-1945) seriously disturbed and dislocated the economic order.

Table 4

Annual Average Growth in Key Economic Aggregates, 1830-1990 (percent per year)

Period                           GDP per capita   Export volume   Export prices   Government expenditure
Cultivation System 1830-1840     n.a.             13.5            5.0             8.5
Cultivation System 1840-1848     n.a.             1.5             -4.5            [very low]
Cultivation System 1849-1873     n.a.             1.5             1.5             2.6
Liberal Period 1874-1900         [very low]       3.1             -1.9            2.3
Ethical Period 1901-1928         1.7              5.8             17.4            4.1
Great Depression 1929-1934       -3.4             -3.9            -19.7           0.4
Prewar Recovery 1934-1940        2.5              2.2             7.8             3.4
Old Order 1950-1965              1.0              0.8             -2.1            1.8
New Order 1966-1990              4.4              5.4             11.6            10.6

Source: Booth 1998: 18.

Note: These average annual growth percentages were calculated by Booth by fitting an exponential curve to the data for the years indicated. Up to 1873 data refer only to Java.
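
To make this note concrete: fitting an exponential curve amounts to a log-linear least-squares regression, in which the estimated slope is the average annual growth rate. A minimal sketch, with notation introduced here purely for illustration (y_t stands for the series in question, such as export volume in year t):

\ln y_t = a + g\,t + \varepsilon_t, \qquad t = 0, 1, \ldots, T,

where a and g are estimated by ordinary least squares. The reported average annual growth is then approximately 100 \times \hat{g} percent (more precisely, 100 \times (e^{\hat{g}} - 1) percent).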

The post-1945 period

After independence, the Indonesian economy had to recover from the hardships of the Japanese occupation and the war for independence (1945-1949), on top of the slow recovery from the 1930s Depression. During the period 1949-1965 there was little economic growth, and most of what growth there was occurred in the years 1950 to 1957; in 1958-1965 growth rates dwindled, largely due to political instability and inappropriate economic policy measures. The hesitant start of democracy was characterized by a power struggle between the president, the army, the communist party and other political groups. Exchange rate problems and an absence of foreign capital were detrimental to economic development after the government had eliminated all foreign economic control in the private sector in 1957/58. Sukarno aimed at self-sufficiency and import substitution and alienated the suppliers of western capital even further as he developed communist sympathies.

After 1966, the second president, General Soeharto, restored the inflow of western capital, brought back political stability with a strong role for the army, and led Indonesia into a period of economic expansion, lasting until 1997, under his authoritarian ‘New Order’ (Orde Baru) regime (see below for the three phases of the New Order). In this period industrial output increased rapidly, including steel, aluminum, and cement, but also products such as food, textiles and cigarettes. From the 1970s onward the increased oil price on the world market provided Indonesia with a massive income from oil and gas exports. Wood exports shifted from logs to plywood, pulp, and paper, at the cost of large stretches of environmentally valuable rainforest.

Soeharto managed to apply part of these revenues to the development of a technologically advanced manufacturing industry. Referring to this period of stable economic growth, the World Bank report of 1993 speaks of an ‘East Asian Miracle,’ emphasizing macroeconomic stability and investments in human capital (World Bank 1993: vi).

The financial crisis of 1997 revealed a number of hidden weaknesses in the economy, such as a feeble financial system (with a lack of transparency), unprofitable investments in real estate, and shortcomings in the legal system. The burgeoning corruption at all levels of the government bureaucracy became widely known as KKN (korupsi, kolusi, nepotisme). These practices had become entrenched during the 32 years of the strongly centralized, autocratic Soeharto regime.

From 1998 until the present

Today, the Indonesian economy still suffers from severe economic development problems following the financial crisis of 1997 and the subsequent political reforms after Soeharto stepped down in 1998. Secessionist movements and the low level of security in the provincial regions, as well as relatively unstable political policies, form some of its present-day problems. Additional problems include the lack of reliable legal recourse in contract disputes, corruption, weaknesses in the banking system, and strained relations with the International Monetary Fund. The confidence of investors remains low, and in order to achieve future growth, internal reform will be essential to build up confidence of international donors and investors.

An important issue on the reform agenda is regional autonomy, bringing a larger share of export profits to the areas of production instead of to metropolitan Java. However, decentralization policies do not necessarily improve national coherence or increase efficiency in governance.

A strong comeback in the global economy may be at hand, but it had not yet fully taken place by the summer of 2003, when this was written.

Additional Themes in the Indonesian Historiography

Indonesia is such a large and multi-faceted country that many different aspects have been the focus of research (for example, ethnic groups, trade networks, shipping, colonialism and imperialism). One can focus on smaller regions (provinces, islands) as well as on larger ones (the western archipelago, the eastern archipelago, the Outer Islands as a whole, or Indonesia within Southeast Asia). Without trying to be exhaustive, eleven themes that have been the subject of debate in Indonesian economic history are examined here (on other debates see also Houben 2002: 53-55; Lindblad 2002b: 145-152; Dick 2002: 191-193; Thee 2002: 242-243).

The indigenous economy and the dualist economy

Although western entrepreneurs had an advantage in technological know-how and access to investment capital during the late-colonial period, many regions of Indonesia had a traditionally strong and dynamic class of indigenous entrepreneurs (traders and peasants). Resilient in times of economic malaise and adept at symbiosis with traders of other Asian nationalities (particularly the Chinese), the Indonesian entrepreneur has been rehabilitated in the literature after the relatively disparaging manner in which he was often portrayed before 1945. One of these early writers, J.H. Boeke, initiated a school of thought centering on the idea of ‘economic dualism’ (referring to a modern western sector and a stagnant eastern sector). As a consequence, the term ‘dualism’ was often used to imply western superiority. From the 1960s onward such ideas have been replaced by a more objective analysis of the dualist economy, one less judgmental about the characteristics of economic development in the Asian sector. Some scholars focused on technological dualism (such as B. Higgins), others on ethnic specialization in different branches of production (see also Lindblad 2002b: 148; Touwen 2001: 316-317).

The characteristics of Dutch imperialism

Another vigorous debate concerns the character of, and the motives for, Dutch colonial expansion. Dutch imperialism can be viewed as a rather complex mix of political, economic and military motives, which influenced decisions about colonial borders, the establishment of political control in order to exploit oil and other natural resources, and the prevention of local uprisings. Three imperialist phases can be distinguished (Lindblad 2002a: 95-99). The first phase of imperialist expansion lasted from 1825 to 1870; during this phase interference with economic matters outside Java increased slowly, but military intervention was only occasional. The second phase started with the outbreak of the Aceh War in 1873 and lasted until 1896; during this phase initiatives in trade and foreign investment taken by the colonial government and by private businessmen were accompanied by the extension of colonial (military) control in the regions concerned. The third and final phase was characterized by full-scale aggressive imperialism (often known as ‘pacification’) and lasted from 1896 until 1907.

The impact of the cultivation system on the indigenous economy

The thesis of ‘agricultural involution’ was advocated by Clifford Geertz (1963) and states that a process of stagnation characterized the rural economy of Java in the nineteenth century. After extensive research, this view has generally been discarded. Colonial economic growth was stimulated first by the Cultivation System, later by the promotion of private enterprise. Non-farm employment and purchasing power increased in the indigenous economy, although there was much regional inequality (Lindblad 2002a: 80; 2002b:149-150).

Regional diversity in export-led economic expansion

The contrast between densely populated Java, long dominant in both economic and political regard, and the Outer Islands, a large and sparsely populated area, is obvious. Among the Outer Islands we can distinguish between areas that were propelled forward by export trade, of either Indonesian or European origin (examples are Palembang, East Sumatra, and Southeast Kalimantan), and areas that lagged behind and only slowly reaped the fruits of the modernization taking place elsewhere (for example Benkulu, Timor, and Maluku) (Touwen 2001).

The development of the colonial state and the role of Ethical Policy

Well into the second half of the nineteenth century, official Dutch policy was to abstain from interference in local affairs, and the scarce resources of the Dutch colonial administration were to be reserved for Java. When the Aceh War initiated a period of imperialist expansion and consolidation of colonial power, a call for more concern with indigenous affairs was heard in Dutch politics. This resulted in the Ethical Policy, officially launched in 1901, which had the threefold aim of improving indigenous welfare, expanding the educational system, and allowing some indigenous participation in government (resulting in the People’s Council, or Volksraad, installed in 1918 but with only an advisory role). The results of the Ethical Policy, as measured for example in improvements in agricultural technology, education, or welfare services, are still subject to debate (Lindblad 2002b: 149).

Living conditions of coolies at the agricultural estates

The plantation economy, which developed in the sparsely populated Outer Islands (predominantly in Sumatra) between 1870 and 1942, was in dire need of labor. The labor shortage was met by recruiting contract laborers (coolies) in China, and later in Java. The Coolie Ordinance was a government regulation that included the penal clause, which allowed plantation owners to punish their laborers. In response to reported abuse, the colonial government established the Labor Inspectorate (1908), which aimed at preventing the abuse of coolies on the estates. The living conditions and treatment of the coolies have been the subject of debate, particularly regarding the question of whether the government put enough effort into protecting the interests of the workers or allowed abuse to persist (Lindblad 2002b: 150).

Colonial drain

How large a proportion of economic profits was drained away from the colony to the mother country? The detrimental effects of this drain of capital, in return for which European entrepreneurial initiatives were received, have been debated, as have the exact methods of measuring it. There was also a second drain to the home countries of other immigrant ethnic groups, mainly to China (Van der Eng 1998; Lindblad 2002b: 151).

The position of the Chinese in the Indonesian economy

In the colonial economy, the Chinese intermediary trader or middleman played a vital role in supplying credit and stimulating the cultivation of export crops such as rattan, rubber and copra. The colonial legal system made an explicit distinction between Europeans, Chinese and Indonesians. This formed the roots of later ethnic problems, since the Chinese minority in Indonesia gained an important (and sometimes envied) position as capital owners and entrepreneurs. When threatened by political and social turmoil, Chinese business networks may sometimes have channeled capital funds to overseas deposits.

Economic chaos during the ‘Old Order’

The ‘Old Order’ period, 1945-1965, was characterized by economic (and political) chaos, although some economic growth undeniably did take place during these years. However, macroeconomic instability, a lack of foreign investment, and structural rigidity were economic problems closely connected with the political power struggle. Sukarno, the first president of the Indonesian republic, had an outspoken dislike of colonialism, and his efforts to eliminate foreign economic control were not always supportive of the struggling economy of the new sovereign state. The ‘Old Order’ has long been a ‘lost area’ in Indonesian economic history, but the establishment of the unitary state and the settlement of major political issues, including some degree of territorial consolidation (as well as the consolidation of the role of the army), were essential for the development of a national economy (Dick 2002: 190; Mackie 1967).

Development policy and economic planning during the ‘New Order’ period

The ‘New Order’ (Orde Baru) of Soeharto rejected political mobilization and socialist ideology, and established a tightly controlled regime that discouraged intellectual enquiry but did put Indonesia’s economy back on the rails. New flows of foreign investment and foreign aid were attracted, unbridled population growth was reduced by family planning programs, and a transformation took place from a predominantly agricultural economy to an industrializing one. Thee Kian Wie distinguishes three phases within this period, each of which deserves further study:

(a) 1966-1973: stabilization, rehabilitation, partial liberalization and economic recovery;

(b) 1974-1982: oil booms, rapid economic growth, and increasing government intervention;

(c) 1983-1996: post-oil boom, deregulation, renewed liberalization (in reaction to falling oil prices), and rapid export-led growth. During this last phase, commentators (including academic economists) became increasingly concerned about the rampant corruption at all levels of the government bureaucracy: the KKN (korupsi, kolusi, nepotisme) practices, as they later became known (Thee 2002: 203-215).

Financial, economic and political crisis: KRISMON, KRISTAL

The financial crisis of 1997 started with a crisis of confidence following the depreciation of the Thai baht in July 1997. Core factors causing the ensuing economic crisis in Indonesia were the quasi-fixed exchange rate of the rupiah, rapidly rising short-term foreign debt, and the weak financial system. Its severity, however, must also be attributed to political factors: the monetary crisis (KRISMON) developed into a total crisis (KRISTAL) because of the failing policy response of the Soeharto regime. Soeharto had been in power for 32 years, and his government had become heavily centralized and corrupt and was not able to cope with the crisis in a credible manner. The origins, economic consequences, and socio-economic impact of the crisis are still under discussion (Thee 2002: 231-237; Arndt and Hill 1999).

(Note: I want to thank Dr. F. Colombijn and Dr. J.Th. Lindblad at Leiden University for their useful comments on the draft version of this article.)

Selected Bibliography

In addition to the works cited in the text above, a small selection of recent books is mentioned here, which will allow the reader to quickly grasp the most recent insights and find useful further references.

General textbooks or periodicals on Indonesia’s (economic) history:

Booth, Anne. The Indonesian Economy in the Nineteenth and Twentieth Centuries: A History of Missed Opportunities. London: Macmillan, 1998.

Bulletin of Indonesian Economic Studies.

Dick, H.W., V.J.H. Houben, J.Th. Lindblad and Thee Kian Wie. The Emergence of a National Economy in Indonesia, 1800-2000. Sydney: Allen & Unwin, 2002.

Itinerario “Economic Growth and Institutional Change in Indonesia in the 19th and 20th centuries” [special issue] 26 no. 3-4 (2002).

Reid, Anthony. Southeast Asia in the Age of Commerce, 1450-1680, Vol. I: The Lands below the Winds. New Haven: Yale University Press, 1988.

Reid, Anthony. Southeast Asia in the Age of Commerce, 1450-1680, Vol. II: Expansion and Crisis. New Haven: Yale University Press, 1993.

Ricklefs, M.C. A History of Modern Indonesia since ca. 1300. Basingstoke/London: Macmillan, 1993.

On the VOC:

Gaastra, F.S. De Geschiedenis van de VOC. Zutphen: Walburg Pers, 1991 (1st edition), 2002 (4th edition).

Jacobs, Els M. Koopman in Azië: de Handel van de Verenigde Oost-Indische Compagnie tijdens de 18de Eeuw. Zutphen: Walburg Pers, 2000.

Nagtegaal, Lucas. Riding the Dutch Tiger: The Dutch East Indies Company and the Northeast Coast of Java 1680-1743. Leiden: KITLV Press, 1996.

On the Cultivation System:

Elson, R.E. Village Java under the Cultivation System, 1830-1870. Sydney: Allen and Unwin, 1994.

Fasseur, C. Kultuurstelsel en Koloniale Baten. De Nederlandse Exploitatie van Java, 1840-1860. Leiden: Universitaire Pers, 1975. (Translated as The Politics of Colonial Exploitation: Java, the Dutch and the Cultivation System. Ithaca, NY: Southeast Asia Program, Cornell University Press, 1992.)

Geertz, Clifford. Agricultural Involution: The Processes of Ecological Change in Indonesia. Berkeley: University of California Press, 1963.

Houben, V.J.H. “Java in the Nineteenth Century: Consolidation of a Territorial State.” In The Emergence of a National Economy in Indonesia, 1800-2000, edited by H.W. Dick, V.J.H. Houben, J.Th. Lindblad and Thee Kian Wie, 56-81. Sydney: Allen & Unwin, 2002.

On the Late-Colonial Period:

Dick, H.W. “Formation of the Nation-state, 1930s-1966.” In The Emergence of a National Economy in Indonesia, 1800-2000, edited by H.W. Dick, V.J.H. Houben, J.Th. Lindblad and Thee Kian Wie, 153-193. Sydney: Allen & Unwin, 2002.

Lembaran Sejarah, “Crisis and Continuity: Indonesian Economy in the Twentieth Century” [special issue] 3 no. 1 (2000).

Lindblad, J.Th., editor. New Challenges in the Modern Economic History of Indonesia. Leiden: PRIS, 1993. Translated as: Sejarah Ekonomi Modern Indonesia. Berbagai Tantangan Baru. Jakarta: LP3ES, 2002.

Lindblad, J.Th., editor. The Historical Foundations of a National Economy in Indonesia, 1890s-1990s. Amsterdam: North-Holland, 1996.

Lindblad, J.Th. “The Outer Islands in the Nineteenth Century: Contest for the Periphery.” In The Emergence of a National Economy in Indonesia, 1800-2000, edited by H.W. Dick, V.J.H. Houben, J.Th. Lindblad and Thee Kian Wie, 82-110. Sydney: Allen & Unwin, 2002a.

Lindblad, J.Th. “The Late Colonial State and Economic Expansion, 1900-1930s.” In The Emergence of a National Economy in Indonesia, 1800-2000, edited by H.W. Dick, V.J.H. Houben, J.Th. Lindblad and Thee Kian Wie, 111-152. Sydney: Allen & Unwin, 2002b.

Touwen, L.J. Extremes in the Archipelago: Trade and Economic Development in the Outer Islands of Indonesia, 1900-1942. Leiden: KITLV Press, 2001.

Van der Eng, Pierre. “Exploring Exploitation: The Netherlands and Colonial Indonesia, 1870-1940.” Revista de Historia Económica 16 (1998): 291-321.

Zanden, J.L. van, and A. van Riel. Nederland, 1780-1914: Staat, instituties en economische ontwikkeling. Amsterdam: Balans, 2000. (On the Netherlands in the nineteenth century.)

Independent Indonesia:

Arndt, H.W. and Hal Hill, editors. Southeast Asia’s Economic Crisis: Origins, Lessons and the Way forward. Singapore: Institute of Southeast Asian Studies, 1999.

Cribb, R. and C. Brown. Modern Indonesia: A History since 1945. London/New York: Longman, 1995.

Feith, H. The Decline of Constitutional Democracy in Indonesia. Ithaca, New York: Cornell University Press, 1962.

Hill, Hal. The Indonesian Economy. Cambridge: Cambridge University Press, 2000. (This is the extended second edition of Hill, H., The Indonesian Economy since 1966. Southeast Asia’s Emerging Giant. Cambridge: Cambridge University Press, 1996.)

Hill, Hal, editor. Unity and Diversity: Regional Economic Development in Indonesia since 1970. Singapore: Oxford University Press, 1989.

Mackie, J.A.C. “The Indonesian Economy, 1950-1960.” In The Economy of Indonesia: Selected Readings, edited by B. Glassburner, 16-69. Ithaca NY: Cornell University Press 1967.

Robison, Richard. Indonesia: The Rise of Capital. Sydney: Allen and Unwin, 1986.

Thee Kian Wie. “The Soeharto Era and After: Stability, Development and Crisis, 1966-2000.” In The Emergence of a National Economy in Indonesia, 1800-2000, edited by H.W. Dick, V.J.H. Houben, J.Th. Lindblad and Thee Kian Wie, 194-243. Sydney: Allen & Unwin, 2002.

World Bank. The East Asian Miracle: Economic Growth and Public Policy. Oxford: World Bank /Oxford University Press, 1993.

On economic growth:

Booth, Anne. The Indonesian Economy in the Nineteenth and Twentieth Centuries. A History of Missed Opportunities. London: Macmillan, 1998.

Van der Eng, Pierre. “The Real Domestic Product of Indonesia, 1880-1989.” Explorations in Economic History 39 (1992): 343-373.

Van der Eng, Pierre. “Indonesia’s Growth Performance in the Twentieth Century.” In The Asian Economies in the Twentieth Century, edited by Angus Maddison, D.S. Prasada Rao and W. Shepherd, 143-179. Cheltenham: Edward Elgar, 2002.

Van der Eng, Pierre. “Indonesia’s Economy and Standard of Living in the Twentieth Century.” In Indonesia Today: Challenges of History, edited by G. Lloyd and S. Smith, 181-199. Singapore: Institute of Southeast Asian Studies, 2001.

Citation: Touwen, Jeroen. “The Economic History of Indonesia”. EH.Net Encyclopedia, edited by Robert Whaples. March 16, 2008. URL http://eh.net/encyclopedia/the-economic-history-of-indonesia/

Indentured Servitude in the Colonial U.S.

Joshua Rosenbloom, University of Kansas

During the seventeenth and eighteenth centuries a variety of labor market institutions developed to facilitate the movement of labor in response to the opportunities created by American factor proportions. While some immigrants migrated on their own account, the majority arrived as indentured servants or African slaves.

Because of the cost of passage—which exceeded half a year’s income for a typical British immigrant and a full year’s income for a typical German immigrant—only a small portion of European migrants could afford to pay for their passage to the Americas (Grubb 1985a). Most instead financed the voyage by signing contracts, or “indentures,” committing themselves to work for a fixed number of years in the future—their labor being their only viable asset—with British merchants, who then sold these contracts to colonists after their ship reached America. Indentured servitude was introduced by the Virginia Company in 1619 and appears to have arisen from a combination of the terms of two other types of labor contract widely used in England at the time: service in husbandry and apprenticeship (Galenson 1981). In other cases, migrants borrowed money for their passage and committed to repay merchants by pledging to sell themselves as servants in America, a practice known as “redemptioner servitude” (Grubb 1986). Redemptioners bore increased risk because they could not predict in advance what terms they might be able to negotiate for their labor, but presumably they accepted this risk because of other benefits, such as the opportunity to choose their own master and to select where they would be employed.

Although data on immigration for the colonial period are scattered and incomplete, a number of scholars have estimated that between half and three-quarters of European immigrants arriving in the colonies came as indentured or redemptioner servants. Using data from the end of the colonial period, Grubb (1985b) found that close to three-quarters of English immigrants to Pennsylvania and nearly 60 percent of German immigrants arrived as servants.

A number of scholars have examined the terms of indenture and redemptioner contracts in some detail (see, e.g., Galenson 1981; Grubb 1985a). They find that, consistent with the existence of a well-functioning market, the terms of service varied in response to differences in individual productivity, employment conditions, and the balance of supply and demand in different locations.

The other major source of labor for the colonies was the forced migration of African slaves. Slavery had been introduced in the West Indies at an early date, but it was not until the late seventeenth century that significant numbers of slaves began to be imported into the mainland colonies. From 1700 to 1780 the proportion of blacks in the Chesapeake region grew from 13 percent to around 40 percent. In South Carolina and Georgia, the black share of the population climbed from 18 percent to 41 percent in the same period (McCusker and Menard, 1985, p. 222). Galenson (1984) explains the transition from indentured European to enslaved African labor as the result of shifts in supply and demand conditions in England and the trans-Atlantic slave market. Conditions in Europe improved after 1650, reducing the supply of indentured servants, while at the same time increased competition in the slave trade was lowering the price of slaves (Dunn 1984). In some sense the colonies’ early experience with indentured servants paved the way for the transition to slavery. Like slaves, indentured servants were unfree, and ownership of their labor could be freely transferred from one owner to another. Unlike slaves, however, they could look forward to eventually becoming free (Morgan 1971).

Over time a marked regional division in labor market institutions emerged in colonial America. The use of slaves was concentrated in the Chesapeake and Lower South, where the presence of staple export crops (rice, indigo and tobacco) provided economic rewards for expanding the scale of cultivation beyond the size achievable with family labor. European immigrants (primarily indentured servants) tended to concentrate in the Chesapeake and Middle Colonies, where servants could expect to find the greatest opportunities to enter agriculture once they had completed their term of service. While New England was able to support self-sufficient farmers, its climate and soil were not conducive to the expansion of commercial agriculture, with the result that it attracted relatively few slaves, indentured servants, or free immigrants. These patterns are illustrated in Table 1, which summarizes the composition and destinations of English emigrants in the years 1773 to 1776.

Table 1

English Emigration to the American Colonies, by Destination and Type, 1773-76

Destination        Number   Percentage of total   Percent listed as servants
New England        54       1.20                  1.85
Middle Colonies    1,162    25.78                 61.27
  New York         303      6.72                  11.55
  Pennsylvania     859      19.06                 78.81
Chesapeake         2,984    66.21                 96.28
  Maryland         2,217    49.19                 98.33
  Virginia         767      17.02                 90.35
Lower South        307      6.81                  19.54
  Carolinas        106      2.35                  23.58
  Georgia          196      4.35                  17.86
  Florida          5        0.11                  0.00
Total              4,507    100.00                80.90

Source: Grubb (1985b, p. 334).

References

Dunn, Richard S. “Servants and Slaves: The Recruitment and Employment of Labor.” In Colonial British America: Essays in the New History of the Early Modern Era, edited by Jack P. Greene and J.R. Pole. Baltimore: Johns Hopkins University Press, 1984.

Galenson, David W. White Servitude in Colonial America. New York: Cambridge University Press, 1981.

Galenson, David W. “The Rise and Fall of Indentured Servitude in the Americas: An Economic Analysis.” Journal of Economic History 44, no. 1 (1984): 1-26.

Grubb, Farley. “The Market for Indentured Immigrants: Evidence on the Efficiency of Forward Labor Contracting in Philadelphia, 1745-1773.” Journal of Economic History 45, no. 4 (1985a): 855-68.

Grubb, Farley. “The Incidence of Servitude in Trans-Atlantic Migration, 1771-1804.” Explorations in Economic History 22 (1985b): 316-39.

Grubb, Farley. “Redemptioner Immigration to Pennsylvania: Evidence on Contract Choice and Profitability.” Journal of Economic History 46, no. 2 (1986): 407-18.

McCusker, John J. and Russell R. Menard. The Economy of British America: 1607-1789. Chapel Hill: University of North Carolina Press, 1985.

Morgan, Edmund S. “The Labor Problem at Jamestown, 1607-18.” American Historical Review 76 (1971): 595-611.

Citation: Rosenbloom, Joshua. “Indentured Servitude in the Colonial U.S.”. EH.Net Encyclopedia, edited by Robert Whaples. March 16, 2008. URL http://eh.net/encyclopedia/indentured-servitude-in-the-colonial-u-s/

Immigration to the United States

Raymond L. Cohn, Illinois State University (Emeritus)

For good reason, it is often said the United States is a nation of immigrants. Almost every person in the United States is descended from someone who arrived from another country. This article discusses immigration to the United States from colonial times to the present. The focus is on individuals who paid their own way, rather than slaves and indentured servants. Various issues concerning immigration are discussed: (1) the basic data sources available, (2) the variation in the volume over time, (3) the reasons immigration occurred, (4) nativism and U.S. immigration policy, (5) the characteristics of the immigrant stream, (6) the effects on the United States economy, and (7) the experience of immigrants in the U.S. labor market.

For readers who wish to further investigate immigration, the following works listed in the Reference section of this entry are recommended as general histories of immigration to the United States: Hansen (1940); Jones (1960); Walker (1964); Taylor (1971); Miller (1985); Nugent (1992); Erickson (1994); Hatton and Williamson (1998); and Cohn (2009).

The Available Data Sources

The primary source of data on immigration to the United States is the Passenger Lists, though U.S. and state census materials, Congressional reports, and company records also contain material on immigrants. In addition, the Integrated Public Use Microdata Series (IPUMS) web site at the University of Minnesota (http://www.ipums.umn.edu/usa/) contains data samples drawn from a number of federal censuses. Since the samples are of individuals and families, the site is useful in immigration research. A number of the countries from which the immigrants left also kept records about the individuals. Many of these records were originally summarized in Ferenczi (1970). Although records from other countries are useful for some purposes, the U.S. records are generally viewed as more complete, especially for the period before 1870. It is worthy of note that comparisons of the lists between countries often lead to somewhat different results. It is also probable that, during the early years, a few of the U.S. lists were lost or never collected.

Passenger Lists

The U.S. Passenger Lists resulted from an 1819 law requiring every ship carrying passengers that arrived in the United States from a foreign port to file with the port authorities a list of all passengers on the ship. These records are the basis for the vast majority of the historical data on immigration; for example, virtually all of the tables in the chapter on immigration in Carter et al. (2006) are based on them. The Passenger Lists recorded a great deal of information. Each list indicates the name of the ship, the name of the captain, the port(s) of embarkation, the port of arrival, and the date of arrival. Following this information is a list of the passengers. Each person’s name is listed, along with age, gender, occupation, country of origin, country of destination, and whether or not the person died on the voyage. It is often possible to distinguish family groups, since family members were usually grouped together and, to save time, the compilers frequently used ditto marks to indicate the same last name. Various data based on the lists were published in Senate or Congressional reports at the time. Because of their usefulness in genealogical research, the lists are now widely available on microfilm and increasingly on CD-ROM. Even a few public libraries in major cities have full or partial collections of these records, and most of the ship lists are also available online at various web sites.

The Volume of Immigration

Both the total volume of immigration to the United States and the immigrants’ countries of origin varied substantially over time. Table 1 provides the basic data on total immigrant volume by time period, broken down by country or area of origin. The column “Average Yearly Total – All Countries” presents the average yearly total immigration to the United States in the period given. Immigration rates – the average number of immigrants entering per thousand individuals in the U.S. population – are shown in the next column. The columns headed by country or area names show the percentage of immigrants coming from that place. The time periods in Table 1 have been chosen for illustrative purposes. A few things should be noted concerning the figures in Table 1. First, the estimates for much of the period since 1820 are based on the original Passenger Lists and are subject to the caveats discussed above; the estimates for the period before 1820 are the best currently available but are less precise than those after 1820. Second, though it was legal to import slaves into the United States (or the American colonies) before 1808, the estimates presented exclude slaves. Third, though illegal immigration into the United States has occurred, the figures in Table 1 include only legal immigrants. In 2015, the total number of illegal immigrants in the United States was estimated at around 11 million, mostly from Mexico, Central America, and Asia.
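
To illustrate how the rate column is constructed, divide the average yearly inflow by the U.S. population and multiply by 1,000. Using the 1900-1914 row and assuming, purely for this back-of-the-envelope check, a mid-period U.S. population of roughly 87 million (a figure introduced here for the arithmetic, not taken from the table):

\text{immigration rate} = \frac{\text{average yearly immigrants}}{\text{U.S. population}} \times 1000 \approx \frac{891{,}806}{87{,}000{,}000} \times 1000 \approx 10.3,

which is close to the 10.2 per thousand shown in Table 1; the small difference presumably reflects averaging the rate year by year rather than using a single mid-period population figure.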

Trends over Time

From the data presented in Table 1, it is apparent that the volume of immigration and its rate relative to the U.S. population varied over time. Immigration was relatively small until a noticeable increase occurred in the 1830s and a huge jump in the 1840s. The volume passed 200,000 for the first time in 1847, and the period between 1847 and 1854 saw the highest rate of immigration in U.S. history. From the level reached between 1847 and 1854, volume fell and rose repeatedly through 1930. For the period from 1847 through 1930, the average yearly volume was 434,000. During these years, immigrant volume peaked between 1900 and 1914, when an average of almost 900,000 immigrants arrived in the United States each year; this period also ranks second in terms of the rate of immigration relative to the U.S. population. The volume and rate fell to low levels between 1931 and 1946, though by the 1970s the volume had again reached that experienced between 1847 and 1930. The rise in volume continued through the 1980s and 1990s, though the rate per one thousand American residents has remained well below that experienced before 1915. It is notable that since about 1990, the average yearly volume of immigration has surpassed the previous peak experienced between 1900 and 1914. In 2015, reflecting the large volume of immigration, about 15 percent of the U.S. population was foreign-born.

Table 1
Immigration Volume and Rates

Each row lists: Years; Average Yearly Total – All Countries; Immigration Rate (per 1,000 U.S. population); and then the Percent of Average Yearly Total by origin, in the following order:
Great Britain, Ireland, Scandinavia and Other NW Europe, Germany, Central and Eastern Europe, Southern Europe, Asia, Africa, Australia and Pacific Islands, Mexico, Other America
1630‑1700 2,200 —- —- —- —- —- —- —- —- —- —- —- —-
1700-1780 4,325 —- —- —- —- —- —- —- —- —- —- —- —-
1780-1819 9,900 —- —- —- —- —- —- —- —- —- —- —- —-
1820-1831 14,538 1.3 22 45 12 8 0 2 0 0 —- 4 6
1832-1846 71,916 4.3 16 41 9 27 0 1 0 0 —- 1 5
1847-1854 334,506 14.0 13 45 6 32 0 0 1 0 —- 0 3
1855-1864 160,427 5.2 25 28 5 33 0 1 3 0 —- 0 4
1865-1873 327,464 8.4 24 16 10 34 1 1 3 0 0 0 10
1874-1880 260,754 5.6 18 15 14 24 5 3 5 0 0 0 15
1881-1893 525,102 8.9 14 12 16 26 16 8 1 0 0 0 6
1894-1899 276,547 3.9 7 12 12 11 32 22 3 0 0 0 2
1900-1914 891,806 10.2 6 4 7 4 45 26 3 0 0 1 5
1915-1919 234,536 2.3 5 2 8 1 7 21 6 0 1 8 40
1920-1930 412,474 3.6 8 5 8 9 14 16 3 0 0 11 26
1931-1946 50,507 0.4 10 2 9 15 8 12 3 1 1 6 33
1947-1960 252,210 1.5 7 2 6 8 4 10 8 1 1 15 38
1961-1970 332,168 1.7 6 1 4 6 4 13 13 1 1 14 38
1971-1980 449,331 2.1 3 0 1 2 4 8 35 2 1 14 30
1981-1990 733,806 3.1 2 0 1 1 3 2 37 2 1 23 27
1991-2000 909,264 3.4 2 1 1 1 11 2 38 5 1 30 9
2001-2008 1,040,951 4.4 2 0 1 1 9 1 35 7 1 17 27
2009-2015 1,046,459 4.8 1 0 1 1 5 1 40 10 1 14 27

Sources: Years before 1820: Grabbe (1989). 1820-1970: Historical Statistics (1976). Years since 1970: U.S. Immigration and Naturalization Service (various years); 2002-2015: Department of Homeland Security, Office of Immigration Statistics (various years). Note: Entries with a zero indicate less than one-half of one percent. Entries with dashes indicate no information or no immigrants.

Sources of Immigration

The sources of immigration have changed a number of times over the years. In general, four relatively distinct periods can be identified in Table 1. Before 1881, the vast majority of immigrants, almost 86% of the total, arrived from northwest Europe, principally Great Britain, Ireland, Germany, and Scandinavia. During the colonial period, though the data do not allow an accurate breakdown, most immigrants arrived from Britain, with smaller numbers coming from Ireland and Germany. The years between 1881 and 1893 saw a transition in the sources of U.S. immigrants. After 1881, immigrant volume from central, eastern, and southern Europe began to increase rapidly. Between 1894 and 1914, immigrants from southern, central, and eastern Europe accounted for 69% of the total. With the onset of World War I in 1914, the sources of U.S. immigration again changed. From 1915 to the present day, a major source of immigrants to the United States has been the Western Hemisphere, accounting for 46% of the total. In the period between 1915 and 1960, virtually all of the remaining immigrants came from Europe, though no specific part of Europe was dominant. Beginning in the 1960s, immigration from Europe fell off substantially and was replaced by a much larger percentage of immigrants from Asia. Also noteworthy is the rise in immigration from Africa in the twenty-first century. Thus, over the course of U.S. history, the sources of immigration changed from northwestern Europe, to southern, central, and eastern Europe, to the Americas in combination with Europe, and finally to the current situation in which most immigrants come from the Americas, Asia, and Africa.

Duration of Voyage and Method of Travel

Before the 1840s, immigrants arrived on sailing ships. General information on the length of the voyage is unavailable for the colonial and early national periods. By the 1840s, however, the average voyage length for ships from the British Isles was five to six weeks, with those from the European continent taking a week or so longer. In the 1840s, a few steamships began to cross the Atlantic. Over the course of the 1850s, steamships began to account for a larger, though still minority, percentage of immigrant travel. By 1873, virtually all immigrants arrived on steamships (Cohn 2005). As a result, the voyage time fell initially to about two weeks and it continued to decline into the twentieth century. Steamships remained the primary means of travel until after World War II. As a consequence of the boom in airplane travel over the last few decades, most immigrants now arrive via air.

Place of Arrival

Where immigrants landed in the United States varied, especially in the period before the Civil War. During the colonial and early national periods, immigrants arrived not only at New York City but also at a variety of other ports, especially Philadelphia, Boston, New Orleans, and Baltimore. Over time, and especially when most immigrants began arriving via steamship, New York City became the main arrival port. No formal immigration facilities existed at any of the ports until New York City established Castle Garden as its landing depot in 1855. This facility, located at the tip of Manhattan, was replaced in 1892 with Ellis Island, which in turn operated until 1954.

Death Rates during the Voyage

A final aspect to consider is the mortality experienced by the individuals on board the ships. Information taken from the Passenger Lists for the sailing-ship era between 1820 and 1860 indicates a loss rate of one to two percent among the immigrants who boarded (Cohn 2009). Given the length of the trip and taking into account the ages of the immigrants, this rate represents mortality approximately four times higher than that experienced by non-migrants. Mortality was mainly due to outbreaks of cholera and typhus on some ships, leading to especially high death rates among children and the elderly. There appears to have been little trend over time in mortality or differences in the loss rate by country of origin, though some evidence suggests the loss rate may have differed by port of embarkation. In addition, the best evidence from the colonial period finds a loss rate only slightly higher than that of the antebellum years. In the period after the Civil War, with the change to steamships and the resulting shorter travel time and improved on-board conditions, mortality on the voyages fell, though exactly how much has not been determined.

The Causes of Immigration

Economic historians generally believe no single factor led to immigration. In fact, different studies have tried to explain immigration by emphasizing different factors, with the first important study being done by Thomas (1954). The most recent attempt to comprehensively explain immigration has been by Hatton and Williamson (1998), who focus on the period between 1860 and 1914. Massey (1999) expresses relatively similar views. Hatton and Williamson view immigration from a country during this time as being caused by up to five different factors: (a) the difference in real wages between the country and the United States; (b) the rate of population growth in the country 20 or 30 years before; (c) the degree of industrialization and urbanization in the home country; (d) the volume of previous immigrants from that country or region; and (e) economic and political conditions in the United States. To this list can be added factors not relevant during the 1860 to 1914 period, such as the potato famine, the movement from sail to steam, and the presence or absence of immigration restrictions. Thus, a total of at least eight factors affected immigration.
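To fix ideas, the five factors can be gathered into a purely illustrative reduced-form emigration equation for country i in period t; this is an expository sketch, not Hatton and Williamson's actual specification, and the variable names are assumptions made here for clarity:

\[
m_{it} = \beta_0 + \beta_1 \,(w^{US}_{t} - w_{it}) + \beta_2 \, g_{i,\,t-k} + \beta_3 \, IND_{it} + \beta_4 \, MIG_{i,\,t-1} + \beta_5 \, US_{t} + \varepsilon_{it},
\]

where \(m_{it}\) is the emigration rate to the United States, \(w^{US}_{t} - w_{it}\) the real-wage gap, \(g_{i,\,t-k}\) population growth roughly 20 to 30 years earlier, \(IND_{it}\) the degree of industrialization and urbanization at home, \(MIG_{i,\,t-1}\) the stock of previous emigrants from the country or region, and \(US_{t}\) economic and political conditions in the United States.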

Causes of Fluctuations in Immigration Levels over Time

As discussed above, the total volume of immigration trended upward until World War I. The initial increase in immigration during the 1830s and 1840s was caused by improvements in shipping, more rapid population growth in Europe, and the potato famine in the latter part of the 1840s, which affected not only Ireland but also much of northwest Europe. As previously noted, the steamship replaced the sailing ship after the Civil War. By substantially reducing the length of the trip and increasing comfort and safety, the steamship encouraged an increase in the volume of immigration. Part of the reason volume increased was that temporary immigration became more likely. In this situation, an individual came to the United States not planning to stay permanently but instead planning to work for a period of time before returning home. All in all, the period from 1865 through 1914, when immigration was not restricted and steamships were dominant, saw an average yearly immigrant volume of almost 529,000. In contrast, average yearly immigration between 1820 and 1860 via sailing ship was only 123,000, and even between 1847 and 1860 was only 266,000.

Another feature of the data in Table 1 is that the yearly volume of immigration fluctuated quite a bit in the period before 1914. The fluctuations are mainly due to changes in economic and political conditions in the United States. Essentially, periods of low volume corresponded with U.S. economic depressions or times of widespread opposition to immigrants. In particular, volume declined during the nativist outbreak of the 1850s, the major depressions of the 1870s and 1890s, and the Great Depression of the 1930s. As discussed in the next section, the United States imposed widespread restrictions on immigration beginning in the 1920s. Since then, the volume has been subject to more direct determination by the United States government. Thus, fluctuations in the total volume of immigration over time are due to four of the eight factors listed above: the potato famine, the movement from sail to steam, economic and political conditions in the United States, and the presence or absence of immigration restrictions.

Factors Influencing Immigration Rates from Particular Countries

The other four factors are primarily used to explain changes in the source countries of immigration. A larger difference in real wages between the country and the United States increased immigration from the country because it meant immigrants had more to gain from the move. Because most immigrants were between 15 and 35 years old, a higher population growth 20 or 30 years earlier meant there were more individuals in the potential immigrant group. In addition, a larger volume of young workers in a country reduced job prospects at home and further encouraged immigration. A greater degree of industrialization and urbanization in the home country typically increased immigration because traditional ties with the land were broken during this period, making laborers in the country more mobile. Finally, the presence of a larger volume of previous immigrants from that country or region encouraged more immigration because potential immigrants now had friends or relatives to stay with who could smooth their transition to living and working in the United States.

Based on these four factors, Hatton and Williamson explain the rise and fall in the volume of immigration from a country to the United States. Immigrant volume initially increased as a consequence of more rapid population growth and industrialization in a country and the existence of a large gap in real wages between the country and the United States. Within a number of years, volume increased further due to the previous immigration that had occurred. Volume remained high until various changes in Europe caused immigration to decline. Population growth slowed. Most of the countries had undergone industrialization. Partly due to the previous immigration, real wages rose at home and became closer to those in the United States. Thus, each source country went through stages where immigration increased, reached a peak, and then declined.

Differences in the timing of these effects then led to changes in the source countries of the immigrants. The countries of northwest Europe were the first to experience rapid population growth and to begin industrializing. By the latter part of the nineteenth century, immigration from these countries was in the stage of decline. At about the same time, countries in central, eastern, and southern Europe were experiencing the beginnings of industrialization and more rapid population growth. This model holds directly only through the 1920s, because U.S. government policy changed. At that point, quotas were established on the number of individuals allowed to immigrate from each country. Even so, many countries, especially those in northwest Europe, had passed the point where a large number of individuals wanted to leave and thus did not fill their quotas. The quotas were binding for many other countries in Europe in which pressures to emigrate were still strong. Even today, the countries providing the majority of immigrants to the United States, those south of the United States and in Asia and Africa, are places where population growth is high, industrialization is breaking traditional ties with the land, and real wage differentials with the United States are large.

Immigration Policy and Nativism

This section summarizes the changes in U.S. immigration policy. Only the most important policy changes are discussed and a number of relatively minor changes have been ignored. Interested readers are referred to Le May (1987) and Briggs (1984) for more complete accounts of U.S. immigration policy.

Few Restrictions before 1882

Immigration into the United States was subject to virtually no legal restrictions before 1882. Essentially, anyone who wanted to enter the United States could and, as discussed earlier, no specified arrival areas existed until 1855. Individuals simply got off the ship and went about their business. Little opposition among U.S. citizens to immigration is apparent until about the 1830s. The growing concern at this time was due to the increasing volume of immigration in both absolute terms and relative to the U.S. population, and the fact that more of the arrivals were Catholic and unskilled. The nativist feeling burst into the open during the 1850s when the Know-Nothing political party achieved a great deal of political success in the 1854 off-year elections. The party did not favor restrictions on the number of immigrants, though it did seek to restrict immigrants' ability to become voting citizens quickly. For a short period of time, the Know-Nothings had an important presence in Congress and many state legislatures. With the downturn in immigration in 1855 and the nation’s attention turning more to the slavery issue, their influence receded.

Chinese Exclusion Act

The first restrictive immigration laws were directed against Asian countries. The first law was the Chinese Exclusion Act of 1882. This law essentially prohibited the immigration of Chinese citizens and it stayed in effect until it was repealed during World War II. In 1907, Japanese immigration was substantially reduced through a Gentlemen’s Agreement between Japan and the United States. It is noteworthy that the Chinese Exclusion Act also prohibited the immigration of “convicts, lunatics, idiots” and those individuals who might need to be supported by government assistance. The latter provision was used to some extent during periods of high unemployment, though as noted above, immigration fell anyway because of the lack of jobs.

Literacy Test Adopted in 1917

The desire to restrict immigration to the United States grew over the latter part of the nineteenth century. This growth was due partly to the high volume and rate of immigration and partly to the changing national origins of the immigrants; more began arriving from southern, central, and eastern Europe. In 1907, Congress set up the Immigration Commission, chaired by Senator William Dillingham, to investigate immigration. This body issued a famous report, now viewed as flawed, concluding that immigrants from the newer parts of Europe did not assimilate easily and, in general, blaming them for various economic ills. Attempts at restricting immigration were initially made by proposing a law requiring a literacy test for admission to the United States, and such a law was finally passed in 1917. This same law also virtually banned immigration from any country in Asia. Restrictionists were no doubt displeased when the volume of immigration from Europe resumed its former level after World War I in spite of the literacy test. The movement then turned to explicitly limiting the number of arrivals.

1920s: Quota Act and National Origins Act

The Quota Act of 1921 laid the framework for a fundamental change in U.S. immigration policy. It limited the number of immigrants from Europe to a total of about 350,000 per year. National quotas were established in direct proportion to each country’s presence in the U.S. population in 1910. In addition, the act assigned Asian countries quotas near zero. Three years later in 1924, the National Origins Act instituted a requirement that visas be obtained from an American consulate abroad before immigrating, reduced the total European quota to about 165,000, and changed how the quotas were determined. Now, the quotas were established in direct proportion to each country’s presence in the U.S. population in 1890, though this aspect of the act was not fully implemented until 1929. Because relatively few individuals immigrated from southern, central, and eastern Europe before 1890, the effect of the 1924 law was to drastically reduce the number of individuals allowed to immigrate to the United States from these countries. Yet total immigration to the United States remained fairly high until the Great Depression because neither the 1921 nor the 1924 law restricted immigration from the Western Hemisphere. Thus, it was the combination of the outbreak of World War I and the subsequent 1920s restrictions that caused the Western Hemisphere to become a more important source of immigrants to the United States after 1915, though it should be recalled the rate of immigration fell to low levels after 1930.

Immigration and Nationality Act of 1965

The last major change in U.S. immigration policy occurred with the passage of the Immigration and Nationality Act of 1965. This law abolished the quotas based on national origins. Instead, a series of preferences were established to determine who would gain entry. The most important preference was given to relatives of U.S. citizens and permanent resident aliens. By the twenty-first century, about two-thirds of immigrants came through these family channels. Preferences were also given to professionals, scientists, artists, and workers in short supply. The 1965 law kept an overall quota on total immigration for Eastern Hemisphere countries, originally set at 170,000, and no more than 20,000 individuals were allowed to immigrate to the United States from any single country. This law was designed to treat all countries equally. Asian countries were treated the same as any other country, so the virtual prohibition on immigration from Asia disappeared. In addition, for the first time the law also limited the number of immigrants from Western Hemisphere countries, with the original overall quota set at 120,000. It is important to note that neither quota was binding because immediate relatives of U.S. citizens, such as spouses, parents, and minor children, were exempt from the quota. In addition, the United States has admitted large numbers of refugees at different times from Vietnam, Haiti, Cuba, and other countries. Finally, many individuals enter the United States on student visas, enroll in colleges and universities, and eventually get companies to sponsor them for a work visa. Thus, the total number of legal immigrants to the United States since 1965 has always been larger than the combined quotas. This law has led to an increase in the volume of immigration and, by treating all countries the same, has led to Asia recently becoming a more important source of U.S. immigrants.

Though features of the 1965 law have been modified since it was enacted, this law still serves as the basis for U.S. immigration policy today. The most important modifications occurred in 1986 when employer sanctions were adopted for those hiring illegal workers. On the other hand, the same law also gave temporary resident status to individuals who had lived illegally in the United States since before 1982. The latter feature led to very high volumes of legal immigration being recorded in 1989, 1990, and 1991.

The Characteristics of the Immigrants

In this section, various characteristics of the immigrant stream arriving at different points in time are discussed. The following characteristics of immigration are analyzed: gender breakdown, age structure, family vs. individual migration, and occupations listed. Virtually all the information is based on the Passenger Lists, a source discussed above.

Gender and Age

Data are presented in Table 2 on the gender breakdown and age structure of immigration. The gender breakdown and age structure remain fairly consistent in the period before 1930. Generally, about 60% of the immigrants were male. As to age structure, about 20% of immigrants were children, 70% were adults up to age 44, and 10% were older than 44. In most of the period and for most countries, immigrants were typically young single males, young couples, or, especially in the era before the steamship, families. For particular countries, such as Ireland, a large number of the immigrants were single women (Cohn, 1995). The primary exception to this generalization was the 1899-1914 period, when 68% of the immigrants were male and adults under 45 accounted for 82% of the total. This period saw the immigration of a large number of single males who planned to work for a period of months or years and return to their homeland, a development made possible by the steamship shortening the voyage and reducing its cost (Nugent, 1992). The characteristics of the immigrant stream since 1930 have been somewhat different. Males have comprised less than one-half of all immigrants. In addition, the percentage of immigrants over age 45 has increased at the expense of those between the ages of 14 and 44.

Table 2
Immigration by Gender and Age

Years Percent Males Percent under 14 years Percent 14–44 years Percent 45 years and over
1820-1831 70 19 70 11
1832-1846 62 24 67 10
1847-1854 59 23 67 10
1855-1864 58 19 71 10
1865-1873 62 21 66 13
1873-1880 63 19 69 12
1881-1893 61 20 71 10
1894-1898 57 15 77 8
1899-1914 68 12 82 5
1915-1917 59 16 74 10
1918-1930 56 18 73 9
1931-1946 40 15 67 17
1947-1960 45 21 64 15
1961-1970 45 25 61 14
1971-1980 46 24 61 15
1981-1990 52 18 66 16
1991-2000 51 17 65 18
2001-2008 45 15 64 21
2009-2015 45 15 61 24

Notes: From 1918-1970, the age breakdown is “Under 16” and “16-44.” From 1971 to 1998, the age breakdown is “Under 15” and “15-44.” For 2001-2015, it is again “Under 16” and “16-44.”

Sources: 1820-1970: Historical Statistics (1976). Years since 1970: U.S. Immigration and Naturalization Service (various years). 2002-2015: Department of Homeland Security: Office of Immigration Statistics (various years).

Occupations

Table 3 presents data on the percentage of immigrants who did not report an occupation and the percentage breakdown of those reporting an occupation. The percentage not reporting an occupation declined through 1914. The small percentages between 1894 and 1914 are a reflection of the large number of single males who arrived during this period. As is apparent, the classification scheme for occupations has changed over time. Though there is no perfect way to correlate the occupation categories used in the different time periods, skilled workers comprised about one-fourth of the immigrant stream through 1970. The immigration of farmers was important before the Civil War but declined steadily over time. The percentage of laborers has varied over time, though during some time periods they comprised one-half or more of the immigrants. The highest percentages of laborers occurred during good years for the U.S. economy (1847-54, 1865-73, 1881-93, 1899-1914), because laborers possessed the fewest skills and would have an easier time finding a job when the U.S. economy was strong. Commercial workers, mainly merchants, were an important group of immigrants very early when immigrant volume was low, but their percentage fell substantially over time. Professional workers were always a small part of U.S. immigration until the 1930s. Since 1930, these workers have comprised a larger percentage of immigrants reporting an occupation.

Table 3
Immigration by Occupation

Year Percent with no occup. listed Percent of immigrants with an occupation in each category
Professional Commercial Skilled Farmers Servants Laborers Misc.
1820-1831 61 3 28 30 23 2 14
1832-1846 56 1 12 27 33 2 24
1847-1854 54 0 6 18 33 2 41
1855-1864 53 1 12 23 23 4 37 0
1865-1873 54 1 6 24 18 7 44 1
1873-1880 47 2 4 24 18 8 40 5
1881-1893 49 1 3 20 14 9 51 3
1894-1898 38 1 4 25 12 18 37 3
Professional, technical, and kindred workers Farmers and farm managers Managers, officials, and proprietors, exc. farm Clerical, sales, and kindred workers Craftsmen, foremen, operatives, and kindred workers Private HH workers Service workers, exc. private household Farm laborers and foremen Laborers, exc. farm and mine
1899-1914 26 1 2 3 2 18 15 2 26 33
1915-1919 37 5 4 5 5 21 15 7 11 26
1920-1930 39 4 5 4 7 24 17 6 8 25
1931-1946 59 19 4 15 13 21 13 6 2 7
1947-1960 53 16 5 5 17 31 8 6 3 10
1961-1970 56 23 2 5 17 25 9 7 4 9
1971-1980 59 25 — a 8 12 36 — b 15 5 — c
1981-1990 56 14 — a 8 12 37 — b 22 7 — c
1991-2000 61 17 — a 7 9 23 — b 14 30 — c
2001-2008 76 45 — a — d 14 21 — b 18 5 — c
2009-2015 76 46 — a — d 12 19 — b 19 5 — c

a – included with “Farm laborers and foremen”; b – included with “Service workers, etc.”; c – included with “Craftsmen, etc.”; d – included with “Professional.”

Sources: 1820-1970: Historical Statistics (1976). Years since 1970: U.S. Immigration and Naturalization Service (various years). 2002-2015: Department of Homeland Security: Office of Immigration Statistics (various years). From 1970 through 2001, the INS has provided the following occupational categories: Professional, specialty, and technical (listed above under “Professional”); Executive, administrative, and managerial (listed above under “Managers, etc.”); Sales; Administrative support (these two are combined and listed above under “Clerical, etc.”); Precision production, craft, and repair; Operator, fabricator, and laborer (these two are combined and listed above under “Craftsmen, etc.”); Farming, forestry, and fishing (listed above under “Farm laborers and foremen”); and Service (listed above under “Service workers, etc.”). Since 2002, the Department of Homeland Security has combined the Professional and Executive categories. Note: Entries with a zero indicate less than one-half of one percent. Entries with dashes indicate no information or no immigrants.

Skill Levels

The skill level of the immigrant stream is important because it potentially affects the U.S. labor force, an issue considered in the next section. Before turning to this issue, a number of comments can be made concerning the occupational skill level of the U.S. immigration stream. First, skill levels fell substantially in the period before the Civil War. Between 1820 and 1831, only 39% of the immigrants were farmers, servants, or laborers, the least skilled groups. Though the data are not as complete, immigration during the colonial period was almost certainly at least this skilled. By the 1847-54 period, however, the less-skilled percentage had increased to 76%. Second, the less-skilled percentage did not change dramatically late in the nineteenth century when the source of immigration changed from northwest Europe to other parts of Europe. Comparing 1873-80 with 1899-1914, both periods of high immigration, farmers, servants, and laborers accounted for 66% of the immigrants in the former period and 78% in the latter period. The second figure is, however, similar to that during the 1847-54 period. Third, the restrictions on immigration imposed during the 1920s had a sizable effect on the skill level of the immigrant stream. Between 1930 and 1970, only 31-34% of the immigrants were in the least-skilled group.
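The antebellum and late-nineteenth-century percentages cited above can be reproduced directly from Table 3 by summing the farmer, servant, and laborer shares. The minimal sketch below checks the three periods for which that mapping is unambiguous; the post-1899 classification does not map as cleanly and is omitted here:

```python
# Occupation shares from Table 3 (percent of immigrants reporting an occupation).
# "Least skilled" here means the Farmers, Servants, and Laborers columns of the
# pre-1899 classification; the later classification is not included in this check.
table3 = {
    "1820-1831": {"Farmers": 23, "Servants": 2, "Laborers": 14},
    "1847-1854": {"Farmers": 33, "Servants": 2, "Laborers": 41},
    "1873-1880": {"Farmers": 18, "Servants": 8, "Laborers": 40},
}

for period, shares in table3.items():
    print(f"{period}: {sum(shares.values())}% farmers, servants, or laborers")
# 1820-1831: 39% farmers, servants, or laborers
# 1847-1854: 76% farmers, servants, or laborers
# 1873-1880: 66% farmers, servants, or laborers
```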

Fourth, a deterioration in immigrant skills appears in the numbers in the 1980s and 1990s, and then an improvement appears since 2001. Both changes may be an illusion. In Table 3 for the 1980s and 1990s, the percentage in the “Professional” category falls while the percentages in the “Service” and “Farm workers” categories rise. These changes are, however, due to the amnesty for illegal immigrants resulting from the 1986 law. The amnesty led to the recorded volume of immigration in 1989, 1990, and 1991 being much higher than typical, and most of the “extra” immigrants recorded their occupation as “Service” or “Farm laborer.” If these years are ignored, then little change occurred in the occupational distribution of the immigrant stream during the 1980s and 1990s. Two caveats, however, should be noted. First, the illegal immigrants cannot, of course, be ignored. Second, the skill level of the U.S. labor force was improving over the same period. Thus, relative to the U.S. labor force and including illegal immigration, it is apparent the occupational skill level of the U.S. immigrant stream declined during the 1980s and 1990s. Turning to the twenty-first century, the percentage of the legal immigrant stream in the highest-skilled category appears to have increased. This conclusion is also not certain because the changes that occurred in how occupations were categorized beginning in 2001 make a straightforward comparison potentially inexact. This uncertainty is increased by the growing percentage of immigrants for whom no occupation is reported. It is not clear whether a larger percentage of those arriving actually did not work (recall that a growing percentage of legal immigrants are somewhat older) or if more simply did not list an occupation. Overall, detecting changes in the skill level of the legal immigrant stream since about 1930 is fraught with difficulty.

The Effects of Immigration on the United States Economy

Though immigration has effects on the country from which the immigrants leave, this section only examines the effects on the United States, mainly those occurring over longer periods of time. Over short periods of time, sizeable and potentially negative effects can occur in a specific area when there is a huge influx of immigrants. A large number of arrivals in a short period of time in one city can cause school systems to become overcrowded, housing prices and welfare payments to increase, and jobs to become difficult to obtain. Yet most economists believe the effects of immigration over time are much less harmful than commonly supposed and, in many ways, are beneficial. The following longer-term issues are discussed: the effects of immigration on the overall wage rate of U.S. workers; the effects on the wages of particular groups of workers, such as those who are unskilled; and the effects on the rate of economic growth, that is, the standard of living, in the United States. Determining the effects of immigration on the United States is complex and virtually none of the conclusions presented here are without controversy.

Immigration’s Impact on Overall Wage Rates

Immigration is popularly thought to lower the overall wage rate in the United States by increasing the supply of individuals looking for jobs. This effect may occur in an area over a fairly short period of time. Over longer time periods, however, wages will only fall if the amounts of other resources don’t change. Wages will not fall if the immigrants bring sufficient amounts of other resources with them, such as capital, or cause the amount of other resources in the economy to increase sufficiently. For example, historically the large-scale immigration from Europe contributed to rapid westward expansion of the United States during most of the nineteenth century. The westward expansion, however, increased the amounts of land and natural resources that were available, factors that almost certainly kept immigration from lowering wage rates. Immigrants also increase the amounts of other resources in the economy through running their own businesses, which both historically and in recent times has occurred at a greater rate among immigrants than native workers. By the beginning of the twentieth century, the westward frontier had been settled. A number of researchers have estimated that immigration did lower wages at this time (Hatton and Williamson, 1998; Goldin, 1994), though others have criticized these findings (Carter and Sutch, 1999). For the recent time period, most studies have found little effect of immigration on the level of wages, though a few have found an effect (Borjas, 1999).

Even if immigration leads to a fall in the wage rate, it does not follow that individual workers are worse off. Workers typically receive income from sources other than their own labor. If wages fall, then many other resource prices in the economy rise. For example, immigration increases the demand for housing and land and existing owners benefit from an increase in the current value of their property. Whether any individual worker is better off or worse off in this case is not easy to determine. It depends on the amounts of other resources each individual possesses.

Immigration’s Impact on Wages of Unskilled Workers

Consider the second issue, the effects of immigration on the wages of unskilled workers. If the immigrants arriving in the country are primarily unskilled, then the larger number of unskilled workers could cause their wage to fall if the overall demand for these workers doesn’t change. A requirement for this effect to occur is that the immigrants be less skilled than the U.S. labor force they enter. As discussed above, during colonial times immigrant volume was small and the immigrants were probably more skilled than the existing U.S. labor force. During the 1830s and 1840s, the volume and rate of immigration increased substantially and the skill level of the immigrant stream fell to approximately match that of the native labor force. Instead of lowering the wages of unskilled workers relative to those of skilled workers, however, the large inflow apparently led to little change in the wages of unskilled workers, while some skilled workers lost and others gained. The explanation for these results is that the larger number of unskilled workers resulting from immigration was a factor in employers adopting new methods of production that used more unskilled labor. As a result of this technological change, the demand for unskilled workers increased so their wage did not decline. As employers adopted these new machines, however, skilled artisans who had previously done many of these jobs, such as iron casting, suffered losses. Other skilled workers, such as many white-collar workers who were not in direct competition with the immigrants, gained. Some evidence exists to support a differential effect on skilled workers during the antebellum period (Williamson and Lindert, 1980; Margo, 2000). After the Civil War, however, the skill level of the immigrant stream was close to that of the native labor force, so immigration probably did not further affect the wage structure through the 1920s (Carter and Sutch, 1999).

Impact since World War II

The lower volume of immigration in the period from 1930 through 1960 meant immigration had little effect on the relative wages of different workers during these years. With the resumption of higher volumes of immigration after 1965, however, and with the immigrants’ skill levels being low through 2000, an effect on relative wages again became possible. In fact, the relative wages of high-school dropouts in the United States deteriorated during the same period, especially after the mid-1970s. Researchers who have studied the question have concluded that immigration accounted for about one-fourth of the wage deterioration experienced by high-school dropouts during the 1980s, though some researchers find a lower effect and others a higher one (Friedberg and Hunt, 1995; Borjas, 1999). Wages are determined by a number of factors other than immigration. In this case, it is thought the changing nature of the economy, such as the widespread use of computers increasing the benefits to education, bears more of the blame for the decline in the relative wages of high-school dropouts.

Economic Benefits from Immigration

Beyond any effect on wages, there are a number of ways in which immigration might improve the overall standard of living in an economy. First, immigrants may engage in inventive or scientific activity, with the result being a gain to everyone. Evidence exists for both the historical and more recent periods that the United States has attracted individuals with an inventive/scientific nature. The United States has always been a leader in these areas. Individuals are more likely to be successful in such an environment than in one where these activities are not as highly valued. Second, immigrants expand the size of markets for various goods, which may lower firms’ average costs as firms grow larger. The result would be a decrease in the price of the goods in question. Third, most individuals immigrate between the ages of 15 and 35, so the expenses of their basic schooling are paid abroad. In the past, most immigrants, being of working age, immediately got a job. Thus, immigration increased the percentage of the population in the United States that worked, a factor that raises the average standard of living in a country. Even in more recent times, most immigrants work, though the increased proportion of older individuals in the immigrant stream means the positive effects from this factor may be lower than in the past. Fourth, while immigrants may place a strain on government services in an area, such as the school system, they also pay taxes. Even illegal immigrants directly pay sales taxes on their purchases of goods and indirectly pay property taxes through their rent. Finally, the fact that individuals are less likely to immigrate to the United States during periods of high unemployment is also beneficial. By reducing the number of people looking for jobs during these periods, this factor increases the likelihood U.S. citizens will be able to find a job.

The Experience of Immigrants in the U.S. Labor Market

This section examines the labor market experiences of immigrants in the United States. The issue of discrimination against immigrants in jobs is investigated, along with the degree of success immigrants experienced over time. Again, the issues are investigated for the historical period of immigration as well as more recent times. Interested readers are directed to Borjas (1999), Ferrie (1999), Carter and Sutch (1999), Hatton and Williamson (1998), and Friedberg and Hunt (1995) for more technical discussions.

Did Immigrants Face Labor Market Discrimination?

Discrimination can take various forms. The first form is wage discrimination, in which a worker of one group is paid a wage lower than an equally productive worker of another group. Empirical tests of this hypothesis generally find this type of discrimination has not existed. At any point in time, immigrants have been paid the same wage for a specific job as a native worker. If immigrants generally received lower wages than native workers, the differences reflected the lower skills of the immigrants. Historically, as discussed above, the skill level of the immigrant stream was similar to that of the native labor force, so wages did not differ much between the two groups. During more recent years, the immigrant stream has been less skilled than the native labor force, leading immigrants to receive lower wages. A second form of discrimination is in the jobs an immigrant is able to obtain. For example, in 1910, immigrants accounted for over half of the workers in a number of occupations, including mining, apparel, steel manufacturing, meat packing, baking, and tailoring. If a reason for the employment concentration was that immigrants were kept out of alternative higher paying jobs, then the immigrants would suffer. This type of discrimination may have occurred against Catholics during the 1840s and 1850s and against the immigrants from central, southern, and eastern Europe after 1890. In both cases, it is possible the immigrants suffered because they could not obtain higher paying jobs. In more recent years, reports of immigrants trained as doctors in their home country but not easily allowed to practice as such in the United States may represent a similar situation. Yet the open nature of the U.S. schooling system and economy has been such that this effect usually did not impact the fortunes of the immigrants’ children or did so to a much smaller degree.

Wage Growth, Job Mobility, and Wealth Accumulation

Another aspect of how immigrants fared in the U.S. labor market is their experiences over time with respect to wage growth, job mobility, and wealth accumulation. A study done by Ferrie (1999) for immigrants arriving between 1840 and 1850, the period when the inflow of immigrants relative to the U.S. population was the highest, found immigrants from Britain and Germany generally improved their job status over time. By 1860, over 75% of the individuals reporting a low-skilled job on the Passenger Lists had moved up into a higher-skilled job, while fewer than 25% of those reporting a high-skilled job on the Passenger Lists had moved down into a lower-skilled job. Thus, the job mobility for these individuals was high. For immigrants from Ireland, the experience was quite different; the percentage of immigrants moving up was only 40% and the percentage moving down was over 50%. It isn’t clear if the Irish did worse because they had less education and fewer skills or whether the differences were due to some type of discrimination against them in the labor market. As to wealth, all the immigrant groups succeeded in accumulating larger amounts of wealth the longer they were in the United States, though their wealth levels fell short of those enjoyed by natives. Essentially, the evidence indicates antebellum immigrants were quite successful over time in matching their skills to the available jobs in the U.S. economy.

The extent to which immigrants had success over time in the labor market in the period since the Civil War is not clear. Most researchers have thought that immigrants who arrived before 1915 had a difficult time. For example, Hanes (1996) concludes that immigrants, even those from northwest Europe, had slower earnings growth over time than natives, a finding he argues was due to poor assimilation. Hatton and Williamson (1998), on the other hand, criticize these findings on technical grounds and conclude that immigrants assimilated relatively easily into the U.S. labor market. For the period after World War II, Chiswick (1978) argues that immigrants’ wages have increased relative to those of natives the longer the immigrants have been in the United States. Borjas (1999) has criticized Chiswick’s finding by suggesting it is caused by a decline in the skills possessed by the arriving immigrants between the 1950s and the 1990s. Borjas finds that 25- to 34-year-old male immigrants who arrived in the late 1950s had wages 9% lower than comparable native males, but by 1970 had wages 6% higher. In contrast, those arriving in the late 1970s had wages 22% lower at entry. By the late 1990s, their wages were still 12% lower than comparable natives. Overall, the degree of success experienced by immigrants in the U.S. labor market remains an area of controversy.

References

Borjas, George J. Heaven’s Door: Immigration Policy and the American Economy. Princeton: Princeton University Press, 1999.

Briggs, Vernon M., Jr. Immigration and the American Labor Force. Baltimore: Johns Hopkins University Press, 1984.

Carter, Susan B., and Richard Sutch. “Historical Perspectives on the Economic Consequences of Immigration into the United States.” In The Handbook of International Migration: The American Experience, edited by Charles Hirschman, Philip Kasinitz, and Josh DeWind, 319-341. New York: Russell Sage Foundation, 1999.

Carter, Susan B., et al. Historical Statistics of the United States: Earliest Times to the Present – Millennial Edition. Volume 1: Population. New York: Cambridge University Press, 2006.

Chiswick, Barry R. “The Effect of Americanization on the Earnings of Foreign-Born Men.” Journal of Political Economy 86 (1978): 897-921.

Cohn, Raymond L. “A Comparative Analysis of European Immigrant Streams to the United States during the Early Mass Migration.” Social Science History 19 (1995): 63-89.

Cohn, Raymond L.  “The Transition from Sail to Steam in Immigration to the United States.” Journal of Economic History 65 (2005): 479-495.

Cohn, Raymond L. Mass Migration under Sail: European Immigration to the Antebellum United States. New York: Cambridge University Press, 2009.

Erickson, Charlotte J. Leaving England: Essays on British Emigration in the Nineteenth Century. Ithaca: Cornell University Press, 1994.

Ferenczi, Imre. International Migrations. New York: Arno Press, 1970.

Ferrie, Joseph P. Yankeys Now: Immigrants in the Antebellum United States, 1840-1860. New York: Oxford University Press, 1999.

Friedberg, Rachel M., and Jennifer Hunt. “The Impact of Immigrants on Host Country Wages, Employment and Growth.” Journal of Economic Perspectives 9 (1995): 23-44.

Goldin, Claudia. “The Political Economy of Immigration Restrictions in the United States, 1890 to 1921.” In The Regulated Economy: A Historical Approach to Political Economy, edited by Claudia Goldin and Gary D. Libecap, 223-257. Chicago: University of Chicago Press, 1994.

Grabbe, Hans-Jürgen. “European Immigration to the United States in the Early National Period, 1783-1820.” Proceedings of the American Philosophical Society 133 (1989): 190-214.

Hanes, Christopher. “Immigrants’ Relative Rate of Wage Growth in the Late Nineteenth Century.” Explorations in Economic History 33 (1996): 35-64.

Hansen, Marcus L. The Atlantic Migration, 1607-1860. Cambridge, MA.: Harvard University Press, 1940.

Hatton, Timothy J., and Jeffrey G. Williamson. The Age of Mass Migration: Causes and Economic Impact. New York: Oxford University Press, 1998.

Jones, Maldwyn Allen. American Immigration. Chicago: University of Chicago Press, Second Edition, 1960.

Le May, Michael C. From Open Door to Dutch Door: An Analysis of U.S. Immigration Policy Since 1820. New York: Praeger, 1987.

Margo, Robert A. Wages and Labor Markets in the United States, 1820-1860. Chicago: University of Chicago Press, 2000.

Massey, Douglas S. “Why Does Immigration Occur? A Theoretical Synthesis.” In The Handbook of International Migration: The American Experience, edited by Charles Hirschman, Philip Kasinitz, and Josh DeWind, 34-52. New York: Russell Sage Foundation, 1999.

Miller, Kerby A. Emigrants and Exiles: Ireland and the Irish Exodus to North America. Oxford: Oxford University Press, 1985.

Nugent, Walter. Crossings: The Great Transatlantic Migrations, 1870-1914. Bloomington and Indianapolis: Indiana University Press, 1992.

Taylor, Philip. The Distant Magnet. New York: Harper & Row, 1971.

Thomas, Brinley. Migration and Economic Growth: A Study of Great Britain and the Atlantic Economy. Cambridge, U.K.: Cambridge University Press, 1954.

U.S. Department of Commerce. Historical Statistics of the United States. Washington, DC, 1976.

U.S. Immigration and Naturalization Service. Statistical Yearbook of the Immigration and Naturalization Service. Washington, DC: U.S. Government Printing Office, various years.

Walker, Mack. Germany and the Emigration, 1816-1885. Cambridge, MA: Harvard University Press, 1964.

Williamson, Jeffrey G., and Peter H. Lindert. American Inequality: A Macroeconomic History. New York: Academic Press, 1980.

Citation: Cohn, Raymond L. “Immigration to the United States”. EH.Net Encyclopedia, edited by Robert Whaples. Revised August 2, 2017. URL http://eh.net/encyclopedia/immigration-to-the-united-states/

Hours of Work in U.S. History

Robert Whaples, Wake Forest University

In the 1800s, many Americans worked seventy hours or more per week and the length of the workweek became an important political issue. Since then the workweek’s length has decreased considerably. This article presents estimates of the length of the historical workweek in the U.S., describes the history of the shorter-hours “movement,” and examines the forces that drove the workweek’s decline over time.

Estimates of the Length of the Workweek

Measuring the length of the workweek (or workday or workyear) is a difficult task, full of ambiguities concerning what constitutes work and who is to be considered a worker. Estimating the length of the historical workweek is even more troublesome. Before the Civil War most Americans were employed in agriculture and most of these were self-employed. Like self-employed workers in other fields, they saw no reason to record the amount of time they spent working. Often the distinction between work time and leisure time was blurry. Therefore, estimates of the length of the typical workweek before the mid-1800s are very imprecise.

The Colonial Period

Based on the amount of work performed — for example, crops raised per worker — Carr (1992) concludes that in the seventeenth-century Chesapeake region, “for at least six months of the year, an eight to ten-hour day of hard labor was necessary.” This does not account for other required tasks, which probably took about three hours per day. This workday was considerably longer than for English laborers, who at the time probably averaged closer to six hours of heavy labor each day.

The Nineteenth Century

Some observers believe that most American workers adopted the practice of working from “first light to dark” — filling all their free hours with work — throughout the colonial period and into the nineteenth century. Others are skeptical of such claims and argue that work hours increased during the nineteenth century — especially its first half. Gallman (1975) calculates “changes in implicit hours of work per agricultural worker” and estimates that hours increased 11 to 18 percent from 1800 to 1850. Fogel and Engerman (1977) argue that agricultural hours in the North increased before the Civil War due to the shift into time-intensive dairy and livestock. Weiss and Craig (1993) find evidence suggesting that agricultural workers also increased their hours of work between 1860 and 1870. Finally, Margo (2000) estimates that “on an economy-wide basis, it is probable that annual hours of work rose over the (nineteenth) century, by around 10 percent.” He credits this rise to the shift out of agriculture, a decline in the seasonality of labor demand and reductions in annual periods of nonemployment. On the other hand, it is clear that working hours declined substantially for one important group. Ransom and Sutch (1977) and Ng and Virts (1989) estimate that annual labor hours per capita fell 26 to 35 percent among African-Americans with the end of slavery.

Manufacturing Hours before 1890

Our most reliable estimates of the workweek come from manufacturing, since most employers required that manufacturing workers remain at work during precisely specified hours. The Census of Manufactures began to collect this information in 1880 but earlier estimates are available. Much of what is known about average work hours in the nineteenth century comes from two surveys of manufacturing hours taken by the federal government. The first survey, known as the Weeks Report, was prepared by Joseph Weeks as part of the Census of 1880. The second was prepared in 1893 by Commissioner of Labor Carroll D. Wright, for the Senate Committee on Finance, chaired by Nelson Aldrich. It is commonly called the Aldrich Report. Both of these sources, however, have been criticized as flawed due to problems such as sample selection bias (firms whose records survived may not have been typical) and unrepresentative regional and industrial coverage. In addition, the two series differ in their estimates of the average length of the workweek by as much as four hours. These estimates are reported in Table 1. Despite the previously mentioned problems, it seems reasonable to accept two important conclusions based on these data — the length of the typical manufacturing workweek in the 1800s was very long by modern standards and it declined significantly between 1830 and 1890.

Table 1
Estimated Average Weekly Hours Worked in Manufacturing, 1830-1890

Year Weeks Report Aldrich Report
1830 69.1 —
1840 67.1 68.4
1850 65.5 69.0
1860 62.0 66.0
1870 61.1 63.0
1880 60.7 61.8
1890 — 60.0

Sources: U.S. Department of Interior (1883), U.S. Senate (1893)
Note: A dash indicates that the report in question provides no estimate for that year. Atack and Bateman (1992), using data from census manuscripts, estimate average weekly hours to be 60.1 in 1880 — very close to Weeks’ contemporary estimate. They also find that the summer workweek was about 1.5 hours longer than the winter workweek.

Hours of Work during the Twentieth Century

Because of changing definitions and data sources there does not exist a consistent series of workweek estimates covering the entire twentieth century. Table 2 presents six sets of estimates of weekly hours. Despite differences among the series, there is a fairly consistent pattern, with weekly hours falling considerably during the first third of the century and much more slowly thereafter. In particular, hours fell strongly during the years surrounding World War I, so that by 1919 the eight-hour day (with six workdays per week) had been won. Hours fell sharply at the beginning of the Great Depression, especially in manufacturing, then rebounded somewhat and peaked during World War II. After World War II, the length of the workweek stabilized around forty hours. Owen’s nonstudent-male series shows little trend after World War II, but the other series show a slow, but steady, decline in the length of the average workweek. Greis’s two series are based on the average length of the workyear and adjust for paid vacations, holidays and other time-off. The last column is based on information reported by individuals in the decennial censuses and in the Current Population Survey of 1988. It may be the most accurate and representative series, as it is based entirely on the responses of individuals rather than employers.

Table 2
Estimated Average Weekly Hours Worked, 1900-1988

Year Census of Manufacturing Jones Manufacturing Owen Nonstudent Males Greis Manufacturing Greis All Workers Census/CPS All Workers
1900 59.6* 55.0 58.5
1904 57.9 53.6 57.1
1909 56.8 (57.3) 53.1 55.7
1914 55.1 (55.5) 50.1 54.0
1919 50.8 (51.2) 46.1 50.0
1924 51.1* 48.8 48.8
1929 50.6 48.0 48.7
1934 34.4 40.6
1940 37.6 42.5 43.3
1944 44.2 46.9
1947 39.2 42.4 43.4 44.7
1950 38.7 41.1 42.7
1953 38.6 41.5 43.2 44.0
1958 37.8* 40.9 42.0 43.4
1960 41.0 40.9
1963 41.6 43.2 43.2
1968 41.7 41.2 42.0
1970 41.1 40.3
1973 40.6 41.0
1978 41.3* 39.7 39.1
1980 39.8
1988 39.2

Sources: Whaples (1990a), Jones (1963), Owen (1976, 1988), and Greis (1984). The last column is based on the author’s calculations using Coleman and Pencavel’s data from Table 4 (below).
* = these estimates are from one year earlier than the year listed.
(The figures in parentheses in the first column are unofficial estimates but are probably more precise, as they better estimate the hours of workers in industries with very long workweeks.)

Hours in Other Industrial Sectors

Table 3 compares the length of the workweek in manufacturing to that in other industries for which there is available information. (Unfortunately, data from the agricultural and service sectors are unavailable until late in this period.) The figures in Table 3 show that the length of the workweek was generally shorter in the other industries — sometimes considerably shorter. For example, in 1910 anthracite coalminers’ workweeks were about forty percent shorter than the average workweek among manufacturing workers. All of the series show an overall downward trend.

Table 3
Estimated Average Weekly Hours Worked, Other Industries

Year Manufacturing Construction Railroads Bituminous Coal Anthracite Coal
1850s about 66 about 66
1870s about 62 about 60
1890 60.0 51.3
1900 59.6 50.3 52.3 42.8 35.8
1910 57.3 45.2 51.5 38.9 43.3
1920 51.2 43.8 46.8 39.3 43.2
1930 50.6 42.9 33.3 37.0
1940 37.6 42.5 27.8 27.2
1955 38.5 37.1 32.4 31.4

Sources: Douglas (1930), Jones (1963), Licht (1983), and Tables 1 and 2.
Note: The manufacturing figures for the 1850s and 1870s are approximations based on averaging numbers from the Weeks and Aldrich reports from Table 1. The early estimates for the railroad industry are also approximations.

Recent Trends by Race and Gender

Some analysts, such as Schor (1992), have argued that the workweek increased substantially in the last half of the twentieth century. Few economists accept this conclusion, arguing that it is based on the use of faulty data (public opinion surveys) and unexplained methods of “correcting” more reliable sources. Schor’s conclusions are contradicted by numerous studies. Table 4 presents Coleman and Pencavel’s (1993a, 1993b) estimates of the average workweek of employed people — disaggregated by race and gender. For all four groups the average length of the workweek has dropped since 1950. Although median weekly hours were virtually constant for men, the upper tail of the hours distribution fell for those with little schooling and rose for the well-educated. In addition, Coleman and Pencavel find that work hours declined for young and older men (especially black men), but changed little for white men in their prime working years. Women with relatively little schooling were working fewer hours in the 1980s than in 1940, while the reverse is true of well-educated women.

Table 4
Estimated Average Weekly Hours Worked, by Race and Gender, 1940-1988

Year White Men Black Men White Women Black Women
1940 44.1 44.5 40.6 42.2
1950 43.4 42.8 41.0 40.3
1960 43.3 40.4 36.8 34.7
1970 43.1 40.2 36.1 35.9
1980 42.9 39.6 35.9 36.5
1988 42.4 39.6 35.5 37.2

Source: Coleman and Pencavel (1993a, 1993b)

Broader Trends in Time Use, 1880 to 2040

In 1880 a typical male household head had very little leisure time — only about 1.8 hours per day over the course of a year. However, as Fogel’s (2000) estimates in Table 5 show, between 1880 and 1995 the amount of work per day fell nearly in half, allowing leisure time to more than triple. Because of the decline in the length of the workweek and the declining portion of a lifetime that is spent in paid work (due largely to lengthening periods of education and retirement) the fraction of the typical American’s lifetime devoted to work has become remarkably small. Based on these trends Fogel estimates that four decades from now less than one-fourth of our discretionary time (time not needed for sleep, meals, and hygiene) will be devoted to paid work — over three-fourths will be available for doing what we wish.

Table 5
Division of the Day for the Average Male Household Head over the Course of a Year, 1880 and 1995

Activity 1880 1995
Sleep 8 8
Meals and hygiene 2 2
Chores 2 2
Travel to and from work 1 1
Work 8.5 4.7
Illness .7 .5
Left over for leisure activities 1.8 5.8

Source: Fogel (2000)

Table 6
Estimated Trend in the Lifetime Distribution of Discretionary Time, 1880-2040

Activity 1880 1995 2040
Lifetime Discretionary Hours 225,900 298,500 321,900
Lifetime Work Hours 182,100 122,400 75,900
Lifetime Leisure Hours 43,800 176,100 246,000

Source: Fogel (2000)
Notes: Discretionary hours exclude hours used for sleep, meals and hygiene. Work hours include paid work, travel to and from work, and household chores.
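
The figures quoted in the paragraph above Table 5 follow directly from Tables 5 and 6; the short sketch below is only a check of that arithmetic, using the numbers reported in those tables:

```python
# Figures from Fogel (2000), as reported in Tables 5 and 6 above.
work_1880, work_1995 = 8.5, 4.7          # daily hours of work (Table 5)
leisure_1880, leisure_1995 = 1.8, 5.8    # daily leisure hours (Table 5)
disc_2040, work_2040 = 321_900, 75_900   # lifetime discretionary and work hours, 2040 (Table 6)

print(f"Daily work fell by {1 - work_1995 / work_1880:.0%}")                   # ~45%, i.e., nearly half
print(f"Daily leisure rose by a factor of {leisure_1995 / leisure_1880:.1f}")  # ~3.2, more than tripled
print(f"Projected 2040 work share of discretionary time: {work_2040 / disc_2040:.1%}")  # ~23.6%, under one-fourth
```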

Postwar International Comparisons

While hours of work have decreased slowly in the U.S. since the end of World War II, they have decreased more rapidly in Western Europe. Greis (1984) calculates that annual hours worked per employee in the U.S. fell from 1,908 in 1950 to 1,704 in 1979, a 10.7 percent decrease. This compares to a 21.8 percent decrease across a group of twelve Western European countries, where the average fell from 2,170 hours to 1,698 hours over the same period. Perhaps the most precise way of measuring work hours is to have individuals fill out diaries on their day-to-day and hour-to-hour time use. Table 7 presents an international comparison of average work hours both inside and outside of the workplace, by adult men and women — averaging those who are employed with those who are not. (Juster and Stafford (1991) caution, however, that making these comparisons requires a good deal of guesswork.) These numbers show a significant drop in total work per week in the U.S. between 1965 and 1981. They also show that total work by men and women is very similar, although it is divided differently. Total work hours in the U.S. were fairly similar to those in Japan, but greater than in Denmark, while less than in the USSR.
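
The percentage declines cited from Greis follow directly from the reported annual-hours levels; a brief check:

```python
# Annual hours worked per employee in 1950 and 1979, as cited from Greis (1984).
series = {
    "United States": (1908, 1704),
    "Twelve Western European countries (average)": (2170, 1698),
}

for label, (hours_1950, hours_1979) in series.items():
    decline = (hours_1950 - hours_1979) / hours_1950
    print(f"{label}: {decline:.1%} decrease")
# United States: 10.7% decrease
# Twelve Western European countries (average): 21.8% decrease
```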

Table 7
Weekly Work Time in Four Countries, Based on Time Diaries, 1960s-1980s

Activity US USSR (Pskov)
Men Women Men Women
1965 1981 1965 1981 1965 1981 1965 1981
Total Work 63.1 57.8 60.9 54.4 64.4 65.7 75.3 66.3
Market Work 51.6 44.0 18.9 23.9 54.6 53.8 43.8 39.3
Commuting 4.8 3.5 1.6 2.0 4.9 5.2 3.7 3.4
Housework 11.5 13.8 41.8 30.5 9.8 11.9 31.5 27.0
Activity Japan Denmark
Men Women Men Women
1965 1985 1965 1985 1964 1987 1964 1987
Total Work 60.5 55.5 64.7 55.6 45.4 46.2 43.4 43.9
Market Work 57.7 52.0 33.2 24.6 41.7 33.4 13.3 20.8
Commuting 3.6 4.5 1.0 1.2 n.a n.a n.a n.a
Housework 2.8 3.5 31.5 31.0 3.7 12.8 30.1 23.1

Source: Juster and Stafford (1991)

The Shorter Hours “Movement” in the U.S.

The Colonial Period

Captain John Smith, after mapping New England’s coast, came away convinced that three days’ work per week would satisfy any settler. Far from becoming a land of leisure, however, the abundant resources of British America and the ideology of its settlers brought forth high levels of work. Many colonial Americans held the opinion that prosperity could be taken as a sign of God’s pleasure with the individual, viewed work as inherently good and saw idleness as the devil’s workshop. Rodgers (1978) argues that this work ethic spread and eventually reigned supreme in colonial America. The ethic was consistent with the American experience, since high returns to effort meant that hard work often yielded significant increases in wealth. In Virginia, authorities also transplanted the Statute of Artificers, which obliged all Englishmen (except the gentry) to engage in productive activity from sunrise to sunset. Likewise, a 1670 Massachusetts law demanded a minimum ten-hour workday, but it is unlikely that these laws had any impact on the behavior of most free workers.

The Revolutionary War Period

Roediger and Foner (1989) contend that the Revolutionary War era brought a series of changes that undermined support for sun-to-sun work. The era’s republican ideology emphasized that workers needed free time, away from work, to participate in democracy. Simultaneously, the development of merchant capitalism meant that there were, for the first time, a significant number of wageworkers. Roediger and Foner argue that reducing labor costs was crucial to the profitability of these workers’ employers, who reduced costs by squeezing more work from their employees — reducing time for meals, drink and rest and sometimes even rigging the workplace’s official clock. Incensed by their employers’ practice of paying a flat daily wage during the long summer shift and resorting to piece rates during short winter days, Philadelphia’s carpenters mounted America’s first ten-hour-day strike in May 1791. (The strike was unsuccessful.)

1820s: The Shorter Hours Movement Begins

By about 1825, changes in the organization of work had become widespread: the continued rise of merchant capitalists, the transition from the artisanal shop to the early factory, and an intensified work pace. These changes produced the first extensive, aggressive movement among workers for shorter hours, as the ten-hour movement blossomed in New York City, Philadelphia and Boston. Rallying around the ten-hour banner, workers formed the first city-central labor union in the U.S., the first labor newspaper, and the first workingmen’s political party — all in Philadelphia — in the late 1820s.

Early Debates over Shorter Hours

Although the length of the workday is largely an economic decision arrived at by the interaction of the supply and demand for labor, advocates of shorter hours and foes of shorter hours have often argued the issue on moral grounds. In the early 1800s, advocates argued that shorter work hours improved workers’ health, allowed them time for self-improvement and relieved unemployment. Detractors countered that workers would abuse leisure time (especially in saloons) and that long, dedicated hours of work were the path to success, which should not be blocked for the great number of ambitious workers.

1840s: Early Agitation for Government Intervention

When Samuel Slater built the first textile mills in the U.S., “workers labored from sun up to sun down in summer and during the darkness of both morning and evening in the winter. These hours … only attracted attention when they exceeded the common working day of twelve hours,” according to Ware (1931). During the 1830s, an increased work pace, tighter supervision, and the addition of about fifteen minutes to the work day (partly due to the introduction of artificial lighting during winter months), plus the growth of a core of more permanent industrial workers, fueled a campaign for a shorter workweek among mill workers in Lowell, Massachusetts, whose workweek averaged about 74 hours. This agitation was led by Sarah Bagley and the New England Female Labor Reform Association, which, beginning in 1845, petitioned the state legislature to intervene in the determination of hours. The petitions were followed by America’s first-ever examination of labor conditions by a governmental investigating committee. The Massachusetts legislature proved to be very unsympathetic to the workers’ demands, but similar complaints led to the passage of laws in New Hampshire (1847) and Pennsylvania (1848), declaring ten hours to be the legal length of the working day. However, these laws also specified that a contract freely entered into by employee and employer could set any length for the workweek. Hence, these laws had little impact. Legislation passed by the federal government had a more direct, though limited, effect. On March 31, 1840, President Martin Van Buren issued an executive order mandating a ten-hour day for all federal employees engaged in manual work.

1860s: Grand Eight Hours Leagues

As the length of the workweek gradually declined, political agitation for shorter hours seems to have waned for the next two decades. However, immediately after the Civil War reductions in the length of the workweek reemerged as an important issue for organized labor. The new goal was an eight-hour day. Roediger (1986) argues that many of the new ideas about shorter hours grew out of the abolitionists’ critique of slavery — that long hours, like slavery, stunted aggregate demand in the economy. The leading proponent of this idea, Ira Steward, argued that decreasing the length of the workweek would raise the standard of living of workers by raising their desired consumption levels as their leisure expanded, and by ending unemployment. The hub of the newly launched movement was Boston, and Grand Eight Hours Leagues sprang up around the country in 1865 and 1866. The leaders of the movement called the meeting of the first national organization to unite workers of different trades, the National Labor Union, which met in Baltimore in 1867. In response to this movement, eight states adopted general eight-hour laws, but again the laws allowed employer and employee to mutually consent to workdays longer than the “legal day.” Many critics saw these laws and this agitation as a hoax, because few workers actually desired to work only eight hours per day at their original hourly pay rate. The passage of the state laws did foment action by workers — especially in Chicago where parades, a general strike, rioting and martial law ensued. In only a few places did work hours fall after the passage of these laws. Many became disillusioned with the idea of using the government to promote shorter hours and by the late 1860s, efforts to push for a universal eight-hour day had been put on the back burner.

The First Enforceable Hours Laws

Despite this lull in shorter-hours agitation, in 1874, Massachusetts passed the nation’s first enforceable ten-hour law. It covered only female workers and became fully effective by 1879. This legislation was fairly late by European standards. Britain had passed its first effective Factory Act, setting maximum hours for almost half of its very young textile workers, in 1833.

1886: Year of Dashed Hopes

In the early 1880s organized labor in the U.S. was fairly weak. In 1884, the short-lived Federation of Organized Trades and Labor Unions (FOTLU) fired a “shot in the dark.” During its final meeting, before dissolving, the Federation “ordained” May 1, 1886 as the date on which workers would cease working beyond eight hours per day. Meanwhile, the Knights of Labor, which had begun as a secret fraternal society and evolved into a labor union, began to gain strength. It appears that many nonunionized workers, especially the unskilled, came to see in the Knights a chance to obtain a better deal from their employers, perhaps even to obtain the eight-hour day. FOTLU’s call for workers to simply walk off the job after eight hours beginning on May 1, plus the activities of socialist and anarchist labor organizers and politicians, and the apparent strength of the Knights combined to attract members in record numbers. The Knights mushroomed and its new membership demanded that their local leaders support them in attaining the eight-hour day. Many smelled victory in the air — the movement to win the eight-hour day became frenzied and the goal became “almost a religious crusade” (Grob, 1961).

The Knights’ leader, Terence Powderly, thought that the push for a May 1 general strike for eight hours was “rash, short-sighted and lacking in system” and “must prove abortive” (Powderly, 1890). He offered no effective alternative plan but instead tried to block the mass action, issuing a “secret circular” condemning the use of strikes. Powderly reasoned that low incomes forced workmen to accept long hours. Workers didn’t want shorter hours unless their daily pay was maintained, but employers were unwilling and/or unable to offer this. Powderly’s rival, labor leader Samuel Gompers, agreed that “the movement of ’86 did not have the advantage of favorable conditions” (Gompers, 1925). Nelson (1986) points to divisions among workers, which probably had much to do with the failure in 1886 of the drive for the eight-hour day. Some insisted on eight hours with ten hours’ pay, but others were willing to accept eight hours with eight hours’ pay.

Haymarket Square Bombing

The eight-hour push of 1886 was, in Norman Ware’s words, “a flop” (Ware, 1929). Lack of will and organization among workers was undoubtedly important, but its collapse was aided by violence that marred strikes and political rallies in Chicago and Milwaukee. The 1886 drive for eight hours literally blew up in organized labor’s face. At Haymarket Square in Chicago, a bomb thrown during an eight-hour rally and the gunfire that followed left seven policemen dead, and in Milwaukee’s Bay View suburb nine strikers were killed as police tried to disperse roving pickets. The public backlash and fear of revolution damned the eight-hour organizers along with the radicals and dampened the drive toward eight hours — although it is estimated that the strikes of May 1886 shortened the workweek for about 200,000 industrial workers, especially in New York City and Cincinnati.

The AFL’s Strategy

After the demise of the Knights of Labor, the American Federation of Labor (AFL) became the strongest labor union in the U.S. It held shorter hours as a high priority. The inside cover of its Proceedings carried two slogans in large type: “Eight hours for work, eight hours for rest, eight hours for what we will” and “Whether you work by the piece or work by the day, decreasing the hours increases the pay.” (The latter slogan was coined by Ira Steward’s wife, Mary.) In the aftermath of 1886, the American Federation of Labor adopted a new strategy of selecting each year one industry in which it would attempt to win the eight-hour day, after laying solid plans, organizing, and building up a strike fund war chest by taxing nonstriking unions. The United Brotherhood of Carpenters and Joiners was selected first and May 1, 1890 was set as a day of national strikes. It is estimated that nearly 100,000 workers gained the eight-hour day as a result of these strikes in 1890. However, other unions turned down the opportunity to follow the carpenters’ example and the tactic was abandoned. Instead, the length of the workweek continued to erode during this period, sometimes as the result of a successful local strike, more often as the result of broader economic forces.

The Spread of Hours Legislation

Massachusetts’ first hours law in 1874 set sixty hours per week as the legal maximum for women; in 1892 this was cut to 58, in 1908 to 56, and in 1911 to 54. By 1900, 26 percent of states had maximum hours laws covering women, children and, in some cases, adult men (generally only those in hazardous industries). The percentage of states with maximum hours laws climbed to 58 percent in 1910, 76 percent in 1920, and 84 percent in 1930. Steinberg (1982) calculates that the percentage of employees covered climbed from 4 percent nationally in 1900, to 7 percent in 1910, and 12 percent in 1920 and 1930. In addition, these laws became more restrictive, with the average legal standard falling from a maximum of 59.3 hours per week in 1900 to 56.7 in 1920. According to her calculations, in 1900 about 16 percent of the workers covered by these laws were adult men, 49 percent were adult women and the rest were minors.

Court Rulings

The banner years for maximum hours legislation were right around 1910. This may have been partly a reaction to the Supreme Court’s ruling upholding female-hours legislation in the Muller vs. Oregon case (1908). The Court’s rulings were not always completely consistent during this period, however. In 1898 the Court upheld a maximum eight-hour day for workmen in the hazardous industries of mining and smelting in Utah in Holden vs. Hardy. In Lochner vs. New York (1905), it rejected as unconstitutional New York’s ten-hour day for bakers, which was also adopted (at least nominally) out of concerns for safety. The defendant showed that mortality rates in baking were only slightly above average, and lower than those for many unregulated occupations, arguing that this was special interest legislation, designed to favor unionized bakers. Several state courts, on the other hand, supported laws regulating the hours of men in only marginally hazardous work. By 1917, in Bunting vs. Oregon, the Supreme Court seemingly overturned the logic of the Lochner decision, supporting a state law that required overtime payment for all men working long hours. The general presumption during this period was that the courts would allow regulation of labor concerning women and children, who were thought to be incapable of bargaining on an equal footing with employers and in special need of protection. Men were allowed freedom of contract unless it could be proven that regulating their hours served a higher good for the population at large.

New Arguments about Shorter Hours

During the first decades of the twentieth century, arguments favoring shorter hours moved away from Steward’s line that shorter hours increased pay and reduced unemployment to arguments that shorter hours were good for employers because they made workers more productive. A new cadre of social scientists began to offer evidence that long hours produced health-threatening, productivity-reducing fatigue. This line of reasoning, advanced in the court brief of Louis Brandeis and Josephine Goldmark, was crucial in the Supreme Court’s decision to support state regulation of women’s hours in Muller vs. Oregon. Goldmark’s book, Fatigue and Efficiency (1912), was a landmark. In addition, data relating to hours and output among British and American war workers during World War I helped convince some that long hours could be counterproductive. Businessmen, however, frequently attacked the shorter hours movement as merely a ploy to raise wages, since workers were generally willing to work overtime at higher wage rates.

Federal Legislation in the 1910s

In 1912 the Federal Public Works Act was passed, which provided that every contract to which the U.S. government was a party must contain an eight-hour day clause. Three years later LaFollette’s Bill established maximum hours for maritime workers. These were preludes to the most important shorter-hours law enacted by Congress during this period: the Adamson Act of 1916, passed to counter a threatened nationwide strike, which granted rail workers the basic eight-hour day. (The law set eight hours as the basic workday and required higher overtime pay for longer hours.)

World War I and Its Aftermath

Labor markets became very tight during World War I as the demand for workers soared and the unemployment rate plunged. These forces put workers in a strong bargaining position, which they used to obtain shorter work schedules. The move to shorter hours was also pushed by the federal government, which gave unprecedented support to unionization. The federal government began to intervene in labor disputes for the first time, and the National War Labor Board “almost invariably awarded the basic eight-hour day when the question of hours was at issue” in labor disputes (Cahill, 1932). At the end of the war everyone wondered whether organized labor would maintain its newfound power, and the crucial test case was the steel industry. Blast furnace workers generally put in 84-hour workweeks. These abnormally long hours were the subject of much denunciation and a major issue in a strike that began in September 1919. The strike failed (and organized labor’s power receded during the 1920s), but four years later US Steel reduced its workday from twelve to eight hours. The move came after much arm-twisting by President Harding, but its timing may be explained by immigration restrictions and the loss of immigrant workers who were willing to accept such long hours (Shiells, 1990).

The Move to a Five-day Workweek

During the 1920s agitation for shorter workdays largely disappeared, now that the workweek had fallen to about 50 hours. However, pressure arose to grant half-holidays on Saturday or Saturday off — especially in industries whose workers were predominantly Jewish. By 1927 at least 262 large establishments had adopted the five-day week, compared with only 32 in 1920. The most notable action was Henry Ford’s decision to adopt the five-day week in 1926. Ford employed more than half of the nation’s approximately 400,000 workers with five-day weeks. However, Ford’s motives were questioned by many employers, who argued that reducing hours below about forty-eight per week yielded no further productivity gains. Even the reformist American Labor Legislation Review greeted the call for a five-day workweek with lukewarm interest.

Changing Attitudes in the 1920s

Hunnicutt (1988) argues that during the 1920s businessmen and economists began to see shorter hours as a threat to future economic growth. With the development of advertising — the “gospel of consumption” — a new vision of progress was proposed to American workers. It replaced the goal of leisure time with a list of things to buy and business began to persuade workers that more work brought more tangible rewards. Many workers began to oppose further decreases in the length of the workweek. Hunnicutt concludes that a new work ethic arose as Americans threw off the psychology of scarcity for one of abundance.

Hours’ Reduction during the Great Depression

Then the Great Depression hit the American economy. By 1932 about half of American employers had shortened hours. Rather than slash workers’ real wages, employers opted to lay off many workers (the unemployment rate hit 25 percent) and tried to protect the ones they kept on by sharing the available work among them. President Hoover’s Commission for Work Sharing pushed voluntary hours reductions and estimated that they had saved three to five million jobs. Major employers like Sears, GM, and Standard Oil scaled down their workweeks, and Kellogg’s and the Akron tire industry pioneered the six-hour day. Amid these developments, the AFL called for a federally-mandated thirty-hour workweek.

The Black-Connery 30-Hours Bill and the NIRA

The movement for shorter hours as a depression-fighting work-sharing measure built such a seemingly irresistible momentum that by 1933 observers were predicting that the “30-hour week was within a month of becoming federal law” (Hunnicutt, 1988). During the period after the 1932 election but before Franklin Roosevelt’s inauguration, Congressional hearings on thirty hours began, and less than one month into FDR’s first term, the Senate passed, 53 to 30, a thirty-hour bill authored by Hugo Black. The bill was sponsored in the House by William Connery. Roosevelt originally supported the Black-Connery proposals, but soon backed off, uneasy with a provision forbidding importation of goods produced by workers whose weeks were longer than thirty hours, and convinced by arguments of business that trying to legislate fewer hours might have disastrous results. Instead, FDR backed the National Industrial Recovery Act (NIRA). Hunnicutt argues that an implicit deal was struck in the NIRA. Labor leaders were persuaded by NIRA Section 7a’s provisions — which guaranteed union organization and collective bargaining — to support the NIRA rather than the Black-Connery Thirty-Hour Bill. Business, with the threat of thirty hours hanging over its head, fell raggedly into line. (Most historians cite other factors as the key to the NIRA’s passage. See Barbara Alexander’s article on the NIRA in this encyclopedia.) When specific industry codes were drawn up by the NIRA-created National Recovery Administration (NRA), shorter hours were deemphasized. Despite a plan by NRA Administrator Hugh Johnson to make blanket provisions for a thirty-five-hour workweek in all industry codes, by late August 1933, the momentum toward the thirty-hour week had dissipated. About half of employees covered by NRA codes had their hours set at forty per week and nearly 40 percent had workweeks longer than forty hours.

The FLSA: Federal Overtime Law

Hunnicutt argues that the entire New Deal can be seen as an attempt to keep shorter-hours advocates at bay. After the Supreme Court struck down the NRA, Roosevelt responded to continued demands for thirty hours with the Works Progress Administration, the Wagner Act, Social Security, and, finally, the Fair Labor Standards Act, which set a federal minimum wage and decreed that overtime beyond forty hours per week would be paid at one-and-a-half times the base rate in covered industries.

The Demise of the Shorter Hours’ Movement

As the Great Depression ended, average weekly work hours slowly climbed from their low reached in 1934. During World War II hours reached a level almost as high as at the end of World War I. With the postwar return of weekly work hours to the forty-hour level, the shorter hours movement effectively ended. Occasionally organized labor’s leaders announced that they would renew the push for shorter hours, but they found that most workers didn’t desire a shorter workweek.

The Case of Kellogg’s

Offsetting isolated examples of hours reductions after World War II, there were noteworthy cases of backsliding. Hunnicutt (1996) has studied the case of Kellogg’s in great detail. In 1946, with the end of the war, 87% of women and 71% of men working at Kellogg’s voted to return to the six-hour day. Over the course of the next decade, however, the tide turned. By 1957 most departments had opted to switch to 8-hour shifts, so that only about one-quarter of the work force, mostly women, retained a six-hour shift. Finally, in 1985, the last department voted to adopt an 8-hour workday. Workers, especially male workers, began to favor additional money more than the extra two hours per day of free time. In interviews they explained that they needed the extra money to buy a wide range of consumer items and to keep up with the neighbors. Several men told about the friction that resulted when men spent too much time around the house: “The wives didn’t like the men underfoot all day.” “The wife always found something for me to do if I hung around.” “We got into a lot of fights.” During the 1950s, the threat of unemployment evaporated and the moral condemnation of being a “work hog” no longer made sense. In addition, the rise of quasi-fixed employment costs (such as health insurance) induced management to push workers toward a longer workday.

The Current Situation

As the twentieth century ended there was nothing resembling a shorter hours “movement.” The length of the workweek continues to fall for most groups — but at a glacial pace. Some Americans complain about a lack of free time but the vast majority seem content with an average workweek of roughly forty hours — channeling almost all of their growing wages into higher incomes rather than increased leisure time.

Causes of the Decline in the Length of the Workweek

Supply, Demand and Hours of Work

The length of the workweek, like other labor market outcomes, is determined by the interaction of the supply and demand for labor. Employers are torn by conflicting pressures. Holding everything else constant, they would like employees to work long hours because this means that they can utilize their equipment more fully and offset any fixed costs from hiring each worker (such as the cost of health insurance — common today, but not a consideration a century ago). On the other hand, longer hours can bring reduced productivity due to worker fatigue and can bring worker demands for higher hourly wages to compensate for putting in long hours. If employers set the workweek too long, workers may quit and few replacements will be willing to work for them at a competitive wage rate. Thus, workers implicitly choose among a variety of jobs — some offering shorter hours and lower earnings, others offering longer hours and higher earnings.

Economic Growth and the Long-Term Reduction of Work Hours

Historically employers and employees often agreed on very long workweeks because the economy was not very productive (by today’s standards) and people had to work long hours to earn enough money to feed, clothe and house their families. The long-term decline in the length of the workweek, in this view, has primarily been due to increased economic productivity, which has yielded higher wages for workers. Workers responded to this rise in potential income by “buying” more leisure time, as well as by buying more goods and services. In a recent survey, a sizeable majority of economic historians agreed with this view. Over eighty percent accepted the proposition that “the reduction in the length of the workweek in American manufacturing before the Great Depression was primarily due to economic growth and the increased wages it brought” (Whaples, 1995). Other broad forces probably played only a secondary role. For example, roughly two-thirds of economic historians surveyed rejected the proposition that the efforts of labor unions were the primary cause of the drop in work hours before the Great Depression.

Winning the Eight-Hour Day in the Era of World War I

The swift reduction of the workweek in the period around World War I has been extensively analyzed by Whaples (1990b). His findings support the consensus that economic growth was the key to reduced work hours. Whaples links factors such as wages, labor legislation, union power, ethnicity, city size, leisure opportunities, age structure, wealth and homeownership, health, education, alternative employment opportunities, industrial concentration, seasonality of employment, and technological considerations to changes in the average workweek in 274 cities and 118 industries. He finds that the rapid economic expansion of the World War I period, which pushed up real wages by more than 18 percent between 1914 and 1919, explains about half of the drop in the length of the workweek. The reduction of immigration during the war was important, as it deprived employers of a group of workers who were willing to put in long hours, explaining about one-fifth of the hours decline. The rapid electrification of manufacturing seems also to have played an important role in reducing the workweek. Increased unionization explains about one-seventh of the reduction, and federal and state legislation and policies that mandated reduced workweeks also had a noticeable role.

Cross-sectional Patterns from 1919

In 1919 the average workweek varied tremendously, emphasizing the point that not all workers desired the same workweek. The workweek exceeded 69 hours in the iron blast furnace, cottonseed oil, and sugar beet industries, but fell below 45 hours in industries such as hats and caps, fur goods, and women’s clothing. Cities’ averages also differed dramatically. In a few Midwestern steel mill towns average workweeks exceeded 60 hours. In a wide range of low-wage Southern cities they reached the high 50s, but in high-wage Western ports, like Seattle, the workweek fell below 45 hours.

Whaples (1990a) finds that among the most important city-level determinants of the workweek during this period were the availability of a pool of agricultural workers, the capital-labor ratio, horsepower per worker, and the amount of employment in large establishments. Hours rose as each of these increased. Eastern European immigrants worked significantly longer than others, as did people in industries whose output varied considerably from season to season. High unionization and strike levels reduced hours to a small degree. The average female employee worked about six and a half fewer hours per week in 1919 than did the average male employee. In city-level comparisons, state maximum hours laws appear to have had little effect on average work hours, once the influences of other factors have been taken into account. One possibility is that these laws were passed only after economic forces lowered the length of the workweek. Overall, in cities where wages were one percent higher, hours were about 0.05 to 0.13 percent lower. Again, this suggests that during the era of declining hours, workers were willing to use higher wages to “buy” shorter hours.
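
To give a sense of the magnitude of this wage-hours relationship, the minimal sketch below converts the estimated elasticities into weekly hours. The elasticity range is taken from the text; the 55-hour baseline workweek is an illustrative assumption (roughly the level prevailing in many cities in 1919), not a figure from Whaples.

    # Hypothetical illustration of the wage elasticity of hours reported above.
    # Elasticity range (-0.13 to -0.05) is from the text; the 55-hour baseline is assumed.
    def implied_hours_change(elasticity, wage_diff_pct, baseline_hours):
        """Approximate change in weekly hours for a given percentage wage difference."""
        return baseline_hours * elasticity * (wage_diff_pct / 100.0)

    for elasticity in (-0.05, -0.13):
        change = implied_hours_change(elasticity, 10, 55)
        print(f"elasticity {elasticity}: about {abs(change):.1f} fewer hours per week")
    # A city with wages 10 percent higher would be expected to have roughly
    # 0.3 to 0.7 fewer weekly hours, other factors held constant.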

Annotated Bibliography

Perhaps the most comprehensive survey of the shorter hours movement in the U.S. is David Roediger and Philip Foner’s Our Own Time: A History of American Labor and the Working Day (1989). It contends that “the length of the working day has been the central issue for the American labor movement during its most vigorous periods of activity, uniting workers along lines of craft, gender, and ethnicity.” Critics argue that its central premise is flawed because workers have often been divided about the optimal length of the workweek. It explains the point of view of organized labor and recounts numerous historically important events and arguments, but does not attempt to examine in detail the broader economic forces that determined the length of the workweek. An earlier useful comprehensive work is Marion Cahill’s Shorter Hours: A Study of the Movement since the Civil War (1932).

Benjamin Hunnicutt’s Work Without End: Abandoning Shorter Hours for the Right to Work (1988) focuses on the period from 1920 to 1940 and traces the political, intellectual, and social “dialogues” that changed the American concept of progress from dreams of more leisure to an “obsession” with the importance of work and wage-earning. This work’s detailed analysis and insights are valuable, but it draws many of its inferences from what intellectuals said about shorter hours, rather than spending time on the actual decision makers — workers and employers. Hunnicutt’s Kellogg’s Six-Hour Day (1996) is important because it does exactly this — interviewing employees and examining the motives and decisions of a prominent employer. Unfortunately, it shows that one must carefully interpret what workers say on the subject, as they are prone to reinterpret their own pasts so that their choices can be more readily rationalized. (See EH.NET’s review: http://eh.net/book_reviews/kelloggs-six-hour-day/.)

Economists have given surprisingly little attention to the determinants of the workweek. The most comprehensive treatment is Robert Whaples’ “The Shortening of the American Work Week” (1990), which surveys estimates of the length of the workweek, the shorter hours movement, and economic theories about the length of the workweek. Its core is an extensive statistical examination of the determinants of the workweek in the period around World War I.

References

Atack, Jeremy and Fred Bateman. “How Long Was the Workday in 1880?” Journal of Economic History 52, no. 1 (1992): 129-160.

Cahill, Marion Cotter. Shorter Hours: A Study of the Movement since the Civil War. New York: Columbia University Press, 1932.

Carr, Lois Green. “Emigration and the Standard of Living: The Seventeenth Century Chesapeake.” Journal of Economic History 52, no. 2 (1992): 271-291.

Coleman, Mary T. and John Pencavel. “Changes in Work Hours of Male Employees, 1940-1988.” Industrial and Labor Relations Review 46, no. 2 (1993a): 262-283.

Coleman, Mary T. and John Pencavel. “Trends in Market Work Behavior of Women since 1940.” Industrial and Labor Relations Review 46, no. 4 (1993b): 653-676.

Douglas, Paul. Real Wages in the United States, 1890-1926. Boston: Houghton, 1930.

Fogel, Robert. The Fourth Great Awakening and the Future of Egalitarianism. Chicago: University of Chicago Press, 2000.

Fogel, Robert and Stanley Engerman. Time on the Cross: The Economics of American Negro Slavery. Boston: Little, Brown, 1974.

Gallman, Robert. “The Agricultural Sector and the Pace of Economic Growth: U.S. Experience in the Nineteenth Century.” In Essays in Nineteenth-Century Economic History: The Old Northwest, edited by David Klingaman and Richard Vedder. Athens, OH: Ohio University Press, 1975.

Goldmark, Josephine. Fatigue and Efficiency. New York: Charities Publication Committee, 1912.

Gompers, Samuel. Seventy Years of Life and Labor: An Autobiography. New York: Dutton, 1925.

Greis, Theresa Diss. The Decline of Annual Hours Worked in the United States, since 1947. Manpower and Human Resources Studies, no. 10, Wharton School, University of Pennsylvania, 1984.

Grob, Gerald. Workers and Utopia: A Study of Ideological Conflict in the American Labor Movement, 1865-1900. Evanston: Northwestern University Press, 1961.

Hunnicutt, Benjamin Kline. Work Without End: Abandoning Shorter Hours for the Right to Work. Philadelphia: Temple University Press, 1988.

Hunnicutt, Benjamin Kline. Kellogg’s Six-Hour Day. Philadelphia: Temple University Press, 1996.

Jones, Ethel. “New Estimates of Hours of Work per Week and Hourly Earnings, 1900-1957.” Review of Economics and Statistics 45, no. 4 (1963): 374-385.

Juster, F. Thomas and Frank P. Stafford. “The Allocation of Time: Empirical Findings, Behavioral Models, and Problems of Measurement.” Journal of Economic Literature 29, no. 2 (1991): 471-522.

Licht, Walter. Working for the Railroad: The Organization of Work in the Nineteenth Century. Princeton: Princeton University Press, 1983.

Margo, Robert. “The Labor Force in the Nineteenth Century.” In The Cambridge Economic History of the United States, Volume II, The Long Nineteenth Century, edited by Stanley Engerman and Robert Gallman, 207-243. New York: Cambridge University Press, 2000.

Nelson, Bruce. “‘We Can’t Get Them to Do Aggressive Work’: Chicago’s Anarchists and the Eight-Hour Movement.” International Labor and Working Class History 29 (1986).

Ng, Kenneth and Nancy Virts. “The Value of Freedom.” Journal of Economic History 49, no. 4 (1989): 958-965.

Owen, John. “Workweeks and Leisure: An Analysis of Trends, 1948-1975.” Monthly Labor Review 99 (1976).

Owen, John. “Work-time Reduction in the United States and Western Europe.” Monthly Labor Review 111 (1988).

Powderly, Terence. Thirty Years of Labor, 1859-1889. Columbus: Excelsior, 1890.

Ransom, Roger and Richard Sutch. One Kind of Freedom: The Economic Consequences of Emancipation. New York: Cambridge University Press, 1977.

Rodgers, Daniel. The Work Ethic in Industrial America, 1850-1920. Chicago: University of Chicago Press, 1978.

Roediger, David. “Ira Steward and the Antislavery Origins of American Eight-Hour Theory.” Labor History 27 (1986).

Roediger, David and Philip Foner. Our Own Time: A History of American Labor and the Working Day. New York: Verso, 1989.

Schor, Juliet B. The Overworked American: The Unexpected Decline in Leisure. New York: Basic Books, 1992.

Shiells, Martha Ellen. “Collective Choice of Working Conditions: Hours in British and U.S. Iron and Steel, 1890-1923.” Journal of Economic History 50, no. 2 (1990): 379-392.

Steinberg, Ronnie. Wages and Hours: Labor and Reform in Twentieth-Century America. New Brunswick, NJ: Rutgers University Press, 1982.

United States, Department of Interior, Census Office. Report on the Statistics of Wages in Manufacturing Industries, by Joseph Weeks, 1880 Census, Vol. 20. Washington: GPO, 1883.

United States Senate. Senate Report 1394, Fifty-Second Congress, Second Session. “Wholesale Prices, Wages, and Transportation.” Washington: GPO, 1893.

Ware, Caroline. The Early New England Cotton Manufacture: A Study of Industrial Beginnings. Boston: Houghton-Mifflin, 1931.

Ware, Norman. The Labor Movement in the United States, 1860-1895. New York: Appleton, 1929.

Weiss, Thomas and Lee Craig. “Agricultural Productivity Growth during the Decade of the Civil War.” Journal of Economic History 53, no. 3 (1993): 527-548.

Whaples, Robert. “The Shortening of the American Work Week: An Economic and Historical Analysis of Its Context, Causes, and Consequences.” Ph.D. dissertation, University of Pennsylvania, 1990a.

Whaples, Robert. “Winning the Eight-Hour Day, 1909-1919.” Journal of Economic History 50, no. 2 (1990b): 393-406.

Whaples, Robert. “Where Is There Consensus Among American Economic Historians? The Results of a Survey on Forty Propositions.” Journal of Economic History 55, no. 1 (1995): 139-154.

Citation: Whaples, Robert. “Hours of Work in U.S. History”. EH.Net Encyclopedia, edited by Robert Whaples. August 14, 2001. URL http://eh.net/encyclopedia/hours-of-work-in-u-s-history/

Economic History of Hawai’i

Sumner La Croix, University of Hawai’i and East-West Center

The Hawaiian Islands are a chain of 132 islands, shoals, and reefs extending over 1,523 miles in the Northeast Pacific Ocean. Eight islands — Hawai’i, Maui, O’ahu, Kaua’i, Moloka’i, Lana’i, Ni’ihau, and Kaho’olawe — possess 99 percent of the land area (6,435 square miles) and are noted for their volcanic landforms, unique flora and fauna, and diverse climates.

From Polynesian Settlement to Western Contact

The Islands were uninhabited until sometime around 400 AD when Polynesian voyagers sailing double-hulled canoes arrived from the Marquesas Islands (Kirch, 1985, p. 68). Since the settlers had no written language and virtually no contact with the Western world until 1778, our knowledge of Hawai’i’s pre-history comes primarily from archaeological investigations and oral legends. A relatively egalitarian society and subsistence economy were coupled with high population growth rates until about 1100 when continued population growth led to a major expansion of the areas of settlement and cultivation. Perhaps under pressures of increasing resource scarcity, a new, more hierarchical social structure emerged, characterized by chiefs (ali’i) and subservient commoners (maka’ainana). In the two centuries prior to Western contact, there is considerable evidence that ruling chiefs (ali’i nui) competed to extend their lands by conquest and that this led to cycles of expansion and retrenchment.

Captain James Cook’s ships reached Hawai’i in 1778, thereby ending a long period of isolation for the Islands. Captain James King observed in 1779 that Hawaiians were generally “above the middle size” of Europeans, a rough indicator that Hawaiians generally had a diet superior to that of eighteenth-century Europeans. At contact, Hawaiian social and political institutions were similar to those found in other Polynesian societies. Hawaiians were sharply divided into three main social classes: ali’i (chiefs), maka’ainana (commoners), and kahuna (priests). Oral legends tell us that the Islands were usually divided into six to eight small kingdoms consisting of an island or part of an island, each governed by an ali’i nui (ruling chief). The ali’i nui had extensive rights to all lands and material goods and the ability to confiscate or redistribute material wealth at any time. Redistribution usually occurred only when a new ruling chief took office or when lands were conquered or lost. The ali’i nui gave temporary land grants to ali’i who, in turn, gave temporary land grants to konohiki (managers), who then “contracted” with maka’ainana, the great majority of the populace, to work the lands.

Hawaiian society and its economy had their roots in extended families (‘ohana) working cooperatively on an ahupua’a, a land unit running from the mountains to the sea. Numerous tropical root, tuber, and tree crops were cultivated. Taro, a wetland crop, was cultivated primarily in windward areas, while sweet potatoes and yams, both dryland crops, were cultivated in drier leeward areas. The maka’ainana apparently lived well above subsistence levels, with extensive time available for cultural activities, sports, and games. There were unquestionably periods of hardship, but these times tended to be associated with drought or other causes of poor harvest.

Unification of Hawai’i and Population Decline

The long-prevailing political equilibrium began to disintegrate shortly after the introduction of guns and the spread of new diseases to the Islands. In 1784, the most powerful ali’i nui, Kamehameha, began a war of conquest, and with his superior use of modern weapons and western advisors, he subdued all other chiefdoms, with the exception of Kaua’i, by 1795. Each chief in his ruling coalition received the right to administer large areas of land, consisting of smaller strips on various islands. Sumner La Croix and James Roumasset (1984) have argued that the strip system conveyed durability to the newly unified kingdom (by making it more costly for an ali’i to accumulate a power base on one island) and facilitated monitoring of ali’i production by the new king. In 1810, Kamehameha reached a negotiated settlement with Kaumuali’i, the ruling chief of Kaua’i, that brought the island under his control and the entire island chain under a single monarchy.

Exposure to Western diseases produced a massive decline in the native population of Hawai’i from 1778 through 1900 (Table 1). Estimates of Hawai’i’s population at the time of contact vary wildly, from approximately 110,000 to one million people (Bushnell, 1993; Dye, 1994). The first missionary census in 1831-1832 counted 130,313 people. A substantial portion of the decline can be attributed to a series of epidemics beginning after contact, including measles, influenza, diarrhea, and whooping cough. The introduction of venereal diseases was a factor behind declining crude birth rates. The first accurate census conducted in the Islands revealed a population of 80,641 in 1849. The native Hawaiian population reached its lowest point in 1900 when the U.S. census revealed only 39,656 full or part Hawaiians.

Table 1: Population of Hawai’i

Year      Total Population     Native Hawaiian Population
1778      110,000-1,000,000    110,000-1,000,000
1831-32   130,313              n.a.
1853      73,137               71,019
1872      56,897               51,531
1890      89,990               40,622
1900      154,001              39,656
1920      255,881              41,750
1940      422,770              64,310
1960      632,772              102,403
1980      964,691              115,500
2000      1,211,537            239,655

Sources: Total population from http://www.hawaii.gov/dbedt/db99/index.html, Table 1.01, Dye (1994), and Bushnell (1993). Native Hawaiian population for 1853-1960 from Schmitt (1977), p. 25. Data from the 2000 census includes people declaring “Native Hawaiian” as their only race or one of two races. See http://factfinder.census.gov/servlet/DTTable?_ts=18242084330 for the 2000 census population.

The Rise and Fall of Sandalwood and Whaling

With the unification of the Islands came the opening of foreign trade. Trade in sandalwood, a wood in demand in China for ornamental uses and for burning as incense, began in 1805. The trade was interrupted by the War of 1812 and then flourished from 1816 to the late 1820s before fading away in the 1830s and 1840s (Kuykendall, 1957, I, pp. 86-87). La Croix and Roumasset (1984) have argued that the centralized organization of the sandalwood trade under King Kamehameha provided the king with incentives to harvest sandalwood efficiently. The adoption of a decentralized production system by his successor (Liholiho) led to the sandalwood being treated by ali’i as a common property resource. The reallocation of resources from agricultural production to sandalwood production led not only to rapid exhaustion of the sandalwood resource but also to famine.

As the sandalwood industry declined, Hawai’i became the base for the north-central Pacific whaling trade. The impetus for the new trade was the 1818 discovery of the “Offshore Ground” west of Peru and the 1820 discovery of rich sperm whale grounds off the coast of Japan. The first whaling ship visited the Islands in 1820, and by the late 1820s over 150 whaling ships were stopping in Hawai’i annually. While ship visits declined somewhat during the 1830s, by 1843 over 350 whaling ships annually visited the two major ports of Honolulu and Lahaina. Through the 1850s over 500 whaling ships visited Hawai’i annually. The demise of the Pacific whaling fleet during the U.S. Civil War and the rapid rise of the petroleum industry led to steep declines in the number of ships visiting Hawai’i, and after 1870 only a trickle of ships continued to visit.

Missionaries and Land Tenure

In 1819, King Kamehameha’s successor, Liholiho, abandoned the system of religious practices known as the kapu system and ordered temples (heiau) and images of the gods desecrated and burnt. In April 1820, missionaries from New England arrived and began filling the religious void with conversions to protestant Christianity. Over the next two decades as church attendance became widespread, the missionaries suppressed many traditional Hawaiian cultural practices, operated over 1,000 common schools, and instructed the ali’i in western political economy. The king promulgated a constitution with provisions for a Hawai’i legislature in 1840. It was followed, later in the decade, by laws establishing a cabinet, civil service, and judiciary. Under the 1852 constitution, male citizens received the right to vote in elections for a legislative lower house. Missionaries and other foreigners regularly served in cabinets through the end of the monarchy.

In 1844, the government began a 12-year program, known as the Great Mahele (Division), to dismantle the traditional system of land tenure. King Kauikeaouli gave up his interest in all island lands, retaining ownership only in selected estates. Ali’i had the right to take out fee simple title to lands held at the behest of the king. Maka’ainana had the right to claim fee simple title to small farms (kuleana). At the end of the claiming period, maka’ainana had received fewer than 40,000 acres of land, while the government (~1.5 million acres), the king (~900,000 acres), and the ali’i (~1.5 million acres) all received substantial shares. Foreigners were initially not allowed to own land in fee simple, but an 1850 law overturned this restriction. By the end of the nineteenth century, commoners and chiefs had sold, lost, or given up their lands, with foreigners and large estates owning most non-government lands.

Lilikala Kame’eleihiwa (1992) found the origins of the Mahele in the traditional duty of a king to undertake a redistribution of land and the difficulty of such an undertaking during the initial years of missionary influence. By contrast, La Croix and Roumasset (1990) found the origins of the Mahele in the rising value of Hawaii land in sugar cultivation, with fee simple title facilitating investment in the land, irrigation facilities, and processing factories.

Sugar, Immigration, and Population Increase

The first commercially viable sugar plantation, Ladd and Co., was started on Kaua’i in 1835, and the sugar industry achieved moderate growth through the 1850s. Hawai’i’s sugar exports to California soared during the U.S. Civil War, but the end of hostilities in 1865 also meant the end of the sugar boom. The U.S. tariff on sugar, which ranged from 20 to 42 percent between 1850 and 1870, posed a major obstacle to expanding sugar production in Hawai’i during peacetime by limiting the extent of profitable sugar cultivation in the Islands. Sugar interests helped elect King Kalakaua to the Hawaiian throne over the British-leaning Queen Emma in February 1874, and Kalakaua immediately sought a trade agreement with the United States. The 1876 reciprocity treaty between Hawai’i and the United States allowed duty-free sales of Hawai’i sugar and other selected agricultural products in the United States as well as duty-free sales of most U.S. manufactured goods in Hawai’i. Sugar exports from Hawai’i to the United States soared after the treaty’s promulgation, rising from 21 million pounds in 1876 to 114 million pounds in 1883 to 224.5 million pounds in 1890 (Table 2).

Table 2: Hawai’i Sugar Production (1000 short tons)

Year   Exports     Year   Production     Year   Production
1850   0.4         1900   289.5          1950   961
1860   0.7         1910   529.9          1960   935.7
1870   9.4         1920   560.4          1970   1162.1
1880   31.8        1930   939.3          1990   819.6
1890   129.9       1940   976.7          1999   367.5

Sources: Data for 1850-1970 are from Schmitt (1977), pp. 418-420. Data for 1990 and 1999 are from http://www.hawaii.gov/dbedt/db99/index.html, Table 22.09. Data for 1850-1880 are exports. Data for 1910-1990 are converted to 96° raw value.

The reciprocity treaty set the tone for Hawai’i’s economy and society over the next 80 years by establishing the sugar industry as Hawai’i’s leading industry and altering the demographic composition of the Islands via the industry’s labor demands. Rapid expansion of the sugar industry after reciprocity sharply increased its demand for labor: Plantation employment rose from 3,921 in 1872 to 10,243 in 1882 to 20,536 in 1892. The increase in labor demand occurred while the native Hawaiian population continued its precipitous decline, and the Hawai’i government responded to labor shortages by allowing sugar planters to bring in overseas contract laborers bound to serve at fixed wages for three- to five-year periods. The enormous increase in the plantation workforce consisted first of Chinese, then Japanese, then Portuguese contract laborers.

The extensive investment in sugar industry lands and irrigation systems, coupled with the rapid influx of overseas contract laborers, changed the bargaining positions of Hawai’i and the United States when the reciprocity treaty was due for renegotiation in 1883. La Croix and Christopher Grandy (1997) argued that the profitability of the planters’ new investment was dependent on access to the U.S. market, and this improved the bargaining position of the United States. As a condition for renewal of the treaty, the United States demanded access to Pearl Bay [now Pearl Harbor]. King Kalakaua opposed this demand, and in July 1887, opponents of the government forced the king to accept a new constitution and cabinet. With the election of a new pro-American government in September 1887, the king signed an extension of the reciprocity treaty in October 1887 that granted access rights to Pearl Bay to the United States for the life of the treaty.

Annexation and the Sugar Economy

In 1890, the U.S. Congress enacted the McKinley Tariff, which allowed raw sugar to enter the United States free of duty and established a two-cent per pound bounty for domestic producers. The overall effect of the McKinley Tariff was to completely erase the advantages that the reciprocity treaty had provided to Hawaiian sugar producers over other foreign sugar producers selling in the U.S. market. The value of Hawaiian merchandise exports plunged from $13 million in 1890 to $10 million in 1891 to a low point of $8 million in 1892.

La Croix and Grandy (1997) argued that the McKinley Tariff threatened the wealth of the planters and induced important changes in Hawai’i’s domestic politics. King Kalakaua died in January 1891, and his sister succeeded him. After Queen Lili’uokalani proposed to declare a new constitution in January 1893, a group of U.S. residents, with the incautious assistance of the U.S. Minister and troops from a U.S. warship, overthrew the monarchy. The new government, dominated by the white minority, offered Hawai’i to the United States for annexation beginning in 1893. Annexation was first opposed by U.S. President Cleveland and then, during U.S. President McKinley’s term, failed to obtain Congressional approval. The advent of the Spanish-American War and the ensuing hostilities in the Philippines raised Hawai’i’s strategic value to the United States, and Hawai’i was annexed by a joint resolution of Congress in July 1898. Hawai’i became a U.S. territory with the passage of the Organic Act on June 14, 1900.

Economic Integration with the United States

Annexation by the United States in 1900 eliminated bound labor contracts, freeing the existing labor force from its contractual obligations. After annexation, the sugar planters and the Hawai’i government recruited workers from Japan, Korea, the Philippines, Spain, Portugal, Puerto Rico, England, Germany, and Russia. The ensuing flood of immigrants swelled the population of the Hawaiian Islands from 109,020 people in 1896 to 232,856 people in 1915. The growth in the plantation labor force was one factor behind the expansion of sugar production from 289,500 short tons in 1900 to 939,300 short tons in 1930. Pineapple production also expanded, from just 2,000 cases of canned fruit in 1903 to 12,808,000 cases in 1931.

La Croix and Price Fishback (2000) established that European and American workers on sugar plantations were paid job-specific wage premiums relative to Asian workers and that the premium paid for unskilled American workers fell by one third between 1901 and 1915 and for European workers by 50 percent or more over the same period. While similar wage gaps disappeared during this period on the U.S. West Coast, Hawai’i plantations were able to maintain a portion of the wage gaps because they constantly found new low-wage immigrants to work in the Hawai’i market. Immigrant workers from Asia failed, however, to climb many rungs up the job ladder on Hawai’i sugar plantations, and this was a major factor behind labor unrest in the sugar industry. Edward Beechert (1985) concluded that large-scale strikes on sugar plantations during 1909 and 1920 improved the welfare of sugar plantation workers but did not lead to recognition of labor unions. Between 1900 and 1941, many sugar workers responded to limited advancement and wage prospects on the sugar plantation by leaving the plantations for jobs in Hawai’i’s growing urban areas.

The rise of the sugar industry and the massive inflow of immigrant workers into Hawai’i were accompanied by a decline in the Native Hawaiian population and its overall welfare (La Croix and Rose, 1999). Native Hawaiians and their political representatives argued that government lands should be made available for homesteading to enable Hawaiians to resettle in rural areas and to return to farming occupations. The U.S. Congress enacted legislation in 1921 to reserve specified rural and urban lands for a new Hawaiian Homes Program. La Croix and Louis Rose have argued that the Hawaiian Homes Program has functioned poorly, providing benefits for only a small portion of the Hawaiian population over the course of the twentieth century.

Five firms (Castle & Cooke, Alexander & Baldwin, C. Brewer & Co., Theo. Davies & Co., and American Factors) came to dominate the sugar industry. Originally established to provide financial, labor recruiting, transportation, and marketing services to plantations, they gradually acquired the plantations and also gained control over other vital industries such as banking, insurance, retailing, and shipping. By 1933, their plantations produced 96 percent of the sugar crop. The “Big Five’s” dominance would continue until the rise of the tourism industry and statehood induced U.S. and foreign firms to enter Hawai’i’s markets.

The Great Depression hit Hawai’i hard, as employment in the sugar and pineapple industries declined during the early 1930s. In December 1936, about one-quarter of Hawai’i’s labor force was unemployed. Full recovery would not occur until the military began a buildup in the mid-1930s in reaction to Japan’s occupation of Manchuria. With the Japanese invasion of China in 1937, the number of U.S. military personnel in Hawai’i increased to 48,000 by September 1940.

World War II and its Aftermath

The Japanese attack on the American Pacific Fleet at Pearl Harbor on December 7, 1941 led to a declaration of martial law, a state that continued until October 24, 1944. The war was accompanied by a massive increase in American armed service personnel in Hawai’i, with numbers increasing from 28,000 in 1940 to 378,000 in 1944. The total population increased from 429,000 in 1940 to 858,000 in 1944, thereby substantially increasing the demand for retail, restaurant, and other consumer services. An enormous construction program to house the new personnel was undertaken in 1941 and 1942. The wartime interruption of commercial shipping reduced the tonnage of civilian cargo arriving in Hawai’i by more than 50 percent. Employees working in designated high-priority organizations, including sugar plantations, had their jobs and wages frozen in place by General Order 18, which also suspended union activity.

In March 1943, the National Labor Relations Board was allowed to resume operations, and the International Longshoremen’s and Warehousemen’s Union (ILWU) organized 34 of Hawai’i’s 35 sugar plantations, the pineapple plantations, and the longshoremen by November 1945. The passage of the Hawai’i Employment Relations Act in 1945 facilitated union organizing by providing agricultural workers with the same union organizing rights as industrial workers.

After the War, Hawai’i’s economy stagnated, as demobilized armed services personnel left Hawai’i for the U.S. mainland. With the decline in population, real per capita personal income declined at an annual rate of 5.7 percent between 1945 and 1949 (Schmitt, 1976, pp. 148, 167). During this period, Hawai’i’s newly formed unions embarked on a series of disruptive strikes covering West Coast and Hawai’i longshoremen (1946-1949); the sugar industry (1946); and the pineapple industry (1947, 1951). The economy began a nine-year period of moderate expansion in 1949, with the annual growth rate of real personal income averaging 2.3 percent. The expansion of propeller-driven commercial air service sent visitor numbers soaring, from 15,000 in 1946 to 171,367 in 1958, and induced construction of new hotels and other tourism facilities and infrastructure. The onset of the Korean War increased the number of armed service personnel stationed in Hawai’i from 21,000 in 1950 to 50,000 in 1958. Pineapple production and canning also displayed substantial increases over the decade, increasing from 13,697,000 cases in 1949 to 18,613,000 cases in 1956.

Integration and Growth after Statehood

In 1959, Hawai’i became the fiftieth state. The transition from territorial to statehood status was one factor behind the 1958-1973 boom, in which real per capita personal income increased at an annual rate of 4 percent. The most important factor behind the long expansion was the introduction of commercial jet service in 1959, as the jet plane dramatically reduced the money and time costs of traveling to Hawai’i. Also fueled by rapidly rising real incomes in the United States and Japan, the tourism industry would continue its rapid growth through 1990. Visitor arrivals (see Table 3) increased from 171,367 in 1958 to 6,723,531 in 1990. Growth in visitor arrivals was once again accompanied by growth in the construction industry, particularly from 1965 to 1975. The military build-up during the Vietnam War also contributed to the boom by increasing defense expenditures in Hawai’i by 3.9 percent annually from 1958 to 1973 (Schmitt, 1977, pp. 148, 668).

Table 3: Visitor Arrivals to Hawai’i

Year Visitor Arrivals
1930 18,651
1940 25,373
1950 46,593
1960 296,249
1970 1,745,904
1980 3,928,789
1990 6,723,531
2000 6,975,866

Source: Hawai’i Tourism Authority, http://www.hawaii.gov/dbedt/monthly/historical-r.xls at Table 5 and http://www.state.hi.us/dbedt/monthly/index2k.html.

From 1973 to 1990, growth in real per capita personal income slowed to 1.1 percent annually. The defense and agriculture sectors stagnated, with most growth generated by the relentless increase in visitor arrivals. Japan’s persistently high rates of economic growth during the 1970s and 1980s spilled over to Hawai’i in the form of huge increases in the numbers of Japanese tourists and in the value of Japanese foreign investment in Hawai’i. At the end of the 1980s, the Hawai’i unemployment rate was just 2-3 percent, employment had been steadily growing since 1983, and prospects looked good for continued expansion of both tourism and the overall economy.

The Malaise of the 1990s

From 1991 to 1998, Hawai’i’s economy was hit by several negative shocks. The 1990-1991 recession in the United States, the closure of California military bases and defense plants, and uncertainty over the safety of air travel during the 1991 Gulf War combined to reduce visitor arrivals from the United States in the early and mid-1990s. Volatile and slow growth in Japan throughout the 1990s led to declines in Japanese visitor arrivals in the late 1990s. The ongoing decline in sugar and pineapple production gathered steam in the 1990s, with only a handful of plantations still in business by 2001. The cumulative impact of these adverse shocks was severe, as real per capita personal income did not change between 1991 and 1998.

Hawai’i’s economy began to recover in the late 1990s, and the recovery continued through summer 2001 despite a slowing U.S. economy. It came to an abrupt halt with the terrorist attacks of September 11, 2001, as domestic and foreign tourism declined sharply.

References

Beechert, Edward D. Working in Hawaii: A Labor History. Honolulu: University of Hawaii Press, 1985.

Bushnell, Andrew F. “The ‘Horror’ Reconsidered: An Evaluation of the Historical Evidence for Population Decline in Hawai’i, 1778-1803.” Pacific Studies 16 (1993): 115-161.

Daws, Gavan. Shoal of Time: A History of the Hawaiian Islands. Honolulu: University of Hawaii Press, 1968.

Dye, Tom. “Population Trends in Hawai’i before 1778.” The Hawaiian Journal of History 28 (1994): 1-20.

Hitch, Thomas Kemper. Islands in Transition: The Past, Present, and Future of Hawaii’s Economy. Honolulu: First Hawaiian Bank, 1992.

Kame’eleihiwa, Lilikala. Native Land and Foreign Desires: Pehea La E Pono Ai? Honolulu: Bishop Museum Press, 1992.

Kirch, Patrick V. Feathered Gods and Fishhooks: An Introduction to Hawaiian Archaeology and Prehistory. Honolulu: University of Hawaii Press, 1985.

Kuykendall, Ralph S. A History of the Hawaiian Kingdom. 3 vols. Honolulu: University of Hawaii Press, 1938-1967.

La Croix, Sumner J., and Price Fishback. “Firm-Specific Evidence on Racial Wage Differentials and Workforce Segregation in Hawaii’s Sugar Industry.” Explorations in Economic History 26 (1989): 403-423.

La Croix, Sumner J., and Price Fishback. “Migration, Labor Market Dynamics, and Wage Differentials in Hawaii’s Sugar Industry.” Advances in Agricultural Economic History 1 (2000): 31-72.

La Croix, Sumner J., and Christopher Grandy. “The Political Instability of Reciprocal Trade and the Overthrow of the Hawaiian Kingdom.” Journal of Economic History 57 (1997): 161-189.

La Croix, Sumner J., and Louis A. Rose. “The Political Economy of the Hawaiian Homelands Program.” In The Other Side of the Frontier: Economic Explorations into Native American History, edited by Linda Barrington. Boulder, Colorado: Westview Press, 1999.

La Croix, Sumner J., and James Roumasset. “An Economic Theory of Political Change in Pre-Missionary Hawaii.” Explorations in Economic History 21 (1984): 151-168.

La Croix, Sumner J., and James Roumasset. “The Evolution of Property Rights in Nineteenth-Century Hawaii.” Journal of Economic History 50 (1990): 829-852.

Morgan, Theodore. Hawaii, A Century of Economic Change: 1778-1876. Cambridge, MA: Harvard University Press, 1948.

Schmitt, Robert C. Historical Statistics of Hawaii. Honolulu: University Press of Hawaii, 1977.

Citation: La Croix, Sumner. “Economic History of Hawai’i”. EH.Net Encyclopedia, edited by Robert Whaples. September 27, 2001. URL http://eh.net/encyclopedia/economic-history-of-hawaii/

Medieval Guilds

Gary Richardson, University of California, Irvine

Guilds existed throughout Europe during the Middle Ages. Guilds were groups of individuals with common goals. The term guild probably derives from the Anglo-Saxon root geld which meant ‘to pay, contribute.’ The noun form of geld meant an association of persons contributing money for some common purpose. The root also meant ‘to sacrifice, worship.’ The dual definitions probably reflected guilds’ origins as both secular and religious organizations.

The term guild had many synonyms in the Middle Ages. These included association, brotherhood, college, company, confraternity, corporation, craft, fellowship, fraternity, livery, society, and equivalents of these terms in Latin, Germanic, Scandinavian, and Romance languages such as ambach, arte, collegium, corporatio, fraternitas, gilda, innung, corps de métier, societas, and zunft. In the late nineteenth century, as a professional lexicon evolved among historians, the term guild became the universal reference for these groups of merchants, artisans, and other individuals from the ordinary (non-priestly and non-aristocratic) classes of society which were not part of the established religious, military, or governmental hierarchies.

Much of the academic debate about guilds stems from confusion caused by incomplete lexicographical standardization. Scholars study guilds in one time and place and then assume that their findings apply to guilds everywhere and at all times, or they assert that the organizations they studied were the one true type of guild and that other organizations deserve neither the distinction nor serious study. To avoid this mistake, this encyclopedia entry begins with the recognition that guilds were groups whose activities, characteristics, and composition varied greatly across centuries, regions, and industries.

Guild Activities and Taxonomy

Guilds filled many niches in medieval economy and society. Typical taxonomies divide urban occupational guilds into two types: merchant and craft.

Merchant guilds were organizations of merchants who were involved in long-distance commerce and local wholesale trade, and may also have been retail sellers of commodities in their home cities and distant venues where they possessed rights to set up shop. The largest and most influential merchant guilds participated in international commerce and politics and established colonies in foreign cities. In many cases, they evolved into or became inextricably intertwined with the governments of their home towns.

Merchant guilds enforced contracts among members and between members and outsiders. Guilds policed members’ behavior because medieval commerce operated according to the community responsibility system. If a merchant from a particular town failed to fulfill his part of a bargain or pay his debts, all members of his guild could be held liable. When they were in a foreign port, their goods could be seized and sold to alleviate the bad debt. They would then return to their hometown, where they would seek compensation from the original defaulter.

Merchant guilds also protected members against predation by rulers. Rulers seeking revenue had an incentive to seize money and merchandise from foreign merchants. Guilds threatened to boycott the realms of rulers who did this, a practice known as withernam in medieval England. Since boycotts impoverished both kingdoms which depended on commerce and governments for whom tariffs were the principal source of revenue, the threat of retaliation deterred medieval potentates from excessive expropriations.

Merchant guilds tended to be wealthier and of higher social status than craft guilds. Merchants’ organizations usually possessed privileged positions in religious and secular ceremonies and inordinately influenced local governments.

Craft guilds were organized along lines of particular trades. Members of these guilds typically owned and operated small businesses or family workshops. Craft guilds operated in many sectors of the economy. Guilds of victuallers bought agricultural commodities, converted them to consumables, and sold finished foodstuffs. Examples included bakers, brewers, and butchers. Guilds of manufacturers made durable goods, and when profitable, exported them from their towns to consumers in distant markets. Examples include makers of textiles, military equipment, and metal ware. Guilds of a third type sold skills and services. Examples include clerks, teamsters, and entertainers.

These occupational organizations engaged in a wide array of economic activities. Some manipulated input and output markets to their own advantage. Others established reputations for quality, fostering the expansion of anonymous exchange and making everyone better off. Because of the underlying economic realities, victualling guilds tended towards the former. Manufacturing guilds tended towards the latter. Guilds of service providers fell somewhere in between. All three types of guilds managed labor markets, lowered wages, and advanced their own interests at their subordinates’ expense. These undertakings had a common theme. Merchant and craft guilds acted to increase and stabilize members’ incomes.

Non-occupational guilds also operated in medieval towns and cities. These organizations had both secular and religious functions. Historians refer to these organizations as social, religious, or parish guilds as well as fraternities and confraternities. The secular activities of these organizations included providing members with mutual insurance, extending credit to members in times of need, aiding members in courts of law, and helping the children of members afford apprenticeships and dowries.

The principal pious objective was the salvation of the soul and escape from Purgatory. The doctrine of Purgatory was the belief that there lay between Heaven and Hell an intermediate place, by passing through which the souls of the dead might cleanse themselves of the guilt attached to the sins committed during their lifetimes by submitting to a graduated scale of divine punishment. The suffering through which they were cleansed might be abbreviated by the prayers of the living, and most especially by masses. Praying devoutly, sponsoring masses, and giving alms were three of the most effective methods of redeeming one’s soul. These works of atonement could be performed by the penitent on their own or by someone else on their behalf.

Guilds served as mechanisms for organizing, managing, and financing the collective quest for eternal salvation. Efforts centered on three types of tasks. The first were routine and participatory religious services. Members of guilds gathered at church on Sundays and often also on other days of the week. Members marked ceremonial occasions, such as the day of their patron saint or Good Friday, with prayers, processions, banquets, masses, the singing of psalms, the illumination of holy symbols, and the distribution of alms to the poor. Some guilds kept chaplains on call. Others hired priests when the need arose. These clerics hosted regular religious services, such as vespers each evening or mass on Sunday morning, and prayed for the souls of members living and deceased.

The second category consisted of actions performed on members’ behalf after their deaths and for the benefit of their souls. Postmortem services began with funerals and burials, which guilds arranged for the recently departed. The services were elaborate and extensive. On the day before interment, members gathered around the corpse, lit candles, and sang a placebo and a dirge, which were the vespers and matins from the Office of the Dead. On the day of interment, a procession marched from churchyard to graveyard, buried the body, distributed alms, and attended mass. Additional masses, numbering from one to forty, occurred later that day and sometimes for months thereafter. Postmortem prayers continued even further into the future, in theory into perpetuity. All guilds prayed for the souls of deceased members. These prayers were a prominent part of all guild events. Many guilds also hired priests to pray for the souls of the deceased. A few guilds built chantries where priests said those prayers.

The third category involved indoctrination and monitoring to maintain the piety of members. The Christian catechism of the era contained clear commandments. Rest on the Sabbath and religious holidays. Be truthful. Do not deceive others. Be chaste. Do not commit adultery. Be faithful to your family. Obey authorities. Be modest. Do not covet thy neighbors’ possessions. Do not steal. Do not gamble. Work hard. Support the church. Guild ordinances echoed these exhortations. Members should neither gamble nor lie nor steal nor drink to excess. They should restrain their gluttony, lust, avarice, and corporal impulses. They should pray to the Lord, live like His son, and give alms to the poor.

Righteous living was important because members’ fates were linked together. The more pious one’s brethren, the more helpful their prayers, and the quicker one escaped from purgatory. The worse one’s brethren, the less salutary their supplications and the longer one suffered during the afterlife. So, in hopes of minimizing purgatorial pain and maximizing eternal happiness, guilds beseeched members to restrain physical desires and forgo worldly pleasures.

Guilds also operated in villages and the countryside. Rural guilds performed the same tasks as social and religious guilds in towns and cities. Recent research on medieval England indicates that guilds operated in most, if not all, villages. Villages often possessed multiple guilds. Most rural residents belonged to a guild. Some may have joined more than one organization.

Guilds often spanned multiple dimensions of this taxonomy. Members of craft guilds participated in wholesale commerce. Members of merchant guilds opened retail shops. Social and religious guilds evolved into occupational associations. All merchant and craft guilds possessed religious and fraternal features.

In sum, guild members sought prosperity in this life and providence in the next. Members wanted high and stable incomes, quick passage through Purgatory, and eternity in Heaven. Guilds helped them coordinate their collective efforts to attain these goals.

Guild Structure and Organization

To attain their collective goals, guild members had to cooperate. If some members slacked off, all would suffer. Guilds that wished to lower the costs of labor had to get all masters to reduce wages. Guilds that wished to raise the prices of products had to get all members to restrict output. Guilds that wished to develop respected reputations had to get all members to sell superior merchandise. Guild members contributed money – to pay priests and purchase pious paraphernalia – and contributed time, emotion, and personal energy, as well. Members participated in frequent religious services, attended funerals, and prayed for the souls of the brethren. Members had to live piously, abstaining both from the pleasures of the flesh and the material temptations of secular life. Members also had to administer their associations. The need for coordination was a common denominator.

To convince members to cooperate and advance their common interests, guilds formed stable, self-enforcing associations that possessed structures for making and implementing collective decisions.

A guild’s members met at least once a year (and in most cases more often) to elect officers, audit accounts, induct new members, debate policies, and amend ordinances. Officers such as aldermen, stewards, deans, and clerks managed the guild’s day to day affairs. Aldermen directed guild activities and supervised lower-ranking officers. Stewards kept guild funds, and their accounts were periodically audited. Deans summoned members to meetings, feasts, and funerals, and in many cases, policed members’ behavior. Clerks kept records. Decisions were usually made by majority vote among the master craftsmen.

These officers administered a nexus of agreements among a guild’s members. Details of these agreements varied greatly from guild to guild, but the issues addressed were similar in all cases. Members agreed to contribute certain resources and/or take certain actions that furthered the guild’s occupational and spiritual endeavors. Officers of the guild monitored members’ contributions. Manufacturing guilds, for example, employed officers known as searchers who scrutinized members’ merchandise to make sure it met guild standards and inspected members’ shops and homes seeking evidence of attempts to circumvent the rules. Members who failed to fulfill their obligations faced punishments of various sorts.

Punishments varied across transgressions, guilds, time, and space, but a pattern existed. First-time offenders were punished lightly, perhaps suffering public scolding and paying small monetary fines, while repeat offenders were punished harshly. The ultimate threat was expulsion. Guilds could do nothing harsher because laws protected persons and property from arbitrary expropriation and physical abuse. The legal system set the rights of individuals above the interests of organizations. Guilds were voluntary associations. Members facing harsh punishments could quit the guild and walk away. The most the guild could extract was the value of membership. Abundant evidence indicates that guilds enforced agreements in this manner.

Other game-theoretic options existed, of course. Guilds could have punished uncooperative members by taking actions with wider consequences. Members of a manufacturing guild who caught one of their own passing off shoddy merchandise under the guild’s good name could have punished the offender by collectively lowering the quality of their products for a prolonged period. That would lower the offender’s income, albeit at the cost of lowering the income of all other members as well. Similarly, members of a guild that caught one of their brethren shirking on prayers and sinning incessantly could have punished the offender by collectively forsaking the Lord and descending into debauchery. Then, no one would or could pray for the soul of the offender, and his period in Purgatory would be extended significantly. In broader terms, cheaters could have been punished by any action that reduced the average incomes of all guild members or increased the pain that all members expected to endure in Purgatory. In theory, such threats could have convinced even the most recalcitrant members to contribute to the common good.

But, no evidence exists that craft guilds ever operated in such a manner. None of the hundreds of surviving guild ordinances contains threats of such a kind. No surviving guild documents describe punishing the innocent along with the guilty. Guilds appear to have eschewed indiscriminate retaliation for several salient reasons. First, monitoring members’ behavior was costly and imperfect. Time and risk preferences varied across individuals. Uncertainty of many kinds influenced craftsmen’s decisions. Some members would have attempted to cheat regardless of the threatened punishment. Punishments, in other words, would have occurred in equilibrium. The cost of carrying out an equilibrium-sustaining threat of expulsion would have been lower than the cost of carrying out an equilibrium-sustaining threat that reduced average income. Thus, expelling members caught violating the rules was an efficient method of enforcing the rules. Second, punishing free riders by indiscriminately harming all guild members may not have been a convincing threat. Individuals may not have believed that threats of mutual assured destruction would be carried out. The incentive to renegotiate was strong. Third, skepticism probably existed about threats to do unto others as they had done unto you. That concept contradicted a fundamental teaching of the church, to do unto others as you would have them do unto you. It also contradicted Jesus’ admonition to turn the other cheek. Thus, indiscriminate retaliation based upon hair-trigger strategies was not an organizing principle likely to be adopted by guilds whose members hoped to speed passage through Purgatory.

A hierarchy existed in large guilds. Masters were full members who usually owned their own workshops, retail outlets, or trading vessels. Masters employed journeymen, who were laborers who worked for wages on short term contracts or a daily basis (hence the term journeyman, from the French word for day). Journeymen hoped to one day advance to the level of master. To do this, journeymen usually had to save enough money to open a workshop and pay for admittance, or if they were lucky, receive a workshop through marriage or inheritance.

Masters also supervised apprentices, who were usually boys in their teens who worked for room, board, and perhaps a small stipend in exchange for a vocational education. Both guilds and government regulated apprenticeships, usually to ensure that masters fulfilled their part of the apprenticeship agreement. Terms of apprenticeships varied, usually lasting from five to nine years.

The internal structure of guilds varied widely across Europe. Little is known for certain about the structure of smaller guilds, since they left few written documents. Most of the evidence comes from large, successful associations whose internal records survive to the present day. The description above is based on such documents. It seems likely that smaller organizations fulfilled many of the same functions, but their structure was probably less formal and more horizontal.

Relationships between guilds and governments also varied across Europe. Most guilds aspired to attain recognition as a self-governing association with the right to possess property and other legal privileges. Guilds often purchased these rights from municipal and national authorities. In England, for example, a guild which wished to possess property had to purchase from the royal government a writ allowing it to do so. But, most guilds operated without formal sanction from the government. Guilds were spontaneous, voluntary, and self-enforcing associations.

Guild Chronology and Impact

Reconstructing the history of guilds poses several problems. Few written records survive from the twelfth century and earlier. Surviving documents consist principally of the records of rulers – kings, princes, churches – that taxed, chartered, and granted privileges to organizations. Some evidence also exists in the records of notaries and courts, which recorded and enforced contracts between guild masters and outsiders, such as the parents of apprentices. From the fourteenth and fifteenth centuries, records survive in larger numbers. Surviving records include statute books and other documents describing the internal organization and operation of guilds. The evidence at hand links the rise and decline of guilds to several important events in the history of Western Europe.

In the late Roman Empire, organizations resembling guilds existed in most towns and cities. These voluntary associations of artisans, known as collegia, were occasionally regulated by the state but largely left alone. They were organized along trade lines and possessed a strong social base, since their members shared religious observances and fraternal dinners. Most of these organizations disappeared during the Dark Ages, when the Western Roman Empire disintegrated and urban life collapsed. In the Eastern Empire, some collegia appear to have survived from antiquity into the Middle Ages, particularly in Constantinople, where Leo the Wise codified laws concerning commerce and crafts at the beginning of the tenth century and sources reveal an unbroken tradition of state management of guilds from ancient times. Some scholars suspect that in the West, a few of the most resilient collegia in the surviving urban areas may have evolved in an unbroken descent into medieval guilds, but the absence of documentary evidence makes it appear unlikely and unprovable.

In the centuries following the Germanic invasions, evidence indicates that numerous guild-like associations existed in towns and rural areas. These organizations functioned much as modern burial and benefit societies do; their objectives included prayers for the souls of deceased members, payments of weregilds in cases of justifiable homicide, and support for members involved in legal disputes. These rural guilds were descendants of Germanic social organizations known as gilda, which the Roman historian Tacitus referred to as convivium.

During the eleventh through thirteenth centuries, considerable economic development occurred. The sources of development were increases in the productivity of medieval agriculture, the abatement of external raiding by Scandinavian and Muslim brigands, and population increases. The revival of long-distance trade coincided with the expansion of urban areas. Merchant guilds formed an institutional foundation for this commercial revolution. Merchant guilds flourished in towns throughout Europe, and in many places, rose to prominence in urban political structures. In many towns in England, for example, the merchant guild became synonymous with the body of burgesses and evolved into the municipal government. In Genoa and Venice, the merchant aristocracy controlled the city government, which promoted their interests so well as to preclude the need for a formal guild.

Merchant guilds’ principal accomplishment was establishing the institutional foundations for long-distance commerce. Italian sources provide the best picture of guilds’ rise to prominence as an economic and social institution. Merchant guilds appear in many Italian cities in the twelfth century. Craft guilds became ubiquitous during the succeeding century.

In northern Europe, merchant guilds rose to prominence a few generations later. In the twelfth and early thirteenth centuries, local merchant guilds in trading cities such as Lubeck and Bremen formed alliances with merchants throughout the Baltic region. The alliance system grew into the Hanseatic League which dominated trade around the Baltic and North Seas and in Northern Germany.

Social and religious guilds existed at this time, but few records survive. Small numbers of craft guilds developed, principally in prosperous industries such as cloth manufacturing, but records are also rare, and numbers appear to have been small.

As economic expansion continued in the thirteenth and fourteenth centuries, the influence of the Catholic Church grew, and the doctrine of Purgatory developed. The doctrine inspired the creation of countless religious guilds, since the doctrine provided members with strong incentives to want to belong to a group whose prayers would help one enter heaven and it provided guilds with mechanisms to induce members to exert effort on behalf of the organization. Many of these religious associations evolved into occupational guilds. Most of the Livery Companies of London, for example, began as intercessory societies around this time.

The number of guilds continued to grow after the Black Death. There are several potential explanations. The decline in population raised per-capita incomes, which encouraged the expansion of consumption and commerce, which in turn necessitated the formation of institutions to satisfy this demand. Repeated epidemics decreased family sizes, particularly in cities, where the typical adult had on average perhaps 1.5 surviving children, few surviving siblings, and only a small extended family, if any. Guilds replaced extended families in a form of fictive kinship. The decline in family size and impoverishment of the church also forced individuals to rely on their guild more in times of trouble, since they no longer could rely on relatives and priests to sustain them through periods of crisis. All of these changes bound individuals more closely to guilds, discouraged free riding, and encouraged the expansion of collective institutions.

For nearly two centuries after the Black Death, guilds dominated life in medieval towns. Any town resident of consequence belonged to a guild. Most urban residents thought guild membership to be indispensable. Guilds dominated manufacturing, marketing, and commerce. Guilds dominated local politics and influenced national and international affairs. Guilds were the center of social and spiritual life.

The heyday of guilds lasted into the sixteenth century. The Reformation weakened guilds in most newly Protestant nations. In England, for example, the royal government suppressed thousands of guilds in the 1530s and 1540s. The king and his ministers dispatched auditors to every guild in the realm. The auditors seized spiritual paraphernalia and funds retained for religious purposes, disbanded guilds which existed for purely pious purposes, and forced craft and merchant guilds to pay large sums for the right to remain in operation. Even the guilds that paid still lost the ability to provide members with spiritual services.

In Protestant nations after the Reformation, the influence of guilds waned. Many turned to governments for assistance. They requested monopolies on manufacturing and commerce and asked courts to force members to live up to their obligations. Guilds lingered where governments provided such assistance. Guilds faded where governments did not. By the seventeenth century, the power of guilds had withered in England. Guilds retained strength in nations which remained Catholic. France abolished its guilds during the French Revolution in 1791, and Napoleon’s armies disbanded guilds in most of the continental nations which they occupied during the next two decades.


Citation: Richardson, Gary. “Medieval Guilds”. EH.Net Encyclopedia, edited by Robert Whaples. March 16, 2008. URL http://eh.net/encyclopedia/medieval-guilds/

Economic Recovery in the Great Depression

Frank G. Steindl, Oklahoma State University

Introduction

The Great Depression has two meanings. One is the horrendous debacle of 1929-33 during which unemployment rose from 3 to 25 percent as the nation’s output fell over 25 percent and prices over 30 percent, in what also has been called the Great Contraction. A second meaning has the Great Depression as the entire decade of the thirties, the anxieties and apprehensions for which John Steinbeck’s The Grapes of Wrath is a metaphor. Much has been written about the unprecedented drop in economic activity in the Great Contraction, with questions about its causes and the reasons for its protracted decline especially prominent. The amount of scholarship devoted to these issues dwarfs that dealing with the recovery. But there indeed was a recovery, though long, tortuous, and uneven. In fact, it was well over twice as long as the contraction.

The economy hit its trough in March 1933. Whether or not by coincidence, President Franklin D. Roosevelt took office that month, initiating the New Deal and its fabled first hundred days, among which was the creation in June 1933 of its principal recovery vehicle, the NIRA — National Industrial Recovery Act.

Facts of the Recovery

Figure 1 uses monthly data. This allows us to see more finely the movements of the economy, as contrasted with the use of quarterly or annual data. For present purposes, the decade of the Depression runs from August 1929, when the economy was at its business cycle peak, through March 1933, the contraction trough, to June 1942, when the economy clearly was back to its long-run high-employment trend.

Figure 1 depicts the behavior of industrial output and prices over the Great Depression decade, the former as measured by the Index of Industrial Production and the latter by the Wholesale Price Index.[1] Among the notable features are the large declines in output and prices in the Great Contraction, with the former falling 52 percent and the latter 37 percent. Another noteworthy feature is the sharp, severe 1937-38 depression, when in twelve months output fell 33 percent and prices 11 percent. A third feature is the over-two-year deflation in the face of a robust increase in output following the 1937-38 depression.

The behavior of the unemployment rate is shown in Figure 2.[2] The dashed line shows the reported official data, which do not count as employed those holding “temporary” relief jobs. The solid line adjusts the official series by including those holding such temporary jobs as employed, the effect of which is to reduce the unemployment rate (Darby 1976). Each series rises from around 3 to about 23 percent between 1929 and 1932. The official series then climbs to near 25 percent the following year whereas the adjusted series is over four percentage points lower. Each continues declining the rest of the recovery, though both rise sharply in 1938. By 1940, each is still in double digits.

Three other charts that are helpful for understanding the recovery are Figures 3, 4, and 5. The first of these shows that the monetary base of the economy — which is the reserves of commercial banks plus currency held by the public — grew principally through increases in the stock of gold. In contrast to the normal situation, the base did not increase because of credit provided by the Federal Reserve System. Such credit was essentially constant. That is, the Fed, the nation’s central bank, was basically passive for most of the recovery. The rise in the stock of gold occurred initially because of the revaluation of gold from $20.67 to $35 an ounce in 1933-34 (which, though not changing the physical holdings of gold, raised the value of such holdings by 69 percent). The physical stock of gold, now valued at the higher price, then increased because of an inflow of gold principally from Europe, due to the deteriorating political and economic situation there.
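As a check on the revaluation arithmetic (a worked example, not in the original text): raising the official price from $20.67 to $35 an ounce increases the dollar value of an unchanged physical gold stock by

(35.00 − 20.67) / 20.67 ≈ 0.693,

or roughly the 69 percent noted above.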

Figure 4 shows the behavior of the stock of money, both the narrow M1 and the broader M2 measures. The shaded area shows the decreases in those money stocks in the 1937-38 depression; those declines were one of the reasons for that depression, just as the large declines in the money stock in 1929-33 were major factors responsible for the Great Contraction. During the Contraction of 1929-33, the narrow measure of the money stock — currency held by the public and demand deposits, M1 — fell 28 percent and the broader measure (M1 plus time deposits at commercial banks) fell 35 percent.

Lastly, the budget position of the federal government is shown in Figure 5. One of the notable features is the sharp increase in expenditures in mid-1936 and the equally sharp decrease thereafter. The budget therefore went dramatically into deficit, and then began to move toward a surplus by the end of 1936, largely due to the tax revenues arising from the Social Security Act of 1935.

Reasons for Recovery

In Golden Fetters (1992), Barry Eichengreen advanced the basis for the most widely accepted understanding of the slide and recovery of economies in the 1930s. The depression was a worldwide phenomenon, as indicated in Figure 6, which shows the behavior of industrial production for several major countries. His basic thesis related to the gold standard and the manner in which countries altered their behavior under it during the 1930s. Under the classical “rules of the game,” countries experiencing balance of payments deficits financed those deficits by exporting gold. The loss of gold forced them to contract their money stock, which then resulted in deflationary pressures. Countries running balance of payments surpluses received gold, which expanded their money stocks, thereby inducing expansionary pressures. According to Eichengreen’s framework, countries did not “play by the rules” of the international gold standard during the depression era. Countries losing gold were forced to contract, but those receiving gold did not expand. This generated a net deflationary bias, as a result of which the depression was worldwide for those countries on the gold standard. As countries cut their ties to gold, which the U.S. did in early 1933, they were free to pursue expansionary monetary and fiscal policies, and this is the principal reason underlying the recovery. The inflow of gold into the U.S., for instance, expanded the reserves of the banking system, which became the basis for the increases in the stock of money.

The quantity theory of money is a useful framework for understanding movements of prices and output. The theory holds that increases in the supply of money relative to the demand for it result in increased spending on goods, services, financial assets, and real capital. The theory can be expressed in the equation of exchange,

MV = Py,

where M is the stock of money, V is velocity (the rate at which money is spent, which is the mirror side of the demand for money, the desire to hold it), P is the price level, and y is real output.

Increases in M relative to the demand to hold money (that is, with V roughly unchanged) result in increases in P and y.

Research into the forces of recovery generally concludes that the growth of the money supply (M) was the principal cause of the rise in output (y) after March 1933, the trough of the Great Contraction. Furthermore, those increases in the money stock also pushed up the price level (P).
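In growth-rate terms (a standard approximation added here for illustration, not part of the cited studies), the equation of exchange implies

%ΔM + %ΔV ≈ %ΔP + %Δy,

so that, with velocity roughly stable, faster money growth must show up as some combination of rising prices and rising output. This is the channel these studies have in mind: the gold-driven expansion of the money stock raised nominal spending, lifting both P and y after March 1933.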

Four studies expressly dealing with the recovery are of note. Milton Friedman and Anna Schwartz show that “the broad movements in the stock of money correspond with those in income” (1963, 497) and argue that “the rapid rate of rise in the money stock certainly promoted and facilitated the concurrent economic expansion” (1963, 544). Christina Romer concludes that the growth of the money stock was “crucial to the recovery. If [it] had been held to its normal level, the U.S. economy in 1942 would have been 50 percent below its pre-Depression trend path” (1992, 768-69). She also finds that fiscal policy “contributed almost nothing to the recovery” (1992, 767), a finding that mirrors much of the postwar research on the influence of fiscal policy and stands in contrast to the views of much of the public, which came to believe that President Roosevelt’s budget deficits were fundamental in promoting recovery.[3]

Ben Bernanke (1995) similarly stresses the importance of the growth of the money stock as basic to the recovery. He focuses on the gold standard as a restraint on independent monetary actions, finding that “the evidence is that countries leaving the gold standard recovered substantially more rapidly and vigorously than those who did not” (1995, 12) because they “had greater freedom to initiate expansionary monetary policies” (1995, 15).

More recently Allan Meltzer (2003) finds the recovery driven by increases in the stock of money, based on an expanding monetary base due to gold. “The main policy stimulus to output came from the rise in money, an unplanned consequence of the 1934 devaluation of the dollar against gold. Later in the decade the rising threat of war, and war itself supplemented the $35 gold price as a cause of the rise in gold and money” (2003, 573).

That the recovery was due principally to the growth of the stock of money appears to be a robust conclusion of postwar research into causes of the 1930s recovery.

The manner in which the stock of money increased is important. The growing stock of gold increased the reserves of banks, and hence the monetary base. With their greater reserves, banks did two things. First, they held some as precautionary reserves, called excess reserves, measured on the left-hand axis of Figure 7. Second, they bought U.S. government securities, more than tripling their holdings, as seen on the right-hand axis of Figure 7. Also, as seen there, commercial bank loans increased only slightly in the recovery, rising only 25 percent in over nine years.[4] The principal impetus to the growth of the money stock, therefore, was banks’ increased purchases of U.S. government securities, both ones already outstanding and ones issued to finance the deficits of those years.

The 1937-38 Depression and Revival

After four years of recovery, the economy plunged into a deep depression in May 1937, as output fell 33 percent and prices 11 percent in twelve months (shown in Figure 1). Two developments have been identified as principally responsible for the depression.[5] The one most prominently identified by contemporary scholars is the action of the Federal Reserve.

As the Fed saw the volume of excess reserves climbing month after month, it became concerned about the potential inflationary consequences if banks were to begin making more loans, thereby expanding the money supply and driving up prices. The Banking Act of 1935 gave the Fed authority to change reserve requirements. With its newly granted authority, it decided upon a “preemptive strike” against what it regarded as incipient inflation. Because it thought that those excess reserves were due to a “shortage of borrowers,” it raised reserve requirements, the effect of which was to impound in required reserves the former excess reserves. Reserve requirements were in fact doubled, in three steps: August 1936, March 1937, and May 1937. As Figure 7 exhibits, excess reserves therefore fell. The principal effect of the doubling of reserve requirements was to reduce the stock of money, as shown in the shaded area of Figure 4.[6]
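A simple numerical illustration (hypothetical round numbers, not the actual 1936-37 magnitudes) shows how higher requirements impound excess reserves:

excess reserves = total reserves − (required ratio × deposits)
at a 10 percent requirement: 12 − (0.10 × 50) = 7
at a 20 percent requirement: 12 − (0.20 × 50) = 2

Total reserves are unchanged, but most of what had been excess is reclassified as required. If banks regard the original cushion as precautionary rather than surplus, as footnote [6] argues they did, they will sell securities and curtail lending to rebuild it, shrinking deposits and the money stock.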

A second factor causing the depression was the falling federal budget deficit, due to two considerations. First, there was a sharp one-time rise in expenditures in mid-1936, due to the payment of a World War I Veterans’ Bonus. Thereafter, expenditures fell — the “spike” in the figure. Second, the Social Security Act of 1935 mandated collection of payroll taxes beginning in 1937, with the first payments to be made several years later. The joint effect of these two was to move the budget to near surplus by late 1937.

During the depression, both output and prices fell, as was their usual behavior in depressions. The bottom of the depression was May 1938, one year after it began. Thereafter, output began growing quite robustly, rising 58 percent by August 1940. Prices, however, continued to fall, for over two years. Figure 8 shows the depression and revival experience from May 1937 through August 1940, the month in which prices last fell. The two shaded areas are the year-long depression and the price “spike” in September 1939. Of interest is that the shock of the war that spurred the price jump did not induce expectations of further price rises. Prices continued to fall for another year, through August 1940.

Difficulties with Current Understanding

According to the currently accepted interpretation, the recovery owes its existence to increases in the stock of money. One difficulty with this view is the marked contrast with the price experience of the recovery through mid-1937. How could rising prices in the 1933 turnaround be fundamental to the recovery but not in the vigorous later recovery, when prices actually fell? Another difficulty is that the continued rise in the stock of money was due to the political turmoil in Europe; little that was intrinsic to the U.S. economy contributed to it. Presumably, had there been no continuing inflow of gold raising the monetary base and money stock, the economy would have languished until the demands of World War II made their impact. In other words, would there have been virtually no recovery had there been no Adolf Hitler?

Of more consequence is the conundrum presented by the experience of more than two years of deflation in the face of dramatically rising aggregate demand, of which the sharply rising money stock appears as a major force. If the rising stock of money were fundamental to the recovery, then prices and output would have been rising, as the aggregate demand for output, spurred also by increasing fiscal budget deficits, would have been increasing relative to aggregate supply. But in the present instance, prices were declining, not rising. Something else was driving the economy during the entire recovery, but the seemingly dominant aggregate demand pressures obscured it in the early part.

One prospective impetus to aggregate supply would be declining real wages that would spur the hiring of additional workers. But with prices declining, it is unlikely that real wages would have fallen in the revival from the late 1930s depression. The evidence as indicated in Figure 9 shows that they in fact increased. With few exceptions, real wages increased throughout the entire deflationary period, rising 18 percent overall and 6 percent in the revival. The real wage rate, by rising, was thus a detriment to increased supply. Real wages cannot therefore be a factor inducing greater aggregate supply.
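The arithmetic behind this point (stated here for clarity) is simply that the real wage is the nominal wage deflated by the price level, W/P. With P falling, W/P rises unless nominal wages fall even faster, and Figure 9 indicates that they did not: real wages rose roughly 18 percent over the deflationary period.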

The economic phenomenon that was driving the recovery was probably increasing productivity. An early indication of this comes from the pioneering work of Robert Solow (1957) who in the course of examining factors contributing to economic growth developed data on the behavior of productivity. In support of this, Alexander Field presents both macroeconomic and microeconomic evidence showing that “the years 1929-41 were, in the aggregate, the most technologically progressive of any comparable period in U.S. economic history” (2003, 1399).

The rapid productivity increases were an important factor explaining the seemingly anomalous combination of rapid recovery and a stubbornly high unemployment rate. In today’s parlance, this has come to be known as a “jobless recovery,” one in which rising productivity, rather than greater labor input, generates the increase in output.
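The point can be put in terms of a simple decomposition (added for illustration): if A denotes output per unit of labor and L labor input, then output is Y = A × L, and in growth rates %ΔY ≈ %ΔA + %ΔL. When productivity growth (%ΔA) is rapid, output can expand briskly even while employment (%ΔL), and hence the unemployment rate, improves only slowly.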

To acknowledge that productivity increases were crucial to the economic recovery is not, however, the end of the story, because we are still left trying to understand the mechanisms underlying their sharp increases. What induced such increases? Serendipity — the idea that productivity increased at just the right time and in the appropriate amounts — is not an appealing explanation.

More likely, there is something intrinsic to the economy that encapsulates mechanisms — that is, incentives spurring inventive capital and labor innovations generating productivity increases, as well as other factors — that move the economy back to its potential.

References

Bernanke, Ben S. “The Macroeconomics of the Great Depression: A Comparative Approach.” Journal of Money, Credit, and Banking 27 (1995): 1-28.

Darby, Michael R. “Three-and-a-Half Million U.S. Employees Have Been Mislaid: Or an Explanation of Unemployment, 1934-41.” Journal of Political Economy 84 (1976):1-16.

Eichengreen, Barry. Golden Fetters: The Gold Standard and the Great Depression 1919-1939. New York: Oxford University Press, 1992.

Field, Alexander J. “The Most Technologically Progressive Decade of the Century.” American Economic Review 93 (2003): 1399-1413.

Friedman, Milton and Anna J. Schwartz. A Monetary History of the United States: 1867-1960. Princeton, NJ: Princeton University Press, 1963.

Meltzer, Allan H. A History of the Federal Reserve, volume 1, 1913-1951. Chicago: University of Chicago Press, 2003.

Romer, Christina D. “What Ended the Great Depression?” Journal of Economic History 52 (1992): 757-84.

Solow, Robert M. “Technical Change and the Aggregate Production Function.” Review of Economics and Statistics 39 (1957): 312-20.

Smithies, Arthur. “The American Economy in the Thirties.” American Economic Review Papers and Proceedings 36 (1946):11-27.

Steindl, Frank G. Understanding Economic Recovery in the 1930s: Endogenous Propagation in the Great Depression. Ann Arbor: University of Michigan Press, 2004.


[1] Industrial production and the nation’s real output, real GDP, are highly correlated. The correlation is 98 percent for both quarterly and annual data over the recovery period.

[2] Data on the unemployment rate are available only on an annual basis for the Depression decade.

[3] In fact, large numbers of academics held that view, of which Arthur Smithies’ address to the American Economic Association is an example. His assessment was that “My main conclusion … is that fiscal policy did prove to be … the only effective means to recovery” (1946, 25, emphasis added).

[4] Real loans — loans relative to the price level — in fact declined, falling 24 percent in the 111 months of recovery.

[5] A third factor was the action of the U.S. Treasury as it “sterilized” gold, at the instigation of the Federal Reserve. By sterilization of gold, the Treasury prevented the gold inflows from increasing bank reserves.

[6] The reason the stock of money fell is that banks responded to the increased reserve requirements by trying to rebuild their excess reserves. That is, the banks did not regard their excess reserves as surplus reserves, but rather as precautionary reserves. This contrasted with the Federal Reserve’s view that the excess reserves were surplus ones, due to a “shortage” of borrowers at banks.

Citation: Steindl, Frank. “Economic Recovery in the Great Depression”. EH.Net Encyclopedia, edited by Robert Whaples. March 16, 2008. URL http://eh.net/encyclopedia/economic-recovery-in-the-great-depression/

An Overview of the Great Depression

Randall Parker, East Carolina University

This article provides an overview of selected events and economic explanations of the interwar era. What follows is not intended to be a detailed and exhaustive review of the literature on the Great Depression, or of any one theory in particular. Rather, it will attempt to describe the “big picture” events and topics of interest. For the reader who wishes more extensive analysis and detail, references to additional materials are also included.

The 1920s

The Great Depression, and the economic catastrophe that it was, is perhaps properly scaled in reference to the decade that preceded it, the 1920s. By conventional macroeconomic measures, this was a decade of brisk economic growth in the United States. Perhaps the moniker “the roaring twenties” summarizes this period most succinctly. The disruptions and shocking nature of World War I had been survived and it was felt the United States was entering a “new era.” In January 1920, the Federal Reserve seasonally adjusted index of industrial production, a standard measure of aggregate economic activity, stood at 81 (1935–39 = 100). When the index peaked in July 1929 it was at 114, for a growth rate of 40.6 percent over this period. Similar rates of growth over the 1920–29 period equal to 47.3 percent and 42.4 percent are computed using annual real gross national product data from Balke and Gordon (1986) and Romer (1988), respectively. Further computations using the Balke and Gordon (1986) data indicate an average annual growth rate of real GNP over the 1920–29 period equal to 4.6 percent. In addition, the relative international economic strength of this country was clearly displayed by the fact that nearly one-half of world industrial output in 1925–29 was produced in the United States (Bernanke, 1983).
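The growth figure follows directly from the index values (a worked check, not in the original): (114 − 81) / 81 ≈ 0.407, essentially the 40.6 percent cited for January 1920 to July 1929, with the small difference reflecting rounding of the underlying index.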

Consumer Durables Market

The decade of the 1920s also saw major innovations in the consumption behavior of households. The development of installment credit over this period led to substantial growth in the consumer durables market (Bernanke, 1983). Purchases of automobiles, refrigerators, radios and other such durable goods all experienced explosive growth during the 1920s as small borrowers, particularly households and unincorporated businesses, utilized their access to available credit (Persons, 1930; Bernanke, 1983; Soule, 1947).

Economic Growth in the 1920s

Economic growth during this period was mitigated only somewhat by three recessions. According to the National Bureau of Economic Research (NBER) business cycle chronology, two of these recessions were from May 1923 through July 1924 and October 1926 through November 1927. Both of these recessions were very mild and unremarkable. In contrast, the 1920s began with a recession lasting 18 months from the peak in January 1920 until the trough of July 1921. Original estimates of real GNP from the Commerce Department showed that real GNP fell 8 percent between 1919 and 1920 and another 7 percent between 1920 and 1921 (Romer, 1988). The behavior of prices contributed to the naming of this recession “the Depression of 1921,” as the implicit price deflator for GNP fell 16 percent and the Bureau of Labor Statistics wholesale price index fell 46 percent between 1920 and 1921. Although thought to be severe, Romer (1988) has argued that the so-called “postwar depression” was not as severe as once thought. While the deflation from war-time prices was substantial, revised estimates of real GNP show falls in output of only 1 percent between 1919 and 1920 and 2 percent between 1920 and 1921. Romer (1988) also argues that the behaviors of output and prices are inconsistent with the conventional explanation of the Depression of 1921 being primarily driven by a decline in aggregate demand. Rather, the deflation and the mild recession are better understood as resulting from a decline in aggregate demand together with a series of positive supply shocks, particularly in the production of agricultural goods, and significant decreases in the prices of imported primary commodities. Overall, the upshot is that the growth path of output was hardly impeded by the three minor downturns, so that the decade of the 1920s can properly be viewed economically as a very healthy period.

Fed Policies in the 1920s

Friedman and Schwartz (1963) label the 1920s “the high tide of the Reserve System.” As they explain, the Federal Reserve became increasingly confident in the tools of policy and in its knowledge of how to use them properly. The synchronous movements of economic activity and explicit policy actions by the Federal Reserve did not go unnoticed. Taking the next step and concluding there was cause and effect, the Federal Reserve in the 1920s began to use monetary policy as an implement to stabilize business cycle fluctuations. “In retrospect, we can see that this was a major step toward the assumption by government of explicit continuous responsibility for economic stability. As the decade wore on, the System took – and perhaps even more was given – credit for the generally stable conditions that prevailed, and high hopes were placed in the potency of monetary policy as then administered” (Friedman and Schwartz, 1963).

The giving/taking of credit to/by the Federal Reserve has particular value pertaining to the recession of 1920–21. Although suggesting the Federal Reserve probably tightened too much, too late, Friedman and Schwartz (1963) call this episode “the first real trial of the new system of monetary control introduced by the Federal Reserve Act.” It is clear from the history of the time that the Federal Reserve felt as though it had successfully passed this test. The data showed that the economy had quickly recovered and brisk growth followed the recession of 1920–21 for the remainder of the decade.

Questionable Lessons “Learned” by the Fed

Moreover, Eichengreen (1992) suggests that the episode of 1920–21 led the Federal Reserve System to believe that the economy could be successfully deflated or “liquidated” without paying a severe penalty in terms of reduced output. This conclusion, however, proved to be mistaken at the onset of the Depression. As argued by Eichengreen (1992), the Federal Reserve did not appreciate the extent to which the successful deflation could be attributed to the unique circumstances that prevailed during 1920–21. The European economies were still devastated after World War I, so the demand for United States’ exports remained strong many years after the War. Moreover, the gold standard was not in operation at the time. Therefore, European countries were not forced to match the deflation initiated in the United States by the Federal Reserve (explained below pertaining to the gold standard hypothesis).

The implication is that the Federal Reserve thought that deflation could be generated with little effect on real economic activity. Therefore, the Federal Reserve was not vigorous in fighting the Great Depression in its initial stages. It viewed the early years of the Depression as another opportunity to successfully liquidate the economy, especially after the perceived speculative excesses of the 1920s. However, the state of the economic world in 1929 was not a duplicate of 1920–21. By 1929, the European economies had recovered and the interwar gold standard was a vehicle for the international transmission of deflation. Deflation in 1929 would not operate as it did in 1920–21. The Federal Reserve failed to understand the economic implications of this change in the international standing of the United States’ economy. The result was that the Depression was permitted to spiral out of control and was made much worse than it otherwise would have been had the Federal Reserve not considered it to be a repeat of the 1920–21 recession.

The Beginnings of the Great Depression

In January 1928 the seeds of the Great Depression, whenever they were planted, began to germinate. For it is around this time that two of the most prominent explanations for the depth, length, and worldwide spread of the Depression first came to be manifest. Without any doubt, the economics profession would come to a firm consensus around the idea that the economic events of the Great Depression cannot be properly understood without a solid linkage to both the behavior of the supply of money together with Federal Reserve actions on the one hand and the flawed structure of the interwar gold standard on the other.

It is well documented that many public officials, such as President Herbert Hoover and members of the Federal Reserve System in the latter 1920s, were intent on ending what they perceived to be the speculative excesses that were driving the stock market boom. Moreover, as explained by Hamilton (1987), despite plentiful denials to the contrary, the Federal Reserve assumed the role of “arbiter of security prices.” Although there continues to be debate as to whether or not the stock market was overvalued at the time (White, 1990; DeLong and Shleifer, 1991), the main point is that the Federal Reserve believed there to be a speculative bubble in equity values. Hamilton (1987) describes how the Federal Reserve, intending to “pop” the bubble, embarked on a highly contractionary monetary policy in January 1928. Between December 1927 and July 1928 the Federal Reserve conducted $393 million of open market sales of securities so that only $80 million remained in the Open Market account. Buying rates on bankers’ acceptances were raised from 3 percent in January 1928 to 4.5 percent by July, reducing Federal Reserve holdings of such bills by $193 million, leaving a total of only $185 million of these bills on balance. Further, the discount rate was increased from 3.5 percent to 5 percent, the highest level since the recession of 1920–21. “In short, in terms of the magnitudes consciously controlled by the Fed, it would be difficult to design a more contractionary policy than that initiated in January 1928” (Hamilton, 1987).

The pressure did not stop there, however. The death of Federal Reserve Bank of New York President Benjamin Strong and the subsequent control of policy ascribed to Adolph Miller of the Federal Reserve Board ensured that the fall in the stock market was going to be made a reality. Miller believed the speculative excesses of the stock market were hurting the economy, and the Federal Reserve continued attempting to put an end to this perceived harm (Cecchetti, 1998). The amount of Federal Reserve credit that was being extended to market participants in the form of broker loans became an issue in 1929. The Federal Reserve adamantly discouraged lending that was collateralized by equities. The intentions of the Board of Governors of the Federal Reserve were made clear in a letter dated February 2, 1929 sent to Federal Reserve banks. In part the letter read:

The board has no disposition to assume authority to interfere with the loan practices of member banks so long as they do not involve the Federal reserve banks. It has, however, a grave responsibility whenever there is evidence that member banks are maintaining speculative security loans with the aid of Federal reserve credit. When such is the case the Federal reserve bank becomes either a contributing or a sustaining factor in the current volume of speculative security credit. This is not in harmony with the intent of the Federal Reserve Act, nor is it conducive to the wholesome operation of the banking and credit system of the country. (Board of Governors of the Federal Reserve 1929: 93–94, quoted from Cecchetti, 1998)

The deflationary pressure to stock prices had been applied. It was now a question of when the market would break. Although the effects were not immediate, the wait was not long.

The Economy Stumbles

The NBER business cycle chronology dates the start of the Great Depression in August 1929. For this reason many have said that the Depression started on Main Street and not Wall Street. Be that as it may, the stock market plummeted in October of 1929. The bursting of the speculative bubble had been achieved and the economy was now headed in an ominous direction. The Federal Reserve’s seasonally adjusted index of industrial production stood at 114 (1935–39 = 100) in August 1929. By October it had fallen to 110 for a decline of 3.5 percent (annualized percentage decline = 14.7 percent). After the crash, the incipient recession intensified, with the industrial production index falling from 110 in October to 100 in December 1929, or 9 percent (annualized percentage decline = 41 percent). In 1930, the index fell further from 100 in January to 79 in December, or an additional 21 percent.
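
The annualized figures quoted in parentheses simply restate a short-period decline at an annual rate. The sketch below shows one common convention, geometric compounding of the observed decline over twelve months; it is my own illustration, and the figures in the text appear to rest on a somewhat different convention or on unrounded index values, so the numbers will not match exactly.

    # Compound annualization of a decline observed over a given number of months.
    def annualized_decline(start, end, months):
        """Express a decline from start to end over `months` months at an annual rate."""
        return (1.0 - (end / start) ** (12.0 / months)) * 100.0

    # Industrial production index: October to December 1929, 110 down to 100.
    print(annualized_decline(110, 100, 2))   # roughly 44 percent at an annual rate under this convention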

Links between the Crash and the Depression?

While popular history treats the crash and the Depression as one and the same event, economists know that they were not. But there is no doubt that the crash was one of the things that got the ball rolling. Several authors have offered explanations for the linkage between the crash and the recession of 1929–30. Mishkin (1978) argues that the crash and an increase in liabilities led to a deterioration in households’ balance sheets. The reduced liquidity led consumers to defer consumption of durable goods and housing and thus contributed to a fall in consumption. Temin (1976) suggests that the fall in stock prices had a negative wealth effect on consumption, but attributes only a minor role to this given that stocks were not a large fraction of total wealth; the stock market in 1929, although falling dramatically, remained above the value it had achieved in early 1928, and the propensity to consume from wealth was small during this period. Romer (1990) provides evidence suggesting that if the stock market were thought to be a predictor of future economic activity, then the crash can rightly be viewed as a source of increased consumer uncertainty that depressed spending on consumer durables and accelerated the decline that had begun in August 1929. Flacco and Parker (1992) confirm Romer’s findings using different data and alternative estimation techniques.

Looking back on the behavior of the economy during the year of 1930, industrial production declined 21 percent, the consumer price index fell 2.6 percent, the supply of high-powered money (that is, the liabilities of the Federal Reserve that are usable as money, consisting of currency in circulation and bank reserves; also called the monetary base) fell 2.8 percent, the nominal supply of money as measured by M1 (the product of the monetary base multiplied by the money multiplier) dipped 3.5 percent and the ex post real interest rate turned out to be 11.3 percent, the highest it had been since the recession of 1920–21 (Hamilton, 1987). In spite of this, when put into historical context, there was no reason to view the downturn of 1929–30 as historically unprecedented. Its magnitude was comparable to that of many recessions that had previously occurred. Perhaps there was justifiable optimism in December 1930 that the economy might even shake off the negative movement and embark on the path to recovery, rather like what had occurred after the recession of 1920–21 (Bernanke, 1983). As we know, the bottom would not come for another 27 months.
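
Two of the definitions used in this paragraph can be made concrete with a short sketch. M1 is treated here as the monetary base times the money multiplier, and the ex post real interest rate is the nominal rate less the inflation that actually occurred, so deflation raises it. The numbers below are hypothetical and serve only to illustrate the relationships, not to reproduce the 1930 data.

    # Hypothetical numbers, for illustration of the definitions only.
    monetary_base = 7.0       # billions of dollars (illustrative)
    money_multiplier = 3.8    # dollars of M1 per dollar of base (illustrative)
    m1 = monetary_base * money_multiplier
    print(m1)                 # M1 = monetary base x money multiplier

    nominal_rate = 0.05       # 5 percent nominal interest rate (illustrative)
    actual_inflation = -0.05  # prices actually fell 5 percent (illustrative)
    ex_post_real_rate = nominal_rate - actual_inflation
    print(ex_post_real_rate)  # 0.10: deflation pushes the realized real rate up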

The Economy Crumbles

Banking Failures

During 1931, there was a “change in the character of the contraction” (Friedman and Schwartz, 1963). Beginning in October 1930 and lasting until December 1930, the first of a series of banking panics now accompanied the downward spasms of the business cycle. Although bank failures had occurred throughout the 1920s, the magnitude of the failures that occurred in the early 1930s was of a different order altogether (Bernanke, 1983). The absence of any type of deposit insurance resulted in the contagion of the panics being spread to sound financial institutions and not just those on the margin.

Traditional Methods of Combating Bank Runs Not Used

Moreover, institutional arrangements that had existed in the private banking system before 1913 to provide liquidity – to convert assets into cash – and to fight bank runs were not exercised after the creation of the Federal Reserve System. For example, during the panic of 1907, the effects of the financial upheaval had been contained through a combination of lending by clearinghouses (associations of private banks) and the suspension of deposit convertibility into currency. While these countermeasures enacted by private banks did not prevent bank runs and financial panic, they lessened the economic impact to a significant extent, and the economy quickly recovered in 1908. The aftermath of the panic of 1907 and the desire to have a central authority to combat the contagion of financial disruptions were among the factors that led to the establishment of the Federal Reserve System. After the creation of the Federal Reserve, clearinghouse lending and suspension of deposit convertibility by private banks were not undertaken. Because the Federal Reserve was believed to be the “lender of last resort,” responsibility for fighting bank runs was apparently thought to be the domain of the central bank (Friedman and Schwartz, 1963; Bernanke, 1983). Unfortunately, when the banking panics came in waves and the financial system was collapsing, being the “lender of last resort” was a responsibility that the Federal Reserve either could not or would not assume.

Money Supply Contracts

The economic effects of the banking panics were devastating. Aside from the obvious impact of the closing of failed banks and the subsequent loss of deposits by bank customers, the money supply accelerated its downward spiral. Although the economy had flattened out after the first wave of bank failures in October–December 1930, with the industrial production index steadying from 79 in December 1930 to 80 in April 1931, the remainder of 1931 brought a series of shocks from which the economy was not to recover for some time.

Second Wave of Banking Failure

In May, the failure of Austria’s largest bank, the Kreditanstalt, touched off financial panics in Europe. In September 1931, having had enough of the distress associated with the international transmission of economic depression, Britain abandoned its participation in the gold standard. Further, just as the United States’ economy appeared to be trying to begin recovery, the second wave of bank failures hit the financial system in June and did not abate until December. In addition, the Hoover administration in December 1931, adhering to its principles of limited government, embarked on a campaign to balance the federal budget. Tax increases resulted the following June, just as the economy was to hit the first low point of its so-called “double bottom” (Hoover, 1952).

The results of these events are now evident. Between January and December 1931 the industrial production index declined from 78 to 66, or 15.4 percent, the consumer price index fell 9.4 percent, the nominal supply of M1 dipped 5.7 percent, the ex post real interest rate remained at 11.3 percent, and although the supply of high-powered money actually increased 5.5 percent, the currency–deposit and reserve–deposit ratios began their ascent, and thus the money multiplier started its plunge (Hamilton, 1987). Whereas the economy had flattened out in the spring of 1931, by December output, the money supply, and the price level were all on negative growth paths that were dragging the economy deeper into depression.
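
The mechanics behind this paragraph are worth spelling out. In the textbook formulation the money multiplier is m = (1 + c)/(c + r), where c is the currency–deposit ratio and r is the reserve–deposit ratio, and the money supply is the multiplier times the stock of high-powered money (the monetary base). The sketch below, with hypothetical ratios of my own choosing, shows why rising c and r drag the money supply down even while the base itself is growing.

    # Textbook money multiplier with hypothetical (not historical) ratios.
    def money_multiplier(c, r):
        """m = (1 + c) / (c + r), with c the currency-deposit and r the reserve-deposit ratio."""
        return (1.0 + c) / (c + r)

    base_1930, c_1930, r_1930 = 7.0, 0.10, 0.15   # illustrative starting point
    base_1931, c_1931, r_1931 = 7.4, 0.16, 0.19   # base up about 5 percent, both ratios up

    m1_1930 = money_multiplier(c_1930, r_1930) * base_1930
    m1_1931 = money_multiplier(c_1931, r_1931) * base_1931
    print(m1_1930, m1_1931)   # the money supply falls despite the larger base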

Third Wave of Banking Failure

The economic difficulties were far from over. The economy displayed some evidence of recovery in late summer/early fall of 1932. However, in December 1932 the third, and largest, wave of banking panics hit the financial markets and the collapse of the economy arrived with the business cycle hitting bottom in March 1933. Industrial production between January 1932 and March 1933 fell an additional 15.6 percent. For the combined years of 1932 and 1933, the consumer price index fell a cumulative 16.2 percent, the nominal supply of M1 dropped 21.6 percent, the nominal M2 money supply fell 34.7 percent, and although the supply of high-powered money increased 8.4 percent, the currency–deposit and reserve–deposit ratios accelerated their ascent. Thus the money multiplier continued on a downward plunge that was not arrested until March 1933. Similar behaviors for real GDP, prices, money supplies and other key macroeconomic variables occurred in many European economies as well (Snowdon and Vane, 1999; Temin, 1989).

An examination of the macroeconomic data in August 1929 compared to March 1933 provides a stark contrast. The unemployment rate of 3 percent in August 1929 was at 25 percent in March 1933. The industrial production index of 114 in August 1929 was at 54 in March 1933, or a 52.6 percent decrease. The money supply had fallen 35 percent, prices plummeted by about 33 percent, and more than one-third of banks in the United States were either closed or taken over by other banks. The “new era” ushered in by “the roaring twenties” was over. Roosevelt took office in March 1933, a nationwide bank holiday was declared from March 6 until March 13, and the United States abandoned the international gold standard in April 1933. Recovery commenced immediately and the economy began its long path back to the pre-1929 secular growth trend.

Table 1 summarizes the drop in industrial production in the major economies of Western Europe and North America. Table 2 gives gross national product estimates for the United States from 1928 to 1941. The constant price series adjusts for inflation and deflation.

Table 1
Indices of Total Industrial Production, 1927 to 1935 (1929 = 100)

1927 1928 1929 1930 1931 1932 1933 1934 1935
Britain 95 94 100 94 86 89 95 105 114
Canada 85 94 100 91 78 68 69 82 90
France 84 94 100 99 85 74 83 79 77
Germany 95 100 100 86 72 59 68 83 96
Italy 87 99 100 93 84 77 83 85 99
Netherlands 87 94 100 109 101 90 90 93 95
Sweden 85 88 100 102 97 89 93 111 125
U.S. 85 90 100 83 69 55 63 69 79

Source: Industrial Statistics, 1900-57 (Paris, OEEC, 1958), Table 2.

Table 2
U.S. GNP at Constant (1929) and Current Prices, 1928-1941

Year GNP at constant (1929) prices (billions of $) GNP at current prices (billions of $)
1928 98.5 98.7
1929 104.4 104.6
1930 95.1 91.2
1931 89.5 78.5
1932 76.4 58.6
1933 74.2 56.1
1934 80.8 65.5
1935 91.4 76.5
1936 100.9 83.1
1937 109.1 91.2
1938 103.2 85.4
1939 111.0 91.2
1940 121.0 100.5
1941 131.7 124.7

Contemporary Explanations

The economics profession during the 1930s was at a loss to explain the Depression. The most prominent conventional explanations were of two types. First, some observers at the time firmly grounded their explanations on the two pillars of classical macroeconomic thought, Say’s Law and the belief in the self-equilibrating powers of the market. Many argued that it was simply a question of time before wages and prices adjusted fully enough for the economy to return to full employment and achieve the realization of the putative axiom that “supply creates its own demand.” Second, the Austrian school of thought argued that the Depression was the inevitable result of overinvestment during the 1920s. The best remedy for the situation was to let the Depression run its course so that the economy could be purified from the negative effects of the false expansion. Government intervention was viewed by the Austrian school as a mechanism that would simply prolong the agony and make any subsequent depression worse than it would ordinarily be (Hayek, 1966; Hayek, 1967).

Liquidationist Theory

The Hoover administration and the Federal Reserve Board also contained several so-called “liquidationists.” These individuals basically believed that economic agents should be forced to re-arrange their spending proclivities and alter their alleged profligate use of resources. If it took mass bankruptcies to produce this result and wipe the slate clean so that everyone could have a fresh start, then so be it. The liquidationists viewed the events of the Depression as an economic penance for the speculative excesses of the 1920s. Thus, the Depression was the price that was being paid for the misdeeds of the previous decade. This is perhaps best exemplified in the well-known quotation of Treasury Secretary Andrew Mellon, who advised President Hoover to “Liquidate labor, liquidate stocks, liquidate the farmers, liquidate real estate.” Mellon continued, “It will purge the rottenness out of the system. High costs of living and high living will come down. People will work harder, live a more moral life. Values will be adjusted, and enterprising people will pick up the wrecks from less competent people” (Hoover, 1952). Hoover apparently followed this advice as the Depression wore on. He continued to reassure the public that if the principles of orthodox finance were faithfully followed, recovery would surely be the result.

The business press at the time was not immune from such liquidationist prescriptions either. The Commercial and Financial Chronicle, in an August 3, 1929 editorial entitled “Is Not Group Speculating Conspiracy, Fostering Sham Prosperity?” complained of the economy being replete with profligate spending including:

(a) The luxurious diversification of diet advantageous to dairy men … and fruit growers …; (b) luxurious dressing … more silk and rayon …; (c) free spending for automobiles and their accessories, gasoline, house furnishings and equipment, radios, travel, amusements and sports; (d) the displacement from the farms by tractors and autos of produce-consuming horses and mules to a number aggregating 3,700,000 for the period 1918–1928 … (e) the frills of education to thousands for whom places might better be reserved at bench or counter or on the farm. (Quoted from Nelson, 1991)

Persons, in a paper which appeared in the November 1930 Quarterly Journal of Economics, demonstrates that some academic economists also held similar liquidationist views.

Although certainly not universal, the descriptions above suggest that no small part of the conventional wisdom of the time viewed the Depression as a penance for past sins. In addition, it was thought that the economy would be restored to full employment equilibrium once wages and prices adjusted sufficiently: Say’s Law would ensure the economy’s return to health, with supply creating its own demand sufficient to restore prosperity, if the system were simply left to work its way through. In his memoirs published in 1952, 20 years after his election defeat, Herbert Hoover continued to maintain steadfastly that if Roosevelt and the New Dealers had stuck to the policies his administration put in place, the economy would have made a full recovery within 18 months of the election of 1932. The prescription was to intensify one’s resolve to “stay the course”; all would be well in time if the country simply “took its medicine.” In hindsight, it challenges the imagination to think up worse policy prescriptions for the events of 1929–33.

Modern Explanations

There remains considerable debate regarding the economic explanations for the behavior of the business cycle between August 1929 and March 1933. This section describes the main hypotheses that have been presented in the literature attempting to explain the causes for the depth, protracted length, and worldwide propagation of the Great Depression.

The United States’ experience, considering the preponderance of empirical results and historical simulations contained in the economic literature, can largely be accounted for by the monetary hypothesis of Friedman and Schwartz (1963) together with the nonmonetary/financial hypotheses of Bernanke (1983) and Fisher (1933). That is, most, but not all, of the characteristic phases of the business cycle and depth to which output fell from 1929 to 1933 can be accounted for by the monetary and nonmonetary/financial hypotheses. The international experience, well documented in Choudri and Kochin (1980), Hamilton (1988), Temin (1989), Bernanke and James (1991), and Eichengreen (1992), can be properly understood as resulting from a flawed interwar gold standard. Each of these hypotheses is explained in greater detail below.

Nonmonetary/Nonfinancial Theories

It should be noted that I do not provide a detailed treatment of the nonmonetary/nonfinancial theories of the Great Depression. These theories, including Temin’s (1976) focus on an autonomous decline in consumption, the collapse of housing construction documented in Anderson and Butkiewicz (1980), the effects of the stock market crash, the uncertainty hypothesis of Romer (1990), and the Smoot–Hawley Tariff Act of 1930, are all worthy of mention and can rightly be apportioned some of the responsibility for initiating the Depression. However, any theory of the Depression must be able to account for the protracted problems associated with the punishing deflation imposed on the United States and the world during that era. While the nonmonetary/nonfinancial theories go a long way toward accounting for the impetus for, and the first year of, the Depression, my reading of the empirical results of the economic literature indicates that they do not have the explanatory power of the three other theories mentioned above to account for the depths to which the economy plunged.

Moreover, recent research by Olney (1999) argues convincingly that the decline in consumption was not autonomous at all. Rather, the decline resulted because high consumer indebtedness threatened future consumption spending: default was expensive. Olney shows that households were shouldering an unprecedented burden of installment debt – especially for automobiles. In addition, down payments were large and contracts were short. Missed installment payments triggered repossession, reducing consumer wealth in 1930 because households lost all acquired equity. Cutting consumption was the only viable strategy in 1930 for avoiding default.

The Monetary Hypothesis

In the review of the economic history of the Depression above, it was mentioned that the supply of money fell by 35 percent, prices dropped by about 33 percent, and one-third of all banks vanished. Milton Friedman and Anna Schwartz, in their 1963 book A Monetary History of the United States, 1867–1960, call this massive drop in the supply of money “The Great Contraction.”

Friedman and Schwartz (1963) discuss and painstakingly document the synchronous movements of the real economy with the disruptions that occurred in the financial sector. They point out that the series of bank failures that occurred beginning in October 1930 worsened economic conditions in two ways. First, bank shareholder wealth was reduced as banks failed. Second, and most importantly, the bank failures were exogenous shocks and led to the drastic decline in the money supply. The persistent deflation of the 1930s follows directly from this “great contraction.”

Criticisms of Fed Policy

However, this raises an important question: Where was the Federal Reserve while the money supply and the financial system were collapsing? If the Federal Reserve was created in 1913 primarily to be the “lender of last resort” for troubled financial institutions, it was failing miserably. Friedman and Schwartz pin the blame squarely on the Federal Reserve and the failure of monetary policy to offset the contractions in the money supply. As the money multiplier continued on its downward path, the monetary base, rather than being aggressively increased, simply drifted slightly upward along a gently sloping path. As banks were failing in waves, was the Federal Reserve attempting to contain the panics by aggressively lending to banks scrambling for liquidity? The unfortunate answer is “no.” When the panics were occurring, was there discussion of suspending deposit convertibility or suspension of the gold standard, both of which had been successfully employed in the past? Again the unfortunate answer is “no.” Did the Federal Reserve consider the fact that it had an abundant supply of free gold, and therefore that monetary expansion was feasible? Once again the unfortunate answer is “no.” The argument can be summarized by the following quotation:

At all times throughout the 1929–33 contraction, alternative policies were available to the System by which it could have kept the stock of money from falling, and indeed could have increased it at almost any desired rate. Those policies did not involve radical innovations. They involved measures of a kind the System had taken in earlier years, of a kind explicitly contemplated by the founders of the System to meet precisely the kind of banking crisis that developed in late 1930 and persisted thereafter. They involved measures that were actually proposed and very likely would have been adopted under a slightly different bureaucratic structure or distribution of power, or even if the men in power had had somewhat different personalities. Until late 1931 – and we believe not even then – the alternative policies involved no conflict with the maintenance of the gold standard. Until September 1931, the problem that recurrently troubled the System was how to keep the gold inflows under control, not the reverse. (Friedman and Schwartz, 1963)

The inescapable conclusion is that it was a failure of the policies of the Federal Reserve System in responding to the crises of the time that made the Depression as bad as it was. If monetary policy had responded differently, the economic events of 1929–33 need not have been as they occurred. This assertion is supported by the results of Fackler and Parker (1994). Using counterfactual historical simulations, they show that if the Federal Reserve had kept the M1 money supply growing along its pre-October 1929 trend of 3.3 percent annually, most of the Depression would have been averted. McCallum (1990) also reaches similar conclusions employing a monetary base feedback policy in his counterfactual simulations.

Lack of Leadership at the Fed

Friedman and Schwartz trace the seeds of these regrettable events to the death of Federal Reserve Bank of New York President Benjamin Strong in 1928. Strong’s death altered the locus of power in the Federal Reserve System and left it without effective leadership. Friedman and Schwartz maintain that Strong had the personality, confidence and reputation in the financial community to lead monetary policy and sway policy makers to his point of view. Friedman and Schwartz believe that Strong would not have permitted the financial panics and liquidity crises to persist and affect the real economy. Instead, after Governor Strong died, the conduct of open market operations changed from a five-man committee dominated by the New York Federal Reserve to that of a 12-man committee of Federal Reserve Bank governors. Decisiveness in leadership was replaced by inaction and drift. Others (Temin, 1989; Wicker, 1965) reject this point, claiming the policies of the Federal Reserve in the 1930s were not inconsistent with the policies pursued in the decade of the 1920s.

The Fed’s Failure to Distinguish between Nominal and Real Interest Rates

Meltzer (1976) also points out errors made by the Federal Reserve. His argument is that the Federal Reserve failed to distinguish between nominal and real interest rates. That is, while nominal rates were falling, the Federal Reserve did virtually nothing, since it construed this to be a sign of an “easy” credit market. However, in the face of deflation, real rates were rising and there was in fact a “tight” credit market. Failure to make this distinction led money to be a contributing factor to the initial decline of 1929.

Deflation

Cecchetti (1992) and Nelson (1991) bolster the monetary hypothesis by demonstrating that the deflation during the Depression was anticipated at short horizons, once it was under way. The result, using the Fisher equation, is that high ex ante real interest rates were the transmission mechanism that led from falling prices to falling output. In addition, Cecchetti (1998) and Cecchetti and Karras (1994) argue that if the lower bound of the nominal interest rate is reached, then continued deflation renders the opportunity cost of holding money negative. In this instance the nature of money changes. The rate of deflation now places a floor on the real return that nonmoney assets must provide to make them attractive to hold. If their returns cannot exceed the real return on money holdings, then agents will move their assets into cash, and the result will be negative net investment and a decapitalization of the economy.
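
The transmission mechanism described here rests on the Fisher equation, in which the ex ante real interest rate equals the nominal rate minus expected inflation. The sketch below uses hypothetical numbers of my own to show how anticipated deflation, combined with a nominal rate that cannot fall below zero, drives the ex ante real rate up and makes holding cash attractive.

    # Fisher equation with hypothetical values; not drawn from the article's data.
    def ex_ante_real_rate(nominal_rate, expected_inflation):
        """Real rate expected at the time of the decision (Fisher equation)."""
        return nominal_rate - expected_inflation

    print(ex_ante_real_rate(0.03, 0.02))    # mild expected inflation: real rate of 1 percent
    print(ex_ante_real_rate(0.01, -0.08))   # anticipated deflation: real rate of 9 percent

    # At the zero lower bound, expected deflation of 8 percent means cash itself yields
    # 8 percent in real terms, a floor any nonmoney asset must beat to be worth holding.
    print(ex_ante_real_rate(0.00, -0.08))   # 8 percent real return on simply holding money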

Critics of the Monetary Hypothesis

The monetary hypothesis, however, is not without its detractors. Paul Samuelson observes that the monetary base did not fall during the Depression. Moreover, expecting the Federal Reserve to have aggressively increased the monetary base by whatever amount was necessary to stop the decline in the money supply is hindsight. A course of action for monetary policy such as this was beyond the scope of discussion prevailing at the time. In addition, others, like Moses Abramovitz, point out that the money supply had endogenous components that were beyond the Federal Reserve’s ability to control. Namely, the money supply may have been falling as a result of declining economic activity, or so-called “reverse causation.” Further, the gold standard, to which the United States continued to adhere until March 1933, also tied the hands of the Federal Reserve insofar as gold outflows required the Federal Reserve to contract the supply of money. These views are also contained in Temin (1989) and Eichengreen (1992), as discussed below.

Bernanke (1983) argues that the monetary hypothesis: (i) is not a complete explanation of the link between the financial sector and aggregate output in the 1930s; (ii) does not explain how it was that decreases in the money supply caused output to keep falling over many years, especially since it is widely believed that changes in the money supply only change prices and other nominal economic values in the long run, not real economic values like output; and (iii) is quantitatively insufficient to explain the depth of the decline in output. Bernanke (1983) not only resurrected and sharpened Fisher’s (1933) debt deflation hypothesis, but also made further contributions to what has come to be known as the nonmonetary/financial hypothesis.

The Nonmonetary/Financial Hypothesis

Bernanke (1983), building on the monetary hypothesis of Friedman and Schwartz (1963), presents an alternative interpretation of the way in which the financial crises may have affected output. The argument involves both the effects of debt deflation and the impact that bank panics had on the ability of financial markets to efficiently allocate funds from lenders to borrowers. These nonmonetary/financial theories hold that events in financial markets other than shocks to the money supply can help to account for the paths of output and prices during the Great Depression.

Fisher (1933) asserted that the dominant forces that account for “great” depressions are (nominal) over-indebtedness and deflation. Specifically, he argued that real debt burdens were substantially increased when there were dramatic declines in the price level and nominal incomes. The combination of deflation, falling nominal income and increasing real debt burdens led to debtor insolvency, lowered aggregate demand, and thereby contributed to a continuing decline in the price level and thus further increases in the real burden of debt.
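
A one-line calculation makes Fisher’s point concrete. A debt is fixed in nominal (dollar) terms, so when prices and nominal incomes fall the same dollar obligation commands more real resources. The figures below are hypothetical and chosen only to be roughly on the scale of the deflation described above.

    # Hypothetical illustration of Fisher's debt-deflation arithmetic.
    nominal_debt = 100.0     # dollars owed, fixed by contract
    price_level_0 = 1.00
    price_level_1 = 0.67     # prices fall by roughly one-third, as during 1929-33

    real_burden_0 = nominal_debt / price_level_0
    real_burden_1 = nominal_debt / price_level_1
    print(real_burden_0, real_burden_1)   # the real burden rises by about 50 percent

    # If nominal income falls with prices, the debt-to-income ratio rises in step,
    # which is why deflation pushed many otherwise solvent debtors toward insolvency.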

The “Credit View”

Bernanke (1983), in what is now called the “credit view,” provided additional details to help explain Fisher’s debt deflation hypothesis. He argued that in normal circumstances, an initial decline in prices merely reallocates wealth from debtors to creditors, such as banks. Usually, such wealth redistributions are minor in magnitude and have no first-order impact on the economy. However, in the face of large shocks, deflation in the prices of assets forfeited to banks by debtor bankruptcies leads to a decline in the nominal value of assets on bank balance sheets. For a given value of bank liabilities, also denominated in nominal terms, this deterioration in bank assets threatens insolvency. As banks reallocate away from loans to safer government securities, some borrowers, particularly small ones, are unable to obtain funds, often at any price. Further, if this reallocation is long-lived, the shortage of credit for these borrowers helps to explain the persistence of the downturn. As the disappearance of bank financing forces lower expenditure plans, aggregate demand declines, which again contributes to the downward deflationary spiral. For debt deflation to be operative, it is necessary to demonstrate that there was a substantial build-up of debt prior to the onset of the Depression and that the deflation of the 1930s was at least partially unanticipated at medium- and long-term horizons at the time that the debt was being incurred. Both of these conditions appear to have been in place (Fackler and Parker, 2001; Hamilton, 1992; Evans and Wachtel, 1993).

The Breakdown in Credit Markets

In addition, the financial panics which occurred hindered the credit allocation mechanism. Bernanke (1983) explains that the process of credit intermediation requires substantial information gathering and non-trivial market-making activities. The financial disruptions of 1930–33 are correctly viewed as substantial impediments to the performance of these services and thus impaired the efficient allocation of credit between lenders and borrowers. That is, financial panics and debtor and business bankruptcies resulted in an increase in the real cost of credit intermediation. As the cost of credit intermediation increased, sources of credit for many borrowers (especially households, farmers and small firms) became expensive or even unobtainable at any price. This tightening of credit put downward pressure on aggregate demand and helped turn the recession of 1929–30 into the Great Depression. The empirical support for the validity of the nonmonetary/financial hypothesis during the Depression is substantial (Bernanke, 1983; Fackler and Parker, 1994, 2001; Hamilton, 1987, 1992), although support for the “credit view” for the transmission mechanism of monetary policy in post-World War II economic activity is substantially weaker. In combination, considering the preponderance of empirical results and historical simulations contained in the economic literature, the monetary hypothesis and the nonmonetary/financial hypothesis go a substantial distance toward accounting for the economic experiences of the United States during the Great Depression.

The Role of Pessimistic Expectations

To this combination, the behavior of expectations should also be added. As explained by James Tobin, there was another reason for a “change in the character of the contraction” in 1931. Although Friedman and Schwartz attribute this “change” to the bank panics that occurred, Tobin points out that change also took place because of the emergence of pessimistic expectations. If it was thought that the early stages of the Depression were symptomatic of a recession that was not different in kind from similar episodes in our economic history, and that recovery was a real possibility, the public need not have had pessimistic expectations. Instead the public may have anticipated things would get better. However, after the British left the gold standard, expectations changed in a very pessimistic way. The public may very well have believed that the business cycle downturn was not going to be reversed, but rather was going to get worse than it was. When households and business investors begin to make plans based on the economy getting worse instead of making plans based on anticipations of recovery, the depressing economic effects on consumption and investment of this switch in expectations are common knowledge in the modern macroeconomic literature. For the literature on the Great Depression, the empirical research conducted on the expectations hypothesis focuses almost exclusively on uncertainty (which is not the same thing as pessimistic/optimistic expectations) and its contribution to the onset of the Depression (Romer, 1990; Flacco and Parker, 1992). Although Keynes (1936) writes extensively about the state of expectations and their economic influence, the literature is silent regarding the empirical validity of the expectations hypothesis in 1931–33. Yet, in spite of this, the continued shocks that the United States’ economy received demonstrated that the business cycle downturn of 1931–33 was of a different kind than had previously been known. Once the public believed this to be so and made their plans accordingly, the results had to have been economically devastating. There is no formal empirical confirmation and I have not segregated the expectations hypothesis as a separate hypothesis in the overview. However, the logic of the above argument compels me to be of the opinion that the expectations hypothesis provides an impressive addition to the monetary hypothesis and the nonmonetary/financial hypothesis in accounting for the economic experiences of the United States during the Great Depression.

The Gold Standard Hypothesis

Recent research on the operation of the interwar gold standard has deepened our understanding of the Depression and its international character. The way and manner in which the interwar gold standard was structured and operated provide a convincing explanation of the international transmission of deflation and depression that occurred in the 1930s.

The story has its beginning in the 1870–1914 period. During this time the gold standard functioned as a pegged exchange rate system where certain rules were observed. Namely, it was necessary for countries to permit their money supplies to be altered in response to gold flows in order for the price-specie flow mechanism to function properly. It operated successfully because countries that were gaining gold allowed their money supply to increase and raise the domestic price level to restore equilibrium and maintain the fixed exchange rate of their currency. Countries that were losing gold were obligated to permit their money supply to decrease and generate a decline in their domestic price level to restore equilibrium and maintain the fixed exchange rate of their currency. Eichengreen (1992) discusses and extensively documents that the gold standard of this period functioned as smoothly as it did because of the international commitment countries had to the gold standard and the level of international cooperation exhibited during this time. “What rendered the commitment to the gold standard credible, then, was that the commitment was international, not merely national. That commitment was activated through international cooperation” (Eichengreen, 1992).
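
The price-specie flow mechanism described in this paragraph can be sketched as a simple feedback loop: a country losing gold contracts its money supply, its prices fall, its goods become more competitive, and the gold flow reverses. The stylized two-country loop below is my own construction, intended only to show the equilibrating logic when both countries play by the rules; none of the numbers is historical.

    # Stylized two-country price-specie flow loop (my own illustration).
    # Money supplies are proportional to gold; price levels track money;
    # gold flows away from the country with the higher price level.
    gold = {"A": 120.0, "B": 80.0}
    for _ in range(25):
        price = {country: g / 100.0 for country, g in gold.items()}
        flow = 0.2 * (price["A"] - price["B"]) * 100.0   # gold leaves the high-price country
        gold["A"] -= flow
        gold["B"] += flow
    print(gold)   # both holdings converge toward 100: the mechanism is self-equilibrating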

The gold standard was suspended when the hostilities of World War I broke out. By the end of 1928, major countries such as the United States, the United Kingdom, France and Germany had re-established ties to a functioning fixed exchange rate gold standard. However, Eichengreen (1992) points out that the world in which the gold standard functioned before World War I was not the same world in which the gold standard was being re-established. A credible commitment to the gold standard, as Hamilton (1988) explains, required that a country maintain fiscal soundness and political objectives that ensured the monetary authority could pursue a monetary policy consistent with long-run price stability and continuous convertibility of the currency. Successful operation required these conditions to be in place before the gold standard was re-established. However, many governments during the interwar period went back on the gold standard in the opposite set of circumstances. They re-established ties to the gold standard because they were incapable, due to the political chaos generated after World War I, of fiscal soundness, and did not have political objectives conducive to reforming monetary policy such that it could ensure long-run price stability. “By this criterion, returning to the gold standard could not have come at a worse time or for poorer reasons” (Hamilton, 1988). Kindleberger (1973) stresses the fact that the pre-World War I gold standard functioned as well as it did because of the unquestioned leadership exercised by Great Britain. After World War I and the relative decline of Britain, the United States did not exhibit the same strength of leadership Britain had shown before. The upshot is that it was an unsuitable environment in which to re-establish the gold standard after World War I, and the interwar gold standard was destined to drift in a state of malperformance as no one took responsibility for its proper functioning. However, the problems did not end there.

Flaws in the Interwar International Gold Standard

Lack of Symmetry in the Response of Gold-Gaining and Gold-Losing Countries

The interwar gold standard operated with four structural/technical flaws that almost certainly doomed it to failure (Eichengreen, 1986; Temin, 1989; Bernanke and James, 1991). The first, and most damaging, was the lack of symmetry in the response of gold-gaining countries and gold-losing countries that resulted in a deflationary bias that was to drag the world deeper into deflation and depression. If a country was losing gold reserves, it was required to decrease its money supply to maintain its commitment to the gold standard. Given that a minimum gold reserve had to be maintained and that countries became concerned when the gold reserve fell within 10 percent of this minimum, little gold could be lost before the necessity of monetary contraction, and thus deflation, became a reality. Moreover, with a fractional gold reserve ratio of 40 percent, the result was a decline in the domestic money supply equal to 2.5 times the gold outflow. On the other hand, there was no such constraint on countries that experienced gold inflows. Gold reserves were accumulated without the binding requirement that the domestic money supply be expanded. Thus the price–specie flow mechanism ceased to function and the equilibrating forces of the pre-World War I gold standard were absent during the interwar period. If a country attracting gold reserves were to embark on a contractionary path, the result would be the further extraction of gold reserves from other countries on the gold standard and the imposition of deflation on their economies as well, as they were forced to contract their money supplies. “As it happened, both of the two major gold surplus countries – France and the United States, who at the time together held close to 60 percent of the world’s monetary gold – took deflationary paths in 1928–1929” (Bernanke and James, 1991).
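
The 2.5 figure follows directly from the 40 percent cover ratio: if every dollar of money must be backed by 40 cents of gold, each dollar of gold supports 2.5 dollars of money, so a one-dollar gold outflow forces the money supply down by 2.5 dollars if the ratio is to be maintained. The sketch below simply restates that arithmetic; the size of the gold outflow is hypothetical.

    # Arithmetic behind the asymmetry: a 40 percent gold cover ratio.
    cover_ratio = 0.40
    money_per_unit_gold = 1.0 / cover_ratio    # 2.5 dollars of money per dollar of gold
    gold_outflow = 10.0                        # hypothetical loss of gold reserves
    required_contraction = money_per_unit_gold * gold_outflow
    print(required_contraction)                # 25.0: the money supply must shrink by 2.5 times the outflow

    # A gold-gaining country faced no symmetric requirement to expand by 2.5 times its
    # inflow, which is the deflationary bias described in the text.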

Foreign Exchange Reserves

Second, countries that did not have reserve currencies could hold their minimum reserves in the form of both gold and convertible foreign exchange reserves. If the threat of devaluation of a reserve currency appeared likely, a country holding foreign exchange reserves could divest itself of the foreign exchange, as holding it became a more risky proposition. Further, the convertible reserves were usually only fractionally backed by gold. Thus, if countries were to prefer gold holdings as opposed to foreign exchange reserves for whatever reason, the result would be a contraction in the world money supply as reserves were destroyed in the movement to gold. This effect can be thought of as equivalent to the effect on the domestic money supply in a fractional reserve banking system of a shift in the public’s money holdings toward currency and away from bank deposits.
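
A stylized numerical sketch (my own construction, not from the literature cited) shows why a flight from foreign exchange reserves into gold was contractionary for the world as a whole: the gold merely changes hands, but the foreign exchange claim that had also been counted as a reserve disappears.

    # Stylized world reserve arithmetic; all numbers hypothetical.
    gold_reserves = 100.0    # gold held directly by central banks
    fx_reserves = 50.0       # convertible foreign-exchange claims counted as reserves
    print(gold_reserves + fx_reserves)    # world reserve base = 150

    # Central banks convert 30 of foreign exchange into gold at the reserve center.
    # Total gold in the world is unchanged, but the foreign-exchange claim is extinguished.
    fx_reserves -= 30.0
    print(gold_reserves + fx_reserves)    # world reserve base falls to 120

    # With money supplies a multiple of reserves (2.5 under a 40 percent cover ratio),
    # the implied world money supply falls from 375 to 300.
    print(2.5 * 150.0, 2.5 * 120.0)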

The Bank of France and Open Market Operations

Third, the powers of many European central banks were restricted or ruled out outright. In particular, as discussed by Eichengreen (1986), the Bank of France was prohibited from engaging in open market operations, i.e. the purchase or sale of government securities. Given that France was one of the countries amassing gold reserves, this restriction largely prevented it from adhering to the rules of the gold standard. The proper response would have been to expand the supply of money and inflate so as not to continue to attract gold reserves and impose deflation on the rest of the world. This was not done. France continued to accumulate gold until 1932 and did not leave the gold standard until 1936.

Inconsistent Currency Valuations

Lastly, the gold standard was re-established at parities that were unilaterally determined by each individual country. When France returned to the gold standard in 1926, it returned at a parity rate that is believed to have undervalued the franc. When Britain returned to the gold standard in 1925, it returned at a parity rate that is believed to have overvalued the pound. In this situation, the only sustainable equilibrium required the French to inflate their economy in response to the gold inflows. However, given its legacy of inflation during the 1921–26 period, France steadfastly resisted inflation (Eichengreen, 1986). The maintenance of the gold standard and the resistance to inflation were now inconsistent policy objectives. The Bank of France’s inability to conduct open market operations only made matters worse. The accumulation of gold and the exporting of deflation to the world were the result.

The Timing of Recoveries

Taken together, the flaws described above made the interwar gold standard dysfunctional and in the end unsustainable. Looking back, we observe that the record of departure from the gold standard and subsequent recovery was different for many different countries. For some countries recovery came sooner. For some it came later. It is in this timing of departure from the gold standard that recent research has produced a remarkable empirical finding. From the work of Choudri and Kochin (1980), Eichengreen and Sachs (1985), Temin (1989), and Bernanke and James (1991), we now know that the sooner a country abandoned the gold standard, the quicker recovery commenced. Spain, which never restored its participation in the gold standard, missed the ravages of the Depression altogether. Britain left the gold standard in September 1931, and started to recover. Sweden left the gold standard at the same time as Britain, and started to recover. The United States left in March 1933, and recovery commenced. France, Holland, and Poland continued to have their economies struggle after the United States’ recovery began as they continued to adhere to the gold standard until 1936. Only after they left did recovery start; departure from the gold standard freed a country from the ravages of deflation.

The Fed and the Gold Standard: The “Midas Touch”

Temin (1989) and Eichengreen (1992) argue that it was the unbending commitment to the gold standard that generated deflation and depression worldwide. They emphasize that the gold standard required fiscal and monetary authorities around the world to submit their economies to internal adjustment and economic instability in the face of international shocks. Given how the gold standard tied countries together, if the gold parity were to be defended and devaluation was not an option, unilateral monetary actions by any one country were pointless. The end result is that Temin (1989) and Eichengreen (1992) reject Friedman and Schwartz’s (1963) claim that the Depression was caused by a series of policy failures on the part of the Federal Reserve. Actions taken in the United States, according to Temin (1989) and Eichengreen (1992), cannot be properly understood in isolation with respect to the rest of the world. If the commitment to the gold standard was to be maintained, monetary and fiscal authorities worldwide had little choice in responding to the crises of the Depression. Why did the Federal Reserve continue a policy of inaction during the banking panics? Because the commitment to the gold standard, what Temin (1989) has labeled “The Midas Touch,” gave them no choice but to let the banks fail. Monetary expansion and the injection of liquidity would lower interest rates, lead to a gold outflow, and potentially be contrary to the rules of the gold standard. Continued deflation due to gold outflows would begin to call into question the monetary authority’s commitment to the gold standard. “Defending gold parity might require the authorities to sit idly by as the banking system crumbled, as the Federal Reserve did at the end of 1931 and again at the beginning of 1933” (Eichengreen, 1992). Thus, if the adherence to the gold standard were to be maintained, the money supply was endogenous with respect to the balance of payments and beyond the influence of the Federal Reserve.

Eichengreen (1992) concludes further that what made the pre-World War I gold standard so successful was absent during the interwar period: credible commitment to the gold standard activated through international cooperation in its implementation and management. Had these important ingredients of the pre-World War I gold standard been present during the interwar period, twentieth-century economic history may have been very different.

Recovery and the New Deal

March 1933 was the rock bottom of the Depression, and the inauguration of Franklin D. Roosevelt represented a sharp break with the status quo. Upon Roosevelt’s taking office, a bank holiday was declared, the United States left the interwar gold standard the following month, and the government commenced with several measures designed to resurrect the financial system. These measures included: (i) the use of the Reconstruction Finance Corporation, which set about funneling large sums of liquidity to banks and other intermediaries; (ii) the Securities Exchange Act of 1934, which established margin requirements for bank loans used to purchase stocks and bonds and increased information requirements to potential investors; and (iii) the Glass–Steagall Act, which strictly separated commercial banking and investment banking. Although these measures delivered some immediate relief to financial markets, lenders continued to be reluctant to extend credit after the events of 1929–33, and the recovery of financial markets was slow and incomplete. Bernanke (1983) estimates that the United States’ financial system did not begin to shed the inefficiencies under which it was operating until the end of 1935.

The NIRA

Policies designed to promote different economic institutions were enacted as part of the New Deal. The National Industrial Recovery Act (NIRA) was passed on June 16, 1933 and was designed to raise prices and wages. In addition, the Act mandated the formation of planning boards in critical sectors of the economy. The boards were charged with setting output goals for their respective sectors, and the usual result was a restriction of production. In effect, the NIRA was a license for industries to form cartels, and it was struck down as unconstitutional in 1935. The Agricultural Adjustment Act of 1933 was similar legislation designed to reduce output and raise prices in the farming sector. It too was ruled unconstitutional in 1936.

Relief and Jobs Programs

Other policies intended to provide relief directly to people who were destitute and out of work were rapidly enacted. The Civilian Conservation Corps (CCC), the Tennessee Valley Authority (TVA), the Public Works Administration (PWA) and the Federal Emergency Relief Administration (FERA) were set up shortly after Roosevelt took office and provided jobs for the unemployed and grants to states for direct relief. The Civil Works Administration (CWA), created in 1933–34, and the Works Progress Administration (WPA), created in 1935, were also designed to provide work relief to the jobless. The Social Security Act was also passed in 1935. There surely are other programs with similar acronyms that have been left out, but the intent was the same. In the words of Roosevelt himself, addressing Congress in 1938:

Government has a final responsibility for the well-being of its citizenship. If private co-operative endeavor fails to provide work for the willing hands and relief for the unfortunate, those suffering hardship from no fault of their own have a right to call upon the Government for aid; and a government worthy of its name must make fitting response. (Quoted from Polenberg, 2000)

The Depression had shown how inaccurate it was to classify the 1920s as a “new era.” Rather, the “new era,” summarized by Roosevelt’s words above and marked by the government’s direct involvement in the economy, began in March 1933.

The NBER business cycle chronology shows continuous growth from March 1933 until May 1937, at which time a 13-month recession hit the economy. The business cycle rebounded in June 1938 and continued on its upward march to and through the beginning of the United States’ involvement in World War II. The recovery that started in 1933 was impressive, with real GNP experiencing annual rates of growth in the 10 percent range between 1933 and December 1941, excluding the recession of 1937–38 (Romer, 1993). However, as reported by Romer (1993), real GNP did not return to its pre-Depression level until 1937, and real GNP did not catch up to its pre-Depression secular trend until 1942. Indeed, the unemployment rate, having peaked at 25 percent in March 1933, continued to dwell near or above the double-digit range until 1940. It is in this sense that most economists attribute the ending of the Depression to the onset of World War II. The War brought complete recovery as the unemployment rate quickly plummeted after December 1941, reaching its wartime low of below 2 percent.

Explanations for the Pace of Recovery

The question remains, however: if the War completed the recovery, what initiated it and sustained it through the end of 1941? Should we point to the relief programs of the New Deal and the leadership of Roosevelt? Certainly, they had psychological and expectational effects on consumers and investors and helped to heal the suffering experienced during that time. However, as shown by Brown (1956), Peppers (1973), and Raynold, McMillin and Beard (1991), fiscal policy contributed little to the recovery, and certainly could have done much more.

Once again we return to the financial system for answers. The abandonment of the gold standard, the impact this had on the money supply, and the deliverance from the economic effects of deflation would have to be singled out as the most important contributor to the recovery. Romer (1993) stresses that Eichengreen and Sachs (1985) have it right; recovery did not come before the decision to abandon the old gold parity was made operational. Once this became reality, devaluation of the currency permitted expansion in the money supply and inflation which, rather than promoting a policy of beggar-thy-neighbor, allowed countries to escape the deflationary vortex of economic decline. As discussed in connection with the gold standard hypothesis, the simultaneity of leaving the gold standard and recovery is a robust empirical result that reflects more than simple temporal coincidence.

Romer (1993) reports an increase in the monetary base in the United States of 52 percent between April 1933 and April 1937. The M1 money supply virtually matched this increase in the monetary base, with 49 percent growth over the same period. The sources of this increase were twofold. First, devaluation itself permitted an immediate monetary expansion. Second, as Romer (1993) explains, monetary expansion continued into 1934 and beyond as gold flowed to the United States from Europe due to the increasing political unrest and heightened probability of hostilities that began the progression to World War II. Because the Treasury chose not to sterilize the gold inflows, the increase in the money supply matched the increase in the monetary base; this is evidence that the monetary expansion resulted from policy decisions and not from endogenous changes in the money multiplier. The new regime was freed from the constraints of the gold standard, and the policy makers were intent on taking actions of a different nature than what had been done between 1929 and 1933.

Incompleteness of the Recovery before WWII

The Depression had turned a corner and the economy was emerging from the abyss in 1933. However, it still had a long way to go to reach full recovery. Friedman and Schwartz (1963) comment that “the most notable feature of the revival after 1933 was not its rapidity but its incompleteness.” They claim that monetary policy and the Federal Reserve were passive after 1933. The monetary authorities did nothing to stop the fall from 1929 to 1933 and did little to promote the recovery. The Federal Reserve made no effort to increase the stock of high-powered money through the use of either open market operations or rediscounting; Federal Reserve credit outstanding remained “almost perfectly constant from 1934 to mid-1940” (Friedman and Schwartz, 1963). As we have seen above, it was the Treasury that was generating increases in the monetary base at the time by issuing gold certificates equal to the amount of gold reserve inflow and depositing them at the Federal Reserve. When the government spent the money, the Treasury swapped the gold certificates for Federal Reserve notes and this expanded the monetary base (Romer, 1993). Monetary policy was thought to be powerless to promote recovery, and fiscal policy instead became the implement of choice. Ironically, the research shows that fiscal policy, the vehicle now at the center of attention, could have done much more to aid the recovery. There is an easy explanation for why this was so.

The Emergence of Keynes

The economics profession as a whole was at a loss to provide cogent explanations for the events of 1929–33. In the words of Robert Gordon (1998), “economics had lost its intellectual moorings, and it was time for a new diagnosis.” There were no convincing answers regarding why the earlier theories of macroeconomic behavior failed to explain the events that were occurring, and worse, there was no set of principles that established a guide for proper actions in the future. That changed in 1936 with the publication of Keynes’s book The General Theory of Employment, Interest and Money. Perhaps there has been no other person and no other book in economics about which so much has been written. Many consider the arrival of Keynesian thought to have been a “revolution,” although this too is hotly contested (see, for example, Laidler, 1999). The debates that The General Theory generated have been many and long-lasting. There is little that can be said here to add or subtract from the massive literature devoted to the ideas promoted by Keynes, whether they be viewed right or wrong. But the influence over academic thought and economic policy that was generated by The General Theory is not in doubt.

The time was right for a set of ideas that not only explained the Depression’s course of events, but also provided a prescription for remedies that would create better economic performance in the future. Keynes and The General Theory, at the time the events were unfolding, provided just such a package. When all is said and done, we can look back in hindsight and argue endlessly about what Keynes “really meant” or what the “true” contribution of Keynesianism has been to the world of economics. At the time the Depression happened, Keynes represented a new paradigm for young scholars to latch on to. The stage was set for the nurturing of macroeconomics for the remainder of the twentieth century.

This article is a modified version of the introduction to Randall Parker, editor, Reflections on the Great Depression, Edward Elgar Publishing, 2002.

Bibliography

Olney, Martha. “Avoiding Default: The Role of Credit in the Consumption Collapse of 1930.” Quarterly Journal of Economics 114, no. 1 (1999): 319-35.

Anderson, Barry L. and James L. Butkiewicz. “Money, Spending and the Great Depression.” Southern Economic Journal 47 (1980): 388-403.

Balke, Nathan S. and Robert J. Gordon. “Historical Data.” In The American Business Cycle: Continuity and Change, edited by Robert J. Gordon. Chicago: University of Chicago Press, 1986.

Bernanke, Ben S. “Nonmonetary Effects of the Financial Crisis in the Propagation of the Great Depression.” American Economic Review 73, no. 3 (1983): 257-76.

Bernanke, Ben S. and Harold James. “The Gold Standard, Deflation, and Financial Crisis in the Great Depression: An International Comparison.” In Financial Markets and Financial Crises, edited by R. Glenn Hubbard. Chicago: University of Chicago Press, 1991.

Brown, E. Cary. “Fiscal Policy in the Thirties: A Reappraisal.” American Economic Review 46, no. 5 (1956): 857-79.

Cecchetti, Stephen G. “Prices during the Great Depression: Was the Deflation of 1930-1932 Really Anticipated?” American Economic Review 82, no. 1 (1992): 141-56.

Cecchetti, Stephen G. “Understanding the Great Depression: Lessons for Current Policy.” In The Economics of the Great Depression, edited by Mark Wheeler. Kalamazoo, MI: W.E. Upjohn Institute for Employment Research, 1998.

Cecchetti, Stephen G. and Georgios Karras. “Sources of Output Fluctuations during the Interwar Period: Further Evidence on the Causes of the Great Depression.” Review of Economics and Statistics 76, no. 1 (1994): 80-102.

Choudri, Ehsan U. and Levis A. Kochin. “The Exchange Rate and the International Transmission of Business Cycle Disturbances: Some Evidence from the Great Depression.” Journal of Money, Credit, and Banking 12, no. 4 (1980): 565-74.

De Long, J. Bradford and Andrei Shleifer. “The Stock Market Bubble of 1929: Evidence from Closed-end Mutual Funds.” Journal of Economic History 51, no. 3 (1991): 675-700.

Eichengreen, Barry. “The Bank of France and the Sterilization of Gold, 1926–1932.” Explorations in Economic History 23, no. 1 (1986): 56-84.

Eichengreen, Barry. Golden Fetters: The Gold Standard and the Great Depression, 1919–1939. New York: Oxford University Press, 1992.

Eichengreen, Barry and Jeffrey Sachs. “Exchange Rates and Economic Recovery in the 1930s.” Journal of Economic History 45, no. 4 (1985): 925-46.

Evans, Martin and Paul Wachtel. “Were Price Changes during the Great Depression Anticipated? Evidence from Nominal Interest Rates.” Journal of Monetary Economics 32, no. 1 (1993): 3-34.

Fackler, James S. and Randall E. Parker. “Accounting for the Great Depression: A Historical Decomposition.” Journal of Macroeconomics 16 (1994): 193-220.

Fackler, James S. and Randall E. Parker. “Was Debt Deflation Operative during the Great Depression?” East Carolina University Working Paper, 2001.

Fisher, Irving. “The Debt–Deflation Theory of Great Depressions.” Econometrica 1, no. 4 (1933): 337-57.

Flacco, Paul R. and Randall E. Parker. “Income Uncertainty and the Onset of the Great Depression.” Economic Inquiry 30, no. 1 (1992): 154-71.

Friedman, Milton and Anna J. Schwartz. A Monetary History of the United States, 1867–1960. Princeton, NJ: Princeton University Press, 1963.

Gordon, Robert J. Macroeconomics, seventh edition. New York: Addison Wesley, 1998.

Hamilton, James D. “Monetary Factors in the Great Depression.” Journal of Monetary Economics 13 (1987): 1-25.

Hamilton, James D. “Role of the International Gold Standard in Propagating the Great Depression.” Contemporary Policy Issues 6, no. 2 (1988): 67-89.

Hamilton, James D. “Was the Deflation during the Great Depression Anticipated? Evidence from the Commodity Futures Market.” American Economic Review 82, no. 1 (1992): 157-78.

Hayek, Friedrich A. von. Monetary Theory and the Trade Cycle. New York: A. M. Kelley, 1967 (originally published in 1929).

Hayek, Friedrich A. von. Prices and Production. New York: A. M. Kelley, 1966 (originally published in 1931).

Hoover, Herbert. The Memoirs of Herbert Hoover: The Great Depression, 1929–1941. New York: Macmillan, 1952.

Keynes, John M. The General Theory of Employment, Interest, and Money. London: Macmillan, 1936.

Kindleberger, Charles P. The World in Depression, 1929–1939. Berkeley: University of California Press, 1973.

Laidler, David. Fabricating the Keynesian Revolution. Cambridge: Cambridge University Press, 1999.

McCallum, Bennett T. “Could a Monetary Base Rule Have Prevented the Great Depression?” Journal of Monetary Economics 26 (1990): 3-26.

Meltzer, Allan H. “Monetary and Other Explanations of the Start of the Great Depression.” Journal of Monetary Economics 2 (1976): 455-71.

Mishkin, Frederick S. “The Household Balance Sheet and the Great Depression.” Journal of Economic History 38, no. 4 (1978): 918-37.

Nelson, Daniel B. “Was the Deflation of 1929–1930 Anticipated? The Monetary Regime as Viewed by the Business Press.” Research in Economic History 13 (1991): 1-65.

Peppers, Larry. “Full Employment Surplus Analysis and Structural Change: The 1930s.” Explorations in Economic History 10 (1973): 197-210.

Persons, Charles E. “Credit Expansion, 1920 to 1929, and Its Lessons.” Quarterly Journal of Economics 45, no. 1 (1930): 94-130.

Polenberg, Richard. The Era of Franklin D. Roosevelt, 1933–1945: A Brief History with Documents. Boston: Bedford/St. Martin’s, 2000.

Raynold, Prosper, W. Douglas McMillin and Thomas R. Beard. “The Impact of Federal Government Expenditures in the 1930s.” Southern Economic Journal 58, no. 1 (1991): 15-28.

Romer, Christina D. “World War I and the Postwar Depression: A Reappraisal Based on Alternative Estimates of GNP.” Journal of Monetary Economics 22, no. 1 (1988): 91-115.

Romer, Christina D. “The Great Crash and the Onset of the Great Depression.” Quarterly Journal of Economics 105, no. 3 (1990): 597-624.

Romer, Christina D. “The Nation in Depression.” Journal of Economic Perspectives 7, no. 2 (1993): 19-39.

Snowdon, Brian and Howard R. Vane. Conversations with Leading Economists: Interpreting Modern Macroeconomics. Cheltenham, UK: Edward Elgar, 1999.

Soule, George H. Prosperity Decade, From War to Depression: 1917–1929. New York: Rinehart, 1947.

Temin, Peter. Did Monetary Forces Cause the Great Depression? New York: W.W. Norton, 1976.

Temin, Peter. Lessons from the Great Depression. Cambridge, MA: MIT Press, 1989.

White, Eugene N. “The Stock Market Boom and Crash of 1929 Revisited.” Journal of Economic Perspectives 4, no. 2 (1990): 67-83.

Wicker, Elmus. “Federal Reserve Monetary Policy, 1922–33: A Reinterpretation.” Journal of Political Economy 73, no. 4 (1965): 325-43.

1 Bankers’ acceptances are explained at http://www.rich.frb.org/pubs/instruments/ch10.html.

2 Liquidity is the ease of converting an asset into money.

3 The monetary base is measured as the sum of currency in the hands of the public plus reserves in the banking system. It is also called high-powered money since the monetary base is the quantity that gets multiplied into greater amounts of money supply as banks make loans and people spend and thereby create new bank deposits.

4 The money multiplier equals [D/R*(1 + D/C)]/(D/R + D/C + D/E), where D = deposits, R = reserves, C = currency, and E = excess reserves in the banking system.

5 The real interest rate adjusts the observed (nominal) interest rate for inflation or deflation. Ex post refers to the real interest rate after the actual change in prices has been observed; ex ante refers to the real interest rate that is expected at the time the lending occurs.

6 See note 3.

Citation: Parker, Randall. “An Overview of the Great Depression”. EH.Net Encyclopedia, edited by Robert Whaples. March 16, 2008. URL http://eh.net/encyclopedia/an-overview-of-the-great-depression/

Gold Standard

Lawrence H. Officer, University of Illinois at Chicago

The gold standard is the most famous monetary system that ever existed. The periods in which the gold standard flourished, the groupings of countries under the gold standard, and the dates during which individual countries adhered to this standard are delineated in the first section. Then characteristics of the gold standard (what elements make for a gold standard), the various types of the standard (domestic versus international, coin versus other, legal versus effective), and implications for the money supply of a country on the standard are outlined. The longest section is devoted to the “classical” gold standard, the predominant monetary system that ended in 1914 (when World War I began), followed by a section on the “interwar” gold standard, which operated between the two World Wars (the 1920s and 1930s).

Countries and Dates on the Gold Standard

Countries on the gold standard and the periods (or beginning and ending dates) during which they were on gold are listed in Tables 1 and 2 for the classical and interwar gold standards. Types of gold standard, ambiguities of dates, and individual-country cases are considered in later sections. The country groupings reflect the importance of countries to establishment and maintenance of the standard. Center countries — Britain in the classical standard, the United Kingdom (Britain’s legal name since 1922) and the United States in the interwar period — were indispensable to the spread and functioning of the gold standard. Along with the other core countries — France and Germany, and the United States in the classical period — they attracted other countries to adopt the gold standard, in particular, British colonies and dominions, Western European countries, and Scandinavia. Other countries — and, for some purposes, also British colonies and dominions — were in the periphery: acted on, rather than actors, in the gold-standard eras, and generally not as committed to the gold standard.

Table 1. Countries on Classical Gold Standard
Country Type of Gold Standard Period
Center Country
Britaina Coin 1774-1797b, 1821-1914
Other Core Countries
United Statesc Coin 1879-1917d
Francee Coin 1878-1914
Germany Coin 1871-1914
British Colonies and Dominions
Australia Coin 1852-1915
Canadaf Coin 1854-1914
Ceylon Coin 1901-1914
Indiag Exchange (British pound) 1898-1914
Western Europe
Austria-Hungaryh Coin 1892-1914
Belgiumi Coin 1878-1914
Italy Coin 1884-1894
Liechtenstein Coin 1898-1914
Netherlandsj Coin 1875-1914
Portugalk Coin 1854-1891
Switzerland Coin 1878-1914
Scandinavia
Denmarkl Coin 1872-1914
Finland Coin 1877-1914
Norway Coin 1875-1914
Sweden Coin 1873-1914
Eastern Europe
Bulgaria Coin 1906-1914
Greece Coin 1885, 1910-1914
Montenegro Coin 1911-1914
Romania Coin 1890-1914
Russia Coin 1897-1914
Middle East
Egypt Coin 1885-1914
Turkey (Ottoman Empire) Coin 1881m-1914
Asia
Japann Coin 1897-1917
Philippines Exchange (U.S. dollar) 1903-1914
Siam Exchange (British pound) 1908-1914
Straits Settlementso Exchange (British pound) 1906-1914
Mexico and Central America
Costa Rica Coin 1896-1914
Mexico Coin 1905-1913
South America
Argentina Coin 1867-1876, 1883-1885, 1900-1914
Bolivia Coin 1908-1914
Brazil Coin 1888-1889, 1906-1914
Chile Coin 1895-1898
Ecuador Coin 1898-1914
Peru Coin 1901-1914
Uruguay Coin 1876-1914
Africa
Eritrea Exchange (Italian lira) 1890-1914
German East Africa Exchange (German mark) 1885p-1914
Italian Somaliland Exchange (Italian lira) 1889p-1914

a Including colonies (except British Honduras) and possessions without a national currency: New Zealand and certain other Oceanic colonies, South Africa, Guernsey, Jersey, Malta, Gibraltar, Cyprus, Bermuda, British West Indies, British Guiana, British Somaliland, Falkland Islands, other South and West African colonies.
b Or perhaps 1798.
c Including countries and territories with U.S. dollar as exclusive or predominant currency: British Honduras (from 1894), Cuba (from 1898), Dominican Republic (from 1901), Panama (from 1904), Puerto Rico (from 1900), Alaska, Aleutian Islands, Hawaii, Midway Islands (from 1898), Wake Island, Guam, and American Samoa.
d Except August – October 1914.
e Including Tunisia (from 1891) and all other colonies except Indochina.
f Including Newfoundland (from 1895).
g Including British East Africa, Uganda, Zanzibar, Mauritius, and Ceylon (to 1901).
h Including Montenegro (to 1911).
i Including Belgian Congo.
j Including Netherlands East Indies.
k Including colonies, except Portuguese India.
l Including Greenland and Iceland.
m Or perhaps 1883.
n Including Korea and Taiwan.
o Including Borneo.
p Approximate beginning date.

Sources: Bloomfield (1959, pp. 13, 15; 1963), Bordo and Kydland (1995), Bordo and Schwartz (1996), Brown (1940, pp.15-16), Bureau of the Mint (1929), de Cecco (1984, p. 59), Ding (1967, pp. 6- 7), Director of the Mint (1913, 1917), Ford (1985, p. 153), Gallarotti (1995, pp. 272 75), Gunasekera (1962), Hawtrey (1950, p. 361), Hershlag (1980, p. 62), Ingram (1971, p. 153), Kemmerer (1916; 1940, pp. 9-10; 1944, p. 39), Kindleberger (1984, pp. 59-60), Lampe (1986, p. 34), MacKay (1946, p. 64), MacLeod (1994, p. 13), Norman (1892, pp. 83-84), Officer (1996, chs. 3 4), Pamuk (2000, p. 217), Powell (1999, p. 14), Rifaat (1935, pp. 47, 54), Shinjo (1962, pp. 81-83), Spalding (1928), Wallich (1950, pp. 32-36), Yeager (1976, p. 298), Young (1925).

Table 2. Countries on Interwar Gold Standard
Country Type of Gold Standard Exchange-Rate Stabilization Currency Convertibilitya Ending Date
United Kingdomb 1925 1931
Coin 1922e Other Core Countries
Bullion 1928 Germany 1924 1931
Australiag 1925 1930
Exchange 1925 Canadai 1925 1929
Exchange 1925 Indiaj 1925 1931
Coin 1929k South Africa 1925 1933
Austria 1922 1931
Exchange 1926 Danzig 1925 1935
Coin 1925 Italym 1927 1934
Coin 1925 Portugalo 1929 1931
Coin 1925 Scandinavia
Bullion 1927 Finland 1925 1931
Bullion 1928 Sweden 1922 1931
Albania 1922 1939
Exchange 1927 Czechoslovakia 1923 1931
Exchange 1928 Greece 1927 1932
Exchange 1925 Latvia 1922 1931
Coin 1922 Poland 1926 1936
Exchange 1929 Yugoslavia 1925 1932
Egypt 1925 1931
Exchange 1925 Palestine 1927 1931
Exchange 1928 Asia
Coin 1930 Malayat 1925 1931
Coin 1925 Philippines 1922 1933
Exchange 1928 Mexico and Central America
Exchange 1922 Guatemala 1925 1933
Exchange 1922 Honduras 1923 1933
Coin 1925 Nicaragua 1915 1932
Coin 1920 South America
Coin 1927 Bolivia 1926 1931
Exchange 1928 Chile 1925 1931
Coin 1923 Ecuador 1927 1932
Exchange 1927 Peru 1928 1932
Exchange 1928 Venezuela 1923 1930

a And freedom of gold export and import.
b Including colonies (except British Honduras) and possessions without a national currency: Guernsey, Jersey, Malta, Gibraltar, Cyprus, Bermuda, British West Indies, British Guiana, British Somaliland, Falkland Islands, British West African and certain South African colonies, certain Oceanic colonies.
c Including countries and territories with U.S. dollar as exclusive or predominant currency: British Honduras, Cuba, Dominican Republic, Panama, Puerto Rico, Alaska, Aleutian Islands, Hawaii, Midway Islands, Wake Island, Guam, and American Samoa.
d Not applicable; “the United States dollar…constituted the central point of reference in the whole post-war stabilization effort and was throughout the period of stabilization at par with gold.” — Brown (1940, p. 394)
e 1919 for freedom of gold export.
f Including colonies and possessions, except Indochina and Syria.
g Including Papua (New Guinea) and adjoining islands.
h Kenya, Uganda, and Tanganyika.
i Including Newfoundland.
j Including Bhutan, Nepal, British Swaziland, Mauritius, Pemba Island, and Zanzibar.
k 1925 for freedom of gold export.
l Including Luxemburg and Belgian Congo.
m Including Italian Somaliland and Tripoli.
n Including Dutch Guiana and Curacao (Netherlands Antilles).
o Including territories, except Portuguese India.
p Including Liechtenstein.
q Including Greenland and Iceland.
r Including Greater Lebanon.
s Including Korea and Taiwan.
t Including Straits Settlements, Sarawak, Labuan, and Borneo.

Sources: Bett (1957, p. 36), Brown (1940), Bureau of the Mint (1929), Ding (1967, pp. 6-7), Director of the Mint (1917), dos Santos (1996, pp. 191-92), Eichengreen (1992, p. 299), Federal Reserve Bulletin (1928, pp. 562, 847; 1929, pp. 201, 265, 549; 1930, pp. 72, 440; 1931, p. 554; 1935, p. 290; 1936, pp. 322, 760), Gunasekera (1962), Jonung (1984, p. 361), Kemmerer (1954, pp. 301 302), League of Nations (1926, pp. 7, 15; 1927, pp. 165-69; 1929, pp. 208-13; 1931, pp. 265-69; 1937/38, p. 107; 1946, p. 2), Moggridge (1989, p. 305), Officer (1996, chs. 3-4), Powell (1999, pp. 23-24), Spalding (1928), Wallich (1950, pp. 32-37), Yeager (1976, pp. 330, 344, 359); Young (1925, p. 76).

Characteristics of Gold Standards

Types of Gold Standards

Pure Coin and Mixed Standards

In theory, “domestic” gold standards — those that do not depend on interaction with other countries — are of two types: “pure coin” standard and “mixed” (meaning coin and paper, but also called simply “coin”) standard. The two systems share several properties. (1) There is a well-defined and fixed gold content of the domestic monetary unit. For example, the dollar is defined as a specified weight of pure gold. (2) Gold coin circulates as money with unlimited legal-tender power (meaning it is a compulsorily acceptable means of payment of any amount in any transaction or obligation). (3) Privately owned bullion (gold in mass, foreign coin considered as mass, or gold in the form of bars) is convertible into gold coin in unlimited amounts at the government mint or at the central bank, and at the “mint price” (of gold, the inverse of the gold content of the monetary unit). (4) Private parties have no restriction on their holding or use of gold (except possibly that privately created coined money may be prohibited); in particular, they may melt coin into bullion. The effect is as if coin were sold to the monetary authority (central bank or Treasury acting as a central bank) for bullion. It would make sense for the authority to sell gold bars directly for coin, even though not legally required, thus saving the cost of coining. Conditions (3) and (4) commit the monetary authority in effect to transact in coin and bullion in each direction such that the mint price, or gold content of the monetary unit, governs in the marketplace.

Under a pure coin standard, gold is the only money. Under a mixed standard, there are also paper currency (notes) — issued by the government, central bank, or commercial banks — and demand-deposit liabilities of banks. Government or central-bank notes (and central-bank deposit liabilities) are directly convertible into gold coin at the fixed established price on demand. Commercial-bank notes and demand deposits might be converted not directly into gold but rather into gold-convertible government or central-bank currency. This indirect convertibility of commercial-bank liabilities would apply certainly if the government or central- bank currency were legal tender but also generally even if it were not. As legal tender, gold coin is always exchangeable for paper currency or deposits at the mint price, and usually the monetary authority would provide gold bars for its coin. Again, two-way transactions in unlimited amounts fix the currency price of gold at the mint price. The credibility of the monetary-authority commitment to a fixed price of gold is the essence of a successful, ongoing gold-standard regime.

A pure coin standard did not exist in any country during the gold-standard periods. Indeed, over time, gold coin declined from about one-fifth of the world money supply in 1800 (2/3 for gold and silver coin together, as silver was then the predominant monetary standard) to 17 percent in 1885 (1/3 for gold and silver, for an eleven-major-country aggregate), 10 percent in 1913 (15 percent for gold and silver, for the major-country aggregate), and essentially zero in 1928 for the major-country aggregate (Triffin, 1964, pp. 15, 56). See Table 3. The zero figure means not that gold coin did not exist, rather that its main use was as reserves for Treasuries, central banks, and (generally to a lesser extent) commercial banks.

Table 3. Structure of Money: Major-Countries Aggregatea (end of year)
1885 1928
8 50
33 0d
18 21
33 99

a Core countries: Britain, United States, France, Germany. Western Europe: Belgium, Italy, Netherlands, Switzerland. Other countries: Canada, Japan, Sweden.
b Metallic money, minor coin, paper currency, and demand deposits.
c 1885: Gold and silver coin; overestimate, as includes commercial-bank holdings that could not be isolated from coin held outside banks by the public. 1913: Gold and silver coin. 1928: Gold coin.
d Less than 0.5 percent.
e 1885 and 1913: Gold, silver, and foreign exchange. 1928: Gold and foreign exchange.
f Official gold: Gold in official reserves. Money gold: Gold-coin component of money supply.

Sources: Triffin (1964, p. 62), Sayers (1976, pp. 348, 352) for 1928 Bank of England dollar reserves (dated January 2, 1929).

An “international” gold standard, which naturally requires that more than one country be on gold, requires in addition freedom both of international gold flows (private parties are permitted to import or export gold without restriction) and of foreign-exchange transactions (an absence of exchange control). Then the fixed mint prices of any two countries on the gold standard imply a fixed exchange rate (“mint parity”) between the countries’ currencies. For example, the dollar- sterling mint parity was $4.8665635 per pound sterling (the British pound).
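To see where such a parity comes from, one can divide the two mint prices. Using the conventional figures (supplied here only for illustration, not taken from the tables in this article): the United States valued gold at roughly $20.67 per fine troy ounce, and the British mint price of £3 17s 10.5d per standard (eleven-twelfths fine) ounce corresponds to roughly £4.248 per fine ounce, so that

mint parity = U.S. mint price / British mint price = 20.67 / 4.248 ≈ 4.87 dollars per pound,

which agrees, up to the rounding of the mint prices, with the $4.8665635 figure just quoted.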

Gold-Bullion and Gold-Exchange Standards

In principle, a country can choose among four kinds of international gold standards — the pure coin and mixed standards, already mentioned, a gold-bullion standard, and a gold-exchange standard. Under a gold-bullion standard, gold coin neither circulates as money nor is it used as commercial-bank reserves, and the government does not coin gold. The monetary authority (Treasury or central bank) stands ready to transact with private parties, buying or selling gold bars (usable only for import or export, not as domestic currency) for its notes, and generally a minimum size of transaction is specified. For example, in 1925-1931 the Bank of England was on the bullion standard and would sell gold bars only in the minimum amount of 400 fine (pure) ounces, approximately £1699 or $8269. Finally, the monetary authority of a country on a gold-exchange standard buys and sells not gold in any form but rather gold-convertible foreign exchange, that is, the currency of a country that itself is on the gold coin or bullion standard.

Gold Points and Gold Export/Import

A fixed exchange rate (the mint parity) for two countries on the gold standard is an oversimplification that is often made but is misleading. There are costs of importing or exporting gold. These costs include freight, insurance, handling (packing and cartage), interest on money committed to the transaction, risk premium (compensation for risk), normal profit, any deviation of purchase or sale price from the mint price, possibly mint charges, and possibly abrasion (wearing out or removal of gold content of coin — should the coin be sold abroad by weight or as bullion). Expressing the exporting costs as the percent of the amount invested (or, equivalently, as percent of parity), the product of 1/100th of these costs and mint parity (the number of units of domestic currency per unit of foreign currency) is added to mint parity to obtain the gold-export point — the exchange rate at which gold is exported. To obtain the gold-import point, the product of 1/100th of the importing costs and mint parity is subtracted from mint parity.
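As a concrete illustration of this rule (a sketch of my own, not part of the article; the 0.65 percent cost figures are hypothetical round numbers of the same order as the estimates in Table 4 below):

def gold_points(mint_parity, export_cost_pct, import_cost_pct):
    # Gold points in units of domestic currency per unit of foreign currency,
    # following the rule described in the text above.
    gold_export_point = mint_parity * (1 + export_cost_pct / 100.0)
    gold_import_point = mint_parity * (1 - import_cost_pct / 100.0)
    return gold_export_point, gold_import_point

parity = 4.8665635  # dollar-sterling mint parity (dollars per pound)
export_point, import_point = gold_points(parity, 0.65, 0.65)
print(round(export_point, 4), round(import_point, 4))  # about 4.8982 and 4.8349

With these hypothetical costs, an exchange rate above roughly $4.90 per pound would make gold export profitable and a rate below roughly $4.83 would make gold import profitable, which is the situation analyzed in the next paragraph.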

If the exchange rate is greater than the gold-export point, private-sector “gold-point arbitrageurs” export gold, thereby obtaining foreign currency. Conversely, for the exchange rate less than the gold-import point, gold is imported and foreign currency relinquished. Usually the gold is, directly or indirectly, purchased from the monetary authority of the one country and sold to the monetary authority in the other. The domestic-currency cost of the transaction per unit of foreign currency obtained is the gold-export point. That per unit of foreign currency sold is the gold-import point. Also, foreign currency is sold, or purchased, at the exchange rate. Therefore arbitrageurs receive a profit proportional to the exchange-rate/gold-point divergence.

Gold-Point Arbitrage

However, the arbitrageurs’ supply of foreign currency eliminates profit by returning the exchange rate to below the gold-export point. Therefore perfect “gold-point arbitrage” would ensure that the exchange rate has upper limit of the gold-export point. Similarly, the arbitrageurs’ demand for foreign currency returns the exchange rate to above the gold-import point, and perfect arbitrage ensures that the exchange rate has that point as a lower limit. It is important to note what induces the private sector to engage in gold-point arbitrage: (1) the profit motive; and (2) the credibility of the commitment to (a) the fixed gold price and (b) freedom of foreign exchange and gold transactions, on the part of the monetary authorities of both countries.

Gold-Point Spread

The difference between the gold points is called the (gold-point) spread. The gold points and the spread may be expressed as percentages of parity. Estimates of gold points and spreads involving center countries are provided for the classical and interwar gold standards in Tables 4 and 5. Noteworthy is that the spread for a given country pair generally declines over time both over the classical gold standard (evidenced by the dollar-sterling figures) and for the interwar compared to the classical period.

Table 4. Gold-Point Estimates: Classical Gold Standard
Countries Period Gold Pointsa (percent): Exportb Importc Spreadd (percent) Method of Computation
U.S./Britain 1881-1890 0.6585 0.7141 1.3726 PA
U.S./Britain 1891-1900 0.6550 0.6274 1.2824 PA
U.S./Britain 1901-1910 0.4993 0.5999 1.0992 PA
U.S./Britain 1911-1914 0.5025 0.5915 1.0940 PA
France/U.S. 1877-1913 0.6888 0.6290 1.3178 MED
Germany/U.S. 1894-1913 0.4907 0.7123 1.2030 MED
France/Britain 1877-1913 0.4063 0.3964 0.8027 MED
Germany/Britain 1877-1913 0.3671 0.4405 0.8076 MED
Germany/France 1877-1913 0.4321 0.5556 0.9877 MED
Austria/Britain 1912 0.6453 0.6037 1.2490 SE
Netherlands/Britain 1912 0.5534 0.3552 0.9086 SE
Scandinaviae /Britain 1912 0.3294 0.6067 0.9361 SE

a For numerator country.
b Gold-import point for denominator country.
c Gold-export point for denominator country.
d Gold-export point plus gold-import point.
e Denmark, Sweden, and Norway.

Method of Computation: PA = period average. MED = median of exchange-rate-form estimates of various authorities for various dates, converted to percent deviation from parity. SE = single exchange-rate-form estimate, converted to percent deviation from parity.

Sources: U.S./Britain — Officer (1996, p. 174). France/U.S., Germany/U.S., France/Britain, Germany/Britain, Germany/France — Morgenstern (1959, pp. 178-81). Austria/Britain, Netherlands/Britain, Scandinavia/Britain — Easton (1912, pp. 358-63).

Table 5. Gold-Point Estimates: Interwar Gold Standard
Countries Period Gold Pointsa (percent): Exportb Importc Spreadd (percent) Method of Computation
U.S./Britain 1925-1931 0.6287 0.4466 1.0753 PA
U.S./France 1926-1928e 0.4793 0.5067 0.9860 PA
U.S./France 1928-1933f 0.5743 0.3267 0.9010 PA
U.S./Germany 1926-1931 0.8295 0.3402 1.1697 PA
France/Britain 1926 0.2042 0.4302 0.6344 SE
France/Britain 1929-1933 0.2710 0.3216 0.5926 MED
Germany/Britain 1925-1933 0.3505 0.2676 0.6181 MED
Canada/Britain 1929 0.3521 0.3465 0.6986 SE
Netherlands/Britain 1929 0.2858 0.5146 0.8004 SE
Denmark/Britain 1926 0.4432 0.4930 0.9362 SE
Norway/Britain 1926 0.6084 0.3828 0.9912 SE
Sweden/Britain 1926 0.3881 0.3828 0.7709 SE

a For numerator country.
b Gold-import point for denominator country.
c Gold-export point for denominator country.
d Gold-export point plus gold-import point.
e To end of June 1928. French-franc exchange-rate stabilization, but absence of currency convertibility; see Table 2.
f Beginning July 1928. French-franc convertibility; see Table 2.

Method of Computation: PA = period average. MED = median of exchange-rate-form estimates of various authorities for various dates, converted to percent deviation from parity. SE = single exchange-rate-form estimate, converted to percent deviation from parity.

Sources: U.S./Britain — Officer (1996, p. 174). U.S./France, U.S./Germany, France/Britain 1929-1933, Germany/Britain — Morgenstern (1959, pp. 185-87). Canada/Britain, Netherlands/Britain — Einzig (1929, pp. 98-101) [Netherlands/Britain currencies’ mint parity from Spalding (1928, p. 135)]. France/Britain 1926, Denmark/Britain, Norway/Britain, Sweden/Britain — Spalding (1926, pp. 429-30, 436).

The effective monetary standard of a country is distinguishable from its legal standard. For example, a country legally on bimetallism usually is effectively on either a gold or silver monometallic standard, depending on whether its “mint-price ratio” (the ratio of its mint price of gold to mint price of silver) is greater or less than the world price ratio. In contrast, a country might be legally on a gold standard but its banks (and government) have “suspended specie (gold) payments” (refusing to convert their notes into gold), so that the country is in fact on a “paper standard.” The criterion adopted here is that a country is deemed on the gold standard if (1) gold is the predominant effective metallic money, or is the monetary bullion, (2) specie payments are in force, and (3) there is a limitation on the coinage and/or the legal-tender status of silver (the only practical and historical competitor to gold), thus providing institutional or legal support for the effective gold standard emanating from (1) and (2).

Implications for Money Supply

Consider first the domestic gold standard. Under a pure coin standard, the gold in circulation, monetary base, and money supply are all one. With a mixed standard, the money supply is the product of the money multiplier (dependent on the commercial-banks’ reserves/deposit and the nonbank-public’s currency/deposit ratios) and the monetary base (the actual and potential reserves of the commercial banking system, with potential reserves held by the nonbank public). The monetary authority alters the monetary base by changing its gold holdings and its loans, discounts, and securities portfolio (non gold assets, called its “domestic assets”). However, the level of its domestic assets is dependent on its gold reserves, because the authority generates demand liabilities (notes and deposits) by increasing its assets, and convertibility of these liabilities must be supported by a gold reserve, if the gold standard is to be maintained. Therefore the gold standard provides a constraint on the level (or growth) of the money supply.

The international gold standard involves balance-of-payments surpluses settled by gold imports at the gold-import point, and deficits financed by gold exports at the gold-export point. (Within the spread, there are no gold flows and the balance of payments is in equilibrium.) The change in the money supply is then the product of the money multiplier and the gold flow, providing the monetary authority does not change its domestic assets. For a country on a gold- exchange standard, holdings of “foreign exchange” (the reserve currency) take the place of gold. In general, the “international assets” of a monetary authority may consist of both gold and foreign exchange.
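The arithmetic of the last two paragraphs can be made concrete with a small sketch (my own illustration, using invented round numbers and the standard textbook form of the money multiplier):

def money_multiplier(reserves_per_deposit, currency_per_deposit):
    # Textbook multiplier (1 + C/D) / (R/D + C/D), where C/D is the nonbank
    # public's currency/deposit ratio and R/D the banks' reserves/deposit ratio.
    r, c = reserves_per_deposit, currency_per_deposit
    return (1 + c) / (r + c)

m = money_multiplier(reserves_per_deposit=0.10, currency_per_deposit=0.20)  # = 4.0
monetary_base = 1000.0            # gold plus domestic assets, in arbitrary units
money_supply = m * monetary_base  # = 4000.0

# A balance-of-payments surplus settled by a gold inflow of 50, with domestic
# assets held constant, raises the money supply by the multiplier times the flow.
gold_inflow = 50.0
change_in_money_supply = m * gold_inflow  # = 200.0
print(money_supply, change_in_money_supply)

For a country on a gold-exchange standard the same arithmetic applies, with the flow of reserve-currency holdings taking the place of the gold flow.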

The Classical Gold Standard

Dates of Countries Joining the Gold Standard

Table 1 (above) lists all countries that were on the classical gold standard, the gold- standard type to which each adhered, and the period(s) on the standard. Discussion here concentrates on the four core countries. For centuries, Britain was on an effective silver standard under legal bimetallism. The country switched to an effective gold standard early in the eighteenth century, solidified by the (mistakenly) gold-overvalued mint-price ratio established by Isaac Newton, Master of the Mint, in 1717. In 1774 the legal-tender property of silver was restricted, and Britain entered the gold standard in the full sense on that date. In 1798 coining of silver was suspended, and in 1816 the gold standard was formally adopted, ironically during a paper-standard regime (the “Bank Restriction Period,” of 1797-1821), with the gold standard effectively resuming in 1821.

The United States was on an effective silver standard dating back to colonial times, legally bimetallic from 1786, and on an effective gold standard from 1834. The legal gold standard began in 1873-1874, when Acts ended silver-dollar coinage and limited legal tender of existing silver coins. Ironically, again the move from formal bimetallism to a legal gold standard occurred during a paper standard (the “greenback period,” of 1861-1878), with a dual legal and effective gold standard from 1879.

International Shift to the Gold Standard

The rush to the gold standard occurred in the 1870s, with the adherence of Germany, the Scandinavian countries, France, and other European countries. Legal bimetallism shifted from effective silver to effective gold monometallism around 1850, as gold discoveries in the United States and Australia resulted in overvalued gold at the mints. The gold/silver market situation subsequently reversed itself, and, to avoid a huge inflow of silver, many European countries suspended the coinage of silver and limited its legal-tender property. Some countries (France, Belgium, Switzerland) adopted a “limping” gold standard, in which existing former-standard silver coin retained full legal tender, permitting the monetary authority to redeem its notes in silver as well as gold.

As Table 1 shows, most countries were on a gold-coin (always meaning mixed) standard. The gold-bullion standard did not exist in the classical period (although in Britain that standard was embedded in legislation of 1819 that established a transition to restoration of the gold standard). A number of countries in the periphery were on a gold-exchange standard, usually because they were colonies or territories of a country on a gold-coin standard. In situations in which the periphery country lacked even its own coined currency, the gold-exchange standard existed almost by default. Some countries — China, Persia, parts of Latin America — never joined the classical gold standard, instead retaining their silver or bimetallic standards.

Sources of Instability of the Classical Gold Standard

There were three elements making for instability of the classical gold standard. First, the use of foreign exchange as reserves increased as the gold standard progressed. Available end-of- year data indicate that, worldwide, foreign exchange in official reserves (the international assets of the monetary authority) increased by 36 percent from 1880 to 1899 and by 356 percent from 1899 to 1913. In comparison, gold in official reserves increased by 160 percent from 1880 to 1903 but only by 88 percent from 1903 to 1913. (Lindert, 1969, pp. 22, 25) While in 1913 only Germany among the center countries held any measurable amount of foreign exchange — 15 percent of total reserves excluding silver (which was of limited use) — the percentage for the rest of the world was double that for Germany (Table 6). If there were a rush to cash in foreign exchange for gold, reduction or depletion of the gold of reserve-currency countries could place the gold standard in jeopardy.

Table 6. Share of Foreign Exchange in Official Reserves (end of year, percent)
Country 1928b
Excluding Silverb
0 10
0 0c
0d 51
13 16
27 32

a Official reserves: gold, silver, and foreign exchange.
b Official reserves: gold and foreign exchange.
c Less than 0.05 percent.
d Less than 0.5 percent.

Sources: 1913 — Lindert (1969, pp. 10-11). 1928 — Britain: Board of Governors of the Federal Reserve System [cited as BG] (1943, p. 551), Sayers (1976, pp. 348, 352) for Bank of England dollar reserves (dated January 2, 1929). United States: BG (1943, pp. 331, 544), foreign exchange consisting of Federal Reserve Banks holdings of foreign-currency bills. France and Germany: Nurkse (1944, p. 234). Rest of world [computed as residual]: gold, BG (1943, pp. 544-51); foreign exchange, from “total” (Triffin, 1964, p. 66), France, and Germany.

Second, Britain — the predominant reserve-currency country — was in a particularly sensitive situation. Again considering end-of 1913 data, almost half of world foreign-exchange reserves were in sterling, but the Bank of England had only three percent of world gold reserves (Tables 7-8). Defining the “reserve ratio” of the reserve-currency-country monetary authority as the ratio of (i) official reserves to (ii) liabilities to foreign monetary authorities held in financial institutions in the country, in 1913 this ratio was only 31 percent for the Bank of England, far lower than those of the monetary authorities of the other core countries (Table 9). An official run on sterling could easily force Britain off the gold standard. Because sterling was an international currency, private foreigners also held considerable liquid assets in London, and could themselves initiate a run on sterling.

Table 7. Composition of World Official Foreign-Exchange Reserves (end of year, percent)
1913a British pounds 77
2 French francs }2}

}

16
5b

a Excluding holdings for which currency unspecified.
b Primarily Dutch guilders and Scandinavian kroner.

Sources: 1913 — Lindert (1969, pp. 18-19). 1928 — Components of world total: Triffin (1964, pp. 22, 66), Sayers (1976, pp. 348, 352) for Bank of England dollar reserves (dated January 2, 1929), Board of Governors of the Federal Reserve System [cited as BG] (1943, p. 331) for Federal Reserve Banks holdings of foreign-currency bills.

Table 8. Official-Reserves Components: Percent of World Total (end of year)
Country 1928
Gold Foreign Exchange
0 7 United States 27 0a
0b 13 Germany 6 4
95 36

Table 9. Reserve Ratiosa of Reserve-Currency Countries

(end of year)

Country 1928c
Excluding Silverc
0.31 0.33
90.55 5.45
2.38 not available
2.11 not available

a Ratio of official reserves to official liquid liabilities (that is, liabilities to foreign governments and central banks).
b Official reserves: gold, silver, and foreign exchange.
c Official reserves: gold and foreign exchange.

Sources : 1913 — Lindert (1969, pp. 10-11, 19). Foreign-currency holdings for which currency unspecified allocated proportionately to the four currencies based on known distribution. 1928 — Gold reserves: Board of Governors of the Federal Reserve System [cited as BG] (1943, pp. 544, 551). Foreign- exchange reserves: Sayers (1976, pp. 348, 352) for Bank of England dollar reserves (dated January 2, 1929); BG (1943, p. 331) for Federal Reserve Banks holdings of foreign-currency bills. Official liquid liabilities: Triffin (1964, p. 22), Sayers (1976, pp. 348, 352).

Third, the United States, though a center country, was a great source of instability to the gold standard. Its Treasury held a high percentage of world gold reserves (more than that of the three other core countries combined in 1913), resulting in an absurdly high reserve ratio (Tables 7-9). With no central bank and a decentralized banking system, financial crises were frequent. Far from the United States assisting Britain, gold often flowed from the Bank of England to the United States to satisfy increases in U.S. demand for money. Though in economic size the United States was the largest of the core countries, in many years it was a net importer rather than exporter of capital to the rest of the world — the opposite of the other core countries. The political power of silver interests and recurrent financial panics led to imperfect credibility in the U.S. commitment to the gold standard. Runs on banks and runs on the Treasury gold reserve placed the U.S. gold standard near collapse in the early and mid-1890s. During that period, the credibility of the Treasury’s commitment to the gold standard was shaken. Indeed, the gold standard was saved in 1895 (and again in 1896) only by cooperative action of the Treasury and a bankers’ syndicate that stemmed gold exports.

Rules of the Game

According to the “rules of the [gold-standard] game,” central banks were supposed to reinforce, rather than “sterilize” (moderate or eliminate) or ignore, the effect of gold flows on the monetary supply. A gold outflow typically decreases the international assets of the central bank and thence the monetary base and money supply. The central-bank’s proper response is: (1) raise its “discount rate,” the central-bank interest rate for rediscounting securities (cashing, at a further deduction from face value, a short-term security from a financial institution that previously discounted the security), thereby inducing commercial banks to adopt a higher reserves/deposit ratio and therefore decreasing the money multiplier; and (2) decrease lending and sell securities, thereby decreasing domestic assets and thence the monetary base. On both counts the money supply is further decreased. Should the central bank rather increase its domestic assets when it loses gold, it engages in “sterilization” of the gold flow and is decidedly not following the “rules of the game.” The converse argument (involving gold inflow and increases in the money supply) also holds, with sterilization involving the central bank decreasing its domestic assets when it gains gold.
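A minimal numerical contrast between rule (2) and sterilization (again my own illustration, with invented figures) runs as follows:

multiplier = 4.0       # money multiplier, taken as given
gold_outflow = 50.0    # gold lost through a balance-of-payments deficit

# Passive rule-following: domestic assets left unchanged, so the monetary base
# falls one-for-one with the gold loss; rule (2) proper would cut domestic
# assets as well, reinforcing the contraction.
money_change_rules = multiplier * (-gold_outflow)                      # -200.0

# Full sterilization: domestic assets are expanded by the amount of the gold
# loss, so the base, and hence the money supply, is left unchanged.
money_change_sterilized = multiplier * (-gold_outflow + gold_outflow)  # 0.0
print(money_change_rules, money_change_sterilized)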

Price Specie-Flow Mechanism

A country experiencing a balance-of-payments deficit loses gold and its money supply decreases, both automatically and by policy in accordance with the “rules of the game.” Money income contracts and the price level falls, thereby increasing exports and decreasing imports. Similarly, a surplus country gains gold, the money supply increases, money income expands, the price level rises, exports decrease and imports increase. In each case, balance-of-payments equilibrium is restored via the current account. This is called the “price specie-flow mechanism.” To the extent that wages and prices are inflexible, movements of real income in the same direction as money income occur; in particular, the deficit country suffers unemployment but the payments imbalance is nevertheless corrected.

The capital account also acts to restore balance, via interest-rate increases in the deficit country inducing a net inflow of capital. The interest-rate increases also reduce real investment and thence real income and imports. Similarly, interest-rate decreases in the surplus country elicit capital outflow and increase real investment, income, and imports. This process enhances the current-account correction of the imbalance.

One problem with the “rules of the game” is that, on “global-monetarist” theoretical grounds, they were inconsequential. Under fixed exchange rates, gold flows simply adjust money supply to money demand; the money supply is not determined by policy. Also, prices, interest rates, and incomes are determined worldwide. Even core countries can influence these variables domestically only to the extent that they help determine them in the global marketplace. Therefore the price-specie-flow and like mechanisms cannot occur. Historical data support this conclusion: gold flows were too small to be suggestive of these mechanisms; and prices, incomes, and interest rates moved closely in correspondence (rather than in the opposite directions predicted by the adjustment mechanisms induced by the “rules of the game”) — at least among non-periphery countries, especially the core group.

Discount Rate Rule and the Bank of England

However, the Bank of England did, in effect, manage its discount rate (“Bank Rate”) in accordance with rule (1). The Bank’s primary objective was to maintain convertibility of its notes into gold, that is, to preserve the gold standard, and its principal policy tool was Bank Rate. When its “liquidity ratio” of gold reserves to outstanding note liabilities decreased, it would usually increase Bank Rate. The increase in Bank Rate carried market short-term interest rates up with it, inducing a short-term capital inflow and thereby moving the exchange rate away from the gold-export point by increasing the exchange value of the pound. The converse also held, with a rise in the liquidity ratio involving a Bank Rate decrease, capital outflow, and movement of the exchange rate away from the gold-import point. The Bank was constantly monitoring its liquidity ratio, and in response altered Bank Rate almost 200 times over 1880-1913.
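A stylized rendering of that reaction rule (entirely illustrative; the threshold, step size, and starting rate are invented rather than estimates of the Bank’s actual behavior) might look like this:

def adjust_bank_rate(bank_rate, liquidity_ratio, target_ratio=0.40, step=0.5):
    # Raise Bank Rate when the liquidity ratio (gold reserves / note liabilities)
    # falls below target, to attract short-term capital; lower it when the ratio recovers.
    if liquidity_ratio < target_ratio:
        return bank_rate + step
    if liquidity_ratio > target_ratio:
        return bank_rate - step
    return bank_rate

rate = 3.0                              # percent per annum, invented starting value
for ratio in [0.38, 0.35, 0.42, 0.45]:  # a hypothetical run of liquidity ratios
    rate = adjust_bank_rate(rate, ratio)
print(rate)                             # back to 3.0 after two rises and two cuts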

While the Reichsbank (the German central bank), like the Bank of England, generally moved its discount rate inversely to its liquidity ratio, most other central banks often violated the rule, with changes in their discount rates of inappropriate direction, or of insufficient amount or frequency. The Bank of France, in particular, kept its discount rate stable. Unlike the Bank of England, it chose to have large gold reserves (see Table 8), with payments imbalances accommodated by fluctuations in its gold rather than financed by short-term capital flows. The United States, lacking a central bank, had no discount rate to use as a policy instrument.

Sterilization Was Dominant

As for rule (2), which requires the central bank’s domestic and international assets to move in the same direction: in fact the opposite behavior, sterilization, was dominant, as shown in Table 10. The Bank of England followed the rule more than any other central bank, but even so violated it more often than not! How then did the classical gold standard cope with payments imbalances? Why was it a stable system?

Table 10. Annual Changes in Internationala and Domesticb Assets of Central Bank: Percent of Changes in the Same Directionc
1880-1913d Britain 33
__ France 33
31 British Dominionse 13
32 Scandinaviag 25
33 South Americai 23

a 1880-1913: Gold, silver and foreign exchange. 1922-1936: Gold and foreign exchange.
b Domestic income-earning assets: discounts, loans, securities.
c Implying country is following “rules of the game.” Observations with zero or negligible changes in either class of assets excluded.
d Years when country is off gold standard excluded. See Tables 1 and 2.
e Australia and South Africa.
f 1880-1913: Austria-Hungary, Belgium, and Netherlands. 1922-1936: Austria, Italy, Netherlands, and Switzerland.
g Denmark, Finland, Norway, and Sweden.
h 1880-1913: Russia. 1922-1936: Bulgaria, Czechoslovakia, Greece, Hungary, Poland, Romania, and Yugoslavia.
i Chile, Colombia, Peru, and Uruguay.

Sources: Bloomfield (1959, p. 49), Nurkse (1944, p. 69).

The Stability of the Classical Gold Standard

The fundamental reason for the stability of the classical gold standard is that there was always absolute private-sector credibility in the commitment to the fixed domestic-currency price of gold on the part of the center country (Britain), two (France and Germany) of the three remaining core countries, and certain other European countries (Belgium, Netherlands, Switzerland, and Scandinavia). Certainly, that was true from the late-1870s onward. (For the United States, this absolute credibility applied from about 1900.) In earlier periods, that commitment had a contingency aspect: it was recognized that convertibility could be suspended in the event of dire emergency (such as war); but, after normal conditions were restored, convertibility would be re-established at the pre-existing mint price and gold contracts would again be honored. The Bank Restriction Period is an example of the proper application of the contingency, as is the greenback period (even though the United States, effectively on the gold standard, was legally on bimetallism).

Absolute Credibility Meant Zero Convertibility and Exchange Risk

The absolute credibility of countries’ commitment to convertibility at the existing mint price implied that there was extremely low, essentially zero, convertibility risk (the probability that Treasury or central-bank notes would not be redeemed in gold at the established mint price) and exchange risk (the probability that the mint parity between two currencies would be altered, or that exchange control or prohibition of gold export would be instituted).

Reasons Why Commitment to Convertibility Was So Credible

There were many reasons why the commitment to convertibility was so credible. (1) Contracts were expressed in gold; if convertibility were abandoned, contracts would inevitably be violated — an undesirable outcome for the monetary authority. (2) Shocks to the domestic and world economies were infrequent and generally mild. There was basically international peace and domestic calm.

(3) The London capital market was the largest, most open, most diversified in the world, and its gold market was also dominant. A high proportion of world trade was financed in sterling, London was the most important reserve-currency center, and balances of payments were often settled by transferring sterling assets rather than gold. Therefore sterling was an international currency — not merely supplemental to gold but perhaps better: a boon to non- center countries, because sterling involved positive, not zero, interest return and its transfer costs were much less than those of gold. Advantages to Britain were the charges for services as an international banker, differential interest returns on its financial intermediation, and the practice of countries on a sterling (gold-exchange) standard of financing payments surpluses with Britain by piling up short-term sterling assets rather than demanding Bank of England gold.

(4) There was widespread ideology — and practice — of “orthodox metallism,” involving authorities’ commitment to an anti-inflation, balanced-budget, stable-money policy. In particular, the ideology implied low government spending and taxes and limited monetization of government debt (financing of budget deficits by printing money). Therefore it was not expected that a country’s price level or inflation would get out of line with that of other countries, with resulting pressure on the country’s adherence to the gold standard. (5) This ideology was mirrored in, and supported by, domestic politics. Gold had won over silver and paper, and stable-money interests (bankers, industrialists, manufacturers, merchants, professionals, creditors, urban groups) over inflationary interests (farmers, landowners, miners, debtors, rural groups).

(6) There was freedom from government regulation and a competitive environment, domestically and internationally. Therefore prices and wages were more flexible than in other periods of human history (before and after). The core countries had virtually no capital controls; the center country (Britain) had adopted free trade, and the other core countries had moderate tariffs. Balance-of-payments financing and adjustment could proceed without serious impediments.

(7) Internal balance (domestic macroeconomic stability, at a high level of real income and employment) was an unimportant goal of policy. Preservation of convertibility of paper currency into gold would not be superseded as the primary policy objective. While sterilization of gold flows was frequent (see above), the purpose was more “meeting the needs of trade” (passive monetary policy) than fighting unemployment (active monetary policy).

(8) The gradual establishment of mint prices over time ensured that the implied mint parities (exchange rates) were in line with relative price levels; so countries joined the gold standard with exchange rates in equilibrium. (9) Current-account and capital-account imbalances tended to be offsetting for the core countries, especially for Britain. A trade deficit induced a gold loss and a higher interest rate, attracting a capital inflow and reducing capital outflow. Indeed, the capital-exporting core countries — Britain, France, and Germany — could eliminate a gold loss simply by reducing lending abroad.

Rareness of Violations of Gold Points

Many of the above reasons not only enhanced credibility in existing mint prices and parities but also kept international-payments imbalances, and hence necessary adjustment, of small magnitude. Responding to the essentially zero convertibility and exchange risks implied by the credible commitment, private agents further reduced the need for balance-of-payments adjustment via gold-point arbitrage (discussed above) and also via a specific kind of speculation. When the exchange rate moved beyond a gold point, arbitrage acted to return it to the spread. So it is not surprising that “violations of the gold points” were rare on a monthly average basis, as demonstrated in Table 11 for the dollar, franc, and mark exchange rates versus sterling. Certainly, gold-point violations did occur; but they rarely persisted sufficiently to be counted on monthly average data. Such measured violations were generally associated with financial crises. (That the number of dollar-sterling violations for 1890-1906 exceeds that for 1889-1908 reflects results from different researchers using different data. Nevertheless, the important common finding is the low percent of months encompassed by violations.)

Table 11
Violations of Gold Points

Exchange Rate Time Period Number of Months Number of Violations Percent of Months with Violations
dollar-sterling 1889-1908 240 1 0.4
dollar-sterling 1890-1906 204 3 1.5
dollar-sterling 1925-1931a 76 0 0
franc-sterling 1889-1908 240 12b 5.0
mark-sterling 1889-1908 240 18b 7.5

a May 1925 – August 1931: full months during which both United States and Britain on gold standard.
b Approximate number, deciphered from graph.

Sources: Dollar-sterling, 1890-1906 and 1925-1931 — Officer (1996, p. 235). All other — Giovannini (1993, pp. 130-31).

Stabilizing Speculation

The perceived extremely low convertibility and exchange risks gave private agents profitable opportunities not only outside the spread (gold-point arbitrage) but also within the spread (exchange-rate speculation). As the exchange value of a country’s currency weakened, the exchange rate approaching the gold-export point, speculators had an ever greater incentive to purchase domestic currency with foreign currency (a capital inflow); for they had good reason to believe that the exchange rate would move in the opposite direction, whereupon they would reverse their transaction at a profit. Similarly, a strengthened currency, with the exchange rate approaching the gold-import point, involved speculators selling the domestic currency for foreign currency (a capital outflow). Clearly, the exchange rate would either not go beyond the gold point (via the actions of other speculators of the same ilk) or would quickly return to the spread (via gold-point arbitrage). Also, the further the exchange rate moved toward the gold point, the greater the potential profit opportunity; for there was a decreased distance to that gold point (limiting the potential loss, since arbitrage prevented movement beyond it) and an increased distance from the other gold point (enlarging the potential gain from a reversal).

This “stabilizing speculation” enhanced the exchange value of depreciating currencies that were about to lose gold; and thus the gold loss could be prevented. The speculation was all the more powerful, because the absence of controls on capital movements meant private capital flows were highly responsive to exchange-rate changes. Dollar-sterling data, in Table 12, show that this speculation was extremely efficient in keeping the exchange rate away from the gold points — and increasingly effective over time. Interestingly, these statements hold even for the 1890s, during which at times U.S. maintenance of currency convertibility was precarious. The average deviation of the exchange rate from the midpoint of the spread fell decade-by-decade from about 1/3 of one percent of parity in 1881-1890 (23 percent of the gold-point spread) to only 12/100th of one percent of parity in 1911-1914 (11 percent of the spread).
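
The two measures reported in Table 12 (the deviation as a percent of parity and as a percent of the gold-point spread) follow directly from these definitions. The short Python sketch below illustrates the computation; the parity is the historical dollar-sterling mint parity, but the gold points and the quarterly exchange-rate observations are invented for illustration only.

# Illustrative only: the parity is the historical dollar-sterling mint parity,
# but the gold points and the quarterly observations below are made-up numbers
# chosen to show how the two measures in Table 12 are computed.

PARITY = 4.8666                              # dollars per pound sterling (mint parity)
GOLD_EXPORT_POINT = PARITY * (1 + 0.0065)    # assumed: parity plus a 0.65 percent arbitrage cost
GOLD_IMPORT_POINT = PARITY * (1 - 0.0065)    # assumed: parity minus a 0.65 percent arbitrage cost

midpoint = (GOLD_EXPORT_POINT + GOLD_IMPORT_POINT) / 2   # equals parity in this symmetric case
spread = GOLD_EXPORT_POINT - GOLD_IMPORT_POINT           # total width of the gold-point spread

# Hypothetical quarterly observations of the exchange rate (dollars per pound).
rates = [4.871, 4.864, 4.860, 4.873, 4.868, 4.862]

avg_deviation = sum(abs(r - midpoint) for r in rates) / len(rates)

pct_of_parity = 100 * avg_deviation / PARITY   # first measure in Table 12
pct_of_spread = 100 * avg_deviation / spread   # second measure in Table 12

print(f"average deviation: {avg_deviation:.4f} dollars")
print(f"as a percent of parity: {pct_of_parity:.2f}")
print(f"as a percent of the gold-point spread: {pct_of_spread:.1f}")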

Table 12
Average Deviation of Dollar-Sterling Exchange Rate from Gold-Point-Spread Midpoint

Time Period Percent of Parity Percent of Gold-Point Spread
Quarterly observations
1881-1890 0.32 23
1891-1900 – 19
1901-1910 0.15 –
1911-1914a 0.12 11
1925-1931b 0.28 –
Monthly observations
1925-1931c 0.24 26

a Ending with second quarter of 1914.
b Third quarter 1925 – second quarter 1931: full quarters during which both United States and Britain on gold standard.
c May 1925 – August 1931: full months during which both United States and Britain on gold standard.

Source: Officer (1996, pp. 182, 191, 272).

Government Policies That Enhanced Gold-Standard Stability

Government policies also enhanced gold-standard stability. First, by the turn of the century South Africa — the main world gold producer — sold all its gold in London, either to private parties or actively to the Bank of England, with the Bank serving also as residual purchaser of the gold. Thus the Bank had the means to replenish its gold reserves. Second, the orthodox-metallism ideology and the leadership of the Bank of England — other central banks would often gear their monetary policy to that of the Bank — kept monetary policies harmonized. Monetary discipline was maintained.

Third, countries used “gold devices,” primarily the manipulation of gold points, to affect gold flows. For example, the Bank of England would foster gold imports by lowering the foreign gold-export point (number of units of foreign currency per pound, the British gold-import point) through interest-free loans to gold importers or raising its purchase price for bars and foreign coin. The Bank would discourage gold exports by lowering the foreign gold-import point (the British gold-export point) via increasing its selling prices for gold bars and foreign coin, refusing to sell bars, or redeeming its notes in underweight domestic gold coin. These policies were alternatives to increasing Bank Rate.

The Bank of France and Reichsbank employed gold devices relative to discount-rate changes more than Britain did. Some additional policies included converting notes into gold only in Paris or Berlin rather than at branches elsewhere in the country, the Bank of France converting its notes in silver rather than gold (permitted under its “limping” gold standard), and the Reichsbank using moral suasion to discourage the export of gold. The U.S. Treasury followed similar policies at times. In addition to providing interest-free loans to gold importers and changing the premium at which it would sell bars (or refusing to sell bars outright), the Treasury condoned the formation of banking syndicates to put pressure on gold arbitrageurs to desist from gold export in 1895 and 1896, a time when U.S. adherence to the gold standard was under stress.
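
The arithmetic of gold points, and of the gold devices that manipulated them, can be made concrete with a stylized sketch. In the Python fragment below, the mint parity is the historical dollar-sterling figure, while the arbitrage cost and the size of the subsidy are assumptions chosen purely for illustration; the point is that a device lowering the gold importer’s effective cost (such as the Treasury’s interest-free loans) moves the gold-import point closer to parity, so that gold inflows begin at a less extreme exchange rate.

# Stylized sketch of gold points and a "gold device", from the home country's
# standpoint, with the exchange rate quoted as home currency per unit of
# foreign currency. The parity is the historical dollar-sterling figure; the
# cost and subsidy percentages are assumptions for illustration only.

PARITY = 4.8666   # dollars per pound sterling (mint parity)

def gold_points(parity, arbitrage_cost):
    """Gold points implied by the cost (as a fraction of value) of shipping,
    insuring, and financing a gold shipment between the two centers."""
    import_point = parity * (1 - arbitrage_cost)   # below this rate, gold flows in
    export_point = parity * (1 + arbitrage_cost)   # above this rate, gold flows out
    return import_point, export_point

# Baseline: assume a total arbitrage cost of 0.65 percent of value.
imp_pt, exp_pt = gold_points(PARITY, 0.0065)
print(f"baseline gold points: {imp_pt:.4f} (import) to {exp_pt:.4f} (export)")

# A device aimed at attracting gold, e.g. interest-free loans to gold importers
# or a higher purchase price for bars, cuts the importer's effective cost (here
# by an assumed 0.15 percentage points); the import point rises toward parity.
imp_device, _ = gold_points(PARITY, 0.0065 - 0.0015)
print(f"gold-import point with device: {imp_device:.4f} (was {imp_pt:.4f})")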

Fourth, the monetary system was adept at conserving gold, as evidenced in Table 3. This was important, because the increased gold required for a growing world economy could be obtained only from mining or from nonmonetary hoards. While the money supply for the eleven-major-country aggregate more than tripled from 1885 to 1913, the percent of the money supply in the form of metallic money (gold and silver) more than halved. This process did not make the gold standard unstable, because gold moved into commercial-bank and central-bank (or Treasury) reserves: the ratio of gold in official reserves to official plus money gold increased from 33 to 54 percent. The relative influence of the public versus private sector in reducing the proportion of metallic money in the money supply is an issue warranting exploration by monetary historians.

Fifth, central-bank cooperation, while not regular, was not generally required in the stable environment in which the gold standard operated. Yet this cooperation was forthcoming when needed, that is, during financial crises. Although Britain was the center country, the precarious liquidity position of the Bank of England meant that it was more often the recipient than the provider of financial assistance. In crises, it would obtain loans from the Bank of France (also on occasion from other central banks), and the Bank of France would sometimes purchase sterling to push up that currency’s exchange value. Assistance also went from the Bank of England to other central banks, as needed. Further, the credible commitment was so strong that private bankers did not hesitate to make loans to central banks in difficulty.

In sum, “virtuous” two-way interactions were responsible for the stability of the gold standard. The credible commitment to convertibility of paper money at the established mint price, and therefore the fixed mint parities, were both a cause and a result of (1) the stable environment in which the gold standard operated, (2) the stabilizing behavior of arbitrageurs and speculators, and (3) the responsible policies of the authorities — and (1), (2), and (3), and their individual elements, also interacted positively among themselves.

Experience of Periphery

An important reason for periphery countries to join and maintain the gold standard was the access to the capital markets of the core countries thereby fostered. Adherence to the gold standard connoted that the peripheral country would follow responsible monetary, fiscal, and debt-management policies — and, in particular, faithfully repay the interest on and principal of debt. This “good housekeeping seal of approval” (the term coined by Bordo and Rockoff, 1996), by reducing the risk premium, involved a lower interest rate on the country’s bonds sold abroad, and very likely a higher volume of borrowing. The favorable terms and greater borrowing enhanced the country’s economic development.

However, periphery countries bore the brunt of the burden of adjustment of payments imbalances with the core (and other Western European) countries, for three reasons. First, some of the periphery countries were on a gold-exchange standard. When they ran a surplus, they typically increased — and with a deficit, decreased — their liquid balances in London (or other reserve-currency country) rather than withdraw gold from the reserve-currency country. The monetary base of the periphery country would increase, or decrease, but that of the reserve-currency country would remain unchanged. This meant that the changes in domestic variables (prices, incomes, interest rates, portfolios, and so on) that occurred to correct the surplus or deficit took place primarily in the periphery country. The periphery, rather than the core, “bore the burden of adjustment.”

Second, when Bank Rate increased, London drew funds from France and Germany, which in turn attracted funds from other Western European and Scandinavian countries, which drew capital from the periphery. Also, it was easy for a core country to correct a deficit by reducing lending to, or bringing capital home from, the periphery. Third, the periphery countries were underdeveloped; their exports were largely primary products (agriculture and mining), which inherently were extremely sensitive to world market conditions. This feature meant that adjustment in the periphery, compared to the core, took the form more of real than of financial correction. This conclusion also follows from the fact that capital obtained from core countries for the purpose of economic development was subject to interruption and even reversal. While the periphery was probably better off with access to the capital than in isolation, its welfare gain was reduced by the instability of capital import.

The experience on adherence to the gold standard differed among periphery groups. The important British dominions and colonies — Australia, New Zealand, Canada, and India — successfully maintained the gold standard. They were politically stable and, of course, heavily influenced by Britain. They paid the price of serving as an economic cushion to the Bank of England’s financial situation; but, compared to the rest of the periphery, gained a relatively stable long-term capital inflow. In underdeveloped Latin America and Asia, adherence to the gold standard was fragile, with lack of complete credibility in the commitment to convertibility. Many of the reasons for credible commitment that applied to the core countries were absent — for example, there were powerful inflationary interests, strong balance-of-payments shocks, and rudimentary banking sectors. For Latin America and Asia, the cost of adhering to the gold standard was very apparent: loss of the ability to depreciate the currency to counter reductions in exports. Yet the gain, in terms of a steady capital inflow from the core countries, was not as stable or reliable as for the British dominions and colonies.

The Breakdown of the Classical Gold Standard

The classical gold standard was at its height at the end of 1913, ironically just before it came to an end. The proximate cause of the breakdown of the classical gold standard was political: the advent of World War I in August 1914. However, it was the Bank of England’s precarious liquidity position and the gold-exchange standard that were the underlying cause. With the outbreak of war, a run on sterling led Britain to impose extreme exchange control — a postponement of both domestic and international payments — that made the international gold standard non-operational. Convertibility was not legally suspended; but moral suasion, legalistic action, and regulation had the same effect. Gold exports were restricted by extralegal means (and by Trading with the Enemy legislation), with the Bank of England commandeering all gold imports and applying moral suasion to bankers and bullion brokers.

Almost all other gold-standard countries undertook similar policies in 1914 and 1915. The United States entered the war and ended its gold standard late, adopting extralegal restrictions on convertibility in 1917 (although in 1914 New York banks had temporarily imposed an informal embargo on gold exports). An effect of the universal removal of currency convertibility was the ineffectiveness of mint parities and inapplicability of gold points: floating exchange rates resulted.

Interwar Gold Standard

Return to the Gold Standard

In spite of the tremendous disruption to domestic economies and the worldwide economy caused by World War I, a general return to gold took place. However, the resulting interwar gold standard differed institutionally from the classical gold standard in several respects. First, the new gold standard was led not by Britain but rather by the United States. The U.S. embargo on gold exports (imposed in 1917) was removed in 1919, and currency convertibility at the prewar mint price was restored in 1922. The gold value of the dollar rather than of the pound sterling would typically serve as the reference point around which other currencies would be aligned and stabilized. Second, it follows that the core would now have two center countries, the United Kingdom and the United States.

Third, for many countries there was a time lag between stabilizing a country’s currency in the foreign-exchange market (fixing the exchange rate or mint parity) and resuming currency convertibility. Given a lag, the former typically occurred first, currency stabilization operating via central-bank intervention in the foreign-exchange market (transacting in the domestic currency and a reserve currency, generally sterling or the dollar). Table 2 presents the dates of exchange-rate stabilization and currency convertibility resumption for the countries on the interwar gold standard. It is fair to say that the interwar gold standard was at its height at the end of 1928, after all core countries were fully on the standard and before the Great Depression began.

Fourth, the contingency aspect of convertibility, which required restoration of convertibility at the mint price that existed prior to the emergency (World War I), was broken by various countries — even core countries. Some countries (including the United States, United Kingdom, Denmark, Norway, Netherlands, Sweden, Switzerland, Australia, Canada, Japan, Argentina) stabilized their currencies at the prewar mint price. However, other countries (France, Belgium, Italy, Portugal, Finland, Bulgaria, Romania, Greece, Chile) established a gold content of their currency that was a fraction of the prewar level: the currency was devalued in terms of gold, the mint price was higher than prewar. A third group of countries (Germany, Austria, Hungary) stabilized new currencies adopted after hyperinflation. A fourth group (Czechoslovakia, Danzig, Poland, Estonia, Latvia, Lithuania) consisted of countries that became independent or were created following the war and that joined the interwar gold standard. A fifth group (some Latin American countries) had been on silver or paper standards during the classical period but went on the interwar gold standard. A sixth country group (Russia) had been on the classical gold standard, but did not join the interwar gold standard. A seventh group (Spain, China, Iran) joined neither gold standard.

The fifth way in which the interwar gold standard diverged from the classical experience was the mix of gold-standard types. As Table 2 shows, the gold coin standard, dominant in the classical period, was far less prevalent in the interwar period. In particular, all four core countries had been on coin in the classical gold standard; but, of them, only the United States was on coin interwar. The gold-bullion standard, nonexistent prewar, was adopted by two core countries (United Kingdom and France) as well as by two Scandinavian countries (Denmark and Norway). Most countries were on a gold-exchange standard. The central banks of countries on the gold-exchange standard would convert their currencies not into gold but rather into “gold-exchange” currencies (currencies themselves convertible into gold), in practice often sterling, sometimes the dollar (the reserve currencies).

Instability of the Interwar Gold Standard

The features that fostered stability of the classical gold standard did not apply to the interwar standard; instead, many forces made for instability. (1) The process of establishing fixed exchange rates was piecemeal and haphazard, resulting in disequilibrium exchange rates. The United Kingdom restored convertibility at the prewar mint price without sufficient deflation, resulting in a currency overvalued by about ten percent. (Expressed in a common currency at mint parity, the British price level was ten percent higher than that of its trading partners and competitors.) A depressed export sector and chronic balance-of-payments difficulties were to result. Other overvalued currencies (in terms of mint parity) were those of Denmark, Italy, and Norway. In contrast, France, Germany, and Belgium had undervalued currencies. (2) Wages and prices were less flexible than in the prewar period. In particular, powerful unions kept wages and unemployment high in British export industries, hindering balance-of-payments correction.

(3) Higher trade barriers than prewar also restrained adjustment.

(4) The gold-exchange standard economized on the world’s monetary gold: the gold held by the reserve-currency countries backed not only their own currencies but also, indirectly, the reserves of countries on the gold-exchange standard and of countries on a coin or bullion standard that elected to hold part of their reserves in London or New York. (Another economizing element was continuation of the move of gold out of the money supply and into banking and official reserves that began in the classical period: for the eleven-major-country aggregate, gold declined to less than one-half of one percent of the money supply in 1928, and the ratio of official gold to official-plus-money gold reached 99 percent — Table 3). The gold-exchange standard was inherently unstable, because of the conflict between (a) the expansion of sterling and dollar liabilities to foreign central banks to expand world liquidity, and (b) the resulting deterioration in the reserve ratio of the Bank of England, and U.S. Treasury and Federal Reserve Banks.

This instability was particularly severe in the interwar period, for several reasons. First, France was now a large official holder of sterling, with over half the official reserves of the Bank of France in foreign exchange in 1928, versus essentially none in 1913 (Table 6); and France was resentful that the United Kingdom had used its influence in the League of Nations to induce financially reconstructed countries in Europe to adopt the gold-exchange (sterling) standard. Second, many more countries were on the gold-exchange standard than prewar. Cooperation in restraining a run on sterling or the dollar would be difficult to achieve. Third, the gold-exchange standard, associated with colonies in the classical period, was viewed as a system inferior to a coin standard.

(5) In the classical period, London was the one dominant financial center; in the interwar period it was joined by New York and, in the late 1920s, Paris. Both private and official holdings of foreign currency could shift among the two or three centers, as interest-rate differentials and confidence levels changed.

(6) The problem with gold was not overall scarcity but rather maldistribution. In 1928, official reserve-currency liabilities were much more concentrated than in 1913: the United Kingdom accounted for 77 percent of world foreign-exchange reserves and France less than two percent (versus 47 and 30 percent in 1913 — Table 7). Yet the United Kingdom held only seven percent of world official gold and France 13 percent (Table 8). Reflecting its undervalued currency, France also possessed 39 percent of world official foreign exchange. Incredibly, the United States held 37 percent of world official gold — more than all the non-core countries together.

(7) Britain’s financial position was even more precarious than in the classical period. In 1928, the gold and dollar reserves of the Bank of England covered only one third of London’s liquid liabilities to official foreigners, a ratio hardly greater than in 1913 (and compared to a U.S. ratio of almost 5½ — Table 9). Various elements made the financial position difficult compared to prewar. First, U.K. liquid liabilities were concentrated on stronger countries (France, United States), whereas its liquid assets were predominantly in weaker countries (such as Germany). Second, there was ongoing tension with France, which resented the sterling-dominated gold-exchange standard and desired to cash in its sterling holdings for gold to aid its objective of achieving first-class financial status for Paris.

(8) Internal balance was an important goal of policy, which hindered balance-of-payments adjustment, and monetary policy was affected greatly by domestic politics rather than geared to preservation of currency convertibility. (9) Especially because of (8), the credibility in authorities’ commitment to the gold standard was not absolute. Convertibility risk and exchange risk could be well above zero, and currency speculation could be destabilizing rather than stabilizing; so that when a country’s currency approached or reached its gold-export point, speculators might anticipate that currency convertibility would not be maintained and that the currency would be devalued. Hence they would sell rather than buy the currency, which, of course, would help bring about the very outcome anticipated.

(10) The “rules of the game” were infrequently followed and, for most countries, violated even more often than in the classical gold standard — Table 10. Sterilization of gold inflows by the Bank of England can be viewed as an attempt to correct the overvalued pound by means of deflation. However, the U.S. and French sterilization of their persistent gold inflows reflected exclusive concern for the domestic economy and placed the burden of adjustment on other countries in the form of deflation.

(11) The Bank of England did not provide a leadership role in any important way, and central-bank cooperation was insufficient to establish credibility in the commitment to currency convertibility.

Breakdown of the Interwar Gold Standard

Although Canada effectively abandoned the gold standard early in 1929, this was a special case in two respects. First, the action was an early, drastic reaction to the high U.S. interest rates established to fight the stock-market boom, which carried the threat of unsustainable capital outflow and gold loss for other countries. Second, gold devices were the technique used to restrict gold exports and informally terminate the Canadian gold standard.

The beginning of the end of the interwar gold standard occurred with the Great Depression. The depression began in the periphery, where low export prices and debt-service requirements led to insurmountable balance-of-payments difficulties for countries on the gold standard. However, U.S. monetary policy was an important catalyst. In the second half of 1927 the Federal Reserve pursued an easy-money policy, which supported foreign currencies but also fed the boom in the New York stock market. When the Federal Reserve reversed policy to fight the Wall Street boom, higher interest rates attracted monies to New York, which weakened sterling in particular. The stock market crash in October 1929, while helpful to sterling, was followed by a passive monetary policy that did not prevent the U.S. depression that started shortly thereafter and that spread to the rest of the world via declines in U.S. trade and lending. In 1929 and 1930 a number of periphery countries either formally suspended currency convertibility or restricted it so that their currencies went beyond the gold-export point.

It was destabilizing speculation, emanating from lack of confidence in authorities’ commitment to currency convertibility, that ended the interwar gold standard. In May 1931 there was a run on Austria’s largest commercial bank, and the bank failed. The run spread to Germany, where an important bank also collapsed. The countries’ central banks lost substantial reserves; international financial assistance was too late; and in July 1931 Germany adopted exchange control, followed by Austria in October. These countries were definitively off the gold standard.

The Austrian and German experiences, as well as British budgetary and political difficulties, were among the factors that destroyed confidence in sterling, a collapse of confidence that occurred in mid-July 1931. Runs on sterling ensued, and the Bank of England lost much of its reserves. Loans from abroad were insufficient, and in any event taken as a sign of weakness. The gold standard was abandoned in September, and the pound quickly and sharply depreciated on the foreign-exchange market, as overvaluation of the pound would imply.

Amazingly, there were no violations of the dollar-sterling gold points on a monthly average basis to the very end of August 1931 (Table 11). In contrast, the average deviation of the dollar-sterling exchange rate from the midpoint of the gold-point spread in 1925-1931 was more than double that in 1911-1914, by either of two measures (Table 12), suggesting less-dominant stabilizing speculation compared to the prewar period. Yet the 1925-1931 average deviation was not much more (in one case, even less) than in earlier decades of the classical gold standard. The trust in the Bank of England had a long tradition, and the shock to confidence in sterling that occurred in July 1931 was unexpected by the British authorities.

After the U.K. abandonment of the gold standard, many countries followed suit, some to maintain their competitiveness via currency devaluation, others in response to destabilizing capital flows. The United States held on until 1933, when both domestic and foreign demands for gold, manifested in runs on U.S. commercial banks, became intolerable. The “gold bloc” countries (France, Belgium, Netherlands, Switzerland, Italy, Poland) and Danzig lasted even longer; but, with their currencies now overvalued and susceptible to destabilizing speculation, these countries succumbed to the inevitable by the end of 1936. Albania stayed on gold until occupied by Italy in 1939. The Great Depression was as much a consequence of the gold standard as a cause of its demise; for gold-standard countries hesitated to inflate their economies for fear of weakening the balance of payments, suffering loss of gold and foreign-exchange reserves, and being forced to abandon convertibility or the gold parity. So the gold standard involved “golden fetters” (the title of the classic work of Eichengreen, 1992) that inhibited monetary and fiscal policy to fight the depression. Therefore, some have argued, these fetters seriously exacerbated the severity of the Great Depression within countries (because expansionary policy to fight unemployment was not adopted) and fostered the international transmission of the Depression (because as a country’s output decreased, its imports fell, thus reducing exports and income of other countries).

The “international gold standard,” defined as the period of time during which all four core countries were on the gold standard, existed from 1879 to 1914 (36 years) in the classical period and from 1926 or 1928 to 1931 (four or six years) in the interwar period. The interwar gold standard was a dismal failure in longevity, as well as in its association with the greatest depression the world has known.

References

Bayoumi, Tamim, Barry Eichengreen, and Mark P. Taylor, eds. Modern Perspectives on the Gold Standard. Cambridge: Cambridge University Press, 1996.

Bernanke, Ben, and Harold James. “The Gold Standard, Deflation, and Financial Crisis in the Great Depression: An International Comparison.” In Financial Market and Financial Crises, edited by R. Glenn Hubbard, 33-68. Chicago: University of Chicago Press, 1991.

Bett, Virgil M. Central Banking in Mexico: Monetary Policies and Financial Crises, 1864-1940. Ann Arbor: University of Michigan, 1957.

Bloomfield, Arthur I. Monetary Policy under the International Gold Standard, 1880-1914. New York: Federal Reserve Bank of New York, 1959.

Bloomfield, Arthur I. Short-Term Capital Movements Under the Pre-1914 Gold Standard. Princeton: International Finance Section, Princeton University, 1963.

Board of Governors of the Federal Reserve System. Banking and Monetary Statistics, 1914-1941. Washington, DC, 1943.

Bordo, Michael D. “The Classical Gold Standard: Some Lessons for Today.” Federal Reserve Bank of St. Louis Review 63, no. 5 (1981): 2-17.

Bordo, Michael D. “The Classical Gold Standard: Lessons from the Past.” In The International Monetary System: Choices for the Future, edited by Michael B. Connolly, 229-65. New York: Praeger, 1982.

Bordo, Michael D. “Gold Standard: Theory.” In The New Palgrave Dictionary of Money & Finance, vol. 2, edited by Peter Newman, Murray Milgate, and John Eatwell, 267-71. London: Macmillan, 1992.

Bordo, Michael D. “The Gold Standard, Bretton Woods and Other Monetary Regimes: A Historical Appraisal.” Federal Reserve Bank of St. Louis Review 75, no. 2 (1993): 123-91.

Bordo, Michael D. The Gold Standard and Related Regimes: Collected Essays. Cambridge: Cambridge University Press, 1999.

Bordo, Michael D., and Forrest Capie, eds. Monetary Regimes in Transition. Cambridge: Cambridge University Press, 1994.

Bordo, Michael D., and Barry Eichengreen, eds. A Retrospective on the Bretton Woods System: Lessons for International Monetary Reform. Chicago: University of Chicago Press, 1993.

Bordo, Michael D., and Finn E. Kydland. “The Gold Standard as a Rule: An Essay in Exploration.” Explorations in Economic History 32, no. 4 (1995): 423-64.

Bordo, Michael D., and Hugh Rockoff. “The Gold Standard as a ‘Good Housekeeping Seal of Approval’.” Journal of Economic History 56, no. 2 (1996): 389-428.

Bordo, Michael D., and Anna J. Schwartz, eds. A Retrospective on the Classical Gold Standard, 1821-1931. Chicago: University of Chicago Press, 1984.

Bordo, Michael D., and Anna J. Schwartz. “The Operation of the Specie Standard: Evidence for Core and Peripheral Countries, 1880-1990.” In Currency Convertibility: The Gold Standard and Beyond, edited by Jorge Braga de Macedo, Barry Eichengreen, and Jaime Reis, 11-83. London: Routledge, 1996.

Bordo, Michael D., and Anna J. Schwartz. “Monetary Policy Regimes and Economic Performance: The Historical Record.” In Handbook of Macroeconomics, vol. 1A, edited by John B. Taylor and Michael Woodford, 149-234. Amsterdam: Elsevier, 1999.

Broadberry, S. N., and N. F. R. Crafts, eds. Britain in the International Economy. Cambridge: Cambridge University Press, 1992.

Brown, William Adams, Jr. The International Gold Standard Reinterpreted, 1914-1934. New York: National Bureau of Economic Research, 1940.

Bureau of the Mint. Monetary Units and Coinage Systems of the Principal Countries of the World, 1929. Washington, DC: Government Printing Office, 1929.

Cairncross, Alec, and Barry Eichengreen. Sterling in Decline: The Devaluations of 1931, 1949 and 1967. Oxford: Basil Blackwell, 1983.

Calleo, David P. “The Historiography of the Interwar Period: Reconsiderations.” In Balance of Power or Hegemony: The Interwar Monetary System, edited by Benjamin M. Rowland, 225-60. New York: New York University Press, 1976.

Clarke, Stephen V. O. Central Bank Cooperation: 1924-31. New York: Federal Reserve Bank of New York, 1967.

Cleveland, Harold van B. “The International Monetary System in the Interwar Period.” In Balance of Power or Hegemony: The Interwar Monetary System, edited by Benjamin M. Rowland, 1-59. New York: New York University Press, 1976.

Cooper, Richard N. “The Gold Standard: Historical Facts and Future Prospects.” Brookings Papers on Economic Activity 1 (1982): 1-45.

Dam, Kenneth W. The Rules of the Game: Reform and Evolution in the International Monetary System. Chicago: University of Chicago Press, 1982.

De Cecco, Marcello. The International Gold Standard. New York: St. Martin’s Press, 1984.

De Cecco, Marcello. “Gold Standard.” In The New Palgrave Dictionary of Money & Finance, vol. 2, edited by Peter Newman, Murray Milgate, and John Eatwell, 260-66. London: Macmillan, 1992.

De Cecco, Marcello. “Central Bank Cooperation in the Inter-War Period: A View from the Periphery.” In International Monetary Systems in Historical Perspective, edited by Jaime Reis, 113-34. Houndmills, Basingstoke, Hampshire: Macmillan, 1995.

De Macedo, Jorge Braga, Barry Eichengreen, and Jaime Reis, eds. Currency Convertibility: The Gold Standard and Beyond. London: Routledge, 1996.

Ding, Chiang Hai. “A History of Currency in Malaysia and Singapore.” In The Monetary System of Singapore and Malaysia: Implications of the Split Currency, edited by J. Purcal, 1-9. Singapore: Stamford College Press, 1967.

Director of the Mint. The Monetary Systems of the Principal Countries of the World, 1913. Washington: Government Printing Office, 1913.

Director of the Mint. Monetary Systems of the Principal Countries of the World, 1916. Washington: Government Printing Office, 1917.

Dos Santos, Fernando Teixeira. “Last to Join the Gold Standard, 1931.” In Currency Convertibility: The Gold Standard and Beyond, edited by Jorge Braga de Macedo, Barry Eichengreen, and Jaime Reis, 182-203. London: Routledge, 1996.

Dowd, Kevin, and Richard H. Timberlake, Jr., eds. Money and the National State: The Financial Revolution, Government and the World Monetary System. New Brunswick (U.S.): Transaction, 1998.

Drummond, Ian M. The Gold Standard and the International Monetary System, 1900-1939. Houndmills, Basingstoke, Hampshire: Macmillan, 1987.

Easton, H. T. Tate’s Modern Cambist. London: Effingham Wilson, 1912.

Eichengreen, Barry, ed. The Gold Standard in Theory and History. New York: Methuen, 1985.

Eichengreen, Barry. Elusive Stability: Essays in the History of International Finance, 1919-1939. New York: Cambridge University Press, 1990.

Eichengreen, Barry. “International Monetary Instability between the Wars: Structural Flaws or Misguided Policies?” In The Evolution of the International Monetary System: How can Efficiency and Stability Be Attained? edited by Yoshio Suzuki, Junichi Miyake, and Mitsuaki Okabe, 71-116. Tokyo: University of Tokyo Press, 1990.

Eichengreen, Barry. Golden Fetters: The Gold Standard and the Great Depression, 1919-1939. New York: Oxford University Press, 1992.

Eichengreen, Barry. “The Endogeneity of Exchange-Rate Regimes.” In Understanding Interdependence: The Macroeconomics of the Open Economy, edited by Peter B. Kenen, 3-33. Princeton: Princeton University Press, 1995.

Eichengreen, Barry. “History of the International Monetary System: Implications for Research in International Macroeconomics and Finance.” In The Handbook of International Macroeconomics, edited by Frederick van der Ploeg, 153-91. Cambridge, MA: Basil Blackwell, 1994.

Eichengreen, Barry, and Marc Flandreau. The Gold Standard in Theory and History, second edition. London: Routledge, 1997.

Einzig, Paul. International Gold Movements. London: Macmillan, 1929.

Federal Reserve Bulletin, various issues, 1928-1936.

Ford, A. G. The Gold Standard 1880-1914: Britain and Argentina. Oxford: Clarendon Press, 1962.

Ford, A. G. “Notes on the Working of the Gold Standard before 1914.” In The Gold Standard in Theory and History, edited by Barry Eichengreen, 141-65. New York: Methuen, 1985.

Ford, A. G. “International Financial Policy and the Gold Standard, 1870-1914.” In The Industrial Economies: The Development of Economic and Social Policies, The Cambridge Economic History of Europe, vol. 8, edited by Peter Mathias and Sidney Pollard, 197-249. Cambridge: Cambridge University Press, 1989.

Frieden, Jeffry A. “The Dynamics of International Monetary Systems: International and Domestic Factors in the Rise, Reign, and Demise of the Classical Gold Standard.” In Coping with Complexity in the International System, edited by Jack Snyder and Robert Jervis, 137-62. Boulder, CO: Westview, 1993.

Friedman, Milton, and Anna Jacobson Schwartz. A Monetary History of the United States, 1867-1960. Princeton: Princeton University Press, 1963.

Gallarotti, Giulio M. The Anatomy of an International Monetary Regime: The Classical Gold Standard, 1880-1914. New York: Oxford University Press, 1995.

Giovannini, Alberto. “Bretton Woods and its Precursors: Rules versus Discretion in the History of International Monetary Regimes.” In A Retrospective on the Bretton Woods System: Lessons for International Monetary Reform, edited by Michael D. Bordo and Barry Eichengreen, 109-47. Chicago: University of Chicago Press, 1993.

Gunasekera, H. A. de S. From Dependent Currency to Central Banking in Ceylon: An Analysis of Monetary Experience, 1825-1957. London: G. Bell, 1962.

Hawtrey, R. G. The Gold Standard in Theory and Practice, fifth edition. London: Longmans, Green, 1947.

Hawtrey, R. G. Currency and Credit, fourth edition. London: Longmans, Green, 1950.

Hershlag, Z. Y. Introduction to the Modern Economic History of the Middle East. London: E. J. Brill, 1980.

Ingram, James C. Economic Changes in Thailand, 1850-1970. Stanford, CA: Stanford University, 1971.

Jonung, Lars. “Swedish Experience under the Classical Gold Standard, 1873-1914.” In A Retrospective on the Classical Gold Standard, 1821-1931, edited by Michael D. Bordo and Anna J. Schwartz, 361-99. Chicago: University of Chicago Press, 1984.

Kemmerer, Donald L. “Statement.” In Gold Reserve Act Amendments, Hearings, U.S. Senate, 83rd Cong., second session, pp. 299-302. Washington, DC: Government Printing Office, 1954.

Kemmerer, Edwin Walter. Modern Currency Reforms: A History and Discussion of Recent Currency Reforms in India, Puerto Rico, Philippine Islands, Straits Settlements and Mexico. New York: Macmillan, 1916.

Kemmerer, Edwin Walter. Inflation and Revolution: Mexico’s Experience of 1912- 1917. Princeton: Princeton University Press, 1940.

Kemmerer, Edwin Walter. Gold and the Gold Standard: The Story of Gold Money, Past, Present and Future. New York: McGraw-Hill, 1944.

Kenwood, A. G., and A. L. Lougheed. The Growth of the International Economy, 1820-1960. London: George Allen & Unwin, 1971.

Kettell, Brian. Gold. Cambridge, MA: Ballinger, 1982.

Kindleberger, Charles P. A Financial History of Western Europe. London: George Allen & Unwin, 1984.

Kindleberger, Charles P. The World in Depression, 1929-1939, revised edition. Berkeley: University of California Press, 1986.

Lampe, John R. The Bulgarian Economy in the Twentieth Century. London: Croom Helm, 1986.

League of Nations. Memorandum on Currency and Central Banks, 1913-1925, second edition, vol. 1. Geneva, 1926.

League of Nations. International Statistical Yearbook, 1926. Geneva, 1927.

League of Nations. International Statistical Yearbook, 1928. Geneva, 1929.

League of Nations. Statistical Yearbook, 1930/31. Geneva, 1931.

League of Nations. Money and Banking, 1937/38, vol. 1: Monetary Review. Geneva.

League of Nations. The Course and Control of Inflation. Geneva, 1946.

Lindert, Peter H. Key Currencies and Gold, 1900-1913. Princeton: International Finance Section, Princeton University, 1969.

McCloskey, Donald N., and J. Richard Zecher. “How the Gold Standard Worked, 1880-1913.” In The Monetary Approach to the Balance of Payments, edited by Jacob A. Frenkel and Harry G. Johnson, 357-85. Toronto: University of Toronto Press, 1976.

MacKay, R. A., ed. Newfoundland: Economic, Diplomatic, and Strategic Studies. Toronto: Oxford University Press, 1946.

MacLeod, Malcolm. Kindred Countries: Canada and Newfoundland before Confederation. Ottawa: Canadian Historical Association, 1994.

Moggridge, D. E. British Monetary Policy, 1924-1931: The Norman Conquest of $4.86. Cambridge: Cambridge University Press, 1972.

Moggridge, D. E. “The Gold Standard and National Financial Policies, 1919-39.” In The Industrial Economies: The Development of Economic and Social Policies, The Cambridge Economic History of Europe, vol. 8, edited by Peter Mathias and Sidney Pollard, 250-314. Cambridge: Cambridge University Press, 1989.

Morgenstern, Oskar. International Financial Transactions and Business Cycles. Princeton: Princeton University Press, 1959.

Norman, John Henry. Complete Guide to the World’s Twenty-nine Metal Monetary Systems. New York: G. P. Putnam, 1892.

Nurkse, Ragnar. International Currency Experience: Lessons of the Inter-War Period. Geneva: League of Nations, 1944.

Officer, Lawrence H. Between the Dollar-Sterling Gold Points: Exchange Rates, Parity, and Market Behavior. Cambridge: Cambridge University Press, 1996.

Pablo, Martín Aceña, and Jaime Reis, eds. Monetary Standards in the Periphery: Paper, Silver and Gold, 1854-1933. Houndmills, Basingstoke, Hampshire: Macmillan, 2000.

Palyi, Melchior. The Twilight of Gold, 1914-1936: Myths and Realities. Chicago: Henry Regnery, 1972.

Pamuk, Sevket. A Monetary History of the Ottoman Empire. Cambridge: Cambridge University Press, 2000.

Panić, M. European Monetary Union: Lessons from the Classical Gold Standard. Houndmills, Basingstoke, Hampshire: St. Martin’s Press, 1992.

Powell, James. A History of the Canadian Dollar. Ottawa: Bank of Canada, 1999.

Redish, Angela. Bimetallism: An Economic and Historical Analysis. Cambridge: Cambridge University Press, 2000.

Rifaat, Mohammed Ali. The Monetary System of Egypt: An Inquiry into its History and Present Working. London: George Allen & Unwin, 1935.

Rockoff, Hugh. “Gold Supply.” In The New Palgrave Dictionary of Money & Finance, vol. 2, edited by Peter Newman, Murray Milgate, and John Eatwell, 271 73. London: Macmillan, 1992.

Sayers, R. S. The Bank of England, 1891-1944, Appendixes. Cambridge: Cambridge University Press, 1976.

Sayers, R. S. The Bank of England, 1891-1944. Cambridge: Cambridge University Press, 1986.

Schwartz, Anna J. “Alternative Monetary Regimes: The Gold Standard.” In Alternative Monetary Regimes, edited by Colin D. Campbell and William R. Dougan, 44-72. Baltimore: Johns Hopkins University Press, 1986.

Shinjo, Hiroshi. History of the Yen: 100 Years of Japanese Money-Economy. Kobe: Kobe University, 1962.

Spalding, William F. Tate’s Modern Cambist. London: Effingham Wilson, 1926.

Spalding, William F. Dictionary of the World’s Currencies and Foreign Exchange. London: Isaac Pitman, 1928.

Triffin, Robert. The Evolution of the International Monetary System: Historical Reappraisal and Future Perspectives. Princeton: International Finance Section, Princeton University, 1964.

Triffin, Robert. Our International Monetary System: Yesterday, Today, and Tomorrow. New York: Random House, 1968.

Wallich, Henry Christopher. Monetary Problems of an Export Economy: The Cuban Experience, 1914-1947. Cambridge, MA: Harvard University Press, 1950.

Yeager, Leland B. International Monetary Relations: Theory, History, and Policy, second edition. New York: Harper & Row, 1976.

Young, John Parke. Central American Currency and Finance. Princeton: Princeton University Press, 1925.

Citation: Officer, Lawrence. “Gold Standard”. EH.Net Encyclopedia, edited by Robert Whaples. March 26, 2008. URL http://eh.net/encyclopedia/gold-standard/

The Economic History of the Fur Trade: 1670 to 1870

Ann M. Carlos, University of Colorado
Frank D. Lewis, Queen’s University

Introduction

A commercial fur trade in North America grew out of the early contact between Indians and European fishermen who were netting cod on the Grand Banks off Newfoundland and on the Bay of Gaspé near Quebec. Indians would trade the pelts of small animals, such as mink, for knives and other iron-based products, or for textiles. Exchange at first was haphazard and it was only in the late sixteenth century, when the wearing of beaver hats became fashionable, that firms were established that dealt exclusively in furs. High-quality pelts are available only where winters are severe, so the trade took place predominantly in the regions we now know as Canada, although some activity took place further south along the Mississippi River and in the Rocky Mountains. There was also a market in deer skins that predominated in the Appalachians.

The first firms to participate in the fur trade were French, and under French rule the trade spread along the St. Lawrence and Ottawa Rivers, and down the Mississippi. In the seventeenth century, following the Dutch, the English developed a trade through Albany. Then in 1670, a charter was granted by the British crown to the Hudson’s Bay Company, which began operating from posts along the coast of Hudson Bay (see Figure 1). For roughly the next hundred years, this northern region saw competition of varying intensity between the French and the English. With the conquest of New France in 1763, the French trade shifted to Scottish merchants operating out of Montreal. After the negotiation of Jay’s Treaty (1794), the northern border was defined and trade along the Mississippi passed to the American Fur Company under John Jacob Astor. In 1821, the northern participants merged under the name of the Hudson’s Bay Company, and for many decades this merged company continued to trade in furs. Finally, in the 1990s, under pressure from animal rights groups, the Hudson’s Bay Company, which in the twentieth century had become a large Canadian retailer, ended the fur component of its operation.

Figure 1
Hudson’s Bay Company Hinterlands

Source: Ray (1987, plate 60)

The fur trade was based on pelts destined either for the luxury clothing market or for the felting industries, of which hatting was the most important. This was a transatlantic trade. The animals were trapped and exchanged for goods in North America, and the pelts were transported to Europe for processing and final sale. As a result, forces operating on the demand side of the market in Europe and on the supply side in North America determined prices and volumes; while intermediaries, who linked the two geographically separated areas, determined how the trade was conducted.

The Demand for Fur: Hats, Pelts and Prices

However much hats may be considered an accessory today, they were for centuries a mandatory part of everyday dress, for both men and women. Of course styles changed, and, in response to the vagaries of fashion and politics, hats took on various forms and shapes, from the high-crowned, broad-brimmed hat of the first two Stuarts to the conically-shaped, plainer hat of the Puritans. The Restoration of Charles II of England in 1660 and the Glorious Revolution in 1689 brought their own changes in style (Clarke, 1982, chapter 1). What remained a constant was the material from which hats were made – wool felt. The wool came from various animals, but towards the end of the fifteenth century beaver wool began to predominate. Over time, beaver hats became increasingly popular, eventually dominating the market. Only in the nineteenth century did silk replace beaver in high-fashion men’s hats.

Wool Felt

Furs have long been classified as either fancy or staple. Fancy furs are those demanded for the beauty and luster of their pelt. These furs – mink, fox, otter – are fashioned by furriers into garments or robes. Staple furs are sought for their wool. All staple furs have a double coating of hair with long, stiff, smooth hairs called guard hairs which protect the shorter, softer hair, called wool, that grows next to the animal skin. Only the wool can be felted. Each of the shorter hairs is barbed and once the barbs at the ends of the hair are open, the wool can be compressed into a solid piece of material called felt. The prime staple fur has been beaver, although muskrat and rabbit have also been used.

Wool felt was used for over two centuries to make high-fashion hats. Felt is stronger than a woven material. It will not tear or unravel in a straight line; it is more resistant to water, and it will hold its shape even if it gets wet. These characteristics made felt the prime material for hatters especially when fashion called for hats with large brims. The highest quality hats would be made fully from beaver wool, whereas lower quality hats included inferior wool, such as rabbit.

Felt Making

The transformation of beaver skins into felt and then hats was a highly skilled activity. The process required first that the beaver wool be separated from the guard hairs and the skin, and that some of the wool have open barbs, since felt required some open-barbed wool in the mixture. Felt dates back to the nomads of Central Asia, who are said to have invented the process of felting and made their tents from this light but durable material. Although the art of felting disappeared from much of western Europe during the first millennium, felt-making survived in Russia, Sweden, and Asia Minor. As a result of the Medieval Crusades, felting was reintroduced through the Mediterranean into France (Crean, 1962).

In Russia, the felting industry was based on the European beaver (castor fiber). Given their long tradition of working with beaver pelts, the Russians had perfected the art of combing out the short barbed hairs from among the longer guard hairs, a technology that they safeguarded. As a consequence, the early felting trades in England and France had to rely on beaver wool imported from Russia, although they also used domestic supplies of wool from other animals, such as rabbit, sheep and goat. But by the end of the seventeenth century, Russian supplies were drying up, reflecting the serious depletion of the European beaver population.

Coincident with the decline in European beaver stocks was the emergence of a North American trade. North American beaver (castor canadensis) was imported through agents in the English, French and Dutch colonies. Although many of the pelts were shipped to Russia for initial processing, the growth of the beaver market in England and France led to the development of local technologies, and more knowledge of the art of combing. Separating the beaver wool from the felt was only the first step in the felting process. It was also necessary that some of the barbs on the short hairs be raised or open. On the animal these hairs were naturally covered with keratin to prevent the barbs from opening, thus to make felt, the keratin had to be stripped from at least some of the hairs. The process was difficult to refine and entailed considerable experimentation by felt-makers. For instance, one felt maker “bundled [the skins] in a sack of linen and boiled [them] for twelve hours in water containing several fatty substances and nitric acid” (Crean, 1962, p. 381). Although such processes removed the keratin, they did so at the price of a lower quality wool.

The opening of the North American trade not only increased the supply of skins for the felting industry, it also provided a subset of skins whose guard hairs had already been removed and the keratin broken down. Beaver pelts imported from North America were classified as either parchment beaver (castor sec – dry beaver), or coat beaver (castor gras – greasy beaver). Parchment beaver were from freshly caught animals, whose skins were simply dried before being presented for trade. Coat beaver were skins that had been worn by the Indians for a year or more. With wear, the guard hairs fell out and the pelt became oily and more pliable. In addition, the keratin covering the shorter hairs broke down. By the middle of the seventeenth century, hatters and felt-makers came to learn that parchment and coat beaver could be combined to produce a strong, smooth, pliable, top-quality waterproof material.

Until the 1720s, beaver felt was produced with relatively fixed proportions of coat and parchment skins, which led to periodic shortages of one or the other type of pelt. The constraint was relaxed when carotting was developed, a chemical process by which parchment skins were transformed into a type of coat beaver. The original carotting formula consisted of salts of mercury diluted in nitric acid, which was brushed on the pelts. The use of mercury was a big advance, but it also had serious health consequences for hatters and felters, who were forced to breathe the mercury vapor for extended periods. The expression “mad as a hatter” dates from this period, as the vapor attacked the nervous systems of these workers.

The Prices of Parchment and Coat Beaver

Drawn from the accounts of the Hudson’s Bay Company, Table 1 presents some eighteenth century prices of parchment and coat beaver pelts. From 1713 to 1726, before the carotting process had become established, coat beaver generally fetched a higher price than parchment beaver, averaging 6.6 shillings per pelt as compared to 5.5 shillings. Once carotting was widely used, however, the prices were reversed, and from 1730 to 1770 parchment exceeded coat in almost every year. The same general pattern is seen in the Paris data, although there the reversal was delayed, suggesting slower diffusion in France of the carotting technology. As Crean (1962, p. 382) notes, Nollet’s L’Art de faire des chapeaux included the exact formula, but it was not published until 1765.

A weighted average of parchment and coat prices in London reveals three episodes. From 1713 to 1722 prices were quite stable, fluctuating within the narrow band of 5.0 to 5.5 shillings per pelt. During the period 1723 to 1745, prices moved sharply higher and remained in the range of 7 to 9 shillings. The years 1746 to 1763 saw another big increase to over 12 shillings per pelt. There are far fewer prices available for Paris, but we do know that in the period 1739 to 1753 the trend was also sharply higher with prices more than doubling.

Table 1
Price of Beaver Pelts in Britain: 1713-1763
(shillings per skin)

Year Parchment Coat Averagea Year Parchment Coat Averagea
1713 5.21 4.62 5.03 1739 8.51 7.11 8.05
1714 5.24 7.86 5.66 1740 8.44 6.66 7.88
1715 4.88 5.49 1741 8.30 6.83 7.84
1716 4.68 8.81 5.16 1742 7.72 6.41 7.36
1717 5.29 8.37 5.65 1743 8.98 6.74 8.27
1718 4.77 7.81 5.22 1744 9.18 6.61 8.52
1719 5.30 6.86 5.51 1745 9.76 6.08 8.76
1720 5.31 6.05 5.38 1746 12.73 7.18 10.88
1721 5.27 5.79 5.29 1747 10.68 6.99 9.50
1722 4.55 4.97 4.55 1748 9.27 6.22 8.44
1723 8.54 5.56 7.84 1749 11.27 6.49 9.77
1724 7.47 5.97 7.17 1750 17.11 8.42 14.00
1725 5.82 6.62 5.88 1751 14.31 10.42 12.90
1726 5.41 7.49 5.83 1752 12.94 10.18 11.84
1727 7.22 1753 10.71 11.97 10.87
1728 8.13 1754 12.19 12.68 12.08
1729 9.56 1755 12.05 12.04 11.99
1730 8.71 1756 13.46 12.02 12.84
1731 6.27 1757 12.59 11.60 12.17
1732 7.12 1758 13.07 11.32 12.49
1733 8.07 1759 15.99 14.68
1734 7.39 1760 13.37 13.06 13.22
1735 8.33 1761 10.94 13.03 11.36
1736 8.72 7.07 8.38 1762 13.17 16.33 13.83
1737 7.94 6.46 7.50 1763 16.33 17.56 16.34
1738 8.95 6.47 8.32

a A weighted average of the prices of parchment, coat and half parchment beaver pelts. Weights are based on the trade in these types of furs at Fort Albany. Prices of the individual types of pelts are not available for the years 1727 to 1735.

Source: Carlos and Lewis, 1999.
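The “Average” column of Table 1 can be read as a trade-weighted mean of the pelt prices. A minimal sketch of that calculation is given below; the half-parchment price and all of the trade volumes shown are hypothetical placeholders, not the Fort Albany figures used by Carlos and Lewis (1999).

```python
# Minimal sketch of the weighted-average pelt price underlying the "Average"
# column of Table 1. The trade-volume weights and the half-parchment price
# are hypothetical placeholders, not the actual Fort Albany data.

def weighted_price(prices, volumes):
    """Average price per pelt, weighted by the number of each type traded."""
    total_pelts = sum(volumes.values())
    return sum(prices[k] * volumes[k] for k in prices) / total_pelts

# Illustrative 1713 values: parchment and coat prices are taken from Table 1;
# the half-parchment price and the volumes are assumed for the example.
prices_1713 = {"parchment": 5.21, "coat": 4.62, "half_parchment": 4.90}
volumes_1713 = {"parchment": 8000, "coat": 5000, "half_parchment": 1000}

print(f"Weighted average: {weighted_price(prices_1713, volumes_1713):.2f} shillings per pelt")
```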

The Demand for Beaver Hats

The main cause of the rising beaver pelt prices in England and France was the increasing demand for beaver hats, which included hats made exclusively with beaver wool and referred to as “beaver hats,” and those hats containing a combination of beaver and a lower-cost wool, such as rabbit. These were called “felt hats.” Unfortunately, aggregate consumption series for eighteenth-century Europe are not available. We do, however, have Gregory King’s contemporary work for England, which provides a good starting point. In a table entitled “Annual Consumption of Apparell, anno 1688,” King calculated that consumption of all types of hats was about 3.3 million, or nearly one hat per person. King also included a second category, caps of all sorts, for which he estimated consumption at 1.6 million (Harte, 1991, p. 293). This means that as early as 1700, the potential market for hats in England alone was nearly 5 million per year. Over the next century, the rising demand for beaver pelts was a result of a number of factors, including population growth, a greater export market, a shift toward beaver hats from hats made of other materials, and a shift from caps to hats.

The British export data indicate that demand for beaver hats was growing not just in England, but in Europe as well. In 1700 a modest 69,500 beaver hats were exported from England, along with almost the same number of felt hats; but by 1760, slightly over 500,000 beaver hats and 370,000 felt hats were shipped from English ports (Lawson, 1943, app. I). In total, over the seventy years to 1770, 21 million beaver and felt hats were exported from England. In addition to the final product, England exported the raw material, beaver pelts. In 1760, £15,000 in beaver pelts were exported along with a range of other furs. The hats and the pelts tended to go to different parts of Europe. Raw pelts were shipped mainly to northern Europe, including Germany, Flanders, Holland and Russia; whereas hats went to the southern European markets of Spain and Portugal. In 1750, Germany imported 16,500 beaver hats, while Spain imported 110,000 and Portugal 175,000 (Lawson, 1943, appendices F & G). Over the first six decades of the eighteenth century, these markets grew dramatically, such that the value of beaver hat sales to Portugal alone was £89,000 in 1756-1760, representing about 300,000 hats or two-thirds of the entire export trade.

European Intermediaries in the Fur Trade

By the eighteenth century, the demand for furs in Europe was being met mainly by exports from North America with intermediaries playing an essential role. The American trade, which moved along the main water systems, was organized largely through chartered companies. At the far north, operating out of Hudson Bay, was the Hudson’s Bay Company, chartered in 1670. The Compagnie d’Occident, founded in 1718, was the most successful of a series of monopoly French companies. It operated through the St. Lawrence River and in the region of the eastern Great Lakes. There was also an English trade through Albany and New York, and a French trade down the Mississippi.

The Hudson’s Bay Company and the Compagnie d’Occident, although similar in title, had very different internal structures. The English trade was organized along hierarchical lines with salaried managers, whereas the French monopoly issued licenses (congés) or leased out the use of its posts. The structure of the English company allowed for more control from the London head office, but required systems that could monitor the managers of the trading posts (Carlos and Nicholas, 1990). The leasing and licensing arrangements of the French made monitoring unnecessary, but led to a system where the center had little influence over the conduct of the trade.

The French and English were distinguished as well by how they interacted with the Natives. The Hudson’s Bay Company established posts around the Bay and waited for the Indians, often middlemen, to come to them. The French, by contrast, moved into the interior, directly trading with the Indians who harvested the furs. The French arrangement was more conducive to expansion, and by the end of the seventeenth century, they had moved beyond the St. Lawrence and Ottawa rivers into the western Great Lakes region (see Figure 1). Later they established posts in the heart of the Hudson Bay hinterland. In addition, the French explored the river systems to the south, setting up a post at the mouth of the Mississippi. As noted earlier, after Jay’s Treaty was signed, the French were replaced in the Mississippi region by U.S. interests which later formed the American Fur Company (Haeger, 1991).

The English takeover of New France at the end of the French and Indian Wars in 1763 did not, at first, fundamentally change the structure of the trade. Rather, French management was replaced by Scottish and English merchants operating in Montreal. But, within a decade, the Montreal trade was reorganized into partnerships between merchants in Montreal and traders who wintered in the interior. The most important of these arrangements led to the formation of the Northwest Company, which for the first two decades of the nineteenth century, competed with the Hudson’s Bay Company (Carlos and Hoffman, 1986). By the early decades of the nineteenth century, the Hudson’s Bay Company, the Northwest Company, and the American Fur Company had, combined, a system of trading posts across North America, including posts in Oregon and British Columbia and on the Mackenzie River. In 1821, the Northwest Company and the Hudson’s Bay Company merged under the name of the Hudson’s Bay Company. The Hudson’s Bay Company then ran the trade as a monopsony until the late 1840s when it began facing serious competition from trappers to the south. The Company’s role in the northwest changed again with the Canadian Confederation in 1867. Over the next decades treaties were signed with many of the northern tribes forever changing the old fur trade order in Canada.

The Supply of Furs: The Harvesting of Beaver and Depletion

During the eighteenth century, the changing technology of felt production and the growing demand for felt hats were met by attempts to increase the supply of furs, especially the supply of beaver pelts. Any permanent increase, however, was ultimately dependent on the animal resource base. How that base changed over time must be a matter of speculation since no animal counts exist from that period; nevertheless, the evidence we do have points to a scenario in which over-harvesting, at least in some years, gave rise to serious depletion of the beaver and possibly other animals such as marten that were also being traded. Why the beaver were over-harvested was closely related to the prices Natives were receiving, but important as well was the nature of Native property rights to the resource.

Harvests in the Fort Albany and York Factory Regions

That beaver populations along the Eastern seaboard regions of North America were depleted as the fur trade advanced is widely accepted. In fact the search for new sources of supply further west, including the region of Hudson Bay, has been attributed in part to dwindling beaver stocks in areas where the fur trade had been long established. Although there has been little discussion of the impact that the Hudson’s Bay Company and the French, who traded in the region of Hudson Bay, were having on the beaver stock, the remarkably complete records of the Hudson’s Bay Company provide the basis for reasonable inferences about depletion. From 1700 there is an uninterrupted annual series of fur returns at Fort Albany; the fur returns from York Factory begin in 1716 (see Figure 1).

The beaver returns at Fort Albany and York Factory for the period 1700 to 1770 are described in Figure 2. At Fort Albany the number of beaver skins over the period 1700 to 1720 averaged roughly 19,000, with wide year-to-year fluctuations; the range was about 15,000 to 30,000. After 1720 and until the late 1740s average returns declined by about 5,000 skins, and remained within the somewhat narrower range of roughly 10,000 to 20,000 skins. The period of relative stability was broken in the final years of the 1740s. In 1748 and 1749, returns increased to an average of nearly 23,000. Following these unusually strong years, the trade fell precipitously, so that in 1756 fewer than 6,000 beaver pelts were received. There was a brief recovery in the early 1760s, but by the end of the decade trade had fallen below even the mid-1750s levels. In 1770, Fort Albany took in just 3,600 beaver pelts. This pattern – unusually large returns in the late 1740s and low returns thereafter – indicates that the beaver in the Fort Albany region were being seriously depleted.

Figure 2
Beaver Traded at Fort Albany and York Factory 1700 – 1770

Source: Carlos and Lewis, 1993.

The beaver returns at York Factory from 1716 to 1770, also described in Figure 2, have some of the key features of the Fort Albany data. After some low returns early on (from 1716 to 1720), the number of beaver pelts increased to an average of 35,000. There were extraordinary returns in 1730 and 1731, when the average was 55,600 skins, but beaver receipts then stabilized at about 31,000 over the remainder of the decade. The first break in the pattern came in the early 1740s, shortly after the French established several trading posts in the area. Surprisingly perhaps, given the increased competition, trade in beaver pelts at the Hudson’s Bay Company post increased to an average of 34,300 over the period 1740 to 1743. Indeed, the 1742 return of 38,791 skins was the largest since the French had established any posts in the region. The returns in 1745 were also strong, but after that year the trade in beaver pelts began a decline that continued through to 1770. Average returns over the rest of the decade were 25,000; the average during the 1750s was 18,000, and just 15,500 in the 1760s. The pattern of beaver returns at York Factory – high returns in the early 1740s followed by a large decline – strongly suggests that, as in the Fort Albany hinterland, the beaver population had been greatly reduced.

The overall carrying capacity of any region, or the size of the animal stock, depends on the nature of the terrain and the underlying biological determinants such as birth and death rates. A standard relationship between the annual harvest and the animal population is the Lotka-Volterra logistic, commonly used in natural resource models to relate the natural growth of a population to the size of that population:
F(X) = aX - bX², with a, b > 0 (1)

where X is the population, F(X) is the natural growth in the population, a is the maximum proportional growth rate of the population, and b = a/Xmax, where Xmax is the upper limit to population size (the carrying capacity). The population dynamics of the species exploited depend on the harvest each period:

ΔX = aX - bX² - H (2)

where ΔX is the annual change in the population and H is the harvest. The choice of the growth parameter a and the maximum population Xmax is central to the population estimates; both have been based largely on estimates from the beaver ecology literature and Ontario provincial field reports of beaver densities (Carlos and Lewis, 1993).

Simulations based on equation 2 suggest that, until the 1730s, beaver populations remained at levels roughly consistent with maximum sustained yield management, sometimes referred to as the biological optimum. But after the 1730s there was a decline in beaver stocks to about half the maximum sustained yield levels. The cause of the depletion was closely related to what was happening in Europe. There, buoyant demand for felt hats and dwindling local fur supplies resulted in much higher prices for beaver pelts. These higher prices, in conjunction with the resulting competition from the French in the Hudson Bay region, led the Hudson’s Bay Company to offer much better terms to Natives who came to their trading posts (Carlos and Lewis, 1999).
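A minimal simulation of equation (2) illustrates the mechanism. The parameter values, initial stock, and harvest path below are illustrative assumptions only, not the calibration used by Carlos and Lewis (1993); the point is simply that a sustained jump in harvests pushes the stock below the maximum-sustained-yield level.

```python
# Minimal simulation of beaver stock dynamics under equation (2):
#   X[t+1] = X[t] + a*X[t] - b*X[t]**2 - H[t],  with b = a / Xmax.
# All parameter values and the harvest series are illustrative assumptions.

def simulate_stock(a, x_max, x0, harvests):
    """Return the population path implied by the logistic model and a harvest series."""
    b = a / x_max
    x = x0
    path = [x]
    for h in harvests:
        x = max(x + a * x - b * x * x - h, 0.0)  # population cannot go negative
        path.append(x)
    return path

# Illustrative parameters: 25% maximum growth rate, carrying capacity of
# 200,000 animals, and a jump in harvests mimicking the late-1740s surge.
a, x_max = 0.25, 200_000
harvests = [10_000] * 20 + [20_000] * 10   # assumed harvest series
path = simulate_stock(a, x_max, x0=100_000, harvests=harvests)

msy_stock = x_max / 2  # the stock that yields maximum sustained growth
print(f"Final stock: {path[-1]:,.0f} (maximum-sustained-yield stock: {msy_stock:,.0f})")
```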

Figure 3 reports a price index for furs at Fort Albany and at York Factory. The index represents a measure of what Natives received in European goods for their furs. At Fort Albany, fur prices were close to 70 from 1713 to 1731, but in 1732, in response to higher European fur prices and the entry of La Vérendrye, an important French trader, the price jumped to 81. After that year, prices continued to rise. The pattern at York Factory was similar. Although prices were high in the early years when the post was being established, beginning in 1724 the price settled down to about 70. At York Factory, the jump in price came in 1738, which was the year La Vérendrye set up a trading post in the York Factory hinterland. Prices then continued to increase. It was these higher fur prices that led to over-harvesting and, ultimately, a decline in beaver stocks.

Figure 3
Price Index for Furs: Fort Albany and York Factory, 1713 – 1770

Source: Carlos and Lewis, 2001.

Property Rights Regimes

An increase in the price paid to Native hunters did not have to lead to a decline in the animal stocks, because Indians could have chosen to limit their harvesting. Why they did not was closely related to their system of property rights. One can classify property rights along a spectrum with, at one end, open access, where anyone can hunt or fish, and at the other, complete private property, where a sole owner has full control over the resource. In between lies a range of property rights regimes with access controlled by a community or a government, and where individual members of the group do not necessarily have private property rights. Open access creates a situation where there is less incentive to conserve, because animals not harvested by a particular hunter will be available to other hunters in the future. Thus, the closer a system is to open access, the more likely it is that the resource will be depleted.

Across aboriginal societies in North America, one finds a range of property rights regimes. Native Americans did have a concept of trespass and of property, but individual and family rights to resources were not absolute. Under what is sometimes referred to as the Good Samaritan principle (McManus, 1972), outsiders were not permitted to harvest furs on another’s territory for trade, but they were allowed to hunt game and even beaver for food. Combined with this limitation to private property was an Ethic of Generosity that included liberal gift-giving, whereby any visitor to one’s encampment was to be supplied with food and shelter.

Social norms such as gift-giving and the related Good Samaritan principle emerged because of the nature of the aboriginal environment. The primary objective of aboriginal societies was survival. Hunting was risky, and so rules were put in place that would reduce the risk of starvation. As Berkes et al. (1989, p. 153) note, for such societies: “all resources are subject to the overriding principle that no one can prevent a person from obtaining what he needs for his family’s survival.” Such actions were reciprocal and, especially in the sub-arctic world, served as an insurance mechanism. These norms, however, also reduced the incentive to conserve the beaver and other animals that were part of the fur trade. The combination of these norms and the increasing price paid to Native traders led to the large harvests in the 1740s and ultimately depletion of the animal stock.

The Trade in European Goods

Indians were the primary agents in the North American commercial fur trade. It was they who hunted the animals, and transported and traded the pelts or skins to European intermediaries. The exchange was voluntary. In return for their furs, Indians obtained both access to an iron technology to improve production and access to a wide range of new consumer goods. It is important to recognize, however, that although the European goods were new to aboriginals, the concept of exchange was not. The archaeological evidence indicates an extensive trade between Native tribes in the north and south of North America prior to European contact.

The extraordinary records of the Hudson’s Bay Company allow us to form a clear picture of what Indians were buying. Table 2 lists the goods received by Natives at York Factory, which was by far the largest of the Hudson’s Bay Company trading posts. As is evident from the table, the commercial trade involved more than beads and baubles, or even guns and alcohol; rather, Native traders were receiving a wide range of products that improved their ability to meet their subsistence requirements and allowed them to raise their living standards. The items have been grouped by use. The producer goods category was dominated by firearms, including guns, shot and powder, but it also included knives, awls and twine. The Natives traded for guns of different lengths. The 3-foot gun was used mainly for waterfowl and in heavily forested areas where game could be shot at close range. The 4-foot gun was more accurate and suitable for open spaces. In addition, the 4-foot gun could play a role in warfare. Maintaining guns in the harsh sub-arctic environment was a serious problem, and ultimately the Hudson’s Bay Company was forced to send gunsmiths to its trading posts to assess quality and help with repairs. Kettles and blankets were the main items in the “household goods” category. These goods probably became necessities to the Natives who adopted them. Then there were the luxury goods, which have been divided into two broad categories: “tobacco and alcohol,” and “other luxuries,” dominated by cloth of various kinds (Carlos and Lewis, 2001; 2002).

Table 2
Value of Goods Received at York Factory in 1740 (made beaver)

We have much less information about the French trade. The French are reported to have exchanged similar items, although given their higher transport costs, both the furs received and the goods traded tended to be higher in value relative to weight. The Europeans, it might be noted, supplied no food to the trade in the eighteenth century. In fact, Indians helped provision the posts with fish and fowl. This role of food purveyor grew in the nineteenth century as groups known as the “home guard Cree” came to live around the posts; as well, pemmican, supplied by Natives, became an important source of nourishment for Europeans involved in the buffalo hunts.

The value of the goods listed in Table 2 is expressed in terms of the unit of account, the made beaver, which the Hudson’s Bay Company used to record its transactions and determine the rate of exchange between furs and European goods. The price of a prime beaver pelt was 1 made beaver, and every other type of fur and good was assigned a price based on that unit. For example, a marten (a smaller fur-bearer related to the mink) was a third of a made beaver, a blanket was 7 made beaver, a gallon of brandy, 4 made beaver, and a yard of cloth, 3½ made beaver. These were the official prices at York Factory. Thus Indians, who traded at these prices, received, for example, a gallon of brandy for four prime beaver pelts, two yards of cloth for seven beaver pelts, and a blanket for 21 marten pelts. This was barter trade in that no currency was used; and although the official prices implied certain rates of exchange between furs and goods, Hudson’s Bay Company factors were encouraged to trade at rates more favorable to the Company. The actual rates, however, depended on market conditions in Europe and, most importantly, the extent of French competition in Canada. Figure 3 illustrates the rise in the price of furs at York Factory and Fort Albany in response to higher beaver prices in London and Paris, as well as to a greater French presence in the region (Carlos and Lewis, 1999). The increase in price also reflects the bargaining ability of Native traders during periods of direct competition between the English and French and later the Hudson’s Bay Company and the Northwest Company. At such times, the Native traders would play both parties off against each other (Ray and Freeman, 1978).
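The barter ratios quoted above follow directly from the official made-beaver prices. The short sketch below simply restates those prices from the text and computes the implied exchange rates; the function name and structure are illustrative, not Hudson’s Bay Company accounting practice.

```python
# The made beaver (MB) as a unit of account: official York Factory prices
# as given in the text, and the barter ratios they imply.
prices_mb = {
    "prime beaver pelt": 1.0,
    "marten pelt": 1 / 3,
    "blanket": 7.0,
    "gallon of brandy": 4.0,
    "yard of cloth": 3.5,
}

def pelts_for(good, quantity, pelt="prime beaver pelt"):
    """Number of pelts needed to buy `quantity` units of `good` at official prices."""
    return quantity * prices_mb[good] / prices_mb[pelt]

print(pelts_for("gallon of brandy", 1))             # 4 beaver pelts
print(pelts_for("yard of cloth", 2))                # 7 beaver pelts
print(pelts_for("blanket", 1, pelt="marten pelt"))  # 21 marten pelts
```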

The records of the Hudson’s Bay Company provide us with a unique window on the trading process, including the bargaining ability of Native traders, which is evident in the range of commodities received. Natives only bought goods they wanted. It is clear from the Company records that it was the Natives who largely determined the nature and quality of those goods. The records also tell us how income from the trade was being allocated. The breakdown differed by post and varied over time; but, for example, in 1740 at York Factory, the distribution was: producer goods – 44 percent; household goods – 9 percent; alcohol and tobacco – 24 percent; and other luxuries – 23 percent. An important implication of the trade data is that, like many Europeans and most American colonists, Native Americans were taking part in the consumer revolution of the eighteenth century (de Vries, 1993; Shammas, 1993). In addition to necessities, they were consuming a remarkable variety of luxury products. Cloth, including baize, duffel, flannel, and gartering, was by far the largest class, but they also purchased beads, combs, looking glasses, rings, shirts, and vermillion, among a much longer list. Because these items were heterogeneous in nature, the Hudson’s Bay Company’s head office went to great lengths to satisfy the specific tastes of Native consumers. Attempts were also made, not always successfully, to introduce new products (Carlos and Lewis, 2002).

Perhaps surprising, given the emphasis that has been placed on it in the historical literature, was the comparatively small role of alcohol in the trade. At York Factory in 1740, Native traders received a total of 494 gallons of brandy and “strong water,” which had a value of 1,976 made beaver. More than twice this amount was spent on tobacco in that year, nearly five times as much on firearms, twice as much on cloth, and more was spent on blankets and kettles than on alcohol. Thus brandy, although a significant item of trade, was by no means a dominant one. In addition, alcohol could hardly have created serious social problems during this period. The amount received would have allowed for no more than ten two-ounce drinks per year for the adult Native population living in the region.
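A back-of-envelope check of that last claim is straightforward. The adult population of the York Factory hinterland is not given in the text, so the figure computed below is simply the population consistent with exactly ten two-ounce drinks per adult, i.e. a lower bound implied by the stated ceiling; the gallon size used is the 128-ounce wine (US) gallon, which is an assumption.

```python
# Back-of-envelope check of the alcohol claim above.
gallons = 494
ounces_per_gallon = 128     # wine/US gallon assumed; imperial gallons would give more
drink_oz = 2

drinks = gallons * ounces_per_gallon / drink_oz   # total two-ounce drinks (~31,600)
implied_adults = drinks / 10                      # adults if each had exactly 10 drinks

print(f"{drinks:,.0f} drinks -> at least {implied_adults:,.0f} adults at 10 drinks each")
print(f"Value check: {gallons * 4} made beaver")  # 4 MB per gallon gives 1,976 MB, as in the text
```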

The Labor Supply of Natives

Another important question can be addressed using the trade data. Were Natives “lazy and improvident,” as they have been described by some contemporaries, or were they “industrious,” like the American colonists and many Europeans? Central to answering this question is how Native groups responded to the price of furs, which began rising in the 1730s. Much of the literature argues that Indian trappers reduced their effort in response to higher fur prices; that is, they had backward-bending supply curves of labor. The view is that Natives had a fixed demand for European goods that, at higher fur prices, could be met with fewer furs, and hence less effort. Although widely cited, this argument does not stand up. Not only were higher fur prices accompanied by larger total harvests of furs in the region, but the pattern of Native expenditure also points to a scenario of greater effort. From the late 1730s to the 1760s, as the price of furs rose, the share of expenditure on luxury goods increased dramatically (see Figure 4). Thus Natives were not content simply to accept their good fortune by working less; rather, they seized the opportunity the strong fur market offered, increasing their effort in the commercial sector and thereby dramatically augmenting their purchases of the luxury goods that could raise their living standards.
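The contrast between the two hypotheses can be made concrete with a toy calculation. In the backward-bending case, a fixed target of European goods means the harvest falls as the fur price rises; in the alternative, effort and harvest rise with the price. All numbers and the elasticity below are hypothetical assumptions, chosen only to show the direction of each response, not estimates from the Hudson’s Bay Company data.

```python
# Toy contrast of the two labor-supply hypotheses discussed above.
# All numbers are hypothetical; only the direction of the response matters.

def harvest_fixed_demand(target_goods_mb, fur_price):
    """Backward-bending case: harvest only enough furs to buy a fixed bundle,
    so a higher fur price implies fewer furs harvested."""
    return target_goods_mb / fur_price

def harvest_responsive(base_harvest, fur_price, base_price, elasticity=0.5):
    """Alternative: effort (and harvest) rises with the relative fur price."""
    return base_harvest * (fur_price / base_price) ** elasticity

for price in (0.7, 0.8, 1.0):   # index of goods received per pelt (cf. Figure 3)
    fixed = harvest_fixed_demand(target_goods_mb=14_000, fur_price=price)
    responsive = harvest_responsive(base_harvest=20_000, fur_price=price, base_price=0.7)
    print(f"price {price:.1f}: fixed-demand harvest {fixed:,.0f}, responsive harvest {responsive:,.0f}")
```

The observed record, with rising harvests and a rising share of luxury expenditure as fur prices increased, matches the second case, not the first.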

Figure 4
Native Expenditure Shares at York Factory 1716 – 1770

Source: Carlos and Lewis, 2001.

A Note on the Non-commercial Sector

As important as the fur trade was to Native Americans in the sub-arctic regions of Canada, commerce with the Europeans comprised just one, relatively small, part of their overall economy. Exact figures are not available, but the traditional sectors (hunting, gathering, food preparation and, to some extent, agriculture) must have accounted for at least 75 to 80 percent of Native labor during these decades. Nevertheless, despite the limited time spent in commercial activity, the fur trade had a profound effect on the nature of the Native economy and Native society. The introduction of European producer goods, such as guns, and household goods, mainly kettles and blankets, changed the way Native Americans achieved subsistence; and the European luxury goods expanded the range of products that allowed them to move beyond subsistence. Most importantly, the fur trade connected Natives to Europeans in ways that affected how and how much they chose to work, where they chose to live, and how they exploited the resources on which the trade and their survival were based.

References

Berkes, Fikret, David Feeny, Bonnie J. McCay, and James M. Acheson. “The Benefits of the Commons.” Nature 340 (July 13, 1989): 91-93.

Braund, Kathryn E. Holland. Deerskins and Duffels: The Creek Indian Trade with Anglo-America, 1685-1815. Lincoln: University of Nebraska Press, 1993.

Carlos, Ann M., and Elizabeth Hoffman. “The North American Fur Trade: Bargaining to a Joint Profit Maximum under Incomplete Information, 1804-1821.” Journal of Economic History 46, no. 4 (1986): 967-86.

Carlos, Ann M., and Frank D. Lewis. “Indians, the Beaver and the Bay: The Economics of Depletion in the Lands of the Hudson’s Bay Company, 1700-1763.” Journal of Economic History 53, no. 3 (1993): 465-94.

Carlos, Ann M., and Frank D. Lewis. “Property Rights, Competition and Depletion in the Eighteenth-Century Canadian Fur Trade: The Role of the European Market.” Canadian Journal of Economics 32, no. 3 (1999): 705-28.

Carlos, Ann M., and Frank D. Lewis. “Property Rights and Competition in the Depletion of the Beaver: Native Americans and the Hudson’s Bay Company.” In The Other Side of the Frontier: Economic Explorations in Native American History, edited by Linda Barrington, 131-149. Boulder, CO: Westview Press, 1999.

Carlos, Ann M., and Frank D. Lewis. “Trade, Consumption, and the Native Economy: Lessons from York Factory, Hudson Bay.” Journal of Economic History 61, no. 4 (2001): 465-94.

Carlos, Ann M., and Frank D. Lewis. “Marketing in the Land of Hudson Bay: Indian Consumers and the Hudson’s Bay Company, 1670-1770.” Enterprise and Society 2 (2002): 285-317.

Carlos, Ann M., and Stephen Nicholas. “Agency Problems in Early Chartered Companies: The Case of the Hudson’s Bay Company.” Journal of Economic History 50, no. 4 (1990): 853-75.

Clarke, Fiona. Hats. London: Batsford, 1982.

Crean, J. F. “Hats and the Fur Trade.” Canadian Journal of Economics and Political Science 28, no. 3 (1962): 373-386.

Corner, David. “The Tyranny of Fashion: The Case of the Felt-Hatting Trade in the Late Seventeenth and Eighteenth Centuries.” Textile History 22, no. 2 (1991): 153-178.

de Vries, Jan. “Between Purchasing Power and the World of Goods: Understanding the Household Economy in Early Modern Europe.” In Consumption and the World of Goods, edited by John Brewer and Roy Porter, 85-132. London: Routledge, 1993.

Ginsburg, Madeleine. The Hat: Trends and Traditions. London: Studio Editions, 1990.

Haeger, John D. John Jacob Astor: Business and Finance in the Early Republic. Detroit: Wayne State University Press, 1991.

Harte, N.B. “The Economics of Clothing in the Late Seventeenth Century.” Textile History 22, no. 2 (1991): 277-296.

Heidenreich, Conrad E., and Arthur J. Ray. The Early Fur Trade: A Study in Cultural Interaction. Toronto: McClelland and Stewart, 1976.

Helm, June, ed. Handbook of North American Indians, Volume 6: Subarctic. Washington: Smithsonian, 1981.

Innis, Harold. The Fur Trade in Canada (revised edition). Toronto: University of Toronto Press, 1956.

Krech III, Shepard. The Ecological Indian: Myth and History. New York: Norton, 1999.

Lawson, Murray G. Fur: A Study in English Mercantilism. Toronto: University of Toronto Press, 1943.

McManus, John. “An Economic Analysis of Indian Behavior in the North American Fur Trade.” Journal of Economic History 32, no. 1 (1972): 36-53.

Ray, Arthur J. Indians in the Fur Trade: Their Role as Hunters, Trappers and Middlemen in the Lands Southwest of Hudson Bay, 1660-1870. Toronto: University of Toronto Press, 1974.

Ray, Arthur J. and Donald Freeman. “Give Us Good Measure”: An Economic Analysis of Relations between the Indians and the Hudson’s Bay Company before 1763. Toronto: University of Toronto Press, 1978.

Ray, Arthur J. “Bayside Trade, 1720-1780.” In Historical Atlas of Canada 1, edited by R. Cole Harris, plate 60. Toronto: University of Toronto Press, 1987.

Rich, E. E. Hudson’s Bay Company, 1670 – 1870. 2 vols. Toronto: McClelland and Stewart, 1960.

Rich, E.E. “Trade Habits and Economic Motivation among the Indians of North America.” Canadian Journal of Economics and Political Science 26, no. 1 (1960): 35-53.

Shammas, Carole. “Changes in English and Anglo-American Consumption from 1550-1800.” In Consumption and the World of Goods, edited by John Brewer and Roy Porter, 177-205. London: Routledge, 1993.

Wien, Thomas. “Selling Beaver Skins in North America and Europe, 1720-1760: The Uses of Fur-Trade Imperialism.” Journal of the Canadian Historical Association, New Series 1 (1990): 293-317.

Citation: Carlos, Ann and Frank Lewis. “Fur Trade (1670-1870)”. EH.Net Encyclopedia, edited by Robert Whaples. March 16, 2008. URL http://eh.net/encyclopedia/the-economic-history-of-the-fur-trade-1670-to-1870/