
The Fireproof Building: Technology and Public Safety in the Nineteenth-Century American City

Author(s): Wermiel, Sara E.
Reviewer(s): McSwain, James B.

Published by EH.NET (November 2000)

Sara E. Wermiel, The Fireproof Building: Technology and Public Safety in the Nineteenth-Century American City. Baltimore: Johns Hopkins University Press, 2000. 301 pp. $45.00 (cloth), ISBN: 0-8018-6311-2.

Reviewed for EH.NET by James B. McSwain, Department of History, Tuskegee University.

Sara E. Wermiel, historian of technology and city planner, has produced a well-researched, clearly written study of the development of fireproof/fire-resistant buildings in the United States from the late eighteenth century to the threshold of World War I. Extensive documentation, a useful glossary, and an excellent bibliographic essay undergird her analysis. Her work contains helpful reproductions of contemporary engineering, architectural, and promotional drawings and sketches.

The drive to build fireproof buildings arose in part out of fear of conflagration, or a city-wide fire. Many municipalities established “fire limits,” a zone requiring strict exterior construction standards. This, however, did not address the flammability of interior materials. Finding a suitable non-flammable substitute for wood meant designing buildings with noncombustible floors and roofs. The initial solution was the masonry vault, a barrel-shaped, load-bearing span that supported the floor above and rested on massive, and expensive, walls and piers. Because of high costs and “technical difficulties,” only a dozen masonry-vaulted buildings, mainly for the Federal Government, were put up in the U.S. before 1850. Vaulted buildings performed well in fires but had several drawbacks. They had a thorough-going, anti-human atmosphere owing to the enormous walls, center pieces, columns, and the “thrust” of the vault arches, which blocked light and used up most of the interior space of the building.

Wermiel devotes the bulk of her book to the various solutions to the problem of putting up fireproof/fire-resistant floors and roofs without resorting to vault construction. By 1850 U.S. builders relied upon a system of iron beams and girders (horizontal spanning elements), in between which were brick arches, quite like the masonry vaults but not nearly as space-consuming. Subsequently, wrought iron, having superior tensile strength, replaced cast iron in framing buildings. Its malleability allowed rolling into “I”-shaped beams thinner and stronger than cast iron.

During the post-1865 construction boom, builders tried a number of alternatives to brick arches and floors, including iron sheets and concrete, stone slabs, and various sorts of solid and hollow clay (terra cotta or tile) blocks. They generally performed well, though structural iron often failed under intense heat. This suggested that noncombustible materials did not guarantee the survival of a building. So the notion of fireproof expanded to include noncombustible materials that did not conduct heat, which could distort flanges or other crucial components. In the 1890s building owners found in New England “mill construction” an attractive model of affordable fire-resistant construction that featured space separation accompanied by fire-fighting equipment. Drawing upon this paradigm, the Associated Factory Mutual Fire Insurance Companies (AFM) made features such as sprinklers, stairways isolated from floor areas, and exterior access ladders required items. The AFM regarded this “slow-burning” construction as a superior method to achieve fire prevention at a comparatively low cost.

In the 1880s elevators allowed buildings to go beyond the six-story limit. To make the proposed “skyscrapers” fire safe, architects and builders switched from load-bearing walls to a metal framework (skeleton), made of iron or steel, that carried the weight of the building. However, a major problem to resolve was egress. The contents of a fireproof/fire-resistant building could burn and produce deadly smoke, toxic fumes, and blistering heat that killed trapped occupants, forcing architects and engineers to focus upon how to leave buildings. In the 1860s fire escapes became the norm for New York City tenements. Yet city codes for other public buildings remained dangerously ambiguous. Boston led the way up to 1900 in imposing strict standards of egress for many new buildings and all tenements and boarding houses. After 1900 New York City authorities tied egress to occupancy, so that the more rooms a building had, the more exits were required.

Two fires revealed gaps between code and practice. In 1903 the Iroquois Theatre in Chicago, outfitted with fireproof floors, roofs, and partitions, burned, killing 581 people. However, it had steps in front of doors, fire escapes exposed to flames, inadequate balcony stairways, and no exit signs. Subsequent Chicago ordinances dealt with all of these shortcomings. In March 1911 the New York building containing the Triangle Shirtwaist Company burned. The iron, steel, and tile structure survived nicely. Although there were stairways and escapes, several had doors that opened inward, and one may have been locked. This led to building codes that redefined adequate egress from buildings.

Wermiel concludes that the pivotal event of modern fire-resistive construction was the adoption of skeleton frame construction. It brought together existing fireproofing materials and experience to build fire-resistant tall buildings. She also argues that since fireproof buildings were so much more expensive than regular buildings, government contract requirements and code regulations provided incentives for architects, engineers, and industrial people to come up with new materials and construction techniques for fireproof projects.

Several observations are in order. Having investigated controversies over petroleum storage (1901-03), I was aware of insurance concerns over conflagration in the late nineteenth century. Several names familiar to me surfaced in Wermiel’s book. Engineers F.J.T. Stewart and William H. Merrill served as advisors to the National Board of Fire Underwriters, the National Fire Protection Association, and various consulting committees. I would, therefore, have enjoyed more information about the role these groups played in the development of and campaign for fire-resistant materials and construction techniques. But I cannot fault Wermiel for sticking to her topic. Her work has whetted my appetite for more explanation, an outcome I attribute only to good books.

Further, there are important parallels between Wermiel’s book and Thomas J. Misa’s A Nation of Steel: The Making of Modern America, 1865-1925 (1995). Wermiel points to the crucial role of government in stimulating demand for fireproof building design and materials. Similarly, Misa explores the relationship between central governments and an international cartel of steel manufacturers who monopolized the fabrication of pre-WWI battleship armor. However, in fireproofing there was much more unrestrained competition among architects, suppliers, and contractors than among the armor moguls. Price considerations played a constant role in fireproof construction versus traditional wood framing, and in iron and steel production. Here is where Misa and Wermiel converge, because both address the market for iron and steel rails and beams. Wermiel is clear about the role that price, engineering preferences, and the shift from wrought iron columns to steel columns played in making skeleton construction important to the evolution of fireproof buildings. Misa’s account of the skyscraper is a bit more complex and provides crucial details of how designers and builders came to favor open-hearth steel over steel produced by Bessemer rail shops. It is also set in the broad context of urbanization and the push this provided to make better use of space by building up rather than out.

Wermiel’s book is carefully crafted and informative. Though readers may benefit from collateral reading in works such as Misa’s to fill out the context of certain crucial events in the saga of fireproof construction, Wermiel has assembled and synthesized a great deal of difficult technical detail to support her narrative and to sustain her insightful conclusions.

James B. McSwain has recently completed “Energy and Municipal Regulation: The Struggle to Control the Storage and Supply of Fuel Oil in Mobile, Alabama, 1894-1910,” the first of three related essays on this issue in the Gulf South (Mobile, New Orleans, Galveston).

Subject(s): Industry: Manufacturing and Construction
Geographic Area(s): North America
Time Period(s): 19th Century

An Economic History of Weather Forecasting

Erik D. Craft, University of Richmond

Introduction

The United States Congress established a national weather organization in 1870 when it instructed the Secretary of War to organize the collection of meteorological observations and forecasting of storms on the Great Lakes and Atlantic Seaboard. Large shipping losses on the Great Lakes during the 1868 and 1869 seasons, growing acknowledgement that storms generally traveled from the West to the East, a telegraphic network that extended west of the Great Lakes and the Atlantic Seaboard, and an eager Army officer promising military discipline are credited with convincing Congress that a storm-warning system was feasible. The United States Army Signal Service weather organization immediately dwarfed its European counterparts in budget and geographical size and shortly thereafter created storm warnings that on the Great Lakes alone led to savings in shipping losses that exceeded the entire network’s expenses.

Uses of Weather Information

Altering Immediate Behavior

The most obvious use of weather information is to change behavior in response to expected weather outcomes. The motivating force behind establishing weather organizations in England, France, Germany, and the United States was to provide warnings to ships of forthcoming storms, so that the ships might remain in harbor. But it soon became obvious that agricultural and commercial interests would benefit from weather forecasts as well. Farmers could protect fruit sensitive to freezes, and shippers could limit spoilage of produce while en route. Beyond preparation for severe weather, weather forecasts are now created for ever more specialized activities: implementing military operations, scheduling operation of power generation facilities, routing aircraft safely and efficiently, planning professional sports teams’ strategies, estimating demand for commodities sensitive to weather outcomes, planning construction projects, and optimizing the use of irrigation and reservoir systems’ resources.

Applying Climatological Knowledge

Climatological data can be used to match crop varieties, construction practices, and other activities appropriately to different regions. For example, in 1947 the British Government planned to grow groundnuts on 3.2 million acres in East and Central Africa. The groundnut was chosen because it was suited to the average growing conditions of the chosen regions. But due to a lack of understanding of the variance in the amount and timing of rainfall, the project was abandoned after five years and initial capital outlays of 24 million British pounds and annual operating costs of 7 million pounds. The preparation of ocean wind and weather charts in the 1850s by Matthew Fontaine Maury, Superintendent of the U.S. Navy’s Depot of Charts and Instruments, identified better routes for vessels sailing between America and Europe and from the United States East Coast to the United States West Coast. The reduced sailing durations are alleged to have saved millions of dollars annually. Climatological data can also be used in modern environmental forecasts of air quality and how pollution is dispersed in the air. There are even forensic meteorologists who specialize in identifying weather conditions at a given point in time for accident investigations and subsequent litigation. Basic climatological information is also one reason why the United States cinema industry became established in Southern California; it was known that a high percentage of all days were sunny, so that outdoor filming would not be delayed.

Smoothing Consumption of Weather-Sensitive Commodities

An indirect use of weather forecasts and subsequent weather occurrences is their influence on the prices of commodities that are affected by weather outcomes. Knowledge that growing conditions will be poor or have been poor will lead to expectations of a smaller crop harvest. This causes expected prices of the crop to rise, thereby slowing consumption. This is socially efficient, since the present inventory and now smaller future harvest will have to be consumed more slowly over the time period up until the next season’s crop can be planted, cultivated, and harvested. Without an appropriate rise in price after bad weather outcomes, an excessive depletion of the crop’s inventory could result, leading to more variability in the consumption path of the commodity. People generally prefer consuming their income and individual products in relatively smooth streams, rather than in large amounts in some periods and small amounts in other periods. Both improved weather forecasts and United States Department of Agriculture crop forecasts help buyers more effectively consume a given quantity of a crop.

The History of Weather Forecasts in the United States

An important economic history question is whether or not it was necessary for the United States Federal Government to found a weather forecasting organization. There are two challenges in answering that question: establishing that the weather information was socially valuable and determining if private organizations were incapable of providing the appropriate level of services. Restating the latter issue, did weather forecasts and the gathering of climatological information possess enough attributes of a public good such that private organizations would create an insufficiently large amount of socially beneficial information? There are also two parts to this latter public good problem: nonexcludability and nonrivalry. Could private producers of weather information create a system whereby they earned enough money from users of weather information to cover the costs of creating the information? Would such a weather system be of the socially optimal size?

Potential Organizational Sources of Weather Forecasts

There were many organizations during the 1860s that the observer might imagine would benefit from the creation of weather forecasts. After the consolidation of most telegraphic service in the United States into Western Union in 1866, an organization with employees throughout the country existed. The Associated Press had a weather-reporting network, but there is no evidence that it considered supplementing its data with forecasts. One Ebenezer E. Merriam began supplying New York newspapers with predictions in 1856. Many years later, astronomer turned Army Signal Service forecaster Cleveland Abbe concluded that Merriam made his predictions using newspaper weather reports. The Chicago Board of Trade declined an invitation in 1869 to support a weather forecasting service based in Cincinnati. Neither ship-owners nor marine insurers appear to have expressed any interest in creating or buying weather information. Great Lakes marine insurers had already overcome organizational problems by forming the Board of Lake Underwriters in 1855. For example, the group incurred expenses of over $11,000 in 1861 inspecting vessels and providing ratings on behalf of its members in the annual Lake Vessel Register. The Board of Lake Underwriters even had nine inspectors distributed on the Great Lakes to inspect wrecks on behalf of its members. Although there was evidence that storms generally traveled from west to east, none of these groups apparently expected the benefits to itself to exceed the costs of establishing the network necessary to provide useful weather information.

Cleveland Abbe at the Cincinnati Observatory began the most serious attempt to establish a quasi-private meteorological organization in 1868 when he sought financial support from the Associated Press, Western Union, local newspapers, and the Cincinnati Chamber of Commerce. His initial plan included a system of one hundred reporting stations with the Associated Press covering the $100 instrument costs at half of the stations and the dispatch costs. In the following year, he widened his scope to include the Chicago Board of Trade and individual subscribers and proposed a more limited network of between sixteen and twenty-two stations. The Cincinnati Chamber of Commerce, whose president published the Cincinnati Commercial, funded the experiment from September through November of 1869. Abbe likely never had more than ten observers report on any given day and could not maintain more than about thirty local subscribers for his service, which provided at most only occasional forecasts. Abbe continued to receive assistance from Western Union in the collection and telegraphing of observations after the three-month trial, but he fell short in raising funds to allow the expansion of his network to support weather forecasts. His ongoing “Weather Bulletin of the Cincinnati Observatory” was not even published in the Cincinnati Commercial.

Founding of the Army Signal Service Weather Organization

Just as the three-month trial of Abbe’s weather bulletin concluded, Increase A. Lapham, a Milwaukee natural scientist, distributed his second list of Great Lakes shipping losses, entitled “Disaster on the Lakes.” The list included 1,164 vessel casualties, 321 deaths, and $3.1 million in property damage in 1868 and 1,914 vessel casualties, 209 lives lost, and $4.1 million in financial losses in 1869. The number of ships that were totally destroyed was 105 and 126 in each year, respectively. According to a separate account, the storm of November 16-19, 1869 alone destroyed vessels whose value exceeded $420,000. Lapham’s list of losses included a petition to establish a weather forecasting service. In 1850, he had prepared a similar proposal alongside a list of shipping losses, and twice during the 1850s he had tracked barometric lows across Wisconsin to provide evidence that storms could be forecast.

Recipients of Lapham’s petitions included the Wisconsin Academy of Sciences, the Chicago Academy of Sciences, the National Board of Trade meeting in Richmond, a new Chicago monthly business periodical entitled The Bureau, and Congressman Halbert E. Paine of Milwaukee. Paine had studied meteorological theories under Professor Elias Loomis at Western Reserve College and would introduce storm-warning service bills and eventually the final joint resolution in the House that gave the Army Signal Service storm-warning responsibilities. In his book Treatise on Meteorology (1868), Loomis claimed that the approach of storms to New York could be predicted reliably given telegraphic reports from several locations in the Mississippi Valley. From December 1869 through February 1870, Lapham’s efforts received wider attention. The Bureau featured nine pieces on meteorology from December until March, including at least two by Lapham.

Following the Civil War, the future of a signaling organization in the Army was uncertain. Having had budget requests for telegraph and signal equipment for years 1870 and 1871 cut in half to $5000, Colonel Albert J. Myer, Chief Signal Officer, led a small organization seeking a permanent existence. He visited Congressman Paine’s office in December of 1869 with maps showing proposed observation stations throughout the United States. Myer’s eagerness for the weather responsibilities, as well as the discipline of the Army organization and a network of military posts in the West, many linked via telegraph, would appear to have made the Army Signal Service a natural choice. The marginal costs of an Army weather organization using Signal Service personnel included only instruments and commercial telegraphy expenses. On February 4, 1870, Congress approved the Congressional Joint Resolution which “authorizes and requires the Secretary of War to provide for taking of meteorological observations . . . and for giving notice on the northern lakes and on the sea-coast of the approach and force of storms.” Five days later, President Grant signed the bill.

Expansion of the Army Signal Service’s Weather Bureau

Observer-sergeants in the Signal Service recorded their first synchronous observations at 7:35 a.m. Washington time on November 1, 1870, at twenty-four stations. The storm-warning system began formal operation on October 23, 1871, with potential flag displays at eight ports on the Great Lakes and sixteen ports on the Atlantic seaboard. At that time, only fifty general observation stations existed. Already by June 1872, Congress had expanded the Army Signal Service’s explicit forecast responsibilities via an appropriations act to most of the United States “for such stations, reports, and signal as may be found necessary for the benefit of agriculture and commercial interests.” In 1872, the Signal Service also began publication of the Weekly Weather Chronicle during the growing seasons. It disappeared in 1877, reemerging in 1887 as the Weather Crop Bulletin. As the fall of 1872 began, confidence in the utility of weather information was so high that 89 agricultural societies and 38 boards of trade and chambers of commerce had appointed meteorological committees to communicate with the Army Signal Service. In addition to dispensing general weather forecasts for regions of the country three times a day, the Signal Service soon sent special warnings to areas in danger of cold waves and frosts.

The original method of warning ships of dangerous winds was hoisting a single red flag with a black square located in the middle. This was known as a cautionary signal, and Army personnel at Signal Service observation stations or civilians at display stations would raise the flag on a pole “whenever the winds are expected to be as strong as twenty-five miles per hour, and to continue so for several hours, within a radius of one hundred miles from the station.” In the first year of operation ending 1 September 1872, 354 cautionary signals were flown on both the Great Lakes and the Atlantic Seaboard, approximately 70% of which were verified as having met the above definition. Such a measure of accuracy is incomplete, however, as it can always be raised artificially by not forecasting storms under marginal conditions, even though such a strategy might diminish the value of the service.
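The incompleteness of a simple verification percentage can be illustrated with a small computation. The sketch below uses entirely hypothetical warning and storm records (none of these figures come from the Signal Service reports) to show how a forecaster who warns only when a storm is nearly certain can score a higher share of “verified” warnings while missing more storms.

```python
# Illustrative sketch (not from the original study): why "percent of warnings
# verified" is an incomplete accuracy measure. All data below are hypothetical.

def verification_stats(warned, storm_occurred):
    """Compare issued warnings against observed storm outcomes.

    warned, storm_occurred: lists of booleans, one entry per day/port.
    Returns (share of warnings verified, share of storms that were warned).
    """
    hits = sum(w and s for w, s in zip(warned, storm_occurred))
    warnings_issued = sum(warned)
    storms = sum(storm_occurred)
    verified_share = hits / warnings_issued if warnings_issued else 0.0
    storms_warned_share = hits / storms if storms else 0.0
    return verified_share, storms_warned_share

# A "bold" forecaster warns whenever conditions look even marginally stormy...
bold_warned     = [True,  True,  True,  True,  False, True,  False, False]
# ...a "cautious" forecaster warns only when a storm is nearly certain.
cautious_warned = [True,  False, False, True,  False, False, False, False]
storms          = [True,  False, True,  True,  False, True,  False, False]

print(verification_stats(bold_warned, storms))      # (0.8, 1.0)
print(verification_stats(cautious_warned, storms))  # (1.0, 0.5)
# The cautious strategy scores 100% "verified" yet misses half the storms,
# which is exactly the distortion described in the text.
```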

The United States and Canada shared current meteorological information beginning in 1871. By 1880, seventeen Canadian stations reported meteorological data to the United States at least twice daily by telegraph. The number of Army Signal Service stations providing telegraphic reports three times a day stabilized at 138 stations in 1880, dipped to 121 stations in 1883, and grew to approximately 149 stations by 1888. (See Table 1 for a summary of the growth of the Army Signal Service Meteorological Network from 1870 to 1890.) Additional display stations only provided storm warnings at sea and lake ports. River stations monitored water levels in order to forecast floods. Special cotton-region stations, beginning in 1883, comprised a dense network of daily reporters of rainfall and maximum and minimum temperatures. Total Army Signal Service expenditures grew from a $15,000 supplemental appropriation for weather operations in fiscal year 1870 to about one million dollars for all Signal Service costs around 1880 and stabilized at that level. Figure 1 shows the geographical extent of the Army Signal Service telegraphic observation network in 1881.

Figure 1: Army Signal Service Observation Network in 1881
Source: Map between pages 250-51, Annual Report of the Chief Signal Officer, October 1, 1881, Congressional Serial Set Volume 2015. See the detailed map between pages 304-05 for the location of each of the different types of stations listed in Table 1.

Table 1: Growth of the United States Army Signal Service Meteorological Network

Year | Budget (Real 1880 Dollars) | Second-Order Stations | Third-Order Stations | Repair Stations | Display Stations | Special River Stations | Special Cotton-Region Stations
1870 | 32,487 | 25 | | | | |
1871 | 112,456 | 54 | | | | |
1872 | 220,269 | 65 | | | | |
1873 | 549,634 | 80 | 9 | | | |
1874 | 649,431 | 92 | 20 | | | |
1875 | 749,228 | 98 | 20 | | | |
1876 | 849,025 | 106 | 38 | 23 | | |
1877 | 849,025 | 116 | 29 | 10 | 9 | 23 |
1878 | 978,085 | 136 | 36 | 12 | 11 | 23 |
1879 | 1,043,604 | 158 | 30 | 17 | 46 | 30 |
1880 | 1,109,123 | 173 | 39 | 49 | 50 | 29 |
1881 | 1,080,254 | 171 | 47 | 44 | 61 | 29 | 87
1882 | 937,077 | 169 | 45 | 3 | 74 | 30 | 127
1883 | 950,737 | 143 | 42 | 27 | 7 | 30 | 124
1884 | 1,014,898 | 138 | 68 | 7 | 63 | 40 | 138
1885 | 1,085,479 | 152 | 58 | 8 | 64 | 66 | 137
1886 | 1,150,673 | 146 | 33 | 11 | 66 | 69 | 135
1887 | 1,080,291 | 145 | 31 | 13 | 63 | 70 | 133
1888 | 1,063,639 | 149 | 30 | 24 | 68 | 78 | 116
1889 | 1,022,031 | 148 | 32 | 23 | 66 | 72 | 114
1890 | 994,629 | 144 | 34 | 15 | 73 | 72 | 114

Sources: Report of the Chief Signal Officer: 1888, p. 171; 1889, p. 136; 1890, p. 203; and “The Provision and Value of Weather Information Services,” Craft (1995), p. 34.

Notes: The actual total budgets for years 1870 through 1881 are estimated. Stations of the second order recorded meteorological conditions three times per day. Most immediately telegraphed the data. Stations of the third order recorded observations at sunset. Repair stations maintained Army telegraph lines. Display stations displayed storm warnings on the Great Lakes and Atlantic seaboard. Special river stations monitored water levels in order to forecast floods. Special cotton-region stations collected high temperature, low temperature, and precipitation data from a denser network of observation locations.

Early Value of Weather Information

Budget reductions in the Army Signal Service’s weather activities in 1883 led to the reduction of fall storm-warning broadcast locations on the Great Lakes from 80 in 1882 to 43 in 1883. This one-year drop in the availability of storm warnings creates a special opportunity to measure the value of warnings of extremely high winds on the Great Lakes (see Figure 2). Many other factors can be expected to affect the value of shipping losses on the Great Lakes: the level of commerce in a given season, the amount of shipping tonnage available to haul a season’s commerce, the relative composition of the tonnage (steam versus sail), the severity of the weather, and long-term trends in technological change or safety. Using a statistical technique known as multiple regression, in which the effects of these many factors on shipping losses are analyzed concurrently, Craft (1998) argued that each extra storm-warning location on the Great Lakes lowered losses by about one percent. This implies that the storm-warning system reduced losses on the Great Lakes by approximately one million dollars annually in the mid 1870s and between $1 million and $4.5 million per year by the early 1880s.

Source: The data are found in the following: Chicago Daily Inter Ocean (December 5, 1874 p. 2; December 18, 1875; December 27, 1876 p. 6; December 17, 1878; December 29, 1879 p. 6; February 3, 1881 p. 12; December 28, 1883 p. 3; December 5, 1885 p. 4); Marine Record (December 27, 1883 p. 5; December 25, 1884 pp. 4-5; December 24, 1885 pp. 4-5; December 30, 1886 p. 6; December 15, 1887 pp 4-5); Chief Signal Officer, Annual Report of the Chief Signal Officer, 1871- 1890.

Note: Series E 52 of the Historical Statistics of the United States (U.S. Bureau of the Census, 1975) was used to adjust all values to real 1880 dollars.
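As an illustration of the regression approach described above, the following sketch sets up a log-linear specification of the same general form, regressing the logarithm of seasonal shipping losses on the number of storm-warning locations and the control variables listed in the text. The data, variable names, and coefficient values are synthetic placeholders, not Craft’s historical series; the example simply shows how a roughly one percent reduction in losses per warning location would appear as a coefficient of about -0.01.

```python
# Illustrative sketch only: a log-linear regression of the general form used in
# Craft (1998). The variables and data below are synthetic, not the historical
# series; they merely show how "one extra warning location lowers losses by
# about one percent" would surface as a coefficient of roughly -0.01.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 21  # one observation per shipping season, e.g. 1870-1890

df = pd.DataFrame({
    "warning_locations": rng.integers(20, 90, n),   # storm-warning display points
    "log_commerce":      rng.normal(10.0, 0.3, n),  # seasonal volume of commerce
    "log_tonnage":       rng.normal(9.0, 0.2, n),   # shipping capacity available
    "steam_share":       rng.uniform(0.3, 0.7, n),  # steam vs. sail composition
    "storm_days":        rng.integers(10, 40, n),   # crude weather-severity proxy
    "trend":             np.arange(n),              # technology/safety trend
})

# Synthetic outcome built so the true warning-location effect is about -1%.
df["log_losses"] = (2.0 - 0.01 * df["warning_locations"] + 0.8 * df["log_commerce"]
                    + 0.02 * df["storm_days"] - 0.01 * df["trend"]
                    + rng.normal(0, 0.05, n))

X = sm.add_constant(df.drop(columns="log_losses"))
fit = sm.OLS(df["log_losses"], X).fit()
print(fit.params["warning_locations"])  # should be close to -0.01
```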

There are additional indirect methods with which to confirm the preceding estimate of the value of early weather information. If storm warnings actually reduced the risk of damage to cargo and ships due to bad weather, then the cost of shipping cargo would be expected to decline. In particular, such reductions in shipping prices due to savings in losses caused by storms can be differentiated from other types of technological improvements by studying how fall shipping prices changed relative to summer shipping prices. It was during the fall that ships were particularly vulnerable to accidents caused by storms. Changes in shipping prices of grain from Chicago to Buffalo during the summers and falls from the late 1860s to the late 1880s imply that storm warnings were valuable and are consistent with the more direct method of estimating reductions in shipping losses. Although marine insurance premium data for shipments on the Great Lakes are limited and difficult to interpret due to the waning and waxing of the insurance cartel’s cohesion, such data are also supportive of the overall interpretation.

Given Army Signal Service budgets of about one million dollars for providing meteorological services to the entire United States, a reasonable minimum bound for the rate of return to the creation of weather information from 1870 to 1888 is 64 percent. The figure includes no social benefits from any weather information other than Great Lakes storm warnings. This estimate implies that the creation and distribution of storm warnings by the United States Federal Government was a socially beneficial investment.
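One way to see how an aggregate rate-of-return figure of this kind can be constructed is to treat the annual appropriations as costs and the estimated reductions in shipping losses as benefits, and then compute the internal rate of return on the stream of net flows. The sketch below does exactly that with purely hypothetical cost and benefit series (loosely patterned on the magnitudes mentioned in the text); it is not the calculation behind the 64 percent estimate.

```python
# Illustrative sketch only: computing an internal rate of return from annual
# cost and benefit streams. The streams below are hypothetical stand-ins
# (costs near $1 million per year, Great Lakes benefits rising from roughly
# $1 million to a few million), not the series behind the 64 percent figure.

def npv(rate, net_flows):
    """Net present value of a sequence of annual net flows (year 0 first)."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(net_flows))

def irr(net_flows, lo=-0.99, hi=10.0, tol=1e-6):
    """Internal rate of return by bisection (assumes one sign change in flows)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if npv(mid, net_flows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

years = range(1870, 1889)                                    # 1870 through 1888
costs = [0.5 if y <= 1872 else 1.0 for y in years]           # $ millions, hypothetical
benefits = [0.0, 0.0, 0.5] + [1.0] * 6 + [2.5] * 10          # $ millions, hypothetical
net = [b - c for b, c in zip(benefits, costs)]

print(f"Illustrative rate of return: {irr(net):.0%}")
```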

Transfer of Weather Services to the Department of Agriculture

The Allison Commission hearings in 1884 and 1885 sought to determine the appropriate organization of Federal agencies whose activities included scientific research. The Allison Commission’s long report included testimony and discussion relating to the organization of the Army Signal Service, the United States Geological Survey, the Coast and Geodetic Survey, and the Navy Hydrographic Office. Weather forecasting required a reliable network of observers, some of whom were the sole Army personnel at a location. Advantages of a military organizational structure included a greater range of disciplinary responses, including courts-martial for soldiers, for deficient job performance. Problems of the military organization, however, included the limited ability to increase one’s rank while working for the Signal Service and tension between the civilian and military personnel. In 1891, after an unsuccessful Congressional attempt at reform in 1887, the Weather Bureau became a civilian organization when it joined the young Department of Agriculture.

Aviation and World War I

Interest in upper air weather conditions grew rapidly after the turn of the century on account of two related events: the development of aviation and World War I. Safe use of aircraft depended on more precise knowledge of weather conditions (winds, storms, and visibility) between takeoff and landing locations. Not only were military aircraft introduced during World War I, but understanding wind conditions was also crucial to the use of poison gas on the front lines. In the most important change of the Weather Bureau’s organizational direction since the transfer to the Department of Agriculture, Congress passed the Air Commerce Act in 1926, which by 1932 led to 38% of the Weather Bureau’s budget being directed toward aerology research and support.

Transfer of the Weather Bureau to the Department of Commerce

Even though aerological expenditures by the Weather Bureau in support of aviation rivaled funding for general weather services by the late 1930s, the Weather Bureau came under increasing criticism from aviation interests. The Weather Bureau was transferred to the Department of Commerce in 1940, where other support for aviation already originated. This transition mirrored the declining role of agriculture in the United States and the movement toward a more urban economy. Renamed the National Weather Service in 1970, it has remained in the Department of Commerce ever since.

World War II

During World War II, weather forecasts assumed greater importance, as aircraft and rapid troop movements became key parts of military strategy. Accurate long-range artillery use also depended on knowledge of prevailing winds. For an example of the extensive use of weather forecasts and climatological information during wartime, consider Allied plans to strike the German oil refineries at Ploesti, Romania. In the winter of 1943 military weather teams parachuted into the mountains of Yugoslavia to relay weather data. Bombers from North Africa could only reach the refineries in the absence of headwinds in either direction of the sortie. Cloud cover en route was important for protection, clear skies were helpful for identification of targets, and southerly winds permitted the bombers to drop their ordnance on the first pass on the south side of the area’s infrastructure, allowing the winds to assist in spreading the fire. Historical data indicated that only March or August offered possible windows. Though many aircraft were lost, the August 1 raid was considered a success.

Tide, wind, and cloud conditions were also crucial in the planning of the invasion of Normandy (planned for June 5 and postponed until June 6 in 1944). The German High Command had been advised by its chief meteorologist that conditions were not opportune for an Allied invasion on the days following June 4. Dissension among American and British military forecasters nearly delayed the invasion further. Had it been deferred until the next date of favorable tide conditions, the invasion would have taken place during the worst June storm in twenty years in the English Channel.

Forecasting in Europe

A storm on November 14, 1854 destroyed the French warship Henri IV and damaged other British and French vessels on the Black Sea involved in the Crimean War. A report from the state-supported Paris Observatory indicated that barometric readings showed that the storm had passed across Europe in about four days. Urbain Leverrier, director of the Paris Observatory, concluded that had there been a telegraph line between Vienna and the Crimea, the British and French fleets could have received warnings. Although the United States weather network was preceded by storm-warning systems in the Netherlands in 1860, Great Britain in 1861, and France in 1863, the new United States observation network immediately dwarfed the European organizations in both financial resources and geographical magnitude.

Robert FitzRoy, captain of the Beagle during Darwin’s famous voyage, was appointed director of the Meteorological Department established by the British Board of Trade (a government organization) in 1854. The wreck of the well-constructed iron vessel Royal Charter in a storm with much loss of life in October of 1859 provided another opportunity for a meteorological leader to argue that storms could be tracked and forecast. With support from the Prince Consort, FitzRoy and the Meteorological Department were granted approval to establish a storm-warning service. On February 6, 1861 the first warnings were issued. By August 1861 weather forecasts were issued regularly. By 1863, the Meteorological Department had a budget of three thousand English pounds. Criticism arose from different groups. Scientists wished to establish meteorology on a sound theoretical foundation and differentiate it from astrology. At the time, many publishers of weather almanacs subscribed to various theories of the influence of the moon or other celestial bodies on weather. (This is not as outlandish as one might suppose; in 1875, the well-known economist William Stanley Jevons studied connections between sunspot activity, meteorology, and business cycles.) Some members of this second group supported the practice of forecasting but were critical of FitzRoy’s technique, perhaps hoping to become alternative sources of forecasts. Amidst the criticism, FitzRoy committed suicide in 1865. Forecasts and storm warnings were discontinued in 1866; the warnings resumed two years later, but general forecasts were suspended until 1877.

In 1862, Leverrier wrote the French Ministry of Public Education that French naval and commercial interests might be compromised by their dependence on warnings from the British Board of Trade. A storm-warning service in France commenced in July of 1863. Given that storms generally move from west to east, neither France nor Britain had the luxury of tracking storms well before they arrived, as would have been possible with the November 1854 storm in the Crimea and as the Army Signal Service soon would be able to do in America. On account of administrative difficulties that were to hinder effective functioning of the service until 1877, French warnings ceased in October 1865 but resumed in May the next year. The French Central Meteorological Bureau was not founded until 1878, and then with a budget of only $12,000.

After the initiation of storm warning systems that preceded the Army Signal Service weather network, Europe would not achieve meteorological prominence again until the Bergen School of meteorology developed new storm analysis techniques after World War I, which incorporated cold and warm fronts. In the difficult days in Norway during the conclusion of the Great War, meteorological information from the rest of Europe was unavailable. Theoretical physicist turned meteorological researcher Vilhelm Bjerknes appealed to Norway’s national interests in defense, in the development of commercial aviation, and in increased agricultural output to build a dense observation network, whose data helped yield a new paradigm for meteorology.

Conclusion

The first weather forecasts in the United States that were based on a large network of simultaneous observations provided information to society that was much more valuable than the cost of production. There was discussion in the early winter of 1870 between the scientist Increase Lapham and a businessman in Chicago of the feasibility of establishing a private forecasting organization in Wisconsin or Illinois (see Craft 1999). But previous attempts by private organizations in the United States had been unsuccessful in supporting any private weather-forecasting service. In the contemporary United States, the Federal government both collects data and offers forecasts, while private weather organizations provide a variety of customized services.

Weather Forecasting Timeline

1743

Benjamin Franklin, using reports of numerous postmasters, determined the northeastward path of a hurricane from the West Indies.

1772-1777

Thomas Jefferson at Monticello, Virginia and James Madison at Williamsburg, Virginia collect a series of contemporaneous weather observations.

1814

Surgeon General Tilton issues an order directing Army surgeons to keep a diary of the weather in order to ascertain any influences of weather upon disease.

1817

Josiah Meigs, Commissioner of the General Land Office, requests officials at land offices to record meteorological observations.

1846-1848

Matthew F. Maury, Superintendent of the U.S. Naval Observatory, publishes his first charts compiled from ships’ logs showing efficient sailing routes.

1847

Barometer used to issue storm warnings in Barbados.

1848

J. Jones of New York advertises meteorological reports costing between twelve and one-half and twenty-five cents per city per day. There is no evidence the service was ever sold.

1848

Publication in the British Daily News of the first telegraphic daily weather report.

1849

The Smithsonian Institution begins a nearly three decade long project of collecting meteorological data with the goal of understanding storms.

1849

Captain Joseph Brooks, manager of the Portland Steamship Line, receives telegraphic reports three times a day from Albany, New York, and Plattsburg in order to determine if the line’s ships should remain in port in Maine.

1853-1855

Ebenezer E. Merriam of New York, using newspaper telegraphic reports, offers weather forecasts in New York’s newspapers on an apparently irregular basis.

1858

The U.S. Army Engineers begin collecting meteorological observations while surveying the Great Lakes.

1860

Christoph Buys Ballot issues first storm warnings in the Netherlands.

1861

Admiral Robert FitzRoy of the British Meteorological Office begins issuing storm-warnings.

1863

Urbain Leverrier, director of the Paris Observatory, organizes a storm-warning service.

1868

Cleveland Abbe of the Cincinnati Observatory unsuccessfully proposes a weather service of one hundred observation stations to be supported by the Cincinnati Chamber of Commerce, Associated Press, Western Union, and local newspapers.

1869

The Cincinnati Chamber of Commerce funds a three-month trial of the Cincinnati Observatory’s weather bulletin. The Chicago Board of Trade declines to participate.

1869

Increase A. Lapham publishes a list of the shipping losses on the Great Lakes during the 1868 and 1869 seasons.

1870

Congress passes a joint resolution directing the Secretary of War to establish a meteorological network for the creation of storm warnings on the Great Lakes and Atlantic Seaboard. Storm-warnings are offered on November 8. Forecasts begin the following February 19.

1872

Congressional appropriations bill extends Army Signal Service duties to provide forecasts for agricultural and commercial interests.

1880

Frost warnings offered for Louisiana sugar producers.

1881-1884

Army Signal Service expedition to Lady Franklin Bay in support of international polar weather research. Only seven of the twenty-five-member team survive.

1881

Special cotton-region weather reporting network established.

1891

Weather Bureau transferred to the Department of Agriculture.

1902

Daily weather forecasts sent by radio to Cunard Line steamships.

1905

First wireless weather report from a ship at sea.

1918

Norway expands its meteorological network and organization leading to the development of new forecasting theories centered on three-dimensional interaction of cold and warm fronts.

1919

American Meteorological Society founded.

1926

Air Commerce Act gives the Weather Bureau responsibility for providing weather services to aviation.

1934

First private sector meteorologist hired by a utility company.

1940

The Weather Bureau is transferred from the Department of Agriculture to the Department of Commerce.

1946

First private weather forecast companies begin service.

1960

The first meteorological satellite, Tiros I, enters orbit successfully.

1976

The United States launches its first geostationary weather satellites.

References

Abbe, Cleveland, Jr. “A Chronological Outline of the History of Meteorology in the United States.” Monthly Weather Review 37, no. 3-6 (1909): 87-89, 146-49, 178-80, 252-53.

Alter, J. Cecil. “National Weather Service Origins.” Bulletin of the Historical and Philosophical Society of Ohio 7, no. 3 (1949): 139-85.

Anderson, Katharine. “The Weather Prophets: Science and Reputation in Victorian Meteorology.” History of Science 37 (1999): 179-216.

Burton, Jim. “Robert Fitzroy and the Early History of the Meteorological Office.” British Journal for the History of Science 19 (1986): 147-76.

Chief Signal Officer. Report of the Chief Signal Officer. Washington: GPO, 1871-1890.

Craft, Erik. “The Provision and Value of Weather Information Services in the United States during the Founding Period of the Weather Bureau with Special Reference to Transportation on the Great Lakes.” Ph.D. diss., University of Chicago, 1995.

Craft, Erik. “The Value of Weather Information Services for Nineteenth-Century Great Lakes Shipping.” American Economic Review 88, no. 5 (1998): 1059-1076.

Craft, Erik. “Private Weather Organizations and the Founding of the United States Weather Bureau.” Journal of Economic History 59, no. 4 (1999): 1063-1071.

Davis, John L. “Weather Forecasting and the Development of Meteorological Theory at the Paris Observatory.” Annals of Science 41 (1984): 359-82.

Fleming, James Rodger. Meteorology in America, 1800-1870. Baltimore: Johns Hopkins University Press, 1990.

Fleming, James Rodger, and Roy E. Goodman, editors. International Bibliography of Meteorology. Upland, Pennsylvania: Diane Publishing Co., 1994.

Friedman, Robert Marc. Appropriating the Weather: Vilhelm Bjerknes and the Construction of a Modern Meteorology. Ithaca: Cornell University Press, 1989.

Hughes, Patrick. A Century of Weather Service. New York: Gordon and Breach, 1970.

Miller, Eric R. “The Evolution of Meteorological Institutions in the United States.” Monthly Weather Review 59 (1931): 1-6.

Miller, Eric R. “New Light on the Beginnings of the Weather Bureau from the Papers of Increase A. Lapham.” Monthly Weather Review 59 (1931): 65-70.

Sah, Raaj. “Priorities of Developing Countries in Weather and Climate.” World Development 7 no. 3 (1979): 337-47.

Spiegler, David B. “A History of Private Sector Meteorology.” In Historical Essays on Meteorology, 1919-1995, edited by James Rodger Fleming, 417-41. Boston: American Meteorological Society, 1996.

Weber, Gustavus A. The Weather Bureau: Its History, Activities and Organization. New York: D. Appleton and Company, 1922.

Whitnah, Donald R. A History of the United States Weather Bureau. Urbana: University of Illinois Press, 1961.

Citation: Craft, Erik. “Economic History of Weather Forecasting”. EH.Net Encyclopedia, edited by Robert Whaples. October 6, 2001. URL http://eh.net/encyclopedia/an-economic-history-of-weather-forecasting/

Japanese Industrialization and Economic Growth

Carl Mosk, University of Victoria

Japan achieved sustained growth in per capita income between the 1880s and 1970 through industrialization. Moving along an income growth trajectory through expansion of manufacturing is hardly unique. Indeed Western Europe, Canada, Australia and the United States all attained high levels of income per capita by shifting from agrarian-based production to manufacturing and technologically sophisticated service sector activity.

Still, there are four distinctive features of Japan’s development through industrialization that merit discussion:

The proto-industrial base

Japan’s agricultural productivity was high enough to sustain substantial craft (proto-industrial) production in both rural and urban areas of the country prior to industrialization.

Investment-led growth

Domestic investment in industry and infrastructure was the driving force behind growth in Japanese output. Both private and public sectors invested in infrastructure, national and local governments serving as coordinating agents for infrastructure build-up.

  • Investment in manufacturing capacity was largely left to the private sector.
  • Rising domestic savings made increasing capital accumulation possible.
  • Japanese growth was investment-led, not export-led.

Total factor productivity growth — achieving more output per unit of input — was rapid.

On the supply side, total factor productivity growth was extremely important. Scale economies — the reduction in per unit costs due to increased levels of output — contributed to total factor productivity growth. Scale economies existed due to geographic concentration, to growth of the national economy, and to growth in the output of individual companies. In addition, companies moved down the “learning curve,” reducing unit costs as their cumulative output rose and demand for their product soared.

The social capacity for importing and adapting foreign technology improved and this contributed to total factor productivity growth:

  • At the household level, investing in education of children improved social capability.
  • At the firm level, creating internalized labor markets that bound firms to workers and workers to firms, thereby giving workers a strong incentive to flexibly adapt to new technology, improved social capability.
  • At the government level, industrial policy that reduced the cost to private firms of securing foreign technology enhanced social capacity.

Shifting out of low-productivity agriculture into high-productivity manufacturing, mining, and construction contributed to total factor productivity growth.

Dualism

Sharply segmented labor and capital markets emerged in Japan after the 1910s. The capital-intensive sector, enjoying high ratios of capital to labor, paid relatively high wages, while the labor-intensive sector paid relatively low wages.

Dualism contributed to income inequality and therefore to domestic social unrest. After 1945 a series of public policy reforms addressed inequality and erased much of the social bitterness around dualism that ravaged Japan prior to World War II.

The remainder of this article will expand on a number of the themes mentioned above. The appendix reviews quantitative evidence concerning these points. The conclusion of the article lists references that provide a wealth of detailed evidence supporting the points above, which this article can only begin to explore.

The Legacy of Autarky and the Proto-Industrial Economy: Achievements of Tokugawa Japan (1600-1868)

Why Japan?

Given the relatively poor record of countries outside the European cultural area — few achieving the kind of “catch-up” growth Japan managed between 1880 and 1970 — the question naturally arises: why Japan? After all, when the United States forcibly “opened Japan” in the 1850s and Japan was forced to cede extra-territorial rights to a number of Western nations as had China earlier in the 1840s, many Westerners and Japanese alike thought Japan’s prospects seemed dim indeed.

Tokugawa achievements: urbanization, road networks, rice cultivation, craft production

In answering this question, Mosk (2001), Minami (1994) and Ohkawa and Rosovsky (1973) emphasize the achievements of Tokugawa Japan (1600-1868) during a long period of “closed country” autarky between the mid-seventeenth century and the 1850s: a high level of urbanization; well developed road networks; the channeling of river water flow with embankments and the extensive elaboration of irrigation ditches that supported and encouraged the refinement of rice cultivation based upon improving seed varieties, fertilizers and planting methods especially in the Southwest with its relatively long growing season; the development of proto-industrial (craft) production by merchant houses in the major cities like Osaka and Edo (now called Tokyo) and its diffusion to rural areas after 1700; and the promotion of education and population control among both the military elite (the samurai) and the well-to-do peasantry in the eighteenth and early nineteenth centuries.

Tokugawa political economy: daimyo and shogun

These developments were inseparable from the political economy of Japan. The system of confederation government introduced at the end of the fifteenth century placed certain powers in the hands of feudal warlords, daimyo, and certain powers in the hands of the shogun, the most powerful of the warlords. Each daimyo — and the shogun — was assigned a geographic region, a domain, being given taxation authority over the peasants residing in the villages of the domain. Intercourse with foreign powers was monopolized by the shogun, thereby preventing daimyo from cementing alliances with other countries in an effort to overthrow the central government. The samurai military retainers of the daimyo were forced to abandon rice farming and reside in the castle town headquarters of their daimyo overlord. In exchange, samurai received rice stipends from the rice taxes collected from the villages of their domain. By removing samurai from the countryside — by demilitarizing rural areas — conflicts over local water rights were largely made a thing of the past. As a result irrigation ditches were extended throughout the valleys, and riverbanks were shored up with stone embankments, facilitating transport and preventing flooding.

The sustained growth of proto-industrialization in urban Japan, and its widespread diffusion to villages after 1700, was also inseparable from the productivity growth in paddy rice production and the growing of industrial crops like tea, fruit, mulberry (which sustained the raising of silk cocoons), and cotton. Indeed, Smith (1988) has given pride of place to these “domestic sources” of Japan’s future industrial success.

Readiness to emulate the West

As a result of these domestic advances, Japan was well positioned to take up the Western challenge. It harnessed its infrastructure, its high level of literacy, and its proto-industrial distribution networks to the task of emulating Western organizational forms and Western techniques in energy production, first and foremost enlisting inorganic energy sources like coal and the other fossil fuels to generate steam power. Having intensively developed the organic economy depending upon natural energy flows like wind, water and fire, the Japanese were quite prepared to master inorganic production after the Black Ships of the Americans forced Japan to jettison its long-standing autarky.

From Balanced to Dualistic Growth, 1887-1938: Infrastructure and Manufacturing Expand

Fukoku Kyohei

After the Tokugawa government collapsed in 1868, a new Meiji government committed to the twin policies of fukoku kyohei (wealthy country/strong military) took up the challenge of renegotiating its treaties with the Western powers. It created infrastructure that facilitated industrialization. It built a modern navy and army that could keep the Western powers at bay and establish a protective buffer zone in North East Asia that eventually formed the basis for a burgeoning Japanese empire in Asia and the Pacific.

Central government reforms in education, finance and transportation

Jettisoning the confederation style government of the Tokugawa era, the new leaders of the Meiji government fashioned a unitary state with powerful ministries consolidating authority in the capital, Tokyo. The freshly minted Ministry of Education promoted compulsory primary schooling for the masses and elite university education aimed at deepening engineering and scientific knowledge. The Ministry of Finance created the Bank of Japan in 1882, laying the foundations for a private banking system backed up by a lender of last resort. The government began building a steam railroad trunk line girding the four major islands, encouraging private companies to participate in the project. In particular, the national government committed itself to constructing a Tokaido line connecting the Tokyo/Yokohama region to the Osaka/Kobe conurbation along the Pacific coastline of the main island of Honshu, and to creating deepwater harbors at Yokohama and Kobe that could accommodate deep-hulled steamships.

Not surprisingly, the merchants in Osaka, the merchant capital of Tokugawa Japan, already well versed in proto-industrial production, turned to harnessing steam and coal, investing heavily in integrated spinning and weaving steam-driven textile mills during the 1880s.

Diffusion of best-practice agriculture

At the same time, the abolition of the three hundred or so feudal fiefs that were the backbone of confederation-style Tokugawa rule and their consolidation into politically weak prefectures, under a strong national government that virtually monopolized taxation authority, gave a strong push to the diffusion of best-practice agricultural technique. The nationwide diffusion of seed varieties developed in the Southwest fiefs of Tokugawa Japan spearheaded a substantial improvement in agricultural productivity, especially in the Northeast. Simultaneously, agriculture using traditional Japanese technology and manufacturing using imported Western technology both expanded.

Balanced growth

Growth at the close of the nineteenth century was balanced in the sense that traditional and modern technology-using sectors grew at roughly equal rates, and labor — especially young girls recruited out of farm households to labor in the steam-using textile mills — flowed back and forth between rural and urban Japan at wages that were roughly equal in industrial and agricultural pursuits.

Geographic economies of scale in the Tokaido belt

Concentration of industrial production first in Osaka and subsequently throughout the Tokaido belt fostered powerful geographic scale economies (the ability to reduce per unit costs as output levels increase), reducing the costs of securing energy, raw materials and access to global markets for enterprises located in the great harbor metropolises stretching from the massive Osaka/Kobe complex northward to the teeming Tokyo/Yokohama conurbation. Between 1904 and 1911, electrification, mainly due to the proliferation of intercity electrical railroads, created economies of scale in the nascent industrial belt facing outward onto the Pacific. The consolidation of two huge hydroelectric power grids during the 1920s — one servicing Tokyo/Yokohama, the other Osaka and Kobe — further solidified the comparative advantage of the Tokaido industrial belt in factory production. Finally, the widening and paving during the 1920s of roads that could handle buses and trucks was also pioneered by the great metropolises of the Tokaido, which further bolstered their relative advantage in per capita infrastructure.

Organizational economies of scale — zaibatsu

In addition to geographic scale economies, organizational scale economies also became increasingly important in the late nineteenth century. The formation of the zaibatsu (“financial cliques”), which gradually evolved into diversified industrial combines tied together through central holding companies, is a case in point. By the 1910s these had evolved into highly diversified combines, binding together enterprises in banking and insurance, trading companies, mining concerns, textiles, iron and steel plants, and machinery manufactures. By channeling profits from older industries into new lines of activity like electrical machinery manufacturing, the zaibatsu form of organization generated scale economies in finance, trade and manufacturing, drastically reducing information-gathering and transactions costs. By attracting relatively scarce managerial and entrepreneurial talent, the zaibatsu format economized on human resources.

Electrification

The push into electrical machinery production during the 1920s had a revolutionary impact on manufacturing. Effective exploitation of steam power required the use of large central steam engines simultaneously driving a large number of machines — power looms and mules in a spinning/weaving plant, for instance — throughout a factory. Small enterprises did not mechanize in the steam era. But with electrification the “unit drive” system of mechanization spread: each machine could be powered independently of the others. Mechanization spread rapidly to the smallest factory.

Emergence of the dualistic economy

With the drive into heavy industries — chemicals, iron and steel, machinery — the demand for skilled labor that would flexibly respond to rapid changes in technique soared. Large firms in these industries began offering premium wages and guarantees of employment in good times and bad as a way of motivating and holding onto valuable workers. A dualistic economy emerged during the 1910s. Small firms, light industry and agriculture offered relatively low wages. Large enterprises in the heavy industries offered much more favorable remuneration, extending paternalistic benefits like company housing and company welfare programs to their “internal labor markets.” As a result a widening gulf opened up between the great metropolitan centers of the Tokaido and rural Japan. Income per head was far higher in the great industrial centers than in the hinterland.

Clashing urban/rural and landlord/tenant interests

The economic strains of emergent dualism were amplified by the slowing down of technological progress in the agricultural sector, which had exhaustively reaped the benefits of the regional diffusion of best-practice Tokugawa rice cultivation from the Southwest to the Northeast. Landlords — around 45% of the cultivable rice paddy land in Japan was held in some form of tenancy at the beginning of the twentieth century — who had played a crucial role in promoting the diffusion of traditional best-practice techniques now lost interest in rural affairs and turned their attention to industrial activities. Tenants also found their interests disregarded by the national authorities in Tokyo, who were increasingly focused on supplying cheap foodstuffs to the burgeoning industrial belt by promoting agricultural production within the empire that Japan was assembling through military victories. Japan secured Taiwan from China in 1895, and formally brought Korea under its imperial rule in 1910 upon the heels of its successful war against Russia in 1904-05. Tenant unions reacted to this callous disregard of their needs with violence. Landlord/tenant disputes broke out in the early 1920s and continued to plague Japan politically throughout the 1930s, calls for land reform and bureaucratic proposals for reform being rejected by a Diet (Japan’s legislature) politically dominated by landlords.

Japan’s military expansion

Japan’s thrust toward imperial expansion was inflamed by the growing instability of the geopolitical and international trade regime of the later 1920s and early 1930s. The relative decline of the United Kingdom as an economic power doomed a gold standard regime tied to the British pound. The United States was emerging as a potential successor to the United Kingdom as backer of a gold standard regime, but its long history of high tariffs and isolationism deterred it from taking over leadership in promoting global trade openness. Germany and the Soviet Union were increasingly becoming industrial and military giants on the Eurasian land mass, committed to ideologies hostile to the liberal democracy championed by the United Kingdom and the United States. It was against this international backdrop that Japan began aggressively staking out its claim to being the dominant military power in East Asia and the Pacific, thereby bringing it into conflict with the United States and the United Kingdom in the Asian and Pacific theaters after the world slipped into global warfare in 1939.

Reform and Reconstruction in a New International Economic Order, Japan after World War II

Postwar occupation: economic and institutional restructuring

Surrendering to the United States and its allies in 1945, Japan saw its economy and infrastructure revamped under the S.C.A.P. (Supreme Commander for the Allied Powers) Occupation lasting through 1951. As Nakamura (1995) points out, a variety of Occupation-sponsored reforms transformed the institutional environment conditioning economic performance in Japan. The major zaibatsu were liquidated by the Holding Company Liquidation Commission set up under the Occupation (they were revamped as keiretsu corporate groups mainly tied together through cross-shareholding of stock in the aftermath of the Occupation); land reform wiped out landlordism and gave a strong push to agricultural productivity through mechanization of rice cultivation; and collective bargaining, largely illegal under the Peace Preservation Act that was used to suppress union organizing during the interwar period, was given the imprimatur of constitutional legality. Finally, education was opened up, partly through making middle school compulsory, partly through the creation of national universities in each of Japan’s forty-six prefectures.

Improvement in the social capability for economic growth

In short, from a domestic point of view, the social capability for importing and adapting foreign technology was improved with the reforms in education and the fillip to competition given by the dissolution of the zaibatsu. Resolving tension between rural and urban Japan through land reform and the establishment of a rice price support program — which guaranteed farmers incomes comparable to those of blue collar industrial workers — also contributed to the social capacity to absorb foreign technology by suppressing the political divisions between metropolitan and hinterland Japan that plagued the nation during the interwar years.

Japan and the postwar international order

The revamped international economic order contributed to the social capability of importing and adapting foreign technology. The instability of the 1920s and 1930s was replaced with a relatively predictable bipolar world in which the United States and the Soviet Union opposed each other in both geopolitical and ideological arenas. The United States became the architect of a multilateral framework designed to encourage trade through its sponsorship of the United Nations, the World Bank, the International Monetary Fund and the General Agreement on Tariffs and Trade (the predecessor to the World Trade Organization). Under the logic of building military alliances to contain Eurasian Communism, the United States brought Japan under its “nuclear umbrella” with a bilateral security treaty. American companies were encouraged to license technology to Japanese companies in the new international environment. Japan redirected its trade away from the areas that had been incorporated into the Japanese Empire before 1945, and towards the huge and expanding American market.

Miracle Growth: Soaring Domestic Investment and Export Growth, 1953-1970

Its infrastructure revitalized through the Occupation period reforms, its capacity to import and export enhanced by the new international economic order, and its access to American technology bolstered through its security pact with the United States, Japan experienced the dramatic “Miracle Growth” between 1953 and the early 1970s whose sources have been cogently analyzed by Denison and Chung (1976). Especially striking in the Miracle Growth period was the remarkable increase in the rate of domestic fixed capital formation, the rise in the investment proportion being matched by a rising savings rate whose secular increase — especially that of private household savings – has been well documented and analyzed by Horioka (1991). While Japan continued to close the gap in income per capita between itself and the United States after the early 1970s, most scholars believe that large Japanese manufacturing enterprises had by and large become internationally competitive by the early 1970s. In this sense it can be said that Japan had completed its nine decade long convergence to international competitiveness through industrialization by the early 1970s.

MITI

There is little doubt that the social capacity to import and adapt foreign technology was vastly improved in the aftermath of the Pacific War. Creating social consensus through land reform and agricultural subsidies reduced political divisiveness; extending compulsory education and breaking up the zaibatsu also had a positive impact. Fashioning the Ministry of International Trade and Industry (M.I.T.I.), which took responsibility for overseeing industrial policy, is also viewed as enhancing Japan’s social capability. There is no doubt that M.I.T.I. drove down the cost of securing foreign technology. By intervening between Japanese firms and foreign companies, it acted as a single buyer of technology, playing off competing American and European enterprises in order to reduce the royalties Japanese concerns had to pay on technology licenses. By keeping domestic patent periods short, M.I.T.I. encouraged rapid diffusion of technology. And in some cases — the experience of International Business Machines (I.B.M.), enjoying a virtual monopoly in global mainframe computer markets during the 1950s and early 1960s, is a classic case — M.I.T.I. made it a condition of entry into the Japanese market (through the creation of a subsidiary, Japan I.B.M., in the case of I.B.M.) that foreign companies share many of their technological secrets with potential Japanese competitors.

How important industrial policy was for Miracle Growth remains controversial, however. The view of Johnson (1982), who hails industrial policy as a pillar of the Japanese Development State (government promoting economic growth through state policies) has been criticized and revised by subsequent scholars. The book by Uriu (1996) is a case in point.

Internal labor markets, just-in-time inventory and quality control circles

Furthering the internalization of labor markets — the premium wages and long-term employment guarantees largely restricted to white collar workers were extended to blue collar workers with the legalization of unions and collective bargaining after 1945 — also raised the social capability of adapting foreign technology. Internalizing labor created a highly flexible labor force in post-1950 Japan. As a result, Japanese workers embraced many of the key ideas of Just-in-Time inventory control and Quality Control circles in assembly industries, learning how to do rapid machine setups as part and parcel of an effort to produce components “just-in-time” and without defect. Ironically, the concepts of just-in-time and quality control were originally developed in the United States, just-in-time methods being pioneered by supermarkets and quality control by efficiency experts like W. Edwards Deming. Yet it was in Japan that these concepts were relentlessly pursued to revolutionize assembly line industries during the 1950s and 1960s.

Ultimate causes of the Japanese economic “miracle”

Miracle Growth was the completion of a protracted historical process involving enhancing human capital, massive accumulation of physical capital including infrastructure and private manufacturing capacity, the importation and adaptation of foreign technology, and the creation of scale economies, which took decades and decades to realize. Dubbed a miracle, it is best seen as the reaping of a bountiful harvest whose seeds were painstakingly planted in the six decades between 1880 and 1938. In the course of the nine decades between the 1880s and 1970, Japan amassed and lost a sprawling empire, reorienting its trade and geopolitical stance through the twists and turns of history. While the ultimate sources of growth can be ferreted out through some form of statistical accounting, the specific way these sources were marshaled in practice is inseparable from the history of Japan itself and of the global environment within which it has realized its industrial destiny.

Appendix: Sources of Growth Accounting and Quantitative Aspects of Japan’s Modern Economic Development

One of the attractions of studying Japan’s post-1880 economic development is the abundance of quantitative data documenting Japan’s growth. Estimates of Japanese income and output by sector, capital stock and labor force extend back to the 1880s, a period when Japanese income per capita was low. Consequently statistical probing of Japan’s long-run growth from relative poverty to abundance is possible.

The remainder of this appendix is devoted to introducing the reader to the vast literature on quantitative analysis of Japan’s economic development from the 1880s until 1970, a nine decade period during which Japanese income per capita converged towards income per capita levels in Western Europe. As the reader will see, this discussion confirms the importance of factors discussed at the outset of this article.

Our initial touchstone is the excellent “sources of growth” accounting analysis carried out by Denison and Chung (1976) on Japan’s growth between 1953 and 1971. The standard approach attributes growth in national income to growth in inputs, the factors of production (capital and labor), and to growth in output per unit of the two inputs combined (total factor productivity), along the following lines:

G(Y) = { a G(K) + [1-a] G(L) } + G(A)

where G(Y) is the (annual) growth rate of national output, G(K) is the growth rate of capital services, G(L) is the growth rate of labor services, a is capital’s share in national income (the share of income accruing to owners of capital), and G(A) is the growth rate of total factor productivity.

Using a variant of this type of decomposition that takes into account improvements in the quality of capital and labor, estimates of scale economies and adjustments for structural change (shifting labor out of agriculture helps explain why total factor productivity grows), Denison and Chung (1976) generate a useful set of estimates for Japan’s Miracle Growth era.

Operating with this “sources of growth” approach and proceeding under a variety of plausible assumptions, Denison and Chung (1976) estimate that of Japan’s average annual real national income growth of 8.77% over 1953-71, input growth accounted for 3.95% (45% of total growth) and growth in output per unit of input contributed 4.82% (55% of total growth). To be sure, the precise assumptions and techniques they use can be criticized, and the numerical results they arrive at can be argued over. Still, their general point is defensible: Japan’s growth was the result of improvements in the quality of factor inputs (health and education for workers, for instance) and of improvements in the way these inputs are utilized in production, due to technological and organizational change, reallocation of resources from agriculture to non-agriculture, and scale economies.
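To make the arithmetic of the decomposition concrete, the following minimal Python sketch recovers the total factor productivity residual from the headline figures just quoted. It simply restates the identity above using the numbers reported in this article; it is not Denison and Chung's actual estimation procedure, which adjusts for input quality, scale economies and structural change.

# Sketch of the growth accounting identity G(Y) = a*G(K) + (1-a)*G(L) + G(A),
# applied to the headline figures for Japan, 1953-71, quoted above.
# G(A) is obtained as a residual; no capital share is needed once the combined
# input contribution is known.

G_Y = 8.77                        # average annual growth of real national income (%)
input_contribution = 3.95         # contribution of combined factor inputs (%)
G_A = G_Y - input_contribution    # residual: total factor productivity growth (%)

print(f"TFP growth (residual): {G_A:.2f}% per year")
print(f"Share of growth from inputs: {input_contribution / G_Y:.0%}")
print(f"Share of growth from TFP:    {G_A / G_Y:.0%}")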

With this in mind consider Table 1.

Table 1: Industrialization and Economic Growth in Japan, 1880-1970:
Selected Quantitative Characteristics

Panel A: Income and Structure of National Output

Real Income per Capita [a]; Share of National Output (of Net Domestic Product) and Relative Labor Productivity (Ratio of Output per Worker in Agriculture to Output per Worker in the N Sector) [b]

Years  Absolute  Relative to U.S. level  Year  Agriculture  Manufacturing & Mining (Ma)  Manufacturing, Construction & Facilitating Sectors [b]  Relative Labor Productivity (A/N)

1881-90 893 26.7% 1887 42.5% 13.6% 20.0% 68.3
1891-1900 1,049 28.5 1904 37.8 17.4 25.8 44.3
1900-10 1,195 25.3 1911 35.5 20.3 31.1 37.6
1911-20 1,479 27.9 1919 29.9 26.2 38.3 32.5
1921-30 1,812 29.1 1930 20.0 25.8 43.3 27.4
1930-38 2,197 37.7 1938 18.5 35.3 51.7 20.8
1951-60 2,842 26.2 1953 22.0 26.3 39.7 22.6
1961-70 6,434 47.3 1969 8.7 30.5 45.9 19.1

Panel B: Domestic and External Sources of Aggregate Supply and Demand Growth: Manufacturing and Mining (Ma), Gross Domestic Fixed Capital Formation (GDFCF), and Trade (TR)

Percentage Contribution to Growth due to Ma and GDFCF; Trade Openness and Trade Growth [c]

Years  Ma to Output Growth  GDFCF to Effective Demand Growth  Years  Openness  Growth in Trade
1888-1900 19.3% 17.9% 1885-89 6.9% 11.4%
1900-10 29.2 30.5 1890-1913 16.4 8.0
1910-20 26.5 27.9 1919-29 32.4 4.6
1920-30 42.4 7.5 1930-38 43.3 8.1
1930-38 50.5 45.3 1954-59 19.3 12.0
1955-60 28.1 35.0 1960-69 18.5 10.3
1960-70 33.5 38.5

Panel C: Infrastructure and Human Development

Human Development Index (HDI) [d]; Electricity Generation and National Broadcasting (NHK) per 100 Persons [e]

Year  Educational Attainment Index  Infant Mortality Rate (IMR)  Overall HDI  Year  Electricity  NHK Radio Subscribers
1900 0.57 155 0.57 1914 0.28 n.a.
1910 0.69 161 0.61 1920 0.68 n.a.
1920 0.71 166 0.64 1930 2.46 1.2
1930 0.73 124 0.65 1938 4.51 7.8
1950 0.81 63 0.69 1950 5.54 11.0
1960 0.87 34 0.75 1960 12.28 12.6
1970 0.95 14 0.83 1970 34.46 21.9

Notes: [a] Maddison (2000) provides estimates of real income that take into account the purchasing power of national currencies.

[b] Ohkawa (1979) gives estimates for the “N” sector that is defined as manufacturing and mining (Ma) plus construction plus facilitating industry (transport, communications and utilities). It should be noted that the concept of an “N” sector is not standard in the field of economics.

[c] The estimates of trade are obtained by adding merchandise imports to merchandise exports. Trade openness is estimated by taking the ratio of total (merchandise) trade to national output, the latter defined as Gross Domestic Product (G.D.P.). The trade figures include trade with Japan’s empire (Korea, Taiwan, Manchuria, etc.); the income figures for Japan exclude income generated in the empire.

[d] The Human Development Index is a composite variable formed by adding together indices for educational attainment, for health (using life expectancy that is inversely related to the level of the infant mortality rate, the IMR), and for real per capita income. For a detailed discussion of this index see United Nations Development Programme (2000).

[e] Electrical generation is measured in million kilowatts generated and supplied. For 1970, the figures on NHK subscribers are for television subscribers. The symbol n.a. = not available.

Sources: The figures in this table are taken from various pages and tables in Japan Statistical Association (1987), Maddison (2000), Minami (1994), and Ohkawa (1979).

Flowing from this table are a number of points that reinforce the lessons of the Denison and Chung (1976) decomposition. One cluster of points bears upon the timing of Japan’s income per capita growth and the relationship of manufacturing expansion to income growth. Another highlights improvements in the quality of the labor input. Yet another points to the overriding importance of domestic investment in manufacturing and the lesser significance of trade demand. A fourth group suggests that infrastructure has been important to economic growth and industrial expansion in Japan, as exemplified by the figures on electricity generating capacity and the mass diffusion of communications in the form of radio and television broadcasting.

Several parts of Table 1 point to industrialization, defined as an increase in the proportion of output (and labor force) attributable to manufacturing and mining, as the driving force in explaining Japan’s income per capita growth. Notable in Panels A and B of the table is that the gap between Japanese and American income per capita closed most decisively during the 1910s, the 1930s, and the 1960s, precisely the periods when manufacturing expansion was the most vigorous.

Equally noteworthy about the spurts of the 1910s, 1930s and the 1960s is the overriding importance of gross domestic fixed capital formation, that is investment, for growth in demand. By contrast, trade seems much less important to growth in demand during these critical decades, a point emphasized by both Minami (1994) and Ohkawa and Rosovsky (1973). The notion that Japanese growth was “export led” during the nine decades between 1880 and 1970 when Japan caught up technologically with the leading Western nations is not defensible. Rather, domestic capital investment seems to be the driving force behind aggregate demand expansion. The periods of especially intense capital formation were also the periods when manufacturing production soared. Capital formation in manufacturing, or in infrastructure supporting manufacturing expansion, was the main agent pushing long-run income per capita growth.

Why? As Ohkawa and Rosovsky (1973) argue, spurts in manufacturing capital formation were associated with the import and adaptation of foreign technology, especially from the United States. These investment spurts were also associated with shifts of the labor force out of agriculture and into manufacturing, construction and facilitating sectors, where labor productivity was far higher than in farming centered around labor-intensive rice cultivation. The logic of productivity gain due to more efficient allocation of labor resources is apparent from the right hand column of Panel A in Table 1.

Finally, Panel C of Table 1 suggests that infrastructure investment that facilitated health and educational attainment (combined public and private expenditure on sanitation, schools and research laboratories), and public/private investment in physical infrastructure including dams and hydroelectric power grids, helped fuel the expansion of manufacturing by improving human capital and by reducing the costs of transportation, communications and energy supply faced by private factories. Mosk (2001) argues that investments in human-capital-enhancing infrastructure (medicine, public health and education), financial infrastructure (banking) and physical infrastructure (harbors, roads, power grids, railroads and communications) laid the groundwork for industrial expansion. Indeed, the “social capability for importing and adapting foreign technology” emphasized by Ohkawa and Rosovsky (1973) can be largely explained by an infrastructure-driven growth hypothesis like that given by Mosk (2001).

In sum, Denison and Chung (1976) argue that a combination of input factor improvement and growth in output per combined factor inputs account for Japan’s most rapid spurt of economic growth. Table 1 suggests that labor quality improved because health was enhanced and educational attainment increased; that investment in manufacturing was important not only because it increased capital stock itself but also because it reduced dependence on agriculture and went hand in glove with improvements in knowledge; and that the social capacity to absorb and adapt Western technology that fueled improvements in knowledge was associated with infrastructure investment.

References

Denison, Edward and William Chung. “Economic Growth and Its Sources.” In Asia’s New Giant: How the Japanese Economy Works, edited by Hugh Patrick and Henry Rosovsky, 63-151. Washington, DC: Brookings Institution, 1976.

Horioka, Charles Y. “Future Trends in Japan’s Savings Rate and the Implications Thereof for Japan’s External Imbalance.” Japan and the World Economy 3 (1991): 307-330.

Japan Statistical Association. Historical Statistics of Japan [Five Volumes]. Tokyo: Japan Statistical Association, 1987.

Johnson, Chalmers. MITI and the Japanese Miracle: The Growth of Industrial Policy, 1925-1975. Stanford: Stanford University Press, 1982.

Maddison, Angus. Monitoring the World Economy, 1820-1992. Paris: Organization for Economic Co-operation and Development, 2000.

Minami, Ryoshin. Economic Development of Japan: A Quantitative Study. [Second edition]. Houndmills, Basingstoke, Hampshire: Macmillan Press, 1994.

Mitchell, Brian. International Historical Statistics: Africa and Asia. New York: New York University Press, 1982.

Mosk, Carl. Japanese Industrial History: Technology, Urbanization, and Economic Growth. Armonk, New York: M.E. Sharpe, 2001.

Nakamura, Takafusa. The Postwar Japanese Economy: Its Development and Structure, 1937-1994. Tokyo: University of Tokyo Press, 1995.

Ohkawa, Kazushi. “Production Structure.” In Patterns of Japanese Economic Development: A Quantitative Appraisal, edited by Kazushi Ohkawa and Miyohei Shinohara with Larry Meissner, 34-58. New Haven: Yale University Press, 1979.

Ohkawa, Kazushi and Henry Rosovsky. Japanese Economic Growth: Trend Acceleration in the Twentieth Century. Stanford, CA: Stanford University Press, 1973.

Smith, Thomas. Native Sources of Japanese Industrialization, 1750-1920. Berkeley: University of California Press, 1988.

Uriu, Robert. Troubled Industries: Confronting Economic Challenge in Japan. Ithaca: Cornell University Press, 1996.

United Nations Development Programme. Human Development Report, 2000. New York: Oxford University Press, 2000.

Citation: Mosk, Carl. “Japan, Industrialization and Economic Growth”. EH.Net Encyclopedia, edited by Robert Whaples. January 18, 2004. URL http://eh.net/encyclopedia/japanese-industrialization-and-economic-growth/

Industrial Sickness Funds

John E. Murray, University of Toledo

Overview and Definition

Industrial sickness funds provided an early form of health insurance. They were financial institutions that extended cash payments and in some cases medical benefits to members who became unable to work due to sickness or injury. The term industrial sickness funds is a later construct describing funds organized by companies (also known as establishment funds) and by labor unions. These funds were widespread geographically in the United States; the 1890 Census of Insurance found 1,259 nationwide, with concentrations in the Northeast, Midwest, California, Texas, and Louisiana (U.S. Department of the Interior, 1895). By the turn of the twentieth century, some industrial sickness funds had accumulated considerable experience at managing sickness benefits. A few predated the Civil War. When the U.S. Commissioner of Labor surveyed a sample of sickness funds in 1908, the survey found 867 non-fraternal funds nationwide that provided temporary disability benefits (U.S. Commissioner of Labor, 1909). By the time of World War I, these funds, together with similar funds sponsored by fraternal societies, covered 30 to 40 percent of non-agricultural wage workers in the more industrialized states, or by extension, eight to nine million workers nationwide (Murray 2007a). Sickness funds were numerous, widespread, and in general carefully operated.

Industrial sickness funds were among the earliest providers of any type of health or medical benefits in the United States. In fact, their earliest product was called “workingman’s insurance” or “sickness insurance,” terms that described their clientele and purpose accurately. In the late Progressive Era, reformers promoted government insurance programs that would supplant the sickness funds. To sound more British, they used the term “health insurance,” and that is the phrase we still use for this kind of insurance contract (Numbers 1978). In the history of health insurance, the funds were contemporary with benefit operations of fraternal societies (see fraternal sickness insurance) and led into the period of group health insurance (see health insurance, U. S.). They should be distinguished from the sickness benefits provided by some industrial insurance policies, which required weekly premium payments and paid a cash benefit upon death, which was intended to cover burial expenses.

Many written histories of health insurance have missed the important role industrial sickness funds played in both relief of worker suffering and in the political process. Recent historians have tended to criticize, patronize, or ignore sickness funds. Lubove (1986) complained that they stood in the way of government insurance for all workers. Klein (2003) claimed that they were inefficient, without making explicit her standard for that judgment. Quadagno (2005) simply asserted that no one had thought of health insurance before the 1920s. Contemporary commentators such as I. M. Rubinow and Irving Fisher criticized workers who preferred “hopelessly inadequate” sickness fund insurance over government insurance as “infantile” (Derickson 2005). But these criticisms stemmed more from their authors’ ideological preconceptions than from close study of these institutions.

Rise and Operations of Industrial Sickness Funds

The period of their greatest extent and importance was from the 1880s to around 1940. The many state labor bureau surveys of individual workers, since digitized by the University of California’s Historical Labor Statistics Project and available for download at EH.net, often asked questions such as “do you belong to a benefit society,” meaning a fraternal sickness benefit fund or an industrial sickness fund. Of the surveys from the early 1890s that included this question, around a quarter of respondents indicated that they belonged to such societies. Later, closer to 1920, several states examined the extent of sickness insurance coverage in response to movements to create governmental health insurance for workers (Table 1). These later studies indicated that in the Northeast, Midwest, and California, between thirty and forty percent of non-agricultural workers were covered. Thus, remarkably, these societies had actually increased their market share over a three-decade period in which the labor force itself grew from 13 to 30 million workers (Murray 2007a). Industrial sickness funds were dynamic institutions, capable of dealing with an ever-expanding labor market.

Table 1:
Sources of Insurance in Three States (thousands of workers)

Source/state Illinois Ohio California
Fraternal society 250 200 291
Establishment fund 116 130 50
Union fund 140 85 38
Other sick fund 12 N/a 35
Commercial insurance 140 85 2 (?)
Total 660 500 416
Eligible labor force 1,850 1,500 995
Share insured 36% 33% 42%
Sources: Illinois (1919), Ohio, (1919), California (1917), Lee et al. (1957).

Industrial sickness funds operated in a relatively simple fashion, but one that enabled them to mitigate the usual information problems that emerge in insurance markets. The process of joining a fund and making a claim typically worked as follows. A newly hired worker in a plant with such a fund explicitly applied to join, often after a probationary period during which fund managers could observe his baseline health and work habits. After admission to the fund, he paid an entrance fee followed by weekly dues. Since the average industrial worker in the 1910s earned about ten dollars a week, the entrance fee of one dollar was a half-day’s pay and the dues of ten cents made the cost to the worker around one percent of his pay packet.

A member who was unable to work contacted his fund, which then sent either a committee of fellow fund members, a physician, or both to check on the member-now-claimant. If they found him as sick as he had said he was, and in their judgment he was unable to work, after a one week waiting period he received around half his weekly pay. The waiting period was intended to let transient, less serious illnesses resolve so that the fund could support members with longer-term medical problems. To continue receiving the sick pay the claimant needed to allow periodic examinations by a physician or visiting committee. In rough terms, the average worker missed two percent of a work year, or about a week every year, a rate that varied by age and industry. The quarter of all workers who missed any work lost on average one month’s pay; thus a typical incapacitated worker received three and a half weeks of benefit per year. Comparing the cost of dues and expected value of benefits shows that the sickness funds were close to an actuarially fair bet: $5.00 in annual dues compared to (0.25 chance of falling ill) x (3.5 weeks of benefits) x ($5.00 weekly benefit), or about four and a half dollars in expected benefits. Thus, sickness funds appear to have been a reasonably fair deal for workers.
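As a quick check on this arithmetic, the minimal Python sketch below restates the expected-value comparison using the rough figures quoted in this section; the inputs are the article's stylized numbers for a 1910s industrial worker, not data drawn from any particular fund.

# Back-of-the-envelope check of the actuarial-fairness comparison above.
# All inputs are the article's stylized figures, not records of any actual fund.

weekly_wage = 10.00                  # average industrial wage, dollars per week
weekly_dues = 0.10                   # typical sickness fund dues per week
weeks_per_year = 52

annual_dues = weekly_dues * weeks_per_year            # roughly $5 per year

prob_claim = 0.25                    # share of members missing any work in a year
benefit_weeks = 3.5                  # average weeks of paid benefit for a claimant
weekly_benefit = 0.5 * weekly_wage   # benefit of about half pay

expected_benefit = prob_claim * benefit_weeks * weekly_benefit

print(f"Annual dues:              ${annual_dues:.2f}")
print(f"Expected annual benefits: ${expected_benefit:.2f}")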

Establishment funds did not invent sickness benefits by any means. Rather, they systematized previous arrangements for supporting sick workers or the survivors of deceased workers. The old way was to pass the hat, which was characterized by random assessments and arbitrary financial awards. Workers and employers both observed that contributors and beneficiaries alike detested passing the hat. Fellow workers complained about the surprise nature of the hat’s appearance, and beneficiaries faced humiliation on top of grief when the hat contained less money than had been collected for a more popular co-worker. Eventually rules replaced discretion, and benefits were paid according to a published schedule, either as a flat rate per diem or as a percentage of wages. The 1890 Census of Insurance reported that only a few funds extended benefits “at the discretion of the society,” and by the time of the 1908 Commissioner of Labor survey the practice had disappeared (Murray 2007a).

Labor union funds began in the early nineteenth century. In the earliest union funds, members of craft unions pledged to complete jobs that ill brothers had contracted to perform but could not finish due to illness. Eventually cash benefit payments replaced the in-kind promises of labor, accompanied by cash premium payments into the union’s kitty. While criticized by many observers as unstable, labor union funds actually operated in transparent fashion. Even funds that offered unemployment benefits survived the depression of the mid-1890s by reducing benefit payments and enacting other conservative measures. Another criticism was that their benefits were too small in amount and too brief in duration, but according to the 1908 Commissioner of Labor survey, labor union funds and establishment funds offered similar levels of benefits. The cost-benefit ratio did favor establishment funds, but establishment fund membership ended with employment at a particular company, while union funds offered the substantial attraction of benefits that were portable from job to job.

The cash payment to sick workers created an incentive to take sick leave that workers without sickness insurance did not face; this is the moral hazard of sick pay. Further, workers who believed that they were more likely to make a sick claim would have a stronger incentive to join a sickness fund than a worker in relatively good health; this is called adverse selection. Early twentieth century commentators on government sickness insurance disagreed on the extent and even the existence of moral hazard and adverse selection in sickness insurance. Later statistical studies found evidence for both in establishment funds. However, the funds themselves had understood the potential financial damage each could wreak and strategized to mitigate such losses. The magnitude of the sick pay moral hazard was small, and affected primarily the tendency of the worker to make a claim in the first place. Many sickness funds limited their liability here by paying for the physician who examined the claimant and thus was responsible for approving extended sickness payments. Physicians appear to have paid attention to the wishes of those who paid them. In funds that paid the examining physician directly, claimants’ illness spells ended significantly earlier on average. By the same token, physicians who were paid by the worker tended to approve longer absences for that worker—a sign that physicians too responded to incentives.

Testing for adverse selection depends on whether membership in a company’s fund was the worker’s choice (that is, it was voluntary) or the company’s choice (that is, it was compulsory). In fact among establishment funds in which membership was voluntary, claim rates per member were significantly higher than in mandatory membership funds. This indicates that voluntary funds were especially attractive to sicker workers, which is the essence of adverse selection. To reduce the risks of adverse selection, funds imposed age limits to keep out older applicants, physical examinations to discourage the obviously ill, probationary periods to reveal chronic illness, and pre-existing condition clauses to avoid paying for such conditions (Murray 2007a). Sickness funds thus cleverly managed information problems typical of insurance markets.

Industrial Sickness Funds and Progressive Era Politics

Industrial sickness funds were the linchpin of efforts to promote and to oppose the Progressive campaign for state-level mandatory government sickness insurance. One consistent claim made by government insurance supporters was that workers could neither afford to pay for sickness insurance nor to save in advance of financially damaging health problems. The leading advocacy organization, the American Association for Labor Legislation (AALL), reported in its magazine that “Savings of Wage-Earners Are Insufficient to Meet this Loss,” meaning lost income during sickness (American Association for Labor Legislation 1916a). However, worker surveys of savings, income, and insurance holdings revealed that workers rationally strategized according to their varying needs and abilities across the life-cycle. Young workers saved little and were less likely to belong to industrial sickness funds—but were less likely to miss work due to illness as well. Middle aged workers, married with families to support, were relatively more likely to belong to a sickness fund. Older workers pursued a different strategy, saving more and relying on sickness funds less; among other factors, they wanted greater liquidity in their financial assets (Murray 2007a). Worker strategies reflected varying needs at varying stages of life, some (but not all) of which could be adequately addressed by membership in sickness funds.

Despite claims to the contrary by some historians, there was little popular support for government sickness insurance in early twentieth century America. Lobbying by the AALL led twelve states to charge investigatory commissions with determining the need for and feasibility of government sickness insurance (Moss 1996). The AALL offered a basic bill that could be adjusted to meet a state’s particular needs (American Association for Labor Legislation 1916b). Typically the Association prodded states to adopt a version of German insurance, which would keep the many small industrial sickness funds while forcing new members into some and creating new funds for other workers. However, these bills met consistent defeat in statehouses, earning only a fleeting victory in the New York Senate in 1919, which was followed by the bill’s death in an Assembly committee (Hoffman 2001). In the previous year a California referendum on a constitutional amendment that would allow the government to provide sickness insurance lost by nearly three to one (Costa 1996).

After the Progressive campaign exhausted itself, industrial sickness funds continued to grow through the 1920s, but the Great Depression exposed deep flaws in their structure. Many labor union funds, without a sponsoring firm to act as lender of last resort, dissolved. Establishment funds failed at a surprisingly low rate, but their survival was made possible by the tendency of firms to fire less healthy workers. Federal surveys in Minnesota found that ill health led to earlier job loss in the Depression, and comparisons of self-reported health in later surveys indicated that the unemployed were in fact in poorer health than the employed, and that the disparity grew as the Depression deepened. Thus, industrial sickness funds paradoxically enjoyed falling claim rates (and thus reduced expenses) as the economy deteriorated (Murray 2007a).

Decline and Rebirth of Sickness Funds

At the same time, commercial insurers had been engaging in ever more productive research into the actuarial science of group health insurance. Eventually the insurers cut premium rates while offering benefits comparable to those available through sickness funds. As a result, the commercial insurers and Blue Cross/Blue Shield came to dominate the market for health benefits. A federal survey that covered the early 1930s found more firms with group health plans than with mutual benefit societies, but the benefit societies still insured more than twice as many workers (Sayers et al. 1937). By the later 1930s that gap in the number of firms had widened in favor of group health (Figure 1), and the number of workers insured was about equal. After the mid-1940s, industrial sickness funds were no longer a significant player in markets for health insurance (Murray 2007a).

Figure 1: Health Benefit Provision and Source
Source: Dobbin (1992) citing National Industrial Conference Board surveys.

More recently, a type of industrial sickness fund has begun to stage a comeback. Voluntary employee beneficiary associations (VEBAs) fall under a 1928 federal law that was created to govern industrial sickness funds. VEBAs are trusts set up to pay employee benefits without earning profits for the company. In late 2007, the Big Three automakers each contracted with the United Auto Workers (UAW) to operate a VEBA that would provide health insurance for UAW members. If the automakers and their workers succeed in establishing VEBAs that stand the test of time, they will have resurrected a once-successful financial institution previously thought relegated to the pre-World War II economy (Murray 2007b).

References

American Association for Labor Legislation. “Brief for Health Insurance.” American Labor Legislation Review 6 (1916a): 155–236.

American Association for Labor Legislation. “Tentative Draft of an Act.” American Labor Legislation Review 6 (1916b): 239–68.

California Social Insurance Commission. Report of the Social Insurance Commission of the State of California, January 25, 1917. Sacramento: California State Printing Office, 1917.

Costa, Dora L. “Demand for Private and State Provided Health Insurance in the 1910s: Evidence from California.” Photocopy, MIT, 1996.

Derickson, Alan. Health Security for All: Dreams of Universal Health Care in America. Baltimore: Johns Hopkins University Press, 2005.

Dobbin, Frank. “The Origins of Private Social Insurance: Public Policy and Fringe Benefits in America, 1920-1950,” American Journal of Sociology 97 (1992): 1416-50.

Hoffman, Beatrix. The Wages of Sickness: The Politics of Health Insurance in Progressive America. Chapel Hill: University of North Carolina Press, 2001.

Klein, Jennifer. For All These Rights: Business, Labor, and the Shaping of America’s Public-Private Welfare State. Princeton: Princeton University Press, 2003.

Lee, Everett S., Ann Ratner Miller, Carol P. Brainerd, and Richard A. Easterlin, under the direction of Simon Kuznets and Dorothy Swaine Thomas. Population Redistribution and Economic Growth, 1870-1950: Volume I, Methodological Considerations and Reference Tables. Philadelphia: Memoirs of the American Philosophical Society 45, 1957.

Lubove, Roy. The Struggle for Social Security, 1900-1930. Second edition. Pittsburgh: University of Pittsburgh Press, 1986.

Moss, David. Socializing Security: Progressive-Era Economists and the Origins of American Social Policy. Cambridge: Harvard University Press, 1996.

Murray, John E. Origins of American Health Insurance: A History of Industrial Sickness Funds. New Haven: Yale University Press, 2007a.

Murray, John E. “UAW Members Must Treat Health Care Money as Their Own,” Detroit Free Press, 21 November 2007b.

Ohio Health and Old Age Insurance Commission. Health, Health Insurance, Old Age Pensions: Report, Recommendations, Dissenting Opinions. Columbus: Heer, 1919.

Quadagno, Jill. One Nation, Uninsured: Why the U. S. Has No National Health Insurance. New York: Oxford University Press, 2005.

Sayers, R. R., Gertrud Kroeger, and W. M. Gafafer. “General Aspects and Functions of the Sick Benefit Organization.” Public Health Reports 52 (November 5, 1937): 1563–80.

State of Illinois. Report of the Health Insurance Commission of the State of Illinois, May 1, 1919. Springfield: State of Illinois, 1919.

U.S. Department of the Interior. Report on Insurance Business in the United States at the Eleventh Census: 1890; pt. 2, “Life Insurance.” Washington, DC: GPO, 1895.

U.S. Commissioner of Labor. Twenty-third Annual Report of the Commissioner of Labor, 1908: Workmen’s Insurance and Benefit Funds in the United States. Washington, DC: GPO, 1909.

Citation: Murray, John. “Industrial Sickness Funds, US”. EH.Net Encyclopedia, edited by Robert Whaples. June 5, 2008. URL http://eh.net/encyclopedia/industrial-sickness-funds/

Hours of Work in U.S. History

Robert Whaples, Wake Forest University

In the 1800s, many Americans worked seventy hours or more per week and the length of the workweek became an important political issue. Since then the workweek’s length has decreased considerably. This article presents estimates of the length of the historical workweek in the U.S., describes the history of the shorter-hours “movement,” and examines the forces that drove the workweek’s decline over time.

Estimates of the Length of the Workweek

Measuring the length of the workweek (or workday or workyear) is a difficult task, full of ambiguities concerning what constitutes work and who is to be considered a worker. Estimating the length of the historical workweek is even more troublesome. Before the Civil War most Americans were employed in agriculture and most of these were self-employed. Like self-employed workers in other fields, they saw no reason to record the amount of time they spent working. Often the distinction between work time and leisure time was blurry. Therefore, estimates of the length of the typical workweek before the mid-1800s are very imprecise.

The Colonial Period

Based on the amount of work performed — for example, crops raised per worker — Carr (1992) concludes that in the seventeenth-century Chesapeake region, “for at least six months of the year, an eight to ten-hour day of hard labor was necessary.” This does not account for other required tasks, which probably took about three hours per day. This workday was considerably longer than for English laborers, who at the time probably averaged closer to six hours of heavy labor each day.

The Nineteenth Century

Some observers believe that most American workers adopted the practice of working from “first light to dark” — filling all their free hours with work — throughout the colonial period and into the nineteenth century. Others are skeptical of such claims and argue that work hours increased during the nineteenth century — especially its first half. Gallman (1975) calculates “changes in implicit hours of work per agricultural worker” and estimates that hours increased 11 to 18 percent from 1800 to 1850. Fogel and Engerman (1977) argue that agricultural hours in the North increased before the Civil War due to the shift into time-intensive dairy and livestock. Weiss and Craig (1993) find evidence suggesting that agricultural workers also increased their hours of work between 1860 and 1870. Finally, Margo (2000) estimates that “on an economy-wide basis, it is probable that annual hours of work rose over the (nineteenth) century, by around 10 percent.” He credits this rise to the shift out of agriculture, a decline in the seasonality of labor demand and reductions in annual periods of nonemployment. On the other hand, it is clear that working hours declined substantially for one important group. Ransom and Sutch (1977) and Ng and Virts (1989) estimate that annual labor hours per capita fell 26 to 35 percent among African-Americans with the end of slavery.

Manufacturing Hours before 1890

Our most reliable estimates of the workweek come from manufacturing, since most employers required that manufacturing workers remain at work during precisely specified hours. The Census of Manufactures began to collect this information in 1880 but earlier estimates are available. Much of what is known about average work hours in the nineteenth century comes from two surveys of manufacturing hours taken by the federal government. The first survey, known as the Weeks Report, was prepared by Joseph Weeks as part of the Census of 1880. The second was prepared in 1893 by Commissioner of Labor Carroll D. Wright, for the Senate Committee on Finance, chaired by Nelson Aldrich. It is commonly called the Aldrich Report. Both of these sources, however, have been criticized as flawed due to problems such as sample selection bias (firms whose records survived may not have been typical) and unrepresentative regional and industrial coverage. In addition, the two series differ in their estimates of the average length of the workweek by as much as four hours. These estimates are reported in Table 1. Despite the previously mentioned problems, it seems reasonable to accept two important conclusions based on these data — the length of the typical manufacturing workweek in the 1800s was very long by modern standards and it declined significantly between 1830 and 1890.

Table 1
Estimated Average Weekly Hours Worked in Manufacturing, 1830-1890

Year Weeks Report Aldrich Report
1830 69.1
1840 67.1 68.4
1850 65.5 69.0
1860 62.0 66.0
1870 61.1 63.0
1880 60.7 61.8
1890 60.0

Sources: U.S. Department of Interior (1883), U.S. Senate (1893)
Note: Atack and Bateman (1992), using data from census manuscripts, estimate average weekly hours to be 60.1 in 1880 — very close to Weeks’ contemporary estimate. They also find that the summer workweek was about 1.5 hours longer than the winter workweek.

Hours of Work during the Twentieth Century

Because of changing definitions and data sources there does not exist a consistent series of workweek estimates covering the entire twentieth century. Table 2 presents six sets of estimates of weekly hours. Despite differences among the series, there is a fairly consistent pattern, with weekly hours falling considerably during the first third of the century and much more slowly thereafter. In particular, hours fell strongly during the years surrounding World War I, so that by 1919 the eight-hour day (with six workdays per week) had been won. Hours fell sharply at the beginning of the Great Depression, especially in manufacturing, then rebounded somewhat and peaked during World War II. After World War II, the length of the workweek stabilized around forty hours. Owen’s nonstudent-male series shows little trend after World War II, but the other series show a slow, but steady, decline in the length of the average workweek. Greis’s two series are based on the average length of the workyear and adjust for paid vacations, holidays and other time-off. The last column is based on information reported by individuals in the decennial censuses and in the Current Population Survey of 1988. It may be the most accurate and representative series, as it is based entirely on the responses of individuals rather than employers.

Table 2
Estimated Average Weekly Hours Worked, 1900-1988

Year  Census of Manufacturing  Jones Manufacturing  Owen Nonstudent Males  Greis Manufacturing  Greis All Workers  Census/CPS All Workers
1900 59.6* 55.0 58.5
1904 57.9 53.6 57.1
1909 56.8 (57.3) 53.1 55.7
1914 55.1 (55.5) 50.1 54.0
1919 50.8 (51.2) 46.1 50.0
1924 51.1* 48.8 48.8
1929 50.6 48.0 48.7
1934 34.4 40.6
1940 37.6 42.5 43.3
1944 44.2 46.9
1947 39.2 42.4 43.4 44.7
1950 38.7 41.1 42.7
1953 38.6 41.5 43.2 44.0
1958 37.8* 40.9 42.0 43.4
1960 41.0 40.9
1963 41.6 43.2 43.2
1968 41.7 41.2 42.0
1970 41.1 40.3
1973 40.6 41.0
1978 41.3* 39.7 39.1
1980 39.8
1988 39.2

Sources: Whaples (1990a), Jones (1963), Owen (1976, 1988), and Greis (1984). The last column is based on the author’s calculations using Coleman and Pencavel’s data from Table 4 (below).
* = these estimates are from one year earlier than the year listed.
(The figures in parentheses in the first column are unofficial estimates but are probably more precise, as they better estimate the hours of workers in industries with very long workweeks.)

Hours in Other Industrial Sectors

Table 3 compares the length of the workweek in manufacturing to that in other industries for which there is available information. (Unfortunately, data from the agricultural and service sectors are unavailable until late in this period.) The figures in Table 3 show that the length of the workweek was generally shorter in the other industries — sometimes considerably shorter. For example, in 1900 anthracite coal miners’ workweeks were about forty percent shorter than the average workweek among manufacturing workers. All of the series show an overall downward trend.

Table 3
Estimated Average Weekly Hours Worked, Other Industries

Year Manufacturing Construction Railroads Bituminous Coal Anthracite Coal
1850s about 66 about 66
1870s about 62 about 60
1890 60.0 51.3
1900 59.6 50.3 52.3 42.8 35.8
1910 57.3 45.2 51.5 38.9 43.3
1920 51.2 43.8 46.8 39.3 43.2
1930 50.6 42.9 33.3 37.0
1940 37.6 42.5 27.8 27.2
1955 38.5 37.1 32.4 31.4

Sources: Douglas (1930), Jones (1963), Licht (1983), and Tables 1 and 2.
Note: The manufacturing figures for the 1850s and 1870s are approximations based on averaging numbers from the Weeks and Aldrich reports from Table 1. The early estimates for the railroad industry are also approximations.

Recent Trends by Race and Gender

Some analysts, such as Schor (1992) have argued that the workweek increased substantially in the last half of the twentieth century. Few economists accept this conclusion, arguing that it is based on the use of faulty data (public opinion surveys) and unexplained methods of “correcting” more reliable sources. Schor’s conclusions are contradicted by numerous studies. Table 4 presents Coleman and Pencavel’s (1993a, 1993b) estimates of the average workweek of employed people — disaggregated by race and gender. For all four groups the average length of the workweek has dropped since 1950. Although median weekly hours were virtually constant for men, the upper tail of the hours distribution fell for those with little schooling and rose for the well-educated. In addition, Coleman and Pencavel also find that work hours declined for young and older men (especially black men), but changed little for white men in their prime working years. Women with relatively little schooling were working fewer hours in the 1980s than in 1940, while the reverse is true of well-educated women.

Table 4
Estimated Average Weekly Hours Worked, by Race and Gender, 1940-1988

Year White Men Black Men White Women Black Women
1940 44.1 44.5 40.6 42.2
1950 43.4 42.8 41.0 40.3
1960 43.3 40.4 36.8 34.7
1970 43.1 40.2 36.1 35.9
1980 42.9 39.6 35.9 36.5
1988 42.4 39.6 35.5 37.2

Source: Coleman and Pencavel (1993a, 1993b)

Broader Trends in Time Use, 1880 to 2040

In 1880 a typical male household head had very little leisure time — only about 1.8 hours per day over the course of a year. However, as Fogel’s (2000) estimates in Table 5 show, between 1880 and 1995 the amount of work per day fell nearly in half, allowing leisure time to more than triple. Because of the decline in the length of the workweek and the declining portion of a lifetime that is spent in paid work (due largely to lengthening periods of education and retirement) the fraction of the typical American’s lifetime devoted to work has become remarkably small. Based on these trends Fogel estimates that four decades from now less than one-fourth of our discretionary time (time not needed for sleep, meals, and hygiene) will be devoted to paid work — over three-fourths will be available for doing what we wish.

Table 5
Division of the Day for the Average Male Household Head over the Course of a Year, 1880 and 1995

Activity 1880 1995
Sleep 8 8
Meals and hygiene 2 2
Chores 2 2
Travel to and from work 1 1
Work 8.5 4.7
Illness .7 .5
Left over for leisure activities 1.8 5.8

Source: Fogel (2000)

Table 6
Estimated Trend in the Lifetime Distribution of Discretionary Time, 1880-2040

Activity 1880 1995 2040
Lifetime Discretionary Hours 225,900 298,500 321,900
Lifetime Work Hours 182,100 122,400 75,900
Lifetime Leisure Hours 43,800 176,100 246,000

Source: Fogel (2000)
Notes: Discretionary hours exclude hours used for sleep, meals and hygiene. Work hours include paid work, travel to and from work, and household chores.
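
Fogel’s figures can be checked directly from the numbers in Tables 5 and 6. The snippet below is a minimal sketch in Python using only the values printed above; the variable names are mine, and nothing here goes beyond the tables themselves.

```python
# Rough check of the shares implied by Fogel's Tables 5 and 6 (figures as printed above).

# Table 5: hours per day for the average male household head
work_1880, work_1995 = 8.5, 4.7
leisure_1880, leisure_1995 = 1.8, 5.8
print(f"Work per day fell by {1 - work_1995 / work_1880:.0%}")         # ~45%, i.e. "nearly half"
print(f"Leisure per day grew {leisure_1995 / leisure_1880:.1f}-fold")  # ~3.2x, "more than triple"

# Table 6: lifetime discretionary hours and the share devoted to work
# (work here includes paid work, commuting, and chores, per the table note)
discretionary = {1880: 225_900, 1995: 298_500, 2040: 321_900}
work = {1880: 182_100, 1995: 122_400, 2040: 75_900}
for year in (1880, 1995, 2040):
    print(f"{year}: {work[year] / discretionary[year]:.0%} of discretionary time spent working")
# The projected 2040 share works out to roughly 24 percent, "less than one-fourth."
```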

Postwar International Comparisons

While hours of work have decreased slowly in the U.S. since the end of World War II, they have decreased more rapidly in Western Europe. Greis (1984) calculates that annual hours worked per employee in the U.S. fell from 1,908 in 1950 to 1,704 in 1979, a 10.7 percent decrease. This compares to a 21.8 percent decrease across a group of twelve Western European countries, where the average fell from 2,170 hours to 1,698 hours over the same period. Perhaps the most precise way of measuring work hours is to have individuals fill out diaries on their day-to-day and hour-to-hour time use. Table 7 presents an international comparison of average work hours both inside and outside of the workplace, by adult men and women — averaging those who are employed with those who are not. (Juster and Stafford (1991) caution, however, that making these comparisons requires a good deal of guesswork.) These numbers show a significant drop in total work per week in the U.S. between 1965 and 1981. They also show that total work by men and women is very similar, although it is divided differently. Total work hours in the U.S. were fairly similar to those in Japan, but greater than in Denmark, while less than in the USSR.

Table 7
Weekly Work Time in Four Countries, Based on Time Diaries, 1960s-1980s

Activity        US                       USSR (Pskov)
                Men          Women       Men          Women
                1965   1981  1965  1981  1965   1981  1965  1981
Total Work      63.1   57.8  60.9  54.4  64.4   65.7  75.3  66.3
Market Work     51.6   44.0  18.9  23.9  54.6   53.8  43.8  39.3
Commuting        4.8    3.5   1.6   2.0   4.9    5.2   3.7   3.4
Housework       11.5   13.8  41.8  30.5   9.8   11.9  31.5  27.0

Activity        Japan                    Denmark
                Men          Women       Men          Women
                1965   1985  1965  1985  1964   1987  1964  1987
Total Work      60.5   55.5  64.7  55.6  45.4   46.2  43.4  43.9
Market Work     57.7   52.0  33.2  24.6  41.7   33.4  13.3  20.8
Commuting        3.6    4.5   1.0   1.2  n.a.   n.a.  n.a.  n.a.
Housework        2.8    3.5  31.5  31.0   3.7   12.8  30.1  23.1

Source: Juster and Stafford (1991)
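
As a quick check of the Greis (1984) percentages cited above, and of the internal structure of Table 7, the arithmetic is straightforward. A minimal sketch (my own code; note that the Total Work rows appear to equal Market Work plus Housework, to rounding, with commuting reported separately):

```python
# Percentage declines in annual hours per employee, 1950-1979, as cited from Greis (1984)
def pct_decline(start, end):
    return (start - end) / start

print(f"U.S.: {pct_decline(1908, 1704):.1%}")                            # ~10.7 percent
print(f"Western Europe (12 countries): {pct_decline(2170, 1698):.1%}")   # ~21.8 percent

# Consistency check on one Table 7 row: Total Work ~ Market Work + Housework
us_men_1965 = {"total": 63.1, "market": 51.6, "housework": 11.5}
assert abs(us_men_1965["market"] + us_men_1965["housework"] - us_men_1965["total"]) < 0.3
```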

The Shorter Hours “Movement” in the U.S.

The Colonial Period

Captain John Smith, after mapping New England’s coast, came away convinced that three days’ work per week would satisfy any settler. Far from becoming a land of leisure, however, the abundant resources of British America and the ideology of its settlers brought forth high levels of work. Many colonial Americans held the opinion that prosperity could be taken as a sign of God’s pleasure with the individual, viewed work as inherently good, and saw idleness as the devil’s workshop. Rodgers (1978) argues that this work ethic spread and eventually reigned supreme in colonial America. The ethic was consistent with the American experience, since high returns to effort meant that hard work often yielded significant increases in wealth. In Virginia, authorities also transplanted the Statute of Artificers, which obliged all Englishmen (except the gentry) to engage in productive activity from sunrise to sunset. Likewise, a 1670 Massachusetts law demanded a minimum ten-hour workday, but it is unlikely that these laws had any impact on the behavior of most free workers.

The Revolutionary War Period

Roediger and Foner (1989) contend that the Revolutionary War era brought a series of changes that undermined support for sun-to-sun work. The era’s republican ideology emphasized that workers needed free time, away from work, to participate in democracy. Simultaneously, the development of merchant capitalism meant that there were, for the first time, a significant number of wageworkers. Roediger and Foner argue that reducing labor costs was crucial to the profitability of these workers’ employers, who reduced costs by squeezing more work from their employees — reducing time for meals, drink and rest and sometimes even rigging the workplace’s official clock. Incensed by their employers’ practice of paying a flat daily wage during the long summer shift and resorting to piece rates during short winter days, Philadelphia’s carpenters mounted America’s first ten-hour-day strike in May 1791. (The strike was unsuccessful.)

1820s: The Shorter Hours Movement Begins

Changes in the organization of work, with the continued rise of merchant capitalists, the transition from the artisanal shop to the early factory, and an intensified work pace had become widespread by about 1825. These changes produced the first extensive, aggressive movement among workers for shorter hours, as the ten-hour movement blossomed in New York City, Philadelphia and Boston. Rallying around the ten-hour banner, workers formed the first city-central labor union in the U.S., the first labor newspaper, and the first workingmen’s political party — all in Philadelphia — in the late 1820s.

Early Debates over Shorter Hours

Although the length of the workday is largely an economic decision arrived at by the interaction of the supply and demand for labor, advocates of shorter hours and foes of shorter hours have often argued the issue on moral grounds. In the early 1800s, advocates argued that shorter work hours improved workers’ health, allowed them time for self-improvement and relieved unemployment. Detractors countered that workers would abuse leisure time (especially in saloons) and that long, dedicated hours of work were the path to success, which should not be blocked for the great number of ambitious workers.

1840s: Early Agitation for Government Intervention

When Samuel Slater built the first textile mills in the U.S., “workers labored from sun up to sun down in summer and during the darkness of both morning and evening in the winter. These hours … only attracted attention when they exceeded the common working day of twelve hours,” according to Ware (1931). During the 1830s, an increased work pace, tighter supervision, and the addition of about fifteen minutes to the work day (partly due to the introduction of artificial lighting during winter months), plus the growth of a core of more permanent industrial workers, fueled a campaign for a shorter workweek among mill workers in Lowell, Massachusetts, whose workweek averaged about 74 hours. This agitation was led by Sarah Bagley and the New England Female Labor Reform Association, which, beginning in 1845, petitioned the state legislature to intervene in the determination of hours. The petitions were followed by America’s first-ever examination of labor conditions by a governmental investigating committee. The Massachusetts legislature proved to be very unsympathetic to the workers’ demands, but similar complaints led to the passage of laws in New Hampshire (1847) and Pennsylvania (1848), declaring ten hours to be the legal length of the working day. However, these laws also specified that a contract freely entered into by employee and employer could set any length for the workweek. Hence, these laws had little impact. Legislation passed by the federal government had a more direct, though limited effect. On March 31, 1840, President Martin Van Buren issued an executive order mandating a ten-hour day for all federal employees engaged in manual work.

1860s: Grand Eight Hours Leagues

As the length of the workweek gradually declined, political agitation for shorter hours seems to have waned for the next two decades. However, immediately after the Civil War reductions in the length of the workweek reemerged as an important issue for organized labor. The new goal was an eight-hour day. Roediger (1986) argues that many of the new ideas about shorter hours grew out of the abolitionists’ critique of slavery — that long hours, like slavery, stunted aggregate demand in the economy. The leading proponent of this idea, Ira Steward, argued that decreasing the length of the workweek would raise the standard of living of workers by raising their desired consumption levels as their leisure expanded, and by ending unemployment. The hub of the newly launched movement was Boston, and Grand Eight Hours Leagues sprang up around the country in 1865 and 1866. The leaders of the movement called the meeting of the first national organization to unite workers of different trades, the National Labor Union, which met in Baltimore in 1866. In response to this movement, eight states adopted general eight-hour laws, but again the laws allowed employer and employee to mutually consent to workdays longer than the “legal day.” Many critics saw these laws and this agitation as a hoax, because few workers actually desired to work only eight hours per day at their original hourly pay rate. The passage of the state laws did foment action by workers — especially in Chicago where parades, a general strike, rioting and martial law ensued. In only a few places did work hours fall after the passage of these laws. Many became disillusioned with the idea of using the government to promote shorter hours and by the late 1860s, efforts to push for a universal eight-hour day had been put on the back burner.

The First Enforceable Hours Laws

Despite this lull in shorter-hours agitation, in 1874, Massachusetts passed the nation’s first enforceable ten-hour law. It covered only female workers and became fully effective by 1879. This legislation was fairly late by European standards. Britain had passed its first effective Factory Act, setting maximum hours for almost half of its very young textile workers, in 1833.

1886: Year of Dashed Hopes

In the early 1880s organized labor in the U.S. was fairly weak. In 1884, the short-lived Federation of Organized Trades and Labor Unions (FOTLU) fired a “shot in the dark.” During its final meeting, before dissolving, the Federation “ordained” May 1, 1886 as the date on which workers would cease working beyond eight hours per day. Meanwhile, the Knights of Labor, which had begun as a secret fraternal society and evolved into a labor union, began to gain strength. It appears that many nonunionized workers, especially the unskilled, came to see in the Knights a chance to obtain a better deal from their employers, perhaps even to obtain the eight-hour day. FOTLU’s call for workers to simply walk off the job after eight hours beginning on May 1, plus the activities of socialist and anarchist labor organizers and politicians, and the apparent strength of the Knights combined to attract members in record numbers. The Knights mushroomed and its new membership demanded that their local leaders support them in attaining the eight-hour day. Many smelled victory in the air — the movement to win the eight-hour day became frenzied and the goal became “almost a religious crusade” (Grob, 1961).

The Knights’ leader, Terence Powderly, thought that the push for a May 1 general strike for eight hours was “rash, short-sighted and lacking in system” and “must prove abortive” (Powderly, 1890). He offered no effective alternative plan but instead tried to block the mass action, issuing a “secret circular” condemning the use of strikes. Powderly reasoned that low incomes forced workmen to accept long hours. Workers didn’t want shorter hours unless their daily pay was maintained, but employers were unwilling and/or unable to offer this. Powderly’s rival, labor leader Samuel Gompers, agreed that “the movement of ’86 did not have the advantage of favorable conditions” (Gompers, 1925). Nelson (1986) points to divisions among workers, which probably had much to do with the failure in 1886 of the drive for the eight-hour day. Some insisted on eight hours with ten hours’ pay, but others were willing to accept eight hours with eight hours’ pay.

Haymarket Square Bombing

The eight-hour push of 1886 was, in Norman Ware’s words, “a flop” (Ware, 1929). Lack of will and organization among workers was undoubtedly important, but its collapse was aided by violence that marred strikes and political rallies in Chicago and Milwaukee. The 1886 drive for eight hours literally blew up in organized labor’s face. At Haymarket Square in Chicago an anarchist bomb and the gunfire that followed killed seven policemen during an eight-hour rally, and in Milwaukee’s Bay View suburb nine strikers were killed as police tried to disperse roving pickets. The public backlash and fear of revolution damned the eight-hour organizers along with the radicals and dampened the drive toward eight hours — although it is estimated that the strikes of May 1886 shortened the workweek for about 200,000 industrial workers, especially in New York City and Cincinnati.

The AFL’s Strategy

After the demise of the Knights of Labor, the American Federation of Labor (AFL) became the strongest labor union in the U.S. It held shorter hours as a high priority. The inside cover of its Proceedings carried two slogans in large type: “Eight hours for work, eight hours for rest, eight hours for what we will” and “Whether you work by the piece or work by the day, decreasing the hours increases the pay.” (The latter slogan was coined by Ira Steward’s wife, Mary.) In the aftermath of 1886, the American Federation of Labor adopted a new strategy of selecting each year one industry in which it would attempt to win the eight-hour day, after laying solid plans, organizing, and building up a strike fund war chest by taxing nonstriking unions. The United Brotherhood of Carpenters and Joiners was selected first and May 1, 1890 was set as a day of national strikes. It is estimated that nearly 100,000 workers gained the eight-hour day as a result of these strikes in 1890. However, other unions turned down the opportunity to follow the carpenters’ example and the tactic was abandoned. Instead, the length of the workweek continued to erode during this period, sometimes as the result of a successful local strike, more often as the result of broader economic forces.

The Spread of Hours Legislation

Massachusetts’ first hours law in 1874 set sixty hours per week as the legal maximum for women; in 1892 this was cut to 58, in 1908 to 56, and in 1911 to 54. By 1900, 26 percent of states had maximum hours laws covering women, children and, in some, adult men (generally only those in hazardous industries). The percentage of states with maximum hours laws climbed to 58 percent in 1910, 76 percent in 1920, and 84 percent in 1930. Steinberg (1982) calculates that the percent of employees covered climbed from 4 percent nationally in 1900, to 7 percent in 1910, and 12 percent in 1920 and 1930. In addition, these laws became more restrictive, with the average legal standard falling from a maximum of 59.3 hours per week in 1900 to 56.7 in 1920. According to her calculations, in 1900 about 16 percent of the workers covered by these laws were adult men, 49 percent were adult women and the rest were minors.

Court Rulings

The banner years for maximum hours legislation were right around 1910. This may have been partly a reaction to the Supreme Court’s ruling upholding female-hours legislation in the Muller vs. Oregon case (1908). The Court’s rulings were not always completely consistent during this period, however. In 1898 the Court upheld a maximum eight-hour day for workmen in the hazardous industries of mining and smelting in Utah in Holden vs. Hardy. In Lochner vs. New York (1905), it rejected as unconstitutional New York’s ten-hour day for bakers, which was also adopted (at least nominally) out of concerns for safety. The defendant showed that mortality rates in baking were only slightly above average, and lower than those for many unregulated occupations, arguing that this was special interest legislation, designed to favor unionized bakers. Several state courts, on the other hand, supported laws regulating the hours of men in only marginally hazardous work. By 1917, in Bunting vs. Oregon, the Supreme Court seemingly overturned the logic of the Lochner decision, supporting a state law that required overtime payment for all men working long hours. The general presumption during this period was that the courts would allow regulation of labor concerning women and children, who were thought to be incapable of bargaining on an equal footing with employers and in special need of protection. Men were allowed freedom of contract unless it could be proven that regulating their hours served a higher good for the population at large.

New Arguments about Shorter Hours

During the first decades of the twentieth century, arguments favoring shorter hours moved away from Steward’s line that shorter hours increased pay and reduced unemployment to arguments that shorter hours were good for employers because they made workers more productive. A new cadre of social scientists began to offer evidence that long hours produced health-threatening, productivity-reducing fatigue. This line of reasoning, advanced in the court brief of Louis Brandeis and Josephine Goldmark, was crucial in the Supreme Court’s decision to support state regulation of women’s hours in Muller vs. Oregon. Goldmark’s book, Fatigue and Efficiency (1912) was a landmark. In addition, data relating to hours and output among British and American war workers during World War I helped convince some that long hours could be counterproductive. Businessmen, however, frequently attacked the shorter hours movement as merely a ploy to raise wages, since workers were generally willing to work overtime at higher wage rates.

Federal Legislation in the 1910s

In 1912 the Federal Public Works Act was passed, which provided that every contract to which the U.S. government was a party must contain an eight-hour day clause. Three years later, LaFollette’s Bill established maximum hours for maritime workers. These were preludes to the most important shorter-hours law enacted by Congress during this period — the Adamson Act of 1916, which was passed to counter a threatened nationwide strike and granted rail workers the basic eight-hour day. (The law set eight hours as the basic workday and required higher overtime pay for longer hours.)

World War I and Its Aftermath

Labor markets became very tight during World War I as the demand for workers soared and the unemployment rate plunged. These forces put workers in a strong bargaining position, which they used to obtain shorter work schedules. The move to shorter hours was also pushed by the federal government, which gave unprecedented support to unionization. The federal government began to intervene in labor disputes for the first time, and the National War Labor Board “almost invariably awarded the basic eight-hour day when the question of hours was at issue” in labor disputes (Cahill, 1932). At the end of the war everyone wondered if organized labor would maintain its newfound power and the crucial test case was the steel industry. Blast furnace workers generally put in 84-hour workweeks. These abnormally long hours were the subject of much denunciation and a major issue in a strike that began in September 1919. The strike failed (and organized labor’s power receded during the 1920s), but four years later US Steel reduced its workday from twelve to eight hours. The move came after much arm-twisting by President Harding but its timing may be explained by immigration restrictions and the loss of immigrant workers who were willing to accept such long hours (Shiells, 1990).

The Move to a Five-day Workweek

During the 1920s agitation for shorter workdays largely disappeared, now that the workweek had fallen to about 50 hours. However, pressure arose to grant half-holidays on Saturday or Saturday off — especially in industries whose workers were predominantly Jewish. By 1927 at least 262 large establishments had adopted the five-day week, while only 32 had done so by 1920. The most notable action was Henry Ford’s decision to adopt the five-day week in 1926. Ford employed more than half of the nation’s approximately 400,000 workers with five-day weeks. However, Ford’s motives were questioned by many employers, who argued that the productivity gains from reducing hours ceased once the workweek fell below about forty-eight hours. Even the reformist American Labor Legislation Review greeted the call for a five-day workweek with lukewarm interest.

Changing Attitudes in the 1920s

Hunnicutt (1988) argues that during the 1920s businessmen and economists began to see shorter hours as a threat to future economic growth. With the development of advertising — the “gospel of consumption” — a new vision of progress was proposed to American workers. It replaced the goal of leisure time with a list of things to buy and business began to persuade workers that more work brought more tangible rewards. Many workers began to oppose further decreases in the length of the workweek. Hunnicutt concludes that a new work ethic arose as Americans threw off the psychology of scarcity for one of abundance.

Hours’ Reduction during the Great Depression

Then the Great Depression hit the American economy. By 1932 about half of American employers had shortened hours. Rather than slash workers’ real wages, employers opted to lay off many workers (the unemployment rate hit 25 percent) and tried to protect the ones they kept on by sharing the available work among them. President Hoover’s Commission for Work Sharing pushed voluntary hours reductions and estimated that they had saved three to five million jobs. Major employers like Sears, GM, and Standard Oil scaled down their workweeks, and Kellogg’s and the Akron tire industry pioneered the six-hour day. Amid these developments, the AFL called for a federally-mandated thirty-hour workweek.

The Black-Connery 30-Hours Bill and the NIRA

The movement for shorter hours as a depression-fighting work-sharing measure built such a seemingly irresistible momentum that by 1933 observers were predicting that the “30-hour week was within a month of becoming federal law” (Hunnicutt, 1988). During the period after the 1932 election but before Franklin Roosevelt’s inauguration, Congressional hearings on thirty hours began, and less than one month into FDR’s first term, the Senate passed, 53 to 30, a thirty-hour bill authored by Hugo Black. The bill was sponsored in the House by William Connery. Roosevelt originally supported the Black-Connery proposals, but soon backed off, uneasy with a provision forbidding importation of goods produced by workers whose weeks were longer than thirty hours, and convinced by arguments of business that trying to legislate fewer hours might have disastrous results. Instead, FDR backed the National Industrial Recovery Act (NIRA). Hunnicutt argues that an implicit deal was struck in the NIRA. Labor leaders were persuaded by NIRA Section 7a’s provisions — which guaranteed union organization and collective bargaining — to support the NIRA rather than the Black-Connery Thirty-Hour Bill. Business, with the threat of thirty hours hanging over its head, fell raggedly into line. (Most historians cite other factors as the key to the NIRA’s passage. See Barbara Alexander’s article on the NIRA in this encyclopedia.) When specific industry codes were drawn up by the NIRA-created National Recovery Administration (NRA), shorter hours were deemphasized. Despite a plan by NRA Administrator Hugh Johnson to make blanket provisions for a thirty-five hour workweek in all industry codes, by late August 1933, the momentum toward the thirty-hour week had dissipated. About half of employees covered by NRA codes had their hours set at forty per week and nearly 40 percent had workweeks longer than forty hours.

The FLSA: Federal Overtime Law

Hunnicutt argues that the entire New Deal can be seen as an attempt to keep shorter-hours advocates at bay. After the Supreme Court struck down the NRA, Roosevelt responded to continued demands for thirty hours with the Works Progress Administration, the Wagner Act, Social Security, and, finally, the Fair Labor Standards Act, which set a federal minimum wage and decreed that overtime beyond forty hours per week would be paid at one-and-a-half times the base rate in covered industries.

The Demise of the Shorter Hours’ Movement

As the Great Depression ended, average weekly work hours slowly climbed from their low reached in 1934. During World War II hours reached a level almost as high as at the end of World War I. With the postwar return of weekly work hours to the forty-hour level the shorter hours movement effectively ended. Occasionally organized labor’s leaders announced that they would renew the push for shorter hours, but they found that most workers didn’t desire a shorter workweek.

The Case of Kellogg’s

Offsetting isolated examples of hours reductions after World War II, there were noteworthy cases of backsliding. Hunnicutt (1996) has studied the case of Kellogg’s in great detail. With the end of the war in 1946, 87% of women and 71% of men working at Kellogg’s voted to return to the six-hour day. Over the course of the next decade, however, the tide turned. By 1957 most departments had opted to switch to 8-hour shifts, so that only about one-quarter of the work force, mostly women, retained a six-hour shift. Finally, in 1985, the last department voted to adopt an 8-hour workday. Workers, especially male workers, began to favor additional money more than the extra two hours per day of free time. In interviews they explained that they needed the extra money to buy a wide range of consumer items and to keep up with the neighbors. Several men told about the friction that resulted when men spent too much time around the house: “The wives didn’t like the men underfoot all day.” “The wife always found something for me to do if I hung around.” “We got into a lot of fights.” During the 1950s, the threat of unemployment evaporated and the moral condemnation for being a “work hog” no longer made sense. In addition, the rise of quasi-fixed employment costs (such as health insurance) induced management to push workers toward a longer workday.

The Current Situation

As the twentieth century ended there was nothing resembling a shorter hours “movement.” The length of the workweek continues to fall for most groups — but at a glacial pace. Some Americans complain about a lack of free time but the vast majority seem content with an average workweek of roughly forty hours — channeling almost all of their growing wages into higher incomes rather than increased leisure time.

Causes of the Decline in the Length of the Workweek

Supply, Demand and Hours of Work

The length of the workweek, like other labor market outcomes, is determined by the interaction of the supply and demand for labor. Employers are torn by conflicting pressures. Holding everything else constant, they would like employees to work long hours because this means that they can utilize their equipment more fully and offset any fixed costs from hiring each worker (such as the cost of health insurance — common today, but not a consideration a century ago). On the other hand, longer hours can bring reduced productivity due to worker fatigue and can bring worker demands for higher hourly wages to compensate for putting in long hours. If employers set the workweek too long, workers may quit and few new hires will be willing to work for them at a competitive wage rate. Thus, workers implicitly choose among a variety of jobs — some offering shorter hours and lower earnings, others offering longer hours and higher earnings.

Economic Growth and the Long-Term Reduction of Work Hours

Historically employers and employees often agreed on very long workweeks because the economy was not very productive (by today’s standards) and people had to work long hours to earn enough money to feed, clothe and house their families. The long-term decline in the length of the workweek, in this view, has primarily been due to increased economic productivity, which has yielded higher wages for workers. Workers responded to this rise in potential income by “buying” more leisure time, as well as by buying more goods and services. In a recent survey, a sizeable majority of economic historians agreed with this view. Over eighty percent accepted the proposition that “the reduction in the length of the workweek in American manufacturing before the Great Depression was primarily due to economic growth and the increased wages it brought” (Whaples, 1995). Other broad forces probably played only a secondary role. For example, roughly two-thirds of economic historians surveyed rejected the proposition that the efforts of labor unions were the primary cause of the drop in work hours before the Great Depression.
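
The mechanism described here, workers “buying” leisure as wages rise, can be illustrated with a stylized income-leisure choice. The sketch below is purely hypothetical: the utility function, the subsistence-consumption idea, and every parameter value are my own assumptions for illustration, not estimates from the survey or the literature cited. The point is simply that when part of consumption is a fixed necessity, a higher wage lets the worker cover it with fewer hours, so desired hours can fall even though each hour of leisure has become more expensive.

```python
# Stylized income-leisure choice (illustrative only, not a historical estimate).
# Utility: U = a*ln(c - c_min) + (1 - a)*ln(T - h), with consumption c = w*h.
# The first-order condition gives a closed form: h* = a*T + (1 - a)*c_min / w.

T = 98        # hypothetical weekly hours available for work or leisure
a = 0.40      # weight on consumption above the subsistence level (assumed)
c_min = 9.0   # subsistence consumption per week, in wage units (assumed)

def desired_hours(w):
    """Hours of work that maximize the utility function above at hourly wage w."""
    return a * T + (1 - a) * c_min / w

for w in (0.15, 0.30, 0.60, 1.20):   # hypothetical real hourly wages
    print(f"wage {w:.2f}: desired workweek of about {desired_hours(w):.1f} hours")
```

With these made-up numbers the desired workweek falls from roughly 75 hours to about 44 hours as the wage rises eight-fold: most of the gain from higher productivity is taken as income, but a noticeable slice is taken as leisure.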

Winning the Eight-Hour Day in the Era of World War I

The swift reduction of the workweek in the period around World War I has been extensively analyzed by Whaples (1990b). His findings support the consensus that economic growth was the key to reduced work hours. Whaples links factors such as wages, labor legislation, union power, ethnicity, city size, leisure opportunities, age structure, wealth and homeownership, health, education, alternative employment opportunities, industrial concentration, seasonality of employment, and technological considerations to changes in the average workweek in 274 cities and 118 industries. He finds that the rapid economic expansion of the World War I period, which pushed up real wages by more than 18 percent between 1914 and 1919, explains about half of the drop in the length of the workweek. The reduction of immigration during the war was important, as it deprived employers of a group of workers who were willing to put in long hours, explaining about one-fifth of the hours decline. The rapid electrification of manufacturing seems also to have played an important role in reducing the workweek. Increased unionization explains about one-seventh of the reduction, and federal and state legislation and policies that mandated reduced workweeks also had a noticeable role.

Cross-sectional Patterns from 1919

In 1919 the average workweek varied tremendously, emphasizing the point that not all workers desired the same workweek. The workweek exceeded 69 hours in the iron blast furnace, cottonseed oil, and sugar beet industries, but fell below 45 hours in industries such as hats and caps, fur goods, and women’s clothing. Cities’ averages also differed dramatically. In a few Midwestern steel mill towns average workweeks exceeded 60 hours. In a wide range of low-wage Southern cities they reached the high 50s, but in high-wage Western ports, like Seattle, the workweek fell below 45 hours.

Whaples (1990a) finds that among the most important city-level determinants of the workweek during this period were the availability of a pool of agricultural workers, the capital-labor ratio, horsepower per worker, and the amount of employment in large establishments. Hours rose as each of these increased. Eastern European immigrants worked significantly longer than others, as did people in industries whose output varied considerably from season to season. High unionization and strike levels reduced hours to a small degree. The average female employee worked about six and a half fewer hours per week in 1919 than did the average male employee. In city-level comparisons, state maximum hours laws appear to have had little effect on average work hours, once the influences of other factors have been taken into account. One possibility is that these laws were passed only after economic forces had already lowered the length of the workweek. Overall, in cities where wages were one percent higher, hours were about 0.05 to 0.13 percent lower — an elasticity of roughly -0.05 to -0.13. Again, this suggests that during the era of declining hours, workers were willing to use higher wages to “buy” shorter hours.
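
A minimal sketch of what an hours-wage elasticity of that size implies, using a hypothetical wage gap (the 18 percent figure below is an assumption for illustration, not one of Whaples’s cross-city comparisons):

```python
# Applying the cited hours-wage elasticities (about -0.05 to -0.13) to a hypothetical wage gap.
baseline_week = 50.0     # hours, roughly the manufacturing average around 1919-1920
wage_gap = 0.18          # hypothetical 18 percent real-wage difference between two cities

for elasticity in (-0.05, -0.13):
    change_in_hours = elasticity * wage_gap * baseline_week
    print(f"elasticity {elasticity}: workweek shorter by about {-change_in_hours:.1f} hours")
# Roughly half an hour to a little over an hour per week -- a modest effect from wages
# alone, consistent with the finding that several forces combined to shorten hours.
```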

Annotated Bibliography

Perhaps the most comprehensive survey of the shorter hours movement in the U.S. is David Roediger and Philip Foner’s Our Own Time: A History of American Labor and the Working Day (1989). It contends that “the length of the working day has been the central issue for the American labor movement during its most vigorous periods of activity, uniting workers along lines of craft, gender, and ethnicity.” Critics argue that its central premise is flawed because workers have often been divided about the optimal length of the workweek. It explains the point of view of organized labor and recounts numerous historically important events and arguments, but does not attempt to examine in detail the broader economic forces that determined the length of the workweek. An earlier useful comprehensive work is Marion Cahill’s Shorter Hours: A Study of the Movement since the Civil War (1932).

Benjamin Hunnicutt’s Work Without End: Abandoning Shorter Hours for the Right to Work (1988) focuses on the period from 1920 to 1940 and traces the political, intellectual, and social “dialogues” that changed the American concept of progress from dreams of more leisure to an “obsession” with the importance of work and wage-earning. This work’s detailed analysis and insights are valuable, but it draws many of its inferences from what intellectuals said about shorter hours, rather than spending time on the actual decision makers — workers and employers. Hunnicutt’s Kellogg’s Six-Hour Day (1996), is important because it does exactly this — interviewing employees and examining the motives and decisions of a prominent employer. Unfortunately, it shows that one must carefully interpret what workers say on the subject, as they are prone to reinterpret their own pasts so that their choices can be more readily rationalized. (See EH.NET’s review: http://eh.net/book_reviews/kelloggs-six-hour-day/.)

Economists have given surprisingly little attention to the determinants of the workweek. The most comprehensive treatment is Robert Whaples’ “The Shortening of the American Work Week” (1990), which surveys estimates of the length of the workweek, the shorter hours movement, and economic theories about the length of the workweek. Its core is an extensive statistical examination of the determinants of the workweek in the period around World War I.

References

Atack, Jeremy and Fred Bateman. “How Long Was the Workday in 1880?” Journal of Economic History 52, no. 1 (1992): 129-160.

Cahill, Marion Cotter. Shorter Hours: A Study of the Movement since the Civil War. New York: Columbia University Press, 1932.

Carr, Lois Green. “Emigration and the Standard of Living: The Seventeenth Century Chesapeake.” Journal of Economic History 52, no. 2 (1992): 271-291.

Coleman, Mary T. and John Pencavel. “Changes in Work Hours of Male Employees, 1940-1988.” Industrial and Labor Relations Review 46, no. 2 (1993a): 262-283.

Coleman, Mary T. and John Pencavel. “Trends in Market Work Behavior of Women since 1940.” Industrial and Labor Relations Review 46, no. 4 (1993b): 653-676.

Douglas, Paul. Real Wages in the United States, 1890-1926. Boston: Houghton, 1930.

Fogel, Robert. The Fourth Great Awakening and the Future of Egalitarianism. Chicago: University of Chicago Press, 2000.

Fogel, Robert and Stanley Engerman. Time on the Cross: The Economics of American Negro Slavery. Boston: Little, Brown, 1974.

Gallman, Robert. “The Agricultural Sector and the Pace of Economic Growth: U.S. Experience in the Nineteenth Century.” In Essays in Nineteenth-Century Economic History: The Old Northwest, edited by David Klingaman and Richard Vedder. Athens, OH: Ohio University Press, 1975.

Goldmark, Josephine. Fatigue and Efficiency. New York: Charities Publication Committee, 1912.

Gompers, Samuel. Seventy Years of Life and Labor: An Autobiography. New York: Dutton, 1925.

Greis, Theresa Diss. The Decline of Annual Hours Worked in the United States, since 1947. Manpower and Human Resources Studies, no. 10, Wharton School, University of Pennsylvania, 1984.

Grob, Gerald. Workers and Utopia: A Study of Ideological Conflict in the American Labor Movement, 1865-1900. Evanston: Northwestern University Press, 1961.

Hunnicutt, Benjamin Kline. Work Without End: Abandoning Shorter Hours for the Right to Work. Philadelphia: Temple University Press, 1988.

Hunnicutt, Benjamin Kline. Kellogg’s Six-Hour Day. Philadelphia: Temple University Press, 1996.

Jones, Ethel. “New Estimates of Hours of Work per Week and Hourly Earnings, 1900-1957.” Review of Economics and Statistics 45, no. 4 (1963): 374-385.

Juster, F. Thomas and Frank P. Stafford. “The Allocation of Time: Empirical Findings, Behavioral Models, and Problems of Measurement.” Journal of Economic Literature 29, no. 2 (1991): 471-522.

Licht, Walter. Working for the Railroad: The Organization of Work in the Nineteenth Century. Princeton: Princeton University Press, 1983.

Margo, Robert. “The Labor Force in the Nineteenth Century.” In The Cambridge Economic History of the United States, Volume II, The Long Nineteenth Century, edited by Stanley Engerman and Robert Gallman, 207-243. New York: Cambridge University Press, 2000.

Nelson, Bruce. “‘We Can’t Get Them to Do Aggressive Work’: Chicago’s Anarchists and the Eight-Hour Movement.” International Labor and Working Class History 29 (1986).

Ng, Kenneth and Nancy Virts. “The Value of Freedom.” Journal of Economic History 49, no. 4 (1989): 958-965.

Owen, John. “Workweeks and Leisure: An Analysis of Trends, 1948-1975.” Monthly Labor Review 99 (1976).

Owen, John. “Work-time Reduction in the United States and Western Europe.” Monthly Labor Review 111 (1988).

Powderly, Terence. Thirty Years of Labor, 1859-1889. Columbus: Excelsior, 1890.

Ransom, Roger and Richard Sutch. One Kind of Freedom: The Economic Consequences of Emancipation. New York: Cambridge University Press, 1977.

Rodgers, Daniel. The Work Ethic in Industrial America, 1850-1920. Chicago: University of Chicago Press, 1978.

Roediger, David. “Ira Steward and the Antislavery Origins of American Eight-Hour Theory.” Labor History 27 (1986).

Roediger, David and Philip Foner. Our Own Time: A History of American Labor and the Working Day. New York: Verso, 1989.

Schor, Juliet B. The Overworked American: The Unexpected Decline in Leisure. New York: Basic Books, 1992.

Shiells, Martha Ellen. “Collective Choice of Working Conditions: Hours in British and U.S. Iron and Steel, 1890-1923.” Journal of Economic History 50, no. 2 (1990): 379-392.

Steinberg, Ronnie. Wages and Hours: Labor and Reform in Twentieth-Century America. New Brunswick, NJ: Rutgers University Press, 1982.

United States, Department of Interior, Census Office. Report on the Statistics of Wages in Manufacturing Industries, by Joseph Weeks, 1880 Census, Vol. 20. Washington: GPO, 1883.

United States Senate. Senate Report 1394, Fifty-Second Congress, Second Session. “Wholesale Prices, Wages, and Transportation.” Washington: GPO, 1893.

Ware, Caroline. The Early New England Cotton Manufacture: A Study of Industrial Beginnings. Boston: Houghton-Mifflin, 1931.

Ware, Norman. The Labor Movement in the United States, 1860-1895. New York: Appleton, 1929.

Weiss, Thomas and Lee Craig. “Agricultural Productivity Growth during the Decade of the Civil War.” Journal of Economic History 53, no. 3 (1993): 527-548.

Whaples, Robert. “The Shortening of the American Work Week: An Economic and Historical Analysis of Its Context, Causes, and Consequences.” Ph.D. dissertation, University of Pennsylvania, 1990a.

Whaples, Robert. “Winning the Eight-Hour Day, 1909-1919.” Journal of Economic History 50, no. 2 (1990b): 393-406.

Whaples, Robert. “Where Is There Consensus Among American Economic Historians? The Results of a Survey on Forty Propositions.” Journal of Economic History 55, no. 1 (1995): 139-154.

Citation: Whaples, Robert. “Hours of Work in U.S. History”. EH.Net Encyclopedia, edited by Robert Whaples. August 14, 2001. URL http://eh.net/encyclopedia/hours-of-work-in-u-s-history/

The U.S. Economy in the 1920s

Gene Smiley, Marquette University

Introduction

The interwar period in the United States, and in the rest of the world, was a most interesting era. The decade of the 1930s marked the most severe depression in our history and ushered in sweeping changes in the role of government. Economists and historians have rightly given much attention to that decade. However, with all of this concern about the growing and developing role of government in economic activity in the 1930s, the decade of the 1920s often tends to get overlooked. This is unfortunate because the 1920s were a period of vigorous, vital economic growth. They marked the first truly modern decade, and dramatic economic developments are found in those years. There was a rapid adoption of the automobile to the detriment of passenger rail travel. Though suburbs had been growing since the late nineteenth century, their growth had been tied to rail or trolley access, and this was limited to the largest cities. The flexibility of car access changed this and the growth of suburbs began to accelerate. The demands of trucks and cars led to a rapid growth in the construction of all-weather surfaced roads to facilitate their movement. The rapidly expanding electric utility networks led to new consumer appliances and new types of lighting and heating for homes and businesses. The introduction of the radio, radio stations, and commercial radio networks began to break up rural isolation, as did the expansion of local and long-distance telephone communications. Recreational activities such as traveling, going to movies, and professional sports became major businesses. The period saw major innovations in business organization and manufacturing technology. The Federal Reserve System first tested its powers and the United States moved to a dominant position in international trade and global business. These things make the 1920s a period of considerable importance independent of what happened in the 1930s.

National Product and Income and Prices

We begin the survey of the 1920s with an examination of the overall production in the economy, GNP, the most comprehensive measure of aggregate economic activity. Real GNP growth during the 1920s was relatively rapid, 4.2 percent a year from 1920 to 1929 according to the most widely used estimates. (Historical Statistics of the United States, or HSUS, 1976) Real GNP per capita grew 2.7 percent per year between 1920 and 1929. By both nineteenth and twentieth century standards these were relatively rapid rates of real economic growth and they would be considered rapid even today.
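
Taken together, the two growth rates quoted above imply an annual rate of population growth of roughly 1.5 percent, which is consistent with the population figures discussed later in this article. A quick arithmetic sketch (the GNP levels themselves are not reproduced here, so the calculation works only with the growth rates as printed):

```python
# Implications of the 1920-1929 growth rates cited from HSUS (1976).
real_gnp_growth = 0.042      # annual growth of real GNP
per_capita_growth = 0.027    # annual growth of real GNP per capita
years = 9                    # 1920 to 1929

implied_pop_growth = (1 + real_gnp_growth) / (1 + per_capita_growth) - 1
print(f"Implied population growth: {implied_pop_growth:.1%} per year")              # ~1.5%

cumulative_gnp_growth = (1 + real_gnp_growth) ** years - 1
print(f"Cumulative real GNP growth, 1920-1929: about {cumulative_gnp_growth:.0%}")  # ~45%
```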

There were several interruptions to this growth. In mid-1920 the American economy began to contract and the 1920-1921 depression lasted about a year, but a rapid recovery reestablished full employment by 1923. As will be discussed below, the Federal Reserve System’s monetary policy was a major factor in initiating the 1920-1921 depression. From 1923 through 1929 growth was much smoother. There was a very mild recession in 1924 and another mild recession in 1927, both of which may be related to oil price shocks (McMillin and Parker, 1994). The 1927 recession was also associated with Henry Ford’s shut-down of all his factories for six months in order to change over from the Model T to the new Model A automobile. Though the Model T’s market share was declining after 1924, in 1926 Ford’s Model T still made up nearly 40 percent of all the new cars produced and sold in the United States. The Great Depression began in the summer of 1929, possibly as early as June. The initial downturn was relatively mild but the contraction accelerated after the crash of the stock market at the end of October. Real GNP fell 10.2 percent from 1929 to 1930, while real GNP per capita fell 11.5 percent.

Price changes during the 1920s are shown in Figure 2. The Consumer Price Index, CPI, is a better measure of changes in the prices of commodities and services that a typical consumer would purchase, while the Wholesale Price Index, WPI, is a better measure of changes in the cost of inputs for businesses. As the figure shows, the 1920-1921 depression was marked by extraordinarily large price decreases. Consumer prices fell 11.3 percent from 1920 to 1921 and fell another 6.6 percent from 1921 to 1922. After that consumer prices were relatively constant and actually fell slightly from 1926 to 1927 and from 1927 to 1928. Wholesale prices show greater variation. The 1920-1921 depression hit farmers very hard. Prices had been bid up with the increasing foreign demand during the First World War. As European production began to recover after the war, prices began to fall. Though the prices of agricultural products fell from 1919 to 1920, the depression brought on dramatic declines in the prices of raw agricultural produce as well as many other inputs that firms employ. In the scramble to beat price increases during 1919, firms had built up large inventories of raw materials and purchased inputs, and this temporary increase in demand led to even larger price increases. With the depression, firms began to draw down those inventories. The result was that the prices of raw materials and manufactured inputs fell rapidly along with the prices of agricultural produce — the WPI dropped 45.9 percent between 1920 and 1921. The price changes probably tend to overstate the severity of the 1920-1921 depression. Romer’s work (1988) suggests that prices changed much more easily in that depression, reducing the drop in production and employment. Wholesale prices in the rest of the 1920s were relatively stable, though they were more likely to fall than to rise.

Economic Growth in the 1920s

Despite the 1920-1921 depression and the minor interruptions in 1924 and 1927, the American economy exhibited impressive economic growth during the 1920s. Though some commentators in later years thought that the existence of some slow growing or declining sectors in the twenties suggested weaknesses that might have helped bring on the Great Depression, few now argue this. Economic growth never occurs in all sectors at the same time and at the same rate. Growth reallocates resources from declining or slower growing sectors to the more rapidly expanding sectors in accordance with new technologies, new products and services, and changing consumer tastes.

Economic growth in the 1920s was impressive. Ownership of cars, new household appliances, and housing spread widely through the population. New products and processes of producing those products drove this growth. The widening use of electricity in production and the growing adoption of the moving assembly line in manufacturing combined to bring on a continuing rise in the productivity of labor and capital. Though the average workweek in most manufacturing remained essentially constant throughout the 1920s, in a few industries, such as railroads and coal production, it declined. (Whaples 2001) New products and services created new markets, such as the markets for radios, electric iceboxes, electric irons, fans, electric lighting, vacuum cleaners, and other laborsaving household appliances. This electricity was distributed by the growing electric utilities. The stocks of those companies helped create the stock market boom of the late twenties. RCA, one of the glamour stocks of the era, paid no dividends but its value appreciated because of expectations for the new company. Like the Internet boom of the late 1990s, the electricity boom of the 1920s fed a rapid expansion in the stock market.

Fed by continuing productivity advances and new products and services and facilitated by an environment of stable prices that encouraged production and risk taking, the American economy embarked on a sustained expansion in the 1920s.

Population and Labor in the 1920s

At the same time that overall production was growing, population growth was declining. As can be seen in Figure 3, from an annual rate of increase of 1.85 and 1.93 percent in 1920 and 1921, respectively, population growth rates fell to 1.23 percent in 1928 and 1.04 percent in 1929.

These changes in the overall growth rate were linked to the birth and death rates of the resident population and a decrease in foreign immigration. Though the crude death rate changed little during the period, the crude birth rate fell sharply into the early 1930s. (Figure 4) There are several explanations for the decline in the birth rate during this period. First, there was an accelerated rural-to-urban migration. Urban families have tended to have fewer children than rural families because urban children do not augment family incomes through their work as unpaid workers as rural children do. Second, the period also saw continued improvement in women’s job opportunities and a rise in their labor force participation rates.

Immigration also fell sharply. In 1917 the federal government began to limit immigration and in 1921 an immigration act limited the number of prospective citizens of any nationality entering the United States each year to no more than 3 percent of that nationality’s resident population as of the 1910 census. A new act in 1924 lowered this to 2 percent of the resident population at the 1890 census and more firmly blocked entry for people from central, southern, and eastern European nations. The limits were relaxed slightly in 1929.

The American population also continued to move during the interwar period. Two regions experienced the largest losses in population shares, New England and the Plains. For New England this was a continuation of a long-term trend. The population share for the Plains region had been rising through the nineteenth century. In the interwar period its agricultural base, combined with the continuing shift from agriculture to industry, led to a sharp decline in its share. The regions gaining population were the Southwest and, particularly, the Far West; California began its rapid growth at this time.

During the 1920s the labor force grew at a more rapid rate than population. This somewhat more rapid growth came from the declining share of the population less than 14 years old and therefore not in the labor force. In contrast, the labor force participation rate, or the fraction of the population aged 14 and over that was in the labor force, declined during the twenties from 57.7 percent to 56.3 percent. This was entirely due to a fall in the male labor force participation rate from 89.6 percent to 86.8 percent, as the female labor force participation rate rose from 24.3 percent to 25.1 percent. The primary source of the fall in male labor force participation rates was a rising retirement rate. Employment rates for males who were 65 or older fell from 60.1 percent in 1920 to 58.0 percent in 1930.

With the depression of 1920-1921 the unemployment rate rose rapidly from 5.2 to 8.7 percent. The recovery reduced unemployment to an average rate of 4.8 percent in 1923. The unemployment rate rose to 5.8 percent in the recession of 1924 and to 5.0 percent with the slowdown in 1927. Otherwise unemployment remained relatively low. The onset of the Great Depression from the summer of 1929 on brought the unemployment rate from 4.6 percent in 1929 to 8.9 percent in 1930. (Figure 5)

Earnings for laborers varied during the twenties. Table 1 presents average weekly earnings for 25 manufacturing industries. For these industries, male skilled and semi-skilled laborers generally commanded a premium of 35 percent over the earnings of unskilled male laborers in the twenties. Unskilled males received on average 35 percent more than females during the twenties. Real average weekly earnings for these 25 manufacturing industries rose somewhat during the 1920s. For skilled and semi-skilled male workers real average weekly earnings rose 5.3 percent between 1923 and 1929, while real average weekly earnings for unskilled males rose 8.7 percent between 1923 and 1929. Real average weekly earnings for females rose only 1.7 percent between 1923 and 1929. Real weekly earnings for bituminous and lignite coal miners fell as the coal industry encountered difficult times in the late twenties, and the real daily wage rate for farmworkers, reflecting the ongoing difficulties in agriculture, fell after the recovery from the 1920-1921 depression.

The 1920s were not kind to labor unions even though the First World War had solidified the dominance of the American Federation of Labor among labor unions in the United States. The rapid growth in union membership fostered by federal government policies during the war ended in 1919. A committee of AFL craft unions undertook a successful membership drive in the steel industry in that year. When U.S. Steel refused to bargain, the committee called a strike, the failure of which was a sharp blow to the unionization drive. (Brody, 1965) In the same year, the United Mine Workers undertook a large strike and also lost. These two lost strikes and the 1920-21 depression took the impetus out of the union movement and led to severe membership losses that continued through the twenties. (Figure 6)

Under Samuel Gompers’s leadership, the AFL’s “business unionism” had attempted to promote the union and collective bargaining as the primary answer to the workers’ concerns with wages, hours, and working conditions. The AFL officially opposed any government actions that would have diminished worker attachment to unions by providing competing benefits, such as government sponsored unemployment insurance, minimum wage proposals, maximum hours proposals and social security programs. As Lloyd Ulman (1961) points out, the AFL, under Gompers’ direction, differentiated on the basis of whether the statute would or would not aid collective bargaining. After Gompers’ death, William Green led the AFL in a policy change as the AFL promoted the idea of union-management cooperation to improve output and promote greater employer acceptance of unions. But Irving Bernstein (1965) concludes that, on the whole, union-management cooperation in the twenties was a failure.

To combat the appeal of unions in the twenties, firms used the “yellow-dog” contract requiring employees to swear they were not union members and would not join one; the “American Plan” promoting the open shop and contending that the closed shop was un-American; and welfare capitalism. The most common aspects of welfare capitalism included personnel management to handle employment issues and problems, the doctrine of “high wages,” company group life insurance, old-age pension plans, stock-purchase plans, and more. Some firms formed company unions to thwart independent unionization and the number of company-controlled unions grew from 145 to 432 between 1919 and 1926.

Until the late thirties the AFL was a voluntary association of independent national craft unions. Craft unions relied upon the particular skills the workers had acquired (their craft) to distinguish the workers and provide barriers to the entry of other workers. Most craft unions required a period of apprenticeship before a worker was fully accepted as a journeyman worker. The skills, and often lengthy apprenticeship, constituted the entry barrier that gave the union its bargaining power. There were only a few unions that were closer to today’s industrial unions where the required skills were much less (or nonexistent) making the entry of new workers much easier. The most important of these industrial unions was the United Mine Workers, UMW.

The AFL had been created on two principles: the autonomy of the national unions and the exclusive jurisdiction of the national union. Individual union members were not, in fact, members of the AFL; rather, they were members of the local and national union, and the national was a member of the AFL. Representation in the AFL gave dominance to the national unions, and, as a result, the AFL had little effective power over them. The craft lines, however, had never been distinct and increasingly became blurred. The AFL was constantly mediating jurisdictional disputes between member national unions. Because the AFL and its individual unions were not set up to appeal to and work for the relatively less skilled industrial workers, union organizing and growth lagged in the twenties.

Agriculture

The onset of the First World War in Europe brought unprecedented prosperity to American farmers. As agricultural production in Europe declined, the demand for American agricultural exports rose, leading to rising farm product prices and incomes. In response to this, American farmers expanded production by moving onto marginal farmland, such as Wisconsin cutover property on the edge of the woods and hilly terrain in the Ozark and Appalachian regions. They also increased output by purchasing more machinery, such as tractors, plows, mowers, and threshers. The price of farmland, particularly marginal farmland, rose in response to the increased demand, and the debt of American farmers increased substantially.

This expansion of American agriculture continued past the end of the First World War as farm exports to Europe and farm prices initially remained high. However, agricultural production in Europe recovered much faster than most observers had anticipated. Even before the onset of the short depression in 1920, farm exports and farm product prices had begun to fall. During the depression, farm prices virtually collapsed. From 1920 to 1921, the consumer price index fell 11.3 percent, the wholesale price index fell 45.9 percent, and the farm products price index fell 53.3 percent. (HSUS, Series E40, E42, and E135)

Real average net income per farm fell by more than 72.6 percent between 1920 and 1921 and, though it rose over the rest of the twenties, never recovered the relative levels of 1918 and 1919. (Figure 7) Farm mortgage foreclosures rose and stayed at historically high levels for the entire decade of the 1920s. (Figure 8) The value of farmland and buildings fell throughout the twenties and, for the first time in American history, the number of cultivated acres actually declined as farmers pulled back from the marginal farmland brought into production during the war. Rather than indicators of a general depression in agriculture in the twenties, these were the results of the financial commitments made by overoptimistic American farmers during and directly after the war. The foreclosures were generally on second mortgages, rather than on first mortgages as they were in the early 1930s. (Johnson, 1973; Alston, 1983)

A Declining Sector

A major difficulty in analyzing the interwar agricultural sector lies in separating the effects of the 1920-21 and 1929-33 depressions from those that arose because agriculture was declining relative to the other sectors. Slowly growing demand for basic agricultural products and significant increases in the productivity of labor, land, and machinery in agricultural production, combined with much more rapid growth in the nonagricultural sectors of the economy, required a shift of resources, particularly labor, out of agriculture. (Figure 9) The market induces labor to move voluntarily from one sector to another through income differentials, which suggests that, even in the absence of the depressions, farm incomes would have been lower than nonfarm incomes so as to bring about this migration.

The continuous substitution of tractor power for horse and mule power released hay and oats acreage to grow crops for human consumption. Though cotton and tobacco continued as the primary crops in the south, the relative production of cotton continued to shift to the west as production in Arkansas, Missouri, Oklahoma, Texas, New Mexico, Arizona, and California increased. As quotas reduced immigration and incomes rose, the demand for cereal grains grew slowly—more slowly than the supply—and the demand for fruits, vegetables, and dairy products grew. Refrigeration and faster freight shipments expanded the milk sheds further from metropolitan areas. Wisconsin and other North Central states began to ship cream and cheeses to the Atlantic Coast. Due to transportation improvements, specialized truck farms and the citrus industry became more important in California and Florida. (Parker, 1972; Soule, 1947)

The relative decline of the agricultural sector in this period was closely related to the low income elasticity of demand for many farm products, particularly cereal grains, pork, and cotton. As incomes grew, the demand for these staples grew much more slowly. At the same time, rising land and labor productivity were increasing the supplies of staples, causing real prices to fall.
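This mechanism can be stated compactly. The following is a stylized sketch with invented numbers, not estimates drawn from the sources cited in this section. Income elasticity of demand is

$$\eta_I \;=\; \frac{\%\,\Delta Q_d}{\%\,\Delta I},$$

the percentage change in quantity demanded per one percent change in income. If, say, $\eta_I = 0.2$ for a staple, a 10 percent rise in consumer incomes raises the quantity demanded by only about 2 percent; when productivity gains push supply up faster than that, the real price of the staple must fall for the market to clear, which is the pattern of falling real farm prices described above.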

Table 3 presents selected agricultural productivity statistics for these years. Those data indicate that there were greater gains in labor productivity than in land productivity (or per acre yields). Per acre yields in wheat and hay actually decreased between 1915-19 and 1935-39. These productivity increases, which released resources from the agricultural sector, were the result of technological improvements in agriculture.

Technological Improvements In Agricultural Production

In many ways the adoption of the tractor in the interwar period symbolizes the technological changes that occurred in the agricultural sector. This changeover in the power source that farmers used had far-reaching consequences and altered the organization of the farm and the farmers’ lifestyle. The adoption of the tractor was land saving (by releasing acreage previously used to produce crops for workstock) and labor saving. At the same time it increased the risks of farming because farmers were now much more exposed to the marketplace. They could not produce their own fuel for tractors as they had for the workstock. Rather, this had to be purchased from other suppliers. Repair and replacement parts also had to be purchased, and sometimes the repairs had to be undertaken by specialized mechanics. The purchase of a tractor also commonly required the purchase of new complementary machines; therefore, the decision to purchase a tractor was not an isolated one. (White, 2001; Ankli, 1980; Ankli and Olmstead, 1981; Musoke, 1981; Whatley, 1987). These changes resulted in more and more farmers purchasing and using tractors, but the rate of adoption varied sharply across the United States.

Technological innovations in plants and animals also raised productivity. Hybrid seed corn increased yields from an average of 40 bushels per acre to 100 to 120 bushels per acre. New varieties of wheat were developed from the hardy Russian and Turkish wheat varieties which had been imported. The U.S. Department of Agriculture’s Experiment Stations took the lead in developing wheat varieties for different regions. For example, in the Columbia River Basin new varieties raised yields from an average of 19.1 bushels per acre in 1913-22 to 23.1 bushels per acre in 1933-42. (Shepherd, 1980) New hog breeds produced more meat and new methods of swine sanitation sharply increased the survival rate of piglets. An effective serum for hog cholera was developed, and the federal government led the way in the testing and eradication of bovine tuberculosis and brucellosis. Prior to the Second World War, a number of pesticides to control animal disease were developed, including cattle dips and disinfectants. By the mid-1920s a vaccine for “blackleg,” an infectious, usually fatal disease that particularly struck young cattle, was completed. The cattle tick, which carried Texas Fever, was largely controlled through inspections. (Schlebecker, 1975; Bogue, 1983; Wood, 1980)

Federal Agricultural Programs in the 1920s

Though there was substantial agricultural discontent in the period from the Civil War to the late 1890s, the period from then to the onset of the First World War was relatively free from overt farmers’ complaints. In later years farmers dubbed the 1910-14 period agriculture’s “golden years” and used the prices of farm crops and farm inputs in that period as a standard by which to judge crop and input prices in later years. The problems that arose in the agricultural sector during the twenties once again led to insistent demands by farmers for government to alleviate their distress.

Though there were increasing calls for direct federal intervention to limit production and raise farm prices, this was not done until Roosevelt took office. Instead, the government relied upon tariffs, the traditional method of aiding injured groups, and upon the “sanctioning and promotion of cooperative marketing associations.” In 1921 Congress attempted to control the grain exchanges and compel merchants and stockyards to charge “reasonable rates” with the Packers and Stockyards Act and the Grain Futures Act. In 1922 Congress passed the Capper-Volstead Act to promote agricultural cooperatives and the Fordney-McCumber Tariff to impose high duties on most agricultural imports. The Cooperative Marketing Act of 1924 did not bolster failing cooperatives as it was supposed to do. (Hoffman and Libecap, 1991)

Twice between 1924 and 1928 Congress passed “McNary-Haugen” bills, but President Calvin Coolidge vetoed both. The McNary-Haugen bills proposed to establish “fair” exchange values (based on the 1910-14 period) for each product and to maintain them through tariffs and a government-chartered private corporation that could buy enough of each commodity to keep its price up to the computed fair level. The revenues were to come from taxes imposed on farmers. The Hoover administration secured passage of an Agricultural Marketing Act in 1929 and the Hawley-Smoot Tariff in 1930. The 1929 act committed the federal government to a policy of stabilizing farm prices through several nongovernment institutions, but these failed during the depression. Federal intervention in the agricultural sector really came of age during the New Deal era of the 1930s.

Manufacturing

Agriculture was not the only sector experiencing difficulties in the twenties. Other industries, such as textiles, boots and shoes, and coal mining, also experienced trying times. However, at the same time that these industries were declining, other industries, such as electrical appliances, automobiles, and construction, were growing rapidly. The simultaneous existence of growing and declining industries has been common to all eras because economic growth and technological progress never affect all sectors in the same way. In general, in manufacturing there was a rapid rate of growth of productivity during the twenties. The rise of real wages due to immigration restrictions and the slower growth of the resident population spurred this. Transportation improvements and communications advances were also responsible. These developments brought about differential growth in the various manufacturing sectors in the United States in the 1920s.

Because of the historic pattern of economic development in the United States, the northeast was the first area to develop a manufacturing base. By the mid-nineteenth century the East North Central region was creating a manufacturing base, and the other regions began to do so in the last half of the nineteenth century, resulting in a relative westward and southward shift of manufacturing activity. This trend continued in the 1920s as the New England and Middle Atlantic regions’ shares of manufacturing employment fell while all of the other regions—excluding the West North Central region—gained. There was considerable variation in the growth of the industries and shifts in their ranking during the decade. The largest broadly defined industries were, not surprisingly, food and kindred products; textile mill products; those producing and fabricating primary metals; machinery production; and chemicals. When industries are more narrowly defined, the automobile industry, which ranked third in manufacturing value added in 1919, ranked first by the mid-1920s.

Productivity Developments

Gavin Wright (1990) has argued that one of the underappreciated characteristics of American industrial history has been its reliance on mineral resources. Wright argues that the growing American strength in industrial exports and industrialization in general relied on an increasing intensity in the use of nonreproducible natural resources. The large American market was knit together as one large market without internal barriers through the development of widespread low-cost transportation. Many distinctively American developments, such as continuous-process, mass-production methods, were associated with the “high throughput” of fuel and raw materials relative to labor and capital inputs. As a result the United States became the dominant industrial force in the world in the 1920s and 1930s. According to Wright, after World War II “the process by which the United States became a unified ‘economy’ in the nineteenth century has been extended to the world as a whole. To a degree, natural resources have become commodities rather than part of the ‘factor endowment’ of individual countries.” (Wright, 1990)

In addition to this growing intensity in the use of nonreproducible natural resources as a source of productivity gains in American manufacturing, other technological changes during the twenties and thirties tended to raise the productivity of the existing capital through the replacement of critical types of capital equipment with superior equipment and through changes in management methods. (Soule, 1947; Lorant, 1967; Devine, 1983; Oshima, 1984) Some changes, such as the standardization of parts and processes and the reduction of the number of styles and designs, raised the productivity of both capital and labor. Modern management techniques, first developed by Frederick W. Taylor, were adopted on a wider scale.

One of the important forces contributing to mass production and increased productivity was the transition to electric power. (Devine, 1983) By 1929 about 70 percent of manufacturing activity relied on electricity, compared to roughly 30 percent in 1914. Steam provided 80 percent of the mechanical drive capacity in manufacturing in 1900, but electricity provided over 50 percent by 1920 and 78 percent by 1929. An increasing number of factories were buying their power from electric utilities. In 1909, 64 percent of the electric motor capacity in manufacturing establishments used electricity generated on the factory site; by 1919, 57 percent of the electricity used in manufacturing was purchased from independent electric utilities.

The shift from coal to oil and natural gas and from raw, unprocessed energy in the forms of coal and waterpower to processed energy in the forms of internal combustion fuel and electricity increased thermal efficiency. After the First World War energy consumption relative to GNP fell, there was a sharp increase in the growth rate of output per labor-hour, and output per unit of capital input once again began rising. These trends can be seen in the data in Table 3. Labor productivity grew much more rapidly during the 1920s than in the previous or following decade. Capital productivity had declined in the decade preceding the 1920s, but it increased sharply during the twenties and continued to rise in the following decade. Alexander Field (2003) has argued that the 1930s were the most technologically progressive decade of the twentieth century, basing his argument on the growth of multi-factor productivity as well as the impressive array of technological developments during the thirties. However, the twenties also saw impressive increases in labor and capital productivity as developments in energy and transportation, in particular, accelerated.

Table: Average Annual Rates of Labor Productivity and Capital Productivity Growth.

Warren Devine, Jr. (1983) reports that in the twenties the most important result of the adoption of electricity was its role as an indirect “lever to increase production.” There were a number of ways in which this occurred. Electricity brought about an increased flow of production by allowing new flexibility in the design of buildings and the arrangement of machines. In this way it maximized throughput. Electric cranes were an “inestimable boon” to production because with adequate headroom they could operate anywhere in a plant, something that mechanical power transmission to overhead cranes did not allow. Electricity made possible the use of portable power tools that could be taken anywhere in the factory. Electricity brought about improved illumination, ventilation, and cleanliness in the plants, dramatically improving working conditions. It improved the control of machines since there was no longer belt slippage with overhead line shafts and belt transmission, and there were fewer limitations on the operating speeds of machines. Finally, it made plant expansion much easier than when overhead shafts and belts had been relied upon for operating power.

The mechanization of American manufacturing accelerated in the 1920s, and this led to a much more rapid growth of productivity in manufacturing compared to earlier decades and to other sectors at that time. There were several forces that promoted mechanization. One was the rapidly expanding aggregate demand during the prosperous twenties. Another was the technological developments in new machines and processes, of which electrification played an important part. Finally, Harry Jerome (1934) and, later, Harry Oshima (1984) both suggest that the price of unskilled labor began to rise as immigration sharply declined with new immigration laws and falling population growth. This accelerated the mechanization of the nation’s factories.

Technological changes during this period can be documented for a number of individual industries. In bituminous coal mining, labor productivity rose when mechanical loading devices reduced the labor required by 24 to 50 percent. The burst of paved road construction in the twenties led to the development of a finishing machine to smooth the surface of cement highways, and this reduced the labor requirement by 40 to 60 percent. Mechanical pavers that spread centrally mixed materials further increased productivity in road construction. These replaced the roadside dump and wheelbarrow methods of spreading the cement. Jerome (1934) reports that the glass in electric light bulbs was made by new machines that cut the number of labor-hours required for their manufacture by nearly half. New machines to produce cigarettes and cigars, for warp-tying in textile production, and for pressing clothes in clothing shops also cut labor-hours. The Banbury mixer reduced the labor input in the production of automobile tires by half, and output per worker of inner tubes increased about four times with a new production method. However, as Daniel Nelson (1987) points out, the continuing advances were a “cumulative process resulting from a vast number of successive small changes.” Because of these continuing advances in the quality of tires and in their manufacture, between 1910 and 1930 “tire costs per thousand miles of driving fell from $9.39 to $0.65.”

John Lorant (1967) has documented other technological advances that occurred in American manufacturing during the twenties. For example, the organic chemical industry developed rapidly due to the introduction of the Weizman fermentation process. In a similar fashion, nearly half of the productivity advances in the paper industry were due to the “increasingly sophisticated applications of electric power and paper manufacturing processes,” especially the fourdrinier paper-making machines. As Avi Cohen (1984) has shown, the continuing advances in these machines were the result of evolutionary changes to the basic machine. Mechanization in many types of mass-production industries raised the productivity of labor and capital. In the glass industry, automatic feeding and other types of fully automatic production raised the efficiency of the production of glass containers, window glass, and pressed glass. Giedion (1948) reported that the production of bread was “automatized” in all stages during the 1920s.

Though not directly bringing about productivity increases in manufacturing processes, developments in the management of manufacturing firms, particularly the largest ones, also significantly affected their structure and operation. Alfred D. Chandler, Jr. (1962) has argued that the structure of a firm must follow its strategy. Until the First World War most industrial firms were centralized, single-division firms, even when they became vertically integrated. When this began to change, the management of the large industrial firms had to change accordingly.

Because of the changes in its size and structure during the First World War, E. I. du Pont de Nemours and Company adopted a strategy of diversifying into the production of largely unrelated product lines. The firm found that the centralized structure that had served it so well was not suited to this strategy, and its poor business performance led its executives to develop, between 1919 and 1921, a decentralized, multidivisional structure that boosted it to the first rank among American industrial firms.

General Motors had a somewhat different problem. By 1920 it was already decentralized into separate divisions. In fact, there was so much decentralization that those divisions essentially remained separate companies and there was little coordination between the operating divisions. A financial crisis at the end of 1920 ousted W. C. Durant and brought in the du Ponts and Alfred Sloan. Sloan, who had seen the problems at GM but had been unable to convince Durant to make changes, began reorganizing the management of the company. Over the next several years Sloan and other GM executives developed the general office for a decentralized, multidivisional firm.

Though facing related problems at nearly the same time, GM and du Pont developed their decentralized, multidivisional organizations separately. As other manufacturing firms began to diversify, GM and du Pont became the models for reorganizing the management of the firms. In many industrial firms these reorganizations were not completed until well after the Second World War.

Competition, Monopoly, and the Government

The rise of big businesses, which accelerated in the postbellum period and particularly during the first great turn-of-the-century merger wave, continued in the interwar period. Between 1925 and 1939 the share of manufacturing assets held by the 100 largest corporations rose from 34.5 to 41.9 percent. (Niemi, 1980) As a matter of public policy, concern with monopolies diminished in the 1920s even though firms were growing larger. But the growing size of businesses became one of the convenient scapegoats for the Great Depression.

However, the rise of large manufacturing firms in the interwar period is not so easily interpreted as an attempt to monopolize their industries. Some of the growth came about through vertical integration by the more successful manufacturing firms. Backward integration was generally an attempt to ensure a smooth supply of raw materials where that supply was not plentiful and was dispersed, and where firms “feared that raw materials might become controlled by competitors or independent suppliers.” (Livesay and Porter, 1969) Forward integration was an offensive tactic employed when manufacturers found that the existing distribution network proved inadequate. Livesay and Porter suggested a number of reasons why firms chose to integrate forward. In some cases they had to provide the mass distribution facilities to handle their much larger outputs, especially when the product was a new one. The complexity of some new products required technical expertise that the existing distribution system could not provide. In other cases “the high unit costs of products required consumer credit which exceeded financial capabilities of independent distributors.” Forward integration into wholesaling was more common than forward integration into retailing. The producers of automobiles, petroleum, typewriters, sewing machines, and harvesters were typical of those manufacturers that integrated all the way into retailing.

In some cases, increases in industry concentration arose as a natural process of industrial maturation. In the automobile industry, Henry Ford’s invention in 1913 of the moving assembly line—a technological innovation that changed most manufacturing—lent itself to larger factories and firms. Of the several thousand companies that had produced cars prior to 1920, 120 were still doing so then, but Ford and General Motors were the clear leaders, together producing nearly 70 percent of the cars. During the twenties, several other companies, such as Durant, Willys, and Studebaker, missed their opportunity to become more important producers, and Chrysler, formed in early 1925, became the third most important producer by 1930. Many went out of business and by 1929 only 44 companies were still producing cars. The Great Depression decimated the industry. Dozens of minor firms went out of business. Ford struggled through by relying on its huge stockpile of cash accumulated prior to the mid-1920s, while Chrysler actually grew. By 1940, only eight companies still produced cars—GM, Ford, and Chrysler had about 85 percent of the market, while Willys, Studebaker, Nash, Hudson, and Packard shared the remainder. The rising concentration in this industry was not due to attempts to monopolize. As the industry matured, growing economies of scale in factory production and vertical integration, as well as the advantages of a widespread dealer network, led to a dramatic decrease in the number of viable firms. (Chandler, 1962 and 1964; Rae, 1984; Bernstein, 1987)

It was a similar story in the tire industry. The increasing concentration and growth of firms was driven by scale economies in production and retailing and by the devastating effects of the depression in the thirties. Although there were 190 firms in 1919, 5 firms dominated the industry—Goodyear, B. F. Goodrich, Firestone, U.S. Rubber, and Fisk, followed by Miller Rubber, General Tire and Rubber, and Kelly-Springfield. During the twenties, 166 firms left the industry while 66 entered. The share of the 5 largest firms rose from 50 percent in 1921 to 75 percent in 1937. During the depressed thirties, there was fierce price competition, and many firms exited the industry. By 1937 there were 30 firms, but the average employment per factory was 4.41 times as large as in 1921, and the average factory produced 6.87 times as many tires as in 1921. (French, 1986 and 1991; Nelson, 1987; Fricke, 1982)

The steel industry was already highly concentrated by 1920, as U.S. Steel had around 50 percent of the market. But U.S. Steel’s market share declined through the twenties and thirties as several smaller firms competed and grew to become known as Little Steel, the next six largest integrated producers after U.S. Steel. Jonathan Baker (1989) has argued that the evidence is consistent with “the assumption that competition was a dominant strategy for steel manufacturers” until the depression. However, the initiation of the National Recovery Administration (NRA) codes in 1933 required the firms to cooperate rather than compete, and Baker argues that this constituted a training period leading firms to cooperate in price and output policies after 1935. (McCraw and Reinhardt, 1989; Weiss, 1980; Adams, 1977)

Mergers

A number of the larger firms grew by merger during this period, and the second great merger wave in American industry occurred during the last half of the 1920s. Figure 10 shows two series on mergers during the interwar period. The FTC series included many of the smaller mergers. The series constructed by Carl Eis (1969) only includes the larger mergers and ends in 1930.

This second great merger wave coincided with the stock market boom of the twenties and has been called “merger for oligopoly” rather than merger for monopoly. (Stigler, 1950) This merger wave created many larger firms that ranked below the industry leaders. Much of the activity occurred in the banking and public utilities industries. (Markham, 1955) In manufacturing and mining, the effects on industrial structure were less striking. Eis (1969) found that while mergers took place in almost all industries, they were concentrated in a smaller number of them, particularly petroleum, primary metals, and food products.

The federal government’s antitrust policies toward business varied sharply during the interwar period. In the 1920s there was relatively little activity by the Justice Department, but with the onset of the Great Depression the New Dealers largely exempted business from the antitrust laws and sought to cartelize industries under government supervision.

With the passage of the FTC and Clayton Acts in 1914 to supplement the 1890 Sherman Act, the cornerstones of American antitrust law were complete. Though minor amendments were later enacted, the primary changes after that came in the enforcement of the laws and in swings in judicial decisions. The laws’ two primary areas of application were overt behavior, such as horizontal and vertical price-fixing, and market structure, such as mergers and dominant firms. Horizontal price-fixing involves firms that would normally be competitors getting together to agree on stable and higher prices for their products. As long as most of the important competitors agree on the new, higher prices, substitution between products is eliminated and the demand becomes much less elastic. Thus, increasing the price increases the revenues and the profits of the firms who are fixing prices. Vertical price-fixing involves firms at different stages of production agreeing on the prices of the intermediate products passed between them. It also tends to eliminate substitutes and makes the demand less elastic.
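The underlying logic can be made explicit with a standard textbook sketch; the algebra below is illustrative and is not drawn from the antitrust sources cited in this section. With total revenue $TR = P \cdot Q(P)$ and price elasticity of demand $\varepsilon = \frac{dQ}{dP}\cdot\frac{P}{Q}$ (a negative number),

$$\frac{dTR}{dP} \;=\; Q + P\,\frac{dQ}{dP} \;=\; Q\,(1 + \varepsilon).$$

When a price-fixing agreement removes the closest substitutes, $|\varepsilon|$ falls; once $|\varepsilon| < 1$, $dTR/dP > 0$, so the agreed-upon higher price raises the conspirators’ revenue while the lower output reduces their costs, and profits rise.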

Price-fixing continued to be considered illegal throughout the period, but there was no major judicial activity regarding it in the 1920s other than the Trenton Potteries decision in 1927. In that decision 20 individuals and 23 corporations were found guilty of conspiring to fix the prices of bathroom bowls. The evidence in the case suggested that the firms were not very successful at doing so, but the court found them guilty nevertheless; their success, or lack thereof, was not held to be a factor in the decision. (Scherer and Ross, 1990) Though criticized by some, the decision was precedent-setting in that it prohibited explicit pricing conspiracies per se.

The Justice Department had achieved success in dismantling Standard Oil and American Tobacco in 1911 through decisions that the firms had unreasonably restrained trade. These were essentially the same points used in court decisions against the Powder Trust in 1911, the thread trust in 1913, Eastman Kodak in 1915, the glucose and cornstarch trust in 1916, and the anthracite railroads in 1920. The criterion of an unreasonable restraint of trade was used in the 1916 and 1918 decisions that found the American Can Company and the United Shoe Machinery Company innocent of violating the Sherman Act; it was also clearly enunciated in the 1920 U.S. Steel decision. This became known as the rule of reason standard in antitrust policy.

Merger policy had been defined in the 1914 Clayton Act to prohibit only the acquisition of one corporation’s stock by another corporation. Firms then shifted to the outright purchase of a competitor’s assets. A series of court decisions in the twenties and thirties further reduced the possibilities of Justice Department actions against mergers. “Only fifteen mergers were ordered dissolved through antitrust actions between 1914 and 1950, and ten of the orders were accomplished under the Sherman Act rather than Clayton Act proceedings.”

Energy

The search for energy and new ways to translate it into heat, light, and motion has been one of the unending themes in history. From whale oil to coal oil to kerosene to electricity, the search for better and less costly ways to light our lives, heat our homes, and move our machines has consumed much time and effort. The energy industries responded to those demands and the consumption of energy materials (coal, oil, gas, and fuel wood) as a percent of GNP rose from about 2 percent in the latter part of the nineteenth century to about 3 percent in the twentieth.

Changes in the energy markets that had begun in the nineteenth century continued. Processed energy in the forms of petroleum derivatives and electricity continued to become more important than “raw” energy, such as that available from coal and water. The evolution of energy sources for lighting continued; at the end of the nineteenth century, natural gas and electricity, rather than liquid fuels, began to provide more lighting for streets, businesses, and homes.

In the twentieth century the continuing shift to electricity and internal combustion fuels increased the efficiency with which the American economy used energy. These processed forms of energy resulted in a more rapid increase in the productivity of labor and capital in American manufacturing. From 1899 to 1919, output per labor-hour increased at an average annual rate of 1.2 percent, whereas from 1919 to 1937 the increase was 3.5 percent per year. The productivity of capital had fallen at an average annual rate of 1.8 percent per year in the 20 years prior to 1919, but it rose 3.1 percent a year in the 18 years after 1919. As discussed above, the adoption of electricity in American manufacturing initiated a rapid evolution in the organization of plants and rapid increases in productivity in all types of manufacturing.

The change in transportation was even more remarkable. Internal combustion engines running on gasoline or diesel fuel revolutionized transportation. Cars quickly grabbed the lion’s share of local and regional travel and began to eat into long distance passenger travel, just as the railroads had done to passenger traffic by water in the 1830s. Even before the First World War cities had begun passing laws to regulate and limit “jitney” services and to protect the investments in urban rail mass transit. Trucking began eating into the freight carried by the railroads.

These developments brought about changes in the energy industries. Coal mining became a declining industry. As Figure 11 shows, in 1925 the share of petroleum in the value of coal, gas, and petroleum output exceeded that of bituminous coal, and it continued to rise. Anthracite coal’s share was much smaller and declined, while natural gas and LP (liquefied petroleum) gas were relatively unimportant. These changes, especially the decline of the coal industry, were the source of considerable worry in the twenties.

Coal

One of the industries considered to be “sick” in the twenties was coal, particularly bituminous, or soft, coal. Income in the industry declined, and bankruptcies were frequent. Strikes frequently interrupted production. The majority of the miners “lived in squalid and unsanitary houses, and the incidence of accidents and diseases was high.” (Soule, 1947) The number of operating bituminous coal mines declined sharply from 1923 through 1932. Anthracite (or hard) coal output was much smaller during the twenties. Real coal prices rose from 1919 to 1922, and bituminous coal prices fell sharply from then to 1925. (Figure 12) Coal mining employment plummeted during the twenties. Annual earnings, especially in bituminous coal mining, also fell because of dwindling hourly earnings and, from 1929 on, a shrinking workweek. (Figure 13)

The sources of these changes are to be found in the increasing supply due to productivity advances in coal production and in the decreasing demand for coal. The demand fell as industries began turning from coal to electricity and because of productivity advances in the use of coal to create energy in steel, railroads, and electric utilities. (Keller, 1973) In the generation of electricity, larger steam plants employing higher temperatures and steam pressures continued to reduce coal consumption per kilowatt hour. Similar reductions were found in the production of coke from coal for iron and steel production and in the use of coal by the steam railroad engines. (Rezneck, 1951) All of these factors reduced the demand for coal.

Productivity advances in coal mining tended to be labor saving. Mechanical cutting accounted for 60.7 percent of the coal mined in 1920 and 78.4 percent in 1929. By the middle of the twenties, the mechanical loading of coal began to be introduced. Between 1929 and 1939, output per labor-hour rose nearly one third in bituminous coal mining and nearly four fifths in anthracite as more mines adopted machine mining and mechanical loading and strip mining expanded.

The increasing supply and falling demand for coal led to the closure of mines that were too costly to operate. A mine could simply cease operations, let the equipment stand idle, and lay off employees. When bankruptcies occurred, the mines generally reopened under new ownership with lower capital charges. When demand increased or strikes reduced the supply of coal, idle mines simply resumed production. As a result, the easily expanded supply largely eliminated economic profits.

The average daily employment in coal mining dropped by over 208,000 from its peak in 1923, but the sharply falling real wages suggest that the supply of labor did not fall as rapidly as the demand for labor. Soule (1947) notes that when employment fell in coal mining, it meant fewer days of work for the same number of men. Social and cultural characteristics tended to tie many miners to their home region. The local alternatives were few, and ignorance of alternatives outside the rural Appalachian areas, where most bituminous coal was mined, made it very costly to transfer out.

Petroleum

In contrast to the coal industry, the petroleum industry was growing throughout the interwar period. By the thirties, crude petroleum dominated the real value of the production of energy materials. As Figure 14 shows, the production of crude petroleum increased sharply between 1920 and 1930, while real petroleum prices, though highly variable, tended to decline.

The growing demand for petroleum was driven by the growth in demand for gasoline as America became a motorized society. The production of gasoline surpassed kerosene production in 1915. Kerosene’s market continued to contract as electric lighting replaced kerosene lighting. The development of oil burners in the twenties began a switch from coal toward fuel oil for home heating, and this further increased the growing demand for petroleum. The growth in the demand for fuel oil and diesel fuel for ship engines also increased petroleum demand. But it was the growth in the demand for gasoline that drove the petroleum market.

The decline in real prices in the latter part of the twenties shows that supply was growing even faster than demand. The discovery of new fields in the early twenties increased the supply of petroleum and led to falling prices as production capacity grew. The Santa Fe Springs, California, strike in 1919 initiated a supply shock, as did the discovery of the Long Beach, California, field in 1921. New discoveries at Powell, Texas, and Smackover, Arkansas, further increased the supply of petroleum in 1921. New supply increases occurred in 1926 to 1928 with petroleum strikes in Seminole, Oklahoma, and Hendricks, Texas. The supply of oil increased sharply in 1930 to 1931 with new discoveries in Oklahoma City and East Texas. Each new discovery pushed down real oil prices and the prices of petroleum derivatives, and the growing production capacity led to a general declining trend in petroleum prices. McMillin and Parker (1994) argue that the supply shocks generated by these new discoveries were a factor in the business cycles during the 1920s.

The supply of gasoline increased more than the supply of crude petroleum. In 1913 a chemist at Standard Oil of Indiana introduced the cracking process to refine crude petroleum; until that time it had been refined by distillation or unpressurized heating. In the heating process, various refined products such as kerosene, gasoline, naphtha, and lubricating oils were produced at different temperatures. It was difficult to vary the amount of the different refined products produced from a barrel of crude. The cracking process used pressurized heating to break heavier components down into lighter crude derivatives; with cracking, it was possible to increase the amount of gasoline obtained from a barrel of crude from 15 to 45 percent. In the early twenties, chemists at Standard Oil of New Jersey improved the cracking process, and by 1927 it was possible to obtain twice as much gasoline from a barrel of crude petroleum as in 1917.

The petroleum companies also developed new ways to distribute gasoline that made it more convenient for motorists to purchase. Prior to the First World War, gasoline was commonly purchased in one- or five-gallon cans, and the purchaser used a funnel to pour the gasoline from the can into the car. Then “filling stations” appeared, which specialized in filling cars’ tanks with gasoline. These spread rapidly, and by 1919 gasoline companies were beginning to introduce their own filling stations or to contract with independent stations to distribute their gasoline exclusively. Increasing competition and falling profits led filling station operators to expand into other activities such as oil changes and other mechanical repairs. The general name attached to such stations gradually changed to “service stations” to reflect these new functions.

Though the petroleum firms tended to be large, they were highly competitive, trying to pump as much petroleum as possible to increase their share of the fields. This, combined with the development of new fields, led to an industry with highly volatile prices and output. Firms desperately wanted to stabilize and reduce the production of crude petroleum so as to stabilize and raise the prices of crude petroleum and refined products. Unable to obtain voluntary agreement on output limitations by the firms and producers, governments began stepping in. Led by Texas, which created the Texas Railroad Commission in 1891, oil-producing states began to intervene to regulate production. Such laws were usually termed prorationing laws and were quotas designed to limit each well’s output to some fraction of its potential. The purpose was as much to stabilize and reduce production and raise prices as anything else, although generally such laws were passed under the guise of conservation. Although the federal government supported such attempts, not until the New Deal were federal laws passed to assist this.

Electricity

By the mid-1890s the debate over the method by which electricity was to be transmitted had been won by those who advocated alternating current. The reduced power losses and the greater distance over which electricity could be transmitted more than offset the necessity of transforming the current back to direct current for general use. Widespread adoption of machines and appliances by industry and consumers then rested on an increase in the array of products using electricity as the source of power, heat, or light and on the development of an efficient, lower-cost method of generating electricity.

General Electric, Westinghouse, and other firms began producing the electrical appliances for homes and an increasing number of machines based on electricity began to appear in industry. The problem of lower cost production was solved by the introduction of centralized generating facilities that distributed the electric power through lines to many consumers and business firms.

Though initially several firms competed in generating and selling electricity to consumers and firms in a city or area, by the First World War many states and communities were awarding exclusive franchises to one firm to generate and distribute electricity to the customers in the franchise area. (Bright, 1947; Passer, 1953) The electric utility industry became an important growth industry and, as Figure 15 shows, electricity production and use grew rapidly.

The electric utilities increasingly were regulated by state commissions that were charged with setting rates so that the utilities could receive a “fair return” on their investments. Disagreements over what constituted a “fair return” and over the calculation of the rate base led to a steady stream of cases before the commissions and a continuing series of court appeals. Generally these court decisions favored the reproduction cost basis. Because of the difficulty and cost of making these calculations, rate setting tended to remain in the hands of the electric utilities, which, it has been suggested, did not lower rates adequately to reflect rising productivity and falling costs of production. The utilities argued that a more rapid lowering of rates would have jeopardized their profits. Whether or not this increased their monopoly power is still an open question, but it should be noted that electric utilities were hardly price-taking industries prior to regulation. (Mercer, 1973) In fact, as Figure 16 shows, the electric utilities began to systematically practice market segmentation, charging users with less elastic demands higher prices per kilowatt-hour.
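This pricing pattern matches the standard markup rule for a price-setting firm that can separate its customer classes; the figures below are purely illustrative numbers, not actual rates from the period. For each customer class $i$, profit maximization implies

$$P_i\left(1 - \frac{1}{|\varepsilon_i|}\right) = MC, \qquad\text{so}\qquad P_i = \frac{MC}{1 - 1/|\varepsilon_i|}.$$

With a marginal cost of 2 cents per kilowatt-hour, a class of users with elasticity $|\varepsilon| = 1.5$ would be charged 6 cents, while a class with $|\varepsilon| = 3$ would be charged 3 cents: the users with less elastic demands pay the higher price per kilowatt-hour.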

Energy in the American Economy of the 1920s

The changes in the energy industries had far-reaching consequences. The coal industry faced a continuing decline in demand. Even in the growing petroleum industry, the periodic surges in the supply of petroleum caused great instability. In manufacturing, as described above, electrification contributed to a remarkable rise in productivity. The transportation revolution brought about by the rise of gasoline-powered trucks and cars changed the way businesses received their supplies and distributed their production as well as where they were located. The suburbanization of America and the beginnings of urban sprawl were largely brought about by the introduction of low-priced gasoline for cars.

Transportation

The American economy was forever altered by the dramatic changes in transportation after 1900. Following Henry Ford’s introduction of the moving assembly production line in 1914, automobile prices plummeted, and by the end of the 1920s about 60 percent of American families owned an automobile. The advent of low-cost personal transportation led to an accelerating movement of population out of the crowded cities to more spacious homes in the suburbs and the automobile set off a decline in intracity public passenger transportation that has yet to end. Massive road-building programs facilitated the intercity movement of people and goods. Trucks increasingly took over the movement of freight in competition with the railroads. New industries, such as gasoline service stations, motor hotels, and the rubber tire industry, arose to service the automobile and truck traffic. These developments were complicated by the turmoil caused by changes in the federal government’s policies toward transportation in the United States.

With the end of the First World War, a debate began as to whether the railroads, which had been taken over by the government, should be returned to private ownership or nationalized. The voices calling for a return to private ownership were much stronger, but doing so fomented great controversy. Many in Congress believed that careful planning and consolidation could restore the railroads and make them more efficient. There was continued concern about the near monopoly that the railroads had on the nation’s intercity freight and passenger transportation. The result of these deliberations was the Transportation Act of 1920, which was premised on the continued domination of the nation’s transportation by the railroads—an erroneous presumption.

The Transportation Act of 1920 represented a marked change in the Interstate Commerce Commission’s ability to control railroads. The ICC was allowed to prescribe exact rates that were to be set so as to allow the railroads to earn a fair return, defined as 5.5 percent, on the fair value of their property. The ICC was authorized to make an accounting of the fair value of each regulated railroad’s property; however, this was not completed until well into the 1930s, by which time the accounting and rate rules were out of date. To maintain fair competition between railroads in a region, all roads were to have the same rates for the same goods over the same distance. With the same rates, low-cost roads should have been able to earn higher rates of return than high-cost roads. To handle this, a recapture clause was inserted: any railroad earning a return of more than 6 percent on the fair value of its property was to turn the excess over to the ICC, which would place half of the money in a contingency fund for that railroad to draw on when it encountered financial problems and the other half in a contingency fund to provide loans to other railroads in need of assistance.
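A hypothetical example, using round numbers rather than any actual railroad’s accounts, illustrates how the recapture clause was meant to operate. Suppose a road’s property had a fair value of \$100 million and the road earned \$8 million in a year, an 8 percent return. The recapturable excess would be

$$(0.08 - 0.06) \times \$100\ \text{million} = \$2\ \text{million},$$

of which \$1 million would go into the ICC contingency fund reserved for that railroad and \$1 million into the fund for loans to other railroads, while the road kept the \$6 million representing the allowed return.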

In order to address the problem of weak and strong railroads and to bring better coordination to the movement of rail traffic in the United States, the act was directed to encourage railroad consolidation, but little came of this in the 1920s. In order to facilitate its control of the railroads, the ICC was given two additional powers. The first was the control over the issuance or purchase of securities by railroads, and the second was the power to control changes in railroad service through the control of car supply and the extension and abandonment of track. The control of the supply of rail cars was turned over to the Association of American Railroads. Few extensions of track were proposed, but as time passed, abandonment requests grew. The ICC, however, trying to mediate between the conflicting demands of shippers, communities and railroads, generally refused to grant abandonments, and this became an extremely sensitive issue in the 1930s.

As indicated above, the premises of the Transportation Act of 1920 were wrong. Railroads experienced increasing competition during the 1920s, and both freight and passenger traffic were drawn off to competing transport forms. Passenger traffic exited from the railroads much more quickly. As the network of all weather surfaced roads increased, people quickly turned from the train to the car. Harmed even more by the move to automobile traffic were the electric interurban railways that had grown rapidly just prior to the First World War. (Hilton-Due, 1960) Not surprisingly, during the 1920s few railroads earned profits in excess of the fair rate of return.

The use of trucks to deliver freight began shortly after the turn of the century. Before the outbreak of war in Europe, White and Mack were producing trucks with as much as 7.5 tons of carrying capacity. Most of the truck freight was carried on a local basis, and it largely supplemented the longer distance freight transportation provided by the railroads. However, truck size was growing. In 1915 Trailmobile introduced the first four-wheel trailer designed to be pulled by a truck tractor unit. During the First World War, thousands of trucks were constructed for military purposes, and truck convoys showed that long distance truck travel was feasible and economical. The use of trucks to haul freight had been growing by over 18 percent per year since 1925, so that by 1929 intercity trucking accounted for more than one percent of the ton-miles of freight hauled.

The railroads argued that the trucks and buses provided “unfair” competition and believed that if they were also regulated, then the regulation could equalize the conditions under which they competed. As early as 1925, the National Association of Railroad and Utilities Commissioners issued a call for the regulation of motor carriers in general. In 1928 the ICC called for federal regulation of buses and in 1932 extended this call to federal regulation of trucks.

Most states had begun regulating buses at the beginning of the 1920s in an attempt to reduce the diversion of urban passenger traffic from the electric trolley and railway systems. However, most of the regulation did not aim to control intercity passenger traffic by buses. As the network of surfaced roads expanded during the twenties, so did the routes of the intercity buses. In 1929 a number of smaller bus companies were consolidated into Greyhound Lines, the carrier that has since dominated intercity bus transportation. (Walsh, 2000)

A complaint of the railroads was that interstate trucking competition was unfair because it was subsidized while railroads were not. All railroad property was privately owned and subject to property taxes, whereas truckers used the existing road system and therefore neither had to bear the costs of creating the road system nor pay taxes upon it. Beginning with the Federal Road-Aid Act of 1916, small amounts of money were provided as an incentive for states to construct rural post roads. (Dearing-Owen, 1949) However, through the First World War most of the funds for highway construction came from a combination of levies on the adjacent property owners and county and state taxes. The monies raised by the counties were commonly 60 percent of the total funds allocated, and these came primarily from property taxes. In 1919 Oregon pioneered the state gasoline tax, which then began to be adopted by more and more states. A highway system financed by property taxes and other levies can be construed as a subsidization of motor vehicles, and one study for the period up to 1920 found evidence of substantial subsidization of trucking. (Herbst-Wu, 1973) However, the use of gasoline taxes moved closer to the goal of users paying the costs of the highways. Nor did trucks have to pay for all of the highway construction, because automobiles jointly used the highways; still, highways had to be constructed in more costly ways in order to accommodate the larger and heavier trucks. Ideally the gasoline taxes collected from trucks should have covered the extra (or marginal) costs of highway construction incurred because of the truck traffic. Gasoline taxes tended to do this.

The American economy occupies a vast geographic region. Because economic activity occurs over most of the country, falling transportation costs have been crucial to knitting American firms and consumers into a unified market. Throughout the nineteenth century the railroads played this crucial role. Because of the size of the railroad companies and their importance in the economic life of Americans, the federal government began to regulate them. By 1917 it appeared that the railroad system had achieved some stability, and it was generally assumed that the post-First World War era would be an extension of the era from 1900 to 1917. Nothing could have been further from the truth. Spurred by public investments in highways, cars and trucks voraciously ate into the railroads’ market, and, though the regulators failed to understand this at the time, the railroads’ monopoly on transportation quickly disappeared.

Communications

Communications had joined with transportation developments in the nineteenth century to tie the American economy together more completely. The telegraph had benefited by using the railroads’ right-of-ways, and the railroads used the telegraph to coordinate and organize their far-flung activities. As the cost of communications fell and information transfers sped, the development of firms with multiple plants at distant locations was facilitated. The interwar era saw a continuation of these developments as the telephone continued to supplant the telegraph and the new medium of radio arose to transmit news and provide a new entertainment source.

Telegraph domination of business and personal communications had given way to the telephone, as long distance telephone calls between the east and west coasts became possible in 1915 with the new electronic amplifiers. The number of telegraph messages handled grew 60.4 percent in the twenties. The number of local telephone conversations grew 46.8 percent between 1920 and 1930, while the number of long distance conversations grew 71.8 percent over the same period. There were 5 times as many long distance telephone calls as telegraph messages handled in 1920, and 5.7 times as many in 1930.

The twenties were a prosperous period for AT&T and its 18 major operating companies. (Brooks, 1975; Temin, 1987; Garnet, 1985; Lipartito, 1989) Telephone usage rose and, as Figure 19 shows, the share of all households with a telephone rose from 35 percent to nearly 42 percent. In cities across the nation, AT&T consolidated its system, gained control of many operating companies, and virtually eliminated its competitors. It was able to do this because in 1921 Congress passed the Graham Act exempting AT&T from the Sherman Act in consolidating competing telephone companies. By 1940, the non-Bell operating companies were all small relative to the Bell operating companies.

Surprisingly there was a decline in telephone use on the farms during the twenties. (Hadwiger-Cochran, 1984; Fischer 1987) Rising telephone rates explain part of the decline in rural use. The imposition of connection fees during the First World War made it more costly for new farmers to hook up. As AT&T gained control of more and more operating systems, telephone rates were increased. AT&T also began requiring, as a condition of interconnection, that independent companies upgrade their systems to meet AT&T standards. Most of the small mutual companies that had provided service to farmers had operated on a shoestring—wires were often strung along fenceposts, and phones were inexpensive “whoop and holler” magneto units. Upgrading to AT&T’s standards raised costs, forcing these companies to raise rates.

However, it also seems likely that during the 1920s there was a general decline in the rural demand for telephone services. One important factor was the dramatic decline in farm incomes in the early twenties. A second was a change in the farmers' environment. Prior to the First World War, the telephone eased farm isolation and provided news and weather information that was otherwise hard to obtain. After 1920, automobiles, surfaced roads, movies, and the radio loosened that isolation, and the telephone was no longer as crucial.

Ottmar Mergenthaler's development of the Linotype machine in the late nineteenth century irrevocably altered printing and publishing. This machine, which quickly cast a line of soft, lead-based metal type that could be printed, melted down, and then recast as a new line of type, dramatically lowered the costs of printing. Previously, all type had to be painstakingly set by hand, with individual pieces of cast type picked out from the compartments of type cases to construct words, lines, and paragraphs. After printing, each line of type on the page had to be broken down and each individual piece of type placed back into its compartment for use in the next printing job. Because composition was so slow, newspapers often were not published every day and did not contain many pages, and most cities supported many small newspapers. In contrast to this laborious process, the Linotype used a keyboard on which the operator typed the words of one line of a news column. Matrices for each letter dropped down from a magazine as the operator typed and were assembled into a line of type with automatic spacers to justify the line (fill out the column width). When the line was completed, the machine mechanically cast the assembled matrices into a line of lead type. The line of lead type was ejected into a tray and the letter matrices were mechanically returned to the magazine while the operator continued typing the next line of the story. The first Mergenthaler Linotype machine was installed at the New York Tribune in 1886. The Linotype dramatically lowered the costs of printing newspapers (as well as books and magazines). Prior to the Linotype, a typical newspaper averaged no more than 11 pages and many were published only a few times a week. The Linotype allowed newspapers to grow in size and to be published more regularly, and a process of consolidation of daily and Sunday newspapers began that continues to this day. Many have termed the Mergenthaler Linotype the most significant printing invention since the introduction of movable type 400 years earlier.

For city families as well as farm families, radio became the new source of news and entertainment. (Barnouw, 1966; Rosen, 1980 and 1987; Chester-Garrison, 1950) It soon took over as the prime advertising medium and in the process revolutionized advertising. By 1930 more homes had radio sets than had telephones. The radio networks sent news and entertainment broadcasts all over the country. The isolation of rural life, particularly in many areas of the plains, was forever broken by the intrusion of the “black box,” as radio receivers were often called. The radio began a process of breaking down regionalism and creating a common culture in the United States.

The potential demand for radio became clear with the first regular broadcast of Westinghouse's KDKA in Pittsburgh in the fall of 1920. Because the Department of Commerce could not deny a license application, there was an explosion of stations, all broadcasting at the same frequency, and signal jamming and interference became a serious problem. By 1923 the Department of Commerce had gained control of radio from the Post Office and the Navy and began to arbitrarily disperse stations across the radio dial and to deny licenses, creating the first market in commercial broadcast licenses. In 1926 a U.S. District Court decided that, under the Radio Law of 1912, Herbert Hoover, the secretary of commerce, did not have this power. New stations appeared, and the logjam and interference of signals worsened. A Radio Act was passed in January of 1927 creating the Federal Radio Commission (FRC) as a temporary licensing authority. Licenses were to be issued in the public interest, convenience, and necessity. A number of broadcasting licenses were revoked; stations were assigned frequencies, dial locations, and power levels. The FRC created 24 clear-channel stations with as much as 50,000 watts of broadcasting power, of which 21 ended up affiliated with the new national radio networks. The Communications Act of 1934 essentially repeated the 1927 act, except that it created a permanent, seven-person Federal Communications Commission (FCC).

Local stations initially created and broadcast the radio programs. The expenses were modest, and the stores and companies operating radio stations wrote them off as indirect, goodwill advertising. Several forces changed all this. In 1922, AT&T opened a radio station in New York City, WEAF (later to become WNBC). AT&T envisioned this station as the center of a radio toll system in which individuals could purchase time to broadcast a message that was transmitted to other stations in the toll network over AT&T's long distance lines. An August 1922 broadcast by a Long Island realty company became the first conscious use of direct advertising.

Though advertising continued to be condemned, the fiscal pressures on radio stations to accept advertising began rising. In 1923 the American Society of Composers, Authors and Publishers (ASCAP) began demanding a performance fee any time ASCAP-copyrighted music was performed on the radio, either live or on record. By 1924 the issue was settled, and most stations began paying performance fees to ASCAP. AT&T decided that all stations broadcasting with non-AT&T transmitters were violating its patent rights and began asking such stations for annual fees based on the station's power. By the end of 1924, most stations were paying the fees. All of this drained the coffers of the radio stations, and more and more of them began discreetly accepting advertising.

RCA became upset at AT&T's creation of a chain of radio stations and set up its own toll network using the inferior lines of Western Union and Postal Telegraph, because AT&T, not surprisingly, did not allow any toll (or network) broadcasting on its lines except by its own stations. AT&T began to worry that its actions might threaten its federal monopoly in long distance telephone communications. In 1926 a new firm was created, the National Broadcasting Company (NBC), which took over all broadcasting activities from AT&T and RCA as AT&T left broadcasting. When NBC debuted in November of 1926, it had two networks: the Red, which was the old AT&T network, and the Blue, which was the old RCA network. Radio networks allowed advertisers to direct advertising at a national audience at a lower cost. Network programs allowed local stations to broadcast superior programs that captured a larger listening audience, and in return the stations received a share of the fees the national advertiser paid to the network. In 1927 a new network, the Columbia Broadcasting System (CBS), financed by the Paley family, began operation, and other new networks entered or tried to enter the industry in the 1930s.

Communications developments in the interwar era present something of a mixed picture. By 1920 long distance telephone service was in place, but rising rates slowed the rate of adoption in the period, and telephone use in rural areas declined sharply. Though direct dialing was first tried in the twenties, its general implementation would not come until the postwar era, when other changes, such as microwave transmission of signals and touch-tone dialing, would also appear. Though the number of newspapers declined, newspaper circulation generally held up. The number of competing newspapers in larger cities began declining, a trend that also would accelerate in the postwar American economy.

Banking and Securities Markets

In the twenties commercial banks became "department stores of finance." Banks opened installment (or personal) loan departments, expanded their mortgage lending, opened trust departments, undertook securities underwriting activities, and offered safe deposit boxes. These changes were a response to growing competition from other financial intermediaries. Businesses, stung by bankers' control and reduced lending during the 1920-21 depression, began relying more on retained earnings and on stock and bond issues to raise investment funds and, sometimes, working capital. This reduced loan demand. The thrift institutions also experienced good growth in the twenties as they helped fuel the housing construction boom of the decade. The securities markets boomed in the twenties, only to see a dramatic crash of the stock market in late 1929.

There were two broad classes of commercial banks: those that were nationally chartered and those that were chartered by the states. Only the national banks were required to be members of the Federal Reserve System. (Figure 21) Most banks were unit banks, because national regulators and most state regulators prohibited branching; however, in the twenties a few states began to permit limited branching, and California even allowed statewide branching. The Federal Reserve member banks held the bulk of the assets of all commercial banks, even though most banks were not members. A high bank failure rate in the 1920s has usually been explained by "overbanking," or too many banks located in an area, but H. Thomas Johnson (1973-74) makes a strong argument against this. (Figure 22) If there were overbanking, on average each bank would have been underutilized, resulting in intense competition for deposits, higher costs, and lower earnings. The reason commonly offered for overbanking was the free entry of banks as long as they met the minimum requirements then in force. However, the twenties saw changes that led to the demise of many smaller rural banks that would likely have been profitable had those changes not occurred. Improved transportation led to a movement of business activities, including banking, into the larger towns and cities. Rural banks that relied on loans to farmers suffered just as farmers did during the twenties, especially in the first half of the decade. The number of bank suspensions and the suspension rate fell after 1926. The sharp rise in bank suspensions in 1930 occurred because of the first banking crisis of the Great Depression.

Prior to the twenties, the main assets of commercial banks were short-term business loans, made by creating a demand deposit or increasing an existing one for a borrowing firm. As business lending declined in the 1920s commercial banks vigorously moved into new types of financial activities. As banks purchased more securities for their earning asset portfolios and gained expertise in the securities markets, larger ones established investment departments and by the late twenties were an important force in the underwriting of new securities issued by nonfinancial corporations.

The securities markets exhibited perhaps the most dramatic growth in finance outside commercial banking during the twenties, but other intermediaries also grew rapidly. (Figure 23) The assets of life insurance companies increased by 10 percent a year from 1921 to 1929; by the late twenties they were a very important source of funds for construction investment. Mutual savings banks and savings and loan associations (thrifts) operated in essentially the same types of markets, with the mutual savings banks concentrated in the northeastern United States. As incomes rose, personal savings increased, and housing construction expanded in the twenties, there was an increasing demand for the thrifts' interest-earning time deposits and mortgage lending.

But the dramatic expansion in the financial sector came in new corporate securities issues in the twenties—especially common and preferred stock—and in the trading of existing shares of those securities. (Figure 24) The late twenties boom in the American economy was rapid, highly visible, and dramatic. Skyscrapers were being erected in most major cities, the automobile manufacturers produced over four and a half million new cars in 1929, and the stock market, like a barometer of this prosperity, was on a dizzying ride to higher and higher prices. "Playing the market" seemed to become a national pastime.

The Dow-Jones index hit its peak of 381 on September 3 and then slid to 320 on October 21. In the following week the stock market "crashed," with a record number of shares traded on several days. At the end of Tuesday, October 29, the index stood at 230, 96 points less than one week before. On November 13, 1929, the Dow-Jones index reached its lowest point for the year at 198, 183 points below the September 3 peak.

The path of the stock market boom of the twenties can be seen in Figure 25. Sharp price breaks occurred several times during the boom, and each of these gave rise to dark predictions of the end of the bull market and speculation. Until late October of 1929, these predictions turned out to be wrong. Between those price breaks and prior to the October crash, stock prices continued to surge upward. In March of 1928, 3,875,910 shares were traded in one day, establishing a record. By late 1928, five million shares being traded in a day was a common occurrence.

New securities, from rising merger activity and the formation of holding companies, were issued to take advantage of the rising stock prices. Stock pools, which were not made illegal until the Securities Exchange Act of 1934, took advantage of the boom to temporarily drive up the price of selected stocks and reap large gains for the members of the pool. In a stock pool a group of speculators would pool large amounts of their funds and then begin purchasing large amounts of shares of a stock. This increased demand led to rising prices for that stock. Frequently pool insiders would "churn" the stock by repeatedly buying and selling the same shares among themselves, but at rising prices. Outsiders, seeing the price rise, would be drawn in and purchase the stock. At a predetermined higher price the pool members would, within a short period, sell their shares and pull out of the market for that stock. Without the additional demand from the pool, the stock's price usually fell quickly, bringing large losses for the unsuspecting outside investors and large gains for the pool insiders.
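
The arithmetic of a pool operation can be made concrete with a minimal sketch. The function, prices, and share counts below are purely hypothetical, chosen only to show how the insiders' gain is mirrored by the outsiders' loss; they are not drawn from any actual pool.

# Stylized sketch of a 1920s stock pool: insiders accumulate shares, "churn"
# the price upward, draw in outsiders, then dump their holdings.
# All figures are hypothetical.

def trading_gain(entry_price, exit_price, shares):
    # Gain (or loss, if negative) on shares bought at entry_price and sold at exit_price.
    return (exit_price - entry_price) * shares

pool_gain = trading_gain(entry_price=40.0, exit_price=70.0, shares=100_000)      # insiders buy at 40, dump at 70
outsider_loss = trading_gain(entry_price=65.0, exit_price=45.0, shares=50_000)   # outsiders buy near the top, hold through the fall

print(pool_gain)       # 3,000,000 gained by the pool
print(outsider_loss)   # -1,000,000 lost by the outside investors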

Another factor commonly used to explain both the speculative boom and the October crash was the purchase of stocks on small margins. However, contrary to popular perception, margin requirements through most of the twenties were essentially the same as in previous decades: the usual requirement was 10 to 15 percent of the purchase price, and apparently more often around 10 percent. Brokers, recognizing the problems with margin lending in the rapidly changing market, began raising margin requirements in late 1928, in part at the urging of a special New York Clearinghouse committee, and by the fall of 1929 margin requirements were the highest in the history of the New York Stock Exchange. One brokerage house required the following of its clients: securities with a selling price below $10 could be purchased only for cash; securities selling for $10 to $20 required a 50 percent margin; securities of $20 to $30, a 40 percent margin; and securities priced above $30, a 30 percent margin. In the first half of 1929 margin requirements on customers' accounts averaged 40 percent, and some houses raised their margins to 50 percent a few months before the crash. These were, historically, very high margin requirements. (Smiley and Keehn, 1988) Even so, during the crash, when additional margin calls were issued, investors who could not provide additional margin saw their brokers sell their stock at whatever the market price was at the time, and these forced sales helped drive prices even lower.
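
The leverage involved in such margins can be illustrated with a small, hypothetical calculation; the dollar amounts and the 10 percent figure below are illustrative only, not records of any actual account. With a 10 percent margin, a 10 percent fall in the stock's price wipes out the investor's entire stake, which is why falling prices produced margin calls and forced sales.

# Hypothetical margin purchase at the roughly 10 percent margins common
# through most of the twenties. All figures are illustrative.

purchase_value = 10_000.0                    # market value of the stock bought
equity = 0.10 * purchase_value               # investor puts up 1,000
broker_loan = purchase_value - equity        # broker lends the remaining 9,000

new_value = purchase_value * (1 - 0.10)      # the stock falls 10 percent, to 9,000
new_equity = new_value - broker_loan         # the investor's equity is now zero

print(new_equity)  # 0.0 -- any further decline triggers a margin call or a forced sale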

The crash began on Monday, October 21, as the index of stock prices fell 3 points on the third-largest volume in the history of the New York Stock Exchange. After a slight rally on Tuesday, prices began declining on Wednesday and fell 21 points by the end of the day, bringing on the third call for more margin that week. On Black Thursday, October 24, prices initially fell sharply but rallied somewhat in the afternoon, so that the net loss was only 7 points; the volume of thirteen million shares, however, set a NYSE record. Friday brought a small gain that was wiped out on Saturday. On Monday, October 28, the Dow-Jones index fell 38 points on a volume of nine million shares—three million in the final hour of trading. Black Tuesday, October 29, brought declines in virtually every stock price. Manufacturing firms, which had been lending large sums to brokers for margin loans, had been calling in these loans, and this accelerated on Monday and Tuesday. The big Wall Street banks increased their lending on call loans to offset some of this loss of loanable funds. The Dow-Jones index fell 30 points on a record volume of nearly sixteen and a half million shares. Black Thursday and Black Tuesday wiped out entire fortunes.

Though the worst was over, prices continued to decline until November 13, 1929, as brokers cleaned up their accounts and sold off the stocks of clients who could not supply additional margin. After that, prices began to rise slowly and by April of 1930 had increased 96 points from the low of November 13, "only" 87 points less than the peak of September 3, 1929. From that point, stock prices resumed their decline until the low point was reached in the summer of 1932.

 

There is a long tradition that insists that the Great Bull Market of the late twenties was an orgy of speculation that bid the prices of stocks far above any sustainable or economically justifiable level, creating a bubble in the stock market. John Kenneth Galbraith (1954) observed, "The collapse in the stock market in the autumn of 1929 was implicit in the speculation that went before." But not everyone has agreed with this.

In 1930 Irving Fisher argued that the stock prices of 1928 and 1929 were based on fundamental expectations that future corporate earnings would be high. More recently, Murray Rothbard (1963), Gerald Gunderson (1976), and Jude Wanniski (1978) have argued that stock prices were not too high prior to the crash. Gunderson suggested that prior to 1929 stock prices were where they should have been and that when corporate profits in the summer and fall of 1929 failed to meet expectations, stock prices were written down. Wanniski argued that political events brought on the crash: the market broke each time news arrived of advances in congressional consideration of the Smoot-Hawley tariff. However, the virtually perfect foresight that Wanniski's explanation requires is unrealistic. Charles Kindleberger (1973) and Peter Temin (1976) examined common stock yields and price-earnings ratios and found that their relative constancy did not suggest that stock prices were bid up unrealistically high in the late twenties. Gary Santoni and Gerald Dwyer (1990) also failed to find evidence of a bubble in stock prices in 1928 and 1929. Gerald Sirkin (1975) found that the implied growth rates of dividends required to justify stock prices in 1928 and 1929 were quite conservative and lower than post-Second World War dividend growth rates.
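
Sirkin's point can be restated with the standard constant-growth valuation identity, under which a price P is justified by a dividend D and a required return r if dividends grow forever at the rate g, where P = D / (r - g), so that g = r - D/P. The sketch below uses this identity with hypothetical round numbers for the dividend yield and the required return; it illustrates the form of the calculation, not Sirkin's actual figures.

# Implied perpetual dividend growth rate needed to justify a price, from the
# constant-growth identity P = D / (r - g). Inputs are hypothetical.

def implied_growth(dividend_yield, required_return):
    # Solve P = D / (r - g) for g, given the dividend yield D/P and the required return r.
    return required_return - dividend_yield

print(implied_growth(dividend_yield=0.035, required_return=0.08))  # 0.045, i.e. 4.5 percent a year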

However, examination of after-the-fact common stock yields and price-earnings ratios can do no more than provide some ex post justification for suggesting that there was not excessive speculation during the Great Bull Market. Each individual investor was motivated by that person's subjective expectations of each firm's future earnings and dividends and of the future prices of shares of each firm's stock. Because of this element of subjectivity, not only can we never accurately know those values, but we can also never know how they varied among individuals. The market price we observe is the end result of all of the actions of the market participants, and the observed price may be different from the price almost all of the participants expected.

In fact, there are some indications that conditions differed in 1928 and 1929. Yields on common stocks were somewhat lower in 1928 and 1929. In October of 1928, brokers generally began raising margin requirements, and by the beginning of the fall of 1929, margin requirements were, on average, the highest in the history of the New York Stock Exchange. Though the discount and commercial paper rates had moved closely with the call and time rates on brokers' loans through 1927, the rates on brokers' loans increased much more sharply in 1928 and 1929. This pulled in funds from corporations, private investors, and foreign banks as New York City banks sharply reduced their lending. These facts suggest that brokers and New York City bankers may have come to believe that stock prices had been bid above a sustainable level by late 1928 and early 1929. White (1990) created a quarterly index of dividends for firms in the Dow-Jones index and related it to the DJI. Through 1927 the two track closely, but in 1928 and 1929 the index of stock prices grows much more rapidly than the index of dividends.

The qualitative evidence for a bubble in the stock market in 1928 and 1929 that White assembled was strengthened by the findings of J. Bradford De Long and Andrei Shleifer (1991). They examined closed-end mutual funds, a type of fund whose shares trade among investors rather than being redeemed by the fund, so that the fund's fundamental value (the market value of the securities it holds) can be measured exactly and compared with its share price. Using evidence from these funds, De Long and Shleifer estimated that in the summer of 1929 the Standard and Poor's composite stock price index was overvalued about 30 percent due to excessive investor optimism. Rappoport and White (1993 and 1994) found other evidence that supported a bubble in the stock market in 1928 and 1929: a sharp divergence between the growth of stock prices and dividends; increasing premiums on call and time brokers' loans in 1928 and 1929; rising margin requirements; and rising stock market volatility in the wake of the 1929 stock market crash.
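
The closed-end fund evidence rests on a simple measure: because the fund's fundamental value is just its net asset value (NAV), any excess of the fund's share price over NAV is a premium attributable to investor sentiment. The sketch below computes that premium; the price and NAV figures are hypothetical and show only how a 30 percent premium would be read, not De Long and Shleifer's data.

# Premium (or discount, if negative) of a closed-end fund's share price over
# its net asset value. Figures are hypothetical.

def closed_end_premium(share_price, net_asset_value):
    return (share_price - net_asset_value) / net_asset_value

print(closed_end_premium(share_price=130.0, net_asset_value=100.0))  # 0.30, a 30 percent premium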

There are several reasons for the creation of such a bubble. First, the fundamental values of earnings and dividends become difficult to assess when there are major industrial changes, such as the rapid changes in the automobile industry, the new electric utilities, and the new radio industry. Eugene White (1990) suggests that "While investors had every reason to expect earnings to grow, they lacked the means to evaluate easily the future path of dividends." As a result, investors bid up prices as they were swept up in the ongoing stock market boom. Second, participation in the stock market widened noticeably in the twenties. The new investors were relatively unsophisticated, more likely to be caught up in the euphoria of the boom, and more likely to bid prices upward. New, inexperienced commission sales personnel were hired to sell stocks, and they promised glowing returns on stocks they knew little about.

These observations were strengthened by the experimental work of economist Vernon Smith. (Bishop, 1987) In a number of experiments over a three-year period using students and Tucson businessmen and businesswomen, bubbles developed as inexperienced investors valued stocks differently and engaged in price speculation. As these investors in the experiments began to realize that speculative profits were unsustainable and uncertain, their dividend expectations changed, the market crashed, and ultimately stocks began trading at their fundamental dividend values. These bubbles and crashes occurred repeatedly, leading Smith to conjecture that there are few regulatory steps that can be taken to prevent a crash.

Though the bubble of 1928 and 1929 made some downward adjustment in stock prices inevitable, as Barsky and De Long have shown, changes in fundamentals govern the overall movements. And the end of the long bull market was almost certainly governed by this. In late 1928 and early 1929 there was a striking rise in economic activity, but a decline began somewhere between May and July of 1929 and was clearly evident by August. By the middle of August, the rise in stock prices had slowed as better information on the contraction was received. There were repeated statements by leading figures that stocks were "overpriced," and the Federal Reserve System sharply increased the discount rate in August 1929 as well as continuing its call for banks to reduce their margin lending. As this information was assessed, the number of speculators selling stocks increased, and the number buying decreased. With the decreased demand, stock prices began to fall, and as more accurate information on the nature and extent of the decline was received, stock prices fell further. The late October crash made the decline occur much more rapidly, and the margin purchases and consequent forced selling of many of those stocks contributed to a more severe price fall. The recovery of stock prices from November 13 into April of 1930 suggests that stock prices may have been driven somewhat too low during the crash.

There is now widespread agreement that the 1929 stock market crash did not cause the Great Depression. Instead, the initial downturn in economic activity was a primary determinant of the ending of the 1928-29 stock market bubble. The stock market crash did, however, make the downturn more severe beginning in November 1929. It reduced discretionary consumption spending (Romer, 1990) and created greater income uncertainty, helping to bring on the contraction (Flacco and Parker, 1992). Though stock prices reached a bottom and began to recover following November 13, 1929, the continuing decline in economic activity took its toll, and by May 1930 stock prices resumed their decline and continued to fall through the summer of 1932.

Domestic Trade

In the nineteenth century, a complex array of wholesalers, jobbers, and retailers had developed, but changes in the postbellum period reduced the role of the wholesalers and jobbers and strengthened the importance of the retailers in domestic trade. (Cochran, 1977; Chandler, 1977; Marburg, 1951; Clewett, 1951) The appearance of the department store in the major cities and the rise of mail order firms in the postbellum period changed the retailing market.

Department Stores

A department store is a combination of specialty stores organized as departments within one general store. A. T. Stewart’s huge 1846 dry goods store in New York City is often referred to as the first department store. (Resseguie, 1965; Sobel-Sicilia, 1986) R. H. Macy started his dry goods store in 1858 and Wanamaker’s in Philadelphia opened in 1876. By the end of the nineteenth century, every city of any size had at least one major department store. (Appel, 1930; Benson, 1986; Hendrickson, 1979; Hower, 1946; Sobel, 1974) Until the late twenties, the department store field was dominated by independent stores, though some department stores in the largest cities had opened a few suburban branches and stores in other cities. In the interwar period department stores accounted for about 8 percent of retail sales.

The department stores relied on a “one-price” policy, which Stewart is credited with beginning. In the antebellum period and into the postbellum period, it was common not to post a specific price on an item; rather, each purchaser haggled with a sales clerk over what the price would be. Stewart posted fixed prices on the various dry goods sold, and the customer could either decide to buy or not buy at the fixed price. The policy dramatically lowered transactions costs for both the retailer and the purchaser. Prices were reduced with a smaller markup over the wholesale price, and a large sales volume and a quicker turnover of the store’s inventory generated profits.
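
The economics behind the one-price policy can be sketched with a simple, hypothetical calculation: a store's gross profit on a given inventory investment is roughly the markup times the number of times the inventory turns over in a year, so a lower markup can be offset by faster turnover while moving more goods.

# Stylized markup-versus-turnover comparison. All figures are hypothetical.

def annual_gross_profit(inventory_investment, markup, turns_per_year):
    # Gross profit earned in a year on a fixed investment in inventory.
    return inventory_investment * markup * turns_per_year

print(annual_gross_profit(10_000, 0.50, 2))   # 10,000: high markup, slow turnover
print(annual_gross_profit(10_000, 0.25, 4))   # 10,000: the one-price store matches it on half the markup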

Mail Order Firms

What changed the department store field in the twenties was the entrance of Sears Roebuck and Montgomery Ward, the two dominant mail order firms in the United States. (Emmet-Jeuck, 1950; Chandler, 1962, 1977) Both firms had begun in the late nineteenth century, and by 1914 the younger Sears Roebuck had surpassed Montgomery Ward. Both were located in Chicago because of its central location in the nation's rail network, and both had benefited from the advent of Rural Free Delivery in 1896 and of low-cost Parcel Post Service in 1912.

In 1924 Sears hired Robert E. Wood, who was able to convince Sears Roebuck to open retail stores. Wood believed that the declining rural population and the growing urban population forecast the gradual demise of the mail order business; survival of the mail order firms required a move into retail sales. By 1925 Sears Roebuck had opened 8 retail stores, and by 1929 it had 324 stores. Montgomery Ward quickly followed suit. Rather than locating these stores in the central business district (CBD), Wood placed many of them on major streets closer to the residential areas. These moves by Sears Roebuck and Montgomery Ward expanded department store retailing and provided a new type of chain store.

Chain Stores

Though chain stores grew rapidly in the first two decades of the twentieth century, they date back to the 1860s when George F. Gilman and George Huntington Hartford opened a string of New York City A&P (Atlantic and Pacific) stores exclusively to sell tea. (Beckman-Nolen, 1938; Lebhar, 1963; Bullock, 1933) Stores were opened in other regions and in 1912 their first “cash-and-carry” full-range grocery was opened. Soon they were opening 50 of these stores each week and by the 1920s A&P had 14,000 stores. They then phased out the small stores to reduce the chain to 4,000 full-range, supermarket-type stores. A&P’s success led to new grocery store chains such as Kroger, Jewel Tea, and Safeway.

Prior to A&P’s cash-and-carry policy, it was common for grocery stores, produce (or green) grocers, and meat markets to provide home delivery and credit, both of which were costly. As a result, retail prices were generally marked up well above the wholesale prices. In cash-and-carry stores, items were sold only for cash; no credit was extended, and no expensive home deliveries were provided. Markups on prices could be much lower because other costs were much lower. Consumers liked the lower prices and were willing to pay cash and carry their groceries, and the policy became common by the twenties.

Chains also developed in other retail product lines. In 1879 Frank W. Woolworth developed a “5 and 10 Cent Store,” or dime store, and there were over 1,000 F. W. Woolworth stores by the mid-1920s. (Winkler, 1940) Other firms such as Kresge, Kress, and McCrory successfully imitated Woolworth’s dime store chain. J.C. Penney’s dry goods chain store began in 1901 (Beasley, 1948), Walgreen’s drug store chain began in 1909, and shoes, jewelry, cigars, and other lines of merchandise also began to be sold through chain stores.

Self-Service Policies

In 1916 Clarence Saunders, a grocer in Memphis, Tennessee, built upon the one-price policy and began offering self-service at his Piggly Wiggly store. Previously, customers handed a clerk a list or asked for the items desired, which the clerk then collected and the customer paid for. With self-service, items for sale were placed on open shelves among which the customers could walk, carrying a shopping bag or pushing a shopping cart. Each customer could then browse as he or she pleased, picking out whatever was desired. Saunders and other retailers who adopted the self-service method of retail selling found that customers often purchased more because of exposure to the array of products on the shelves; as well, self-service lowered the labor required for retail sales and therefore lowered costs.

Shopping Centers

The shopping center, another retailing innovation that began in the twenties, was not destined to become a major force in retail development until after the Second World War. The ultimate cause of this innovation was the widening ownership and use of the automobile. By the 1920s, as ownership and use of the car expanded, population began to move out of the crowded central cities toward the more open suburbs. When General Robert Wood set Sears off on its development of urban stores, he located them not in the central business district (CBD) but as free-standing stores on major arteries away from the CBD, with sufficient space for parking.

At about the same time, a few entrepreneurs began to develop shopping centers. Yehoshua Cohen (1972) says, “The owner of such a center was responsible for maintenance of the center, its parking lot, as well as other services to consumers and retailers in the center.” Perhaps the earliest such shopping center was the Country Club Plaza built in 1922 by the J. C. Nichols Company in Kansas City, Missouri. Other early shopping centers appeared in Baltimore and Dallas. By the mid-1930s the concept of a planned shopping center was well known and was expected to be the means to capture the trade of the growing number of suburban consumers.

International Trade and Finance

In the twenties a gold exchange standard was developed to replace the gold standard of the prewar world. Under a gold standard, each country’s currency carried a fixed exchange rate with gold, and the currency had to be backed up by gold. As a result, all countries on the gold standard had fixed exchange rates with all other countries. Adjustments to balance international trade flows were made by gold flows. If a country had a deficit in its trade balance, gold would leave the country, forcing the money stock to decline and prices to fall. Falling prices made the deficit countries’ exports more attractive and imports more costly, reducing the deficit. Countries with a surplus imported gold, which increased the money stock and caused prices to rise. This made the surplus countries’ exports less attractive and imports more attractive, decreasing the surplus. Most economists who have studied the prewar gold standard contend that it did not work as the conventional textbook model says, because capital flows frequently reduced or eliminated the need for gold flows for long periods of time. However, there is no consensus on whether fortuitous circumstances, rather than the gold standard, saved the international economy from periodic convulsions or whether the gold standard as it did work was sufficient to promote stability and growth in international transactions.
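
The textbook adjustment mechanism described above can be sketched with the quantity-theory relation MV = PY, holding velocity and real output fixed. The numbers below are arbitrary illustrations, not historical magnitudes; the point is only that a gold (and hence money) outflow of 10 percent implies a 10 percent lower price level, which is what makes the deficit country's exports cheaper and its imports dearer.

# Schematic price-specie-flow adjustment under M * V = P * Y with V and Y fixed.
# All magnitudes are arbitrary.

velocity, real_output = 4.0, 1000.0

def price_level(money_stock):
    # P implied by M * V = P * Y.
    return money_stock * velocity / real_output

print(price_level(250.0))   # 1.00 before the gold outflow
print(price_level(225.0))   # 0.90 after a trade deficit drains 10 percent of the gold-backed money stock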

After the First World War it was argued that there was a "shortage" of fluid monetary gold to use for the gold standard, so some method of "economizing" on gold had to be found. To do this, two basic changes were made. First, most nations, other than the United States, stopped the domestic circulation of gold. Second, the "gold exchange" system was created. Most countries held their international reserves in the form of U.S. dollars or British pounds, and international transactions used dollars or pounds, so long as the United States and Great Britain stood ready to exchange their currencies for gold at fixed rates. However, the overvaluation of the pound and the undervaluation of the franc threatened these arrangements. The British trade deficit led to a capital outflow, higher interest rates, and a weak economy. In the late twenties, the French trade surplus led to an inflow of gold that France did not allow to expand its money supply.

Economizing on gold by no longer allowing its domestic circulation and by using key currencies as international monetary reserves was really an attempt to place the domestic economies under the control of the nations’ politicians and make them independent of international events. Unfortunately, in doing this politicians eliminated the equilibrating mechanism of the gold standard but had nothing with which to replace it. The new international monetary arrangements of the twenties were potentially destabilizing because they were not allowed to operate as a price mechanism promoting equilibrating adjustments.

There were other problems with international economic activity in the twenties. Because of the war, the United States was abruptly transformed from a debtor to a creditor on international accounts. Though the United States did not want reparations payments from Germany, it did insist that Allied governments repay American loans. The Allied governments then insisted on war reparations from Germany. These initial reparations assessments were quite large. The Allied Reparations Commission collected the charges by supervising Germany’s foreign trade and by internal controls on the German economy, and it was authorized to increase the reparations if it was felt that Germany could pay more. The treaty allowed France to occupy the Ruhr after Germany defaulted in 1923.

Ultimately, this tangled web of debts and reparations, which was a major factor in the course of international trade, depended upon two principal conditions. First, the United States had to run an import surplus or, on net, export capital to provide a pool of dollars overseas. Second, Germany had either to run an export surplus or else import American capital so as to build up dollar reserves—that is, the dollars the United States was exporting. In effect, these dollars were paid by Germany to Great Britain, France, and other countries, which then shipped them back to the United States as payment on their U.S. debts. If these conditions did not hold (and note that the "new" gold standard of the twenties had lost its flexibility because the price adjustment mechanism had been eliminated), disruption in international activity could easily occur and be transmitted to the domestic economies.

In the wake of the 1920-21 depression Congress passed the Emergency Tariff Act of 1921, which raised tariffs, particularly on manufactured goods. (Figures 26 and 27) The Fordney-McCumber Tariff of 1922 continued the Emergency Tariff's protection, and on many items its rates were extremely high, ranging from 60 to 100 percent ad valorem (that is, as a percent of the price of the item). The increases in the Fordney-McCumber tariff were as large as, and sometimes larger than, those of the more famous (or "infamous") Smoot-Hawley tariff of 1930. As farm product prices fell at the end of the decade, presidential candidate Herbert Hoover proposed, as part of his platform, tariff increases and other changes to aid the farmers. In January 1929, after Hoover's election but before he took office, a tariff bill was introduced into Congress. Special interests succeeded in gaining additional (or new) protection for most domestically produced commodities, and the goal of greater protection for the farmers tended to get lost in the increased protection for multitudes of American manufactured products. In spite of widespread condemnation by economists, President Hoover signed the Smoot-Hawley Tariff in June 1930, and rates rose sharply.

Following the First World War, the U.S. government actively promoted American exports, and in each of the postwar years through 1929, the United States recorded a surplus in its balance of trade. (Figure 28) However, the surplus declined in the 1930s as both exports and imports fell sharply after 1929. From the mid-1920s on finished manufactures were the most important exports, while agricultural products dominated American imports.

The majority of the funds that allowed Germany to make its reparations payments to France and Great Britain and hence allowed those countries to pay their debts to the United States came from the net flow of capital out of the United States in the form of direct investment in real assets and investments in long- and short-term foreign financial assets. After the devastating German hyperinflation of 1922 and 1923, the Dawes Plan reformed the German economy and currency and accelerated the U.S. capital outflow. American investors began to actively and aggressively pursue foreign investments, particularly loans (Lewis, 1938) and in the late twenties there was a marked deterioration in the quality of foreign bonds sold in the United States. (Mintz, 1951)

The system, then, worked well as long as there was a net outflow of American capital, but this did not continue. In the middle of 1928, the flow of short-term capital began to decline. In 1928 the flow of “other long-term” capital out of the United States was 752 million dollars, but in 1929 it was only 34 million dollars. Though arguments now exist as to whether the booming stock market in the United States was to blame for this, it had far-reaching effects on the international economic system and the various domestic economies.

The Start of the Depression

The United States had the majority of the world’s monetary gold, about 40 percent, by 1920. In the latter part of the twenties, France also began accumulating gold as its share of the world’s monetary gold rose from 9 percent in 1927 to 17 percent in 1929 and 22 percent by 1931. In 1927 the Federal Reserve System had reduced discount rates (the interest rate at which they lent reserves to member commercial banks) and engaged in open market purchases (purchasing U.S. government securities on the open market to increase the reserves of the banking system) to push down interest rates and assist Great Britain in staying on the gold standard. By early 1928 the Federal Reserve System was worried about its loss of gold due to this policy as well as the ongoing boom in the stock market. It began to raise the discount rate to stop these outflows. Gold was also entering the United States so that foreigners could obtain dollars to invest in stocks and bonds. As the United States and France accumulated more and more of the world’s monetary gold, other countries’ central banks took contractionary steps to stem the loss of gold. In country after country these deflationary strategies began contracting economic activity and by 1928 some countries in Europe, Asia, and South America had entered into a depression. More countries’ economies began to decline in 1929, including the United States, and by 1930 a depression was in force for almost all of the world’s market economies. (Temin, 1989; Eichengreen, 1992)

Monetary and Fiscal Policies in the 1920s

Fiscal Policies

As a tool to promote stability in aggregate economic activity, fiscal policy is largely a post-Second World War phenomenon. Prior to 1930 the federal government’s spending and taxing decisions were largely, but not completely, based on the perceived “need” for government-provided public goods and services.

Though the fiscal policy concept had not been developed, this does not mean that during the twenties no concept of the government's role in stimulating economic activity existed. Herbert Stein (1990) points out that in the twenties Herbert Hoover and some of his contemporaries shared two ideas about the proper role of the federal government. The first was that federal spending on public works could be an important force in reducing downturns in investment. These ideas fit the views held by Hoover and others of his persuasion that the U.S. economy of the twenties was not the result of laissez-faire workings but of "deliberate social engineering."

The federal personal income tax was enacted in 1913. Though mildly progressive, its rates were low and topped out at 7 percent on taxable income in excess of $750,000. (Table 4) As the United States prepared for war in 1916, rates were increased and reached a maximum marginal rate of 12 percent. With American entry into the First World War, the rates were dramatically increased, and to obtain additional revenue in 1918 marginal rates were raised again. The share of federal revenue generated by income taxes rose from 11 percent in 1914 to 69 percent in 1920. The tax rates had been extended downward so that more than 30 percent of the nation's income recipients were subject to income taxes by 1918. However, through the purchase of tax-exempt state and local securities and through steps taken by corporations to avoid the cash distribution of profits, the number of high-income taxpayers and their share of total taxes paid declined as Congress kept increasing the tax rates. The normal (or base) tax rate was reduced slightly for 1919, but the surtax rates, which made the income tax highly progressive, were retained. (Smiley-Keehn, 1995)
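
The "normal tax plus surtax" structure is what produced this progressivity. A minimal sketch of that structure follows; the rates and brackets in it are purely illustrative and are not the statutory schedules of 1913, 1918, or any other year discussed here.

# Illustrative normal-tax-plus-surtax computation. Rates and brackets are hypothetical.

def income_tax(income, normal_rate, surtax_brackets):
    # surtax_brackets: (lower_bound, upper_bound, rate) applied to income falling in each band.
    tax = normal_rate * income
    for lower, upper, rate in surtax_brackets:
        if income > lower:
            tax += rate * (min(income, upper) - lower)
    return tax

brackets = [(20_000, 50_000, 0.05), (50_000, 100_000, 0.10), (100_000, float("inf"), 0.20)]
print(income_tax(10_000, 0.04, brackets))    # 400.0: only the normal tax applies
print(income_tax(200_000, 0.04, brackets))   # 34,500.0: the surtaxes make the schedule steeply progressive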

President Harding’s new Secretary of the Treasury, Andrew Mellon, proposed cutting the tax rates, arguing that the rates in the higher brackets had “passed the point of productivity” and that rates in excess of 70 percent simply could not be collected. Though most agreed that the rates were too high, there was sharp disagreement on how the rates should be cut. Democrats and Progressive Republicans argued for rate cuts targeted at lower-income taxpayers while maintaining most of the steep progressivity of the tax rates. They believed that remedies could be found to change the tax laws to stop the legal avoidance of federal income taxes. Republicans argued for sharper cuts that reduced the progressivity of the rates. Mellon proposed a maximum rate of 25 percent.

Though the federal income tax rates were reduced and made less progressive, it took three tax rate cuts in 1921, 1924, and 1925 before Mellon’s goal was finally achieved. The highest marginal tax rate was reduced from 73 percent to 58 percent to 46 percent and finally to 25 percent for the 1925 tax year. All of the other rates were also reduced and exemptions increased. By 1926, only about the top 10 percent of income recipients were subject to federal income taxes. As tax rates were reduced, the number of high income tax returns increased and the share of total federal personal income taxes paid rose. (Tables 5 and 6) Even with the dramatic income tax rate cuts and reductions in the number of low income taxpayers, federal personal income tax revenue continued to rise during the 1920s. Though early estimates of the distribution of personal income showed sharp increases in income inequality during the 1920s (Kuznets, 1953; Holt, 1977), more recent estimates have found that the increases in inequality were considerably less and these appear largely to be related to the sharp rise in capital gains due to the booming stock market in the late twenties. (Smiley, 1998 and 2000)

Each year in the twenties the federal government generated a surplus, in some years as much as 1 percent of GNP. The surpluses were used to pay down the federal debt, which declined by 25 percent between 1920 and 1930. Contrary to simple macroeconomic models that argue a federal government budget surplus must be contractionary and tend to stop an economy from reaching full employment, the American economy operated at or close to full employment throughout the twenties and saw significant economic growth. In this case, the surpluses were not contractionary because the dollars were circulated back into the economy through the purchase of outstanding federal debt rather than pulled out as currency and held in a vault somewhere.

Monetary Policies

In 1913 fear of the “money trust” and its monopoly power led Congress to create 12 district central banks when it created the Federal Reserve System. The new central banks were to control money and credit and act as lenders of last resort to end banking panics. The role of the Federal Reserve Board, located in Washington, D.C., was to coordinate the policies of the 12 district banks; it was composed of five presidential appointees plus the secretary of the treasury and the comptroller of the currency. All national banks had to become members of the Federal Reserve System, the Fed, and any state bank meeting the qualifications could elect to do so.

The act specified fixed reserve requirements on demand and time deposits, all of which had to be held on deposit in the district bank. Member banks were allowed to rediscount commercial paper at the district banks and received Federal Reserve currency in return. Initially, each district bank set its own rediscount rate. To provide additional income when there was little rediscounting, the district banks were allowed to engage in open market operations: the purchase and sale of federal government securities, short-term securities of state and local governments issued in anticipation of taxes, foreign exchange, and domestic bills of exchange. The district banks were also designated to act as fiscal agents for the federal government. Finally, the Federal Reserve System provided a central check clearinghouse for the entire banking system.

When the Federal Reserve System was originally set up, it was believed that its primary role was to be a lender of last resort to prevent banking panics and become a check-clearing mechanism for the nation’s banks. Both the Federal Reserve Board and the Governors of the District Banks were bodies established to jointly exercise these activities. The division of functions was not clear, and a struggle for power ensued, mainly between the New York Federal Reserve Bank, which was led by J. P. Morgan’s protege, Benjamin Strong, through 1928, and the Federal Reserve Board. By the thirties the Federal Reserve Board had achieved dominance.

There were really two conflicting criteria upon which monetary actions were ostensibly based: the Gold Standard and the Real Bills Doctrine. The Gold Standard was supposed to be quasi-automatic, with an effective limit to the quantity of money. However, the Real Bills Doctrine (which required that all loans be made on short-term, self-liquidating commercial paper) had no effective limit on the quantity of money. The rediscounting of eligible commercial paper was supposed to lead to the required “elasticity” of the stock of money to “accommodate” the needs of industry and business. Actually the rediscounting of commercial paper, open market purchases, and gold inflows all had the same effects on the money stock.

The 1920-21 Depression

During the First World War, the Fed kept discount rates low and granted discounts on banks’ customer loans used to purchase V-bonds in order to help finance the war. The final Victory Loan had not been floated when the Armistice was signed in November of 1918: in fact, it took until October of 1919 for the government to fully sell this last loan issue. The Treasury, with the secretary of the treasury sitting on the Federal Reserve Board, persuaded the Federal Reserve System to maintain low interest rates and discount the Victory bonds necessary to keep bond prices high until this last issue had been floated. As a result, during this period the money supply grew rapidly and prices rose sharply.

A shift from a federal deficit to a surplus and supply disruptions due to steel and coal strikes in 1919 and a railroad strike in early 1920 contributed to the end of the boom. But the most common view is that the Fed’s monetary policy was the main determinant of the end of the expansion and inflation and of the beginning of the subsequent contraction and severe deflation. When the Fed was released from its informal agreement with the Treasury in November of 1919, it raised the discount rate from 4 to 4.75 percent. Benjamin Strong (the governor of the New York bank) was beginning to believe that the time for strong action was past and that the Federal Reserve System’s actions should be moderate. However, with Strong out of the country, the Federal Reserve Board increased the discount rate from 4.75 to 6 percent in late January of 1920 and to 7 percent on June 1, 1920. By the middle of 1920, economic activity and employment were rapidly falling, and prices had begun their downward spiral in one of the sharpest price declines in American history. The Federal Reserve System kept the discount rate at 7 percent until May 5, 1921, when it was lowered to 6.5 percent. By June of 1922, the rate had been lowered yet again to 4 percent. (Friedman and Schwartz, 1963)

The Federal Reserve System authorities received considerable criticism then and later for their actions. Milton Friedman and Anna Schwartz (1963) contend that the discount rate was raised too much too late and then kept too high for too long, causing the decline to be more severe and the price deflation to be greater. In their opinion the Fed acted in this manner due to the necessity of meeting the legal reserve requirement with a safe margin of gold reserves. Elmus Wicker (1966), however, argues that the gold reserve ratio was not the main factor determining the Federal Reserve policy in the episode. Rather, the Fed knowingly pursued a deflationary policy because it felt that the money supply was simply too large and prices too high. To return to the prewar parity for gold required lowering the price level, and there was an excessive stock of money because the additional money had been used to finance the war, not to produce consumer goods. Finally, the outstanding indebtedness was too large due to the creation of Fed credit.

Whether statutory gold reserve requirements to maintain the gold standard or domestic credit conditions were the most important determinant of Fed policy is still an open question, though both certainly had some influence. Regardless of the answer to that question, the Federal Reserve System’s first major undertaking in the years immediately following the First World War demonstrated poor policy formulation.

Federal Reserve Policies from 1922 to 1930

By 1921 the district banks began to recognize that their open market purchases had effects on interest rates, the money stock, and economic activity. For the next several years, economists in the Federal Reserve System discussed how this worked and how it could be related to discounting by member banks. A committee was created to coordinate the open market purchases of the district banks.

The recovery from the 1920-1921 depression had proceeded smoothly with moderate price increases. In early 1923 the Fed sold some securities and increased the discount rate from 4 percent because it believed the recovery was too rapid. However, by the fall of 1923 there were some signs of a business slump. McMillin and Parker (1994) argue that this contraction, as well as the 1927 contraction, was related to oil price shocks. By October of 1923 Benjamin Strong was advocating securities purchases to counter the slump, and between then and September 1924 the Federal Reserve System increased its securities holdings by over $500 million. Between April and August of 1924 the Fed reduced the discount rate to 3 percent in a series of three separate steps. In addition to moderating the mild business slump, the expansionary policy was intended to reduce American interest rates relative to British interest rates. This reversed the gold flow, pushing it back toward Great Britain and allowing Britain to return to the gold standard in 1925. At the time it appeared that the Fed’s monetary policy had successfully accomplished its goals.

By the summer of 1924 the business slump was over and the economy again began to grow rapidly. By the mid-1920s real estate speculation had arisen in many urban areas in the United States, especially in southeastern Florida, and land prices were rising sharply. Stock market prices had also begun rising more rapidly. The Fed expressed some worry about these developments, and in 1926 it sold some securities to gently slow the real estate and stock market booms. Amid hurricanes and supply bottlenecks the Florida real estate boom collapsed, but the stock market boom continued.

The American economy entered another mild business recession in the fall of 1926 that lasted until the fall of 1927. One of the factors in this was Henry Ford’s shutdown of all of his factories to change over from the Model T to the Model A; his employees were left without a job and without income for over six months. International concerns also reappeared. France, which was preparing to return to the gold standard, had begun accumulating gold, and gold continued to flow into the United States. Some of this gold came from Great Britain, making it difficult for the British to remain on the gold standard. This occasioned a new experiment in central bank cooperation. In July 1927 Benjamin Strong arranged a conference with Governor Montagu Norman of the Bank of England, Governor Hjalmar Schacht of the Reichsbank, and Deputy Governor Charles Rist of the Bank of France in an attempt to promote cooperation among the world’s central bankers. By the time the conference began, the Fed had already taken steps to counteract the business slump and reduce the gold inflow. In early 1927 the Fed reduced discount rates and made large securities purchases. One result was that the gold stock fell from $4.3 billion in mid-1927 to $3.8 billion in mid-1928. Some of the exported gold went to France, and France returned to the gold standard with its undervalued currency. The loss of gold from Britain eased, allowing it to maintain the gold standard.

By early 1928 the Fed was again becoming worried. Stock market prices were rising even faster, and the apparent speculative bubble in the stock market was of some concern to Fed authorities. The Fed was also concerned about the loss of gold and wanted to bring that to an end. To do this it sold securities and, in three steps, raised the discount rate to 5 percent by July 1928. To this point the Federal Reserve Board had largely agreed with district bank policy changes. However, problems began to develop.

During the stock market boom of the late 1920s the Federal Reserve Board preferred to use “moral suasion” rather than increases in discount rates to lessen member bank borrowing. The New York bank insisted that moral suasion would not work unless backed up by literal credit rationing on a bank-by-bank basis, which it, and the other district banks, were unwilling to undertake; the district banks insisted that discount rates had to be increased. The Federal Reserve Board countered that such a general policy change would slow economic activity across the board rather than target stock market speculation specifically. The result was that little was done for a year: discount rates were not raised, but neither were open market purchases undertaken. Rates were finally raised to 6 percent in August 1929, by which time the contraction had already begun. In late October the stock market crashed, and America slid into the Great Depression.

In November, following the stock market crash, the Fed reduced the discount rate to 4.5 percent. In January it reduced the rate again, beginning a series of decreases that brought the rate to 2.5 percent by the end of 1930. No further open market operations were undertaken for the next six months. As banks reduced their discounting in 1930, the stock of money declined. There was a banking crisis in the Southeast in November and December of 1930, and in its wake the public’s holding of currency relative to deposits and banks’ reserve ratios began to rise and continued to do so through the end of the Great Depression.

Conclusion

Though some disagree, there is growing evidence that the behavior of the American economy in the 1920s did not cause the Great Depression. The depressed 1930s were not “retribution” for the exuberant growth of the 1920s. The weakness of a few economic sectors in the 1920s did not forecast the contraction from 1929 to 1933. Rather, it was the depression of the 1930s and the Second World War that interrupted the economic growth that had begun in the 1920s and that resumed after the war. Just as the construction of skyscrapers that began in the 1920s resumed in the 1950s, so did real economic growth and progress. In retrospect, the introduction and expansion of new technologies and industries in the 1920s, such as autos, household electric appliances, radio, and electric utilities, were echoed in the 1990s by the expanding use and development of the personal computer and the rise of the internet. The 1920s have much to teach us about the growth and development of the American economy.

References

Adams, Walter, ed. The Structure of American Industry, 5th ed. New York: Macmillan Publishing Co., 1977.

Aldcroft, Derek H. From Versailles to Wall Street, 1919-1929. Berkeley: The University of California Press, 1977.

Allen, Frederick Lewis. Only Yesterday. New York: Harper and Brothers, 1931.

Alston, Lee J. “Farm Foreclosures in the United States During the Interwar Period.” The Journal of Economic History 43, no. 4 (1983): 885-904.

Alston, Lee J., Wayne A. Grove, and David C. Wheelock. “Why Do Banks Fail? Evidence from the 1920s.” Explorations in Economic History 31 (1994): 409-431.

Ankli, Robert. “Horses vs. Tractors on the Corn Belt.” Agricultural History 54 (1980): 134-148.

Ankli, Robert and Alan L. Olmstead. “The Adoption of the Gasoline Tractor in California.” Agricultural History 55 (1981): 213-230.

Appel, Joseph H. The Business Biography of John Wanamaker, Founder and Builder. New York: The Macmillan Co., 1930.

Baker, Jonathan B. “Identifying Cartel Pricing Under Uncertainty: The U.S. Steel Industry, 1933-1939.” The Journal of Law and Economics 32 (1989): S47-S76.

Barger, E. L, et al. Tractors and Their Power Units. New York: John Wiley and Sons, 1952.

Barnouw, Erik. A Tower in Babel: A History of Broadcasting in the United States: Vol. I—to 1933. New York: Oxford University Press, 1966.

Barnouw, Erik. The Golden Web: A History of Broadcasting in the United States: Vol. II—1933 to 1953. New York: Oxford University Press, 1968.

Beasley, Norman. Main Street Merchant: The Story of the J. C. Penney Company. New York: Whittlesey House, 1948.

Beckman, Theodore N. and Herman C. Nolen. The Chain Store Problem: A Critical Analysis. New York: McGraw-Hill Book Co., 1938.

Benson, Susan Porter. Counter Cultures: Saleswomen, Managers, and Customers in American Department Stores, 1890-1940. Urbana, IL: University of Illinois Press, 1986.

Bernstein, Irving. The Lean Years: A History of the American Worker, 1920-1933. Boston: Houghton Mifflin Co., 1960.

Bernstein, Michael A. The Great Depression: Delayed Recovery and Economic Change in America, 1929-1939. New York: Cambridge University Press, 1987.

Bishop, Jerry E. “Stock Market Experiment Suggests Inevitability of Booms and Busts.” The Wall Street Journal, 17 November, 1987.

Board of Governors of the Federal Reserve System. Banking and Monetary Statistics. Washington: USGPO, 1943.

Bogue, Allan G. “Changes in Mechanical and Plant Technology: The Corn Belt, 1910-1940.” The Journal of Economic History 43 (1983): 1-26.

Breit, William and Elzinga, Kenneth. The Antitrust Casebook: Milestones in Economic Regulation, 2d ed. Chicago: The Dryden Press, 1989.

Bright, Arthur A., Jr. The Electric Lamp Industry: Technological Change and Economic Development from 1800 to 1947. New York: Macmillan, 1947.

Brody, David. Labor in Crisis: The Steel Strike. Philadelphia: J. B. Lippincott Co., 1965.

Brooks, John. Telephone: The First Hundred Years. New York: Harper and Row, 1975.

Brown, D. Clayton. Electricity for Rural America: The Fight for the REA. Westport, CT: The Greenwood Press, 1980.

Brown, William A., Jr. The International Gold Standard Reinterpreted, 1914-1934, 2 vols. New York: National Bureau of Economic Research, 1940.

Brunner, Karl and Allan Meltzer. “What Did We Learn from the Monetary Experience of the United States in the Great Depression?” Canadian Journal of Economics 1 (1968): 334-48.

Bryant, Keith L., Jr., and Henry C. Dethloff. A History of American Business. Englewood Cliffs, NJ: Prentice-Hall, Inc., 1983.

Bucklin, Louis P. Competition and Evolution in the Distributive Trades. Englewood Cliffs, NJ: Prentice-Hall, 1972.

Bullock, Roy J. “The Early History of the Great Atlantic & Pacific Tea Company,” Harvard Business Review 11 (1933): 289-93.

Bullock, Roy J. “A History of the Great Atlantic & Pacific Tea Company Since 1878.” Harvard Business Review 12 (1933): 59-69.

Cecchetti, Stephen G. “Understanding the Great Depression: Lessons for Current Policy.” In The Economics of the Great Depression, Edited by Mark Wheeler. Kalamazoo, MI: W. E. Upjohn Institute for Employment Research, 1998.

Chandler, Alfred D., Jr. Strategy and Structure: Chapters in the History of the American Industrial Enterprise. Cambridge, MA: The M.I.T. Press, 1962.

Chandler, Alfred D., Jr. The Visible Hand: The Managerial Revolution in American Business. Cambridge, MA: The Belknap Press of Harvard University Press, 1977.

Chandler, Alfred D., Jr. Giant Enterprise: Ford, General Motors, and the American Automobile Industry. New York: Harcourt, Brace, and World, 1964.

Chester, Giraud, and Garnet R. Garrison. Radio and Television: An Introduction. New York: Appleton-Century Crofts, 1950.

Clewett, Richard C. “Mass Marketing of Consumers’ Goods.” In The Growth of the American Economy, 2d ed. Edited by Harold F. Williamson. Englewood Cliffs, NJ: Prentice-Hall, 1951.

Cochran, Thomas C. 200 Years of American Business. New York: Delta Books, 1977.

Cohen, Yehoshua S. Diffusion of an Innovation in an Urban System: The Spread of Planned Regional Shopping Centers in the United States, 1949-1968. Chicago: The University of Chicago, Department of Geography, Research Paper No. 140, 1972.

Daley, Robert. An American Saga: Juan Trippe and His Pan American Empire. New York: Random House, 1980.

Clarke, Sally. “New Deal Regulation and the Revolution in American Farm Productivity: A Case Study of the Diffusion of the Tractor in the Corn Belt, 1920-1940.” The Journal of Economic History 51, no. 1 (1991): 105-115.

Cohen, Avi. “Technological Change as Historical Process: The Case of the U.S. Pulp and Paper Industry, 1915-1940.” The Journal of Economic History 44 (1984): 775-79.

Davies, R. E. G. A History of the World’s Airlines. London: Oxford University Press, 1964.

Dearing, Charles L., and Wilfred Owen. National Transportation Policy. Washington: The Brookings Institution, 1949.

Degen, Robert A. The American Monetary System: A Concise Survey of Its Evolution Since 1896. Lexington, MA: Lexington Books, 1987.

De Long, J. Bradford and Andrei Shleifer. “The Stock Market Bubble of 1929: Evidence from Closed-end Mutual Funds.” The Journal of Economic History 51 (September 1991): 675-700.

Devine, Warren D., Jr. “From Shafts to Wires: Historical Perspectives on Electrification.” The Journal of Economic History 43 (1983): 347-372.

Eckert, Ross D., and George W. Hilton. “The Jitneys.” The Journal of Law and Economics 15 (October 1972): 293-326.

Eichengreen, Barry, ed. The Gold Standard in Theory and History. New York: Metheun, 1985.

Eichengreen, Barry. “The Political Economy of the Smoot-Hawley Tariff.” Research in Economic History 12 (1989): 1-43.

Eichengreen, Barry. Golden Fetters: The Gold Standard and the Great Depression, 1919-1939. New York: Oxford University Press, 1992.

Eis, Carl. “The 1919-1930 Merger Movement in American Industry.” The Journal of Law and Economics XII (1969): 267-96.

Emmet, Boris, and John E. Jeuck. Catalogues and Counters: A History of Sears Roebuck and Company. Chicago: University of Chicago Press, 1950.

Fearon, Peter. War, Prosperity, & Depression: The U.S. Economy, 1917-1945. Lawrence, KS: University of Kansas Press, 1987.

Field, Alexander J. “The Most Technologically Progressive Decade of the Century.” The American Economic Review 93 (2003): 1399-1413.

Fischer, Claude. “The Revolution in Rural Telephony, 1900-1920.” Journal of Social History 21 (1987): 221-38.

Fischer, Claude. “Technology’s Retreat: The Decline of Rural Telephony in the United States, 1920-1940.” Social Science History, Vol. 11 (Fall 1987), pp. 295-327.

Fisher, Irving. The Stock Market Crash—and After. New York: Macmillan, 1930.

French, Michael J. “Structural Change and Competition in the United States Tire Industry, 1920-1937.” Business History Review 60 (1986): 28-54.

French, Michael J. The U.S. Tire Industry. Boston: Twayne Publishers, 1991.

Fricke, Ernest B. “The New Deal and the Modernization of Small Business: The McCreary Tire and Rubber Company, 1930-1940.” Business History Review 56 (1982): 559-76.

Friedman, Milton, and Anna J. Schwartz. A Monetary History of the United States, 1867-1960. Princeton: Princeton University Press, 1963.

Galbraith, John Kenneth. The Great Crash. Boston: Houghton Mifflin, 1954.

Garnet, Robert W. The Telephone Enterprise: The Evolution of the Bell System’s Horizontal Structure, 1876-1900. Baltimore: The Johns Hopkins University Press, 1985.

Gideonse, Max. “Foreign Trade, Investments, and Commercial Policy.” In The Growth of the American Economy, 2d ed. Edited by Harold F. Williamson. Englewood Cliffs, NJ: Prentice-Hall, 1951.

Giedion, Sigfried. Mechanization Takes Command. New York: Oxford University Press, 1948.

Gordon, Robert Aaron. Economic Instability and Growth: The American Record. New York: Harper and Row, 1974.

Gray, Roy Burton. Development of the Agricultural Tractor in the United States, 2 vols. Washington, D. C.: USGPO, 1954.

Gunderson, Gerald. An Economic History of America. New York: McGraw-Hill, 1976.

Hadwiger, Don F., and Clay Cochran. “Rural Telephones in the United States.” Agricultural History 58 (July 1984): 221-38.

Hamilton, James D. “Monetary Factors in the Great Depression.” Journal of Monetary Economics 19 (1987): 145-169.

Hamilton, James D. “The Role of the International Gold Standard in Propagating the Great Depression.” Contemporary Policy Issues 6 (1988): 67-89.

Hayek, Friedrich A. Prices and Production. New York: Augustus M. Kelley, reprint of 1931 edition.

Hayford, Marc and Carl A. Pasurka, Jr. “The Political Economy of the Fordney-McCumber and Smoot-Hawley Tariff Acts.” Explorations in Economic History 29 (1992): 30-50.

Hendrickson, Robert. The Grand Emporiums: The Illustrated History of America’s Great Department Stores. Briarcliff Manor, NY: Stein and Day, 1979.

Herbst, Anthony F., and Joseph S. K. Wu. “Some Evidence of Subsidization of the U.S. Trucking Industry, 1900-1920.” The Journal of Economic History 33 (June 1973): 417-33.

Higgs, Robert. Crisis and Leviathan: Critical Episodes in the Growth of American Government. New York: Oxford University Press, 1987.

Hilton, George W., and John Due. The Electric Interurban Railways in America. Stanford: Stanford University Press, 1960.

Hoffman, Elizabeth and Gary D. Libecap. “Institutional Choice and the Development of U.S. Agricultural Policies in the 1920s.” The Journal of Economic History 51 (1991): 397-412.

Holt, Charles F. “Who Benefited from the Prosperity of the Twenties?” Explorations in Economic History 14 (1977): 277-289.

Hower, Ralph W. History of Macy’s of New York, 1858-1919. Cambridge, MA: Harvard University Press, 1946.

Hubbard, R. Glenn, Ed. Financial Markets and Financial Crises. Chicago: University of Chicago Press, 1991.

Hunter, Louis C. “Industry in the Twentieth Century.” In The Growth of the American Economy, 2d ed., edited by Harold F. Williamson. Englewood Cliffs, NJ: Prentice-Hall, 1951.

Jerome, Harry. Mechanization in Industry. New York: National Bureau of Economic Research, 1934.

Johnson, H. Thomas. “Postwar Optimism and the Rural Financial Crisis.” Explorations in Economic History 11, no. 2 (1973-1974): 173-192.

Jones, Fred R. and William H. Aldred. Farm Power and Tractors, 5th ed. New York: McGraw-Hill, 1979.

Keller, Robert. “Factor Income Distribution in the United States During the 20’s: A Reexamination of Fact and Theory.” The Journal of Economic History 33 (1973): 252-95.

Kelly, Charles J., Jr. The Sky’s the Limit: The History of the Airlines. New York: Coward-McCann, 1963.

Kindleberger, Charles. The World in Depression, 1929-1939. Berkeley: The University of California Press, 1973.

Klebaner, Benjamin J. Commercial Banking in the United States: A History. New York: W. W. Norton and Co., 1974.

Kuznets, Simon. Shares of Upper Income Groups in Income and Savings. New York: National Bureau of Economic Research, 1953.

Lebhar, Godfrey M. Chain Stores in America, 1859-1962. New York: Chain Store Publishing Corp., 1963.

Lewis, Cleona. America’s Stake in International Investments. Washington: The Brookings Institution, 1938.

Livesay, Harold C. and Patrick G. Porter. “Vertical Integration in American Manufacturing, 1899-1948.” The Journal of Economic History 29 (1969): 494-500.

Lipartito, Kenneth. The Bell System and Regional Business: The Telephone in the South, 1877-1920. Baltimore: The Johns Hopkins University Press, 1989.

Liu, Tung, Gary J. Santoni, and Courteney C. Stone. “In Search of Stock Market Bubbles: A Comment on Rappoport and White.” The Journal of Economic History 55 (1995): 647-654.

Lorant, John. “Technological Change in American Manufacturing During the 1920s.” The Journal of Economic History 33 (1967): 243-47.

McDonald, Forrest. Insull. Chicago: University of Chicago Press, 1962.

Marburg, Theodore. “Domestic Trade and Marketing.” In The Growth of the American Economy, 2d ed. Edited by Harold F. Williamson. Englewood Cliffs, NJ: Prentice-Hall, 1951.

Markham, Jesse. “Survey of the Evidence and Findings on Mergers.” In Business Concentration and Price Policy, National Bureau of Economic Research. Princeton: Princeton University Press, 1955.

Markin, Rom J. The Supermarket: An Analysis of Growth, Development, and Change. Rev. ed. Pullman, WA: Washington State University Press, 1968.

McCraw, Thomas K. TVA and the Power Fight, 1933-1937. Philadelphia: J. B. Lippincott, 1971.

McCraw, Thomas K. and Forest Reinhardt. “Losing to Win: U.S. Steel’s Pricing, Investment Decisions, and Market Share, 1901-1938.” The Journal of Economic History 49 (1989): 592-620.

McMillin, W. Douglas and Randall E. Parker. “An Empirical Analysis of Oil Price Shocks in the Interwar Period.” Economic Inquiry 32 (1994): 486-497.

McNair, Malcolm P., and Eleanor G. May. The Evolution of Retail Institutions in the United States. Cambridge, MA: The Marketing Science Institute, 1976.

Mercer, Lloyd J. “Comment on Papers by Scheiber, Keller, and Raup.” The Journal of Economic History 33 (1973): 291-95.

Mintz, Ilse. Deterioration in the Quality of Foreign Bonds Issued in the United States, 1920-1930. New York: National Bureau of Economic Research, 1951.

Mishkin, Frederic S. “Asymmetric Information and Financial Crises: A Historical Perspective.” In Financial Markets and Financial Crises Edited by R. Glenn Hubbard. Chicago: University of Chicago Press, 1991.

Morris, Lloyd. Not So Long Ago. New York: Random House, 1949.

Mosco, Vincent. Broadcasting in the United States: Innovative Challenge and Organizational Control. Norwood, NJ: Ablex Publishing Corp., 1979.

Moulton, Harold G. et al. The American Transportation Problem. Washington: The Brookings Institution, 1933.

Mueller, John. “Lessons of the Tax-Cuts of Yesteryear.” The Wall Street Journal, March 5, 1981.

Musoke, Moses S. “Mechanizing Cotton Production in the American South: The Tractor, 1915-1960.” Explorations in Economic History 18 (1981): 347-75.

Nelson, Daniel. “Mass Production and the U.S. Tire Industry.” The Journal of Economic History 48 (1987): 329-40.

Nelson, Ralph L. Merger Movements in American Industry, 1895-1956. Princeton: Princeton University Press, 1959.

Niemi, Albert W., Jr. U.S. Economic History, 2nd ed. Chicago: Rand McNally Publishing Co., 1980.

Norton, Hugh S. Modern Transportation Economics. Columbus, OH: Charles E. Merrill Books, Inc., 1963.

Nystrom, Paul H. Economics of Retailing, vol. 1, 3rd ed. New York: The Ronald Press Co., 1930.

Oshima, Harry T. “The Growth of U.S. Factor Productivity: The Significance of New Technologies in the Early Decades of the Twentieth Century.” The Journal of Economic History 44 (1984): 161-70.

Parker, Randall and Paul Flacco. “Income Uncertainty and the Onset of the Great Depression.” Economic Inquiry 30 (1992): 154-171.

Parker, William N. “Agriculture.” In American Economic Growth: An Economist’s History of the United States, edited by Lance E. Davis, Richard A. Easterlin, William N. Parker, et al. New York: Harper and Row, 1972.

Passer, Harold C. The Electrical Manufacturers, 1875-1900. Cambridge: Harvard University Press, 1953.

Peak, Hugh S., and Ellen F. Peak. Supermarket Merchandising and Management. Englewood Cliffs, NJ: Prentice-Hall, 1977.

Pilgrim, John. “The Upper Turning Point of 1920: A Reappraisal.” Explorations in Economic History 11 (1974): 271-98.

Rae, John B. Climb to Greatness: The American Aircraft Industry, 1920-1960. Cambridge: The M.I.T. Press, 1968.

Rae, John B. The American Automobile Industry. Boston: Twayne Publishers, 1984.

Rappoport, Peter and Eugene N. White. “Was the Crash of 1929 Expected?” American Economic Review 84 (1994): 271-281.

Rappoport, Peter and Eugene N. White. “Was There a Bubble in the 1929 Stock Market?” The Journal of Economic History 53 (1993): 549-574.

Resseguie, Harry E. “Alexander Turney Stewart and the Development of the Department Store, 1823-1876,” Business History Review 39 (1965): 301-22.

Rezneck, Samuel. “Mass Production and the Use of Energy.” In The Growth of the American Economy, 2d ed., edited by Harold F. Williamson. Englewood Cliffs, NJ: Prentice-Hall, 1951.

Rockwell, Llewellyn H., Jr., ed. The Gold Standard: An Austrian Perspective. Lexington, MA: Lexington Books, 1985.

Romer, Christina. “Spurious Volatility in Historical Unemployment Data.” The Journal of Political Economy 94 (1986): 1-37.

Romer, Christina. “New Estimates of Prewar Gross National Product and Unemployment.” Journal of Economic History 46 (1986): 341-352.

Romer, Christina. “World War I and the Postwar Depression: A Reinterpretation Based on Alternative Estimates of GNP.” Journal of Monetary Economics 22 (1988): 91-115.

Romer, Christina and Jeffrey A. Miron. “A New Monthly Index of Industrial Production, 1884-1940.” Journal of Economic History 50 (1990): 321-337.

Romer, Christina. “The Great Crash and the Onset of the Great Depression.” Quarterly Journal of Economics 105 (1990): 597-625.

Romer, Christina. “Remeasuring Business Cycles.” The Journal of Economic History 54 (1994): 573-609.

Roose, Kenneth D. “The Production Ceiling and the Turning Point of 1920.” American Economic Review 48 (1958): 348-56.

Rosen, Philip T. The Modern Stentors: Radio Broadcasters and the Federal Government, 1920-1934. Westport, CT: The Greenwood Press, 1980.

Rosen, Philip T. “Government, Business, and Technology in the 1920s: The Emergence of American Broadcasting.” In American Business History: Case Studies. Edited by Henry C. Dethloff and C. Joseph Pusateri. Arlington Heights, IL: Harlan Davidson, 1987.

Rothbard, Murray N. America’s Great Depression. Kansas City: Sheed and Ward, 1963.

Sampson, Roy J., and Martin T. Ferris. Domestic Transportation: Practice, Theory, and Policy, 4th ed. Boston: Houghton Mifflin Co., 1979.

Samuelson, Paul and Everett E. Hagen. After the War—1918-1920. Washington: National Resources Planning Board, 1943.

Santoni, Gary and Gerald P. Dwyer, Jr. “The Great Bull Markets, 1924-1929 and 1982-1987: Speculative Bubbles or Economic Fundamentals?” Federal Reserve Bank of St. Louis Review 69 (1987): 16-29.

Santoni, Gary, and Gerald P. Dwyer, Jr. “Bubbles vs. Fundamentals: New Evidence from the Great Bull Markets.” In Crises and Panics: The Lessons of History. Edited by Eugene N. White. Homewood, IL: Dow Jones/Irwin, 1990.

Scherer, Frederick M. and David Ross. Industrial Market Structure and Economic Performance, 3d ed. Boston: Houghton Mifflin, 1990.

Schlebecker, John T. Whereby We Thrive: A History of American Farming, 1607-1972. Ames, IA: The Iowa State University Press, 1975.

Shepherd, James. “The Development of New Wheat Varieties in the Pacific Northwest.” Agricultural History 54 (1980): 52-63.

Sirkin, Gerald. “The Stock Market of 1929 Revisited: A Note.” Business History Review 49 (Fall 1975): 233-41.

Smiley, Gene. The American Economy in the Twentieth Century. Cincinnati: South-Western Publishing Co., 1994.

Smiley, Gene. “New Estimates of Income Shares During the 1920s.” In Calvin Coolidge and the Coolidge Era: Essays on the History of the 1920s, edited by John Earl Haynes, 215-232. Washington, D.C.: Library of Congress, 1998.

Smiley, Gene. “A Note on New Estimates of the Distribution of Income in the 1920s.” The Journal of Economic History 60, no. 4 (2000): 1120-1128.

Smiley, Gene. Rethinking the Great Depression: A New View of Its Causes and Consequences. Chicago: Ivan R. Dee, 2002.

Smiley, Gene, and Richard H. Keehn. “Margin Purchases, Brokers’ Loans and the Bull Market of the Twenties.” Business and Economic History. 2d series. 17 (1988): 129-42.

Smiley, Gene and Richard H. Keehn. “Federal Personal Income Tax Policy in the 1920s.” The Journal of Economic History 55, no. 2 (1995): 285-303.

Sobel, Robert. The Entrepreneurs: Explorations Within the American Business Tradition. New York: Weybright and Talley, 1974.

Soule, George. Prosperity Decade: From War to Depression: 1917-1929. New York: Holt, Rinehart, and Winston, 1947.

Stein, Herbert. The Fiscal Revolution in America, revised ed. Washington, D.C.: AEI Press, 1990.

Stigler, George J. “Monopoly and Oligopoly by Merger.” American Economic Review, 40 (May 1950): 23-34.

Sumner, Scott. “The Role of the International Gold Standard in Commodity Price Deflation: Evidence from the 1929 Stock Market Crash.” Explorations in Economic History 29 (1992): 290-317.

Swanson, Joseph and Samuel Williamson. “Estimates of National Product and Income for the United States Economy, 1919-1941.” Explorations in Economic History 10, no. 1 (1972): 53-73.

Temin, Peter. “The Beginning of the Depression in Germany.” Economic History Review. 24 (May 1971): 240-48.

Temin, Peter. Did Monetary Forces Cause the Great Depression? New York: W. W. Norton, 1976.

Temin, Peter. The Fall of the Bell System. New York: Cambridge University Press, 1987.

Temin, Peter. Lessons from the Great Depression. Cambridge, MA: The M.I.T. Press, 1989.

Thomas, Gordon, and Max Morgan-Witts. The Day the Bubble Burst. Garden City, NY: Doubleday, 1979.

Ulman, Lloyd. “The Development of Trades and Labor Unions,” In American Economic History, edited by Seymour E. Harris, chapter 14. New York: McGraw-Hill Book Co., 1961.

Ulman, Lloyd. The Rise of the National Trade Union. Cambridge, MA: Harvard University Press, 1955.

U.S. Department of Commerce, Bureau of the Census. Historical Statistics of the United States: Colonial Times to 1970, 2 volumes. Washington, D.C.: USGPO, 1976.

Walsh, Margaret. Making Connections: The Long Distance Bus Industry in the U.S.A. Burlington, VT: Ashgate, 2000.

Wanniski, Jude. The Way the World Works. New York: Simon and Schuster, 1978.

Weiss, Leonard W. Case Studies in American Industry, 3d ed. New York: John Wiley & Sons, 1980.

Whaples, Robert. “Hours of Work in U.S. History.” EH.Net Encyclopedia, edited by Robert Whaples, August 15, 2001. URL: http://www.eh.net/encyclopedia/contents/whaples.work.hours.us.php

Whatley, Warren. “Southern Agrarian Labor Contracts as Impediments to Cotton Mechanization.” The Journal of Economic History 47 (1987): 45-70.

Wheelock, David C. and Subal C. Kumbhakar. “The Slack Banker Dances: Deposit Insurance and Risk-Taking in the Banking Collapse of the 1920s.” Explorations in Economic History 31 (1994): 357-375.

White, Eugene N. “The Stock Market Boom and Crash of 1929 Revisited.” The Journal of Economic Perspectives. 4 (Spring 1990): 67-83.

White, Eugene N., Ed. Crises and Panics: The Lessons of History. Homewood, IL: Dow Jones/Irwin, 1990.

White, Eugene N. “When the Ticker Ran Late: The Stock Market Boom and Crash of 1929.” In Crises and Panics: The Lessons of History Edited by Eugene N. White. Homewood, IL: Dow Jones/Irwin, 1990.

White, Eugene N. “Stock Market Bubbles? A Reply.” The Journal of Economic History 55 (1995): 655-665.

White, William J. “Economic History of Tractors in the United States.” EH.Net Encyclopedia, edited by Robert Whaples, August 15, 2001. URL: http://www.eh.net/encyclopedia/contents/white.tractors.history.us.php

Wicker, Elmus. “Federal Reserve Monetary Policy, 1922-1933: A Reinterpretation.” Journal of Political Economy 73 (1965): 325-43.

Wicker, Elmus. “A Reconsideration of Federal Reserve Policy During the 1920-1921 Depression.” The Journal of Economic History 26 (1966): 223-38.

Wicker, Elmus. Federal Reserve Monetary Policy, 1917-1933. New York: Random House, 1966.

Wigmore, Barrie A. The Crash and Its Aftermath. Westport, CT: Greenwood Press, 1985.

Williams, Raburn McFetridge. The Politics of Boom and Bust in Twentieth-Century America. Minneapolis/St. Paul: West Publishing Co., 1994.

Williamson, Harold F., et al. The American Petroleum Industry: The Age of Energy, 1899-1959. Evanston, IL: Northwestern University Press, 1963.

Wilson, Thomas. Fluctuations in Income and Employment, 3d ed. New York: Pitman Publishing, 1948.

Wilson, Jack W., Richard E. Sylla, and Charles P. Jones. “Financial Market Panics and Volatility in the Long Run, 1830-1988.” In Crises and Panics: The Lessons of History Edited by Eugene N. White. Homewood, IL: Dow Jones/Irwin, 1990.

Winkler, John Kennedy. Five and Ten: The Fabulous Life of F. W. Woolworth. New York: R. M. McBride and Co., 1940.

Wood, Charles. “Science and Politics in the War on Cattle Diseases: The Kansas Experience, 1900-1940.” Agricultural History 54 (1980): 82-92.

Wright, Gavin. Old South, New South: Revolutions in the Southern Economy Since the Civil War. New York: Basic Books, 1986.

Wright, Gavin. “The Origins of American Industrial Success, 1879-1940.” The American Economic Review 80 (1990): 651-668.

Citation: Smiley, Gene. “US Economy in the 1920s”. EH.Net Encyclopedia, edited by Robert Whaples. June 29, 2004. URL http://eh.net/encyclopedia/the-u-s-economy-in-the-1920s/

African Americans in the Twentieth Century

Thomas N. Maloney, University of Utah

The nineteenth century was a time of radical transformation in the political and legal status of African Americans. Blacks were freed from slavery and began to enjoy greater rights as citizens (though full recognition of their rights remained a long way off). Despite these dramatic developments, many economic and demographic characteristics of African Americans at the end of the nineteenth century were not that different from what they had been in the mid-1800s. Tables 1 and 2 present characteristics of black and white Americans in 1900, as recorded in the Census for that year. (The 1900 Census did not record information on years of schooling or on income, so these important variables are left out of these tables, though they will be examined below.) According to the Census, ninety percent of African Americans still lived in the Southern US in 1900 — roughly the same percentage as lived in the South in 1870. Three-quarters of black households were located in rural places. Only about one-fifth of African American household heads owned their own homes (less than half the percentage among whites). About half of black men and about thirty-five percent of black women who reported an occupation to the Census said that they worked as a farmer or a farm laborer, as opposed to about one-third of white men and about eight percent of white women. Outside of farm work, African American men and women were greatly concentrated in unskilled labor and service jobs. Most black children had not attended school in the year before the Census, and white children were much more likely to have attended. So the members of a typical African American family at the start of the twentieth century lived and worked on a farm in the South and did not own their home. Children in these families were unlikely to be in school even at very young ages.

By 1990 (the most recent Census for which such statistics are available at the time of this writing), the economic conditions of African Americans had changed dramatically (see Tables 1 and 2). They had become much less concentrated in the South, in rural places, and in farming jobs and had entered better blue-collar jobs and the white-collar sector. They were nearly twice as likely to own their own homes at the end of the century as in 1900, and their rates of school attendance at all ages had risen sharply. Even after this century of change, though, African Americans were still relatively disadvantaged in terms of education, labor market success, and home ownership.

Table 1: Characteristics of Households in 1900 and 1990

1900 1990
Black White Black White
A. Region of Residence
South 90.1% 23.5% 53.0% 32.9%
Northeast 3.6% 31.8% 18.9% 20.9%
Midwest 5.8% 38.5% 18.9% 25.3%
West 0.5% 6.2% 9.2% 21.0%
B. Share Rural
75.8% 56.1% 11.9% 25.7%
C. Share of Homes Owner-Occupied
22.1% 49.2% 43.4% 67.3%

Based on household heads in Integrated Public Use Microdata Series Census samples for 1900 and 1990.

Table 2: Characteristics of Individuals in 1900 and 1990

1900 1990
Male Female Male Female
Black White Black White Black White Black White
A. Occupational Distribution
Professional/Technical 1.3% 3.8% 1.6% 10.7% 9.9% 17.2% 16.6% 21.9%
Proprietor/Manager/Official 0.8 6.9 0.2 2.6 6.5 14.7 5.4 10.0
Clerical 0.2 4.0 0.2 5.6 10.7 7.2 29.7 31.9
Sales 0.3 4.2 0.2 4.1 2.9 6.7 4.1 7.3
Craft 4.2 15.9 0 3.1 17.4 20.7 2.3 2.1
Operative 7.3 13.4 1.8 24.5 20.7 14.9 12.4 8.0
Laborer 25.5 14.0 6.5 1.5 12.2 7.2 2.0 1.5
Private Service 2.2 0.4 33.0 33.2 0.1 0 2.0 0.8
Other Service 4.8 2.4 20.6 6.6 18.5 9.0 25.3 15.8
Farmer 30.8 23.9 6.7 6.1 0.2 1.4 0.1 0.4
Farm Laborer 22.7 11.0 29.4 2.0 1.0 1.0 0.4 0.5
B. Percent Attending School by Age
Ages 6 to 13 37.8% 72.2% 41.9% 71.9% 94.5% 95.3% 94.2% 95.5%
Ages 14 to 17 26.7 47.9 36.2 51.5 91.1 93.4 92.6 93.5
Ages 18 to 21 6.8 10.4 5.9 8.6 47.7 54.3 52.9 57.1

Based on Integrated Public Use Microdata Series Census samples for 1900 and 1990. Occupational distributions based on individuals aged 18 to 64 with recorded occupation. School attendance in 1900 refers to attendance at any time in the previous year. School attendance in 1990 refers to attendance since February 1 of that year.

These changes in the lives of African Americans did not occur continuously and steadily throughout the twentieth century. Rather, we can divide the century into three distinct eras: (1) the years from 1900 to 1915, prior to large-scale movement out of the South; (2) the years from 1916 to 1964, marked by migration and urbanization, but prior to the most important government efforts to reduce racial inequality; and (3) the years since 1965, characterized by government antidiscrimination efforts but also by economic shifts which have had a great impact on racial inequality and African American economic status.

1900-1915: Continuation of Nineteenth-Century Patterns

As was the case in the 1800s, African American economic life in the early 1900s centered on Southern cotton agriculture. African Americans grew cotton under a variety of contracts and institutional arrangements. Some were laborers hired for a short period for specific tasks. Many were tenant farmers, renting a piece of land and some of their tools and supplies, and paying the rent at the end of the growing season with a portion of their harvest. Records from Southern farms indicate that white and black farm laborers were paid similar wages, and that white and black tenant farmers worked under similar contracts for similar rental rates. Whites in general, however, were much more likely to own land. A similar pattern is found in Southern manufacturing in these years. Among the fairly small number of individuals employed in manufacturing in the South, white and black workers were often paid comparable wages if they worked at the same job for the same company. However, blacks were much less likely to hold better-paying skilled jobs, and they were more likely to work for lower-paying companies.

While the concentration of African Americans in cotton agriculture persisted, Southern black life changed in other ways in the early 1900s. Limitations on the legal rights of African Americans grew more severe in the South in this era. The 1896 Supreme Court decision in the case of Plessy v. Ferguson provided a legal basis for greater explicit segregation in American society. This decision allowed for the provision of separate facilities and services to blacks and whites as long as the facilities and services were equal. Through the early 1900s, many new laws, known as Jim Crow laws, were passed in Southern states creating legally segregated schools, transportation systems, and lodging. The requirement of equality was not generally enforced, however. Perhaps the most important and best-known example of separate and unequal facilities in the South was the system of public education. Through the first decades of the twentieth century, resources were funneled to white schools, raising teacher salaries and per-pupil funding while reducing class size. Black schools experienced no real improvements of this type. The result was a sharp decline in the relative quality of schooling available to African-American children.

1916-1964: Migration and Urbanization

The mid-1910s witnessed the first large-scale movement of African Americans out of the South. The share of African Americans living in the South fell by about four percentage points between 1910 and 1920 (with nearly all of this movement after 1915) and another six points between 1920 and 1930 (see Table 3). What caused this tremendous relocation of African Americans? The worsening political and social conditions in the South, noted above, certainly played a role. But the specific timing of the migration appears to be connected to economic factors. Northern employers in many industries faced strong demand for their products and so had a great need for labor. Their traditional source of cheap labor, European immigrants, dried up in the late 1910s as the coming of World War I interrupted international migration. After the end of the war, new laws limiting immigration to the US would keep the flow of European labor at a low level. Northern employers thus needed a new source of cheap labor, and they turned to Southern blacks. In some cases, employers would send recruiters to the South to find workers and to pay their way North. In addition to this pull from the North, economic events in the South served to push out many African Americans. Destruction of the cotton crop by the boll weevil, an insect that feeds on cotton plants, and poor weather in some places during these years made new opportunities in the North even more attractive.

Table 3: Share of African Americans Residing in the South

Year Share Living in South
1890 90%
1900 90%
1910 89%
1920 85%
1930 79%
1940 77%
1950 68%
1960 60%
1970 53%
1980 53%
1990 53%

Sources: 1890 to 1960: Historical Statistics of the United States, volume 1, pp. 22-23; 1970: Statistical Abstract of the United States, 1973, p. 27; 1980: Statistical Abstract of the United States, 1985, p. 31; 1990: Statistical Abstract of the United States, 1996, p. 31.

Pay was certainly better, and opportunities were wider, in the North. Nonetheless, the region was not entirely welcoming to these migrants. As the black population in the North grew in the 1910s and 1920s, residential segregation grew more pronounced, as did school segregation. In some cases, racial tensions boiled over into deadly violence. The late 1910s were scarred by severe race riots in a number of cities, including East St. Louis (1917) and Chicago (1919).

Access to Jobs in the North

Within the context of this broader turmoil, black migrants did gain entry to new jobs in Northern manufacturing. As in Southern manufacturing, pay differences between blacks and whites working the same job at the same plant were generally small. However, black workers had access to a limited set of jobs and remained heavily concentrated in unskilled laborer positions. Black workers gained admittance to only a limited set of firms, as well. For instance, in the auto industry, the Ford Motor Company hired a tremendous number of black workers, while other auto makers in Detroit typically excluded these workers. Because their alternatives were limited, black workers could be worked very intensely and could also be used in particularly unpleasant and dangerous settings, such as the killing and cutting areas of meat packing plants, foundry departments in auto plants, and blast furnaces in steel plants.

Unions

Through the 1910s and 1920s, relations between black workers and Northern labor unions were often antagonistic. Many unions in the North had explicit rules barring membership by black workers. When faced with a strike (or the threat of a strike), employers often hired in black workers, knowing that these workers were unlikely to become members of the union or to be sympathetic to its goals. Indeed, there is evidence that black workers were used as strike breakers in a great number of labor disputes in the North in the 1910s and 1920s. Beginning in the mid-1930s, African Americans gained greater inclusion in the union movement. By that point, it was clear that black workers were entrenched in manufacturing, and that any broad-based organizing effort would have to include them.

Conditions around 1940

As is apparent in Table 3, black migration slowed in the 1930s, due to the onset of the Great Depression and the resulting high level of unemployment in the North in the 1930s. Beginning in about 1940, preparations for war again created tight labor markets in Northern cities, though, and, as in the late 1910s, African Americans journeyed north to take advantage of new opportunities. In some ways, moving to the North in the 1940s may have appeared less risky than it had during the World War I era. By 1940, there were large black communities in a number of Northern cities. Newspapers produced by these communities circulated in the South, providing information about housing, jobs, and social conditions. Many Southern African Americans now had friends and relatives in the North to help with the transition.

In other ways, though, labor market conditions were less auspicious for black workers in 1940 than they had been during the World War I years. Unemployment remained high in 1940, with about fourteen percent of white workers either unemployed or participating in government work relief programs. Employers hired these unemployed whites before turning to African American labor. Even as labor markets tightened, black workers gained little access to war-related employment. The President issued orders in 1941 that companies doing war-related work had to hire in a non-discriminatory way, and the Fair Employment Practice Committee was created to monitor the hiring practices of these companies. Initially, few resources were devoted to this effort, but in 1943 the government began to enforce fair employment policies more aggressively. These efforts appear to have aided black employment, at least for the duration of the war.

Gains during the 1940s and 1950s

In 1940, the Census Bureau began to collect data on individual incomes, so we can track changes in black income levels and in black/white income ratios in more detail from this date forward. Table 4 provides annual earnings figures for black and white men and women from 1939 (recorded in the 1940 Census) to 1989 (recorded in the 1990 Census). The big gains of the 1940s, both in level of earnings and in the black/white income ratio, are very obvious. Often, we focus on the role of education in producing higher earnings, but the gap between average schooling levels for blacks and whites did not change much in the 1940s (particularly for men), so schooling levels could not have contributed too much to the relative income gains for blacks in the 1940s (see Table 5). Rather, much of the improvement in the black/white pay ratio in this decade simply reflects ongoing migration: blacks were leaving the South, a low-wage region, and entering the North, a high-wage region. Some of the improvement reflects access to new jobs and industries for black workers, due to the tight labor markets and antidiscrimination efforts of the war years.

Table 4: Mean Annual Earnings of Wage and Salary Workers Aged 20 and Over

Male Female
Year Black White Ratio Black White Ratio
1939 $537.45 $1234.41 .44 $331.32 $771.69 .43
1949 1761.06 2984.96 .59 992.35 1781.96 .56
1959 2848.67 5157.65 .55 1412.16 2371.80 .59
1969 5341.64 8442.37 .63 3205.12 3786.45 .85
1979 11404.46 16703.67 .68 7810.66 7893.76 .99
1989 19417.03 28894.69 .67 15319.29 16135.65 .95

Source: Integrated Public Use Microdata Series Census samples for 1940, 1950, 1960, 1970, 1980, and 1990. Includes only those with non-zero earnings who were not in school. All figures are in current (nominal) dollars.
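
The ratios printed in Table 4 are simply each year’s black mean earnings divided by the corresponding white mean, rounded to two decimal places. As an illustration only (this short sketch is not part of the original article; the figures are the nominal-dollar means for men copied from the table above), a few lines of Python reproduce the male ratios:

male_earnings = {
    1939: (537.45, 1234.41),
    1949: (1761.06, 2984.96),
    1959: (2848.67, 5157.65),
    1969: (5341.64, 8442.37),
    1979: (11404.46, 16703.67),
    1989: (19417.03, 28894.69),
}

for year, (black, white) in sorted(male_earnings.items()):
    # Ratio of black to white mean annual earnings, as in the table's Ratio column
    print(year, round(black / white, 2))  # 1939 -> 0.44, ..., 1989 -> 0.67

The same division applied to the female columns yields the ratios from .43 in 1939 to .95 in 1989.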

Table 5: Years of School Attended for Individuals 20 and Over

Male Female
Year Black White Difference Black White Difference
1940 5.9 9.1 3.2 6.9 10.5 3.6
1950 6.8 9.8 3 7.8 10.8 3
1960 7.9 10.5 2.6 8.8 11.0 2.2
1970 9.4 11.4 2.0 10.3 11.7 1.4
1980 11.2 12.5 1.3 11.8 12.4 0.6

Source: Integrated Public Use Microdata Series Census samples for 1940, 1950, 1960, 1970, and 1980. Based on highest grade attended by wage and salary workers aged 20 and over who had non-zero earnings in the previous year and who were not in school at the time of the census. Comparable figures are not available in the 1990 Census.

Black workers’ relative incomes were also increased by some general changes in labor demand and supply and in labor market policy in the 1940s. During the war, demand for labor was particularly strong in the blue-collar manufacturing sector. Workers were needed to build tanks, jeeps, and planes, and these jobs did not require a great deal of formal education or skill. In addition, the minimum wage was raised in 1945, and wartime regulations allowed greater pay increases for low-paid workers than for highly-paid workers. After the war, the supply of college-educated workers increased dramatically. The GI Bill, passed in 1944, provided large subsidies to help pay the expenses of World War II veterans who wanted to attend college. This policy helped a generation of men further their education and get a college degree. So strong labor demand, government policies that raised wages at the bottom, and a rising supply of well-educated workers meant that less-educated, less-skilled workers received particularly large wage increases in the 1940s. Because African Americans were concentrated among the less-educated, low-earning workers, these general economic forces were especially helpful to African Americans and served to raise their pay relative to that of whites.

The effect of these broader forces on racial inequality helps to explain the contrast between the 1940s and 1950s evident in Table 4. The black-white pay ratio may have actually fallen a bit for men in the 1950s, and it rose much more slowly in the 1950s than in the 1940s for women. Some of this slowdown in progress reflects weaker labor markets in general, which reduced black access to new jobs. In addition, the general narrowing of the wage distribution that occurred in the 1940s stopped in the 1950s. Less-educated, lower-paid workers were no longer getting particularly large pay increases. As a result, blacks did not gain ground on white workers. It is striking that pay gains for black workers slowed in the 1950s despite a more rapid decline in the black-white schooling gap during these years (Table 5).

Unemployment

On the whole, migration and entry to new industries played a large role in promoting black relative pay increases through the years from World War I to the late 1950s. However, these changes also had some negative effects on black labor market outcomes. As black workers left Southern agriculture, their relative rate of unemployment rose. For the nation as a whole, black and white unemployment rates were about equal as late as 1930. This equality was to a great extent the result of lower rates of unemployment for everyone in the rural South relative to the urban North. Farm owners and sharecroppers tended not to lose their work entirely during weak markets, whereas manufacturing employees might be laid off or fired during downturns. Still, while unemployment was greater for everyone in the urban North, it was disproportionately greater for black workers. Their unemployment rates in Northern cities were much higher than white unemployment rates in the same cities. One result of black migration, then, was a dramatic increase in the ratio of black unemployment to white unemployment. The black/white unemployment ratio rose from about 1 in 1930 (indicating equal unemployment rates for blacks and whites) to about 2 by 1960. The ratio remained at this high level through the end of the twentieth century.

1965-1999: Civil Rights and New Challenges

In the 1960s, black workers again began to experience more rapid increases in relative pay levels (see Table 4). These years also marked a new era in government involvement in the labor market, particularly with regard to racial inequality and discrimination. One of the most far-reaching changes in government policy regarding race actually occurred a bit earlier, in the 1954 Supreme Court decision in the case of Brown v. the Board of Education of Topeka, Kansas. In that case, the Supreme Court ruled that racial segregation of schools was unconstitutional. However, substantial desegregation of Southern schools (and some Northern schools) would not take place until the late 1960s and early 1970s.

School desegregation, therefore, was probably not a primary force in generating the relative pay gains of the 1960s and 1970s. Other anti-discrimination policies enacted in the mid-1960s did play a large role, however. The Civil Rights Act of 1964 outlawed discrimination in a broad set of social arenas. Title VII of this law banned discrimination in hiring, firing, pay, promotion, and working conditions and created the Equal Employment Opportunity Commission to investigate complaints of workplace discrimination. A second policy, Executive Order 11246 (issued by President Johnson in 1965), set up more stringent anti-discrimination rules for businesses working on government contracts. There has been much debate regarding the importance of these policies in promoting better jobs and wages for African Americans. There is now increasing agreement that these policies had positive effects on labor market outcomes for black workers at least through the mid-1970s. Several pieces of evidence point to this conclusion. First, the timing is right. Many indicators of employment and wage gains show marked improvement beginning in 1965, soon after the implementation of these policies. Second, job and wage gains for black workers in the 1960s were, for the first time, concentrated in the South. Enforcement of anti-discrimination policy was targeted on the South in this era. It is also worth noting that rates of black migration out of the South dropped substantially after 1965, perhaps reflecting a sense of greater opportunity there due to these policies. Finally, these gains for black workers occurred simultaneously in many industries and many places, under a variety of labor market conditions. Whatever generated these improvements had to come into effect broadly at one point in time. Federal antidiscrimination policy fits this description.

Return to Stagnation in Relative Income

The years from 1979 to 1989 saw the return of stagnation in black relative incomes. Part of this stagnation may reflect the reversal of the shifts in wage distribution that occurred during the 1940s. In the late 1970s and especially in the 1980s, the US wage distribution grew more unequal. Individuals with less education, particularly those with no college education, saw their pay decline relative to the better-educated. Workers in blue-collar manufacturing jobs were particularly hard hit. The concentration of black workers, especially black men, in these categories meant that their pay suffered relative to that of whites. Another possible factor in the stagnation of black relative pay in the 1980s was weakened enforcement of antidiscrimination policies at this time.

While black relative incomes stagnated on average, black residents of urban centers suffered particular hardships in the 1970s and 1980s. The loss of blue-collar manufacturing jobs was most severe in these areas. For a variety of reasons, including the introduction of new technologies that required larger plants, many firms relocated their production facilities outside of central cities, to suburbs and even more peripheral areas. Central cities increasingly became information-processing and financial centers. Jobs in these industries generally required a college degree or even more education. Despite decades of rising educational levels, African Americans were still barely half as likely as whites to have completed four years of college or more: in 1990, 11.3% of blacks over the age of 25 had four years of college or more, versus 22% of whites. As a result of these developments, many blacks in urban centers found themselves surrounded by jobs for which they were poorly qualified, and at some distance from the types of jobs for which they were qualified, the jobs their parents had moved to the city for in the first place. Their ability to relocate near these blue-collar jobs seems to have been limited both by ongoing discrimination in the housing market and by a lack of resources. Those African Americans with the resources to exit the central city often did so, leaving behind communities marked by extremely high rates of poverty and unemployment.

Over the fifty years from 1939 to 1989, through these episodes of gain and stagnation, the ratio of black men’s average annual earnings to white men’s average annual earnings rose about 23 points, from .44 to .67. The timing of improvement in the black female/white female income ratio was similar. However, black women gained much more ground overall: the black-white income ratio for women rose about 50 points over these fifty years and stood at .95 in 1989 (down from .99 in 1979). The education gap between black women and white women declined more than the education gap between black and white men, which contributed to the faster pace of improvement in black women’s relative earnings. Furthermore, black female workers were more likely to be employed full-time than were white female workers, which raised their annual income. The reverse was true among men: white male workers were somewhat more likely to be employed full-time than were black male workers.

Comparable data on annual incomes from the 2000 Census are not available at the time of this writing. Evidence from other labor market surveys suggests that the tight labor markets of the late 1990s may have brought renewed relative pay gains for black workers. Black workers also experienced sharp declines in unemployment during these years, though black unemployment remained about twice as great as white unemployment.

Beyond the Labor Market: Persistent Gaps in Wealth and Health

When we look beyond these basic measures of labor market success, we find even larger and more persistent gaps between African Americans and white Americans. Wealth differences between blacks and whites continue to be very large. In the mid-1990s, black households held only about one-quarter the amount of wealth that white households held, on average. If we leave out equity in one’s home and personal possessions and focus on more strictly financial, income-producing assets, black households held only about ten to fifteen percent as much wealth as white households. Big differences in wealth holding remain even if we compare black and white households with similar incomes.

Much of this wealth gap reflects the ongoing effects of the historical patterns described above. When freed from slavery, African Americans held no wealth, and their lower incomes prevented them from accumulating wealth at the rate whites did. African Americans found it particularly difficult to buy homes, traditionally a household’s most important asset, due to discrimination in real estate markets. Government housing policies in the 1930s and 1940s may have also reduced their rate of home-buying. While the federal government made low-interest loans and loan insurance available through the Home Owners’ Loan Corporation and the Federal Housing Administration, these programs generally could not be used to acquire homes in black or mixed neighborhoods, usually the only neighborhoods in which blacks could buy, because these were considered to be areas of high risk for loan default. Because wealth is passed on from parents to children, the wealth differences of the mid-twentieth century continue to have an important impact today.

Differences in life expectancy have also proven to be remarkably stubborn. Certainly, black and white mortality patterns are more similar today than they once were. In 1929, the first year for which national figures are available, white life expectancy at birth was 58.6 years and black life expectancy was 46.7 years (for men and women combined). By 2000, white life expectancy had risen to 77.4 years and black life expectancy was 71.8 years. Thus, the black-white gap had fallen from about twelve years to less than six. However, almost all of this reduction in the gap was completed by the early 1960s. In 1961, the black-white gap was 6.5 years. The past forty years have seen very little change in the gap, though life expectancy has risen for both groups.

Some of this remaining difference in life expectancy can be traced to income differences between blacks and whites. Black children face a particularly high risk of accidental death in the home, often due to dangerous conditions in low-quality housing. African Americans of all ages face a high risk of homicide, which is related in part to residence in poor neighborhoods. Among older people, African Americans face a high risk of death due to heart disease, and the incidence of heart disease is correlated with income. Still, black-white mortality differences, especially those related to disease, are complex and are not yet fully understood.

Infant mortality is a particularly large and troubling health difference between blacks and whites. In 2000 the white infant mortality rate (5.7 per 1000 live births) was less than half the rate for African Americans (14.0 per 1000). Again, some of this mortality difference is related to the effect of lower incomes on the nutrition, medical care, and living conditions available to African American mothers and newborns. However, the full set of relevant factors is the subject of ongoing research.

Summary and Conclusions

It is undeniable that the economic fortunes of African Americans changed dramatically during the twentieth century. African Americans moved from tremendous concentration in Southern agriculture to much greater diversity in residence and occupation. Over the period in which income can be measured, there were large increases in black incomes in both relative and absolute terms. Schooling differentials between blacks and whites fell sharply as well. When one looks beyond the starting and ending points, though, more complex realities present themselves. The progress that we observe grew out of periods of tremendous social upheaval, particularly during the world wars. It was shaped in part by conflict between black workers and white workers, and it coincided with growing residential segregation. It was not continuous and gradual. Rather, it was punctuated by periods of rapid gain and periods of stagnation. The rapid gains are attributable to actions on the part of black workers (especially migration), broad economic forces (especially tight labor markets and the narrowing of the general wage distribution), and specific antidiscrimination policy initiatives (such as the Fair Employment Practice Committee in the 1940s and Title VII and contract compliance policy in the 1960s). Finally, we should note that this century of progress ended with considerable gaps remaining between African Americans and white Americans in terms of income, unemployment, wealth, and life expectancy.

Sources

Butler, Richard J., James J. Heckman, and Brook Payner. “The Impact of the Economy and the State on the Economic Status of Blacks: A Study of South Carolina.” In Markets in History: Economic Studies of the Past, edited by David W. Galenson, 52-96. New York: Cambridge University Press, 1989.

Collins, William J. “Race, Roosevelt, and Wartime Production: Fair Employment in World War II Labor Markets.” American Economic Review 91, no. 1 (2001): 272-86.

Conley, Dalton. Being Black, Living in the Red: Race, Wealth, and Social Policy in America. Berkeley, CA: University of California Press, 1999.

Donohue, John J., III, and James Heckman. "Continuous vs. Episodic Change: The Impact of Civil Rights Policy on the Economic Status of Blacks." Journal of Economic Literature 29, no. 4 (1991): 1603-43.

Goldin, Claudia, and Robert A. Margo. “The Great Compression: The Wage Structure in the United States at Mid-Century.” Quarterly Journal of Economics 107, no. 1 (1992): 1-34.

Halcoussis, Dennis and Gary Anderson. “The Political Economy of Legal Segregation: Jim Crow and Racial Employment Patterns.” Economics and Politics 8, no. 1 (1996): 1-15.

Herbst, Alma. The Negro in the Slaughtering and Meat Packing Industry in Chicago. New York: Houghton Mifflin, 1932.

Higgs, Robert. Competition and Coercion: Blacks in the American Economy 1865-1914. New York: Cambridge University Press, 1977.

Jaynes, Gerald David and Robin M. Williams, Jr., editors. A Common Destiny: Blacks and American Society. Washington, DC: National Academy Press, 1989.

Johnson, Daniel M. and Rex R. Campbell. Black Migration in America: A Social Demographic History. Durham, NC: Duke University Press, 1981.

Juhn, Chinhui, Kevin M. Murphy, and Brooks Pierce. “Accounting for the Slowdown in Black-White Wage Convergence.” In Workers and Their Wages: Changing Patterns in the United States, edited by Marvin H. Kosters, 107-43. Washington, DC: AEI Press, 1991.

Kaminski, Robert, and Andrea Adams. Educational Attainment in the US: March 1991 and 1990 (Current Population Reports P20-462). Washington, DC: US Census Bureau, May 1992.

Kasarda, John D. "Urban Industrial Transition and the Underclass." In The Ghetto Underclass: Social Science Perspectives, edited by William J. Wilson, 43-64. Newbury Park, CA: Russell Sage, 1993.

Kennedy, Louise V. The Negro Peasant Turns Cityward: The Effects of Recent Migrations to Northern Centers. New York: Columbia University Press, 1930.

Leonard, Jonathan S. “The Impact of Affirmative Action Regulation and Equal Employment Law on Black Employment.” Journal of Economic Perspectives 4, no. 4 (1990): 47-64.

Maloney, Thomas N. “Wage Compression and Wage Inequality between Black and White Males in the United States, 1940-1960.” Journal of Economic History 54, no. 2 (1994): 358-81.

Maloney, Thomas N. “Racial Segregation, Working Conditions, and Workers’ Health: Evidence from the A.M. Byers Company, 1916-1930.” Explorations in Economic History 35, no. 3 (1998): 272-295.

Maloney, Thomas N., and Warren C. Whatley. “Making the Effort: The Contours of Racial Discrimination in Detroit’s Labor Markets, 1920-1940.” Journal of Economic History 55, no. 3 (1995): 465-93.

Margo, Robert A. Race and Schooling in the South, 1880-1950. Chicago: University of Chicago Press, 1990.

Margo, Robert A. “Explaining Black-White Wage Convergence, 1940-1950.” Industrial and Labor Relations Review 48, no. 3 (1995): 470-81.

Marshall, Ray F. The Negro and Organized Labor. New York: John Wiley and Sons, 1965.

Minino, Arialdi M., and Betty L. Smith. "Deaths: Preliminary Data for 2000." National Vital Statistics Reports 49, no. 12 (2001).

Oliver, Melvin L., and Thomas M. Shapiro. "Race and Wealth." Review of Black Political Economy 17, no. 4 (1989): 5-25.

Ruggles, Steven, and Matthew Sobek. Integrated Public Use Microdata Series: Version 2.0. Minneapolis: Social Historical Research Laboratory, University of Minnesota, 1997.

Sugrue, Thomas J. The Origins of the Urban Crisis: Race and Inequality in Postwar Detroit. Princeton, NJ: Princeton University Press, 1996.

Sundstrom, William A. “Last Hired, First Fired? Unemployment and Urban Black Workers During the Great Depression.” Journal of Economic History 52, no. 2 (1992): 416-29.

United States Bureau of the Census. Statistical Abstract of the United States 1973 (94th Edition). Washington, DC: Department of Commerce, Bureau of the Census, 1973.

United States Bureau of the Census. Historical Statistics of the United States: Colonial Times to 1970. Washington, DC: Department of Commerce, Bureau of the Census, 1975.

United States Bureau of the Census. Statistical Abstract of the United States 1985 (105th Edition). Washington, DC: Department of Commerce, Bureau of the Census, 1985.

United States Bureau of the Census. Statistical Abstract of the United States 1996 (116th Edition). Washington, DC: Department of Commerce, Bureau of the Census, 1996.

Vedder, Richard K. and Lowell Gallaway. “Racial Differences in Unemployment in the United States, 1890-1980.” Journal of Economic History 52, no. 3 (1992): 696-702.

Whatley, Warren C. “African-American Strikebreaking from the Civil War to the New Deal.” Social Science History 17, no. 4 (1993): 525-58.

Wilson, William J. The Truly Disadvantaged: The Inner City, the Underclass, and Public Policy. Chicago, IL: University of Chicago Press, 1987.

Wright, Gavin. Old South, New South: Revolutions in the Southern Economy since the Civil War. New York: Basic Books, 1986.

Citation: Maloney, Thomas. “African Americans in the Twentieth Century”. EH.Net Encyclopedia, edited by Robert Whaples. January 14, 2002. URL http://eh.net/encyclopedia/african-americans-in-the-twentieth-century/

The Ascent of Money: A Financial History of the World

Author(s):Ferguson, Niall
Reviewer(s):Horesh, Niv

Published by EH.NET (July 2009)

Niall Ferguson, The Ascent of Money: A Financial History of the World. New York: Penguin, 2008. v + 441 pp. $30 (hardcover), ISBN: 1594201929.

Reviewed for EH.NET by Niv Horesh, Faculty of Arts and Social Sciences, University of New South Wales.

Harvard's Niall Ferguson is perhaps best known for his magisterial history of the House of Rothschild and, more recently, his exhortation against the risks of unbridled government borrowing and nebulous stimulus packages ostensibly designed to avert what is often termed the worst global economic crisis since the Great Depression. In The Ascent of Money he harnesses his narrative skills to offer lay readers a captivating account of global monetary history from time immemorial to the twenty-first century. The book's release coincided with an eponymous television series that has already been broadcast in much of the English-speaking world. Both the series and the book are immensely entertaining and readily accessible, but the latter arguably makes for a more convenient platform from which academics can approach Ferguson's many insights.

The Introduction (pp. 1-17) prepares readers for what Ferguson perceptively identifies as the core stories attending the evolution of money over the last four millennia. These are many and varied, as one would expect. He is concerned with, inter alia, the "recurrent hostility" to financial intermediaries and religious minorities associated with them in early-modern European history; the triumph of the Dutch Republic over the Hapsburg Empire, the latter's possession of silver mines in South America notwithstanding; the spread of paper money, fiat currency and invisible means of payment in the twentieth century; right through to the possible eclipse of American global primacy in the next two decades.

Titled "Dreams of Avarice," Chapter One sets course by recounting how the Incas were flabbergasted by the "insatiable lust for gold and silver" that seemed to grip the Spanish conquistadors (p. 21). It then lays out with humor and verve the well-known story of Potosí, now a fairly sleepy town in the Bolivian Andes, which once provisioned Spain with untold amounts of silver. In the same breath, the chapter goes on to offer an overview of coinage since the seventh century BC. Notably, Ferguson sees the flow of silver from the Andes to Europe as a "resource curse" which removed the incentives for more productive economic activity, while strengthening "rent-seeking autocrats" in seventeenth-century Spain. Contrary to criticism of Eurocentrism often leveled at him, Ferguson carefully emphasizes here the contribution other peoples have made to modern finance: "… economic life in the Eastern world – in the Abbasid caliphate or in Song China – was far more advanced" at least until Fibonacci introduced Indian algebraic precepts in early thirteenth-century Italy (p. 32); these were later reified by the Medicis into double-entry bookkeeping in the Florentine republic (p. 43).

By the early seventeenth century, European financial innovation had shifted from the Italian city-states to the Low Countries, though it was still driven by the exigencies of costly and recurrent warfare and ambitions of monopolizing trade with the East (pp. 48-49). This spurt of European financial innovation had actually long "preceded the industrial revolution," a complex but much better-studied spate of events (p. 52). The financial and industrial revolutions then converged with the spread of joint-stock companies and prototypes of central banks in the latter half of the nineteenth century.

Subsequent chapters flesh out Ferguson's analysis. Titled "Of Human Bondage," Chapter Two (pp. 65-118) explores, for example, the distinctness of the European economic trajectory, beginning with how the majority of the Florentine citizenry partook of financing the Republic's debt in the fourteenth century. In the seventeenth century, the United Provinces of the Netherlands combined the borrowing techniques of an Italian city-state "… with the scale of a nation-state." The Dutch were able to finance their wars by pitching Amsterdam "as the market for a whole range of new securities" (p. 75). The eighteenth and nineteenth centuries are characterized by Anglo-French friction, but here Ferguson sees a yawning gap between Protestant Britain – where public debt defaults became rarer, public debt itself increased many-fold, and the powers of the landed aristocracy diminished while a professional civil service became more influential – and Catholic France, where public offices were often sold to raise money, tax collection was farmed out, and government bond issues lost credibility. Notably, the incremental spread of, and popular faith in, British government bonds allowed Whitehall to borrow overseas as well, much to the detriment of Napoleon's armies. Ferguson similarly believes (p. 97) that the reluctance of European investors to buy into Confederate bonds during the American Civil War doomed the South's endeavors. This historical lesson is invoked toward the end of the chapter when discussing, in passing, the Bush Administration's large budget deficits.

Chapter Three ("Blowing Bubbles," pp. 119-178) zooms in on arguably the most significant economic entity of our time: the joint-stock company. Ferguson aptly dubs it "perhaps the single greatest Dutch invention of all." Here, he elides earlier, though fairly short-lived, occurrences of comparable entities both in Europe and in pre-modern Asia. But there can be little doubt that the establishment of the Dutch VOC (1602) marked a veritable turning point, not least because it underlay the growth of the world's first bourse. Indeed, the establishment of royally-chartered companies principally aimed at trade with Asia seems to have underpinned the rise of stock exchanges and public debt in Europe's Northeast as a whole. The rise of public debt and publicly-listed equity was beset by frequent speculative bubbles, from which emerged a more sophisticated British credit economy.

Chapter Four ("The Return of Risk," pp. 176-229) takes up a swag of issues, from the impact of Hurricane Katrina on the U.S. psyche, through how the Great Fire of London (1666) created demand for insurance policies, to Japan's welfare system and Milton Friedman's mentorship of Latin American finance ministers. By comparison, Chapter Five ("Safe as Houses," pp. 230-82) is more singularly framed around what Ferguson perceptively calls "the passion for property" in the home-owning democracies of Anglo-Saxondom. He aptly reminds us (pp. 233, 241) that as recently as the 1930s little more than two-fifths of U.S. households owned their home, compared with over 65% today, and traces this staggering social transformation back to the New Deal and the Civil Rights Movement. The expansion of home ownership was facilitated in the late 1930s by then-novel institutions like Fannie Mae, which are at the heart of the recent sub-prime meltdown. In that sense, but not in that sense only, Ferguson does a wonderful job of explaining, well beyond clichés, the linkages between the Great Depression and today's global financial crisis. He then points the finger (p. 269) at rating agencies such as Moody's and Standard & Poor's for obfuscating the precariousness of collateralized sub-prime mortgages, which financial "alchemists" turned into tradable debt obligations.

In essence, the last chapter ("From Empire to Chimerica," pp. 283-340) is dedicated to China's resurgence in the twenty-first century, and subtly considers whether this might ultimately result in a catastrophic Sino-American military confrontation. From a China specialist's perspective, it is perhaps a pity that a scholar of Ferguson's wisdom and insight stops short of opining whether we are witnessing at present the rise of a new form of capitalism with Chinese characteristics (e.g., capitalism without democracy) or simply gradual Chinese adaptation to Western market norms. Academic pedants might also quibble that Ferguson draws heavily on Kenneth Pomeranz's path-breaking book, The Great Divergence, when writing that living standards in Europe and China were on a par as late as the eighteenth century (p. 285). This might have called for a more detailed discussion, given that earlier parts of the book allude to the Italian city-states (fourteenth century) as the progenitors of Europe's financial revolution. Similarly, Ferguson's assertion that the "… ease with which the [Chinese] Empire could finance its deficits by printing money discouraged the emergence of European-style capital markets" (p. 286) might sound a little facile to specialists, not least because note issuance was all but abandoned by late-Imperial dynasties.

However, these are minor criticisms that do not detract in any way from the wonderful feat of storytelling which Ferguson has again pulled off. This book makes for a bold and original attempt to provide a comprehensive history of what, some say, makes the world go around. It is likely to turn into a best-selling classic, and a must-read item in countless undergraduate courses.

Niv Horesh is Lecturer in Chinese Studies at the School of Languages and Linguistics, University of New South Wales, Sydney, Australia. His first book, Shanghai's Bund and Beyond (Yale University Press, 2009), is the first comparative study in English of foreign banks and banknote issuance in pre-war China. His second book, forthcoming in 2010, is a comprehensive socio-economic account of Shanghai's rise to prominence (1842-2010).

Subject(s):Financial Markets, Financial Institutions, and Monetary History
Geographic Area(s):North America
Time Period(s):General or Comparative

The First Wall Street: Chestnut Street, Philadelphia, and the Birth of American Finance

Author(s):Wright, Robert E.
Reviewer(s):Rousseau, Peter L.

Published by EH.NET (September 2006)

Robert E. Wright, The First Wall Street: Chestnut Street, Philadelphia, and the Birth of American Finance. Chicago: University of Chicago Press, 2005. vii + 210 pp. $25 (cloth), ISBN: 0-226-91026-1.

Reviewed for EH.NET by Peter L. Rousseau, Department of Economics, Vanderbilt University.

Robert E. Wright, hailing from New York University's Stern School of Business, continues to expand our understanding of early U.S. securities markets with his recent offering from the University of Chicago Press. Readers looking for an entertaining stroll through the rise and fall of Philadelphia as the hub of American finance, from the late colonial period through the Bank War, need look no further. At the same time, the book is a fine starting point for considering more extended research on the wide range of financial topics addressed therein.

The eleven chapters are, for the most part, organized around particular financial innovations or groups of innovations that emerged first in Philadelphia, many of which persist to the present in some form or another. But the deeper thread seems to be that, despite its eclipse by New York City around 1830, Philadelphia’s importance in setting the stage for the ascendancy of the United States as the world’s financial leader — a position achieved by the end of the nineteenth century and by some accounts well before — should not be discounted by historians and economic historians.

As Philadelphia rose to prominence as a political and economic center, it replaced London as the informational hub of the colonies, leading to a greater degree of financial integration and an end to the rather insulated existences that had prevailed for decades. Wright describes the rise of Chestnut Street with vivid narratives about the growth of the market for property rights (chapter 2), quasi-public and private banking (chapter 5), the fire and marine insurance industry (chapter 6), building societies (chapter 7), and financial securities markets. He also points out that it was Philadelphia that hosted the Federal mint and the nation’s first two central banks. Indeed, the anecdotes that Wright tells about the challenges that the mint faced in acquiring the bullion needed to perform its most basic function — challenges made more difficult by a seemingly constant need to justify its existence to legislators — are among the book’s most fascinating.

Given Chestnut Street’s precocious start as the nation’s first “Wall Street,” Wright devotes the final third of the book to explaining the fall of Philadelphia from its commanding perch atop the U.S. financial system (chapters 8-11). Here, the author recounts the familiar explanation that geographic inadequacies were at the heart of the city’s undoing — a fate sealed by New York’s unlocking the portal to the West with the Erie Canal. Wright’s explanation, however, is enhanced by a reconstruction of the life and business experiences of a less prominent yet well-established financier named Michael Hillegas. Hillegas began a modest career on Chestnut Street during its heyday and learned from some of the great financiers of that time. As some of his colleagues and mentors moved their operations to New York, however, Hillegas chose to stay behind, carving out a good living as one of the remaining “old timers.” In the end, though, the flow of human and financial capital away from Philadelphia led even Hillegas, and surely others like him, to experience increasing difficulties in keeping business from moving to New York and its more auspicious opportunities. Wright contends that it was the exodus of human capital that really spelled the end for Chestnut Street.

One point that Wright does not make explicitly, but which is nonetheless reinforced by his lively narratives, is the primal nature of real activity as the driving force behind the location and development of finance. At a time when colonial economic activity was more local in nature and commerce more international, Philadelphia's position as an Atlantic port made it an adequate commercial center, especially since it was already a political center. It was therefore natural for the financial system to have its mainsprings there. A virtuous cycle of real needs leading to finance and promoting further real growth seems to have been the result. But as it became increasingly clear that the new nation, with its large land mass, was not a featureless plain, the move to New York might be seen as a classic example of Joan Robinson's famous adage that "where enterprise leads, finance follows." And follow it did in this case. As Chestnut Street's best financiers headed off to New York, their expertise went with them. Only large sunk investments in plant and equipment for the Federal mint and the central bank could hold these institutions in the Quaker City, at least until political forces took care of the latter.

It is important to note that this book is not intended as a treatise on economic or financial theory as applied to the colonies or the young United States. Rather, it is a solid effort to relate financial history to a wider popular audience. As such, the exposition might be a bit distracting at times to the academic reader. Yet the better understanding of Chestnut Street's role in early U.S. growth and development that I gained from reading it was well worth an occasional diversion or two. Effectively bridging academic and non-academic audiences is a difficult feat indeed, but one that we have come to expect from a scholar as prolific as Wright. It leaves me in anticipation of what new ground his next work will cover.

Peter L. Rousseau is Associate Professor of Economics at Vanderbilt University in Nashville, TN and a Research Associate of the NBER. He is the author of “A Common Currency: Early U.S. Monetary Policy and the Transition to the Dollar,” Financial History Review (April 2006), and co-author (with Richard Sylla) of “Emerging Financial Markets and Early U.S. Growth,” Explorations in Economic History (January 2005).

Subject(s):Financial Markets, Financial Institutions, and Monetary History
Geographic Area(s):North America
Time Period(s):19th Century

For the Common Good? American Civic Life and the Golden Age of Fraternity

Author(s):Kaufman, Jason
Reviewer(s):Beito, David T.

Published by EH.NET (February 2004)

Jason Kaufman, For the Common Good? American Civic Life and the Golden Age of Fraternity. New York: Oxford University Press, 2002. x + 286 pp. $45 (hardcover), ISBN: 0-19-514858-4.

Reviewed for EH.NET by David T. Beito, Department of History, University of Alabama.

Jason Kaufman is a man on a mission. Dissenting from Alexis de Tocqueville and Robert Putnam, he examines the dark side of American voluntary associations. He indicts them for exacerbating racial and ethnic divisiveness, retarding the welfare state, nurturing a unique American "love affair with guns" (p. 31), fueling "libertarian paranoia and mutual distrust" (p. 9), and even for playing a causative role in Prohibition. Kaufman's broadly social democratic outlook shapes his assumptions about the identity and definition of this dark side. To his credit, he does not shrink from acknowledging that his views about the good society help to frame his argument.

Kaufman carefully chronicles the development of a wide array of associations, including volunteer fire companies, business associations, shooting clubs and, most especially, fraternal societies. A key theme is that the disadvantages of the golden age of fraternity during the late nineteenth and early twentieth centuries far outweighed any advantages. A consequence of exclusionary membership policies, for example, was to encourage workers to define themselves in parochial terms, such as race and sex, and to foster anti-statism, thus depriving America of the "opportunity to join Western Europe in the adoption of people-friendly social policies" (p. 198). Fraternalism, writes Kaufman, contributed to the comparative weakness of class-conscious organizations, which, had they been stronger, might have united workers as they did in Europe.

There is much to like in this book. Kaufman's research is diligent and he draws extensively and creatively from primary sources. But there is also considerable room for criticism. A persistent problem is that he too often holds American voluntarism to an impossibly high standard and exaggerates, or fails to put into comparative context, the sins in society that it allegedly fostered. A case in point is Kaufman's almost matter-of-fact assumption that fraternal societies "historically engendered more ethnic, racial, and religious separatism" (p. 9) in the United States when compared to most other countries. This is by no means obvious. Ethnic and religious divisiveness have deeply plagued several highly developed welfare states, including France, the Netherlands, and Germany, in recent years. A strong case can be made, in fact, that the story of assimilation and the melting pot during the golden age of fraternity compares favorably with Canada's French and English strife, Britain's Catholic and Protestant troubles, or renewed anti-Semitic and anti-Muslim attacks in Western Europe. Blaming American voluntarism for the virulence of American racism makes about as much sense as blaming the pioneer welfare states of Germany and Austria for the Holocaust or recent firebomb attacks on immigrants.

Along the same lines, Kaufman unduly slights the contributions of fraternal societies to assimilation through finding jobs for members and inculcating attitudes of upward mobility, thrift, and self-reliance. Even a superficial survey of the primary sources of immigrant organizations in the late nineteenth and early twentieth centuries reveals ample evidence of paeans to American patriotism and freedom of opportunity. Kaufman also neglects another crucial comparative context. While it is true that most fraternal organizations excluded blacks and women as full members, so too did the vast majority of political associations and labor unions during the same period. The exclusion of blacks from all-white unions often had the added effect of freezing them out of entire occupations. I do not raise this to downplay the importance of racial exclusion by fraternal societies but merely to point out that lodges were not the only, or for that matter the most pernicious, offenders, as Kaufman implies.

Similarly, Kaufman does not give adequate weight to the ways in which fraternalism contributed to the advancement of women. An example is his charge that societies "reached out to men but kept women at arm's length, thus denying them the opportunity to purchase sickness and burial insurance on their own" (p. 50). Again, to the extent the critique is valid, it also applies to labor unions and political associations. In addition, it tends to slight the tremendous growth of auxiliaries and parallel organizations. While Kaufman mentions the Women's Benefit Association, he does not explore the implications of why this group, which also zealously supported feminism, existed in the first place. Finally, he skirts the fact that women were the primary financial beneficiaries of fraternal life insurance and that they constituted a majority of the residents of fraternal homes for the elderly.

As to blacks, some of his statements are overly dismissive, such as his claim that they "had few if any successes with fraternalism" (p. 27). It is true that Kaufman's study largely excludes the South. Even so, this sweeping claim, which includes no regional proviso, does not bear close scrutiny for either the North or the South (where most blacks lived, after all). Black women had significant leadership roles in thousands of lodges in cities ranging from New Orleans to New York that dispensed vast amounts of sick benefits, many continuously for decades. Black fraternal hospitals in the South gave low-cost health care to thousands for decades. The sponsoring organizations of these hospitals generally admitted women on equal terms with men and even, in several cases, had female majorities. It is likely, in fact, that the largest single black female voluntary association in the early twentieth century was the Household of Ruth (with nearly 200,000 members) of the Odd Fellows. The best-known black female fraternalist (not mentioned by Kaufman) was Maggie Lena Walker, the head of the Independent Order of St. Luke and probably the first woman bank president (or at least the first to achieve that position in her own right).

These criticisms are not meant to belittle Kaufman’s overall accomplishment, however. His indictment may be relentless and uncompromising, but it also deserves to be taken seriously. He raises many challenging questions that other scholars have failed to ask. A copy of For the Common Good? should be on the shelf of any specialist in the history of American voluntary associations.

David Beito is author of From Mutual Aid to the Welfare State: Fraternal Societies and Social Services, 1890-1967 (Chapel Hill: University of North Carolina Press, 2000).

Subject(s):Social and Cultural History, including Race, Ethnicity and Gender
Geographic Area(s):North America
Time Period(s):20th Century: Pre WWII