Housing Prices Since 1975

Housing is generally the biggest asset on people’s balance sheets. Those with money tend to own homes, and home owners are generally deeply invested in their houses. For many years, housing prices rose, and these price rises made major contributions to the middle and upper-middle class’s asset base.

Those who owned houses during this boom benefited from increased wealth, but the rise in home prices also made it more expensive to buy a home. Up until the 2008 crash, many of the families who were not able to get in on the housing market “at the ground floor” made their way in by using a range of new debt products that a booming financial market was offering. It became easier to purchase a home with no down payment. People could get bigger mortgages if they agreed to gamble on adjustable-rate mortgages. Eventually, some people didn’t even need documentation to get a loan. Once credit got that loose, it was only a matter of time before a major problem emerged.

When the housing bubble eventually burst in 2008, lenders became more reluctant to extend credit and consumers more reluctant to buy homes. The market dried up and housing prices crashed. Presumably, this crash meant that more people could get into the housing market, although it might have ultimately hurt the wealth accumulation of existing homeowners.

How much did housing prices fall? How much more affordable did houses become? How much damage did homeowners have to absorb? One way to address these questions is to look at changes in housing prices over time. The figure below describes changes in the Case-Shiller price index.1 The index measures home prices in 20 metropolitan areas and expresses them relative to the price level that prevailed in January 2000.2

caseshiller

 

In 2014, housing prices were about 65% higher than in 2000. This represents a modest recovery of about 19% from the 2011 trough in housing prices, but prices were still about 10% below their level at the peak of the housing boom in 2006. So housing prices have not fully recovered. Homeowners who bought near the peak of the last bubble are still in the hole, and those who were counting on home values returning to their 2006 levels remain behind in their financial plans.

The graph also imparts a sense of the 2007-9 recession’s impact on home prices. Home prices fell by about 24% from the 2006 peak to the 2011 trough. That is a considerable amount of lost wealth, particularly because homeowners are generally deeply invested in their homes.
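For readers who want to trace the arithmetic, here is a minimal R sketch that reproduces these percentage changes from approximate index levels implied by the figures above; the peak, trough, and 2014 values are assumptions inferred from the text, not the published series.

```r
# Approximate Case-Shiller index levels (January 2000 = 100), inferred from the
# percentages quoted above -- illustrative values, not the published series.
index_2000   <- 100
index_2014   <- 165   # ~65% above the 2000 base
index_peak   <- 183   # assumed 2006 peak
index_trough <- 139   # assumed 2011 trough

pct_change <- function(from, to) 100 * (to - from) / from

pct_change(index_2000, index_2014)    # ~ +65%: 2014 vs. 2000
pct_change(index_trough, index_2014)  # ~ +19%: recovery from the 2011 trough
pct_change(index_peak, index_2014)    # ~ -10%: 2014 still below the 2006 peak
pct_change(index_peak, index_trough)  # ~ -24%: peak-to-trough decline
```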

Regardless of the ups and downs of recent years, housing is far more expensive than it was forty years ago. Prices in 2014 were more than five times their mid-1970s levels. Meanwhile, incomes have not kept pace.


  1. Data from Federal Reserve Board (2015) “S&P/Case-Shiller U.S. National Home Price Index©” Data series CSUSHPISA downloaded June 9, 2015. https://research.stlouisfed.org/fred2/series/CSUSHPISA
  2. S&P Dow Jones Indices (2015) S&P/Case-Shiller Home Price Indices: Methodology Methodological report. http://www.spindices.com/documents/methodologies/methodology-sp-cs-home-price-indices.pdf?force_download=true

Download the raw data and Markdown file here

Long-Run Trends in the Poverty Rate

In 2013, about 14.5% of society was officially “poor,” with incomes falling below the federal government’s official poverty line. This represents an increase from the early 2000s, when only about 11% – 13% of Americans registered as poor.

Interestingly, today’s poverty rates sit well within the normal boundaries established since the late-1960s. Over the past forty-plus years, poverty has bounced within an 11% to 15% range, rising during bad times and falling during good ones.

Long-term poverty patterns suggest that we have not made sustained progress in eradicating poverty. The figure below describes the poverty rate since 1959. Data come from the Census Bureau.1

us-poverty-over-time

 

Poverty fell considerably during the early-1960s, and reached its present range by 1966. Since then, we have not experienced any secular decline in the poverty rate.

Conservative critics of the welfare state often cite this fact as evidence that the policies established under Lyndon Johnson’s “War on Poverty” failed to achieve their goals. The signature policies of that “War” – the Food Stamp Act of 1964, the Economic Opportunity Act of 1964, the Elementary and Secondary Education Act of 1965, and the Social Security Amendments of 1965 – marked substantial expansions of the social safety net.2 Critics argue that poverty stopped falling soon after these programs were implemented, and infer that the programs were not effective. Moreover, one might note that median wages began to falter only a few years later, and argue that the expansion of the welfare state ultimately hurt people’s ability to secure jobs and earn money.

I’m more skeptical of this view. First, it is not altogether clear that domestic welfare policy was the primary determinant shaping households’ economic fortunes. One can read this graph and infer that poverty declined more or less steadily until 1973, right around the outbreak of the Stagflation Crisis, a range of systemic economic and financial problems that were just as much rooted in a changing geopolitical context as in anything that was happening domestically.3 There is also the prospect that the dynamism of the mid-20th century’s industry-led economy was nearing exhaustion.4 The country would soon also confront a range of demographic changes, like the Baby Boomers’ coming of age, the rise of divorce, and other assorted social changes that could have ultimately driven households into poverty.

More deeply, it is worth keeping in mind what is meant by “poor.” The official poverty line is pegged to the inflation-adjusted cost of what the USDA determined in 1963 to be a minimal food diet. Those with incomes above that line are not poor, and those below it are. This is a very crude measure of poverty, which to my mind borders on meaninglessness. In effect, a stagnating poverty rate does not imply that the number of poor people stagnated. Rather, it implies that gross household incomes roughly paced the real cost of basic food in 1963.

A different possibility is that these War on Poverty programs have prevented poverty (real or official) from exploding. This is particularly true of America’s burgeoning elderly population, many more of whom would certainly be impoverished without their Social Security checks. Without public health care programs like Medicaid, Medicare, or CHIP, many more people would have much more trouble accessing medical care. Subsidized health insurance, subsidized school lunches, and other facets of the welfare state do not count as household income in the official statistics, and so they would probably not affect poverty rates. Still, people’s overall wellbeing is likely helped by these programs.

Overall, what we can glean from long-term changes in the poverty rate is limited. Still, the graph is thought-provoking, and unpacking what is happening here might help shed light on whether or not the welfare state actually helps the poor. I would wager it does, but the debate will likely continue for a long time.


  1. US Census Bureau (2014) “Table 2. Poverty Status of People by Family Relationship, Race, and Hispanic Origin: 1959 to 2013” Data table downloaded from http://www.census.gov/hhes/www/poverty/data/historical/people.html
  2. For an accessible overview, see Dylan Matthews (2014) “Everything you need to know about the war on poverty” Blog entry at Wonkblog from the Washington Post, January 8. http://www.washingtonpost.com/blogs/wonkblog/wp/2014/01/08/everything-you-need-to-know-about-the-war-on-poverty/
  3. See Fred Block (1977) The Origins of International Economic Disorder: A Study of United States International Monetary Policy from World War II to the Present University of California Press
  4. See Daniel Bell (1977) The Coming of Post-Industrial Society: A Venture in Social Forecasting Basic Books.

The Spectacular Fall, and Very Modest Recovery, of Household Savings

The personal savings rate tries to estimate the proportion of people’s disposable (post-tax) income that is not spent. Presumably, it gives us a sense of how much money people are putting aside for the future. A lower savings rate suggests that more people are financially ill-prepared for the future.

The figure below illustrates changes in US households’ personal savings rates. The data comes from the Federal Reserve Board.1 Keep in mind that these rates are derived from aggregate data, and represent means rather than medians. It seems likely that wealthier households with much higher savings rates push these averages up, and that the median household would save below the average personal savings rate.

During the 1960s and early-1970s, households saved between 10% and 14% of their incomes. At those rates, a household earning $60,000 a year would put aside between $6,000 and $8,400 annually. Over thirty years of compounding 5% real annual returns, such savings would grow into a nest egg of between $400 and $558 thousand.

us personal savings rate

Since 1976, the personal savings rate has declined steadily, eventually reaching near zero right before the 2008 crisis. Throughout the Great Recession, many observers celebrated a resurgence in savings, but the magnitude and expected durability of this rebound can easily be overstated. When savings rebounded to about 5% in 2013, they were reverting to levels that prevailed in the mid-1990s, not the mid-1960s. This decline is enough to produce a substantial diminishment in long-term wealth accumulation. Were our $60,000-a-year family to save between 2% and 5% of its income (as opposed to 10% – 14%), it would be left with a nest egg of $78 to $199 thousand. The effect of low savings seems modest on a year-by-year basis, but it renders substantial differences in wealth over a lifetime.
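The nest-egg figures in this and the preceding paragraph follow from a standard future-value-of-an-annuity calculation. The R sketch below assumes a $60,000 income, thirty years of contributions made at year end, and a 5% real annual return, as in the text; exact figures shift slightly with the timing assumptions.

```r
# Future value of a stream of equal annual contributions compounding at a
# constant real return (ordinary annuity: contributions made at year end).
nest_egg <- function(annual_saving, rate = 0.05, years = 30) {
  annual_saving * ((1 + rate)^years - 1) / rate
}

income <- 60000
nest_egg(income * c(0.10, 0.14))  # saving 10%-14%: roughly $400k to $558k
nest_egg(income * c(0.02, 0.05))  # saving 2%-5%: roughly $80k to $199k
```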

Why have savings been falling? There are many explanations. Part of the picture probably involves income stagnation. Many analysts argue that people are more spendthrift today, and more amenable to borrowing money. Consumer debt is also cheaper and easier to obtain than in the 1960s or 1970s. In my own research, I also focus on the rising cost of living, particularly the rising cost of essential goods and services like health care or education.

Whatever the cause, it seems likely that people save less money, which suggests that they will have less wealth from which to draw in the future.

  1. Federal Reserve Board (2014) “Personal Saving Rate, Percent, Annual, Seasonally Adjusted Annual Rate” Series PSAVERT from the Federal Reserve Economic Data set. Accessed Spring 2015. http://research.stlouisfed.org/fred2

The Rise and Fall of Unionism in America

Unions are a persistent point of conflict in economic policy debates. Many observers argue that the economic difficulties faced by America’s middle class are at least partly attributable to the decline of unions. Others see unions as a detrimental force in the economy, and believe that they harm employers, the US economy, and ultimately workers themselves. Unions are portrayed both as powerful, corrupting forces in US society and as dying institutions being squashed by all-powerful business interests.

How strong are unions? One way to answer that question is to look at union density, the ratio of unionized workers to total workers. The figure below depicts changes in union density across all US workers between 1880 and 2013. Data come from Alejandro Donado and Klaus Walde, and from Barry Hirsch and David Macpherson.1

Union Density

The graph suggests that unionization developed slowly between the 1880s and the Great Depression. By the 1880s, struggles to unionize labor were heated and sometimes violent. Larger movements to advance unionism included the Knights of Labor, and later the American Federation of Labor. Through the Great Depression, more workers unionized under the auspices of the Congress of Industrial Organizations. These latter two groups eventually merged to create the AFL-CIO.

Unionization increased dramatically with the institution of the 1935 National Labor Relations Act, which legally protected workers’ rights to organize unions, restricted businesses’ ability to interfere with or fire unionizing workers, and enabled compulsory union membership in organizations where unions had been established. Between World War II and the mid-1950s, union density peaked at around one-third of the work force. Union density began a secular decline after 1954, and is quickly approaching levels that prevailed before the passage of the NLRA.

The figure depicts the decline of union density, the proportion of workers in unions. This decline in density does not necessarily represent a decline in the absolute number of union jobs; it is more a reflection of the long-term, faster growth of non-union jobs. However, after 1980, the absolute number of union jobs began to fall. In 1980, there were approximately 20 million union jobs, whereas there were about 14.5 million in 2013.2 This decline followed several changes, including the dismantling of regulations that benefitted unions (e.g., the spread of “right to work” legislation), employment declines in traditionally unionized sectors (e.g., automotive, utilities), and an increasing cultural antipathy towards unionism.

Insofar as union membership is concerned, it is clear that the institution of unionism has declined considerably over the past several decades. Union jobs are less prevalent and decreasing in number. There are disagreements about whether the decline of unionism is good or bad for workers and society at large, but it seems clear that the decline is taking place.


  1. Pre-1973 data from Alejandro Donado and Klaus Walde (2012) “How Trade Unions Increase Welfare” Economic Journal 122(563): 990 – 1009. Set draws strongly from Richard B. Freeman (1998) “Spurts in Union Growth: Moments and Social Processes” in Michael D. Bordo, Claudia Goldin, and Eugene N. White (eds.) The Defining Moment: The Great Depression and the American Economy in the Twentieth Century University of Chicago Press. Post-1973 numbers from Barry Hirsch and David Macpherson “Union Membership, Coverage, Density, and Employment, Among All Wage and Salary Workers, 1973-2014” Unionstats.com
  2. Donado and Walde (2012) Op. Cit.

Download the Markdown file and raw data

Family Business in Decline?

Data from the Survey of Consumer Finances suggest that family-owned businesses are in decline. Roughly as many households earn some income from proprietary businesses as in the past, but the proceeds from these businesses are falling. Fewer of these businesses earn a living wage, and the prevalence of high-earning proprietary businesses also seems to be falling.

Proportion of Households Earning Business Income

Consider the figure below, which shows the proportion of US households earning (1) any income, (2) at least a median income, and (3) at least a 90th percentile income from a proprietary business.

bizincearn

Between 8% and 10% of US households receive some business income, but between two-thirds and three-quarters of these households fail to secure the equivalent of a median household income through personally-owned businesses. In 1992, about 3.3% of US households received at least a median household income from personally-owned businesses. By 2013, just over half as many households – 1.8% – received a median income from such businesses. The proportion of high-earning personal businesses has fallen even more sharply, from 1.3% in 1992 to 0.4% in 2013.

On one hand, these reduced earnings could be the byproduct of random variation. For example, business earnings dipped in 1995 and 2004, but then recovered. This could be a product of sampling variability or a natural minor fluctuation in small business profitability. However, that explanation seems less likely when we look at the wider distribution of personal business earnings.

Distribution of Business Earnings

The figure below shows the distribution of business earnings from 1992 to 2013. It describes the 25th, 50th, 75th, 90th, and 95th percentile earnings from proprietary businesses.

incomedist

This figure shows a long-term decline in proprietary business earnings, which seems to have begun after the 2001 recession. In 2001, the median household business earned $26,664. This figure fell steadily through 2013, when the median stood at $16,400 – a fall of about 38%.

A similar decline occurred in the higher ranks of the business income scale. From 2001 to 2013, 75th percentile income fell from $79,200 to $40,000. Ninetieth percentile income fell from $217,800 to $87,000. Ninety-fifth percentile income fell from $370,920 to $156,600. These are staggering losses.
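Expressed as percentage losses, the declines quoted above work out as follows (a quick R check using only the dollar figures in the text):

```r
# Proprietary business earnings at selected percentiles, as quoted in the text
earnings_2001 <- c(p50 = 26664, p75 = 79200, p90 = 217800, p95 = 370920)
earnings_2013 <- c(p50 = 16400, p75 = 40000, p90 = 87000,  p95 = 156600)

# Percentage decline at each percentile between 2001 and 2013
round(100 * (earnings_2001 - earnings_2013) / earnings_2001, 1)
#>  p50  p75  p90  p95
#> 38.5 49.5 60.1 57.8
```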

What Does It Mean?

If these figures accurately reflect changes in personal business earnings, they suggest that those earnings are falling quickly. Why might this be happening? It may be that small businesses face mounting pressures from many quarters. Retailers may have increasing difficulty competing with big box stores and online retailers. Small manufacturing outfits may have trouble competing with foreign producers. It might be that many small businesses are being replaced with automated substitutes (e.g., TurboTax and LegalZoom are squeezing small accounting and law practices).

Whatever the cause, it seems quite clear that small businesses (at least unincorporated ones) are doing badly in the US.

The Long Consumption Boom, 1980 – 2014

Any attempt to explain US households’ financial struggles must engage the issue of rising spending. While income stagnation is almost certainly part of what is causing households’ money problems, it is only a partial explanation at best. “Stagnation” means not growing quickly – it does not imply that household incomes have necessarily been shrinking (although they often have done so over the past several years). Even if incomes are stagnating, people should be able to maintain their savings by restraining their spending.

The problem is that households generally have not tightened their belts when faced with earnings difficulties – at least not until the Great Recession. The first figure below depicts changes in average personal income and expenditures for the United States from 1921 until 2014. All values are denominated in inflation-adjusted 2014 dollars.

At first glance, the graph suggests that average incomes have generally outpaced average expenditures. However, a closer look reveals that the gap between disposable incomes and expenditures has been shrinking. In other words, the average American was spending a growing share of their take-home pay.

This gap can be seen more clearly in the figure below, which depicts the ratio of mean per capita total expenditures to disposable income. As noted in the previous chapter, the typical household once saved about 10% or so of its take-home pay, but spending grew steadily relative to income, and households were putting aside pennies on the dollar right before the Great Recession. Household savings rebounded after the Recession, but only to the rates that prevailed in the early 2000s, not the 1970s.

These shifts of 5 to 10 percentage points in household savings rates translate into substantial differences in the amount of money people put aside from year to year. In the late 1950s, the average household put aside about $1,750 (at 2014 prices) yearly. With the passage of time, households found themselves able to put aside more money, and by the early 1970s the typical household was putting aside over $3,000 (again, at 2014 prices). However, per capita savings fell steadily over the ensuing decades. Even though the average person earned far more money in 2005 than in 1970, the typical American of 1970 put aside roughly three times as much from year to year. Savings did rebound after the 2008 Crisis, but people should arguably be putting aside much more money today – retirement, health care, a college education, and many other living costs are much higher now than forty years ago.
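To make the link between the expenditure-to-income ratio and dollar savings concrete, here is a minimal R sketch; the income and spending figures in it are hypothetical, chosen only to illustrate the arithmetic.

```r
# Savings rate implied by the ratio of per capita expenditures to disposable income
savings_rate <- function(expenditure, disposable_income) {
  1 - expenditure / disposable_income
}

# Hypothetical per capita figures: the same disposable income with a higher
# spending ratio leaves far less set aside each year.
disposable_income <- 30000
savings_rate(27000, disposable_income) * disposable_income  # 10% rate -> $3,000 saved
savings_rate(29400, disposable_income) * disposable_income  #  2% rate ->   $600 saved
```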

More spending is undoubtedly part of what is causing US households’ money problems. One might argue that, in an age of Walmart, Costco, and cheap Chinese imports, it has never been so easy to save money. Yet people are not saving money. Any endeavor to explain US households’ financial insecurity must engage over-spending.

Download the R Markdown and Data Files Here

Who Pays the Federal Government’s Bills?

In conservative circles, one often hears about the ways in which high taxes unfairly strangle the rich and businesses, while everyone else enjoys a free ride on their tab. Critics often argue that the Obama administration implemented egregiously expropriatory taxes on these groups. In fact, the government draws less tax (in proportion to the size of the overall economy) from corporations and from income taxes (the primary tax on affluent people’s incomes) than it did over most of the postwar era.

The stacked area plot below shows how the composition and overall level of federal taxes have changed since the mid-1930s. It measures the ratio of government receipts to GDP, which approximates the size of government taxes relative to the overall size of national economic output (a rough proxy for the overall size of the economy). Data come from the US Office of Management and Budget.1

federalincomesources

 

During World War II and the years that followed, the government increased taxes dramatically, with revenues rising from around 5% of GDP in the mid-1930s to just over 18% by 1952. Relative to the size of the economy, public sector revenues more than tripled.

In the early-1950s, the government drew about 42% of its revenue from individual income taxes and about 32% from corporate income taxes. What I term payroll taxes involve taxes related to “Social Insurance and Retirement” programs, like Social Security and Medicare taxes. Excise taxes are taxes on particular products, like alcohol, cigarettes, and gasoline. During the mid-20th century, the government also drew considerable money from trade tariffs.

Beginning in 1953, corporate income taxes decreased as a proportion of the government’s overall revenues. Often, this was accomplished by implementing and altering rules related to tax deductions. The first cut in corporate taxes is believed to be related to the Eisenhower administration’s passage of more generous capital depreciation rules. These types of corporate tax deductions have been implemented almost continuously throughout the postwar era.

With the passage of time, government operations were increasingly funded through payroll taxes, while the shares contributed by excise and corporate income taxes shrank. By 2000, about 50% of government revenues were drawn from personal income taxes, 10% from corporate taxes, 32% from payroll taxes, and 4% from excise taxes. The burden of funding government operations shifted substantially away from corporate sources and towards household sources.

Under the George W. Bush administration, personal taxes were cut along with overall government revenues, and the lost revenue was covered by public borrowing. By 2004, income taxes fell to 43% of total revenues, while payroll taxes, which were not reduced, rose to 40% of total government revenues.

What can we glean from these findings? First, business taxes seem much more modest than they were throughout most of the postwar era. The government has been reducing taxes on corporations for decades. Whatever arguments might be made about the high statutory tax rates levied on corporations, the federal government’s low overall take from this source suggests that tax deductions make the actual amount paid rather low. Insofar as personal taxes are concerned, top income tax rates have decreased considerably over the decades (I’ll post on that another day), while the take from payroll taxes has risen considerably. Payroll taxes fall harder on lower income earners, because the Social Security portion is levied only on roughly the first $118,000 that someone earns.
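To see why the cap makes payroll taxes regressive, the R sketch below computes the effective Social Security (OASDI) rate at several earnings levels, using the 2015 employee-side rate of 6.2% and the 2015 wage cap of $118,500.

```r
# Effective Social Security (OASDI) payroll tax rate: the employee-side rate
# applies only to earnings up to the annual wage cap, so the effective rate
# falls as earnings rise past the cap.
effective_oasdi_rate <- function(earnings, rate = 0.062, cap = 118500) {
  rate * pmin(earnings, cap) / earnings
}

round(100 * effective_oasdi_rate(c(30000, 118500, 500000)), 2)
#> 6.20 6.20 1.47
```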

Overall, taxes on corporations are very low by modern historical standards. Personal income taxes are not particularly high by historical standards either.


  1. Office of Management and Budget (2015) “Table 2.3: Receipts by Source as Percentages of GDP: 1934-2020” Accessed June 5, 2015. https://www.whitehouse.gov/omb/budget/Historicals

 

Slowdown in Educational Attainment

We always hear how education is a top priority in today’s economy.  We are told that society needs people to be better educated, and that progress in educating people is slow but steady.  Educational attainment may be rising, but is society doing a good job of ensuring that its young are educated?

The graph below, which is built on Census Bureau data and reproduces a slightly modified version of a Census Bureau graphic,1 describes how educational attainment has changed across society since 1940.

attain1

 

The graph suggests remarkable improvements in educational attainment over the past seventy years. In 1940, roughly three-quarters of the population dropped out before completing high school, about 5% attended some college and 4.6% completed college. By 2014, only 12% of society had less than a high school education, 27% had some college, and 32% completed college.

The figure suggests that US educational attainment has improved continuously over the past seventy or so years, a seemingly unabated march from a less- to a more-educated society.

However, the appearance of a smooth transition to a more educated society is partly an artifact of the data. Overall educational attainment figures include people of different generations, who came of age during different periods. The inclusion of older generations obscures the ways in which society’s young have been educated at different rates, and makes change look slower and more incremental than it was.

To get a sense of the changing rates at which the young are being educated, it makes sense to focus on attainment among people who are old enough that college completion is likely. To do this, the Census figures below look at attainment among those aged 25 to 34.

The figure below shows changing educational attainment among Americans in this age group. The data source is the same:

attain2

 

This graph provides a different picture from the image of continuous improvement imparted by the first figure. It suggests that the pace of increasing educational attainment was much faster from the 1940s through the late-1970s, and slowed afterwards.

For example, high school drop outs fell from about 63% of young adults in 1940 to 15% in 1980. From 1980 to 2014, this proportion fell to 10%, a marginal improvement. On one hand, society might be forgiven for not being able to eradicate the phenomenon of dropping out. That final 10% to 15% of drop outs might be a particularly tough group to marshal towards high school completion. On the other hand, society has not made a concerted push for universal high school completion, in the way that it once moved to stamp out illiteracy. It is reluctant to make bigger investments in education, and it is much more reluctant to expand social assistance to those who drop out due to economic pressures. It is hard to tell whether our failure to ensure universal completion is a matter of the problem being too difficult or society not caring enough to do the needed work. In any case, the pace at which minimum educational attainment improved has slowed considerably over the past thirty to forty years.

The proportion of young adults who have only completed high school is roughly where it was in 1940. In 1940, this reflected too many people not going far enough in school. By 2014, it was a matter of more people finishing high school and moving on to at least some college.

College attainment rose quickly between 1940 and 1977, then stalled until the mid-1990s. From then until today, the proportion of young adults with a college degree has grown steadily, but more slowly than before: the proportion of college graduates grew faster between 1940 and 1977 (3.8% average annual growth) than from 1994 to today (2.1%). Near-college attainment also grew faster before 1977 (2.9% average annual growth) than after 1991 (1.4%).
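The “average annual growth” figures here are compound growth rates of the attainment share. The R sketch below shows the calculation; the 6% and 24% shares are illustrative assumptions, not the underlying Census values.

```r
# Compound average annual growth rate of an attainment share
cagr <- function(start_share, end_share, years) {
  (end_share / start_share)^(1 / years) - 1
}

# Illustrative only: a college-completion share rising from 6% to 24% of young
# adults over 1940-1977 implies roughly 3.8% average annual growth.
round(100 * cagr(0.06, 0.24, 1977 - 1940), 1)
#> 3.8
```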

What can we glean from these figures? Educational attainment rose much more quickly during the mid-20th century than since the mid- to late-1970s. In part, this is probably because the easier work has been done. Conceivably, it is easier to promote high school and college completion when fewer people do it. As rates rise, schools and education policy-makers have to find ways to educate more obstinate cases.

However, I’m not so sure that we should give the past several decades a total pass. Society has shown that it can stamp out obstinate problems if it is sufficiently motivated. We seem to lack that kind of motivation. It is harder to convince society to invest more in education. More people oppose extending economic aid to poor people to help them complete college. We are much less concerned with making college affordable, and more concerned with ensuring that college students don’t get “free rides.”  We might say that we are committed to educating Americans, but this self-image may be at odds with our revealed true preferences.


  1. US Census Bureau (2015) “Table A-1. Years of School Completed by People 25 Years and Over, by Age and Sex: Selected Years 1940 to 2014” Data table downloaded June 2015 from http://www.census.gov/hhes/socdemo/education/data/cps/historical/

Download the raw data and Markdown file