Sunday, September 30, 2012

The Drop in Personal Interest Income

Low interest rates are good for borrowers, but lousy for savers. Here's a graph showing personal interest income, which dropped by about $400 billion per year--a fall of more than one-fourth--as interest rates have plummeted.  

FRED Graph


One of my local newspapers, the (Minneapolis) Star Tribune, offered a nice illustrative set of anecdotes in a story last Sunday about the consequences of this change for those who were depending on interest-bearing assets--often those who are near retirement or in retirement, and who want to hold safe assets, but who are receiving a much lower return than they might have expected.
 Moreover, as the article points out, it's not just individual savers who are affected. Pension funds, life insurance companies, long-term care insurance companies, and others who keep a substantial proportion of their investments in safe interest-bearing assets are receiving much less in interest than they would have expected, too.


I'm someone who supported pretty much everything the Federal Reserve did through the depths of the recession and financial crisis that started in late 2007: cutting the federal funds rate down to near-zero percent; setting up a number of facilities to make short-term liquidity loans to firms in financial markets; and the "quantitative easing" policies that involved printing money to purchase federal debt and mortgage-backed securities. But the recession officially ended back in June 2009, more than three years ago. It's time to start recognizing that ultra-low interest rates pose some painful trade-offs, too. Higher-ups at the Fed were reportedly saying back in 2009 that when the financial crisis was over, they would unwind these steps--but with the sluggishness of the recovery, they haven't done so.

A year ago, for example, I posted on "Can Bernanke Unwind the Fed's Policies?" I posted last month on "BIS on Dangers of Continually Expansionary Monetary Policy," in which the Bank for International Settlements states: "Failing to appreciate the limits of monetary policy can lead to central banks being overburdened, with potentially serious adverse consequences. Prolonged and aggressive monetary accommodation has side effects that may delay the return to a self-sustaining recovery and may create risks for financial and price stability globally."

I lack the confidence to say just when or how the Fed should start backing away from its extremely accommodating monetary policies, but after jamming the monetary policy pedal quite hard for the last five years, it seems time to acknowledge that monetary policy in certain settings, like the aftermath of a financial crisis and an overleveraged economy, is a more limited tool than many of us would have believed back in 2006. Moreover, the U.S. economy has a very recent example from the early 2000s, when the Federal Reserve kept interest rates too low for too long, which helped to trigger the boom in lending and borrowing, much of it housing-related in one way or another, that led to the financial crisis and the Great Recession. The dangers of ultra-low interest rates and quantitative easing may not yet outweigh their benefits, but the potential tradeoffs and dangers shouldn't be minimized or ignored.


Are Groups More Rational than Individuals?

"A decision maker in an economics textbook is usually modeled as an individual whose decisions are not influenced by any other people, but of course, human decision-making in the real world is typically embedded in a social environment. Households and firms, common decision-making agents in economic theory, are typically not individuals either, but groups of people—in the case of firms, often interacting and overlapping groups. Similarly, important political or military decisions as well as resolutions on monetary and economic policy are often made by configurations of groups and committees rather than by individuals."

Thus starts an article called "Groups Make Better Self-Interested Decisions," by Gary Charness and Matthias Sutter, which appears in the Summer 2012 issue of my own Journal of Economic Perspectives. (Like all articles appearing in JEP back to 1994, it is freely available on-line courtesy of the American Economic Association.) They explore ways in which individual decision-making is different from group decision making, with almost all of their evidence coming from behavior in economic lab experiments. To me, there were two especially intriguing results: 1) Groups are often more rational and self-interested than individuals; and 2) This behavior doesn't always benefit the participants in groups, because the group can be less good than individuals at setting aside self-interest when cooperation is more appropriate. Let me explore these themes a bit--and for some readers, offer a quick introduction to some economic games that they might not be familiar with.

There has been an ongoing critique of the assumption that individuals act in a rational and self-interested manner, based on the observations that people are often limited in the information that they have, muddled in their ability to process information, myopic in their time horizons, affected by how questions are framed, and many other "behavioral economics" issues. It turns out that in many contexts, groups are often better at avoiding these issues and acting according to pure rationality than are individuals.

As one example, consider the "beauty contest" game. As Charness and Sutter point out: "The name of the beauty-contest game comes from the Keynes (1936) analogy between beauty contests and financial investing in the General Theory: “It is not a case of choosing those which, to the best of one’s judgment, are really the prettiest, nor even those which average opinion genuinely thinks the prettiest. We have reached the third degree where we devote our intelligences to anticipating what average opinion expects the average opinion to be. And there are some, I believe, who practice the fourth, fifth and higher degrees.” Similarly, in a beauty-contest game, the choice requires anticipating what average opinion will be."

The game works like this. A group of players is told that they should choose a number between 0 and 100, and the winner of the game will be the person who chooses a number that is (say) 1/2 of the average of the other choices. In this game, the rational player will reason as follows: "OK, let's say that the other players choose randomly, so the average will be 50, and I should choose 25 to win. But if other players have this first level of insight, they will all choose 25 to win, and I should instead choose 12. But if other players have this second level of insight, then they will choose 12, and I should choose 6. Hmmm. If the other players are rational and self-interested, the equilibrium choice will end up being zero."
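The chain of reasoning above can be sketched in a few lines of code. This is just an illustration of the level-by-level logic, not anything from the Charness and Sutter experiments; the starting point of 50 assumes random play by level-0 players.

```python
# Level-k reasoning in the beauty-contest game: a level-0 player picks the
# midpoint (50), and each deeper level best-responds by choosing half of
# the previous level's choice, so the sequence shrinks toward zero.
def beauty_contest_levels(start=50.0, factor=0.5, levels=10):
    """Return choices for successively deeper levels of reasoning."""
    choices = [start]
    for _ in range(levels):
        choices.append(choices[-1] * factor)
    return choices

levels = beauty_contest_levels()
# levels begins 50, 25, 12.5, 6.25, ... approaching the equilibrium of 0
```

With fully rational players, the only stable choice is the limit of this sequence, which is zero.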

The players in a beauty contest game can be either individuals or groups. It turns out that groups choose lower numbers: that is, as a result of interacting in the group, they tend to be one step ahead.

Here's another example, called the "Linda paradox," in which players get the following question (quoting from Charness and Sutter):

"Linda is 31 years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in anti-nuclear demonstrations. Which is more probable:
(a) Linda is a bank teller.
(b) Linda is a bank teller and is active in the feminist movement."

Notice that Linda is a bank teller in both choices, but only active in the feminist movement in the second choice: that is, the second choice is a subset of the first choice. For that reason, it is impossible for choice (b) to be more likely than choice (a). However, early research on this question found that 85% of individuals answered (b). But when the game is played with groups of two or three, the error rate drops.
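The subset logic behind the conjunction rule is easy to check with a toy simulation. The probabilities below are invented purely for illustration; only the inequality at the end matters, and it holds for any probabilities.

```python
import random

random.seed(0)
# Hypothetical population: "bank teller" and "active feminist" are assigned
# independently with made-up probabilities. The joint event (both true) can
# never be more probable than either event on its own, because everyone
# counted in the conjunction is also counted in the single event.
n = 100_000
teller = [random.random() < 0.05 for _ in range(n)]
feminist = [random.random() < 0.60 for _ in range(n)]

p_teller = sum(teller) / n
p_both = sum(t and f for t, f in zip(teller, feminist)) / n
assert p_both <= p_teller  # choice (b) can never beat choice (a)
```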

Charness and Sutter offer a number of other examples, but the underlying themes are clear. In many settings, a group of people is likely to be better than an individual at processing a question, weighing information, and arriving at a rational answer. However, there are a number of settings in which pure self-interest can be self-defeating, and a more cooperative approach is useful. It turns out that individuals are often better than groups at setting aside pure self-interest and perceiving such opportunities.

Here's an example called the "trust game." In this game, the first player starts with a certain sum of money. Player 1 divides the money and passes part of it to Player 2. The experimenter triples the amount passed to Player 2. Player 2 then divides what is received, and passes part of the money back to Player 1. In this kind of game, clearly what's best for both players is if Player 1 gives all of the money to Player 2, thus tripling the entire total, and trusts that Player 2 will return enough to make this worthwhile. However, a strictly self-interested Player 2 may see no reason in this game to send anything at all back to Player 1, and Player 1, perceiving this, will then see no reason to send anything to Player 2. If both players act in a self-interested manner, both can end up worse off.
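The payoff structure can be made concrete with a small sketch. The $10 endowment and the even split are illustrative assumptions, not parameters from any particular experiment.

```python
def trust_game(endowment, sent, returned_fraction, multiplier=3):
    """Payoffs in a stylized trust game: Player 1 sends part of the
    endowment, the experimenter multiplies it, and Player 2 returns a
    fraction of the multiplied amount."""
    pot = sent * multiplier
    back = pot * returned_fraction
    return (endowment - sent + back,  # Player 1's payoff
            pot - back)               # Player 2's payoff

# Full trust with an even split leaves both players better off than
# mutual self-interest:
trusting = trust_game(10, sent=10, returned_fraction=0.5)  # (15.0, 15.0)
selfish = trust_game(10, sent=0, returned_fraction=0.0)    # (10.0, 0.0)
```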

It turns out that when groups play this game, they send less of the pot from Player 1 to Player 2 than individuals do, and they return less of the pot from Player 2 to Player 1 than individuals do. Thus, groups pursue self-interest in a way that reduces the potential returns from cooperation and trust, as compared with individuals.

Much remains to be done in comparing actions of groups and individuals in a wider variety of contexts. But these results intrigue, because they seem to point toward an economic theory of when group decision-making might be preferable to that of individuals, and vice versa. For example, when looking at a potentially complex problem, where the appropriate decision isn't clear, getting a group of people with diverse backgrounds and information can be quite helpful in leading to a more rational decision. But groups can also become large enough that they don't work well in gathering input from individuals, and become unable to move ahead with decisions.


The results also suggest that economists and social scientists should be cautious in making quick-and-dirty statements about how economic actors either do or don't engage in rational self-interested behavior. For example, it's possible to have a bunch of individuals who can't manage to lose weight or save money when acting on their own, but who find a way to do so when acting as a group and reinforcing each other. A person may act irrationally in some aspects of their personal life, but still be a useful member of a highly rational group in their work environment. On the other side, in situations calling for gains from cooperation, pitting one group against another may be dysfunctional. For example, many negotiations in business and politics follow the model of designating a lead negotiator, and descriptions of such negotiations often suggest that good negotiators form a bond with those on the other side that helps a compromise to emerge.

Are CEOs Overpaid?

Are CEOs overpaid? I confess that my knee-jerk answer to this question is "YES!" But Steve Kaplan makes a strong case for a more nuanced answer in "Executive Compensation and Corporate Governance in the U.S.: Perceptions, Facts and Challenges," published in July as Chicago Booth working paper 12-42. Here's a sketch of some of his arguments and graphs.

First, just as a matter of getting the facts straight, CEO pay relative to household income did spike back in the go-go days of the dot-com boom in the late 1990s, but it has been relatively lower since then. Kaplan argues that there are two valid ways to measure executive pay. One measure looks at actual pay received, which he argues is useful for seeing whether top executives are paid for performance. The other measure looks at "estimated" pay, which is the amount that pay packages would have been expected to be worth at the time they were granted. This calculation requires putting a value on stock options, restricted stock grants, and the like, and estimating what these were worth at the time the pay package was given. Kaplan argues that this measure is the appropriate one for looking at what corporate boards meant to do when they agreed on a compensation package.

Here's one figure showing actual average and median pay totals for S&P 500 CEOs from 1993 to 2010. Average pay is above median pay, which tells you that there are some very high-paid execs at the top pulling up the average. Also, average CEO pay spikes when the stock market is high, as in 2000 and around 2006 and 2007. Median realized pay seems to have crept up over time.
Here's a figure showing estimated pay--that is, the value of the pay packages when they were granted. But this time, instead of showing dollar amounts, this graph shows average and median CEO pay as a multiple of median household income. Average pay again spikes at the time of the dot-com boom. Kaplan emphasizes that average estimated CEO pay is lower than it was in 2000 and that the median hasn't risen much. My eye is drawn to the fact that median pay for CEOs goes from something like 60 times median household income back in 1993 to about 170 times median household income by 2010.

An obvious question is whether these pay levels are distinctive for CEOs, or whether they are just one manifestation of widening income inequality across a range of highly-paid occupations. Kaplan makes a solid case that it is the latter. For example, here's a graph showing the average pay of the top 0.1% of the income distribution compared with the average pay of a large-company CEO. Again, the story is that CEO pay really did spike in the 1990s, but by this measure, CEO pay relative to the top 0.1% is now back to the levels common in the 1950s.

  

Kaplan also points out that the pay of those at the top of other highly-paid occupations has grown dramatically as well, like lawyers, athletes, and hedge fund managers. Here's a figure showing the pay of top hedge fund managers relative to that of CEOs in the last decade. Kaplan writes: "The top 25 hedge fund managers as a group regularly earn more than all 500 CEOs in the S&P 500. In other words, while public company CEOs are highly paid, other groups with similar backgrounds and talents have done at least equally well over the last fifteen years to twenty years. If one uses evidence of higher CEO pay as evidence of managerial power or capture, one must also explain why the other professional groups have had a similar or even higher growth in pay. A more natural interpretation is that the market for talent has driven a meaningful portion of the increase in pay at the top."   

Kaplan also compiles evidence that CEOs of companies with better stock market performance tend to be paid more than those with poor stock market performance, and that CEOs have shorter job tenures. He writes: "Turnover levels since 1998 have been higher than in work that has studied previous periods. In any given year, one out of 6 Fortune 500 CEOs lose their jobs. This compares to one out of 10 in the 1970s. CEOs can expect to be CEOs for less time than in the past. If these declines in expected CEO tenures since 1998 are factored in, the effective decline in CEO pay since then is larger than reported above. ... And the CEO turnover is related to poor firm stock performance ..."

To me, Kaplan makes a couple of especially persuasive points: the run-up in CEO salaries was especially extreme during the 1990s, and less so since then (depending on how you measure it); and the run-up in CEO salaries reflects the rise in inequality across a wider swath of professions. While I believe the arguments that job tenure can be shorter for the modern CEO, especially if a company isn't performing well, it seems to me that most former CEOs don't plummet too many percentiles down the income distribution in their next job, so my sympathy for them is rather limited on that point.

In this paper, Kaplan doesn't seek to address the deeper question of why the pay for those at the very top, CEOs included, has risen so dramatically. While the demand for skills at the very top of the income distribution is surely part of the answer, I find it hard to believe that these rewards for skill increased so sharply in the 1990s--just coincidentally during a stock market boom. It seems likely to me that cozy institutional arrangements for many of those at the very top of the income distribution--CEOs, hedge fund managers, lawyers, and athletes and entertainers--also play an important role.

Labor's Smaller Share

Margaret Jacobson and Filippo Occhino have been investigating the fact that labor has been receiving a declining share of total economic output over the last few decades. I posted on their work last February in "Labor's Declining Share of Total Income."  Now they have written "Labor’s Declining Share of Income and Rising Inequality," which is "Economic Commentary" 2012-13 published by the Federal Reserve Bank of Cleveland.


The starting point is to look at labor income relative to the size of the economy. The top line in the figure shows labor income as a share of GDP, as measured in the national income and product accounts from the U.S. Bureau of Economic Analysis. The lower line in the figure shows the ratio of compensation to output for the nonfarm business sector, as measured by the U.S. Bureau of Labor Statistics. The measures are not identical, nor would one expect them to be, but they show the same trend: that is, with some ups and downs as the economy has fluctuated, the labor share of income has been falling for decades, and is now at a historically low level.

This fact lies behind much of the rise in inequality of incomes over this time. The income that is not being earned by labor is being earned by capital--and capital income is much more concentrated than is labor income. Jacobson and Occhino offer an intriguing figure that measures the inequality of labor income and the inequality of capital income. The measure used here is a Gini coefficient, which "ranges between 0 and 1, with 0 indicating an equal distribution of income and 1 indicating unequal income." (Here's an earlier post with an explanation of Gini coefficients.)
The figure has two main takeaways. First, labor income has become more unequally distributed over time, but since the early 1990s, the big shift in income inequality is because capital income is more unequally distributed. Second, capital income tends to rise during booms and to fall in recessions. Thus, it seems plausible that the inequality of capital income has dropped in the last few years of the Great Recession and its aftermath, but will rise again as economic growth recovers.
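For readers who want the definition in concrete form, here is a minimal Gini computation. This is a standard textbook formula, not code from the Cleveland Fed paper, and the example income lists are invented.

```python
def gini(incomes):
    """Gini coefficient of a list of incomes: 0 means everyone has the
    same income; values near 1 mean income is concentrated in few hands.
    Uses the identity G = 2*sum(i * x_i) / (n * sum(x)) - (n + 1)/n,
    where the x_i are sorted ascending and i runs from 1 to n."""
    xs = sorted(incomes)
    n = len(xs)
    total = sum(xs)
    weighted = sum(rank * x for rank, x in enumerate(xs, start=1))
    return 2 * weighted / (n * total) - (n + 1) / n

gini([1, 1, 1, 1])   # 0.0   -- perfectly equal distribution
gini([0, 0, 0, 10])  # 0.75  -- all income held by one person in four
```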

What has caused the long-run decline of the labor share of income? Jacobson and Occhino explain it this way: "[W]e begin by looking at what determines the labor share in the long run. The main factor is the technology available to produce goods and services. In competitive markets, labor and capital are compensated in proportion to their marginal contribution to production, so the most important factor behind the labor and capital shares is the marginal productivities of labor and capital, which are determined by technology. In fact, one important cause of the post-1980 long-run decline in the labor share was a technological change, connected with advances in information and communication technologies, which made capital more productive relative to labor, and raised the return to capital relative to labor compensation. Other factors that have played a role in the long-run decline in the labor share are increased globalization and trade openness, as well as changes in labor market institutions and policies."

There is no particular reason to believe that these trends will continue--or that they won't. But the declining share of income going to labor suggests the importance of finding ways to increase the marginal product of labor, especially for workers of low and medium skills, perhaps by focusing on the kind of training and networking that might help them make greater use of the advances in information and communication technology to improve their own productivity.



What's Up With the Dodd-Frank Legislation?

Back in July 2010, President Obama signed into law the Dodd-Frank Wall Street Reform and Consumer Protection Act. The difficulty with the law has always been that while it was fairly clear on its goals, it did not specify how to reach those goals--instead turning over that task to current and newly-created regulatory agencies.  If you're looking for an update on how the law is proceeding, a good starting point is the Third Quarter 2012 issue of Economic Perspectives, published by the Federal Reserve Bank of Chicago, which has six articles on the Dodd-Frank legislation.

Douglas D. Evanoff and William F. Moeller offer an overview of the goals and approach of the law in their opening piece (footnotes and citations omitted):

"The stated goals of the act were to provide for financial regulatory reform, to protect consumers
and investors, to put an end to too-big-to-fail, to regulate the over-the-counter (OTC) derivatives markets, to prevent another financial crisis, and for other purposes. ... Implementation of Dodd–Frank requires the development of some 250 new regulatory rules and various mandated studies. There is also the need to introduce and staff a number of new entities (bureaus, offices, and councils) with responsibility to study, evaluate, and  promote consumer protection and financial stability. Additionally, there is a mandate for regulators to identify and increase regulatory scrutiny of systemically important institutions.  ... Two years into the implementation of the act, much has been done, but much remains to be done."

How are those rules coming along? The law firm of Davis Polk & Wardwell publishes a regular Dodd-Frank report. The September 2012 edition summarizes:

  • "As of September 4, 2012, a total of 237 Dodd-Frank rulemaking requirement deadlines have
    passed. This is 59.5% of the 398 total rulemaking requirements, and 84.6% of the 280
    rulemaking requirements with specified deadlines.
  • "Of these 237 passed deadlines, 145 (61.2%) have been missed and 92 (38.8%) have been
    met with finalized rules. Regulators have not yet released proposals for 31 of the 145 missed
    rules.
  • "Of the 398 total rulemaking requirements, 131 (32.9%) have been met with finalized rules and
    rules have been proposed that would meet 135 (33.9%) more. Rules have not yet been
    proposed to meet 132 (33.2%) rulemaking requirements.
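The report's percentages can be reproduced from the raw counts it quotes; here is a quick arithmetic check, using only the numbers given above.

```python
# Raw counts from the September 2012 Davis Polk report
total_requirements = 398
with_deadlines = 280
deadlines_passed = 237
missed, finalized = 145, 92  # of the deadlines that have already passed

assert missed + finalized == deadlines_passed
share_of_total = deadlines_passed / total_requirements  # ~0.595 -> 59.5%
share_of_deadlined = deadlines_passed / with_deadlines  # ~0.846 -> 84.6%
missed_rate = missed / deadlines_passed                 # ~0.612 -> 61.2%
```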
The July 2012 Davis Polk update--issued at the two-year anniversary of the legislation--offers some additional detail: "The two years since Dodd-Frank's passage have seen 848 pages of statutory text expand to 8,843 pages of regulations. Already at almost a 1:10 page ratio, this staggering number represents only 30% of required rulemaking contained within Dodd-Frank, affecting every area of the financial markets and involving over a dozen Federal agencies."

It's important to recognize that writing a new regulation isn't as simple as, well, just writing it. Instead, there is often first an in-house study, followed by a draft regulation, which is then open to public comments, and can then be revised; eventually, at some point, a new regulation is created. It's not unusual for a regulation to get dozens or hundreds of detailed public comments.

This blizzard of evolving rules has to create considerable uncertainty in the financial sector. Matthew Richardson discusses the complexities of one particular issue in his contribution to the Chicago Fed publication. He picks one example: the problem that many banks made very low-quality subprime mortgage loans. What does the Dodd-Frank legislation do about this basic issue? As he describes, the act: 1) Sets up a Consumer Financial Protection Bureau in Title X to deal with misleading products; 2) Imposes particular underwriting standards for residential mortgages; 3) Requires firms performing securitization to retain at least 5 percent of the credit risk; and 4) Increases regulation of credit rating agencies. Each of these tasks requires detailed rulemaking. And as Richardson points out, "with all of these new provisions, the act does not even address what we at NYU Stern consider to be a primary fault for the poor quality of loans—namely, the mispriced government guarantees in the system that led to price distortions and an excessive buildup of leverage and risky credit."

I'm skeptical of anyone who has strong opinions about the Dodd-Frank legislation, because here we are more than two years later, less than halfway toward figuring out what rules the legislation will actually put in place. Wayne A. Abernethy of the American Bankers Association is one of the authors in the Chicago Fed symposium. Yes, he is speaking for the bankers' point of view. But his judgement about the overall process seems fair to me:

"At least in the financial regulatory history of the United States, there has never been anything like it. I have seen no definitive count of the number of regulations that the Dodd–Frank Act calls forth. The numbers seem to range between 250 and 400—numbers so large that they are numbing. It all defies hyperbole. The Fair and Accurate Credit Transactions Act, adopted in 2003, astonished the financial industry with more than a dozen significant new regulations to be written. ...

"One of the most common criticisms of Dodd–Frank implementation has been a lack of order and coordination in the regulatory process. Instead, the Dodd–Frank Act has succeeded in replacing the financial crisis with a regulatory crisis.  ... As agencies are grappling with impossible rulemaking tasks, most of them are also engaged in major structural reorganizations and shifts in the areas of responsibility. ... Nothing like this has ever been tried before in the history of the United States. Writing 400 financial regulations of the highest significance and the greatest complexity in a couple of years has clearly been too much to expect. ... Getting on with the work to end our self-inflicted regulatory crisis should be among the highest priorities."
I'm someone who believes that financial regulation needed shaking up. Many of the broad goals of the Dodd-Frank legislation make sense to me: rethinking bank regulation to deal with macroeconomic risk, not just the risk of an individual institution going broke; figuring out better ways to shut down even large financial institutions when needed; better regulation of certain financial instruments like credit default swaps and repo agreements; a closer look at technologies that allow ultra-high-speed financial trading; and others.

The Dodd-Frank legislation is almost not a law in the conventional meaning of the term, because it mostly isn't about actual specific activities that are prohibited. Instead, it's about handing over the difficult problems to regulators and telling them to fix it. I'm not sure there was an easy alternative to this regulatory approach: the idea of Congress trying to debate, say, appropriate regulation of the over-the-counter swaps market is not an encouraging thought. But stating a goal is not the same as solving a problem. The passage of Dodd-Frank, in and of itself, didn't solve any problems.

Saturday, September 29, 2012

Effects of Health Insurance: Randomized Evidence from Oregon

How might providing health insurance affect people along various dimensions, like how much health care they consume, their financial well-being, and their actual health? As health care economists have long recognized, this question is a lot tougher to answer than one might at first think. The basic analytical problem is that you can't just compare averages for those who have health insurance and those who don't, because these groups are different in fundamental ways. For example, those with private sector health insurance in the U.S. tend to get it through their employers, so they tend to be people of prime working ages who hold jobs, or those who get government health insurance for the elderly (Medicare) or the poor (Medicaid). It's easy to imagine cases of people who have a hard time holding a job because they have poor health, and thus don't have employer-provided health insurance. For these people, their poor health leads to a lack of health insurance, but wasn't primarily caused by their lack of health insurance. When the variables are interrelated like this, it's hard to sort out cause and effect.

From a social science research perspective, the ideal experiment would be to take a large group of people, divide them randomly, give health insurance to one group but not the other, and then study the results. However, in the real world, such randomized experiments are quite rare. The one classic example is the Rand Health Insurance Experiment (HIE) conducted in the 1970s: for an overview written in 2010 with some applications to the health care debate, see here.

"The HIE was a large-scale, randomized experiment conducted between 1971 and 1982. For the study, RAND recruited 2,750 families encompassing more than 7,700 individuals, all of whom were under the age of 65. They were chosen from six sites across the United States to provide a regional and urban/rural balance. Participants were randomly assigned to one of five types of health insurance plans created specifically for the experiment. There were four basic types of fee-for-service plans: One type offered free care; the other three types involved varying levels of cost sharing — 25 percent, 50 percent, or 95 percent coinsurance (the percentage of medical charges that the consumer must pay). The fifth type of health insurance plan was a nonprofit, HMO-style group cooperative. Those assigned to the HMO received their care free of charge. For poorer families in plans that involved cost sharing, the amount of cost sharing was income-adjusted to one of three levels: 5, 10, or 15 percent of income. Out-of-pocket spending was capped at these percentages of income or at $1,000 annually (roughly $3,000 annually if adjusted from 1977 to 2005 levels), whichever was lower. ... Families participated in the experiment for 3–5 years."

The basic lesson of the Rand experiment, which has been the gold standard for research on this question over the last 30 years, is that cost-sharing substantially reduced the quantity of health care spending, by 20-30%. Further, this reduction in the quantity of health care spending had no effect on the quality of health care services received and no overall effect on health status.

Now, 30 years later, there's finally another study on the effects of health insurance built on a randomized design. The first round of results from the study are reported in "The Oregon Health Insurance Experiment: Evidence from the First Year," co-authored by an all-star lineup of health care economists: Amy Finkelstein, Sarah Taubman, Bill Wright, Mira Bernstein, Jonathan Gruber, Joseph P. Newhouse, Heidi Allen, Katherine Baicker and the Oregon Health Study Group. It appears in the August 2012 issue of the Quarterly Journal of Economics, which is not freely available on-line, although many in academia will have access through library subscriptions.

The story begins when Oregon, in 2008, decided to offer health insurance coverage for low-income adults who would not usually have been eligible for Medicaid. However, Oregon only had the funds to provide this insurance to 10,000 people, so the state decided to choose the 10,000 people by lottery.  The health care economists heard about this plan, and recognized a research opportunity. They began to gather financial and health data about all of those eligible for the lottery, the 90,000 people who entered the lottery, and the 10,000 who were awarded coverage.  Here are some findings:

"About one year after enrollment, we find that those selected by the lottery have substantial and statistically significantly higher health care utilization, lower out-of-pocket medical expenditures and medical debt, and better self-reported health than the control group that was not given the opportunity to apply for Medicaid. Being selected through the lottery is associated with a 25 percentage point increase in the probability of having insurance during our study period. ... [W]e find that insurance coverage is associated with a 2.1 percentage point (30%) increase in the probability of having a
hospital admission, an 8.8 percentage point (15%) increase in the probability of taking any prescription drugs, and a 21 percentage point (35%) increase in the probability of having an outpatient visit. We are unable to reject the null of no change in emergency room utilization, although the confidence intervals do not allow us to rule out substantial effects in either direction.
In addition, insurance is associated with 0.3 standard deviation increase in reported compliance with recommended preventive care such as mammograms and cholesterol monitoring. Insurance also results in decreased exposure to medical liabilities and out-of-pocket medical expenses, including a 6.4 percentage point (25%) decline in the probability of having an unpaid medical bill sent to a collections agency and a 20 percentage point (35%) decline in having any out-of-pocket medical expenditures. ... Finally, we find that insurance is associated with improvements across the board in measures of self-reported physical and mental health, averaging 0.2 standard deviation improvement."
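A useful way to read those paired numbers: dividing each percentage-point change by its proportional change recovers the implied control-group baseline rate. A quick back-of-the-envelope check (my own arithmetic, not a calculation from the paper):

```python
# Implied control-group baselines: percentage-point change / proportional change.
effects = {
    "hospital admission": (2.1, 0.30),
    "any prescription drug": (8.8, 0.15),
    "outpatient visit": (21.0, 0.35),
}
for name, (pp_change, pct_change) in effects.items():
    print(f"{name}: baseline ≈ {pp_change / pct_change:.1f}%")
```

So roughly 7% of the control group had a hospital admission, while about 60% had an outpatient visit, which is why the much larger 21-point change is only a 35% proportional increase.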



Under the Patient Protection and Affordable Care Act signed into law by President Obama in March 2010, the U.S. is moving toward a health care system in which millions of people who lacked health insurance coverage will now receive it. Drawing implications from the Oregon study for national health care reform should be done with considerable caution. Still, some likely lessons are possible.

1) One sometimes hears optimistic claims that if people have health insurance, they will get preventive and other care sooner, avoid more costly episodes of care, and so we will end up saving money. This outcome is highly unlikely. If lots more people have health insurance, they will consume more health care, and overall spending will rise.


2) The cost of the Oregon health insurance coverage was about $3,000 per person--adequate for basic health insurance, although less than half of what is spent on health care for the average American in a given year. The health care reform legislation of 2010 is projected to provide health insurance to an additional 28 million people (leaving about 23 million still without health insurance). At the fairly modest cost of $3,000 per person, the expansion of coverage itself would cost $84 billion per year.
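The $84 billion figure is just the straightforward multiplication:

```python
cost_per_person = 3_000        # rough annual cost of the Oregon coverage, dollars
newly_insured = 28_000_000     # projected expansion of coverage under the 2010 law
total = cost_per_person * newly_insured
print(f"${total / 1e9:.0f} billion per year")  # → $84 billion per year
```

And $3,000 is a low-end assumption; at anything closer to average per-person U.S. health spending, the total would be far higher.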

3) Although health insurance will improve people's well-being and financial satisfaction, the extent to which it improves actual health is not yet clear. As the Finkelstein team reports: "Whether there are also improvements in objective, physical health is more difficult to determine with the data we now have available. More data on physical health, including biometric measures such as blood pressure and blood sugar, will be available from the in-person interviews and health exams that we conducted about six months after the time frame in this article."

Evidence from the Oregon health insurance experiment will be accumulating over the next few years. Stay tuned!

The Origins of Labor Day

To contact us Click HERE
 [Originally published on this blog on Labor Day, 2011]

It's clear that the first Labor Day celebration was held on Tuesday, September 5, 1882, and organized by the Central Labor Union, an early trade union organization operating in the greater New York City area in the 1880s. By the early 1890s, more than 20 states had adopted the holiday. On June 28, 1894, President Grover Cleveland signed into law: "The first Monday of September in each year, being the day celebrated and known as Labor's Holiday, is hereby made a legal public holiday, to all intents and purposes, in the same manner as Christmas, the first day of January, the twenty-second day of February, the thirtieth day of May, and the fourth day of July are now made by law public holidays."

What is less well-known, at least to me, is that the very first Labor Day parade almost didn't happen, and that historians now dispute which person is most responsible for that first Labor Day. The U.S. Department of Labor tells how the first Labor Day almost didn't happen, for lack of a band:
"On the morning of September 5, 1882, a crowd of spectators filled the sidewalks of lower Manhattan near city hall and along Broadway. They had come early, well before the Labor Day Parade marchers, to claim the best vantage points from which to view the first Labor Day Parade. A newspaper account of the day described "...men on horseback, men wearing regalia, men with society aprons, and men with flags, musical instruments, badges, and all the other paraphernalia of a procession."

The police, wary that a riot would break out, were out in force that morning as well. By 9 a.m., columns of police and club-wielding officers on horseback surrounded city hall.

By 10 a.m., the Grand Marshall of the parade, William McCabe, his aides and their police escort were all in place for the start of the parade. There was only one problem: none of the men had moved. The few marchers that had shown up had no music.

According to McCabe, the spectators began to suggest that he give up the idea of parading, but he was determined to start on time with the few marchers that had shown up. Suddenly, Mathew Maguire of the Central Labor Union of New York (and probably the father of Labor Day) ran across the lawn and told McCabe that two hundred marchers from the Jewelers Union of Newark Two had just crossed the ferry — and they had a band!

Just after 10 a.m., the marching jewelers turned onto lower Broadway — they were playing "When I First Put This Uniform On," from Patience, an opera by Gilbert and Sullivan. The police escort then took its place in the street. When the jewelers marched past McCabe and his aides, they followed in behind. Then, spectators began to join the march. Eventually there were 700 men in line in the first of three divisions of Labor Day marchers.

With all of the pieces in place, the parade marched through lower Manhattan. The New York Tribune reported that, "The windows and roofs and even the lamp posts and awning frames were occupied by persons anxious to get a good view of the first parade in New York of workingmen of all trades united in one organization."

At noon, the marchers arrived at Reservoir Park, the termination point of the parade. While some returned to work, most continued on to the post-parade party at Wendel's Elm Park at 92nd Street and Ninth Avenue; even some unions that had not participated in the parade showed up to join in the post-parade festivities that included speeches, a picnic, an abundance of cigars and, "Lager beer kegs... mounted in every conceivable place."

From 1 p.m. until 9 p.m. that night, nearly 25,000 union members and their families filled the park and celebrated the very first, and almost entirely disastrous, Labor Day."

As to the originator of Labor Day, the traditional story I learned back in the day gave credit to Peter McGuire, the founder of the Carpenters Union and a co-founder of the American Federation of Labor. At a meeting of the Central Labor Union of New York on May 8, 1882, the story went, he recommended that Labor Day be designated to honor "those who from rude nature have delved and carved all the grandeur we behold." McGuire also typically received credit for suggesting the first Monday in September for the holiday, "as it would come at the most pleasant season of the year, nearly midway between the Fourth of July and Thanksgiving, and would fill a wide gap in the chronology of legal holidays." He envisioned that the day would begin with a parade, "which would publicly show the strength and esprit de corps of the trade and labor organizations," and then continue with "a picnic or festival in some grove."

But in recent years, the International Association of Machinists has also staked its claim, because one of its members, a machinist named Matthew Maguire, was serving as secretary of the Central Labor Union in New York in 1882 and clearly played a major role in organizing the day. The U.S. Department of Labor has a quick summary of the controversy.
 "According to the New Jersey Historical Society, after President Cleveland signed into law the creation of a national Labor Day, The Paterson (N.J.) Morning Call published an opinion piece entitled, "Honor to Whom Honor is Due," which stated that "the souvenir pen should go to Alderman Matthew Maguire of this city, who is the undisputed author of Labor Day as a holiday." This editorial also referred to Maguire as the "Father of the Labor Day holiday." ...

According to The First Labor Day Parade, by Ted Watts, Maguire held some political beliefs that were considered fairly radical for the day and also for Samuel Gompers and his American Federation of Labor. Allegedly, Gompers did not want Labor Day to become associated with the sort of "radical" politics of Matthew Maguire, so in an 1897 interview, Gompers' close friend Peter J. McGuire was assigned the credit for the origination of Labor Day."

Are CEOs Overpaid?

To contact us Click HERE
Are CEOs Overpaid? I confess that my knee-jerk answer to this question is "YES"! But Steve Kaplan makes a strong case for a more nuanced answer in "Executive Compensation and Corporate Governance in the U.S.: Perceptions, Facts and Challenges," published in July as Chicago Booth working paper 12-42. Here's a sketch of some of his arguments and graphs.

First, just as a matter of getting the facts straight, CEO pay relative to household income did spike back in the go-go days of the dot-com boom in the late 1990s, but it has fallen back since then. Kaplan argues that there are two valid ways to measure executive pay. One measure looks at actual pay received, which he argues is useful for seeing whether top executives are paid for performance. The other measure looks at "estimated" pay, which is the amount that pay packages would have been expected to be worth at the time they were granted. This calculation requires putting a value on stock options, restricted stock grants, and the like, as of the time the pay package was given. Kaplan argues that this measure is the appropriate one for looking at what corporate boards meant to do when they agreed on a compensation package.

Here's one figure showing actual average and median pay totals for S&P 500 CEOs from 1993 to 2010. Average pay is above median pay, which tells you that there are some very high-paid execs at the top pulling up the average. Also, average CEO pay spikes when the stock market is high, as in 2000 and around 2006 and 2007. Median realized pay seems to have crept up over time.
Here's a figure showing estimated pay--that is, the value of the pay packages when they were granted. But this time, instead of showing dollar amounts, this graph shows average and median CEO pay as a multiple of median household income. Average pay again spikes at the time of the dot-com boom. Kaplan emphasizes that estimated CEO pay is on average lower than in 2000 and that the median hasn't risen much. My eye is drawn to the fact that median pay for CEOs goes from something like 60 times median household income back in 1993 to about 170 times median household income by 2010.
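Those endpoints imply a steep compound growth rate in the pay multiple. A back-of-the-envelope sketch, using the approximate values I am reading off the graph (these are my eyeballed numbers, not Kaplan's underlying data):

```python
# Eyeballed from the figure: median S&P 500 CEO pay as a multiple of
# median household income rose from roughly 60x (1993) to roughly 170x (2010).
start_multiple, end_multiple = 60, 170
years = 2010 - 1993
annual_growth = (end_multiple / start_multiple) ** (1 / years) - 1
print(f"{annual_growth:.1%} per year")  # → 6.3% per year
```

In other words, even the median multiple nearly tripled over 17 years, a compound growth rate of about 6% a year.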

An obvious question is whether these pay levels are distinctive to CEOs, or whether they are just one manifestation of widening income inequality across a range of highly-paid occupations. Kaplan makes a solid case that it is the latter. For example, here's a graph showing the average pay of the top 0.1% of the income distribution compared with the average pay of a large-company CEO. Again, the story is that CEO pay really did spike in the 1990s, but by this measure, CEO pay relative to the top 0.1% is now back to the levels common in the 1950s.

  

Kaplan also points out that the pay of those at the top of other highly-paid occupations has grown dramatically as well, like lawyers, athletes, and hedge fund managers. Here's a figure showing the pay of top hedge fund managers relative to that of CEOs in the last decade. Kaplan writes: "The top 25 hedge fund managers as a group regularly earn more than all 500 CEOs in the S&P 500. In other words, while public company CEOs are highly paid, other groups with similar backgrounds and talents have done at least equally well over the last fifteen years to twenty years. If one uses evidence of higher CEO pay as evidence of managerial power or capture, one must also explain why the other professional groups have had a similar or even higher growth in pay. A more natural interpretation is that the market for talent has driven a meaningful portion of the increase in pay at the top."   

Kaplan also compiles evidence that CEOs of companies with better stock market performance tend to be paid more than those with poor stock market performance, and that CEOs now have shorter job tenures. He writes: "Turnover levels since 1998 have been higher than in work that has studied previous periods. In any given year, one out of 6 Fortune 500 CEOs lose their jobs. This compares to one out of 10 in the 1970s. CEOs can expect to be CEOs for less time than in the past. If these declines in expected CEO tenures since 1998 are factored in, the effective decline in CEO pay since then is larger than reported above." And "the CEO turnover is related to poor firm stock performance ..."

To me, Kaplan makes a couple of especially persuasive points: the run-up in CEO salaries was especially extreme during the 1990s, and less so since then (depending on how you measure it); and the run-up in CEO salaries reflects the rise in inequality across a wider swath of professions. While I believe the arguments that job tenure can be shorter for the modern CEO, especially if a company isn't performing well, it seems to me that most former CEOs don't plummet too many percentiles down the income distribution in their next job, so my sympathy for them is rather limited on that point.

In this paper, Kaplan doesn't seek to address the deeper question of why the pay for those at the very top, CEOs included, has risen so dramatically. While the demand for skills at the very top of the income distribution is surely part of the answer, I find it hard to believe that these rewards for skill increased so sharply in the 1990s--just coincidentally during a stock market boom. It seems likely to me that cozy institutional arrangements for many of those at the very top of the income distribution--CEOs, hedge fund managers, lawyers, and athletes and entertainers--also play an important role.

Labor's Smaller Share

To contact us Click HERE
Margaret Jacobson and Filippo Occhino have been investigating the fact that labor has been receiving a declining share of total economic output over the last few decades. I posted on their work last February in "Labor's Declining Share of Total Income."  Now they have written "Labor’s Declining Share of Income and Rising Inequality," which is "Economic Commentary" 2012-13 published by the Federal Reserve Bank of Cleveland.


The starting point is to look at labor income relative to the size of the economy. The top line in the figure shows labor income as a share of GDP, as measured in the national income and product accounts from the U.S. Bureau of Economic Analysis. The lower line in the figure shows the ratio of compensation to output for the nonfarm business sector, as measured by the U.S. Bureau of Labor Statistics. The measures are not identical, nor would one expect them to be, but they show the same trend: with some ups and downs as the economy has fluctuated, the labor share of income has been falling for decades, and is now at a historic low.

This fact lies behind much of the rise in inequality of incomes over this time. The income that is not being earned by labor is being earned by capital--and capital income is much more concentrated than labor income. Jacobson and Occhino offer an intriguing figure that measures the inequality of labor income and the inequality of capital income. The measure used here is a Gini coefficient, which "ranges between 0 and 1, with 0 indicating an equal distribution of income and 1 indicating unequal income." (Here's an earlier post with an explanation of Gini coefficients.)
The figure has two main takeaways. First, labor income has become more unequally distributed over time, but since the early 1990s, the big shift in income inequality is because capital income is more unequally distributed. Second, capital income tends to rise during booms and to fall in recessions. Thus, it seems plausible that the inequality of capital income has dropped in the last few years of the Great Recession and its aftermath, but will rise again as economic growth recovers.
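For readers who want the Gini coefficient made concrete, here is a minimal computation from first principles (a standard textbook formula based on mean absolute differences, not the authors' code):

```python
def gini(incomes):
    """Gini coefficient via the mean-absolute-difference formula:
    G = sum_i sum_j |x_i - x_j| / (2 * n^2 * mean)."""
    n = len(incomes)
    mean = sum(incomes) / n
    diff_sum = sum(abs(a - b) for a in incomes for b in incomes)
    return diff_sum / (2 * n * n * mean)

print(gini([1, 1, 1, 1]))    # 0.0  — perfectly equal incomes
print(gini([0, 0, 0, 100]))  # 0.75 — income concentrated in one person
```

With every income equal the coefficient is 0; as income concentrates in fewer hands it approaches 1 (for a finite sample of n people, the maximum is (n-1)/n).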

What has caused the long-run decline of the labor share of income? Jacobson and Occhino explain this way: "[W]e begin by looking at what determines the labor share in the long run. The main factor is the technology available to produce goods and services. In competitive markets, labor and
capital are compensated in proportion to their marginal contribution to production, so the most important factor behind the labor and capital shares is the marginal productivities of labor and capital, which are determined by technology. In fact, one important cause of the post-1980 long-run decline in the labor share was a technological change, connected with advances in information and
communication technologies, which made capital more productive relative to labor, and raised the return to capital relative to labor compensation. Other factors that have played a role in the long-run decline in the labor share are increased globalization and trade openness, as well as changes in
labor market institutions and policies."

There is no particular reason to believe that these trends will continue--or that they won't. But the declining share of income going to labor suggests the importance of finding ways to increase the marginal product of labor, especially for workers of low and medium skills, perhaps by focusing on the kind of training and networking that might help them make greater use of the advances in information and communication technology to improve their own productivity.



What's Up With the Dodd-Frank Legislation?

To contact us Click HERE
Back in July 2010, President Obama signed into law the Dodd-Frank Wall Street Reform and Consumer Protection Act. The difficulty with the law has always been that while it was fairly clear on its goals, it did not specify how to reach those goals--instead turning over that task to current and newly-created regulatory agencies.  If you're looking for an update on how the law is proceeding, a good starting point is the Third Quarter 2012 issue of Economic Perspectives, published by the Federal Reserve Bank of Chicago, which has six articles on the Dodd-Frank legislation.

Douglas D. Evanoff and William F. Moeller offer an overview of the goals and approach of the law in their opening piece (footnotes and citations omitted):

"The stated goals of the act were to provide for financial regulatory reform, to protect consumers
and investors, to put an end to too-big-to-fail, to regulate the over-the-counter (OTC) derivatives markets, to prevent another financial crisis, and for other purposes. ... Implementation of Dodd–Frank requires the development of some 250 new regulatory rules and various mandated studies. There is also the need to introduce and staff a number of new entities (bureaus, offices, and councils) with responsibility to study, evaluate, and  promote consumer protection and financial stability. Additionally, there is a mandate for regulators to identify and increase regulatory scrutiny of systemically important institutions.  ... Two years into the implementation of the act, much has been done, but much remains to be done."

How are those rules coming along? The law firm of Davis Polk & Wardwell publishes a regular Dodd-Frank report. The September 2012 edition summarizes:

  • "As of September 4, 2012, a total of 237 Dodd-Frank rulemaking requirement deadlines have
    passed. This is 59.5% of the 398 total rulemaking requirements, and 84.6% of the 280
    rulemaking requirements with specified deadlines.
  • "Of these 237 passed deadlines, 145 (61.2%) have been missed and 92 (38.8%) have been
    met with finalized rules. Regulators have not yet released proposals for 31 of the 145 missed
    rules.
  • "Of the 398 total rulemaking requirements, 131 (32.9%) have been met with finalized rules and
    rules have been proposed that would meet 135 (33.9%) more. Rules have not yet been
    proposed to meet 132 (33.2%) rulemaking requirements."
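The Davis Polk percentages are internally consistent, as a quick check confirms (my own arithmetic on the numbers quoted above):

```python
total_rules, with_deadlines, deadlines_passed = 398, 280, 237
missed, met = 145, 92
assert missed + met == deadlines_passed  # 145 + 92 = 237
print(f"{deadlines_passed / total_rules:.1%}")     # 59.5% of all requirements
print(f"{deadlines_passed / with_deadlines:.1%}")  # 84.6% of those with deadlines
print(f"{missed / deadlines_passed:.1%} missed, {met / deadlines_passed:.1%} met")  # 61.2%, 38.8%
```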
The July 2012 Davis Polk update--published at the two-year anniversary of the legislation--offers some additional detail: "The two years since Dodd-Frank’s passage have seen 848 pages of statutory text expand to 8,843 pages of regulations. Already at almost a 1:10 page ratio, this staggering number represents only 30% of required rulemaking contained within Dodd-Frank, affecting every area of the financial markets and involving over a dozen Federal agencies."

It's important to recognize that writing a new regulation isn't as simple as, well, just writing it. Instead, there is often first an in-house study, followed by a draft regulation, which is then open to public comments, and can then be revised; eventually, at some point, a new regulation is finalized. It's not unusual for a regulation to get dozens or hundreds of detailed public comments.

This blizzard of evolving rules has to create considerable uncertainty in the financial sector. Matthew Richardson discusses the complexities of one particular issue in his contribution to the Chicago Fed publication. He picks one example: the problem that many banks made very low-quality subprime mortgage loans. What does the Dodd-Frank legislation do about this basic issue? As he describes, the act: 1) sets up a Consumer Financial Protection Bureau in Title X to deal with misleading products; 2) imposes particular underwriting standards for residential mortgages; 3) requires firms performing securitization to retain at least 5 percent of the credit risk; and 4) increases regulation of credit rating agencies. Each of these tasks requires detailed rulemaking. And as Richardson points out, "with all of these new provisions, the act does not even address what we at NYU Stern consider to be a primary fault for the poor quality of loans—namely, the mispriced government guarantees in the system that led to price distortions and an excessive buildup of leverage and risky credit."

I'm skeptical of anyone who has strong opinions about the Dodd-Frank legislation, because here we are more than two years later, less than halfway toward figuring out what rules the legislation will actually put in place. Wayne A. Abernethy of the American Bankers Association is one of the authors in the Chicago Fed symposium. Yes, he is speaking for the bankers' point of view. But his judgment about the overall process seems fair to me:

"At least in the financial regulatory history of the United States, there has never been anything like it. I have seen no definitive count of the number of regulations that the Dodd–Frank Act calls forth. The numbers seem to range between 250 and 400—numbers so large that they are numbing. It all defies hyperbole. The Fair and Accurate Credit Transactions Act, adopted in 2003, astonished the financial industry with more than a dozen significant new regulations to be written. ...

"One of the most common criticisms of Dodd–Frank implementation has been a lack of order and coordination in the regulatory process. Instead, the Dodd–Frank Act has succeeded in replacing the financial crisis with a regulatory crisis.  ... As agencies are grappling with impossible rulemaking tasks, most of them are also engaged in major structural reorganizations and shifts in the areas of responsibility. ... Nothing like this has ever been tried before in the history of the United States. Writing 400 financial regulations of the highest significance and the greatest complexity in a couple of years has clearly been too much to expect. ... Getting on with the work to end our self-inflicted regulatory crisis should be among the highest priorities."
I'm someone who believes that financial regulation needed shaking up. Many of the broad goals of the Dodd-Frank legislation make sense to me: rethinking bank regulation to deal with macroeconomic risk, not just the risk of an individual institution going broke; figuring out better ways to shut down even large financial institutions when needed; better regulation of certain financial instruments like credit default swaps and repo agreements; a closer look at technologies that allow ultra-high-speed financial trading; and others.

The Dodd-Frank legislation is almost not a law in the conventional meaning of the term, because it mostly isn't about specific activities that are prohibited. Instead, it's about handing the difficult problems over to regulators and telling them to fix them. I'm not sure there was an easy alternative to this regulatory approach: the idea of Congress trying to debate, say, appropriate regulation of the over-the-counter swaps market is not an encouraging thought. But stating a goal is not the same as solving a problem. The passage of Dodd-Frank, in and of itself, didn't solve any problems.

28 Eylül 2012 Cuma

Campaign Contributions vs. Lobbying Expenses

To contact us Click HERE
I keep reading news coverage about campaign contributions: how much each side is getting, who is giving the money, whether the money is going directly to candidates or to other organizations like political parties or political action committees, and so on. But when I worry about the influence of money in politics, I worry a lot more about spending on lobbying than I do about spending on campaigns.

Here's some basic data from the always-useful Open Secrets website run by the Center for Responsive Politics. The first table shows total reported spending on lobbying, year by year, although the 2012 spending is only for part of the year so far.  The second table shows total campaign spending, both for congressional and presidential races.










Here are some thoughts:


1) My guess is that a lot of spending on what I might think of as lobbying activities goes unreported: these figures cover only reported spending by registered lobbyists. Nonetheless, the amount of spending by lobbyists is consistently high, and comparable in size to campaign spending. For example, total spending on the 2010 campaign and on lobbying in 2010 were roughly similar, at about $3.5 billion. But spending on lobbying was also that high in 2009 and 2011, when there were no elections at all.

2) Spending on lobbying is much more concentrated. Open Secrets reports that total lobbying spending has been spread across about 12,000 lobbyists in the last few years. In contrast, in the 2008 presidential campaign about 1.3 million Americans gave more than $200 to political candidates, parties, or PACs; of that group, about 280,000 gave more than $2,300, and about 1,000 gave more than $95,000. It's true that a fairly small percentage of Americans give any money at all to political campaigns, but it's also true that the 1 million-plus individuals who do give are a much broader cross-section of society than are the lobbyists.
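The concentration point can be made with simple division (my own back-of-the-envelope on the figures above):

```python
lobbying_total = 3.5e9   # approximate annual lobbying spending, dollars
lobbyists = 12_000       # registered lobbyists it is spread across
per_lobbyist = lobbying_total / lobbyists
print(f"≈ ${per_lobbyist:,.0f} per lobbyist")  # ≈ $291,667
```

By contrast, a comparable pool of campaign money is spread across more than a million individual donors.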


3) In many ways, spending on lobbying is better-targeted and less publicly visible than campaign contributions. Lobbyists are often focused on the fine print that might be inserted in a broader piece of legislation--that exception or added provision that means so much to their employer. A campaign contribution gets spent on much more general purposes: advertising, travel costs, website maintenance, or some other form of outreach to voters. In addition, campaign contributions come from many different directions, representing many different interests. The public knows the outcome of political campaigns on Election Day, but it never finds out about the success of many lobbying efforts, which may involve something as subtle as leaving a provision out, or changing a word like "required" to a word like "recommended."

4) Some years back in the Winter 2003 issue of my own Journal of Economic Perspectives, Stephen Ansolabehere, John M. de Figueiredo and James M. Snyder Jr. asked a classic social science question, "Why is There so Little Money in U.S. Politics?" (The article is freely available on-line, like all JEP articles from the current issue back to 1994, courtesy of the American Economic Association.) In 2012, the U.S. government will collect something like $2.5 trillion in taxes and will probably spend more than $3.5 trillion. Yet the campaign spending that shapes who will vote on these levels of taxes and spending is just $6 billion--about two-tenths of 1% of federal spending. For comparison, total sales of hair care products in the U.S. are about $7 billion per year, and sales of toothpaste are $2 billion per year. Procter & Gamble alone spent something like $4.6 billion on advertising in 2010. It's probably not reasonable to think that a free-and-open U.S. national election in a $15 trillion economy should spend a whole lot less than what the country spends on hair care and toothpaste, or what one company spends on advertising.
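The two-tenths-of-1% comparison is easy to verify:

```python
campaign_spending = 6e9      # total 2012 campaign spending, dollars
federal_spending = 3.5e12    # approximate 2012 federal spending, dollars
share = campaign_spending / federal_spending
print(f"{share:.2%} of federal spending")  # → 0.17% of federal spending
```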

Ansolabehere, Figueiredo and Snyder argue that most campaign contributions, most of the time, should be viewed as a consumption good, not as an investment with an expected return. After all, most people and companies don't give anything to campaigns, which suggests that they don't think the money they give will affect the outcome. It's very hard to find evidence that campaign contributions cause politicians to vote in a way that the people of their district or state would object to: that is, politicians respond much more to their constituencies than to their campaign donors. Lots of people give money to causes they believe in, from the United Way to various charities, and for the ideologically committed, political contributions often fall into the same general category.


5) I'll close by noting that concern over the effect of political contributions on election outcomes is often highly partisan. For example, I don't know a lot of Democrats who believe that President Obama's 2008 victory was noticeably tainted by his decision to break his earlier campaign pledge and become the first presidential candidate to forego matching funds so that he could vastly outspend John McCain, $745 million to $368 million (again, amounts according to the Open Secrets website). And I don't know any Republicans who have ever felt that a Republican victory was tainted by outspending the opposition.

Democracy can be in some ways a rowdy and unedifying process. But I distrust attempts by incumbent politicians to regulate the quantity or quality of spending on campaigns, which often seem to end up helping the incumbents remain in office. I'd rather rely on voters on Election Day to separate the wheat from the chaff. And those who want to reduce the role of money in politics might reallocate some time away from worrying about campaign contributions and toward pushing those incumbent politicians to restrict and publicize the efforts of lobbyists.








The Great Maple Syrup Theft: A Supply and Demand Story

To contact us Click HERE
Those who teach introductory economics are always looking for a supply and demand story, preferably with a bit of a twist. Thus, my eyes lit up at the reports last week (for example, here and here) of an enormous theft of millions of dollars of maple syrup from the St. Louis-De-Blandford maple syrup storage facility in Quebec. Apparently about 10 million pounds of maple syrup were taken: enough to fill 15,000 barrels. To understand the importance of this story, you need to know that Quebec is the Saudi Arabia of maple syrup, and the theft decreased its stock of maple syrup reserves by about one-third.

 The New England Field Office of the U.S. Department of Agriculture explains developments in the maple syrup market in its "Maple Syrup 2012" newsletter from last June. In 2011, Canada produced 10.3 million gallons of maple syrup, with 9.2 million of that coming from Quebec alone. U.S. production was about 2.8 million gallons, with Vermont leading the way at 1.1 million gallons. From 2009-2011, the average price per gallon was about $37-$38.

But the 2012 maple syrup season in New England (Maine excepted) was ruined by unseasonable warmth. Here's the USDA description:

"The 2012 maple syrup season in New England was considered too warm. A series of heat waves in March ended the season for many, and resulted in a significant drop in maple syrup production. ... Mild winter temperatures got the 2012 season off to an unusually early start and many maple producers were caught off guard for the first sap runs in January and February. March temperatures were highly volatile with a historic heat wave that brought summer-like temperatures in the 70s and 80s across New England. The heat wave forced early budding of maple trees, marking the end of the maple syrup season. ... The sugar content of the sap was significantly below average in New England, requiring approximately 48 gallons of sap to produce 1 gallon of syrup. ... United States maple syrup production in 2012 totaled 1.91 million gallons, down 32 percent from 2011, and the lowest production since 2007. The number of taps was estimated at 9.77 million, 2 percent above the 2011 total of 9.58 million. Yield per tap was estimated at 0.195 gallons per tap, down 33 percent from the previous season’s yield."
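The USDA's figures are internally consistent: production should equal the number of taps times yield per tap. A quick check in Python, using the numbers as quoted:

```python
# Cross-check the USDA's 2012 maple syrup figures quoted above.
taps = 9.77e6            # number of taps
yield_per_tap = 0.195    # gallons of syrup per tap

production = taps * yield_per_tap
print(f"Implied production: {production / 1e6:.2f} million gallons")  # -> 1.91

# At roughly 48 gallons of sap per gallon of syrup, implied sap collected:
sap_ratio = 48
print(f"Implied sap collected: {production * sap_ratio / 1e6:.0f} million gallons")  # -> 91
```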

One might usually expect that the drop in U.S. maple syrup production would drive up the price. However, the price of maple syrup is largely determined by the Quebec Farmers' Union. Before the theft, there was plenty of maple syrup in reserve to buffer a bad year or two. But now, instead of prices rising with a bad harvest, it's possible that prices will fall as the maple syrup thieves seek to unload their booty on unsuspecting breakfast-eaters. However, the USDA does not yet have maple syrup prices available for this year. 

Are CEOs Overpaid?

To contact us Click HERE
Are CEOs overpaid? I confess that my knee-jerk answer to this question is "YES!" But Steve Kaplan makes a strong case for a more nuanced answer in "Executive Compensation and Corporate Governance in the U.S.: Perceptions, Facts and Challenges," published in July as Chicago Booth working paper 12-42. Here's a sketch of some of his arguments and graphs.

First, just as a matter of getting the facts straight, CEO pay relative to household income did spike back in the go-go days of the dot-com boom in the late 1990s, but it has come down since then. Kaplan argues that there are two valid ways to measure executive pay. One measure looks at actual pay received, which he argues is useful for seeing whether top executives are paid for performance. The other measure looks at "estimated" pay, which is the amount that pay packages would have been expected to be worth at the time they were granted. This calculation requires putting a value on stock options, restricted stock grants, and the like, and estimating what these were worth at the time the pay package was given. Kaplan argues that this measure is the appropriate one for looking at what corporate boards meant to do when they agreed on a compensation package.

Here's one figure showing actual average and median pay totals for S&P 500 CEOs from 1993 to 2010. Average pay is above median pay, which tells you that there are some very high-paid execs at the top pulling up the average. Also, average CEO pay spikes when the stock market is high, as in 2000 and around 2006 and 2007. Median realized pay seems to have crept up over time.
Here's a figure showing estimated pay--that is, the value of the pay packages when they were granted. But this time, instead of showing dollar amounts, this graph shows average and median CEO pay as a multiple of median household income. Average pay again spikes at the time of the dot-com boom. Kaplan emphasizes that estimated CEO pay is on average lower than in 2000 and that the median hasn't risen much. My eye is drawn to the fact that median pay for CEOs goes from something like 60 times median household income back in 1993 to about 170 times median household income by 2010.

An obvious question is whether these pay levels are distinctive for CEOs, or whether they are just one manifestation of widening income inequality across a range of highly-paid occupations. Kaplan makes a solid case that it is the latter. For example, here's a graph showing the average pay of the top 0.1% of the income distribution compared with the average pay of a large-company CEO. Again, the story is that CEO pay really did spike in the 1990s, but by this measure, CEO pay relative to the top 0.1% is now back to the levels common in the 1950s.

  

Kaplan also points out that pay at the top of other highly-paid occupations--lawyers, athletes, hedge fund managers--has grown dramatically as well. Here's a figure showing the pay of top hedge fund managers relative to that of CEOs in the last decade. Kaplan writes: "The top 25 hedge fund managers as a group regularly earn more than all 500 CEOs in the S&P 500. In other words, while public company CEOs are highly paid, other groups with similar backgrounds and talents have done at least equally well over the last fifteen years to twenty years. If one uses evidence of higher CEO pay as evidence of managerial power or capture, one must also explain why the other professional groups have had a similar or even higher growth in pay. A more natural interpretation is that the market for talent has driven a meaningful portion of the increase in pay at the top."

Kaplan also compiles evidence that CEOs of companies with better stock market performance tend to be paid more than those with poor stock market performance, and that CEOs have shorter job tenures. He writes: "Turnover levels since 1998 have been higher than in work that has studied previous periods. In any given year, one out of 6 Fortune 500 CEOs lose their jobs. This compares to one out of 10 in the 1970s. CEOs can expect to be CEOs for less time than in the past. If these declines in expected CEO tenures since 1998 are factored in, the effective decline in CEO pay since then is larger than reported above. ... And the CEO turnover is related to poor firm stock performance ..."

To me, Kaplan makes a couple of especially persuasive points: the run-up in CEO salaries was especially extreme during the 1990s, and less so since then (depending on how you measure it); and the run-up in CEO salaries reflects the rise in inequality across a wider swath of professions. While I believe the arguments that job tenure can be shorter for the modern CEO, especially if a company isn't performing well, it seems to me that most former CEOs don't plummet too many percentiles down the income distribution in their next job, so my sympathy for them is rather limited on that point.

In this paper, Kaplan doesn't seek to address the deeper question of why the pay for those at the very top, CEOs included, has risen so dramatically. While the demand for skills at the very top of the income distribution is surely part of the answer, I find it hard to believe that these rewards for skill increased so sharply in the 1990s--just coincidentally during a stock market boom. It seems likely to me that cozy institutional arrangements for many of those at the very top of the income distribution--CEOs, hedge fund managers, lawyers, and athletes and entertainers--also play an important role.

Labor's Smaller Share

To contact us Click HERE
Margaret Jacobson and Filippo Occhino have been investigating the fact that labor has been receiving a declining share of total economic output over the last few decades. I posted on their work last February in "Labor's Declining Share of Total Income."  Now they have written "Labor’s Declining Share of Income and Rising Inequality," which is "Economic Commentary" 2012-13 published by the Federal Reserve Bank of Cleveland.


The starting point is to look at labor income relative to the size of the economy. The top line in the figure shows labor income as a share of GDP, as measured in the national income and product accounts from the U.S. Bureau of Economic Analysis. The lower line in the figure shows the ratio of compensation to output for the nonfarm business sector, as measured by the U.S. Bureau of Labor Statistics. The measures are not identical, nor would one expect them to be, but they show the same trend: that is, with some ups and downs as the economy has fluctuated, the labor share of income has been falling for decades, and is now at an historically low figure.

This fact lies behind much of the rise in inequality of incomes over this time. The income that is not being earned by labor is being earned by capital--and capital income is much more concentrated than is labor income. Jacobson and Occhino offer an intriguing figure that measures the inequality of labor income and the inequality of capital income. The measure used here is a Gini coefficient, which "ranges between 0 and 1, with 0 indicating an equal distribution of income and 1 indicating unequal income." (Here's an earlier post with an explanation of Gini coefficients.)
The figure has two main takeaways. First, labor income has become more unequally distributed over time, but since the early 1990s, the big shift in income inequality is because capital income is more unequally distributed. Second, capital income tends to rise during booms and to fall in recessions. Thus, it seems plausible that the inequality of capital income has dropped in the last few years of the Great Recession and its aftermath, but will rise again as economic growth recovers.
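For readers who want a more concrete handle on the Gini coefficient, here is a minimal sketch in Python using the standard sorted-rank formula; the income vectors are made up purely for illustration:

```python
def gini(incomes):
    """Gini coefficient via the sorted-rank formula:
    G = 2 * sum(i * x_i) / (n * sum(x)) - (n + 1) / n, with ranks i = 1..n
    over incomes sorted in ascending order."""
    xs = sorted(incomes)
    n = len(xs)
    total = sum(xs)
    weighted = sum(rank * x for rank, x in enumerate(xs, start=1))
    return 2 * weighted / (n * total) - (n + 1) / n

print(gini([10, 10, 10, 10]))  # perfectly equal distribution -> 0.0
print(gini([0, 0, 0, 100]))    # one person earns everything  -> 0.75
```

With only four people, the maximum possible Gini is (n - 1)/n = 0.75; as the population grows, complete concentration approaches 1.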

What has caused the long-run decline of the labor share of income? Jacobson and Occhino explain this way: "[W]e begin by looking at what determines the labor share in the long run. The main factor is the technology available to produce goods and services. In competitive markets, labor and capital are compensated in proportion to their marginal contribution to production, so the most important factor behind the labor and capital shares is the marginal productivities of labor and capital, which are determined by technology. In fact, one important cause of the post-1980 long-run decline in the labor share was a technological change, connected with advances in information and communication technologies, which made capital more productive relative to labor, and raised the return to capital relative to labor compensation. Other factors that have played a role in the long-run decline in the labor share are increased globalization and trade openness, as well as changes in labor market institutions and policies."

There is no particular reason to believe that these trends will continue--or that they won't. But the declining share of income going to labor suggests the importance of finding ways to increase the marginal product of labor, especially for workers of low and medium skills, perhaps by focusing on the kind of training and networking that might help them make greater use of the advances in information and communication technology to improve their own productivity.



What's Up With the Dodd-Frank Legislation?

To contact us Click HERE
Back in July 2010, President Obama signed into law the Dodd-Frank Wall Street Reform and Consumer Protection Act. The difficulty with the law has always been that while it was fairly clear on its goals, it did not specify how to reach those goals--instead turning over that task to current and newly-created regulatory agencies.  If you're looking for an update on how the law is proceeding, a good starting point is the Third Quarter 2012 issue of Economic Perspectives, published by the Federal Reserve Bank of Chicago, which has six articles on the Dodd-Frank legislation.

Douglas D. Evanoff and William F. Moeller offer an overview of the goals and approach of the law in their opening piece (footnotes and citations omitted):

"The stated goals of the act were to provide for financial regulatory reform, to protect consumers and investors, to put an end to too-big-to-fail, to regulate the over-the-counter (OTC) derivatives markets, to prevent another financial crisis, and for other purposes. ... Implementation of Dodd–Frank requires the development of some 250 new regulatory rules and various mandated studies. There is also the need to introduce and staff a number of new entities (bureaus, offices, and councils) with responsibility to study, evaluate, and promote consumer protection and financial stability. Additionally, there is a mandate for regulators to identify and increase regulatory scrutiny of systemically important institutions. ... Two years into the implementation of the act, much has been done, but much remains to be done."

How are those rules coming along? The law firm of Davis Polk & Wardwell publishes a regular Dodd-Frank report. The September 2012 edition summarizes:

  • "As of September 4, 2012, a total of 237 Dodd-Frank rulemaking requirement deadlines have passed. This is 59.5% of the 398 total rulemaking requirements, and 84.6% of the 280 rulemaking requirements with specified deadlines."
  • "Of these 237 passed deadlines, 145 (61.2%) have been missed and 92 (38.8%) have been met with finalized rules. Regulators have not yet released proposals for 31 of the 145 missed rules."
  • "Of the 398 total rulemaking requirements, 131 (32.9%) have been met with finalized rules and rules have been proposed that would meet 135 (33.9%) more. Rules have not yet been proposed to meet 132 (33.2%) rulemaking requirements."
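The Davis Polk tallies hang together arithmetically; here is a quick Python check using the counts quoted above:

```python
# Reconcile the Davis Polk rulemaking counts as of September 4, 2012.
total_required = 398      # total rulemaking requirements in Dodd-Frank
with_deadlines = 280      # requirements with specified deadlines
deadlines_passed = 237    # deadlines that had already passed
missed, met = 145, 92     # of the passed deadlines
finalized, proposed, not_yet_proposed = 131, 135, 132

# Internal consistency of the counts:
assert missed + met == deadlines_passed
assert finalized + proposed + not_yet_proposed == total_required

print(f"{deadlines_passed / total_required:.1%}")   # -> 59.5% of all requirements
print(f"{deadlines_passed / with_deadlines:.1%}")   # -> 84.6% of deadline-bearing rules
print(f"{missed / deadlines_passed:.1%} missed, {met / deadlines_passed:.1%} met")
```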
The July 2012 Davis Polk update--the two-year anniversary of the legislation--offers some additional detail: "The two years since Dodd-Frank’s passage have seen 848 pages of statutory text expand to 8,843 pages of regulations. Already at almost a 1:10 page ratio, this staggering number represents only 30% of required rulemaking contained within Dodd-Frank, affecting every area of the financial markets and involving over a dozen Federal agencies."

It's important to recognize that writing a new regulation isn't as simple as, well, just writing it. Instead, there is often first an in-house study, followed by a draft regulation, which is then open to public comments, after which it can be revised, until eventually a final regulation is issued. It's not unusual for a regulation to draw dozens or hundreds of detailed public comments.

This blizzard of evolving rules has to create considerable uncertainty in the financial sector. Matthew Richardson discusses the complexities of one particular issue in his contribution to the Chicago Fed publication. He picks one example: the problem that many banks made very low-quality subprime mortgage loans. What does the Dodd-Frank legislation do about this basic issue? As he describes, the act: 1) sets up a Consumer Financial Protection Bureau in Title X to deal with misleading products; 2) imposes particular underwriting standards for residential mortgages; 3) requires firms performing securitization to retain at least 5 percent of the credit risk; and 4) increases regulation of credit rating agencies. Each of these tasks requires detailed rulemaking. And as Richardson points out, "with all of these new provisions, the act does not even address what we at NYU Stern consider to be a primary fault for the poor quality of loans—namely, the mispriced government guarantees in the system that led to price distortions and an excessive buildup of leverage and risky credit."

I'm skeptical of anyone who has strong opinions about the Dodd-Frank legislation, because here we are more than two years later, less than halfway toward figuring out what rules the legislation will actually put in place. Wayne A. Abernethy of the American Bankers Association is one of the authors in the Chicago Fed symposium. Yes, he is speaking for the bankers' point of view. But his judgment about the overall process seems fair to me:

"At least in the financial regulatory history of the United States, there has never been anything like it. I have seen no definitive count of the number of regulations that the Dodd–Frank Act calls forth. The numbers seem to range between 250 and 400—numbers so large that they are numbing. It all defies hyperbole. The Fair and Accurate Credit Transactions Act, adopted in 2003, astonished the financial industry with more than a dozen significant new regulations to be written. ...

"One of the most common criticisms of Dodd–Frank implementation has been a lack of order and coordination in the regulatory process. Instead, the Dodd–Frank Act has succeeded in replacing the financial crisis with a regulatory crisis.  ... As agencies are grappling with impossible rulemaking tasks, most of them are also engaged in major structural reorganizations and shifts in the areas of responsibility. ... Nothing like this has ever been tried before in the history of the United States. Writing 400 financial regulations of the highest significance and the greatest complexity in a couple of years has clearly been too much to expect. ... Getting on with the work to end our self-inflicted regulatory crisis should be among the highest priorities."
I'm someone who believes that financial regulation needed shaking up. Many of the broad goals of the Dodd-Frank legislation make sense to me: rethinking bank regulation to deal with macroeconomic risk, not just the risk of an individual institution going broke; figuring out better ways to shut down even large financial institutions when needed; better regulation of certain financial instruments like credit default swaps and repo agreements; a closer look at technologies that allow ultra-high-speed financial trading; and others.

The Dodd-Frank legislation is almost not a law in the conventional meaning of the term, because it mostly isn't about actual specific activities that are prohibited. Instead, it's about handing the difficult problems over to regulators and telling them to fix them. I'm not sure there was an easy alternative to this regulatory approach: the idea of Congress trying to debate, say, appropriate regulation of the over-the-counter swaps market is not an encouraging thought. But stating a goal is not the same as solving a problem. The passage of Dodd-Frank, in and of itself, didn't solve any problems.