Here is a classic problem of cause and effect. Teenagers who give birth are more likely to be from households with lower income levels. Also, teenagers who give birth tend to end up later in life in households with lower income levels. But does the lower income level cause teens to be more likely to give birth? Or does giving birth as a teen cause that woman to be more likely to end up in a lower-income household? How can one untangle cause and effect? Melissa S. Kearney and Phillip B. Levine tackle these questions in "Why is the Teen Birth Rate in the United States So High and Why Does It Matter?" which appears in the Spring 2012 issue of my own Journal of Economic Perspectives. They have lots of interesting comments to make about variation in teen birthrates across states and countries. Here, I'll focus on their analysis of the cause-and-effect question, which surprised me and offers a nice example of how economists try to disentangle these sorts of issues.
"Our reading of the totality of evidence leads us to conclude that being on a low economic trajectory in life leads many teenage girls to have children while they are young and unmarried and that poor outcomes seen later in life (relative to teens who do not have children) are simply the continuation of the original low economic trajectory. That is, teen childbearing is explained by the low economic trajectory but is not an additional cause of later difficulties in life. Surprisingly, teen birth itself
does not appear to have much direct economic consequence."
Conceptually, how would one tell whether giving birth as a teenager is a cause of lower future economic prospects? Just comparing life outcomes for teenage girls who give birth and those who don't will give you a correlation, but not causation. "A comparison of the outcomes of women who did and who did not give birth as teens is inherently biased by selection effects: teenage girls who “select” into becoming pregnant and subsequently giving birth (as opposed to choosing abortion) are different in terms of their background characteristics and potential future outcomes than teenage girls who delay childbearing." The problem is made more difficult because some of the background characteristics may be measurable in the data (like family income level, or ethnicity, or if it's a single-parent family) but many other characteristics are not available in the data (like the personality traits of the teenage girl or the values lived by the family).
In an ideal experiment, one might want a research design in which a random sample of teenagers becomes pregnant and gives birth, and then you could track the outcomes. Of course, randomized pregnancy is an impractical research design! But here are four approaches used by clever economists to disentangle this question of cause and effect.
A within-family approach. Look at life outcomes for sisters who give birth at different ages. The result of this kind of study is "once background characteristics are controlled for, the differences are quite modest. Furthermore, even these modest differences likely overstate the costs of teen childbearing, since the sister who gives birth as a teen is likely to be “negatively” selected compared
to her sister who does not."
Miscarriages. Of those teens who become pregnant, some will suffer miscarriages. Compare women who are similar in measured characteristics of family background, but some of whom gave birth as teenagers while others had a miscarriage. It turns out that their life outcomes look quite similar: that is, giving birth as a teenager doesn't appear to cause any additional decline in later life outcomes.
Age at first menstruation. Girls who menstruate earlier are at greater risk of becoming pregnant as teenagers. One can use a statistical approach to look at two groups of women who are similar in measured characteristics of family background, but where one group has a higher pregnancy rate because they began their menstrual cycle earlier. However, the life outcomes for these groups look quite similar: that is, a random chance of being more likely to give birth as a teenager (because of an earlier age of first menstruation) doesn't appear to cause any additional decline in later life outcomes.
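The menstruation-age strategy is what economists call an instrumental-variables design. A toy simulation (all numbers and names invented for illustration, not taken from the studies) shows the logic: a random shock that shifts the chance of a teen birth, but is unrelated to family background, can separate selection from causation.

```python
import random

random.seed(0)

# Toy simulation of the instrumental-variables logic sketched above.
# "early" stands in for early first menstruation; coefficients are invented.
n = 10_000
data = []
for _ in range(n):
    disadvantage = random.gauss(0, 1)      # unobserved low economic trajectory
    early = random.random() < 0.5          # instrument: unrelated to background
    # A teen birth becomes more likely with disadvantage AND with the instrument.
    teen_birth = (disadvantage + (0.8 if early else 0.0) + random.gauss(0, 1)) > 1.0
    # Later income depends on background only: teen birth has NO true effect here.
    income = 30_000 - 8_000 * disadvantage + random.gauss(0, 2_000)
    data.append((early, teen_birth, income))

def mean(xs):
    return sum(xs) / len(xs)

# Naive comparison: contaminated by selection on disadvantage.
naive = (mean([y for _, b, y in data if b])
         - mean([y for _, b, y in data if not b]))

# Wald/IV estimate: income gap by instrument over birth-rate gap by instrument.
inc_gap = (mean([y for z, _, y in data if z])
           - mean([y for z, _, y in data if not z]))
birth_gap = (mean([1.0 if b else 0.0 for z, b, _ in data if z])
             - mean([1.0 if b else 0.0 for z, b, _ in data if not z]))
iv = inc_gap / birth_gap

print(f"naive income gap: {naive:,.0f}")  # strongly negative: pure selection
print(f"IV income effect: {iv:,.0f}")     # near zero: no causal effect built in
```

In the actual studies the instrument shifts pregnancy risk rather than assigning it, which is exactly what the Wald ratio exploits: it only uses the part of the variation in teen births that comes from the instrument.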
Propensity scores. Look at girls within a certain school, so that they live in more-or-less the same neighborhood. Using the available data, develop a "propensity score" that measures how likely a girl is to give birth as a teenager. Then compare the life outcomes for girls with similar propensity scores, some of whom gave birth and some of whom did not. There doesn't seem to be a difference in life outcomes, again suggesting that giving birth as a teenager doesn't much alter other life outcomes.
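The propensity-score comparison can be sketched in the same spirit. In real studies the score is estimated from observed covariates (say, by logistic regression); in this toy simulation (all variables and coefficients invented) the true score is used directly, which keeps the sketch short:

```python
import math
import random

random.seed(1)

# Toy simulation of propensity-score stratification.
n = 20_000
rows = []
for _ in range(n):
    family_income = random.gauss(0, 1)       # observed background (standardized)
    single_parent = random.random() < 0.3    # observed background
    disadvantage = -family_income + (1.0 if single_parent else 0.0)
    # Chance of a teen birth rises with disadvantage (logistic form).
    p_birth = 1 / (1 + math.exp(-(disadvantage - 1.0)))
    birth = random.random() < p_birth
    # Later-life outcome depends on background only: no causal effect of birth.
    outcome = (50 + 10 * family_income - (5 if single_parent else 0)
               + random.gauss(0, 3))
    rows.append((p_birth, birth, outcome))

def gap(group):
    # Mean outcome for those who gave birth minus those who did not.
    yes = [y for _, b, y in group if b]
    no = [y for _, b, y in group if not b]
    return sum(yes) / len(yes) - sum(no) / len(no)

naive_gap = gap(rows)

# Stratify on the propensity score; compare births vs. non-births within strata.
rows.sort(key=lambda r: r[0])
k = 5
strata_gaps = [gap(rows[i * n // k:(i + 1) * n // k]) for i in range(k)]
avg_stratum_gap = sum(strata_gaps) / k

print(f"naive gap: {naive_gap:.1f}")                     # large: selection at work
print(f"avg within-stratum gap: {avg_stratum_gap:.1f}")  # much smaller
```

Comparing girls with similar propensity scores strips out the part of the outcome gap that is driven by measured background, which is why the within-stratum gaps shrink toward zero when, as here, the birth itself has no effect.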
Kearney and Levine sum up the evidence on cause and effect this way: "Taken as a whole, previous research has had considerable difficulty finding much evidence in support of the claim that teen childbearing has a causal impact on mothers and their children. Instead, at least a substantial majority of the observed correlation between teen childbearing and inferior outcomes is the result of underlying differences between those who give birth as a teen and those who do not."
Kearney and Levine also offer an unexpected (to me) perspective on policies to reduce teen pregnancy:
"Moreover, no silver bullet such as expanding access to contraception or abstinence education will solve this particular social problem. Our view is that teen childbearing is so high in the United States because of underlying social and economic problems. It reflects a decision among a set of girls to “drop-out” of the economic mainstream; they choose nonmarital motherhood at a young age instead of investing in their own economic progress because they feel they have little chance of advancement. This thesis suggests that to address teen childbearing in America will require addressing some difficult social problems: in particular, the perceived and actual lack of economic opportunity among those at the bottom of the economic ladder."
The statement about teenage girls "choosing" nonmarital motherhood should be understood not as a claim that all pregnant 15-year-olds carefully considered their life options and decided on pregnancy! Instead, the economists' view of choice is that we all make groups of choices every day--say, choices about exercise and calories consumed--that make certain outcomes more likely. Decisions that are not well-considered, or that raise the risk of undesired side effects, still have a large ingredient of choice. For example, we typically view those who drive drunk as having made a "choice."
The cause-and-effect evidence here suggests that for many women who give birth as teenagers, their life outcomes like level of education achieved, income, employment, and chance of marriage are already so constrained that they are not made worse off by having a child as a teenager. Encouragement about contraception or abstinence can help reduce teen pregnancy on the margin. But what many teen girls from low socioeconomic status backgrounds need is a reduced prospect of marginalization, and a greater chance for personal and economic advancement.
Wednesday, May 30, 2012
Occupational Licensing and Low-Income Jobs
Pretty much everything I know about the economics of occupational licensing I learned from Morris Kleiner, a colleague from the days when I was based at the Humphrey School at the University of Minnesota. Morrie lays out many of the issues here in a Fall 2000 article in my own Journal of Economic Perspectives, as well as in his 2006 book, Licensing Occupations: Ensuring Quality or Restricting Competition?
He points out that nearly one-third of the U.S. labor force works in jobs where some form of government license is a requirement. Some of the largest occupations that require licenses include teachers, nurses, engineers, accountants, and lawyers. Occupational licensing poses a potential tradeoff: on one side, requiring licenses offers a promise of a reliably high quality of service; on the other side, requiring licenses is a barrier to entry that tends to reduce the quantity of jobs in that occupation but increase the wage. Kleiner and others investigate this subject by looking at differences in licensing requirements for a certain occupation across states, and searching for evidence of wage and quality differences. A typical finding is that the wage differences are readily perceptible, but the quality differences are not. Licensing is distinguishable from certification: with certification, you are free to hire someone who doesn't possess the certification if you like, but with licensing, hiring someone without the license is illegal. As an example, travel agents and mechanics are often certified, but they are typically not licensed.
Dick M. Carpenter II, Ph.D., Lisa Knepper, Angela C. Erickson and John K. Ross focus on documenting differences between states in 102 of the job categories counted by the Bureau of Labor Statistics that require a license in at least one state and that pay below-average wages. They report the results in License to Work: A National Study of Burdens from Occupational Licensing, a report from the Institute for Justice. They make the case, in an indirect way, that many of these occupational rules are more about limiting competition than about quality of service: they point out that licensing rules about fees, training, exams, minimum age, and minimum schooling vary enormously across states, with no particular evidence that reliability or safety are worse in states with lesser or no licensing requirements. The report goes into state-by-state and occupation-by-occupation detail, but here are some summary comments:
"The need to license any number of the occupations in this sample defies common sense. A short list would include interior designers, shampooers, florists, upholsterers, home entertainment installers, funeral attendants, auctioneers and interpreters for the deaf. Most of these occupations are licensed in just a handful of states; interpreters are licensed in only 16 states, while auctioneers are licensed in 33. If, as licensure proponents often claim, a license is required to protect the public health and safety, one would expect more consistency. For example, only five states require licenses for shampooers, but it is highly unlikely that conditions in those five states are any different ..."
"Quite literally, EMTs [emergency medical technicians] hold lives in their hands, yet 66 other occupations have greater average licensure burdens than EMTs. This includes interior designers, barbers and cosmetologists, manicurists and a host of contractor designations. By way of perspective, the average cosmetologist spends 372 days in training; the average EMT a mere 33."
"Licensure irrationalities are doubly evident in the inconsistencies by burden across states. Looking again at manicurists, while 10 states require four months or more of training, Alaska demands only about three days and Iowa about nine days. It seems unlikely that aspiring manicurists in Alabama (163 days) and Oregon (140 days) truly need so much more time in training. But manicurists are not alone. The education and experience requirements for animal trainers range from zero to almost 1,100 days, or three years. And for vegetation pesticide handlers, training obligations range from zero to 1,460 days, or four years, with fees up to $350. This high degree of variation is prevalent throughout
the occupations. Thirty-nine of them have differences of more than 1,000 days between the minimum and maximum number of days required for education and experience. And another 23 occupations have differences of more than 700 days."
"Finally, irrationalities are particularly notable when few states license an occupation but do so onerously. One clear example is interior design, the most difficult of the 102 occupations to enter, yet licensed in only three states and D.C. Another is social service assistants, the fourth most difficult occupation to enter. It requires nearly three-and-a-half years of training but is only licensed in six states and D.C. Dietetic technicians must spend 800 days in education and training, making for the eighth most burdensome requirements, but they are licensed in only three states. Home entertainment installers must have about eight months of training on average, but only in three states. The seven states that license tree trimmers require, on average, more than a year of training."
"The 102 occupational licenses studied require of aspiring workers, on average, $209 in fees, one exam and about nine months of education and training. ... Thirty-five occupations require more
than a year of education and training, on average, and another 32 require three to nine months. At least one exam is required for 79 of the occupations. ... Particularly noteworthy is the percentage of low- and middle-income workers with less than a high school diploma—15.7 percent. As documented below, a number of the 102 occupations studied require the completion of at least 12th grade, a requirement that effectively bans a substantial number of people from those occupations."
"[S]even of the 102 occupations studied are licensed in all 50 states and the District of Columbia:
pest control applicator, vegetation pesticide handler, cosmetologist, EMT, truck driver, school bus driver and city bus driver. Another eight occupations are licensed in 40 to 50 states. Thus, the vast majority of these occupations are licensed in fewer than 40 states, and five are licensed in only
one state each: florist, forest worker, fire sprinkler system tester, conveyor operator and non-contractor pipelayer. On average, the occupations on this list are licensed in about 22 states."
My own guess is that the politics of passing state-level occupational licensing laws is driven by three factors: 1) lobbying by those who already work in the occupation to limit competition; 2) passing laws in response to wildly unrepresentative anecdotes of terrible or dangerous service; and 3) the tendency, when setting standards, to feel that more is better. But in a U.S. economy that is hurting for job creation, especially jobs for low-income workers, states should be seriously rethinking many of their occupational licensing rules. Many would be better replaced with lower standards, certification rather than licenses, or even no licenses at all.
Why Does the U.S. Spend More on Health Care than Other Countries?
Everyone knows that the U.S. spends far more on health care than other countries, but do you know how much more? In 2009, the U.S. spent 17.4% of GDP on health care (using OECD data). The closest contenders are Netherlands (12% of GDP), France (11.8%), Germany (11.6%), Denmark (11.5%), and Canada (11.4%). The U.S. has higher per capita GDP than these countries, so the gap in absolute spending is even higher. In 2009, the U.S. spent $7,960 per person on health care, and the closest contenders were Switzerland ($5,144 per person) and Netherlands ($4,914).
When I hear people argue that the U.S. should follow the path of the UK health care system, I sometimes find myself thinking: "You mean that U.S. health care spending per person should be slashed by 56%, from $7,960 per person to $3,487 per person? Really?"
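The gaps in those figures are easy to verify with quick arithmetic on the 2009 OECD numbers quoted above:

```python
# 2009 OECD figures quoted in the text.
us_share, nl_share = 17.4, 12.0              # health spending as % of GDP
us_pc, swiss_pc, uk_pc = 7960, 5144, 3487    # per capita spending, USD

gdp_gap = (us_share - nl_share) / nl_share   # U.S. vs. next-highest GDP share
pc_gap = (us_pc - swiss_pc) / swiss_pc       # U.S. vs. closest per capita rival
uk_cut = (us_pc - uk_pc) / us_pc             # the "slashed by 56%" figure

print(f"GDP-share gap vs. Netherlands:    {gdp_gap:.0%}")  # 45%
print(f"Per capita gap vs. Switzerland:   {pc_gap:.0%}")   # 55%
print(f"Cut implied by UK spending level: {uk_cut:.0%}")   # 56%
```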
What accounts for these differences in health care spending across countries? David Squires assembles some of the evidence in "Explaining High Health Care Spending in the United States: An International Comparison of Supply, Utilization, Prices and Quality," a May 2012 "issue brief" written for the Commonwealth Fund. I ran across it here at Larry Willmore's Thought du Jour blog. I'll also contrast and compare it with a paper by David M. Cutler and Dan P. Ly, "The (Paper)Work of Medicine: Understanding International Medical Costs," which appeared in the Spring 2011 issue of my own Journal of Economic Perspectives. For readability, footnotes and references to exhibits are omitted from the quotations below.
Higher U.S. health care spending is not because Americans on average are notably less healthy.
As Squires sums up: "U.S. has smaller elderly population and fewer smokers, but higher obesity rates. ... Higher rates of obesity undoubtedly inflate health spending; one study estimates the medical costs attributable to obesity in the U.S. reached almost 10 percent of all medical spending in 2008. However, the younger population and lower rates of smoking likely have an opposite effect, reducing U.S. health care spending relative to most other countries."
Higher U.S. health care spending is not because the U.S. has more doctors or hospital beds.
"There were 2.4 physicians per 1,000 population in the U.S. in 2009, fewer than in all other study countries except Japan. Likewise, patients had fewer doctor consultations in the U.S. (3.9 per capita)
than in any other country except Sweden. Hospital supply and use showed similar trends, with the U.S. having fewer hospital beds (2.7 per 1,000 population), shorter lengths of stay for acute care (5.4
days), and fewer discharges (131 per 1,000 population) than the OECD median ..."
Prices for brand-name drugs are much higher in the U.S., but generics are cheaper.
Squires writes: "[P]rices for the 30 most-commonly prescribed drugs are one-third higher than in Canada and Germany, and more than double the prices in Australia, France, Netherlands, New Zealand, and the U.K. Notably, prices for generic drugs are lower in the U.S. than in these other countries, whereas prices for brand-name drugs are much higher."
Cutler and Ly confirm this general pattern, but also put the potential cost savings in perspective: "However, because pharmaceuticals are only about 10 percent of U.S. healthcare spending, the overall amount that could be saved by moving to U.S. government monopsony purchasing of drugs
is relatively small—perhaps 20 to 30 percent of pharmaceutical spending, or 2 to 3 percent of total medical costs. These cost savings also would have to be weighed against the possibility of reduced incentives for investment and innovation in the pharmaceutical industry. The dollar amount of excess pharmaceutical payments in the United States is approximately the total amount of pharmaceutical company research and development (R&D)."
U.S. doctors are paid more, but they also live in an economy with a more unequal distribution of wages.
Squires writes: "U.S. primary care physicians generally receive higher fees for office visits and orthopedic physicians receive higher fees for hip replacements than in Australia, Canada, France, Germany, and the U.K. ... U.S. primary care doctors ($186,582) and particularly orthopedic doctors ($442,450) earned greater income than in the other five countries ..."
Cutler and Ly confirm: "The average U.S. specialist physician earns $230,000 annually—
78 percent above the average in other countries ... . Primary care physicians earn less (they earn $161,000 on average), but the same percentage more than their peers in other countries. ... If we reduced all physician incomes in the United States to match the international ratio of physicians’ incomes to per capita GDP, U.S. healthcare spending would be lower by roughly 2 percent. However, these seemingly high salaries for U.S. physicians appear less high in the context of the broader income distribution." Cutler and Ly go on to point out that high-compensation workers in the U.S. economy earn more than their international counterparts in just about every profession--after all, that's part of what it means to say that the U.S. has a less equal distribution of income.
Some medical device technologies like scanning are more widely used in the U.S; some like hip replacements are not.
"In 2009, the U.S., along with Germany, performed the most knee replacements (213 per 100,000
population) among the study countries, and 75 percent more knee replacements than the OECD median (122 per 100,000 population). However, the U.S. performed barely more hip replacements than the OECD median, and significantly less than several of the other study countries ..."
"Relative to the other study countries where data were available, there were an above-average
number of magnetic resonance imaging (MRI) machines (25.9 per million population), computed
tomography (CT) scanners (34.3 per million), positron emission tomography (PET) scanners (3.1 per million), and mammographs (40.2 per million) in the U.S. in 2009. Utilization of imaging was also highest in the U.S., with 91.2 MRI exams and 227.9 CT exams per 1,000 population. MRI and CT devices were most prevalent in Japan, though no utilization data were available for that country. ... [T]he U.S. commercial average diagnostic imaging fees ($1,080 for an MRI and $510 for a CT exam) are far higher than what is charged in almost all of the other countries ..."
The U.S. does a relatively poor job of managing chronic disease.
Squires writes: "[Consider] rates of potentially preventable mortality due to asthma (for those between ages 5 and 39) and lower-extremity amputations due to diabetes per 100,000 population. On both measures, the U.S. had among the highest rates, suggesting a failure to effectively manage these chronic conditions that make up an increasing share of the disease burden."
Many chronic diseases share the general property that if they are well-managed every single day, with a combination of drugs, lifestyle, and certain kinds of monitoring of physical conditions, it is possible to reduce the need for enormously costly episodes of hospitalization. As the Centers for Disease Control puts it: "Chronic diseases—such as heart disease, cancer, and diabetes—are the leading causes of death and disability in the United States. Chronic diseases account for 70% of all deaths in the U.S., which is 1.7 million each year. These diseases also cause major limitations in daily living for almost 1 out of 10 Americans ...."
Prices for hospital stays are substantially higher in the U.S.
Squires points out: "[H]ospital stays in the U.S. were far more expensive than in the other study countries, exceeding $18,000 per discharge compared with less than $10,000 in Sweden, Australia, New Zealand, France, and Germany." And remember, these higher costs per hospital stay happen even though the stays themselves are on average shorter in the U.S.
The tougher question is to what extent these higher costs per hospital stay reflect a larger quantity of concentrated and effective high-tech care being provided, and to what extent it's just a matter of higher prices. The evidence here is mixed. It does appear that for some conditions, Americans receive more hospital care. Cutler and Ly write: "Americans also receive more-intensive care than do Canadians. While the population-adjusted hospital admission rates are about the same in the two countries, additional procedures are provided to those with the same diagnosis in the United States. For example, people with a heart attack in the United States are twice as likely to receive bypass surgery or angioplasty than are similar people in Canada." When it comes to cancer survival rates, Squires points out: "The U.S. had the highest survival rates among the study countries for breast cancer (89%) and, along with Norway, for colorectal cancer (65%)."
On the other side, the more aggressive use of heart surgery in the U.S. as compared to Canada doesn't seem to mean better health outcomes; instead, it reflects the existence of more heart-surgery facilities. Cutler and Ly: "On one side, the greater use of intensive therapies after a heart attack in the United States compared to Canada is not associated with improved mortality, though morbidity is more difficult to determine. Similarly, a recent study concluded that there was no systematic difference in outcomes in favor of the United States over Canada; if anything, Canadians had better outcomes in most circumstances ... [T]he province of Ontario has 11 open-heart surgery facilities, while the state of Pennsylvania, with roughly the same population as Ontario, has more than five times the number of heart surgery facilities. California is three times larger in population but has 10 times the number of heart surgery facilities. Given this difference in the number of facilities, it is simply impossible for physicians in Ontario to perform as many open heart surgery operations as those in Pennsylvania or California."
Also, not all cancer survival rates are better in the U.S. Squires writes: "However, at 64 percent, the survival rate for cervical cancer in the U.S. was worse than the OECD median (66%), and well below the 78 percent survival rate in Norway—indicating significant room for improvement."
Administrative costs of health care are much higher in the U.S.
Squires doesn't mention this point, but it is a main emphasis for Cutler and Ly, who argue that much of the U.S. gap in administrative spending traces to the paperwork of credentialing, documenting, and billing.
Conclusion
The question of why the U.S. spends more than 50% more per person on health care than the next-highest countries (Switzerland and the Netherlands), and more than double per person what many other countries spend, may never have a simple answer. Still, the main ingredients of an answer are becoming clearer. The U.S. spends vastly more on hospitalization and acute care, with a substantial share of that going to high-tech procedures like surgery and imaging. The U.S. does a poor job of managing chronic conditions, which then lead to episodes of costly hospitalization. The U.S. also seems to spend vastly more on administration and paperwork, with much of that related to credentialing, documenting, and billing--which is again a particularly important issue in hospitals. Any honest effort to come to grips with high and rising U.S. health care costs will have to tackle these factors head-on.
When I hear people argue that the U.S. should follow the path of the UK health care system, I sometimes find myself thinking: "You mean that U.S. health care spending per person should be slashed by 56%, from $7,960 per person to $3,487 per person? Really?"
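That arithmetic is easy to check directly (both per-person figures are the ones quoted above):

```python
# Back-of-envelope check of the quoted spending gap between the U.S. and UK.
us_spending = 7960   # U.S. health care spending per person, dollars
uk_spending = 3487   # UK health care spending per person, dollars

cut = (us_spending - uk_spending) / us_spending
print(f"Implied cut to match the UK: {cut:.0%}")  # roughly 56%
```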
What accounts for these differences in health care spending across countries? David Squires assembles some of the evidence in "Explaining High Health Care Spending in the United States: An International Comparison of Supply, Utilization, Prices and Quality," a May 2012 "issue brief" written for the Commonwealth Fund. I ran across it here at Larry Willmore's Thought du Jour blog. I'll also compare and contrast it with a paper by David M. Cutler and Dan P. Ly, "The (Paper)Work of Medicine: Understanding International Medical Costs," which appeared in the Spring 2011 issue of my own Journal of Economic Perspectives. For readability, footnotes and references to exhibits are omitted from the quotations below.
Higher U.S. health care spending is not because Americans on average are notably less healthy.
As Squires sums up: "U.S. has smaller elderly population and fewer smokers, but higher obesity rates. ... Higher rates of obesity undoubtedly inflate health spending; one study estimates the medical costs attributable to obesity in the U.S. reached almost 10 percent of all medical spending in 2008. However, the younger population and lower rates of smoking likely have an opposite effect, reducing U.S. health care spending relative to most other countries."
Higher U.S. health care spending is not because the U.S. has more doctors or hospital beds.
"There were 2.4 physicians per 1,000 population in the U.S. in 2009, fewer than in all other study countries except Japan. Likewise, patients had fewer doctor consultations in the U.S. (3.9 per capita) than in any other country except Sweden. Hospital supply and use showed similar trends, with the U.S. having fewer hospital beds (2.7 per 1,000 population), shorter lengths of stay for acute care (5.4 days), and fewer discharges (131 per 1,000 population) than the OECD median ..."
Prices for brand-name drugs are much higher in the U.S., but generics are cheaper.
Squires writes: "[P]rices for the 30 most-commonly prescribed drugs are one-third higher than in Canada and Germany, and more than double the prices in Australia, France, Netherlands, New Zealand, and the U.K. Notably, prices for generic drugs are lower in the U.S. than in these other countries, whereas prices for brand-name drugs are much higher."
Cutler and Ly confirm this general pattern, but also put the potential cost savings in perspective: "However, because pharmaceuticals are only about 10 percent of U.S. healthcare spending, the overall amount that could be saved by moving to U.S. government monopsony purchasing of drugs is relatively small—perhaps 20 to 30 percent of pharmaceutical spending, or 2 to 3 percent of total medical costs. These cost savings also would have to be weighed against the possibility of reduced incentives for investment and innovation in the pharmaceutical industry. The dollar amount of excess pharmaceutical payments in the United States is approximately the total amount of pharmaceutical company research and development (R&D)."
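Their arithmetic can be restated in two lines (both percentages are from the quote):

```python
# Cutler and Ly's point: drugs are ~10% of U.S. health spending, so even
# 20-30% price savings on drugs stay small as a share of total costs.
pharma_share = 0.10            # pharmaceuticals as a share of total U.S. health spending
price_savings = (0.20, 0.30)   # plausible range of savings on drug prices

low, high = (pharma_share * s for s in price_savings)
print(f"Savings as a share of total medical costs: {low:.0%} to {high:.0%}")  # 2% to 3%
```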
U.S. doctors are paid more, but they also live in an economy with a more unequal distribution of wages.
Squires writes: "U.S. primary care physicians generally receive higher fees for office visits and orthopedic physicians receive higher fees for hip replacements than in Australia, Canada, France, Germany, and the U.K. ... U.S. primary care doctors ($186,582) and particularly orthopedic doctors ($442,450) earned greater income than in the other five countries ..."
Cutler and Ly confirm: "The average U.S. specialist physician earns $230,000 annually—78 percent above the average in other countries ... . Primary care physicians earn less (they earn $161,000 on average), but the same percentage more than their peers in other countries. ... If we reduced all physician incomes in the United States to match the international ratio of physicians’ incomes to per capita GDP, U.S. healthcare spending would be lower by roughly 2 percent. However, these seemingly high salaries for U.S. physicians appear less high in the context of the broader income distribution." Cutler and Ly go on to point out that high-compensation workers in the U.S. economy earn more than their international counterparts in just about every profession--after all, that's part of what it means to say that the U.S. has a less equal distribution of income.
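Working backward from the quoted 78 percent premium gives the implied average incomes in the comparison countries (a back-of-envelope calculation of mine, not a figure reported in the paper):

```python
# Implied international averages behind Cutler and Ly's 78% premium.
# The two U.S. salary figures are from the quote; the foreign averages
# are back-calculated here.
us_specialist = 230_000
us_primary = 161_000
premium = 0.78   # U.S. physician incomes ~78% above the average elsewhere

foreign_specialist = us_specialist / (1 + premium)
foreign_primary = us_primary / (1 + premium)
print(f"Implied foreign averages: ~${foreign_specialist:,.0f} (specialist), "
      f"~${foreign_primary:,.0f} (primary care)")
```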
Some medical device technologies like scanning are more widely used in the U.S; some like hip replacements are not.
"In 2009, the U.S., along with Germany, performed the most knee replacements (213 per 100,000 population) among the study countries, and 75 percent more knee replacements than the OECD median (122 per 100,000 population). However, the U.S. performed barely more hip replacements than the OECD median, and significantly less than several of the other study countries ..."
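The "75 percent more" figure checks out against the two quoted rates:

```python
# Checking the knee-replacement comparison from the quote.
us_rate = 213       # knee replacements per 100,000 population (U.S. and Germany)
oecd_median = 122   # OECD median per 100,000 population

excess = us_rate / oecd_median - 1
print(f"U.S. rate exceeds the OECD median by {excess:.0%}")  # about 75%
```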
"Relative to the other study countries where data were available, there were an above-average number of magnetic resonance imaging (MRI) machines (25.9 per million population), computed tomography (CT) scanners (34.3 per million), positron emission tomography (PET) scanners (3.1 per million), and mammographs (40.2 per million) in the U.S. in 2009. Utilization of imaging was also highest in the U.S., with 91.2 MRI exams and 227.9 CT exams per 1,000 population. MRI and CT devices were most prevalent in Japan, though no utilization data were available for that country. ... [T]he U.S. commercial average diagnostic imaging fees ($1,080 for an MRI and $510 for a CT exam) are far higher than what is charged in almost all of the other countries ..."
The U.S. does a relatively poor job of managing chronic disease.
Squires writes: "[Consider] rates of potentially preventable mortality due to asthma (for those between ages 5 and 39) and lower-extremity amputations due to diabetes per 100,000 population. On both measures, the U.S. had among the highest rates, suggesting a failure to effectively manage these chronic conditions that make up an increasing share of the disease burden."
Many chronic diseases share the general property that if they are well-managed every single day, with a combination of drugs, lifestyle, and certain kinds of monitoring of physical conditions, it is possible to reduce the need for enormously costly episodes of hospitalization. As the Centers for Disease Control puts it: "Chronic diseases—such as heart disease, cancer, and diabetes—are the leading causes of death and disability in the United States. Chronic diseases account for 70% of all deaths in the U.S., which is 1.7 million each year. These diseases also cause major limitations in daily living for almost 1 out of 10 Americans ...."
Prices for hospital stays are substantially higher in the U.S.
Squires points out: "[H]ospital stays in the U.S. were far more expensive than in the other study countries, exceeding $18,000 per discharge compared with less than $10,000 in Sweden, Australia, New Zealand, France, and Germany." And remember, these higher costs per hospital stay happen even though the stays themselves are on average shorter in the U.S.
The tougher question is to what extent these higher costs per hospital stay reflect a larger quantity of concentrated and effective high-tech care being provided, and to what extent it's just a matter of higher prices. The evidence here is mixed. It does appear that for some conditions, Americans receive more hospital care. Cutler and Ly write: "Americans also receive more-intensive care than do Canadians. While the population-adjusted hospital admission rates are about the same in the two countries, additional procedures are provided to those with the same diagnosis in the United States. For example, people with a heart attack in the United States are twice as likely to receive bypass surgery or angioplasty than are similar people in Canada." When it comes to cancer survival rates, Squires points out: "The U.S. had the highest survival rates among the study countries for breast cancer (89%) and, along with Norway, for colorectal cancer (65%)."
On the other side, the more aggressive use of heart surgery in the U.S. as compared to Canada doesn't seem to mean better health outcomes; instead, it reflects the existence of more heart-surgery facilities. Cutler and Ly: "On one side, the greater use of intensive therapies after a heart attack in the United States compared to Canada is not associated with improved mortality, though morbidity is more difficult to determine. Similarly, a recent study concluded that there was no systematic difference in outcomes in favor of the United States over Canada; if anything, Canadians had better outcomes in most circumstances ... [T]he province of Ontario has 11 open-heart surgery facilities, while the state of Pennsylvania, with roughly the same population as Ontario, has more than five times the number of heart surgery facilities. California is three times larger in population but has 10 times the number of heart surgery facilities. Given this difference in the number of facilities, it is simply impossible for physicians in Ontario to perform as many open heart surgery operations as those in Pennsylvania or California."
Also, not all cancer survival rates are better in the U.S. Squires writes: "However, at 64 percent, the survival rate for cervical cancer in the U.S. was worse than the OECD median (66%), and well below the 78 percent survival rate in Norway—indicating significant room for improvement."
Administrative costs of health care are much higher in the U.S.
Squires doesn't mention this point, but it is a main emphasis for Cutler and Ly. They write:
"[T]he U.S. healthcare system is in great need of administrative simplification. There are few other areas of the U.S. economy where waste is so apparent and the possibility of savings is so tangible. ... Perhaps the most troubling difference between the U.S. and Canadian healthcare systems is the differential amount spent on administration. For every office-based physician in the United States, there are 2.2 administrative workers. That exceeds the number of nurses, clinical assistants, and technical staff put together. One large physician group in the United States estimates that it spends 12 percent of revenue collected just collecting revenue. Canada, by contrast, has only half as many administrative workers per office-based physician. The situation is no better in hospitals. In the United States, there are 1.5 administrative personnel per hospital bed, compared to 1.1 in Canada. Duke University Hospital, for example, has 900 hospital beds and 1,300 billing clerks. On top of this are the administrative workers in health insurance. Health insurance administration is 12 percent of premiums in the United States and less than half that in Canada.
"International comparisons of medical care occupations are difficult, but they suggest that the United States has more administrative personnel than other countries do. ... [T]he United States has 25 percent more healthcare administrators than the United Kingdom, 165 percent more than the Netherlands, and 215 percent more than Germany. The number of clerks of all forms (including data entry clerks) is much higher in the United States as well."
"What are all these administrative personnel doing? ... One part is credentialing—receiving permission to practice medicine in a particular hospital or for a particular health plan. The average physician submits 18 credentialing applications annually—each insurer, hospital, ambulatory surgery facility, and the like, requires a different one—consuming 70 minutes of staff time and 11 minutes of physician time per application. Verifying eligibility for services is also costly. Insurance information must be verified for 20 to 30 patients daily, including three or four patients for whom verification must be sought orally. Because people change insurance plans frequently and the cost-sharing they are charged varies with plan and with past utilization (for example, how much of the deductible have they spent?), the determination of what to charge a patient is especially difficult. ... Finally, significant time is spent on billing and payment collection. On average, about three claims are denied per physician per week and need to be rebilled. ... Three-quarters of denied bills are ultimately paid, but the administrative cost of securing the payment is very high. Provider groups in the United States employ 770 full-time equivalent workers per $1 billion collected, compared to an average in other U.S. industries of about 100. By all indications, the administrative burden is rising over time as insurance policies have become more complex, while the technology of administration has not kept pace."
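The billing-staff comparison at the end of that quote can be restated as a ratio (both figures are from the quote; the "excess workers" framing is mine):

```python
# Collection staffing in health care vs. a typical U.S. industry,
# per Cutler and Ly's figures.
provider_fte_per_billion = 770   # FTE workers per $1 billion collected, provider groups
other_industry_fte = 100         # typical FTE per $1 billion in other U.S. industries

ratio = provider_fte_per_billion / other_industry_fte
excess = provider_fte_per_billion - other_industry_fte
print(f"Health care uses {ratio:.1f}x the collection staff, "
      f"or {excess} extra workers per $1 billion collected")
```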
Conclusion
The question of why the U.S. spends more than 50% more per person on health care than the next highest countries (Switzerland and the Netherlands), and more than double per person what many other countries spend, may never have a simple answer. Still, the main ingredients of an answer are becoming clearer. The U.S. spends vastly more on hospitalization and acute care, with a substantial share of that going to high-tech procedures like surgery and imaging. The U.S. does a poor job of managing chronic conditions, which then lead to episodes of costly hospitalization. The U.S. also seems to spend vastly more on administration and paperwork, with much of that related to credentialing, documenting, and billing--which is again a particularly important issue in hospitals. Any honest effort to come to grips with high and rising U.S. health care costs will have to tackle these factors head-on.
The Shifting U.S.-China Trade Picture
The standard story of U.S.-China trade over the last decade or so goes like this: Huge U.S. trade deficits, matched by huge Chinese trade surpluses. One underlying reason for the imbalance is that China has been acting to hold its exchange rate unrealistically low, which means that within-China production costs are lower compared to the rest of the world, thus encouraging exports from China, and outside-China production costs are relatively high, thus discouraging imports into China.
This story is simplified, of course, but it holds a lot of truth. But it's worth pointing out that these developments are fairly recent--really just over a portion of the last decade--and not a pattern that China has been following since its period of rapid growth started back around 1980. In addition, these developments seem to be reversing: the U.S. trade deficit is falling, China's trade surplus is declining, and China's exchange rate is appreciating. Here's the evolution in graphs that I put together using the ever-helpful FRED website from the St. Louis Fed.
First, here's a look at China's balance of trade over time. The top graph shows China's current account balance since 1980, roughly when China's process of economic growth got underway. Notice that China ran a trade balance fairly close to zero until the early 2000s, when the surpluses took off. Because the first graph stops in 2010, the second graph shows China's trade balance from 2010 through the third quarter of 2011. Clearly, China's trade surplus has dropped in the last few years, and in all likelihood will be lower in 2011 than in 2010.


China's pattern of trade surpluses loosely follows its exchange rate. Here is China's exchange rate vs. the U.S. dollar since 1980. When an economy is experiencing extremely rapid productivity growth, the expected economic pattern is that its currency will appreciate over time--that is, become more valuable. However, as China's growth took off in the 1980s and into the 1990s, its currency depreciated--on the graph, it took more Chinese yuan to equal $1 U.S. than before. In about 1994, there is an especially sharp depreciation of the yuan, as it went very quickly from about 5.8 yuan/$1 to about 8.6 yuan/$1. During the booming U.S. economy of the late 1990s, this change had relatively little effect on the balance of trade, but by the early 2000s, it began to pump up China's trade surplus. However, notice also that the yuan/$ rate has dropped quite substantially over the last few years--that is, the yuan has appreciated--with much of the change coming before the Great Recession hit (U.S. recessions are shown with shaded gray vertical bands in the figure).
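The size of that 1994 move can be expressed either way (the two exchange rates are from the text; the dual framing is mine):

```python
# The 1994 yuan devaluation, stated from both sides of the exchange rate.
before = 5.8   # yuan per U.S. dollar, before the devaluation
after = 8.6    # yuan per U.S. dollar, after

dollar_gain = after / before - 1   # the dollar buys more yuan
yuan_loss = 1 - before / after     # the yuan buys fewer dollars
print(f"Dollar up {dollar_gain:.0%} vs. the yuan; "
      f"yuan down {yuan_loss:.0%} vs. the dollar")
```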

The U.S. balance of trade in the last decade or so looks like China's pattern, in reverse. As China's trade surplus takes off around 2000 or so, the U.S. trade balance plunges deeper into deficit at about that time. As China's trade surplus diminishes in the last few years, the U.S. trade deficit also diminishes.

So what about the story with which I started: an overvalued Chinese currency, leading to huge Chinese trade surpluses and correspondingly huge U.S. trade deficits? At a minimum, the story is much less true than it was a few years back. Indeed, William R. Cline and John Williamson at the Peterson Institute for International Economics argue that the U.S.-China exchange rate has largely returned to the fundamental value justified by productivity and price differences between the two economies. Their argument appears in a May 2012 Policy Brief called "Estimates of Fundamental Equilibrium Exchange Rates, May 2012."
They point out that China's trade surpluses are likely to be much smaller than the IMF, for example, was predicting a few years ago. And while they believe China's currency is still slightly undervalued, and needs to continue appreciating over time, they estimate that its current value is not far from their "fundamental equilibrium exchange rate" or FEER. They write:
"China is still judged undervalued by about 3 percent ... Thus, whereas a year ago we estimated that the renminbi needed to rise 16 percent in real effective terms and 28.5 percent bilaterally against the dollar (in a general realignment to FEERs), the corresponding estimates now are 2.8 and 7.7 percent, respectively. It is entirely possible that future appreciation will bring the surplus [China's trade surplus] down to less than 3 percent of GDP. But China still has fast productivity growth in the tradable goods industries, which implies that a process of continuing appreciation is essential to maintain its current account balance at a reasonable level."

In short, the episode of an overvalued Chinese currency driving huge trade imbalances may be largely behind us. The current U.S. trade deficits are thus more rooted in an economy which continues to save relatively little and to consume more than domestic production--thus drawing in imports.
Household Production: Levels and Trends
Since the early days of GDP accounting, and in every intro econ class since then, a standard talking-point is that measures of economic output leave out home production. Further, if two neighbors stopped doing home production and instead hired each other to do housework and yardwork, total GDP would rise because those activities were now part of paid market exchange, even though the quantity of housework and yardwork actually done didn't rise. But how much is household production actually worth in the U.S. economy and how has it changed over time? Benjamin Bridgman, Andrew Dugan, Mikhael Lal, Matthew Osborne, and Shaunda Villones tackle this question in "Accounting for Household Production in the National Accounts, 1965–2010," which appears in the May 2012 issue of the Survey of Current Business. (I found this study at Gene Hayward's HaywardEconBlog.) Here are a few points that jumped out at me (footnotes omitted).
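The two-neighbors example can be made concrete with a toy calculation (the $1,000 payment is an arbitrary assumption):

```python
# The classic textbook point: two neighbors hire each other for the same
# chores, and measured GDP rises even though real output is unchanged.
housework_done = 2       # two households' worth of chores, either way
gdp_from_chores = 0      # chores done at home: no market transaction counted

wage = 1000              # hypothetical annual payment between the neighbors
gdp_after_swap = 2 * wage  # each neighbor's payment now counts in GDP

print(f"Chores done: {housework_done} households' worth in both cases")
print(f"Measured GDP from chores: ${gdp_from_chores} before, ${gdp_after_swap} after")
```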
How can one estimate the value of home production?
Get an estimate of hours devoted to home production, and then multiply by the wage that would be paid to domestic labor. "To measure the value of nonmarket services, we make use of two unique surveys that track household labor activities and apply a wage to the total number of hours spent in home production. One of these surveys is the Multinational Time Use Survey (MTUS), which combined a number of time use surveys conducted by academic institutions into a single data set. These surveys were taken sporadically between 1965 and 1999. The other is the American Time Use Survey (ATUS) produced by the Bureau of Labor Statistics (BLS). This survey was taken annually between 2003 and 2010. ...
How does the value of home production relate to GDP?
"We find that incorporating home production in GDP raises the level of GDP 39 percent in 1965 and 25.7 percent in 2010."
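The valuation method, total hours times a domestic-worker wage, can be sketched with stand-in numbers. The 22 weekly hours is the paper's 2010 average; the adult population and the $10 wage are my illustrative assumptions, not the paper's actual inputs:

```python
# A sketch of the hours-times-wage valuation of home production.
population_adults = 240e6   # assumed number of U.S. adults
hours_per_week = 22         # average weekly home-production hours (2010 figure)
domestic_wage = 10.0        # assumed hourly wage for household workers, dollars

annual_value = population_adults * hours_per_week * 52 * domestic_wage
print(f"Implied value of home production: ${annual_value / 1e12:.1f} trillion per year")
```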
Why has the value of home production fallen over time?
Fewer hours spent in home production over time, and the wage of household workers relative to other workers in the economy has fallen. "The impact of home production has dropped over time because women have been entering the workforce. This trend is driven by an increasing trend in the wage disparity between household workers and employees (that is, the opportunity cost of household labor)."
How would including home production in national output alter the growth rate of this expanded definition of GDP over time?
"Because standard GDP does not account for home production, some of the increase over time in GDP will be due to women switching from home production to market-based production. Our adjusted GDP measure includes the unmeasured home production, so the increase in GDP that occurs due to substitution from home production to market-based production will be smaller. During 1965 to 2010, the annual growth rate of nominal GDP was 6.9 percent. When household production is included, this growth rate drops to 6.7 percent."
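As a quick consistency check (the check itself is mine; all four numbers are from the paper as quoted), the shrinking home-production share implies almost exactly the reported drop in the growth rate:

```python
# Does falling home production (39% of GDP in 1965, 25.7% in 2010)
# account for the drop from 6.9% to 6.7% annual growth?
years = 2010 - 1965      # 45 years
market_growth = 1.069    # 6.9% annual nominal GDP growth

# Adjusted GDP is market GDP scaled up by the home-production share, so its
# growth rate folds in the decline of that share over 45 years.
adjusted_growth = market_growth * (1.257 / 1.39) ** (1 / years)
print(f"Implied adjusted growth rate: {adjusted_growth - 1:.1%}")  # ~6.7%
```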
How does time spent in home production vary with income level? How would including home production in output affect the inequality of income?
"We find that home production hours do not vary with family income: for women, who contribute to the bulk of home production hours, the correlation between family income and home production is about 0.01. Therefore, adding home production income to family income is essentially the same as adding a constant number to family income, which will raise the income of low income families proportionately more than high income families, leading to a decrease in inequality. This finding is consistent with earlier work in this literature ..."
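The mechanism in that quote, adding a near-constant amount to every family's income, is easy to see with made-up figures (the incomes below are illustrative, not from the paper):

```python
# Adding a roughly constant home-production value to every family's income
# compresses relative inequality.
low_income, high_income = 25_000, 125_000
home_production_value = 15_000   # assumed roughly the same for every family

ratio_before = high_income / low_income
ratio_after = (high_income + home_production_value) / (low_income + home_production_value)
print(f"Top/bottom income ratio: {ratio_before:.1f} before, {ratio_after:.1f} after")
```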
What are the gender patterns for time spent in home production?
"In 1965, men and women spent an average of 27 hours in home production, and by 2010, they spent 22 hours. This overall decline reflects a drop in women’s home production from 40 hours to 26 hours, which more than offset an increase in men’s hours from 14 hours to 17 hours."
What is the connection between income and hours of household production?
Those with more income tend to spend slightly less time on home production. "Averaged over the years 2003 to 2010, the home production for women (men) in the lowest income category was 32.2 (23.3) hours per week, while in the highest income category it was 26.3 (19.0) hours per week."
Saturday, May 26, 2012
Ignorance as Asset and Strategic Outcome
The February 2012 issue of Economy and Society is a special issue focused on a theme of "Strategic unknowns: towards a sociology of ignorance." The opening essay with this title, by Linsey McGoey, is freely available here. Many academics will have access to the rest of the issue through their library subscriptions.
The central theme of the issue is that ambiguity and ignorance are not just the absence of knowledge, waiting to be illuminated by facts and disclosure. Instead, ambiguity and ignorance are in certain situations the preferred strategic outcome. McGoey writes (citations omitted): "Ignorance is knowledge: that is the starting premise and impetus of the following collection of papers. Together, they contribute to a small but growing literature which explores how different forms of strategic ignorance and social unknowing help both to maintain and to disrupt social and political orders, allowing both governors and the governed to deny awareness of things it is not in their interest to acknowledge ..."
Many of the examples are sociological in nature, but others are based in economic and policy situations. For example, consider a number of situations that have to do with a policy response to risky situations: the risk that smoking causes cancer, the risk that growing carbon emissions will lead to climate change, the risk of future terrorist actions (and whether invading certain countries will increase or reduce those risks), and the risk of fluctuations in financial markets. McGoey writes:
"Within the game of predicting risk, one often wins regardless of whether risks materialize or not. If a predicted threat fails to emerge, the identification of the threat is credited for deterring it. If a predicted threat does emerge, authorities are commended for their foresight. If an unpredicted threat appears, authorities have a right to call for more resources to combat their own earlier ignorance. ‘The beauty of a futuristic vision, of course, is that it does not have to be true’, writes Kaushik Sunder Rajan (2006, p. 121) in a study of the way expectations surrounding new biotechnologies help to create funding opportunities and foster faith in the technology regardless of whether expectations prove true or not. In fact, expectations are often particularly fruitful when they fail to materialize, for more hope and hype are needed to remedy thwarted expectations. Attention to the resilience of risks--the way that claims of risk often feed on their own inaccuracy--helps to highlight the value of conditionality for those in political authority."
One of the essays in the volume, by William Davies and Linsey McGoey, applies this framework to thinking about the recent financial crisis. They point out that many financial professionals begin from the starting point that risk and uncertainty are huge problems, and thus one needs their high-priced help to address these issues. In this way, claims of ambiguity and ignorance are an asset for the finance industry. If the investments go well, then the financial professionals claim credit for steering successfully through these oceans of uncertainty. But when investments and decisions go badly, as in the Great Recession, they claim absolution for their decisions by reiterating just how ambiguous and unclear the financial markets are, and how no one could have really known what was going to happen. And somehow, this just proves that their expertise is more needed than ever. They write: "We examine the usefulness of the failure or refusal to act on warning signs, regardless of the motivations why. We look at the double value of ignorance: the ways that social silence surrounding unsettling facts enabled profitable activities to endure despite unease about their implications and, second, the way earlier silences are then harnessed and mobilized to absolve earlier inaction."
In another essay, Jacqueline Best applies these ideas in the context of the World Bank's "good governance agenda" and the IMF's "conditionality policy." She writes: "Both policies have been ambiguously defined throughout their history, enabling them to be interpreted and applied in different ways. This ambiguity has facilitated the gradual expansion of the scope of the policies. ... Actors at both the IMF and the World Bank were not only aware of the central role of ambiguity in their policies, but were also ambivalent about it. ... Finally, although staff and directors at both institutions may have been ambivalent about the role of ambiguity in these policies, they ultimately ensured that ambiguities persisted and even proliferated." Best also notes that ambiguity is hard to control, and can lead to unintended consequences.
In yet another essay, Steve Rayner writes about "Uncomfortable knowledge: the social construction of ignorance in science and environmental policy discourses." He writes: "My interest is therefore in how information is kept out rather than kept in and my approach is to treat ignorance as a necessary social achievement rather than a simple background failure to acquire, store, and retrieve knowledge." Rayner writes: "An example of clumsy or incompletely theorized arrangements is the implicit consensus on US nuclear energy policy that emerged in the 1980s and persisted for the best part of three decades. Despite the complete absence of any Act of Congress or Presidential Order, it was implicitly accepted by government, industry, and environmental NGOs that the US would continue to support nuclear R&D while operating an informal moratorium on the addition of new nuclear generating capacity. All of the parties agreed to this, but for various reasons, all had a stake in not acknowledging the existence of a settlement."
One might add that many environmental laws and other regulatory policies are chock-full of ambiguous language, which gives regulators the ability to interpret these rules as tough-minded while also giving potential offenders the possibility of saying that they had no way of knowing the rules would be applied in this way. Rayner also offers a nicely provocative claim about tendencies to dismiss and deny in the context of warnings about climate change: "It seems odd that climate science has been held to a `platinum standard' of precision and reliability that goes well beyond anything that is normally required to make significant decisions in either the public or private sectors. Governments have recently gone to war based on much lower-quality intelligence than that which science offers us about climate change. Similarly, firms embark on product launches and mergers on the basis of much lower-quality information."
Academic research of course often uses a feigned ignorance to generate a greater persuasive effect. The title of a research paper is often written in the form of a question, and the theory and data are often presented as if the author was a Solomonic figure encountering this material for the first time, guided only by a disinterested pursuit of Truth (with a capital T). The implications for reputation of past work, or its political implications, are shunted off to the side. Research would have less persuasive effect if it started off by saying, "I've been hammering on this same conclusion for 25 years now, and I find pretty much exactly the same result every time I look at any data set from any time or place--and by the way, this conclusion also supports the political outcomes I prefer."
One of many implications of thinking about ignorance and ambiguity as assets and as strategic behavior is that it highlights that many economic actors and policy-makers have strong incentives to promote both their own ignorance, and more broadly, the idea that ambiguity makes true knowledge impossible. Ignorance can be a power grab, and the basis for a job, and a get-out-of-jail-free card.
McWages Around the World
It's hard to compare wages in different countries, because the details of the job differ. A typical job in a manufacturing facility, for example, is a rather different experience in China, Germany, Michigan, or Brazil. But for about a decade, Orley Ashenfelter has been looking at one set of jobs that are extremely similar across countries--jobs at McDonald's restaurants. He discussed this research and a broader agenda of "Comparing Real Wage Rates" across countries in his Presidential Address last January to the American Economic Association meetings in Chicago. The talk has now been published in the April 2012 issue of the American Economic Review, which will be available to many academics through their library subscription. But the talk is also freely available to the public here as Working Paper #570 from Princeton's Industrial Relations Section.
How do we know that food preparation jobs at McDonald's are similar? Here's Ashenfelter:
"There is a reason that McDonald’s products are similar. These restaurants operate with a standardized protocol for employee work. Food ingredients are delivered to the restaurants and stored in coolers and freezers. The ingredients and food preparation system are specifically designed to differ very little from place to place. Although the skills necessary to handle contracts with suppliers or to manage and select employees may differ among restaurants, the basic food preparation work in each restaurant is highly standardized. Operations are monitored using the 600-page Operations and Training Manual, which covers every aspect of food preparation and includes precise time tables as well as color photographs. ... As a result of the standardization of both the product and the workers’ tasks, international comparisons of wages of McDonald’s crew members are free of interpretation problems stemming from differences in skill content or compensating wage differentials."
Ashenfelter has built up McWages data from about 60 countries. Here is a table of comparisons. The first column shows the hourly wage of a crew member at McDonald's, expressed in U.S. dollars (using the then-current exchange rate). The second column is the wage relative to the U.S. wage level, where the U.S. wage is 1.00. The third column is the price of a Big Mac in that country, again converted to U.S. dollars. And the fourth column is the McWage divided by the price of a Big Mac--as a rough-and-ready way of measuring the buying power of the wage.
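The construction of the table's columns can be sketched in a few lines of code. This is a toy illustration of the index arithmetic only: the country names and dollar figures below are hypothetical placeholders, not Ashenfelter's actual data.

```python
# Toy sketch of the "Big Macs per hour" (BMPH) buying-power measure:
# the hourly McWage divided by the local Big Mac price, both in U.S. dollars.
# All numbers are made-up placeholders, not the actual Table 3 values.
countries = {
    # country: (hourly McWage in USD, Big Mac price in USD)
    "CountryA": (7.00, 3.50),
    "CountryB": (0.80, 1.60),
}

US_WAGE = 7.00  # hypothetical U.S. McWage, the base of the relative-wage index

for name, (wage, big_mac_price) in countries.items():
    relative_wage = wage / US_WAGE  # column 2: wage relative to the U.S. (U.S. = 1.00)
    bmph = wage / big_mac_price     # column 4: Big Macs earned per hour of work
    print(f"{name}: relative wage {relative_wage:.2f}, BMPH {bmph:.2f}")
```

With these placeholder numbers, the developed-country pattern Ashenfelter describes (roughly 2 to 3 Big Macs per hour) would correspond to a BMPH of 2.0 for CountryA, while CountryB's much lower dollar wage translates into only half a Big Mac per hour.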

Ashenfelter sums up this data, and I will put the last line in boldface type: "There are three obvious, dramatic conclusions that it is easy to draw from the comparison of wage rates in Table 3. First, the developed countries, including the US, Canada, Japan, and Western Europe have quite similar wage rates, whether measured in dollars or in BMPH. In these countries a worker earned between 2 and 3 Big Macs per hour of work, and with the exception of Western Europe with its highly regulated wage structure, earned around $7 an hour. A second conclusion is that the vast majority of workers, including those in India, China, Latin America, and the Middle East earned about 10% as much as the workers in developed countries, although the BMPH comparison increases this ratio to about 15%, as would any purchasing-power-price adjustment. Finally, workers in Russia, Eastern Europe, and South Africa face wage rates about 25 to 35% of those in the developed countries, although again the BMPH comparison increases this ratio somewhat. In sum, the data in Table 3 provide transparent and credible evidence that workers doing the same tasks and producing the same output using identical technologies are paid vastly different wage rates."
In passing, it's interesting to note that McWage jobs pay so much more in western Europe than in the U.S., Canada and Japan. But let's pursue the highlighted theme: How can the same job with the same output and the same technology pay more in one country than in another? One part of the answer, of course, is that you can't hire someone in India or South Africa to make you a burger and fries for lunch. But at a deeper level, the higher McWages in high-income countries are not about the skill or human capital in those countries, but instead reflect that the entire economy is operating at a higher productivity level.
Here is an illustrative figure. The horizontal axis shows the "McWage ratio": that is, the U.S. McWage is equal to 1.00, and the McWages in all other countries are expressed in proportion. The vertical axis is "Hourly Output Ratio." This is measuring output per hour worked in the economy, again with the U.S. level set equal to 1.00, and the output per hour worked in all other countries expressed in proportion. The straight line at a 45-degree angle plots the points in which a country with, say, a McWage at 20% of the U.S. level also has output per hour worked at 20% of the U.S. level, a country with a McWage at 50% of the U.S. level also has output per hour worked at 50% of the U.S. level, and so on.

The key lesson of the figure is that the differences in McWages across countries line up with the overall productivity differences across countries. The main exceptions, in the upper right-hand part of the diagram, are countries where the McWage is above U.S. levels but output-per-hour for the economy as a whole is below U.S. levels: New Zealand, Japan, Italy, Germany. These are countries with minimum wage laws that push up the McWage.
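The figure's logic can be sketched as a simple comparison: each country is a point whose coordinates are its McWage ratio and its output-per-hour ratio (both indexed to the U.S. = 1.00), and the 45-degree line marks where wages track productivity exactly. The country names and ratios below are hypothetical placeholders, not the actual data points.

```python
# Sketch of the 45-degree-line comparison. A point on the line has a McWage
# exactly proportional to economy-wide productivity; a point above it has a
# McWage higher than productivity alone would predict (e.g. a binding minimum
# wage). All values here are hypothetical placeholders.
points = {
    # country: (McWage ratio, output-per-hour ratio), U.S. = 1.00
    "CountryX": (0.20, 0.20),  # on the 45-degree line
    "CountryY": (1.10, 0.80),  # above-U.S. McWage, below-U.S. productivity
}

TOLERANCE = 0.05  # how close to the line counts as "on" it, for this sketch

for name, (mcwage_ratio, output_ratio) in points.items():
    gap = mcwage_ratio - output_ratio
    if abs(gap) < TOLERANCE:
        verdict = "on the line: the McWage tracks overall productivity"
    elif gap > 0:
        verdict = "above the line: McWage exceeds what productivity predicts"
    else:
        verdict = "below the line: McWage falls short of what productivity predicts"
    print(f"{name}: {verdict}")
```

In this sketch, CountryY plays the role of the minimum-wage countries in the upper part of the diagram (New Zealand, Japan, Italy, Germany), whose McWages sit above what their economy-wide productivity would predict.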
Ashenfelter emphasizes in his remarks how real wages can be used to assess and compare the living standards of workers. I would add that these measures show that the most important factor determining wages for most of us is not our personal skills and human capital, or our effort and initiative, but whether we are using those skills and human capital in the context of a high-productivity or a low-productivity economy.
Dimensions of U.S. College Attendance
Alan Krueger, chairman of President Obama's Council of Economic Advisers, gave a lecture at Columbia University in late April on "Reversing the Middle Class Jobs Deficit." A certain proportion of the talk is devoted to explaining how all the good economic news is due to Obama's economic policies and how all of Obama's economic policies have benefited the U.S. economy. Readers can evaluate their own personal tolerance for that flavor of exposition. But the figures that accompany such talks are often of independent interest, and in particular, my eye was caught by some figures about U.S. college attendance. (Full disclosure: Alan was editor of my own Journal of Economic Perspectives, and thus my direct boss, from 1996-2002.)
First look at the share of U.S. 55-64 year-olds in 2009 who have a post-secondary degree of some sort. It hovers around 40% of this age group, highest in the world, according to OECD data. Then look at the share of U.S. 25-34 year-olds in 2009 who have a post-secondary degree of some sort. It's also right around 40% for this age group. Although one might expect that a higher proportion of the younger generation would be obtaining post-secondary degrees, this isn't actually true for the United States over the last 30 years. However, it is true for many other countries, and as a result, the U.S. is middle-of-the-pack in post-secondary degrees among the 25-34 age group. This news isn't new--for example, I posted about it in July 2011 here--but it's still striking. It seems to me possible to have doubts about the value and cost of certain aspects of post-secondary education (and I do), but still to be concerned that the U.S. population is falling back among its international competitors on this measure (and I am).


Krueger also points out that the chance of completing a bachelor's degree is strongly affected by the income level of your family. The horizontal axis shows the income distribution divided into fourths. The vertical axis shows the share of those who complete a bachelor's degree by age 25. The lower red line is for those born between 1961-1964--that is, those who started attending college roughly 18 years later in 1979. The upper line is for those born from 1979-1982--that is, those who started attending college in 1998.
Here are a few observations based on this figure:
1) Even for those from top-quartile groups in the more recent time frame, only a little more than half are completing a bachelor's degree by age 25. To put it another way, the four-year college degree has never been the relevant goal for the median U.S. high school student. Given past trends and the current cost of such degrees, it seems implausible to me that the U.S. is going to increase dramatically the share of its population getting a college degree. I've posted at various times about how state and local funding for public higher education is down; about how the U.S. plan for expanding higher education appears to involve handing out more student loans, which then are often used at for-profit institutions with low graduation rates; and about how alternatives to college like certification programs, apprenticeships, and ways of recognizing nonformal and informal learning should be considered.
2) Those from families in lower income quartiles clearly have a much lower chance of finishing a four-year college degree. My guess is that this difference is only partly due to the cost of college, while a major reason for the difference is that those with lower incomes are more likely to attend schools and to come from family backgrounds that aren't preparing them to attend college. Moreover, the gap in college attendance between those from lower- and higher-income families hasn't changed much over the two decades between the lower and the higher line in the figure, so whatever we've been doing to close the gap doesn't seem to be working.
3) It's a safe bet that many of those in the top quarter are families where the parents are college graduates, supporting and pushing their children to be college graduates. It's also a safe bet that many of those in the bottom quarter are families where the parents are not college graduates, and their children are not getting the support of all kinds that they need to become college graduates. In this way, it seems likely that college education is serving a substantial role in causing inequality of incomes to pass from one generation to the next. Krueger has referred to this pattern of high income inequality at one time leading to high inequality in the future as the "Great Gatsby Curve," as I described here.
Lemley on Fixing the U.S. Patent System
Mark Lemley has written "Fixing the Patent Office" for SIEPR, the Stanford Institute for Economic Policy Research (Discussion Paper No. 11-014, published May 21, 2012). Lemley has an interesting starting point for thinking about the U.S. patent system. He writes (footnotes omitted):
"Most patents don’t matter. They claim technologies that ultimately failed in the marketplace. They protect a firm from competitors who for other reasons failed to materialize. They were acquired merely to signal investors that the relevant firm has intellectual assets. Or they were lottery tickets filed on the speculation that a given industry or invention would take off. Those patents will never be licensed, never be asserted in negotiation or litigation, and thus spending additional resources to examine them would yield few benefits."
"Some bad patents, however, are more pernicious. They award legal rights that are far broader than what their relevant inventors actually invented, and they do so with respect to technologies that turn out to be economically significant. Many Internet patents fall into this category. Rarely a month goes by that some unknown patent holder does not surface and claim to be the true inventor of eBay or the first to come up with now‐familiar concepts like hyperlinking and e‐commerce. While some such Internet patents may be valid--someone did invent those things, after all--more often the people asserting the patents actually invented something much more modest. But they persuaded the Patent Office to give them rights that are broader than what they actually invented, imposing an implicit tax on consumers and thwarting truly innovative companies who do or would pioneer those fields.Long-time devotees of my own Journal of Economic Perspectives may recognize this argument, because it is similar to what Lemley argued with co-author Carl Shapiro in "Probabilistic Patents" in the Spring 2005 issue. (JEP articles are freely available to all courtesy of the American Economic Association.) As Lemley argues, the problems of the patent system aren't as simple as taking longer to examine patent applications, hiring more patent examiners, or being more stingy in granting patents. Instead, the goal should be to give greater the question attention to patents that are likely to end up being more important. How might this be done?
"Compounding the problem, bad patents are too hard to overturn. Courts require a defendant to provide “clear and convincing evidence” to invalidate an issued patent. In essence, courts presume that the Patent Office has already done a good job of screening out bad patents. Given what we know about patents in force today, that is almost certainly a bad assumption."
"The problem, then, is not that the Patent Office issues a large number of bad patents. Rather, it is that the Patent Office issues a small but worrisome number of economically significant bad patents and those patents enjoy a strong, but undeserved, presumption of validity."
One approach is to give patent applicants a method of signalling whether they believe the patent will be important. The idea here is that patent applicants can apply under the current system, in which case their patent would have only the usual legal presumption in its favor if challenged in court, or they can pay a substantial amount extra for a more exhaustive patent examination, which would have a much stronger presumption in its favor if challenged in court. Lemley writes:
"[A]pplicants should be allowed to “gold plate” their patents by paying for the kind of searching review that would merit a strong presumption of validity. An applicant who chooses not to pay could still get a patent. That patent, however, would be subject to serious—maybe even de novo—review in the event of litigation. Most likely, applicants would pay for serious review with respect to their most important patents but conserve resources on their more speculative entries. That would allow the Patent Office to focus its resources on those self-selected patents, thus benefiting from the signal given by the applicant’s own self‐interested choice. The Obama campaign proposed this sort of tiered review, and the PTO [Patent and Trademark Office] has recently implemented a scaled‐down version, in which applicants can choose the speed but not the intensity of review.Adoption has been significant but modest ... [I]t appears to be performing its intended function of distinguishing some urgent applications from the rest of the pack."
Another approach would be to allow other parties to pay a substantial fee to the Patent Office to re-examine the grounds for a recently granted patent. Lemley again:
"Post‐grant opposition is a process by which parties other than the applicant have the opportunity to request and fund a thorough examination of a recently issued patent. A patent that survives collateral attack should earn a presumption of validity ... [P]ost‐grant opposition is attractive because it harnesses private information; this time, information in the hands of competitors. It thus helps the PTO to identify patents that warrant serious review, and it also makes that review less expensive by creating a mechanism by which competitors can share critical information directly with the PTO. A post‐grant opposition system is part of the new America Invents Act, but it won’t begin to apply for another several years, and the new system will be unavailable to many competitors because of the short time limits for filing an opposition. ... But the evidence from operation of similar systems in Europe is encouraging."
Finally, the traditional way to focus on the 1-2% of patents that really matter, and where the parties can't agree, is to litigate. Lemley argues that such litigation will continue to be quite important, and that the underlying legal doctrine should acknowledge that many patents do not deserve a strong presumption of validity--unless it has been earned through an especially exhaustive process at the Patent and Trademark Office. Lemley one more time:
"[W]e will continue to rely on litigation for the foreseeable future as a primary means for weeding out bad patents. Litigation elicits information from both patentees and competitors through the adversarial process, which is far superior to even the best‐intentioned government bureaucracy as a mechanism for finding truth. More important, litigation is focused on the very few patents (1-2 percent) that turn out to be important and about which parties cannot agree in a business transaction. Litigation can be abused, and examples of patent litigation abuse have been rampant in the last two decades. But a variety of reforms have started to bring that problem under control,
and the courts have the means to continue that process. ... Courts could modulate the presumption of validity for issued patents. A presumption like that embraced by the current “clear and convincing” standard must be earned, and under current rules patent applicants do not earn it. ... The current presumption is so wooden that courts today assume a patent is valid even against evidence that the patent examiner never saw, much less considered, a rule that makes no sense."
None of this is to say that it doesn't make sense to rethink training and expectations for patent examiners themselves, and Lemley has some interesting evidence about how patent examiners tend to turn down fewer patents the longer they are on the job, and how they often rely on the background material that they personally gather, rather than on background collected by others--including others in the patent office itself. But the idea that patent reform shouldn't focus on trying to review every application exhaustively, but instead on how to give greater attention to the applications that have real-world importance, seems to me a highly useful insight.
Is Wikipedia Politically Biased?
Wikipedia aspires to a neutral point of view. How well does it succeed? Shane Greenstein and Feng Zhu tackle this question in the May 2012 issue of the American Economic Review. (The article is not freely available, but many academics will have access through their library websites.) They conclude:
"To summarize, the average old political article in Wikipedia leans Democratic. Gradually, Wikipedia’s articles have lost that disproportionate use of Democratic phrases, moving to nearly equivalent use of words from both parties, akin to an NPOV [neutral point of view] on average. The number of recent articles far outweighs the number of older articles, so, by the last date, Wikipedia’s articles appear to be centered close to a middle point on average. Though the evidence is not definitive about the causes of change, the extant patterns suggest that the general tendency toward more neutrality in Wikipedia’s political articles largely does not arise from revision. There is a weak tendency for articles to become less biased over time. Instead, the overall change arises from the entry of later vintages of articles with an opposite point of view from earlier articles."
How do they reach this conclusion? Greenstein and Zhu focus on entries that bear on topics of importance in U.S. politics; in particular, they begin by selecting all articles in January 2011 that include "republican" or "democrat" as keywords. This procedure generates about 111,000 articles, and when they have dropped the articles that aren't about U.S. politics, they have about 70,000 articles remaining.
They then rely on a process from earlier research, which selects "1,000 phrases based on the number of times these phrases appear in the text of the 2005 Congressional Record, applying statistical methods to identify phrases that separate Democratic representatives from Republican representatives, under the model that each group speaks to its respective constituents with a distinct set of coded language. In brief, we ask whether a given Wikipedia article uses phrases favored more by Republican members or by Democratic members of Congress."
Some of their 70,000 articles don't include any of these phrases, and so can't be evaluated by this method. For the 28,000 articles they can evaluate, they find on average a Democratic slant. "[W]hen they have a measured slant, articles about civil rights tend to have a Democrat slant (-0.16), while the topic of trade tends to have a Republican slant (0.06). At the same time, many seemingly controversial topics such as foreign policy, war and peace, and abortion are centered at zero [that is, no slant]."
They then look back at the earlier revisions of their 70,000 articles, and to keep the numbers manageable, when an article has more than 10 revisions they look only at 10. This gives them 647,000 entries, but again many of them don't use any of the key phrases, leaving 237,000 that do include some of those phrases. They find that older revisions tend to lean more Democratic, while newer revisions and newer entries are more balanced.
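The phrase-counting approach can be illustrated with a toy sketch. The phrase lists and weights below are hypothetical stand-ins (the actual Gentzkow-Shapiro method that Greenstein and Zhu build on estimates a slant value for each phrase from the 2005 Congressional Record); the sketch only shows the general mechanics of scoring a text by the party-coded phrases it uses, and returning no score when none appear:

```python
# Toy illustration of a phrase-based slant score.
# Phrase lists and weights are hypothetical stand-ins; the actual
# method estimates per-phrase slant from the Congressional Record.

# Negative weights mark phrases favored by Democratic members of
# Congress; positive weights mark phrases favored by Republicans.
PHRASE_WEIGHTS = {
    "civil rights": -1.0,
    "minimum wage": -1.0,
    "tax relief": 1.0,
    "death tax": 1.0,
}

def slant(text):
    """Average slant weight over all coded-phrase occurrences.

    Returns None when the text contains no coded phrases, mirroring
    the articles in the study that could not be evaluated.
    """
    text = text.lower()
    hits = []
    for phrase, weight in PHRASE_WEIGHTS.items():
        hits.extend([weight] * text.count(phrase))
    if not hits:
        return None
    return sum(hits) / len(hits)

article = "The bill combined tax relief with a higher minimum wage."
print(slant(article))  # one phrase from each side -> 0.0
```

A real implementation would also need the keyword filtering step (selecting articles mentioning "republican" or "democrat") and a much larger weighted phrase list, but the averaging logic is the same.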
Wikipedia is in many ways an extraordinary success. Greenstein and Zhu write:
"To summarize, the average old political article in Wikipedia leans Democratic. Gradually, Wikipedia’s articles have lost that disproportionate use of Democratic phrases, moving to nearly equivalent use of words from both parties, akin to an NPOV [neutral point of view] on average. The number of recent articles far outweighs the number of older articles, so, by the last date, Wikipedia’s articles appear to be centered close to a middle point on average. Though the evidence is not definitive about the causes of change, the extant patterns suggest that the general tendency toward more neutrality in Wikipedia’s political articles largely does not arise from revision. There is a weak tendency for articles to become less biased over time. Instead, the overall change arises from the entry of later vintages of articles with an opposite point of view from earlier articles."
How do they reach this conclusion? Greenstein and Zhu focus on entries that bear on topics of importance in U.S. politics; in particular, they begin by selecting all articles in January 2011 that include "republican" or "democrat" as keywords. This procedure generates about 111,000 articles, and when they have dropped the articles that aren't about U.S. politics, they have about 70,000 articles remaining.
They then rely on a process from earlier research, which selects "1,000 phrases based on the number of times these phrases appear in the text of the 2005 Congressional Record, applying statistical methods to identify phrases that separate Democratic representatives from Republican representatives, under the model that each group speaks to its respective constituents with a distinct set of coded language. In brief, we ask whether a given Wikipedia article uses phrases favored more by Republican members or by Democratic members of Congress."
Some of their 70,000 articles don't include any of these phrases, and so can't be evaluated by this method. For the 28,000 article they can evaluate, they find on average a Democratic slant. "[W]hen they have a measured slant, articles about civil rights tend to have a Democrat slant (-0.16), while the topic of trade tends to have a Republican slant (0.06). At the same time, many seemingly controversial topics such as foreign policy, war and peace, and abortion are centered at zero [that is, no slant]."
They then look back at the earlier revisions of their 70,000 articles, and to keep the numbers manageable, when an article has more than 10 revisions they look only at 10. This gives them 647,000 entries, but again many of them don't use any of the key phrases, leaving 237,000 that do include some of those phrases. They find that older revisions tend to lean more Democratic, while newer revisions and newer entries are more balanced.
Wikipedia is in many ways an extraordinary success. Greenstein and Zhu write:
"As the largest wiki ever and one of the most popular websites in the world, Wikipedia accommodates a skyrocketing number of contributors and readers. At the end of 2011, after approximately a decade of production, Wikipedia supports 3.8 million articles in English and well over twenty million articles in all languages, and it produces and hosts content that four hundreds of millions of readers view each month. Every ranking places Wikipedia as the fifth or sixth most visited website in the United States, behind Google, Facebook, Yahoo!, YouTube, and, perhaps, eBay. In most countries with unrestricted and developed Internet sectors, Wikipedia ranks among the top ten websites visited by households."Any semi-serious researcher (and here I include junior-high-school students) knows that while Wikipedia can be a useful starting point, it should never be an endpoint. Instead, it can serve as a useful shortcut to finding links to other sources. But the Greenstein and Zhu evidence suggests that Wikipedia on average has found a reasonable level of political balance--although you may need to read a few related entries on the same broad topic to achieve it.