Cato Op-Eds

Individual Liberty, Free Markets, and Peace

Jeffrey Herbst, the President and CEO of the Newseum, recently released a report about free speech on campus. It is brief and well worth reading.

Herbst believes we are missing the major problem exposed by recent attacks on free speech at universities.

Systematic public opinion polling and anecdotal evidence suggests, however, that the real problem of free expression on college campuses is much deeper than episodic moments of censorship: With little comment, an alternate understanding of the First Amendment has emerged among young people that can be called “the right to non-offensive speech.” This perspective essentially carves out an exception to the right of free speech by trying to prevent expression that is seen as particularly offensive to an identifiable group, especially if that collective is defined in terms of race, ethnicity, gender, or sexual identity. The crisis is not one of the very occasional speaker thrown off campus, however regrettable that is; rather, it is a generation that increasingly censors itself and others, largely silently but sometimes through active protest.

Many people believe university students have adopted a “right to non-offensive speech” under the influence of their left-wing professors who are hostile to libertarian values. But Herbst shows that high school students and their teachers are equally doubtful about protecting speech that offends. He notes, “young adults come to campus with some fairly well-developed views that explain much of what subsequently occurs as they confront challenging speech.”

Jeffrey Herbst notes that young people support free speech in theory but not, as we have seen with Murray and others, in particular cases. In the past, polls showed that while the First Amendment in the abstract received near unanimous support, its applications to unpopular speakers sometimes failed to attract a majority. Maybe the boomers were different, and young people now are returning—ironically enough—to views held by pre-boomers.

Herbst shows that millennials in general are less supportive of free speech than older cohorts. I would like to see whether this pattern holds after controlling for age. Were baby boomers less supportive of free speech in 1974? If so, people may grow out of intolerance. For purposes of argument, let’s assume that in the past people became more tolerant with age. Perhaps the millennials will follow that path too. But might the world have changed? Might some factor now exist that could preclude millennials from following the normal path of increasing toleration and greater support for free speech?

Maybe. Jeffrey Herbst argues that early education now fosters illiberalism:

The approach to diversity in many elementary and secondary schools seems to be little more than ‘Don’t say things that could hurt others.’ While this might be very good life advice, students have come to interpret it as curtailing the First Amendment.

What can be done to counteract this trend? The libertarian answer to most free speech problems is always: more speech. Notice, however, that education is different from most speech situations. In a normal speech situation, two people speak and argue about a topic, and neither has authority about that topic if we understand authority as a presumption of being correct. Teachers, especially teachers of children, do have such authority. And advocates of free speech cannot simply interpose themselves and their arguments between teacher and student. By the time students enter the university (which can approximate the normal speech situation), they apparently have learned to be illiberal in pursuit of “niceness.”

Private schools are another answer to this problem. If most parents want a genuinely liberal education for their children, the authority of teachers will inculcate a respect for free speech even if it offends. But what if parents value virtue or social justice more than free speech? The children of those parents may become illiberal. What then? Of course, for now private schools could be only part of the answer to our problem, even if all such schools were libertarian in outlook.

We need teachers who support free speech, or, more specifically, teachers who see free speech and diversity as compatible rather than as values in conflict that should be reconciled by limiting speech. Professors and public intellectuals should be working on that reconciliation while defending a strong view of freedom of speech. For example, Cato’s Flemming Rose is thinking hard about the importance of free speech in a multicultural society.

One final point. We live in a world too defined by partisanship and closed minds. Progressives may doubt the case for free speech when it is made by people who otherwise doubt progressivism. On the other hand, progressives who defend free speech will have real authority with those who doubt free speech but are otherwise progressive. The world being what it is, the future of free speech depends crucially on progressive advocates of the First Amendment. But not just them. Perhaps, though, especially them.

Law professor Catherine Ross is a leading progressive advocate of free speech. She explores the challenges to the First Amendment in schools in Lessons in Censorship. You can see Cato’s forum on her book here.

Robert F. Bauer is an important progressive defender of free speech. You can read his thoughtful blog here.

The world lost a great champion of liberty with the passing of Allan Meltzer, a longtime Professor of Political Economy at Carnegie Mellon University.  Allan was a prodigious worker who wrote hundreds of articles and more than ten books, including his monumental A History of the Federal Reserve and more recently Why Capitalism?  The latter provides a strong defense of limited government, the rule of law, private property and free markets, which he saw as the surest means to increase the wealth of nations.

A Passion for Ideas and Policy

Allan had a passion for ideas and a desire to influence policy; he sought to make the world a better place by safeguarding economic and personal freedom. He became a major player in the marketplace for ideas — writing, teaching, advising policymakers, serving on editorial boards, co-founding the Shadow Open Market Committee with his close colleague and lifelong friend Karl Brunner, acting as president of the Mont Pelerin Society founded by F. A. Hayek, chairing the International Financial Institution Advisory Commission (also known as the “Meltzer Commission”), and participating in numerous conferences. He continued working right up until his death on May 8, at the age of 89.

A Giant in Monetary Economics

I first met Allan in the early 1980s, when he began to participate in Cato’s Annual Monetary Conference. His paper “Monetary Reform in an Uncertain Environment” was delivered at the first conference, in January 1983, and published in the Cato Journal later that year; it was reprinted in The Search for Stable Money (University of Chicago Press, 1987), a book I co-edited with Anna J. Schwartz.

In that article, Allan examined alternative monetary regimes and their implications for reducing risk and uncertainty. He sought a rule-based regime that would minimize uncertainty and best allow markets to flourish. He preferred, at the time, a quantity rule that would have the monetary base grow in line with the growth of real output adjusted for changes in the velocity of base money. Such a rule, he argued, would anchor expectations regarding the path of nominal income and achieve long-run price stability. However, the rule had to be credible and be supplemented with a fiscal rule that limited the taxing and spending powers of government. He did not want the Fed to finance government deficits or to allocate credit.

It is important to note that Allan was not opposed to private money. At the 1993 monetary conference, and in the paper he presented there, he held that

individuals or groups should be permitted to issue and use privately produced money or monies…. The objective of policy rules is to reduce the uncertainty that the community must bear, not to prevent voluntary risk taking.

Allan was open-minded and was willing to change his policy advice based on logic and evidence.

He continued to participate in Cato’s Annual Monetary Conference for many years and contributed 15 articles to the Cato Journal (see Table 1). Although he was often critical of Fed policy, he thought Paul Volcker was correct in ending double-digit inflation by slowing the growth of money and credit, and that Alan Greenspan was correct in following an implicit monetary rule to prevent wide fluctuations in nominal income during the “Great Moderation.”

Meltzer, however, was highly critical of the Fed’s unconventional monetary policy and wrote in the Spring/Summer 2012 Cato Journal:

Overresponse to short-run events and neglect of longer-term consequences of its actions is one of the main errors that the Federal Reserve makes repeatedly. The current recession offers many examples of actions that some characterize as bold and innovative. I regard many of these actions as inappropriate for an allegedly independent central bank because they involve credit allocation, fill the Fed’s portfolio with an unprecedented volume of long-term assets, evade or neglect the dual mandate, distort the credit markets, and initiate other actions that are not the responsibility of a central bank.

He kept up his criticism until the end, writing articles for the Hoover Institution, where he was a distinguished senior fellow, with such titles as “Fed Up with the Fed” (Defining Ideas, February 17, 2016), “Fed Failures” (March 9, 2016), and “Reform the Federal Reserve” (October 12, 2016). His last article in Hoover’s online journal appeared on April 25, less than two weeks before he died.

The last time I saw Allan was in Switzerland, in September 2016, where we had enjoyed many discussions at Karl Brunner’s Interlaken Seminar on Analysis and Ideology. He was in Zurich for an event, sponsored by the Swiss National Bank, commemorating the 100th anniversary of Karl’s birth, and to deliver a paper discussing Karl’s many contributions to monetary theory as well as to political economy in general. In his paper, “Karl Brunner, Scholar: An Appreciation,” he emphasized that Karl

highlighted information, institutions and uncertainty as well as the importance of microanalysis in macroeconomics. Karl Brunner explained that nominal monetary impulses changed real variables by changing the relative price of assets to output prices. And he concluded that economic fluctuations occurred because of an unstable public sector — especially the monetary sector — that disturbs a more stable private sector, a policy lesson forgotten or never learned by many central banks.

Those ideas also were central to Allan’s work — both with Karl and independently — and they are evident in his interpretation of Keynes’s monetary theory.

John Maynard Keynes and Meltzer’s Monetary Rule

In a careful study of John Maynard Keynes’s writings, Meltzer argues that the vast literature on Keynes neglected the importance he placed on credible rules, which he thought would reduce uncertainty and improve economic welfare (see Keynes’s Monetary Theory: A Different Interpretation, Cambridge University Press, 1988).[i]

In particular, Allan was influenced by Keynes’s classic A Tract on Monetary Reform (1923), which discusses rules for domestic (internal) price stability and for international (external) price stability — that is, exchange rate stability. In thinking about a rule to reduce the variability of unanticipated changes in prices and outputs, Meltzer ([1987] 1989: 78–81) draws on Keynes’s distinction and his recognition of the benefits of reducing both internal and external instability.[ii] The problem, of course, is to choose the appropriate institutional framework. Countries operating independently cannot achieve both internal and external stability, argued Keynes, unless a key country anchors its price level by enforcing a credible rule.

Building upon Keynes’s insights, Meltzer (p. 78) notes that if each major trading partner makes domestic price stability a priority, then uncertainty about the future path of prices will diminish and exchange rates among the partners will be more stable. To realize both internal and external stability, Meltzer proposes a simple rule: each major country should set “the rate of growth of the monetary base equal to the difference between the moving average of past real output growth and past growth in base velocity” (p. 83). If each country complies, the rule will reduce the “variability of exchange rates arising from differences in expected rates of inflation.”[iii]

Meltzer’s proposed rule is “forecast free” and adaptable; it is mildly activist but nondiscretionary, similar to Bennett McCallum’s monetary rule.[iv] Because the rule stabilizes the anticipated price level rather than the actual price level, there is no need “to reverse all changes in the price level,” argues Meltzer (1989: 79). Instead, the actual price level is allowed “to adjust as part of the process by which the economy adjusts real values to unanticipated supply shocks.” In other words, Meltzer’s monetary rule “does not adjust to short-term, transitory changes in level, but it adjusts fully to permanent changes in growth rates of output and intermediation (or other changes in the growth rate of velocity) within the term chosen for the moving averages” (p. 81).
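To make the rule’s arithmetic concrete, here is a minimal Python sketch. The three-year averaging window and the sample growth rates are invented for illustration; Meltzer left the term of the moving averages open, and nothing below should be read as his preferred calibration.

    # A sketch of Meltzer's base-growth rule, with hypothetical inputs.
    # Base growth = moving average of past real output growth
    #             - moving average of past base-velocity growth,
    # so that expected nominal income, and hence the price level, stays on path.

    def moving_avg(series, window=3):
        """Average of the last `window` observations (fewer if unavailable)."""
        tail = series[-window:]
        return sum(tail) / len(tail)

    def meltzer_base_growth(output_growth, velocity_growth, window=3):
        """Prescribed growth rate of the monetary base, percent per year."""
        return moving_avg(output_growth, window) - moving_avg(velocity_growth, window)

    real_output_growth = [3.0, 2.5, 3.2]    # invented annual data, percent
    base_velocity_growth = [0.5, 1.0, 0.3]  # invented annual data, percent

    print(round(meltzer_base_growth(real_output_growth, base_velocity_growth), 2))
    # -> 2.3: base money accommodates output growth net of velocity growth

Because the prescription responds only to moving averages, it ignores transitory blips but adapts fully, with a lag, to permanent changes in output or velocity growth, which is just the “forecast free” adaptability described above.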

In the current environment — with the Fed paying interest on excess reserves (in excess of what banks can get on highly liquid assets), no competitive fed funds market, banks subject to uncertainty about future monetary policy and complex macro-prudential regulations, and depositors holding more cash due to ultralow interest rates — Meltzer’s monetary rule would be severely constrained. The link between base money and broader monetary aggregates has weakened significantly since the 2008 financial crisis and the Fed’s unconventional monetary policy.

Before serious consideration can be given to implementing any rule-based monetary regime, the Fed needs to normalize monetary policy by ending interest on excess reserves and shrinking its balance sheet to restore a pre-crisis fed funds market. Once changes in base money can be effectively transmitted to changes in the money supply and nominal income, Meltzer’s rule would reduce uncertainty and spur investment and growth.

The key point, however, is that Allan wanted to explore alternative monetary rules and select those he thought would work best to reduce the variability of prices and output. That comparative-institutions approach was evident in all his work. But he recognized that, ultimately, the choice of a rule would be heavily influenced by the political economy. His careful scholarship was intended to help shape the climate of ideas and public policy in the direction of what Richard Epstein has called “simple rules for a complex world.”[v]

A Breadth of Knowledge

Although Allan was known primarily for his work on monetary theory and history, he was deeply interested in the role of government in a free society; the relation between institutions, incentives, and behavior; the determinants of economic growth; the theory of public choice; the damaging effects of official foreign aid; and the distribution of income.[vi] He wrote many articles for the popular press, including the Wall Street Journal, Los Angeles Times, and Financial Times, and he was always willing to help younger scholars and students understand the complexities of political economy.

A Man of Integrity

Allan Meltzer was a great scholar and teacher, a friend of liberty, a man of integrity who kept his word, and a fine human being. He was persistent in his research and his life. Allan taught at Carnegie Mellon for 60 years and was married to his lovely wife Marilyn for 67 years.

When Allan was five years old, he lost his mother and went to live with his grandmother for several years, until he moved to Los Angeles, where his family ran a business. Reflecting on his grandmother’s influence, Allan said, “Her most important influence on my career and my outlook was her strongly held belief that, in America (and only in America), there were no real limits other than ability to what one could achieve by personal effort.”[vii]

With his many accomplishments and honors, Allan certainly realized the American Dream. His was a life well lived.[viii] He will be sorely missed, but his work will live on.

_________________

TABLE 1: Allan H. Meltzer’s Articles in the Cato Journal
  1. “Monetary Reform in an Uncertain Environment,” Cato Journal 3 (1), Spring/Summer 1983. Reprinted in J. A. Dorn and A. J. Schwartz (eds.) The Search for Stable Money, University of Chicago Press (1987).
  2. “The International Debt Problem,” Cato Journal 4 (1), Spring/Summer 1984.
  3. “Monetary and Exchange Rate Regimes: A Comparison of Japan and the United States,” Cato Journal 6 (2), Fall 1986.
  4. Comment on “Can Monetary Disequilibrium Be Eliminated?” Cato Journal 9 (2), Fall 1989.
  5. “Some Empirical Findings on Differences between EMS and Non-EMS Regimes: Implications for Currency Blocs,” Cato Journal 10 (2), Fall 1990.
  6. “Karl Brunner: In Memoriam,” Cato Journal 12 (1), Spring/Summer 1992.
  7. “Benefits and Costs of Currency Boards,” Cato Journal 12 (3), Winter 1993.
  8. “Asian Problems and the IMF,” Cato Journal 17 (3): 267–74.
  9. “Monetary Policy in the New Global Economy: The Case of Japan,” Cato Journal 20 (1), Spring/Summer 2000.
  10. “Argentina 2002: A Case of Government Failure,” Cato Journal 23 (1), Spring/Summer 2003.
  11. “A Monetary History as a Model for Historians,” Cato Journal 23 (3), Winter 2004.
  12. “New Mandates for the IMF and World Bank,” Cato Journal 25 (1), Winter 2005.
  13. “Learning about Policy from Federal Reserve History,” Cato Journal 30 (2), 2010.
  14. “Federal Reserve Policy in the Great Recession,” Cato Journal 32 (2), Spring/Summer 2012.
  15. “What’s Wrong with the Fed? What Would Restore Independence?” Cato Journal 33 (3), Fall 2013.

_______________________

[i] When Allan’s book was still being drafted, I organized a conference in October 1986, sponsored by the Liberty Fund, which took place in San Francisco and brought together a number of leading monetary scholars to critique Allan’s arguments and help facilitate completion of his book. Participants included Milton Friedman, Anna Schwartz, Karl Brunner, Leland Yeager, David Laidler, John Whitaker, Lawrence H. White, and Axel Leijonhufvud.

[ii] A. H. Meltzer, “On Monetary Stability and Monetary Reform,” in J. A. Dorn and W. A. Niskanen (eds.) Dollars, Deficits, and Trade, 63–85. Boston: Kluwer (1989). This paper was originally presented at the Third International Conference of the Institute for Monetary and Economic Studies at the Bank of Japan, June 3, 1987.

[iii] Meltzer’s proposal is similar to Brunner’s call for a “club of financial stability.”  See K. Brunner, “Policy Coordination and the Dollar,” Shadow Open Market Committee: Policy Statement and Position Papers (PPS 87-01), 49–51. Center for Research in Government Policy & Business, University of Rochester, March 1987.

[iv] See B. T. McCallum, “Monetarist Rules in the Light of Recent Experience,” American Economic Review 74 (May 1984): 388–96.

[v] See R. A. Epstein, Simple Rules for a Complex World, Cambridge, Mass.: Harvard University Press, 1995.

[vi] Meltzer viewed economics as “a policy science, not a branch of applied mathematics.”  He argued that “economics will be poorer if it does not include institutions and the incentives embodied in the rules, institutions or arrangements that we call society.”  See A. H. Meltzer, “My Life Philosophy,” The American Economist 34 (1), Spring 1990, p. 27.

[vii] Ibid., p. 22.

[viii]  Meltzer’s many honors include: Distinguished Fellow, American Economic Association; Irving Kristol Award, American Enterprise Institute; Distinguished Professional Achievement Medal, UCLA; The Adam Smith Award, National Association for Business Economics; The Bradley Foundation Award; The Harry Truman Award for Public Policy; and the Distinguished Teacher Award, International Mensa Foundation.

[Cross-posted from Alt-M.org]

Former House Ways and Means Committee staffer Joanne Butler wrote a recent piece calling for greater use of E-Verify to fight illegal immigration. Like other pieces advocating for the massive expansion of this government-run employment verification program, Butler’s presents a rosy view of E-Verify that is at odds with the reality. E-Verify remains an ineffective program that promises much, accomplishes little, and is dangerous to citizens and non-citizens alike.

E-Verify is still based on Reagan-era employment verification forms. After collecting I-9 employment eligibility forms from employees, the employer enters the information into a government website. The system compares these data with information held in Social Security Administration (SSA) and Department of Homeland Security (DHS) databases. SSA data are used to check the validity of the Social Security number, while DHS data are used to check immigration status.

If both databases find the data valid, the system approves the employee for work. A flag raised by either database returns a tentative non-confirmation that requires the employee and employer to sort out the error. These errors can range from the simple (a misspelled name) to the complex (such as the system flagging a Social Security number as fake or already in use). The employer and the employee must correct these errors, eating up valuable labor hours and resources. The current I-9 form costs employers an estimated 13.48 million man-hours each year, and 46.5 percent of contested E-Verify cases took longer than eight working days to resolve. A hypothetical nationwide E-Verify mandate would sacrifice many millions more work hours on the altar of immigration enforcement.

E-Verify’s errors and inaccuracies are far too frequent and notoriously difficult to measure. The last major survey of E-Verify’s accuracy was published in 2012. According to that survey, 54 percent of unauthorized workers were incorrectly found to be work authorized because E-Verify relies on documents presented by the workers themselves. This makes E-Verify easy to fool: the system checks the validity of documents but does little to check their veracity.

For example, a blatantly false Social Security number, such as one with the wrong number of digits or an impossible combination of numbers, will be flagged. However, an unauthorized worker using a valid number that was illicitly acquired will not be flagged by the system. The system will not question why a 44-year-old man in California is using, say, a Social Security number issued to a 5-year-old girl born in Texas, or one of the 6.5 million numbers attached to Americans 112 years old or older who are not recorded as deceased. Any documents acquired with the valid but illicit number will also fail to trigger the system.
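To make the validity/veracity distinction concrete, here is a toy Python sketch, written for this post rather than drawn from E-Verify itself. The structural rules it applies (no area number 000, 666, or 900–999; no group 00; no serial 0000) are the Social Security Administration’s, and the sample numbers are made up:

    import re

    def is_plausible_ssn(ssn: str) -> bool:
        """Format check only: rejects structurally impossible SSNs."""
        match = re.fullmatch(r"(\d{3})-(\d{2})-(\d{4})", ssn)
        if not match:
            return False                      # wrong shape or number of digits
        area, group, serial = match.groups()
        if area in ("000", "666") or area.startswith("9"):
            return False                      # never-issued area numbers
        return group != "00" and serial != "0000"

    print(is_plausible_ssn("000-12-3456"))  # False: blatantly false, gets flagged
    print(is_plausible_ssn("219-09-9999"))  # True: a borrowed real number with a
                                            # plausible format passes unnoticed

No amount of format checking can reveal that a structurally valid number belongs to someone else; that information lives outside the documents being checked.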

E-Verify also blocks some legal workers. Around 0.15 percent of E-Verify queries result in a false final non-confirmation, locking legal citizens and permanent residents out of work. An SSN “lockdown” feature only makes things worse, as the Social Security Administration can lock a number used multiple times in the system. An American could find their Social Security number “locked” and have to go through a long process to unlock it. The valid number holder is forced to bear the burden of proving to the government that they are who they say they are.

Even more absurd, about half of all new hires in states that mandate E-Verify for all new hires are never run through the system. Enforcing E-Verify is about as difficult and expensive as enforcing the current I-9 system, except that it adds another layer of bureaucracy for employers and employees to overcome. Arizona, Mississippi, South Carolina, and Alabama are all states committed to immigration enforcement. If they cannot make mandatory E-Verify truly mandatory, there is little hope that the federal government could do so nationwide.

Finally, E-Verify is expensive. Expanding E-Verify through a nationwide mandate would, per the CBO, cost the federal government $635 million from 2018 to 2023. It would additionally impose $10 million in annual costs on state and local governments, as well as at least $200 million in costs to the private sector as employers struggle to verify millions of employees. All that cost for a system that doesn’t even work very well — what a bad deal.

While there have been thousands of legacy media stories about the very real decline in summer sea-ice extent in the Arctic Ocean, we can’t find one about the statistically significant increase in Antarctic sea ice that has been observed over the same period.

Comparisons between forecast trends down there and what has actually been observed are also few and far between. Here’s one published in 2015:

Observed (blue) and model-forecast (red) Antarctic sea-ice extent published by Shu et al. (2015) shows a large and growing discrepancy, but for unknown reasons, their illustration ends in 2005.

For those who rely on and trust the scientific method, forming policy (especially multi-trillion-dollar policies!) on the basis of what could or might happen in the future seems imprudent. Sound policy, in contrast, is best formulated when it is based upon repeated and verifiable observations that are consistent with the projections of climate models. As shown above, this does not appear to be the case with the vast ice field that surrounds Antarctica.

According to the most recent report by the Intergovernmental Panel on Climate Change (IPCC), CO2-induced global warming will result in a considerable reduction in sea ice extent in the Southern Hemisphere. Specifically, the report predicts a multi-model average decrease of between 16 and 67 percent in the summer and 8 to 30 percent in the winter by the end of the century (IPCC, 2013). Given the fact that atmospheric CO2 concentrations have increased by 20 percent over the past four decades, evidence of sea ice decline should be evident in the observational data if such model predictions are correct. But are they?

Thanks to a recent paper in the Journal of Climate by Josefino Comiso and colleagues, we now know what’s driving the increase in sea-ice down there. It’s—wait for it—cooling temperatures over the ocean surrounding Antarctica.

This team of six researchers set out to produce an updated and enhanced dataset of sea ice extent and area for the Southern Hemisphere for the period 1978 to 2015. The key enhancement over prior datasets included an improved cloud masking technique that eliminated anomalously high or low sea ice values, assuring that their work is the most definitive study of Antarctic sea ice trends to date.

The six scientists report the existence of a long-term increasing trend in both sea ice extent and area over the period of study (see figure below), with the former measure increasing by 1.7 percent per decade and the latter by 2.5 percent per decade. 

Figure 1. Monthly anomalies of Southern Hemisphere sea ice extent (left panel) and area (right panel) derived using the newly enhanced SB2 data (black) of Comiso et al. and the older SBA data (red) prior to the enhancements made by Comiso et al. Trend lines for each data set are also shown and the trend values with statistical errors are provided. Source: Comiso et al. (2017).

With regard to these observed increases, Comiso et al. confirm “the trend in Antarctic sea ice cover is positive,” adding “the trend is even more positive than previously reported because prior to 2015 the sea ice extent was anomalously high for a few years, with the record high recorded in 2014 when the ice extent was more than 20 × 10⁶ km² for the first time during the satellite era.”

They compared satellite-based estimates of temperature over the ocean and ice with the sea-ice data and found a very high negative correlation between ice cover and temperature. The large and systematic increase in ice extent must therefore be related to a cooling over the sea-ice region throughout the study’s period of record.

Why is this important? Much like the problems with the missing “tropical hot spot” we noted last month, Antarctic sea-ice modulates a cascade of meteorology. When it’s gone, or in decline, as is the forecast from the climate models, much more of the sun’s energy goes into the ocean, as that energy is only very poorly absorbed by ice, which means an enhanced warming of the Southern Ocean. That has effects on Antarctica itself, where slightly warmed surrounding waters will dramatically increase snowfall on the continent. The fact that there are only glimmerings of this showing up (if at all) should have tipped people off that something was very wrong with the temperature forecast for the nearby ocean.

Consequently, it is clear that despite a 20 percent increase in atmospheric CO2, and model predictions to the contrary, sea ice in the Antarctic has expanded for decades. Such observations are in direct opposition to the model-based predictions of the IPCC.

(N.B. as noted in our May Day post, the Antarctic ice sensor crashed last April, and subsequent data appears to be very unreliable and, in some cases, physically impossible.)

 

References:

Comiso, J.C., Gersten, R.A., Stock, L.V., Turner, J., Perez, G.J. and Cho, K. 2017. Positive trend in the Antarctic sea ice cover and associated changes in surface temperature. Journal of Climate 30: 2251-2267.

IPCC. 2013. Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change [Stocker, T.F., D. Qin, G.-K. Plattner, M. Tignor, S.K. Allen, J. Boschung, A. Nauels, Y. Xia, V. Bex and P.M. Midgley (eds.)]. Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA, 1535 pp.

Shu, Q., et al. 2015. Assessment of sea ice simulations in the CMIP5 models. The Cryosphere 9: 399–409.

A Class Camping Trip

Forget about monetary policy for a moment or two, and imagine, instead, that you’re back in 6th grade. You and your classmates are about to go on a camping trip, involving some strenuous hiking, and lasting several days.

Somehow, your teacher must see to it that all of you are kept well fed. To do so, she plans to appoint one of you Class Quartermaster. The school’s budget is limited, and rations can get heavy, so there will only be so much food to go around — so many hotdogs, baked beans, scrambled eggs, peanut butter sandwiches, and granola bars. The Quartermaster’s job will be to make sure it all gets divvied up fairly and efficiently.

The catch is that your classmates are a motley bunch. Pete Smith, the football team captain, is even taller than the teacher, and otherwise built like an old oak tree. His body goes through fuel like a small steam locomotive. Mary Beth Johnson, on the other hand, looks like a gust of wind might carry her off, and eats so little that she doesn’t mind Pete grabbing her grilled cheese sandwich on tomato soup day. The rest generally fall between those two extremes. But just how several days of hiking will affect all their needs is anybody’s guess.

Still, the food has got to be rationed somehow. And the class must decide how before a drawing of straws determines who will be Quartermaster. Will it be Jane “Goody Two Shoes” Miller, the teachers’ pet, or Wesley “The Weasel” Jones, who, though never caught red-handed, is widely suspected of cheating on his tests? Or could it — perish the thought! — turn out to be the ravenous Pete Smith himself? Whatever the choice, the class will have to live with it once the straws have been drawn.

After some discussion, the class decides to vote for one of two options for rationing the food. The first is to simply let the Quartermaster dole out food according to his or her best judgement. That option will allow the limited provisions to be used as efficiently as possible, with Pete Smith getting the bigger helpings he needs, and Mary Beth getting less, assuming that less suffices. The second option is to insist that the Quartermaster give equal rations to everyone, big, small, or in-between. That’s bound to be inefficient, of course. Still, it can easily beat having Wesley or Pete decide!

So, which option will you vote for? If you settle for the first, you favor a “discretionary” rationing policy; if the second, you favor a rationing “rule” over discretion.

The Lesser of Evils

The point of the camping trip story is that neither of the two alternatives — rules or discretion — is obviously better than the other, let alone perfect. Instead, the task you and your classmates faced was that of selecting the lesser of two evils.

The long-standing debate between proponents of monetary rules on one hand, and defenders of monetary discretion on the other, should likewise be understood as a debate concerning the lesser of evils, that is, the least-bad choice among inevitably imperfect alternatives. Only a fool would want to force monetary authorities to cling to a rule that could only serve to rule out better policy choices. But real-world monetary authorities are capable of screwing up for all sorts of reasons too. Just as our Class Quartermaster might misjudge his or her classmates’ caloric needs, they might misjudge an economy’s need for monetary accommodation. And just as the Quartermaster might be inclined to favor particular students, or the teacher, over others, so too might monetary authorities cater to special interests, including the government, instead of doing what’s best for the public taken as a whole.

For these reasons one can’t just dismiss the case for a monetary rule by observing how unhindered monetary authorities might improve upon it. Yet such dismissals are encountered surprisingly often, especially (though here their presence is perhaps a little less surprising) in the statements and writings of monetary authorities themselves. Not long ago, for instance, Fed Chair Janet Yellen responded to the suggestion that the FOMC follow a monetary rule by observing, with supporting figures and charts, that “rules often do not take into account important considerations and information.” Following one mechanically, she said, “could have adverse consequences for the economy.” Of course Yellen’s claim is correct. But whoever denied it? Certainly no proponent of monetary rules ever did. The argument for a monetary rule isn’t that sticking to such a rule will never have adverse consequences. It’s that the adverse consequences of sticking to a rule may be less serious than those of relying upon the discretionary choices of fallible monetary authorities.

Limited Knowledge

Let’s have a closer look at some reasons why unfettered monetary authorities can’t avoid making mistakes. The most fundamental reason is that, despite all the statistics and other information available to them, monetary authorities generally lack the knowledge required to choose the best possible policy stance.

In a previous Alt-M article, summarizing the limited-knowledge case for monetary rules, CMFA Senior Fellow Gerald O’Driscoll explains that much of the information required to determine the optimal course for monetary policy at any time “is dispersed among the millions of actors in society.” Like any rule of thumb, a monetary policy rule compensates for this unavoidable lack of knowledge concerning actual and developing circumstances by leaning on past experience. Although critics of monetary rules sometimes suggest that, unless a perfect monetary rule can be devised, discretion is necessary, the truth, O’Driscoll observes,

is just the opposite. There would be no need for reliance on a rule if the economy were fully understood. The less we know about the specifics of a situation, the more we must rely on rules. A good rule incorporates the general features of a class of situations, in which the specific features vary unpredictably. If we possess full information, why would we want to rely on a rule?

While O’Driscoll’s version of the limited-knowledge argument for a monetary rule draws on Friedrich Hayek’s famous essay, “The Use of Knowledge in Society,” the arguments of Milton Friedman — perhaps the best-known proponent of monetary rules — are (as O’Driscoll notes elsewhere) similar in spirit. So far as Friedman was concerned, the main challenge facing monetary policymakers was that of keeping their own actions “from being a major source of economic disturbances.” He considered a monetary rule the best means for meeting this challenge, in part because such a rule would make monetary policy more predictable, thereby avoiding unsettling policy surprises and the uncertainty that the very possibility of such surprises engenders.

Friedman believed, furthermore, that discretionary policymakers’ attempts to improve upon a rule were likely to backfire, not only because they lacked needed knowledge of existing circumstances, but because of the “long and variable lags” that stood between their decisions and those decisions’ ultimate effects on spending, inflation, and employment. A decision to ease monetary policy today, based on information suggesting that money is or may be getting tight, could end up making money too loose months from now, when the decision has had its greatest impact on the money stock, because the demand for money has since subsided. The problem is especially likely during recoveries, when central bankers are loath to stop administering monetary medicine until their patient displays robust health. The opposite problem — of money becoming excessively tight in the midst of a crash — is also common, because central banks often come to realize that their policies have been too loose, and therefore decide to tighten, just as asset prices that had been bolstered by their formerly loose stance are set to come crashing down.

Political Pressure

Even if they could command the necessary knowledge, monetary authorities might fail to choose the best policies for political, bureaucratic, or psychological reasons.

Among such unwelcome influences, political pressure reigns supreme. No matter how nominally “independent” they may be, central banks are creatures of legislation, and as such depend on government support for whatever powers they possess. They who give, can take away; so central bankers can never altogether avoid having to set aside their preferred policies for the sake of mollifying those government officials who are in a position to punish them or their institution.

That even nominally independent central banks have often acted as if they were mere backscratchers of fiscal authorities, especially by taking part in inflationary wartime finance, is only the most obvious example of their tendency to set aside sound monetary policy for the sake of accommodating their sponsoring governments’ fiscal needs. That tendency is hardly surprising, given that fiscal accommodation of governments was the very raison d’être of all early central banks. The Fed is no exception to this rule: it helped to finance every one of this country’s major post-1913 military conflicts, though doing so meant tolerating high rates of inflation.

Nor has the Fed been immune to peacetime political pressure, especially from Presidents. That in the run-up to the 1972 presidential election President Nixon pressured then Fed Chairman Arthur Burns to ease policy to boost his prospects for re-election, despite mounting inflation, is notorious, thanks mainly to the Nixon Tapes, which leave no room for doubt concerning what took place. But Burns’ conduct, far from being exceptional, was of a piece with the by then well-established understanding of the Fed’s limited “independence within government,” as Burns’ predecessor, William McChesney Martin, described it. In essence, Martin was saying that, although the Fed was independent, it could remain so only if it occasionally did whatever the government wanted it to do!

In short, although it’s tempting to assume that an “independent” central bank is one that’s not operating under the sway of government officials, the truth is that, so long as central bankers enjoy discretionary powers, government officials will try to influence their decisions, and will succeed in doing so at least to some extent. To truly insulate monetary policy from short-run political influences, something beyond central bank independence is needed, something that can rule out any tendency for central bankers to pander to their own overseers — something like an inviolable monetary rule.

Other Sources of Bad Discretionary Policy

Even when central bankers aren’t caving in to pressure from politicians, their choices can reflect considerations apart from those that ought to inform their policies. Being bureaucrats, they are no less inclined than other bureaucrats to take advantage of opportunities to increase their institutions’ budgets, even when doing so isn’t consistent with their assigned objectives. And being like politicians themselves to some extent, when faced with an emergency they often can’t resist acting according to the “politician’s syllogism,” to wit:

  • We must do something.
  • This is something.
  • Therefore, we must do this.

It’s well established, to offer an analogy, that doctors tend to over-prescribe both drugs and tests, and also to have patients admitted to hospitals more often than is in their patients’ own interest, because existing insurance arrangements are such that, by doing those things (“something”) they can both earn more and reduce their risk of being sued for malpractice. Although central bankers can’t be sued for malpractice (alas), they have their own reasons for insisting on doing “something” whenever one of their patients — entire economies — so much as sneezes.

The same central bankers are, on the other hand, all too inclined to refrain from taking action to address troubles brewing in an economy that seems all too healthy. Besides coming up with “independence within government,” Martin also memorably compared a responsible central banker to a chaperone whose unenviable responsibilities included that of having “the punch bowl removed just when the party was really warming up.” Unfortunately the quip has gained notoriety mainly as a description of what discretionary central bankers often fail to do.

There are plenty of other reasons why central bankers might misuse their discretionary powers, including various psychological biases to which they might be prone. Drawing on the field of behavioral economics, Mark Calabria, Cato’s former Director of Financial Regulation Studies, has discussed several of these potential biases; so has Andrew Haldane, the Bank of England’s Chief Economist. Status quo bias, myopia, hubris, and groupthink are just a few of the afflictions they consider. Just how serious these afflictions are in practice is an open question. But the likelihood that central bankers suffer from at least some of them supplies that much more reason for entertaining the possibility that the right sort of monetary rule might outperform monetary discretion.

The Time-Inconsistency Twist

The case for a monetary rule, as I’ve summarized it so far, rests on the claim that central bankers may lack either the information required, or the inclination, to pursue the best possible monetary policies. But according to a now-famous paper by Finn Kydland and Edward Prescott, even omniscient altruists, left to manage the money supply as they think best, might be bested by a monetary rule. That’s so because of what Kydland and Prescott call the “time-inconsistent” nature of optimal monetary policy, which can prevent even well-intentioned and well-informed central bankers from carrying out their preferred plans.

To illustrate the problem, Kydland and Prescott imagine a case in which inflation, deflation, and unemployment are all considered undesirable. An unexpected burst of money creation can, however, make everyone better off by temporarily reducing unemployment, albeit at the cost of a one-time increase in the price level.

So what’s the best policy? Suppose that a committee of benevolent central bankers promises to keep inflation at zero, and that the public believes it. The very fact that the public doesn’t expect any inflation will then tempt the central bankers to take advantage of “surprise” inflation to lower unemployment, since doing so results in a one-time gain. The temptation in question is what makes the announced policy “time-inconsistent.”

Were the public naive enough to go on believing the central bankers’ promises no matter how often these were broken, the central bankers’ best strategy would be to trick them again and again, thereby achieving a permanently lower unemployment rate, albeit at the (perhaps worthwhile) cost of a higher inflation rate. But hoodwinking the public, even for its own good, isn’t so simple. The public will eventually come to anticipate the central bankers’ inflationary strategy, assuming it isn’t smart enough to divine it from the get-go. Either way, instead of allowing them to make everyone better off than they might by sticking to zero inflation, the discretion the central bankers exercise ends up trapping them in a high-inflation equilibrium, with no employment gains at all, because they find that they must either continue to live up to the public’s positive inflation expectations, or surprise them with a non-inflationary policy that will lead, for some time, to higher-than-necessary unemployment.

Is it possible for our central bankers to avoid the trap that Kydland and Prescott describe? It is, but to avoid it they must renounce their discretionary powers, and instead commit the central bank to a strictly-enforced zero inflation rule. Unlike central bankers’ mere promises, an unbreakable rule will necessarily be “time consistent.” Consequently the public has no reason to anticipate any deviation from it. Just as Ulysses was only able to resist the Sirens’ call by having himself lashed to the mast, our altruistic central bankers are only able to avoid being drawn into an inflationary equilibrium by renouncing their freedom to “fine tune” monetary policy.
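For readers who want to see the trap mechanically, here is a toy numerical sketch in the spirit of Kydland and Prescott’s argument. Its quadratic-loss form follows later textbook treatments of the idea, and every parameter value is invented for illustration:

    # Toy time-inconsistency model; all parameter values are invented.
    A, B = 1.0, 1.0              # weight on inflation; Phillips-curve slope
    U_NAT, U_TARGET = 5.0, 4.0   # natural unemployment rate; over-ambitious target

    def unemployment(pi, pi_expected):
        """Only surprise inflation pushes unemployment below its natural rate."""
        return U_NAT - B * (pi - pi_expected)

    def loss(pi, pi_expected):
        """Social loss from inflation and from unemployment above target."""
        return A * pi ** 2 + (unemployment(pi, pi_expected) - U_TARGET) ** 2

    def best_response(pi_expected):
        """Discretion: pick the inflation rate that minimizes loss, taking
        the public's expectation as given (grid search, 0 to 10 percent)."""
        grid = [i / 100 for i in range(1001)]
        return min(grid, key=lambda pi: loss(pi, pi_expected))

    pi_e = 0.0                   # the public starts out expecting zero inflation
    for _ in range(50):          # ...but learns to expect what is actually chosen
        pi_e = best_response(pi_e)

    print(round(pi_e, 2))                     # ~1.0: discretion ends in inflation
    print(unemployment(pi_e, pi_e))           # 5.0: with no employment gain at all
    print(loss(pi_e, pi_e) > loss(0.0, 0.0))  # True: the credible zero rule wins

The discretionary equilibrium delivers the natural unemployment rate plus positive inflation; a credibly enforced zero-inflation rule delivers the same unemployment rate without the inflation.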

Flexible and Inflexible Rules

Showing that a monetary rule might outperform discretionary central banking is one thing; identifying a particular rule that’s likely to do so is another. Indeed, it’s fair to say that, of countless monetary rules that have been proposed at one time or another, the vast majority would eventually have led to some extremely undesirable outcomes, if not to outright disaster.

Take, for example, the “k-percent” money supply growth rule that Milton Friedman once favored — a rule according to which some simple monetary aggregate (Friedman tended to prefer M2) was set to grow at a modest but unchanging rate. Such a rule could work reasonably well only so long as there were no major changes in people’s real demand for the money assets in question. If that demand increased at a steady pace of 5 percent each year, a 5 percent growth rule would just meet the public’s needs, keeping spending stable. But in practice the rate at which the demand for any fixed set of monetary assets — for M1, or M2, or M3, or whatever — grows is likely to change over time, perhaps dramatically, as financial innovations and other changes alter different assets’ relative attractiveness. Speaking generally, Janet Yellen was entirely correct in observing some months ago that “sensible implementation of policy rules requires adjustments to take such changes into account, as a failure to do so would result in poor monetary policy decisions and poor economic outcomes.”

But, valid as it is, Yellen’s observation doesn’t mean that we must, after all, fall back on discretion as our only hope for getting monetary policy (almost) right. Instead, it’s possible to design monetary rules that themselves take account of changing conditions. The best-known example of such a “flexible” monetary rule is the so-called “Taylor Rule,” named after Stanford economist John B. Taylor, in which the central bank sets a federal funds rate target that is constantly adjusted in response to deviations of inflation and output from their desired and “potential” levels, respectively. Taylor himself has argued that, had it stuck to his rule (as it did, more or less, from the mid-1980s until 2000 or so), the Fed might have avoided a good part of the calamitous boom-bust cycle of 2002-2009. But the more important and general point is one that Athanasios Orphanides and John Williams emphasized at the start of the crisis, to wit: that it is after all “possible to design a simple policy rule that can deliver reasonably good macroeconomic performance even in an environment of considerable uncertainty regarding expectations formation and natural rate uncertainty.”
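For concreteness, here is the rule in its standard 1993 form, with Taylor’s original illustrative values: a 2 percent equilibrium real rate, a 2 percent inflation target, and 0.5 weights on both gaps. The sample inputs are invented:

    def taylor_rate(inflation, output_gap, r_star=2.0, pi_target=2.0):
        """Fed funds target = real rate + inflation + 0.5*(inflation gap)
        + 0.5*(output gap), all in percent (Taylor 1993)."""
        return r_star + inflation + 0.5 * (inflation - pi_target) + 0.5 * output_gap

    print(taylor_rate(inflation=2.0, output_gap=0.0))   # 4.0: economy on target
    print(taylor_rate(inflation=4.0, output_gap=1.0))   # 7.5: tighten when overheating
    print(taylor_rate(inflation=1.0, output_gap=-2.0))  # 1.5: ease in a slump

The rule is “flexible” in the sense just described: the prescribed rate moves automatically with inflation and the output gap, yet nothing about the response is left to the policymaker’s discretion.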

Furthermore, having a monetary rule, however rigid or flexible the rule may be, doesn’t necessarily mean having one written in stone: provisions can be made for a rule’s periodic reconsideration and revision, according to a regular schedule, or in response to designated circumstances, and following agreed-upon procedures. One can, in other words, combine a monetary rule in the ordinary sense of the term with a “meta” monetary rule for revising the rule over time. In short, to think clearly about the relative merits of rules and discretion, it’s important to realize that there are many way stations between the extremes of an inflexible and unalterable rule on one hand and unalloyed monetary discretion on the other.

A Stable Spending Rule

If discovering a reliable monetary rule has been harder in practice than it appears to be in theory, that’s largely because of confusion regarding the appropriate, ultimate objectives of monetary policy. Keynesians have, on the one hand, tended to insist on a “full employment” objective, while (old-school) monetarists have, on the other, insisted on the need for low (if not zero) and steady inflation. Some monetary rules, like Taylor’s, attempt to strike a compromise between these positions.

If you ask me, such compromises, for all their practical merits, still have economists barking up the wrong tree. The belief that inflation and deflation are necessarily bad, and the related belief that a constant (if not necessarily zero) inflation rate is better than a varying rate, are both widely subscribed to, even among professional economists. Those beliefs are nevertheless mistaken: as I’ve suggested in previous chapters of this primer, and as I’ve argued at some length elsewhere, there are good reasons, and plenty of them, for letting the inflation rate vary along with an economy’s productivity, so that prices rise less quickly in more productive times than in less productive ones, and perhaps even decline.

The goals of “full employment” and its close counterparts, including “potential” output, also leave much to be desired as guides to sound monetary policy, in part because they’re nebulous, but also because theory tells us that even if they weren’t so, attending to them alone wouldn’t suffice to “pin down” monetary policy in the sense of establishing a uniquely desirable path for either the money supply itself or the price level. Instead, many such paths might be equally capable of keeping employment and output close to their “full” or “potential” levels.

So, what should monetary policy aim for? I’ve said it before, and I’ll say it again: its aim should be the stable growth of total spending in the economy. Let spending grow at a steady rate, roughly equal to the rate of growth of the labor force, and the inflation rate will vary only as productivity varies, which is what it ought to do. At the same time, employment, though perhaps less than “full” according to some other criteria, will not be so on account of any lack of spending, and will therefore not be so in any way warranting further doses of monetary medicine.
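In sketch form, the arithmetic is just the equation of exchange rearranged: hold total spending on a fixed growth path, and inflation becomes the residual after real growth. The 1 percent spending-growth figure below, standing in for labor-force growth, is purely illustrative:

    SPENDING_GROWTH = 1.0  # percent per year; assumed labor-force growth

    def implied_inflation(real_output_growth):
        """With nominal spending (P x Y) on a fixed growth path, inflation is
        approximately spending growth minus real output growth."""
        return SPENDING_GROWTH - real_output_growth

    print(implied_inflation(3.0))  # -2.0: prices gently fall in high-productivity years
    print(implied_inflation(0.5))  #  0.5: prices rise when real growth is weak

Benign, productivity-driven deflation is thus built into the rule, rather than treated as a failure to be corrected.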

While devising a monetary rule that strikes a correct balance between the supposed “wrongs” of price level movements on one hand and less than “full” employment on the other may ultimately prove as intractable a problem as squaring the circle, devising one that’s consistent with preserving a stable level of spending is, comparatively speaking, child’s play. The challenge consists of getting both Keynesians and Monetarists, as well as others, to agree that stability of spending, rather than any particular values of inflation or unemployment, ought to be the ultimate objective of monetary policy.

[Cross-posted from Alt-M.org]

Like Allan Meltzer, I received my Ph.D. from UCLA. He and his major professor, Karl Brunner, had both left by the time I arrived. UCLA is an important intellectual connection. At the time, UCLA was informally known as “Chicago West,” for its intellectual affinity to the University of Chicago Economics Department.

The characterization was misleading if not wrong. UCLA was where Chicago and Vienna (the Austrian School) intersected. UCLA’s professors and their students were influenced by both traditions. That explains positions taken by them over the years on many issues. For instance, in their work, Brunner and Meltzer adhered to a conception of Knightian (after Frank Knight) or true uncertainty. That type of uncertainty is not readily modeled with definitive results.

For a long time, I had minimal interaction with Allan. When he came to Washington, D.C., to begin writing his multi-volume history of the Federal Reserve System, however, that began to change. AEI supported the research, and its president, Chris DeMuth, provided Allan with an office and association. Early on, Allan invited me over to AEI for lunch.

Importantly, he began inviting me to attend the meetings of the Shadow Open Market Committee as an observer. They were very instructive and illuminating. Aside from the substance, I marveled at his performance as Chairman. Getting academics to agree is like herding cats. Allan had the skill.

Later, I worked at the Heritage Foundation. Ed Feulner, Heritage’s President, was appointed to the Congressionally mandated International Financial Institutions Advisory Commission (IFIAC). In the wake of multiple global financial crises, Congress wanted a review of the IMF, World Bank, and other international agencies. The Commission’s original Chairman withdrew before the Commission began meeting. Ed asked my advice on a replacement. Without hesitation, I replied “Allan Meltzer.” “Why?” I was asked. “He can herd cats,” I replied.

Allan accepted on the condition I would be his Chief of Staff. Now I had two full-time jobs, each with lots of overtime.

Allan’s domination of the Commission soon led the press to rename IFIAC as “The Meltzer Commission.” From the beginning, Allan was determined that the commission would arrive at a nonpartisan set of recommendations. The membership had 6 Republican and 5 Democratic members and was expected to divide along those partisan lines.

Because of an informal alliance the Chairman struck with Jeffrey Sachs, the deliberations of the Commission were largely nonpartisan. With a few exceptions, the deliberations were conducted in a collegial atmosphere. The final vote was 8-3 in favor of the findings. It was a remarkable result because the Democratic members were under extreme pressure not to sign the majority report. It was all a testimony to the Chairman’s remarkable political skills.

Allan passed away early Tuesday. He will be remembered. He will be missed.

[Cross-posted from Alt-M.org]

Following is my response to the Commerce Department’s request for public comments on the “Causes of Significant Trade Deficits.”

In a globalized economy, where the value embedded in most manufactured goods originates in multiple countries and two-thirds of trade flows are intermediate goods, bilateral trade accounting is meaningless. In a world where statistical agencies attribute the entire $180 cost of producing an Apple iPhone to China, where it is merely assembled for a cost of about $6, what do trade statistics and trade balances mean? By assigning 100 percent of the value of an import to the final country on the assembly line, trade statistics have lost most of their meaning.

The misguided belief that the trade account is a scoreboard measuring the success or failure of trade policy explains much of the public’s skepticism about trade and trade agreements, lends plausibility to claims that the United States is routinely outsmarted by shrewder foreign trade negotiators, and provides cover for the same, recycled mercantilist and protectionist arguments that have persisted without merit for centuries.

If the trade deficit reduced economic activity and destroyed jobs, why does it tend to widen when output and employment are growing? The overall trade deficit, by and large, is also a meaningless statistic. It is neither a barometer of economic health nor a running tally of debt with which we are burdening future generations.

For 42 straight years, the United States has registered an annual trade deficit with the rest of the world.  That means that year after year, Americans spend more on foreign-produced goods and services than foreigners spend on U.S.-produced goods and services or, put simply, the dollar value of U.S. imports exceeds the dollar value of U.S. exports.

For almost as long, some economists have been arguing that trade deficits are unsustainable – they sap economic growth, bleed jobs, and saddle our descendants with debt.  Perhaps if one fixates on the trade deficit (or the slightly broader current account deficit, which includes interest on foreign assets and remittances) in isolation, these concerns might seem to have merit.  But looking at the U.S. trade or current account deficits without considering the capital account surplus is a meaningless, misleading exercise.

The trade deficit is not a problem because the associated capital surplus (the excess of inward investment over outward investment), which includes high-quality foreign direct investment, bestows huge advantages on the U.S. economy. 

After all, one of the reasons that trade is so maligned is that the public has been led to believe that the trade account is a scoreboard, with the deficit indicating that Team America is losing – and it’s losing on account of poorly negotiated trade deals and foreign cheating.  The United States runs a trade deficit with the rest of the world because Americans spend more dollars on foreign-produced goods and services than foreigners spend on U.S.-produced goods and services. The dollar value of U.S. imports exceeds the dollar value of U.S. exports, so our trade account is negative. It’s in deficit. That’s straightforward.

A slightly broader measure of international transactions than the trade account is the current account. The current account includes the trade account plus net proceeds on investment (income earned on U.S. assets abroad minus income earned on foreign-held assets in the United States) plus net transfers (remittances and aid, primarily, flowing into the United States minus remittances and aid, primarily, flowing out of the United States). Those two components (net proceeds and net transfers) are much smaller than the value of exports and imports, so the U.S. current account typically isn’t much larger or much smaller than the trade account. In 2016, the trade deficit amounted to $503 billion and the current account deficit was $481 billion.

So, how is it even possible to run a trade deficit in the first place? How can Americans send $503 billion more abroad for goods and services than foreigners send to the United States for goods and services?

Americans are able to purchase more goods and services from foreigners than they sell to them because foreigners buy more assets from Americans than Americans buy from foreigners. There is a positive inflow of dollars on the capital account. Foreigners don’t only buy goods and services from Americans. They buy U.S. assets (equities, property, factories, service centers, shopping malls, machines, other physical assets, corporate debt, and government debt) from Americans. Likewise, Americans don’t only buy goods and services from foreigners. We buy the same kinds of assets from foreigners, as well.

The proper way to account for international transactions is to note that the value of the goods, services, and assets that Americans purchase from foreigners is approximately identical to the value of the goods, services, and assets that foreigners purchase from Americans.  If there is a difference between the current account deficit and the net capital inflow, it is accounted for by the change in foreign reserves.

The United States ran a $481 billion current account deficit with the rest of the world in 2016, and it ran a $481 billion capital account surplus. The capital account consists of three broad components: U.S. purchases of foreign assets; foreign purchases of U.S. assets; and the change in foreign reserves. And it is a mathematical certainty that the current account plus the capital account equals zero. Put another way, the value of the current account deficit is identical to the value of the capital account surplus.
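
Because the identity is pure arithmetic, it can be checked directly. Here is a minimal sketch using the 2016 figures from the text; the $22 billion for net income and transfers is not a separately reported number but simply the residual implied by the $503 billion trade deficit and the $481 billion current account deficit.

    # All figures in billions of dollars, 2016.
    trade_account = -503.0            # U.S. trade deficit
    net_income_and_transfers = 22.0   # residual implied by the two figures above

    current_account = trade_account + net_income_and_transfers   # -481.0
    capital_account = 481.0           # net foreign purchases of U.S. assets,
                                      # including any change in foreign reserves

    # The mathematical certainty described above: the accounts sum to zero.
    assert abs(current_account + capital_account) < 1e-9
    print(f"Current account: {current_account:+.0f}")   # -481
    print(f"Capital account: {capital_account:+.0f}")   # +481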

So, the U.S. trade deficit is financed by inflows of foreign capital used to purchase U.S. assets. Most of the assets purchased are equities and physical assets (direct investment). Some of the assets purchased are corporate debt and government debt. As of the end of 2014, Americans held a total of $24.6 trillion of foreign assets. Foreigners held a total of $31.6 trillion of U.S. assets. Of that $31.6 trillion foreign asset portfolio, treasury bills and bonds accounted for about $6 trillion — just under 20 percent of the total. It is only this portion — government debt owned by foreigners — that the American public (of this generation or the next) is on the hook to pay back. Corporate debt has to be repaid, but only by the shareholders and employees of the companies issuing the debt — not by you or me or our children, generally. Equity purchases don’t have to be paid back at all — they’re not loans! When European, Japanese, Korean, Chinese or any foreign investors purchase U.S. companies or make “greenfield” investments to build new production or research facilities or hotels or shopping centers, there is no debt to be repaid.  Americans are not on the hook to repay Honda for its investment in production facilities in Marysville, Ohio, for example.  Americans simply benefit from Honda’s success, earning wages and profits they would not have enjoyed without Honda’s presence. 
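
The “just under 20 percent” figure is simple division; here is a minimal sketch using the end-of-2014 figures above.

    # End-of-2014 figures from the text, in trillions of dollars.
    foreign_held_us_assets = 31.6
    treasuries_held_by_foreigners = 6.0   # the only portion taxpayers must repay

    public_liability_share = treasuries_held_by_foreigners / foreign_held_us_assets
    print(f"{public_liability_share:.1%}")   # 19.0% -- "just under 20 percent"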

Selling equity or property or even entire U.S. companies to foreigners does not constitute debt, and it is not akin to subsidizing our consumption by drawing down our assets, as some suggest. It’s not a reverse mortgage. By the simple logic of supply and demand, the presence of foreigners in U.S. asset markets is good for U.S. asset holders. Foreign participation constitutes greater demand in the market, which increases the price of the assets in question. And there are some real knock-on benefits associated with foreign-headquartered companies operating in the United States. These “insourcing” companies tend to perform well above the average U.S. company in terms of value creation, capital investments, research and development spending, compensation, employment, and many other metrics, as this paper documents.

In fact, there is a compelling argument that a trade deficit is actually good for the U.S. economy, because the foreign companies that actually come and operate in the United States bring better quality, deeper experience, and more numerous successes than the average U.S. company. Foreign direct investment in the United States is a conduit for bringing world-class companies that have succeeded and thrived in other markets to share their expertise with Americans.

Let me conclude this one by reiterating that only a portion of our trade deficit needs to be repaid by the American public to foreigners, and it is that portion used to finance government budget deficits — roughly one-fifth of the annual trade deficit.  The trade deficit is not an argument for trade barriers, but rather one for reducing profligate government spending.

The Low Income Housing Tax Credit (LIHTC) is a federal program that subsidizes the construction of housing for poor tenants. The $8 billion program suffers numerous failures, as discussed in this study. One problem is that the program’s subsidies may flow more to developers and financial institutions than to the needy population that is supposed to benefit.

National Public Radio investigated the LIHTC for a show aired yesterday. The joint investigation with PBS found that the program has “little federal oversight” and is producing “fewer units than it did 20 years ago, even though it’s costing taxpayers 66 percent more.” The investigation discovered that “little public accounting of the costs exists, even among government officials and regulators charged with monitoring the program.”

Here’s how the program works:

Every year, the IRS distributes a pool of tax credits to state and local housing agencies. Those agencies pass them on to developers. The developers then sell the credits to banks and investors for cash. Often, to find investors, developers will use middlemen called syndicators. The banks and investors get to take tax deductions, while the developers now have cash to build the apartments.

With lots of groups on the federal gravy train—state and local housing bureaucracies, developers, banks, syndicators, and investors—the LIHTC program has fortified itself politically. Developers apparently take a 15 percent cut on the total value of housing projects, while syndicators earned more than $300 million in fees last year.  
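
To see how the chain can dilute a subsidy dollar, consider a minimal sketch. Only the developers’ roughly 15 percent cut comes from the reporting above; the credit price and syndicator fee rate are hypothetical placeholders chosen purely for illustration, and the cut is applied to cash raised rather than total project value for simplicity.

    # Hypothetical walk-through of the LIHTC chain described above.
    credits_awarded = 1_000_000      # face value of credits, dollars
    price_per_credit_dollar = 0.90   # assumed discount investors pay (hypothetical)
    syndicator_fee_rate = 0.05       # assumed middleman fee (hypothetical)
    developer_cut_rate = 0.15        # the 15 percent cut cited above

    cash_raised = credits_awarded * price_per_credit_dollar
    syndicator_fee = cash_raised * syndicator_fee_rate
    developer_cut = cash_raised * developer_cut_rate
    left_for_housing = cash_raised - syndicator_fee - developer_cut

    print(f"Cash raised from investors: ${cash_raised:,.0f}")
    print(f"Syndicator fee:             ${syndicator_fee:,.0f}")
    print(f"Developer cut:              ${developer_cut:,.0f}")
    print(f"Left for construction:      ${left_for_housing:,.0f}")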

Some share of LIHTC subsidies disappears in corruption and fraud. NPR profiles a Miami-area criminal enterprise led by Biscayne Housing and Carlisle Development Group, which is “one of the country’s top affordable housing developers.” The companies stole $34 million from 14 LIHTC projects. Biscayne’s former head Michael Cox admits, “It was a construction kickback scheme … The scam was to submit grossly inflated construction numbers to the state in order to get more money than the project required and then have an agreement with the contractor to get it back during construction.”

NPR interviewed Assistant U.S. Attorney Michael Sherwin, who has spent five years investigating the LIHTC program in Florida. “This program has been described as a subterranean ATM, and only the developers know the PIN,” he says.

The IRS runs the program at the federal level, but its oversight is “minimal,” says the GAO. The IRS relies on local housing agencies to prevent corruption, but those agencies don’t put much effort into program integrity. “It’s really a program of trust,” Sherwin noted to NPR.

The man who ran Florida’s housing agency at the time of the Biscayne/Carlisle crime spree, Steve Auger, defended the LIHTC and assured NPR such scams would not happen again. He said, “It’s probably the most efficient tax housing program that has ever existed.”

So far, this blog has had a serious focus. But now it’s time for the comic relief:

After the interview with NPR and Frontline in late 2016, Auger was forced to resign from the agency after an audit revealed he spent more than $50,000 on a steak and lobster dinner for affordable housing lenders and gave his own staff almost half a million dollars in bonuses.

I guess Auger was lying about the efficiency thing, but he was also wrong about scams not happening again:

A few months later, Sherwin charged a Miami-based shell company called DAXC LLC belonging to the owners of Pinnacle Housing Group, another one of the largest developers in the country, with the theft of $4 million from four tax credit developments. In an agreement with prosecutors, a DAXC representative acknowledged that the company “inflated costs” for its own “personal benefit.”

Is LIHTC corruption confined to Florida? The Assistant U.S. Attorney doesn’t think so:

Sherwin says he is not done investigating the LIHTC program. He says he is turning his investigation to more developers with projects in other states and also to the banks, lenders and syndicators. “I know that this fraud doesn’t just reside in South Florida,” he says. “There’s too much money involved, and based upon other information that we’ve looked at, this fraud exists in other jurisdictions.”

The vast majority of housing agencies have never been audited. Since the program began in 1986, there have been only seven audits of the 58 state and local housing agencies that the IRS relies on to watch it. And when you trace the tax credits of LIHTC properties upward to syndicators and investors, the profit structure becomes even more obscure.

Congress is considering tax reform this year to cut rates and eliminate loopholes. The LIHTC is a corporate loophole that should be chopped. Tax reform means less steak and lobsters for the insiders and more income and opportunity for average families.

Kudos to NPR and PBS for their investigation. Further discussion of the LIHTC here, here, and here. Discussion of reforms to reduce housing costs here.

Yesterday President Trump fired FBI Director James Comey.  Although the manner in which this was handled was ham-fisted, this is likely to be seen, at least in retrospect, as a wise move.

The warning signs about James Comey were there all along. The Wall Street Journal summarized some of his spectacular misjudgments in a 2013 editorial titled “The Political Mr. Comey”: the overzealous pursuit of Frank Quattrone and Steven Hatfill, and the appointment of Patrick Fitzgerald, who then ran amok in the Valerie Plame and Robert Novak case.

I disagree with the Journal’s take on Comey’s fight with then-White House Counsel Alberto Gonzales over the reauthorization of Bush’s warrantless surveillance program—that goes on the plus side of Comey’s ledger.  But there are even more bad judgments that the Journal did not mention. For example, Comey went after Martha Stewart in a case of ruthless ambition.

When the high stakes “enemy combatant” controversy was pending before the Supreme Court, Comey pulled one of his stunts, holding a press conference to “inform” the public of the gravity of the case.  Attorney and author Scott Turow rightly called out Comey’s outrageous trial by news conference.

We can do much better than James Comey.  If Trump can repeat the careful process by which he selected Neil Gorsuch for the Supreme Court and secure a fairly swift confirmation vote, this matter will soon be forgotten.  If the selection process is mishandled, the political storm clouds will hang over the White House for quite some time.

My own review of the troubled history of the FBI can be found here and here.

Traditional educators frequently claim that public charter schools are failing, even when the evidence indicates that charters perform no worse than traditional institutions on student test scores. That logic ignores costs, which matter to educational success because every wasted dollar could otherwise be allocated toward further academic achievement. If charter students achieve the same results while receiving less public funding, then choice schools are significantly outperforming residentially assigned institutions on a per-dollar basis.

I just released a study with Patrick Wolf, Larry Maloney, and Jay May examining disparities in funding between students in charters and traditional public schools in 15 metropolitan areas in the 2013–14 school year. As shown in figure 1 from the report, students enrolled in a public charter school receive substantially less funding than those in traditional public schools in all but one location. In fact, we find that students in charter schools receive about $5,721 less in total annual funding than their peers in district schools.

Source: Wolf, Maloney, May, and DeAngelis (2017). “Charter School Funding: Inequity in the City.” School Choice Demonstration Project, Department of Education Reform, University of Arkansas.

Critics of this type of evaluation often argue that funding disparities are due to differences in types of students. After all, traditional public schools (TPS) may have a larger proportion of students requiring additional educational resources. While the TPS in our study do enroll more special needs children, we find that these differences do not fully explain the funding gap between traditional public schools and public charter schools.

Funding inequity across the two sectors has only gotten worse over time. Eleven years after the research team first revealed that public charter schools receive less funding than their traditional public school peers, the funding disparity had grown by about 79% in eight cities.

Should these results surprise us? If you could force your customers to buy your product at a high price, would you need to reduce expenses? Perhaps more importantly, if your customers could not leave, how would you know which costs to cut? The traditional system of schooling makes it impossible to allocate resources efficiently, even if local public school leaders are highly competent and benevolent.

Nonetheless, these findings are important for decision-makers to consider, especially if they care about improving student outcomes through efficiently allocating educational funding. Just imagine what would happen to the education sector if families could choose which institution to send their funds to. Schools would be rewarded for quality and efficiency, freeing up the resources necessary to improve the lives of millions of children around the nation.

In four little panels Steve Kelley punctures the government’s bizarre claims about its powers and our rights.

Although many false arrests are exposed in court, that’s cold comfort when you’re getting handcuffed and realize you’ll be locked up a while. Here’s a fairly recent example of such a false arrest caught on tape:

For related Cato work, go here and here.

H/T: Jacob Sullum at Hit & Run.


Two front-page stories in the Washington Post today tell a depressing story:

President Trump’s most senior military and foreign policy advisers have proposed a major shift in strategy in Afghanistan that would effectively put the United States back on a war footing with the Taliban…more than 15 years after U.S. forces first arrived there.

Seventeen years and $10 billion after the U.S. government launched the counternarcotics and security package known as Plan Colombia, America’s closest drug-war ally is covered with more than 460,000 acres of coca. Colombian farmers have never grown so much, not even when Pablo Escobar ruled the drug trade. 

There are high school students about to register for the draft who have never known a United States not at war in Afghanistan and Iraq. And of course the policy of drug prohibition has now lasted more than a century, though the specific Colombian effort began only under President Clinton around 1998, getting underway in 2000.

I wrote an op-ed, “Let’s Quit the Drug War,” in the New York Times in 1988. Cato scholars and authors have been writing about the seemingly endless war(s) in the Middle East for years now. Maybe it’s time for policymakers to start considering whether endless war is a sign of policy failure.

And maybe one day, a generation from now, our textbooks will not tell our children, We have always been at war with Eastasia.

One of the key concerns about climate change is ecosystem resilience. This is particularly true for ecosystems anchored in place over large areas, with little ability to move. The ecological communities of the Chesapeake Bay come to mind.

According to the U.S. National Climate Assessment report published in 2014 (Melillo et al., 2014), there is “very high confidence that coastal ecosystems are particularly vulnerable to climate change because they have already been dramatically altered by human stresses, as documented in extensive and conclusive evidence” (Moser et al., 2014). Additionally, the report claims there is “very high confidence that climate change will result in further reduction or loss of the services that these ecosystems provide, as there is extensive and conclusive evidence related to this vulnerability” (Moser et al., 2014).

That Assessment has been criticized as being far too alarmist, too political, and very incomplete with regard to its summarization of important scientific literature. It didn’t help that when it was released, the National Oceanic and Atmospheric Administration (whose bailiwick includes coastal ecosystems), called the report “a key deliverable in President Obama’s Climate Action Plan” in the press release for its rollout.

It’s important to quantify claims like the ones made above, and one type of ecosystem that has received considerable attention in this regard is the seagrass biome. These dense underwater meadows are found in numerous coastal waters, including those of the United States. They are the foundation of ecosystems as diverse and variegated as those associated with coral reefs, but they get little public attention because they aren’t nearly as showy. But they are important. Their presence helps to reduce coastal erosion, improve water quality, and mediate ocean chemistry, which adds economic value. Given the important functions that they perform within their coastal ecosystems, it should come as no surprise that concerns have arisen over the current and future ability of seagrass ecosystems to withstand rising atmospheric CO2 concentrations – i.e., global warming and ocean acidification.

A new study by Shelton et al. (2017) sheds some important light in this regard. Working with over 160,000 observations from Puget Sound, Washington, USA, the team of six researchers created a database of eelgrass, a common constituent of seagrass ecosystems worldwide. They surveyed data along hundreds of kilometers of shoreline over the 41-year period 1972-2012 in the Puget Sound, home to millions of people as well as tourism, transportation, and recreation. It’s fair to call it the Chesapeake Bay of the Northwest, and there are all kinds of pressures to keep it healthy. Their long survey period includes rapid economic development as well as increases in dissolved carbon dioxide as atmospheric concentrations rose. Their hope was to quantify the natural and anthropogenic factors contributing to eelgrass change across various spatial and temporal scales.

Shelton et al. indeed did report there were “substantial changes” in eelgrass populations over the four decades of study. But a look at the smaller spatial scales yielded “no obvious geographic coherence in [the] trends,” with adjacent eelgrass sites sometimes showing opposite trends. This lack of geographic coherence, according to Shelton et al., “would [not] be expected if shared oceanographic or climate drivers controlled eelgrass trends.” Those drivers would include, especially, climate change and ocean acidification.

Scaling up to the regional level and covering the entire estuary, Shelton et al. report, as illustrated in the figure below, that “over the past 40 years, eelgrass in Puget Sound has proven resilient to large-scale climatic and anthropogenic change,” confirming once again that “we do not see coincident changes in eelgrass populations that would indicate a major shared climatic driver across sites.”

This large-scale stability of eelgrass populations in the Puget Sound estuary has endured over the past four decades despite (1) a more than doubling of the human population in the area and (2) multiple major oceanographic anomalies (including several major El Niño and La Niña events). That endurance is a testament to the adaptability and resistance of this keystone marine species to human influence.

Perhaps more important, these findings undermine the “very high confidence” the U.S. National Climate Assessment assigns to predictions of future coastal ecosystem demise in response to CO2-induced global warming and ocean acidification. The reality is that estimates of such vulnerability are largely overstated. One can only hope that the forthcoming 2018 U.S. National Climate Assessment will temper such projections by incorporating the realism observed in nature from studies like that of Shelton et al.

The discussion around private school choice legislation is almost always framed as an intense battleground with teachers on one side and families on the other. Political scientists are quick to point out that teachers win the skirmish more often than not because their interests are concentrated among a few, while their enemies, the parents, bear costs that are widely dispersed. While the political theory behind the claim is strong, the argument that school choice programs are at odds with the interests of professional educators is feeble.

Discouragement & Hostile Work Environments

The traditional public school system has utterly failed teachers in the United States. Educators operate in a system that does not reward them for performance or determination. Instead, their motivation levels are shattered after they find out that time served and meaningless credentials, rather than effort, lead to career success.

Perhaps even worse, public school teachers must function within a hostile environment where children are compelled to attend and parents are forced to pay. If citizens were forced to read my blog posts, I am sure that many of them would stress and complain. It would be impossible to please the diverse set of required readers, especially if they were grouped primarily by their zip codes. Alternatively, if families could choose their educational services, they could match with educators based on interests and learning styles, creating a friendly and feasible work environment for teachers.

Compensation

As critics of the U.S. education system often contend, current levels of teacher pay do not entice large quantities of highly skilled labor to enter the field. Perhaps more importantly, the uniform pay scale does not incentivize teachers to perform above minimal levels. Alternatively, as Andrew Coulson pointed out in School, Inc., high quality teachers in places like South Korea can earn millions of dollars each year through the system of voluntary exchange.

Private school choice can benefit teachers by increasing motivation, improving work environments, and rewarding high performance. In an educational system of voluntary schooling selections, institutions would need to compete for top talent by improving job satisfaction and compensation. Instead of searching for enemies within the education sector, we should recognize that teachers have every reason to embrace school choice.

Steven Camarota of the Center for Immigration Studies (CIS) responded to our criticism of his claim that the border wall will pay for itself. Most of Camarota’s comments confuse the multiple and different simulations that I published with David Bier. He only responds to a handful of our points and then spends most of his space attacking a section called “A Better Cost Estimate Should Include These Variables.” We did not incorporate any of the suggestions from that section into our corrected version of his fiscal analysis.

The only changes we made in our headline findings, relative to Camarota, were that we adjusted for the border crosser age of arrival in 2015, adjusted for the education level for 2015 border crossers, and used an actual cost estimate for the border wall. We also copied Camarota’s methods for our additional simulations but clearly stated the changes we made and why.

Camarota’s comments appear in block quotes; my responses follow each.

“[D]espite the Cato blog post being titled ‘The Border Wall Cannot Pay for Itself’, their own cost estimates would simply mean that a border wall would have to stop 16 to 20 percent of those expected in the next decade to pay for itself (as opposed to 9 to 12 percent in my estimate).”

Camarota misread our response. The point of generating a new estimate from his assumptions was to demonstrate how flawed his report was by showing that small changes drastically change his results.  These are not our “own estimates,” but rather, they would have been his estimates if he had bothered to use more up-to-date and precise numbers.  Instead, Camarota pretends that our updates are a comprehensive fiscal cost estimate despite the fact that we have an entire section dedicated to explaining what sorts of other factors a good estimate would need to include.

“Cato argues for excluding state and local costs. Cato makes the argument that costs at the state and local level should not be counted, even though this information is available from the NAS study and I included it in my analysis. The only reason they give for not including these costs is that ‘the federal government will actually be paying for the wall.’ This is a very odd argument. The federal government often considers the costs of its policies at the state and local level, so why should building a wall be any different? These costs are real and have to be paid for by the same taxpayers who pay for the federal government.”

Camarota’s comment is perplexing. In the “Calculating the Fiscal Cost” section of our blog post, we used the average net present value flows for consolidated federal, state, and local governments in Table 8-12 of the NAS report. Camarota used that same table in his paper. We even averaged the net fiscal costs for all eight tables like Camarota did. The only exception is that we controlled for the age of the border crossers. Camarota’s passage is actually criticizing one of the three additional simulations we ran later in the blog with different assumptions. A person reading his criticism would inaccurately assume that we used a different table from the NAS than we really did.

“[T]he Cato authors argue that my analysis assumes that legal and illegal immigrants cost the same. For example, they say my analysis assumes that illegal immigrants will retire in the United States at the same rate as legal immigrants. In fact, my analysis very much takes this into account. Nowrasteh and Bier do mention the reduction in fiscal costs associated with illegal vs. legal immigrants that I included in my analysis, but they do not seem to understand the implications.”

Camarota’s statement is false. We never argue that legal and illegal immigrants impose the same fiscal costs.  Camarota does attempt to adjust downward the cost of border crossers, but he drew his estimate from a 2013 Heritage report that provides an estimate for a single year, not a lifetime. Thus, it does not take into account the emigration rate of each group.  The NAS report takes into account only the average emigration rate for all immigrants and not the emigration rate for illegal immigrants, meaning that this is Camarota’s assumption as well.

Furthermore, Camarota does not respond to our point that the Heritage report is methodologically incompatible with the NAS report.  Heritage’s report focuses on households headed by illegal immigrants while the NAS estimate measures individuals.  NAS also discounts a 75-year projection to the present value while the Heritage report does not discount a 50-year projection and, thus, reports a meaningless figure.  There is no sound justification for combining the figures from these two incompatible reports.

“Cato inflates cost of the wall. Cato argues the cost of the wall will be much higher than the $12 to $15 billion Senate Majority Leader Mitch McConnell (R-Ky.) has said Congress will spend, and the senator is certainly in a good position to know what Congress is likely to spend. A wall is not an entitlement program that grows on its own without Congress specifically allocating money. Further, Congress and the president will determine the structure, design, and length of the wall, as well as spending levels. In some sense “the wall” is whatever Congress and the president decide.”

David Bier and I decided to rely on actual DHS cost estimates that included maintenance and eminent domain. Camarota instead relied on a quote from Senator Mitch McConnell, confusing what McConnell said the Senate would spend on a border wall with what a complete border wall would actually cost. The two are not the same. Camarota then assumed that whatever Congress decides to spend would complete whatever project Congress sets out to complete. By that logic, if Congress wanted to build a complete border wall out of sunshine and puppy dogs, then it would be so, simply because Congress decreed it.

Camarota twists the words of a Senator to fit his own meaning while we take the average per-mile costs of construction and maintenance. The reader can decide which method produces a fairer cost estimate.   

“Cato recalculated the education level of illegal immigrants in order to reduce their costs, but they do not explain how they did this.”

Camarota correctly guessed how we estimated the education of illegal border crossers. This is Camarota’s strongest point, but it accounts for less than half of the difference in our estimates. Adjusting for the age of arrival accounts for most of the difference between our Camarota-inspired estimate of -$43,444 and Camarota’s actual estimate of -$74,722 (more on this below). Adjusting for age of arrival is important.

Camarota did not respond to some important points:

  • Cato’s adjustment for illegal immigrant age of entry. This minor adjustment accounts for slightly more than half of the difference between CIS and Cato and means that each border crosser produces a -$59,210 net fiscal impact. That means the border wall would have to deter about 739,092 border crossers, without incurring additional costs, in order to pay for the wall (see the sketch after this list). That is approximately 44 percent of all estimated future border crossers over the next decade – more than twice as high as Camarota’s worst-case-scenario estimate.
  • Illegal immigrant border crossers are younger than Camarota estimates, if Border Patrol apprehensions data are any guide. The surge in unaccompanied alien children (UAC) since 2010 has lowered those ages even further, making the net fiscal impact more positive. Age is an important adjustment that Camarota should take into account.
  • Border crossers are down in the first few months of the Trump administration. This might or might not continue depending on whether President Trump’s words turn into action as well as myriad economic factors. Recent research by Warren and Kerwin found that approximately 140,000 border-crossers entered annually from 2011-2013. If those lower numbers hold then the 1.7 million estimated border crossers over the next decade that Camarota relies upon may already be too high without factoring in President Trump’s other non-wall immigration enforcement actions.  In that case, the border wall will have to deter a much larger percentage of border crossers than he claims even without any changes to his model.
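
The break-even arithmetic in the first bullet is easy to reproduce. Here is a minimal sketch using only numbers that appear above; the implied wall cost is derived from them, not an official estimate.

    # Break-even deterrence arithmetic from the first bullet above.
    net_fiscal_cost_per_crosser = 59_210   # dollars saved per crosser deterred
    crossers_to_break_even = 739_092       # break-even figure cited above
    projected_crossers = 1_700_000         # estimated crossers over the next decade

    implied_wall_cost = net_fiscal_cost_per_crosser * crossers_to_break_even
    required_deterrence_share = crossers_to_break_even / projected_crossers

    print(f"Implied wall cost: ${implied_wall_cost / 1e9:.1f} billion")  # ~$43.8 billion
    print(f"Required deterrence: {required_deterrence_share:.1%}")       # ~43.5%, the
    # "approximately 44 percent" cited in the bullet above.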

Camarota’s response to our blog is disappointing. He is correct that we are arguing that illegal immigrants have a smaller negative fiscal impact relative to legal immigrants when controlling for age and education. His estimates of new border crosser education levels could also be better than ours. However, Camarota misses our biggest criticisms when he ignores actual border wall cost estimates and refuses to acknowledge that a border crosser’s age of arrival is important to determining his net fiscal impact. There is no good reason to rely on Senator McConnell’s quote for a border wall cost estimate while ignoring real cost projections and failing to adjust for the age of arrival of the border crossers.

In March, the Federal Open Market Committee (FOMC) signaled it could begin shrinking the Fed’s balance sheet sometime later this year. However, with limited official details about what that means and none forthcoming from last week’s FOMC press release, many questions remain:

  • How will the Fed decide exactly when to begin shrinking its balance sheet, and will the move be data or date dependent?
  • Once the wind-down begins, how rapidly will the balance sheet shrink and to what new normal level?
  • How will the Fed dispose of its assets: by simply refraining from reinvesting the proceeds from maturing securities, passively shrinking the balance sheet, or by actively disposing of some assets to ensure a smoother path for balance sheet reduction?
  • And would asset sales, should they occur, include both mortgage-backed securities (MBS) and Treasuries or would the Fed initially focus on a single asset class?

Back in September 2014, the FOMC released its Policy Normalization Principles and Plans (henceforth “the Framework”), its official statement outlining a three-step normalization strategy, including balance sheet reduction. First, the Fed would raise policy rates[1] to “normal levels.” Second, the Fed would begin to shrink the balance sheet in a “gradual and predictable manner” by ending the reinvestment policy. And third, the wind down would continue until the Fed holds only enough securities to conduct monetary policy “efficiently and effectively” with a portfolio consisting primarily of Treasuries. There is, of course, a caveat that the Fed can deviate from the Framework as economic conditions change. Since December 2015 the Fed has raised policy rates three times, but it has yet to update the Framework to provide further details on the next steps for balance sheet normalization.

With only the broad principles in the Framework as yet available, more detailed information must be gleaned from elsewhere. Fortunately, nearly every Federal Reserve official has discussed the balance sheet to some extent recently; but while their attention may be uniform, the policy discussion is not. Some officials have said nothing beyond the Framework, while others, particularly those regional bank presidents that do not vote on this year’s FOMC, have offered additional comments about the timing, speed, and ultimate target size associated with reducing the balance sheet. This essay examines the views of FOMC permanent voters first, then regional Fed presidents voting in 2017, followed by non-voting regional presidents.

PERMANENT FOMC VOTING MEMBERS

Federal Reserve Chair Janet Yellen

Chair Yellen has said very little beyond the Framework, and, as the leader of the Fed, keeping to the official talking points is no surprise. In a March speech, Yellen reiterated that the balance sheet would remain elevated until “sometime after” rates rise, though she declined to add specific benchmarks. When asked for additional clarity during the March FOMC press conference she said only that shrinking the balance sheet is not predicated on a pre-specified level for the federal funds rate and that overall monetary policy normalization would be “well under way” before shrinking the balance sheet commenced. George Selgin did not think much of her remarks.

New York Federal Reserve Bank President & FOMC Vice Chairman William Dudley

Dudley, a dove who is a close ally of Chair Yellen, gave a slight preview of the Framework in a May 2014 speech indicating that he wanted to see rates quite a bit higher before the cessation of reinvestments. This was a break from the 2011 Exit Strategy Principles that had called for ending reinvestments first and raising rates secondarily. In that talk, Dudley downplayed the potential adverse consequences of the larger for longer balance sheet approach, believing it prudent to tolerate those risks as the Fed moved off the zero lower bound. Dudley’s preferred order, to raise rates before touching the balance sheet, is, of course, the order now in the Framework.

More recently Dudley discussed balance sheet actions beyond the Framework. In  March he mentioned that shrinking the balance sheet and raising interest rates are, “…two different, yet related, ways of removing monetary policy accommodation.” Because ending reinvestments could act similarly to a rate hike, Dudley cautioned, “…when we begin to end reinvestment, we will have to consider the implications for the appropriate short-term interest rate trajectory.” He has also commented on the mechanics of how to shrink the balance sheet, saying he does not see “a strong need to differentiate between mortgages and Treasuries” as the reinvestment policy ends, which he believes might end this year or in early 2018. Nonetheless, the New York Fed’s trading desk has conducted very small MBS sales to test the operational readiness of such transactions.

Federal Reserve Vice Chairman Stanley Fischer

In a February 2016 speech, Fischer said that because the federal funds rate is now adjusted using two new tools, interest on excess reserves (IOER) and overnight reverse repurchases (ON RRP), the Fed can change the size of the balance sheet independently from interest rate policy. At that time, Fischer saw benefits to maintaining a larger balance sheet, remarking that when to “…begin phasing out reinvestment will depend on how economic and financial conditions and the economic outlook evolve.”

In November, Fischer reiterated the Framework position, saying that shrinking the balance sheet would commence when “…the short-term interest rate approaches more normal levels.” However, he also offered a position different from Dudley’s, explicitly stating that the Fed would begin by ending reinvestments on mortgage-backed securities while continuing to roll over Treasuries. Just last month, Fischer said he does not expect significant market disturbances, such as another taper tantrum when reinvestments end, given the muted market responses to Fed officials’ discussions about shrinking the balance sheet, thus far.

Federal Reserve Governor Jerome Powell

Powell said in a recent interview that he wants the Fed “well into the normalization process” before the balance sheet begins to shrink. With rates far from zero, “removing accommodation” by ending reinvestments would then proceed in “a very predictable almost automatic way.”

Federal Reserve Governor Lael Brainard

Though Brainard is widely considered to be the most dovish Federal Reserve official, she voted with the rest of her colleagues to raise interest rates at the March FOMC meeting. She has also signaled a willingness to increase the speed of rate increases provided the new administration makes good on its campaign pledges of expansionary fiscal policy.

Brainard offered more details about the normalization strategy than her colleagues on the Board when she identified two available strategies in a recent speech. The first is the complementarity strategy, in which balance sheet adjustments would be viewed as an independent and thus second tool for conducting monetary policy. As Brainard says, “Under this strategy, both tools would be actively used to help achieve the Committee’s goals…to take advantage of the ways in which the balance sheet might affect certain aspects of the economy or financial markets differently than the short-term rate.” The Fed might deploy the balance sheet to affect term premiums on longer-term securities and use the policy rates to affect money markets. The second option is the subordination strategy, in which the policy rates would remain the primary tool for the Fed’s conduct of monetary policy. Once normalization of short term rates was “well under way” the balance sheet could begin to shrink in a “gradual [and] predictable way.” When reinvestments end, the balance sheet would then shrink on “autopilot.”

Brainard is an advocate of the subordination strategy and supports the automatic process that Powell discussed, though she does maintain that were the economy to be hit with a large adverse shock restarting reinvestments could be prudent in order to preserve traditional policy space in the federal funds rate.

2017 VOTING REGIONAL BANK PRESIDENTS

Minneapolis Federal Reserve Bank President Neel Kashkari

Kashkari made national headlines when he posted an essay explaining his dissent at the March FOMC meeting, where all his colleagues voted for a rate increase. In dissenting, he noted that a 2% inflation target was no reason to raise rates as though 2% were a ceiling. His preferred strategy was for the Fed to publish a detailed plan for shrinking its balance sheet, allow some time to gauge the market reaction, and then continue to use short-term rates as the primary policy lever. Kashkari supports Brainard’s subordination strategy when he says, “…we can return to using the federal funds rate as our primary policy tool, with the balance sheet normalization under way in the background.”

Philadelphia Federal Reserve Bank President Patrick Harker

Harker, an engineer by training, has been more precise than his colleagues. In January he said, “When we are at or above 100 basis points — and we are moving toward that — I think it is time to start serious consideration of first stopping reinvestment and then over a period of time unwinding the balance sheet.” In March, Harker said that the right number for interest rates could be 1.5%, but that balance sheet reduction is not going to be dependent on a trigger or a target and that it will also depend on the “momentum” of the economy — a position similar to Chair Yellen’s at the March FOMC press conference. Harker does prefer the “Treasury-heavy” portfolio called for in the Framework, though he is not sure that the Fed should completely get out of the MBS market.

Chicago Federal Reserve Bank President Charles Evans

Evans, who originally gained national prominence when the Fed began to employ Forward Guidance, is one of the more dovish members, believing that only two hikes in 2017 are possible, while, by contrast, Eric Rosengren of Boston is predicting four. When it comes to shrinking the balance sheet, though, Evans is one of the few to comment, not on the timing, but on a new target size. Recently, he estimated a target size for the balance sheet of $1-1.5 trillion, requiring as much as $3 trillion of securities to roll off. That is drastically different from former Federal Reserve Chairman Ben Bernanke’s estimate for a new normal balance sheet of $2.5-$4 trillion. Despite the potential reduction, Evans has yet to say when reinvestments might actually end.

Dallas Federal Reserve Bank President Robert Steven Kaplan

Kaplan has become more vocal on balance sheet action throughout this year. In January, he said that 2017 would be a good year to discuss a “plan of action” to “slim” the balance sheet, but that nothing should actually be done until rate hikes were “further along.” Kaplan echoed those sentiments in February: “…as we make further progress in removing accommodation, I believe we should be turning our attention to a discussion of how we might begin the process of reducing the size of the Federal Reserve balance sheet.”

After the March rate hike, Kaplan went even further, saying that as rates rise the Fed should publish a plan to shrink the balance sheet. He added that he does not want balance sheet normalization to “unduly affect” financial market conditions, suggesting that securities rolling off ought to be kept to a percentage of daily trading volumes in MBS and Treasuries. Such a strategy would require more active management of the balance sheet than the autopilot strategy proposed by Brainard and Powell. For Kaplan, one of the most important considerations as the balance sheet begins to shrink is to “minimize disruption” to markets.

NON-VOTING REGIONAL BANK PRESIDENTS

As mentioned, the most varied opinions about the next move for the balance sheet come from the regional bank presidents who do not have a vote on the FOMC in 2017.

St. Louis Federal Reserve Bank President James Bullard

Bullard is known to be the low dot on the “dot plot,” as he believes the economy is stuck in a low rate regime likely to persist for years. He differs from many of his colleagues in other important ways. For example, Bullard believes that the policy rates are currently at the appropriate levels and that the Fed has, “…delayed a little bit too long in reducing the size of the balance sheet.” While he doesn’t necessarily oppose another hike this year, Bullard thinks the FOMC’s priority should be reducing the balance sheet in an effort to increase the Fed’s ability to react to the next downturn.

San Francisco Federal Reserve Bank President John Williams

Recently, Williams offered perhaps the most comprehensive assessment of the future of the Fed’s balance sheet, with a call for the reinvestment policy to end this year. Like Evans, Williams offered a target, saying that a balance sheet around $2 trillion is likely appropriate, though added that no decision had been made. But, unlike Evans, Williams also offered a timeframe, remarking that getting to a balance sheet that size would likely take 5 years. Williams also believes that with the policy rate and the balance sheet moving contemporaneously, the path of each one will be slower than if they were operating alone, similar to Dudley. He thinks the Fed will raise rates twice more this year, though leaves open the possibility for a third hike if the data support it — a position held by his colleague in Boston.

Boston Federal Reserve Bank President Eric Rosengren

Rosengren is now one of the leading hawks, having announced in a recent speech that he anticipates three more rate hikes this year, likely at every other FOMC meeting. While Dudley and Williams believe shrinking the balance sheet might slow rate hikes, all else equal, and Bullard thinks balance sheet reduction can replace a rate increase, Rosengren believes the path for rate increases is not affected much by gradually shrinking the balance sheet and that the process can begin soon. As Ben Bernanke has noted, Rosengren also differs from his colleagues in being the very rare Fed official to discuss asset sales — though he stopped short of actually advocating them in the speech. However, Rosengren also thinks it is likely that the Fed would resume asset purchases during future recessions, “…unless they are very, very mild.”

Kansas City Federal Reserve Bank President Esther George

George is the most hawkish member on the FOMC, having said that the Fed was behind the curve in raising rates in December 2015, having repeatedly voted to trim asset purchases during QE3, and having far and away the most dissenting FOMC votes — now that Jeffrey Lacker has stepped down. And yet, at a recent event George indicated that she did not think that any decision regarding the balance sheet would be made soon. She wants the Fed to spend more time analyzing its path toward normalization, stating that in the meantime the size of the balance sheet is not likely to change. This is a change from her position back in 2014, when she thought it was appropriate to begin shrinking the balance sheet via “passive runoff” before the first rate hike, following the policy articulated in the original 2011 Exit Strategy Principles.

Cleveland Federal Reserve Bank President Loretta Mester  

In three recent speeches Mester has shown an increasing comfort level with shrinking the balance sheet this year. She wants to end reinvestments in 2017 and believes this move is consistent with the Framework, putting the Fed on a path towards a balance sheet consisting primarily of Treasuries. And just yesterday, Mester supported her colleagues’ notion to announce a plan for balance sheet reduction, which will take “several years,” as well as a return to using the federal funds rate as the “main tool” for monetary policy. Mester added that the balance sheet will eventually be “considerably smaller than it is today.”

WHAT’S NEXT

How Federal Reserve officials view the balance sheet will change as new data come in. There are also potential shifts at the Fed via new personnel. With the retirement of Governor Tarullo in April, President Trump can appoint three new Fed Governors. Additionally, Raphael Bostic will assume leadership of the Atlanta Fed in early June and sit on the FOMC next year, while the Richmond Fed continues its search for Jeffrey Lacker’s successor, who will have a vote in 2018 as well.

Whoever comes to the Fed and however the views of those already there change, the important questions about the balance sheet will remain. These questions can be grouped into four buckets: Timing, Mechanics, Interest Rates and the Endgame.

On timing, the most important question is when the reinvestment policy ends. There is a growing chorus suggesting that 2017 will see the end of the reinvestment policy, as laid out in the Framework. However, many officials condition their balance sheet remarks as data dependent. It is unknown how much the data would need to soften to move a Fed official’s view away from Mester’s position and towards George’s.

The mechanics of the balance sheet wind down are extremely uncertain. Will the Federal Reserve simply allow for passive shrinking when securities mature, or will they actively manage the process and shrink the balance sheet on a smoother path, perhaps limited by trading volume ratios as Kaplan suggested? These questions require clear answers in the kind of public, detailed plan called for by Kashkari and Kaplan. Another mechanical issue to address is distinguishing between Treasuries and other securities. Is that distinction less important, as Dudley has implied, or will the Fed start by paring back its MBS holdings, as Fischer has suggested?

Related to the mechanics is how shrinking the balance sheet will affect the path of interest rates. Will the Fed adopt the subordination strategy advocated by Brainard? Or will the balance sheet runoff tighten financial market conditions such that the paths for rate hikes and shrinking the balance sheet could be slower together, as Dudley and Williams have considered? Or could ending reinvestments be a substitute for a rate hike, as Bullard prefers?

And lastly, what is the Fed’s endgame when it comes to balance sheet normalization; what is the proper size? Many Fed officials have noted an elevated demand for currency, compared to what existed before the crisis, but only a few have offered specifics as to the balance sheet’s final size. Will the balance sheet stay quite large, something Ben Bernanke advocates, or will it pare down to $2 trillion, as Williams suggests, or even beyond that to $1.5 trillion, as Evans estimates?

As most officials concede, the Federal Reserve is about to take actions with which it has virtually no experience. Providing further details on how and when they will normalize the balance sheet would go a long way to reducing uncertainty. But even then, it will remain critical to track where Fed officials stand on this issue and how those views evolve with the data.
______________

[1] The Framework discusses “…steps to raise the federal funds rate and other short-term interest rates to more normal levels…” That language, however, is ambiguous, as the federal funds market has shrunk dramatically in a financial system awash in reserves. Consequently, interest rate policy is now conducted using two new policy rates to create a federal funds rate target “range”: the interest paid on excess reserves (IOER) creates the target ceiling, while the overnight reverse repurchase (ON RRP) rate creates the target floor. Both rates are set administratively by the Fed. For further reading on the Fed’s new monetary control mechanism using IOER and ON RRP for a federal funds rate range, see “A Monetary Policy Primer, Part 9: Monetary Control, Now” by George Selgin.
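
A minimal sketch of that ceiling-and-floor mechanism, with purely illustrative rates rather than actual Fed settings:

    # IOER sets the ceiling of the target range; the ON RRP rate sets the floor.
    ioer = 1.00     # interest on excess reserves, percent (illustrative)
    on_rrp = 0.75   # overnight reverse repurchase rate, percent (illustrative)

    def within_target_range(market_rate: float) -> bool:
        """Does a quoted federal funds rate sit inside the administered range?"""
        return on_rrp <= market_rate <= ioer

    print(within_target_range(0.91))   # True: inside the corridor
    print(within_target_range(0.60))   # False: below the ON RRP floor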

[Cross-posted from Alt-M.org]

The Wall Street Journal reports, “the Pentagon has endorsed a plan to invest nearly $8 billion to bulk up the U.S. presence in the Asia-Pacific region over the next five years by upgrading military infrastructure, conducting additional exercises and deploying more forces and ships.”

The reasons behind such a military build-up in Asia are not entirely clear. Here are Senator John McCain’s statements justifying it:

“This initiative could enhance U.S. military power through targeted funding to realign our force posture in the region, improve operationally relevant infrastructure, fund additional exercises, pre-position equipment and build capacity with our allies and partners,” Mr. McCain told Adm. Harris in an April hearing.

Dustin Walker, a spokesman to Mr. McCain, described the plan in an email as a way to make the American posture in the region more “forward-leaning, flexible, resilient and formidable.”

This is essentially a garbled word salad of Pentagon jargon that emphasizes tactical justifications while omitting any strategic rationale. The reporter gets a bit closer to a clear strategic justification here: “The effort is seen by backers as one way to signal more strongly the U.S. commitment to the region as Washington confronts an increasingly tenuous situation on the Korean peninsula, its chief security concern in the area.”

To be clear, spending almost $8 billion to boost U.S. military presence in Asia will have precisely zero utility in resolving the “tenuous situation on the Korean peninsula,” and in fact would likely be detrimental to that goal. Furthermore, signaling “more strongly the U.S. commitment to the region” is unnecessary even on the terms of our current strategy. The United States already maintains more than 154,000 active-duty military personnel in the region. Washington keeps scores of major bases throughout Asia, five aircraft carrier strike groups, including 180 ships and 1,500 aircraft, two-thirds of the Marine Corps’ combat strength, five Army Stryker Brigades, and more than half of overall U.S. naval power. And finally, the United States is treaty-bound to defend most of the region’s major nations, including Japan, South Korea, the Philippines, Thailand, Australia, and New Zealand. Do we really need $8 billion worth of more troops, equipment, exercises, and infrastructure to signal our commitment? Hardly.

Rather than a buildup, Washington should be debating how and when to draw down forces in Asia. The massive U.S. military presence in the Asia-Pacific region is not necessary to protect America’s core economic and security interests. And staving off a rising China or upholding the “liberal world order” are bad reasons for maintaining preponderant military power in the region. Indeed, in some ways that presence exacerbates tensions by making China feel encircled and motivating Pyongyang to obtain deliverable nuclear weapons. China is a long way from achieving a hegemonic position in Asia, and the region generally is in a state of defense dominance, where conquest is hard, offense is risky, and deterrence is robust. American military dominance is simply not needed to keep the region peaceful, to protect trade flows, or to solve myriad local disputes.

Last week, Senator Ron Johnson (R-WI) introduced the State Sponsored Visa Pilot Program Act of 2017. Senator John McCain (R-AZ) is an official co-sponsor. If enacted, this bill would create a flexible state-sponsored visa system for economic migrants whereby states would regulate the type of visas and the federal government would handle admissions and issue the actual visas. Representative Ken Buck (R-CO) plans to introduce a companion version in the House in the near future. 

This is an innovative bill, but we have encountered one persistent question from conservatives, libertarians, and others who are sympathetic to the idea of immigration federalism: Is a state-sponsored visa constitutional?

The state-sponsored visa is perfectly consistent with the current migration system. The Johnson-Buck bill does not actually end federal control of migration; it merely creates a visa category whereby the states select the migrants through whatever processes they establish. The federal government remains in full control of visa issuance and admission at ports of entry. States would thus act as sponsors on behalf of migrants, much as they currently sponsor foreign-born students at state universities and foreign-born workers in their capacity as employers.

In 2014, Brandon Fuller and Sean Rust authored a policy analysis for Cato that explored how a state-sponsored visa program could operate in the United States. They wrote a section addressing the constitutionality of such a program:

Historically, the Supreme Court has interpreted Congress to have “plenary power” over immigration, generally giving deference to the political branches of the federal government as an extension of the Naturalization Clause under Article 1, section 8, clause 4, which gives Congress the power “To establish an uniform Rule of Naturalization.”[1] Under current interpretations, this gives Congress the sole power to establish naturalization guidelines. However, Congress can also allow states to be involved in immigration policy in areas besides naturalization, such as managing a state-based visa within federal guidelines. Some immigration policies, with the exception of naturalization, can be partly devolved to the states within a range of powers permitted by the federal government.

The recent case of Arizona v. the United States, which decided the constitutionality of Arizona’s strict immigration laws, reiterates the point that states are allowed to participate in immigration policy and enforcement, but only within the scope permitted by the federal government.[2] In debating the case of Arizona v. United States, Peter Spiro, an immigration law scholar at Temple University’s Beasley School of Law, wrote, “[I]n Arizona, the Supreme Court constricted the possibilities for unilateral state innovation on immigration, both good and bad. That does not stop the federal government from affirming state discretion.” A state-based visa program does just that—allowing states to participate in the selection of immigrants under guidelines permitted by the federal government which is consistent with current interpretations of the Supremacy Clause and the plenary power of the federal government in the matter of immigration.

It is also important to note that U.S. law defines a nonimmigrant visa holder as “an alien who seeks temporary entry to the United States for a specific purpose,” and the federal government may set conditions in accordance with this purpose. For example, in the current immigration system a foreign entrant may be required to be attached to a singular petitioning employer under a number of employer-based non-immigrant visas, such as the H-1B. Like holders of employment-based visas, state-based visa holders would be nonimmigrants with a temporary right to live and work in the United States and an option to pursue permanent residency. As such, the state-based system is simply a variation on the condition being attached to the foreign entrant.

The Johnson-Buck bill creates a federal visa that allows states to sponsor migrants and that would operate under the guidelines established by the Supreme Court cases concerning Arizona’s immigration enforcement laws. The same precedents that established that states can increase immigration enforcement beyond what the federal government intended, within the confines of a federal program, also allow states to choose whether to sponsor more legal migrants under a federally managed system.

Naturalization is a solely federal power, and the state-sponsored bill does not interfere with it. If a worker on a state-sponsored visa finds an employer or a family member to sponsor him for lawful permanent residency, then he will have full mobility, employment, and residence rights just like any green card holder.

The federal government currently runs the visa system in the United States, and the Supreme Court has interpreted the Constitution to give Congress that power. There is nothing unconstitutional about Congress asking the states to play a role in the process of selecting migrants for visas.

[1] INS v. Chadha, 462 U.S. 919 (1983).

[2] Chamber of Commerce v. Whiting, 563 U.S. 582 (2011); Arizona v. United States, 567 U.S. 387 (2012).

A decade ago an errant pass in a basketball game hit my thumb hard along the nail. After a couple days of intense pain, the thumbnail fell off and then grew back misshapen. It turned out that the injury killed a portion of the nail bed. As afflictions go it is pretty minor, but it is a tad grotesque and makes a few tasks a bit more difficult.

An orthopedic surgeon suggested I either opt for surgery—which might not have worked or been covered by insurance—or else have the entire nail permanently removed for aesthetic reasons. I opted to leave it alone and began getting a regular manicure to keep the thumbnail under control.

A couple of months ago, the owner of the salon I frequent asked if a new employee could do my manicure. The catch was that he spoke no English and had no license, but the owner assured me he had been doing manicures for years in Vietnam and was quite talented. I agreed.

The owner explained my thumbnail issue to him, and he spent several minutes on the digit. A few days later, to my surprise, the dead nail bed began growing again. The nail now looks almost normal.

The story of my healing nail raises a question: to what extent should states license manicurists and other professions that by and large have nothing to do with health and safety? Wisconsin—like many other states—requires graduation from an accredited institution that teaches the trade, as well as hundreds of hours of experience. Nor does it automatically recognize licenses issued by another state or country. In other words, there would be no clear path for this manicurist to legally practice his profession in the state.

The typical state licenses hundreds of professions. Some of those are unobjectionable—most people want doctors and anesthetists to undergo a licensing regime before assuming their professions, for instance. But other licenses are problematic. For instance, many states require interior designers and florists to be licensed. Do we really need to be protected from a rogue designer who might do damage to the color scheme of our homes? The same question can also be asked of manicurists, barbers, aestheticians, and other professions that have little to do with health or safety.

The harm in excessive licensing is twofold. First, people with an aptitude for a profession but without the means to take the classes to obtain the license are effectively shut out of a way to earn a decent living. A license for an interior designer, for instance, requires six years of training, including at least two years of school.

Second, the higher wages from excessive licensing translate into higher costs for these services as well. A manicure in Oshkosh—a former home of mine—costs more than in Washington, DC, where I currently reside. While not everyone might need or want such services, the fact that prices are higher in my low-cost former home than in my high-cost current one suggests that someone is getting a bad deal.

A study I wrote with my colleague Logan Albright, published last month by the Wisconsin Policy Research Center, examines the inexorable expansion of licensing in the state—driven both by the growth of the service sector and by the increase in the number of occupations in the state requiring a license. We suggest that at a time when states have been ratcheting up their efforts to attract jobs and boost economic growth, it is time for Wisconsin to examine its current licensing regime and think cogently about which occupations merit licensing and which can do without. Many other states have begun to do precisely this—and are concluding that their licensing regimes have gone too far.

Such an exercise should be a bipartisan affair. Unnecessary licensing hurts the entire state, but those who come from low-income households or lack the means to obtain the training to get such jobs suffer the most.

Governor Walker and the state legislature have both announced they will look at this issue. There’s a lot to look at, we submit.

Ike Brannon is president of the consulting firm Capital Policy Analytics.


At a Cato Institute Capitol Hill Briefing today, Senate Homeland Security Committee Chairman Ron Johnson (R-WI) and Congressman Ken Buck (R-CO) announced their intention to introduce new immigration legislation that would allow states to sponsor workers, entrepreneurs, and investors. Sen. Johnson introduced his version this afternoon. In 2014, Cato wrote a policy analysis about this idea. My colleague Alex Nowrasteh and I have published blog posts and op-eds about it, and Cato’s Handbook for Policymakers urged Congress to implement such a policy.

State-sponsored visas would build much-needed flexibility and adaptability into the federal immigration system. We are pleased that members of Congress are finally taking up this innovative and important idea.

The federal government’s monopoly over legal immigration fails to address the diversity of economic needs among the states. A more decentralized visa program could head off local problems before they build into a national crisis, giving the system the kind of flexibility that exists in every other area of the market. Giving states greater control would also increase political support for immigration programs and allow Congress to reform the system without needing to agree on every issue.

The federal government determines the number of foreign workers, the type of work that they can perform, and the terms under which they must live. The question today is whether any of these functions could be better handled at the state level.

As a legal matter, this is a question that Congress may answer. Most recently, in the Arizona v. U.S. decision, the Supreme Court held that the states are limited in this area only to the extent that Congress chooses to limit them.

From an economic perspective, the static federal monopoly makes little sense. In a market economy, you want systems that adjust quickly to changes at the local level. The federal system doesn’t change until local problems build into a national crisis, while a decentralized system could head off issues before a crisis develops. Despite widespread agreement that there has been a crisis for more than a decade, no changes have occurred.

The federal-only system also makes little sense politically. Giving states greater control would increase political support for immigration programs. The fights in Congress that have killed reform efforts in the past could be effectively transferred to state Capitols. Congress could fix the system without finding total agreement.

From an enforcement perspective, guest worker programs have historically reduced illegal immigration by creating an incentive for people to come to the United States legally. And limiting workers to a single state is actually less of a challenge than limiting them to a single employer, as the current federal guest worker programs do. More importantly, according to the Government Accountability Office, about 90 percent of overstays are tourists, not guest workers, because the workers want to be invited back to work legally. This incentive has kept guest workers’ overstay rate well below 3 percent.

As is detailed in the Cato policy analysis, this idea has been implemented successfully in two other geographically diverse, former British colonies—Canada and Australia. Both countries use regional visa programs to distribute immigration more fairly and allow rural areas to obtain labor for difficult jobs.

The popularity of these programs can be seen in their rapid growth over the last two decades. They are now the second-largest source of economic immigration to these countries.

The United States has a long history of federalism and federal-state partnerships, yet it has so far not applied this tradition to immigration. Some states have already passed bills advocating state-based visas, and all states already directly sponsor visa applicants, whether as students through their public universities or as workers in their capacity as employers. These protocols could be expanded to allow states to sponsor workers on behalf of their industries.

Hopefully, the fact that two conservative members of Congress are pushing this proposal will change the game politically.
