Cato Op-Eds

Individual Liberty, Free Markets, and Peace

As of this writing, Tuesday, September 11, Hurricane Florence is threatening millions of folks from South Carolina to Delaware. It’s currently forecast to be near the threshold of the dreaded Category 5 by tomorrow afternoon. Current thinking is that its environment will become a bit less conducive as it nears the North Carolina coast on Thursday afternoon, but that it will still hit as a major hurricane (Category 3+). It’s also forecast to slow down or stall shortly thereafter, which means it will dump disastrous amounts of water on southeastern North Carolina. Totals of over two feet may be common in some areas.

At the same time it makes landfall, the celebrity-studded “Global Climate Action Summit” will be underway in San Francisco, and no doubt Florence will be its poster girl.

There’s likely to be the usual hype about tropical cyclones (the generic term for hurricanes) getting worse because of global warming, even though their integrated energy and frequency, as published by Cato Adjunct Scholar Ryan Maue, show no warming-related trend whatsoever.

Maue’s Accumulated Cyclone Energy index shows no increase in global power or strength.

Here is the prevailing consensus opinion of the National Oceanic and Atmospheric Administration’s Geophysical Fluid Dynamics Laboratory (NOAA GFDL): “In the Atlantic, it is premature to conclude that human activities–and particularly greenhouse gas emissions that cause global warming–have already had a detectable impact on hurricane activity.”

We’ll also hear that associated rainfall is increasing along with oceanic heat content. Everything else being equal (dangerous words in science), that’s true. And if Florence does stall out, hey, we’ve got a climate change explanation for that, too! The jet stream is “weirding” because of atmospheric blocking induced by Arctic sea-ice depletion. This is a triple bank shot on the climate science billiards table. If that seems a stretch, it is, but climate models can be and are “parameterized” to give what the French climatologist Pierre Hourdin recently called “an anticipated acceptable range” of results.

The fact is that hurricanes are temperamental beasts. On September 11, 1984, Hurricane Diana, also a Category 4, took aim at pretty much the same spot where Florence is forecast to make landfall—Wilmington, North Carolina. And then—34 years ago—it stalled and turned a tight loop for a day, upwelling the cold water that lies beneath the surface, and it rapidly withered into a Category 1 before finally moving inland. (Some recent model runs for Florence have it looping over the exact same place.) The point is that what is forecast to happen on Thursday night—a major Category 3+ landfall—darned near happened over three decades earlier… and exactly 30 years before that, in 1954, Hurricane Hazel made a destructive Category 4 landfall just south of the NC/SC border. The shape of the Carolina coastlines and barrier islands makes the two states very susceptible to destructive hits. Fortunately, this proclivity toward taking direct hits from hurricanes has also taught the locals to adapt—many homes are on stilts, and there is a resilience built into their infrastructure that is lacking further north.

There’s long been a running research thread on how hurricanes may change in a warmer world. One thing that seems plausible is that the maximum potential power may shift a bit further north. What would that look like? Dozens of computers have cranked through thousands of years of simulations, and we have a mixture of results, but the consensus is that there will be slightly fewer but more intense hurricanes by the end of the 21st century.

We actually have an example of how far north a Category 4 can make landfall: on August 27, 1667, one struck the Tidewater region of southeastern Virginia. It prompted the publication of a pamphlet in London called “Strange News from Virginia, being a true relation of the great tempest in Virginia.” The late, great weather historian David Ludlum published an excerpt:

Having this opportunity, I cannot but acquaint you with the Relation of a very strange Tempest which hath been in these parts (with us called a Hurricane) which began on Aug. 27 and continued with such Violence that it overturned many houses, burying in the Ruines much Goods and many people, beating to the ground such as were in any ways employed in the fields, blowing many Cattle that were near the Sea or Rivers, into them, (!!-eds), whereby unknown numbers have perished, to the great affliction of all people, few escaped who have not suffered in their persons or estates, much Corn was blown away, and great quantities of Tobacco have been lost, to the great damage of many, and the utter undoing of others. Neither did it end here, but the Trees were torn up by their roots, and in many places the whole Woods blown down, so that they cannot go from plantation to plantation. The Sea (by the violence of the winds) swelled twelve Foot above its usual height, drowning the whole country before it, with many of the inhabitants, their Cattle and Goods, the rest being forced to save themselves in the Mountains nearest adjoining, where they were forced to remain many days in great want.

Ludlum also quotes from a letter from Thomas Ludwell to Virginia Governor Lord Berkeley about the great tempest:

This poore Country…is now reduced to a very miserable condition by a continual course of misfortune…on the 27th of August followed the most dreadful Harry Cane that ever the colony groaned under. It lasted 24 hours, began at North East and went around to Northerly till it came to South East when it ceased. It was accompanied by a most violent raine, but no thunder. The night of it was the most dismal time I ever knew or heard of, for the wind and rain raised so confused a noise, mixed with the continual cracks of falling houses…the waves were impetuously beaten against the shores and by that violence forced and as it were crowded the creeks, rivers and bays to that prodigious height that it hazarded the drownding of many people who lived not in sight of the rivers, yet were then forced to climb to the top of their houses to keep themselves above water…But then the morning came and the sun risen it would have comforted us after such a night, hat it not lighted to us the ruins of our plantations, of which I think not one escaped. The nearest computation is at least 10,000 house blown down.

It is too bad that there were no anemometers at the time, but the damage and storm surge are certainly consistent with a Category 4 storm. And this was in 1667, at the nadir of the Little Ice Age.

A Maryland story in the Washington Post last week presents a classic case of local political corruption. The broader message of the story is that when we give government the power to regulate an activity—in this case liquor sales—we open the door to corruption.

Even if you believe that regulatory regimes are created with good intentions, the politicians and officials in charge inevitably get swarmed by lobbyists and some of them will focus on lining their own pockets. With respect to the public interest, the resulting policy outcomes are a crapshoot.

Former Maryland state delegate Michael L. Vaughn (D) was sentenced to 48 months in federal prison Tuesday after he was convicted of accepting cash in exchange for votes that would expand liquor sales in Prince George’s County.

A jury found Vaughn guilty of conspiracy and bribery in March. During his six-day trial in U.S. District Court in Maryland, Vaughn and his attorneys argued that the bundles of cash he received from liquor store owners and a lobbyist in 2015 and 2016 were campaign contributions that he failed to report because he had personal financial problems.

But prosecutors for the government argued that the more than $15,000 that changed hands in a coffee shop bathroom, a dark restaurant and other locations throughout the county were bribes.

… Sentencing Judge Paula Xinis called Vaughn’s misconduct ‘exceptionally serious’ and ‘grievous bribery.’

Vaughn was one of seven arrested last year in a federal corruption case that investigators called “Operation Dry Saloon.” Liquor store owners, lobbyists, former liquor board commissioners and former Prince George’s County Council member William A. Campos (D) conspired to pass laws that would allow for Sunday liquor sales in the county in exchange for cash.

… Prosecutors, however, argued that Vaughn and former chief liquor inspector David Son hashed out a scheme in which local liquor store owners Young Paig and Shin Ja Lee would pay Vaughn $20,000 over two years to clear the way for Sunday sales.

… ‘He fully embraced the pay-to-play culture that has been a repeat phrase in this court for a decade,’ Windom said, alluding to the 87-month sentence former Prince George’s County executive Jack Johnson received for bribery and corruption.

Local governments have large and excessive power over private land development, and that power has long been a source of corruption. Here’s what the Washington Post said about Jack Johnson’s crimes in a 2011 story:

Jack Johnson, a Democrat who was county executive from 2002 until December 2010, came to the attention of federal authorities in 2006, when the FBI began investigating allegations of corruption, campaign finance violations and tax fraud. Authorities found massive corruption centered around a “pay-to-play culture” that began months after Johnson took office.

‘Under Jack Johnson’s leadership, government in Prince George’s County literally was for sale,’ the [sentencing] memo said.

The pay-to-play scheme involved several developers, including Laurel physician and developer Mirza H. Baig … In his plea agreement, Jack Johnson acknowledged accepting up to $400,000 from the scheme.

Johnson, 62, was charged last November with evidence tampering and destruction of evidence after federal agents arrested him and his wife, 59, at their Mitchellville home. They were overheard on a wiretap scheming to stash $79,600 in cash in Leslie Johnson’s underwear and flush a $100,000 check that Jack Johnson received as a bribe from a developer.

… On the day of their arrests, Johnson was at Baig’s office picking up a cash bribe and talking about how he would continue the corruption ‘through his wife’s new position on the county council,’ the memorandum said.

‘He proudly bragged about how he was going to orchestrate approval of various funding and approvals by the County Council for Baig’s projects,’ according to the memo.

Federal officials valued the benefits that Baig received in exchange for illegal payments to Johnson at more than $10 million on two development projects.

With public healthcare programs accounting for over a trillion dollars of federal spending, efforts to identify and remedy sources of waste are increasing. A new working paper finds: 

There is substantial waste in U.S. healthcare, but little consensus on how to identify or combat it. We identify one specific source of waste: long-term care hospitals (LTCHs). These post-acute care facilities began as a regulatory carve-out for a few dozen specialty hospitals, but have expanded into an industry with over 400 hospitals and $5.4 billion in annual Medicare spending in 2014. We use the entry of LTCHs into local hospital markets and an event study design to estimate LTCHs’ impact. We find that most LTCH patients would have counterfactually received care at Skilled Nursing Facilities (SNFs) – post-acute care facilities that provide medically similar care to LTCHs but are paid significantly less – and that substitution to LTCHs leaves patients unaffected or worse off on all measurable dimensions. Our results imply that Medicare could save about $4.6 billion per year – with no harm to patients – by not allowing for discharge to LTCHs.

The cost of healthcare in the United States remains a significant problem, but eliminating regulatory carve-outs such as LTCHs is one way to address this growing issue.

Research assistant Erin Partin contributed to this blog post.

 

Dedicated readers may recall my having reported here several years ago the suit filed by Colorado’s Four Corners Credit Union against the Kansas City Fed — after the Fed refused it a Master Account on the grounds that it planned to cater to Colorado’s marijuana-related businesses. At the time the episode was almost unique, for the Fed had scarcely ever refused a Master Account to any properly licensed depository institution. Eventually the Fed and Four Corners reached a compromise, of sorts, with the Fed agreeing to grant the credit union an account so long as it promised not to do business with the very firms it was originally intended to serve!

Well, as The Wall Street Journal’s Michael Derby reported last week, the Fed once again finds itself being sued for failing to grant a Master Account to a duly chartered depository institution. Only the circumstances couldn’t be more different. The plaintiff this time, TNB USA Inc, is a Connecticut-chartered bank; and its intended clients, far from being small businesses that cater to herbalistas, include some of Wall Street’s most venerable establishments. Also, although TNB is suing the New York Fed for not granting it a Master Account, opposition to its request comes mainly, not from the New York Fed itself, but from the Federal Reserve System’s head honchos in Washington. Finally, those honchos are opposed to TNB’s plan, not because they worry that TNB’s clients might be breaking Federal laws, but because of unspecified “policy concerns.”

Just what are those concerns? The rest of this post explains. But I’ll drop a hint or two by observing that the whole affair (1) has nothing to do with either promoting or opposing safe banking and (2) has everything to do with (you guessed it) the Fed’s post-2008 “floor” system of monetary control and the interest it pays on bank reserves to support that system.

What’s In a Name?

To understand the Fed’s concerns, one has first to consider TNB’s business plan. Doing that in turn means demolishing a myth that has already taken root concerning that enterprise — one based entirely on its name.

You see, “TNB” stands for “The Narrow Bank.” And some commentators, including John Cochrane, initially took this to mean that TNB was supposed to be a narrow bank in the conventional sense of the term, meaning one that would cater to ordinary but risk-averse depositors — like your grandma — by investing their money entirely in perfectly safe assets, such as cash reserves or Treasury securities. For example, the Niskanen Center’s Daniel Takash says that, if TNB wins its suit,

it would offer many businesses (and potentially consumers) the option [to] save their money in a safer financial institution and increase interest-rate competition in the banking industry.

Fans of narrow banking see it as a superior alternative to the present practice of insuring bank deposits while allowing banks to use such deposits to fund risky investments.

The assumption that TNB has no other aim than that of being a safer alternative to already established banks naturally makes the Fed’s opposition to it seem irrational: “Fed Rejects Bank for Being Too Safe,” is the attention-getting (but equally question-begging) headline assigned to Matt Levine’s Bloomberg article about the lawsuit. It seems irrational, that is, unless one assumes that Fed officials place other interests above that of financial-system safety. “That the Fed, which is a banker’s bank, protects the profits of the big banks’ system against competition, would be the natural public-choice speculation,” Cochrane observes. Alternatively, he wonders whether his vision of a narrow banking system might not be

as attractive to the Fed as it should be. If deposits are handled by narrow banks, which don’t need asset risk regulation, and risky investment is handled by equity-financed banks, which don’t need asset risk regulation, a lot of regulators and “macro-prudential” policy makers, who want to use regulatory tools to control the economy, are going to be out of work.

Get Lost, Grandma!

No one who knows me will imagine that I’d go out of my way to defend the Fed against the charge that it doesn’t always have the general public’s best interests in mind. Yet I’m compelled to say that explanations like Cochrane’s for the Fed’s treatment of TNB, let alone ones that suppose that the Fed has it in for safety-minded bankers, miss their mark. Such explanations badly misconstrue TNB’s business plan, especially by failing to grasp the significance of the declaration, included in its complaint against the New York Fed, that its “sole business will be to accept deposits only from the most financially secure institutions” (my emphasis).

You see, despite what Cochrane and Levine and some others have suggested, TNB was never meant to be a bank for me, thee, or the fellow behind the tree. Nor would it cater to any of our grandmothers. And why would it bother to? After all, unless grandma keeps over $250,000 in her checking account, her ordinary bank deposit is already safer than a mouse in a malt-heap. There’s no need, therefore, for any Fed conspiracy to keep a safe bank aimed at ordinary depositors from getting off the ground.

Instead TNB is exclusively meant to serve non-bank financial institutions, and money market mutual funds (MMMFs) especially. Its purpose is to allow such institutions, which are not able to directly take advantage of the Fed’s policy of paying interest on excess reserves (IOER), to do so indirectly. In other words, TNB is meant to serve as a “back door” by which non-banks may gain access to the Fed’s IOER payments, with their TNB deposits serving as surrogate Fed balances, thereby allowing non-banks to realize higher returns, with less risk, than they might realize by investing directly in Treasury securities. J.P. Koning gets this (and much else) right in his own post about TNB, published while yours truly was readying this one for press:

TNB is designed as a pure warehousing bank. It does not make loans to businesses or write mortgages. All it is designed to do is accept funds from depositors and pass these funds directly through to the Fed by redepositing them in its Fed master account. The Fed pays interest on these funds, which flow through TNB back to the original depositors, less a fee for TNB. Interestingly, TNB hasn’t bothered to get insurance from the Federal Deposit Insurance Corporation (FDIC). The premiums it would have to pay would add extra costs to its lean business model. Any depositor who understands TNB’s model wouldn’t care much anyways if the deposits are uninsured, since a deposit at the Fed is perfectly safe.

Once one realizes what TNB is about, explaining the Fed’s reluctance to grant it a Master Account becomes as easy as winking. The explanation, in a phrase, is that, were it to gain a charter, TNB could cause the Fed’s present operating system, or a substantial part of it, to unravel. Having gone to great lengths to get that system up and running, the Fed doesn’t want to see that happen. Since the present operating system is chiefly the brainchild of the Federal Reserve Board, it’s no puzzle that the Board is leading the effort to deny TNB its license.

How would TNB’s presence matter? The Fed has been paying interest on banks’ reserve balances, including their excess reserves, since October 2008. Ever since then, IOER rates have exceeded yields on many shorter-term Treasury securities — while being free from the interest-rate risk associated with holdings of longer-term securities. But banks alone (that is, “depository institutions”) are eligible for IOER. Other financial firms, including MMMFs, have had to settle for whatever they could earn on their own security holdings or for the fixed offering rate on the Fed’s Overnight Reverse Repurchase (ON-RRP) facility, which is presently 20 basis points lower than the IOER rate.

Naturally, any self-respecting MMMF would relish the opportunity to tap into the Fed’s IOER program. But how can any of them do so? Not being depository institutions, they can’t earn it directly. Nor will placing funds in an established bank work, since such a bank will only “pass through” a modest share of its IOER earnings, keeping some — and probably well over 20 basis points — to cover its expenses and profits. But a bank specifically designed to cater to the MMMFs’ needs — now that’s a horse of a different color.
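To see why such a back door appeals, here is a stylized comparison of what a money fund might earn per $100 million placed overnight through the ON-RRP facility versus through a TNB-style pass-through. The rates and the TNB fee below are illustrative assumptions, not actual figures.

```python
# Back-of-the-envelope comparison of annual income on a $100 million overnight
# placement under three routes. All rates and the TNB fee are hypothetical.

balance = 100_000_000          # $100 million placed overnight, rolled over for a year
ioer = 0.0195                  # assumed IOER rate (1.95%)
on_rrp = ioer - 0.0020         # ON-RRP offering rate, 20 basis points below IOER
tnb_fee = 0.0005               # assumed TNB pass-through fee (5 basis points)

def annual_income(rate: float) -> float:
    """Annual interest income on the balance at the given rate."""
    return balance * rate

print(f"ON-RRP route:         ${annual_income(on_rrp):,.0f}")
print(f"Via TNB (IOER - fee): ${annual_income(ioer - tnb_fee):,.0f}")
print(f"Full IOER:            ${annual_income(ioer):,.0f}")
```

So long as TNB's fee stays below the 20 basis point gap, the fund comes out ahead of the ON-RRP route, which is exactly what makes the arrangement attractive to money funds and worrisome to the Fed.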

What would happen, then, if TNB, and perhaps some other firms like it, had their way? That would be the end, first of all, of the Fed’s ON-RRP facility and, therefore, of the lower limit of the Fed’s interest rate target range that that facility is designed to maintain.

Second, the Fed would face a massive increase in the real demand for excess reserve balances that would complicate both its monetary control efforts and its plan to shrink its balance sheet.

TANSTAAFL

OK, so the Fed may not like what TNB is up to. But why should the rest of us mind it? So what if the Fed’s leaky “floor-type” operating system lacks a “subfloor” to limit the extent to which the effective fed funds rate can wander below the IOER rate? Why not have the Fed pay IOER to the money funds, and to the GSEs while it’s at it, and have a leak-free floor instead? Besides, many of us have money in money funds, so that we stand to earn a little more from those funds once they can help themselves to the Fed’s interest payments. What’s not to like about that?

Plenty, actually. Consider, first of all, what the change means. The Fed would find itself playing surrogate to a large chunk of the money market fund industry: instead of investing their clients’ funds in some portfolio of Treasury securities, money market funds would leave the investing to the Fed, for a return — the IOER rate — which, instead of depending directly upon the yield on the Fed’s own asset portfolio, is chosen by Fed bureaucrats.

Now ask yourself: Just how is it that the Fed’s IOER payments could allow MMMFs to earn more than they might by investing money directly into securities themselves? Because the Fed has less overhead? Don’t make me laugh. Because Fed bureaucrats are more astute investors? I told you not to make me laugh! No, sir: it’s because the Fed can fob off risk — like the duration risk it assumed by investing in so many longer-term securities — on third parties, meaning taxpayers, who bear it in the form of reduced Fed remittances to the Treasury. That means in turn that any gain the MMMFs would realize by having a bank that’s basically nothing but a shell operation designed to let them bank with the Fed would really amount to an implicit taxpayer subsidy. There Ain’t No Such Thing As A Free Lunch.

As it stands, of course, ordinary banks are already taking advantage of that same subsidy. But two wrongs don’t make a right. Or so my grandmother told me.

[Cross-posted from Alt-M.org]

 The Reason Foundation’s Bob Poole has published a new book, Rethinking America’s Highways: A 21st Century Vision for Better Infrastructure.

The book examines the structure of U.S. highway ownership and financing and describes why major reforms are needed. Bob has a deep understanding of both the economics and engineering of highways.

Bob puts U.S. highways in international context. He describes, for example, how Europe has more experience with private highways than we do. The photo below is the Millau Viaduct in southern France. Wikipedia says it is “ranked as one of the great engineering achievements of all time.” The structure includes the tallest bridge tower in the world, and it was built entirely with private money. Isn’t that beautiful? I mean both the bridge and the fact that it is private enterprise.

Bob’s book concerns the institutional structure of highways, which sets it apart from the often superficial highway discussions in D.C. Those discussions usually revolve around the total amount of money the government spends. But the more important issue is ensuring that we spend on projects where the returns outweigh the costs.

D.C. policymakers often focus on the jobs created by highway construction. But labor is a cost of projects, not a benefit. Instead, policymakers should focus on generating long-term net value.

Finally, spending advocates often decry potholes and deficient bridges, but the optimal amount of wear-and-tear on infrastructure is not zero, else we would spend an infinite amount.

So the challenge is to spend the right amount, and to focus it on the most needed repairs and expansions. To do that, we need to get the institutional structure right, and that is what Bob’s book is about.

Every policy wonk and politician interested in infrastructure should read Bob’s book.

 

The Independent said this of the bridge: “The viaduct, costing €400m (£278m), has been built in record time (just over three years) for a project of this size. The French construction company, Eiffage, the direct descendant of the company started by Gustav Eiffel, the builder of the celebrated tower beside the Seine, has raised the money entirely from private financing. In return, the company has been given a 75-year concession to run the viaduct as a toll-bridge.”

Shortly after Iowa prosecutors charged illegal immigrant Christian Rivera with the murder of Mollie Tibbetts in August, his Iowa employer erroneously stated that E-Verify had approved him for legal work. That statement turned out to be false: his employer, Yarrabee Farms, had actually run his name and Social Security Number (SSN) through a different system, the Social Security Number Verification Service (SSNVS), which merely verified that the name and number matched. That mix-up has inspired many to argue that an E-Verify mandate for all new hires would have stopped Rivera from working and, thus, prevented the murder of Mollie Tibbetts. That’s almost certainly not true. New details reveal that E-Verify would likely not have prevented Rivera from working.

E-Verify is an electronic employment eligibility verification system run by the federal government at taxpayer expense. Created as a pilot program in 1996, E-Verify is intended to prevent the hiring of illegal immigrants by verifying the identity information they submit for employment against federal government databases in the Social Security Administration and Department of Homeland Security. The theory behind E-Verify is that illegal immigrants won’t have the identity documents to pass E-Verify (hold your laughter) so they won’t be able to work, thus sending them all home and preventing more from coming. That naïve theory fails when confronted with the reality of the Rivera case.

Rivera submitted the name John Budd on an out-of-state driver’s license, along with an SSN that matched that name, to his employer, Yarrabee Farms, when he was hired in 2014. Yarrabee Farms ran the SSN and the name John Budd through the Social Security Number Verification Service (SSNVS) to guarantee that they matched for tax purposes (Yarrabee Farms confused SSNVS with E-Verify). SSNVS matched the name with the SSN and approved Rivera-disguised-as-Budd to work.

E-Verify would also have matched the name with the SSN and approved Rivera for work. The systemic design flaw in E-Verify is that it only verifies the documents that a worker hands his employer, not the worker himself. Thus, if an illegal immigrant hands the identity documents of an American citizen to an E-Verify-using employer, then E-Verify approves the documents and the worker holding them gets the job – just as happened here with Rivera handing Yarrabee Farms the identity of John Budd. That’s why 54 percent of illegal immigrants run through E-Verify are approved for legal work. E-Verify is worse than a coin toss at identifying known illegal immigrants.

Rivera’s identity would even have gotten around the DRIVE program in Iowa because he handed his employer an out-of-state driver’s license. DRIVE is intended to link identity information from Iowa’s DMV to job applicants as an extra layer of security. If any of that information doesn’t match the information that the applicant gives to his employer, then the employer is supposed to realize the applicant is an illegal worker. However, the flaw in DRIVE is that it only works with the state’s own DMV records and adds no extra security for out-of-state driver’s licenses. Thus, Rivera’s out-of-state identity would not have been caught by DRIVE.

Rivera is a low-skilled and poor illegal immigrant from Mexico whose English language skills are so bad that he needs an interpreter in court.  Yet he would easily have been able to fool E-Verify, a sophisticated government immigration enforcement program praised by members of Congress, the President, and the head of at least one DC think-tank, by using somebody else’s name and SSN with a driver’s license from another state. 

A law passed in 1986 has required workers in the United States to present government identification to work legally – a requirement that has resulted in an explosion in identity theft. Rivera likely stole Budd’s identity to get a job, an unintended consequence of that 1986 law. A national E-Verify mandate will vastly expand identity theft.

As a further wrinkle, if Yarrabee Farms found any of Rivera’s identity documents or information suspicious and confronted Rivera with their suspicions concerning Rivera’s identity, his name, race, or age, then Yarrabee Farms would likely have run afoul of other labor laws and exposed itself to a serious lawsuit.  The federal government expects employers to enforce immigration laws but not to the point that they can profile applicants.  The safe choice is not to profile anyone and hire those who present documents so long as they are not obviously fake.

The last wrinkle is that many businesses don’t comply with E-Verify in states where it is mandated.  In the second quarter of 2017, only 59 percent of new hires in Arizona were run through E-Verify even though the law mandates that 100 percent be run through.  Arizona has the harshest state-level immigration enforcement laws in the country and they can’t even guarantee compliance with E-Verify.  There is even evidence that Arizona’s E-Verify mandate temporarily increased property crime committed by a subpopulation that is more likely to be illegally present in the United States, prior to that population learning that E-Verify is easy to fool.  South Carolina, the state with the best-reputed enforcement of E-Verify, only had 55 percent compliance in the same quarter of 2017.  The notion that a lackluster Washington will do better than Arizona or South Carolina is too unserious a charge to rebut. 

Since SSNVS matched the name John Budd with a valid SSN and Rivera used an out-of-state drivers license, E-Verify would not have caught him.  E-Verify is a lemon of a system that is not a silver bullet to stop illegal immigration.  It wouldn’t have stopped Rivera from working legally in Iowa.  E-Verify’s cheerleaders should stop using the tragic murder of Mollie Tibbetts as a sales pitch for their failed government program.

 

Cato released my study today on “Tax Reform and Interstate Migration.”

The 2017 federal tax law increased the tax pain of living in a high-tax state for millions of people. Will the law induce those folks to flee to lower-tax states?

To find clues, the study looks at recent IRS data and reviews academic studies on interstate migration.

For each state, the study calculated the ratio of domestic in-migration to out-migration for 2016. States losing households to other states have ratios of less than 1.0; states gaining households have ratios of more than 1.0. New York’s ratio is 0.65, meaning that for every 100 households that left, only 65 moved in. Florida’s ratio is 1.45, meaning that 145 households moved in for every 100 that left.
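As a concrete illustration of that ratio calculation, here is a minimal sketch using made-up household counts; the study's actual figures come from IRS interstate migration data.

```python
# Illustrative in/out-migration ratios. The counts below are invented to match
# the ratios quoted in the text; the study itself uses IRS migration data.

moves = {
    # state: (households moving in, households moving out)
    "New York": (65_000, 100_000),
    "Florida": (145_000, 100_000),
}

for state, (inflow, outflow) in moves.items():
    ratio = inflow / outflow
    direction = "net in-migration" if ratio > 1.0 else "net out-migration"
    print(f"{state}: ratio = {ratio:.2f} ({direction})")
```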

Figure 1 maps the ratios. People are generally moving out of the Northeast and Midwest to the South and West, but they are also leaving California, on net.

People move between states for many reasons, including climate, housing costs, and job opportunities. But when you look at the detailed patterns of movement, it is clear that taxes also play a role.

I divided the country into the 25 highest-tax and 25 lowest-tax states by a measure of household taxes. In 2016, almost 600,000 people moved, on net, from the former to the latter.

People are moving into low-tax New Hampshire and out of Massachusetts. Into low-tax South Dakota and out of its neighbors. Into low-tax Tennessee and out of Kentucky. And into low-tax Florida from New York, Connecticut, New Jersey, and just about every other high-tax state.

On the West Coast, California is a high-tax state, while Oregon and Washington fall just inside the lower-tax group.

Of the 25 highest-tax states, 24 of them had net out-migration in 2016.

Of the 25 lowest-tax states, 17 had net in-migration.  

 

https://object.cato.org/sites/cato.org/files/pubs/pdf/tbb-84-revised.pdf

A new report from the American Public Transportation Association (APTA) comes out firmly in support of the belief that correlation proves causation. The report observes that traffic fatality rates are lower in urban areas with high rates of transit ridership, and claims that this proves “that modest increases in public transit mode share can provide disproportionally larger traffic safety benefits.”


Here is one of the charts that APTA claims proves that modest increases in transit ridership will reduce traffic fatalities; it is taken from APTA’s document. Note that, in urban areas with fewer than 25 annual transit trips per capita – which is the vast majority of them – the relationship between transit and traffic fatalities is virtually nil.

In fact, APTA’s data show no such thing. New York has the nation’s highest per capita transit ridership and a low traffic fatality rate. But there are urban areas with very low ridership rates that had even lower fatality rates in 2012, while there are other urban areas with fairly high ridership rates that also had high fatality rates. APTA claims the correlation between transit and traffic fatalities is a high 0.71 (where 1.0 is a perfect correlation), but that’s only when you include New York and a few other large urban areas: among urban areas of 2 million people or less, APTA admits the correlation is a low 0.28.
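To see how much a single extreme observation can drive a Pearson correlation, here is a sketch with synthetic numbers (not APTA's data): fifty typical urban areas with no underlying relationship between ridership and fatalities, plus one New York-like outlier.

```python
# Synthetic illustration: one high-ridership, low-fatality outlier can create
# a strong correlation where none exists among the other urban areas.
import numpy as np

rng = np.random.default_rng(0)

# 50 "typical" urban areas: modest ridership, fatality rates unrelated to it
ridership = rng.uniform(5, 60, 50)     # annual transit trips per capita
fatalities = rng.uniform(4, 12, 50)    # traffic deaths per 100,000 residents

print("Without the outlier:", round(np.corrcoef(ridership, fatalities)[0, 1], 2))

# Add one New York-like point: very high ridership, very low fatality rate
ridership_all = np.append(ridership, 230.0)
fatalities_all = np.append(fatalities, 3.0)

print("With the outlier:   ", round(np.corrcoef(ridership_all, fatalities_all)[0, 1], 2))
```

The magnitude of the correlation jumps once the outlier is included, even though nothing has changed for the other fifty areas.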

The United States has two kinds of urban areas: New York and everything else. Including New York in any analysis of urban areas will always bias any statistical correlations in ways that have no application to other urban areas.

In most urban areas outside of New York, transit ridership is so low that it has no real impact on urban travel. Among major urban areas other than New York, APTA’s data show 2012 ridership ranging from 55 trips per person per year in Los Angeles to 105 in Washington DC to 133 in San Francisco-Oakland. From the 2012 National Transit Database, transit passenger miles per capita ranged from 287 in Los Angeles to 544 in Washington to 817 in San Francisco.

Since these urban areas typically see around 14,000 passenger miles of per capita travel on highways and streets per year, the 530-mile difference in transit usage between Los Angeles and San Francisco is pretty much irrelevant. Thus, even if there is a weak correlation between transit ridership and traffic fatalities, transit isn’t the cause of that correlation.

San Francisco and Washington actually saw slightly more per capita driving than Los Angeles in 2012, yet APTA says they had significantly lower fatality rates (3.7 fatalities per 100,000 residents in San Francisco and 3.6 in Washington vs. 6.4 in Los Angeles). Clearly, some other factor must be influencing both transit ridership and traffic fatalities.

With transit ridership declining almost everywhere, this is just a desperate attempt by APTA to make transit appear more relevant than it really is. In reality, contrary to APTA’s unsupported conclusion, modest increases in transit ridership will have zero measurable effect on traffic fatality rates.

Content moderation remains in the news following President Trump’s accusation that Google manipulated its searches to harm conservatives. Yesterday Congress held two hearings on content moderation, one mostly about foreign influence and the other mostly about political bias. The Justice Department also announced Attorney General Sessions will meet soon with state attorneys general “to discuss a growing concern that these companies may be hurting competition and intentionally stifling the free exchange of ideas on their platforms.” 

None of this is welcome news. The First Amendment sharply limits government power over speech. It does not limit private governance of speech. The Cato Institute is free to select speakers and topics for our “platform.” The tech companies have that right also, even if they are politically biased. Government officials should also support a culture of free speech; officials bullying private companies contravenes that culture. Needless to say, having the Justice Department investigate those companies looks a lot like a threat to the companies’ freedom.

So much for law and theory. Here I want to offer some Madisonian thoughts on these issues. No one can doubt James Madison’s liberalism. But he wanted limited government in fact as well as in theory. Madison thought about politics as a means of realizing liberal ideals. We should too.

Let’s begin with the question of bias. The evidence for bias against conservatives is anecdotal and episodic. The tech companies deny any political bias, and their incentives raise doubts about partisan censorship. Why take the chance you might drive away millions of customers and invite the wrath of Congress and the executive branch on your business? Are the leaders of these companies really such political fanatics that they would run such risks? 

Yet these questions miss an important point. The problem of content moderation bias is not really a question of truth or falsity. It is rather a difficult political problem with roots in both passion and reason. 

Now, as in the past, politicians have powerful reasons to foster fear and anger among voters. People who are afraid and angry are more likely to vote for a party or a person who promises to remedy an injustice or protect the innocent. And fear and anger are always about someone threatening vital values. For a Republican president, a perfect “someone” might be tech companies who seem to be filled with Progressives and in control of the most important public forums in the nation. 

But the content moderation puzzle is not just about the passions. The fears of the right (and to a lesser degree, the left) are reasonable. To see this, consider the following alternative world. Imagine the staff of the Heritage Foundation has gained control over much of the online news people see and what they might say to others about politics. Imagine also that after a while Progressives start to complain that the Heritage folks are removing their content or manipulating news feeds. The leaders of Heritage deny the charges. Would you believe them?

Logically it is true that this “appearance of bias” is not the same as bias, and bias may be a vice but cannot be a crime for private managers. But politically that may not matter much, and politics may yet determine the fate of free speech in the online era. 

Companies like Google have to somehow foster legitimacy for their moderation of content, moderation that cannot be avoided if they are to maximize shareholder value. They have to convince most people that they have a right to govern their platforms even when their decisions seem wrong. 

Perhaps recognizing that some have reasonable as well as unreasonable doubts about their legitimacy would be a positive step forward. And people who harbor those reasonable doubts should keep in mind the malign incentives of politicians who benefit from fostering fear and anger against big companies. 

If the tech companies fail to gain legitimacy, we all will have a problem worse than bias. Politicians might act, theory and law notwithstanding. The First Amendment might well stop them. But we all would be better off with numerous, legitimate private governors of speech on the internet. Google’s problem is ours.

In Supreme Court nominee Brett Kavanaugh’s opening statement at his hearing Tuesday, he praised Merrick Garland, with whom he serves on the D.C. Circuit, as “our superb chief judge.”

If you were surprised by that, you shouldn’t have been. When President Obama nominated Garland to the high court, Judge Kavanaugh described his colleague as “supremely qualified by the objective characteristics of experience, temperament, writing ability, scholarly ability for the Supreme Court … He has been a role model to me in how he goes about his job.”

In fact, it has been reported in at least one place that one reason Kavanaugh was left off Trump’s initial list of SCOTUS nominees was that he had been so vocal and public in praising Garland’s nomination.

Now, it would be understandable if neither side in the partisan confirmation wars chose to emphasize this bit of background to the story. Republican strategists might not be keen on reminding listeners of what their party did with Garland’s nomination, and might also worry about eroding enthusiasm for Kavanaugh among certain elements of their base. Democratic strategists, meanwhile, might see the episode as one in which the present nominee comes off as not-a-monster, and, well, you can’t have that.

The lesson, if there is one, might be that the federal courts are not as polarized and tribal as much of the political class and punditry are at nomination time.

The Italian general elections of March 4, 2018 have produced an improbable coalition government between two upstart populist parties: the left-Eurosceptic-nationalist Movimento 5 Stelle (Five Star Movement) and the right-Eurosceptic-nationalist Lega (League). The coalition partners agree on greater public spending and, at the same time, on tax cuts that would reduce revenue. How then to pay for the additional spending? Italy is already highly indebted. Its public debt stands at 133 percent of GDP, the highest in the Eurozone apart from Greece and well above the EU’s average of 87 percent. Its sovereign bonds carry a high default risk premium. Today, the yield on Italian 10-year bonds stands at 291 basis points above the yield on 10-year German bunds, up from a spread in the 130–140 basis point range during the months before the election.

If tax revenue and debt cannot practically be increased, the remaining fiscal option—for a country with its own fiat currency—is printing base money. But Italy is part of the Eurozone, and only the ECB can create base-money euros. A group of four Italian economists (Biagio Bossone, Marco Cattaneo, Massimo Costa, and Stefano Sylos Labini), correctly noting that “budget constraints and a lack of monetary sovereignty have tied policymakers’ hands,” and regarding this as a bad thing, have proposed in a series of publications that Italy should introduce a new domestic quasi-money, a kind of parallel currency that they call “fiscal money.” Similar proposals have been made by Yanis Varoufakis, the former Greek finance minister, and by Joseph Stiglitz, the prominent American economist. Italy’s coalition government is reportedly considering these proposals seriously.

Under the Bossone et al. proposal, the Italian government would issue euro-denominated bearer “tax rebate certificates” (TRCs). The government would pledge to accept these at face value in “future payments to the state (taxes, duties, social contributions, and so forth).” The certificates in that sense would be “redeemable at a later date – say, two years after issuance.” If non-interest-bearing, they would trade at a discounted value. But if interest were paid to keep the certificates always at par, and the payment system accordingly accepted them as the equivalent of base-money euros, the certificates would be additional spendable money in the public’s hands. “As a result,” they argue, “Italy’s output gap — that is, the difference between potential and actual GDP — would close.” Thus they claim that “properly designed, such a system could substantially boost economic output and public revenues at little to no cost.”

Remarkable claims. Bossone et al. have recently argued that their “fiscal money” program would not violate ECB rules. But there is a more basic question: would it actually work to boost real GDP sustainably by shrinking unemployment and excess capacity? On critical examination, the answer is no. The proposal is based on wishful thinking.

To provide empirical context, note that estimated slack in the Italian economy is already shrinking. The OECD estimate of Italy’s output gap (the percentage by which real GDP falls short of estimated full-employment or “potential” GDP) was large—greater than 5 percent—for 2014, the year when Bossone et al. first floated their proposal. Among the major Eurozone economies, only Greece, Spain, and Portugal had larger gaps; France had a gap half as large, while Germany was above its estimated potential GDP. For 2018, however, Italy’s estimated output gap is under 0.5 percent. For 2019 the OECD projects that actual GDP will exceed full-employment GDP.

Theoretically (as famously argued by Leland Yeager and by Robert Clower and Axel Leijonhufvud), in a world of sticky prices and wages a depressed level of real output can be due to an unsatisfied excess demand for money, which logically corresponds to an aggregate excess supply (unsold inventories) of other goods including labor. People building up their real money balances will do so by buying fewer goods at current prices and offering more labor at current wages. But is that the cause of depressed output in Italy today? Yeager’s “cash-balance interpretation of depression” assumes an economy with its own money, domestically fixed in quantity, so that an excess demand for money can be satisfied only by a drawn-out process of falling prices and wages that raises real balances.

But Italy today does not have its own money. It is a part of a much larger monetary area, the Eurozone. (For one indication of Italy’s share of the euro economy, Italian banks hold 14.7% of euro deposits.) The European Central Bank through tight monetary policy can create an excess demand for money in the entire Eurozone, in which case Italy suffers equally with other Eurozone countries, but it cannot create an excess demand for money specifically in Italy. A specifically Italian excess demand for money can arise if Italians increase their demand for money balances relative to other Eurozone residents, but in that case euros can and will flow in from the rest of the Eurozone (corresponding to Italians more eagerly selling goods or borrowing) to satisfy that demand.

Because Italy’s small output gap in 2018 therefore cannot be plausibly attributed to an unsatisfied excess demand for money, an expansion of the domestic money stock through the creation of “fiscal money” is not an appropriate remedy.

If not due to an excess demand for money, what is the cause of Italy’s lingering output gap? I don’t know, but I would look for real factors. Likely candidates are labor-market inflexibility in the face of real shocks, and the reluctance of investors to put financial or real capital into a country with serious fiscal problems (hence a serious risk of new taxes or higher tax rates soon) and a non-negligible threat of leaving the euro.

The flip side of euros flowing into Italy from the rest of the Eurozone to satisfy Italian money demand is that any excess money in Italy will flow out. If Italians already hold the quantity of euro balances they desire, then the creation of “fiscal money” would not increase Italy’s money stock except transitorily. Supposing that Italians treat new “fiscal money” as the domestic equivalent of euros, the addition to their money balances would result in holdings greater than desired at current euro prices and interest rates. In restoring their desired portfolio shares (spending off excess balances) they would send euros abroad (by assumption, the domestic quasi-money would not be accepted abroad) in purchases of imported goods and financial assets.

It isn’t clear, however, that the public would actually regard “fiscal money” as the equivalent of base-money euros added to the circulation. Unlike fiat base money, TRCs are not a net asset. They come with corresponding debts, the government’s obligation to accept them in lieu of euros for taxes (say) two years after issue. There is no reason for taxpayers to think themselves richer for having more TRCs in their wallets given that they will need to pay future taxes (equivalent in present value) to service and retire them.
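To make the present-value point concrete, here is a minimal sketch of how a non-interest-bearing TRC would be priced, assuming a hypothetical two-year yield of 3 percent.

```python
# A TRC is a claim on EUR 100 of tax relief two years from issuance, so its
# market price is roughly that face value discounted at a comparable two-year
# yield. The 3% yield here is an assumption chosen for illustration.

face_value = 100.0   # euros of future tax payments the certificate can settle
years = 2            # redemption lag proposed by Bossone et al.
two_year_yield = 0.03

price = face_value / (1 + two_year_yield) ** years
discount = face_value - price
print(f"Market price ≈ €{price:.2f}, a discount of €{discount:.2f} from face value")
```

Whatever the exact yield, the point stands: holders are not made richer by certificates whose present value they will ultimately pay for through the taxes needed to retire them.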

Despite $8.6 billion spent on the eradication of opium in Afghanistan over the past seventeen years, the US military has failed to stem the flow of Taliban revenue from  the illicit drug trade. Afghanistan produces the majority of the world’s opium, and recent U.S. military escalations have failed to alter the situation. According to a recent piece in the Wall Street Journal:

“Nine months of targeted airstrikes on opium production sites across Afghanistan have failed to put a significant dent in the illegal drug trade that provides the Taliban with hundreds of millions of dollars, according to figures provided by the U.S. military.”

This foreign war on drugs has been no more successful than its domestic counterpart. If U.S. military might cannot suppress the underground market, local police forces have no hope.  Supply side repression does not seem to work, and its costs and unintended consequences are large.

 Research assistant Erin Partin contributed to this blog post.

In 1985, Reason Foundation co-founder and then-president Robert Poole heard about a variable road pricing experiment in Hong Kong. In 1986, he learned that France and other European countries were offering private concessions to build toll roads. In 1987, he interviewed officials of Amtech, which had just invented electronic transponders that could be used for road tolling. He put these three ideas together in a pioneering 1988 paper suggesting that Los Angeles, the city with the worst congestion in America, could solve its traffic problems by adding private, variable-priced toll lanes to existing freeways.

Although Poole’s proposal has since been carried out successfully on a few freeways in southern California and elsewhere, it is nowhere near as ubiquitous as it ought to be given that thirty years have passed and congestion is worse today in dozens of urban areas than it was in Los Angeles in 1988. So Poole has written Rethinking America’s Highways, a 320-page review of his research on the subject since that time. Poole will speak about his book at a livestreamed Cato event this Friday at noon, eastern time.

Because Poole has influenced my thinking in many ways (and, to a very small degree, the reverse is true), many of the concepts in the book will be familiar to readers of Gridlock or some of my Cato policy analyses. For example, Poole describes elevated highways such as the Lee Roy Selmon Expressway in Tampa as a way private concessionaires could add capacity to existing roads. He also looks at the state of autonomous vehicles and their potential contributions to congestion reduction.

France’s Millau Viaduct, by many measures the largest bridge in the world, was built entirely with private money at no risk to French taxpayers. The stunning beauty, size, and price of the bridge are an inspiration to supporters of public-private partnerships everywhere.

Beyond these details, Poole is primarily concerned with fixing congestion and rebuilding the nation’s aging Interstate Highway System. His “New Vision for U.S. Highways,” the subject of the book’s longest chapters, is that congested roads should be tolled and new construction and reconstruction should be done by private concessionaires, not public agencies. The book’s cover shows France’s Millau Viaduct, which a private concessionaire opened in 2004 at a cost of more than $400 million. Poole compares the differences between demand-risk and availability-payment partnerships – in the former, the private partner takes the risk and earns any profits; in the latter, the public takes the risk and the private partner is guaranteed a profit – coming down on the side of the former.

This chart showing throughput on a freeway lane is based on the same data as a chart on page 256 of Rethinking America’s Highways. It suggests that, by keeping speeds from falling below 50 mph, variable-priced tolling can greatly increase throughput during rush hours.

The tolling chapter answers arguments against tolling, responses Poole has no doubt made so many times he is tired of giving them. He mentions (but doesn’t emphasize enough, in my opinion) that variable pricing can keep traffic moving at 2,000 to 2,500 vehicles per hour per freeway lane, while throughput can slow to as few as 500 vehicles per hour in congestion. This is the most important and unanswerable argument for tolling, for – contrary to those who say that tolling will keep poor people off the roads – it means that tolling will allow more, not fewer, people to use roads during rush hours.

While I agree with Poole that private partners would be more efficient at building new capacity than public agencies, I don’t think this idea is as important as tolling. County toll road authorities in Texas, such as the Fort Bend Toll Road Authority, have been very efficient at building new highways that are fully financed by tolls.

Despite considerable (and uninformed) opposition to tolling, an unusual coalition of environmentalists and fiscal conservatives has persuaded the Oregon Transportation Commission to begin tolling Portland freeways. As a result, Portland may become the first city in America to toll all its freeways during rush-hour, a goal that would be thwarted if conservatives insisted on private toll concessions.

Tolling can end congestion, but Poole points out that this isn’t the only problem we face: the Interstate Highway System is at the end of its 50-year expected lifespan and some method will be needed to rebuild it. He places his faith in public-private partnerships for such reconstruction.

Tolling and public-private partnerships are two different questions, but of the two only tolling (or mileage-based user fees, which use the same technology to effectively toll all roads) is essential to eliminating congestion. It is also the best alternative to what Poole argues are increasingly obsolescent gas taxes. Anyone who talks about congestion relief without including road pricing isn’t serious about solving the problem. Poole’s book should be required reading for all politicians and policymakers who deal with transportation.

The environmental impact of cryptocurrencies looms large among the many concerns voiced by sceptics. Earlier this year, Agustín Carstens, who runs the influential Bank for International Settlements, called Bitcoin “a combination of a bubble, a Ponzi scheme and an environmental disaster.”

Carstens’ first two indictments have been challenged. While the true market potential of Bitcoin, Ethereum, and other such decentralized networks remains uncertain, it is by now clear to most people that, contrary to his assertion, they are more than mere instruments for short-term speculation and the fleecing of unwitting buyers.

That Bitcoin damages the environment without countervailing benefits is, on the other hand, an allegation still widely believed even by many cryptocurrency fans. Sustaining it is the indisputable fact that the electricity now consumed by  the Bitcoin network, at 73 TWh per year at last count, rivals the amount consumed by countries like Austria and the Philippines.

Computing power is central to the success of Bitcoin

Bitcoin’s chief innovation is enabling payments without recourse to an intermediary. Before Bitcoin, any attempt to devise an electronic payments network without a middleman suffered from a double-spend problem: There was no easy way for peers to verify that funds promised to them had not also been committed in other transactions. Thus, a central authority was inescapable.

“Satoshi Nakamoto”’s 2008 white paper proposing “a peer-to-peer electronic cash system” changed that. Nakamoto suggested using cryptography and a public ledger to resolve the double-spend problem. Yet, in order to ensure that only truthful transactions were added to the ledger, this decentralized payments system needed to encourage virtuous behavior and make fraud costly.

Bitcoin achieves this by using a proof-of-work consensus algorithm to reach agreement among users about which transactions should go on the ledger. Proof-of-work means that users expend computing power as they validate transactions. The rewards for validation are newly minted bitcoins, as well as transaction fees. Nakamoto writes:

Once the CPU effort has been expended, to make it satisfy the proof-of-work, the block cannot be changed without redoing the work. As later blocks are chained after it, the work to change the block would include redoing all the blocks after it.

[…] Proof-of-work is essentially one-CPU-one-vote. The majority decision is represented by the longest chain, which has the greatest proof-of-work effort invested in it.

Because consensus is required for transactions to go on the ledger, defrauding the system – forcing one user’s false transactions on the public ledger, against other users’ disagreement – would require vast expenditures of computing power. Thus, Bitcoin renders fraud uneconomical.
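
To make the mechanism concrete, here is a minimal sketch of the proof-of-work idea in Python. It is deliberately simplified and is not Bitcoin’s actual implementation (which double-hashes an 80-byte block header against a compact difficulty target); the function name, the toy difficulty parameter, and the example transaction string are illustrative assumptions only.

```python
import hashlib

def mine(block_data: str, difficulty: int = 4):
    """Search for a nonce whose SHA-256 hash begins with `difficulty` zero hex digits.

    Finding the nonce takes many hash attempts on average, but anyone can verify
    the answer with a single hash -- the asymmetry that makes fraud costly.
    """
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}|{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest
        nonce += 1

# Toy example: "mining" a block containing one made-up transaction.
nonce, digest = mine("prev_hash=0000ab...; tx: alice pays bob 0.5 BTC")
print(nonce, digest)
```

Each additional zero digit in the target multiplies the expected number of hash attempts by sixteen, which is why the network’s aggregate electricity use scales with the difficulty the protocol sets.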

Electricity powers governance on Bitcoin

Bitcoin and other cryptocurrencies replace payments intermediation with an open network of independent users, called ‘miners’, who compete to validate transactions and whose majority agreement is required for any transaction to be approved.

Intermediation is not costless. Payment networks typically have large corporate structures and expend large amounts of resources to facilitate transactions. Mastercard, which as of 2016 accounted for 23 percent of the credit card and 30 percent of the debit card market in the U.S., employs more than 13,000 staff worldwide. Its annual operating expenses reached $5.4 billion in fiscal year 2017. Its larger competitor Visa had running costs of $6.2 billion.

Equally, doing away with intermediaries such as Mastercard has costs. Bitcoin miners require hardware and electricity to fulfill their role on the network. A recent study puts the share of electricity costs in all mining costs at 60 to 70 percent.

Electricity prices vary widely across countries, and miners will tend to locate in countries where electricity is comparably cheap, since the bitcoin price is the same all over the world. One kilowatt-hour of electricity in China, reportedly the location of 80 percent of Bitcoin mining capacity, costs 8.6 U.S. cents, 50 percent below the average price in America. Assuming an average price of 10 cents per kWh, the Bitcoin network would consume $7.3 billion of electricity per year, based on current mining intensity. This yields total Bitcoin annual running costs of $10 to 12 billion.
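
The back-of-the-envelope arithmetic behind those figures, using only the numbers cited above (73 TWh per year, an assumed average price of 10 cents per kWh, and electricity at 60 to 70 percent of total mining costs), runs as follows:

$$ 73\ \text{TWh} \times \frac{\$0.10}{\text{kWh}} = 73 \times 10^{9}\ \text{kWh} \times \$0.10 \approx \$7.3\ \text{billion per year}, $$

$$ \frac{\$7.3\ \text{billion}}{0.70} \approx \$10.4\ \text{billion} \quad\text{to}\quad \frac{\$7.3\ \text{billion}}{0.60} \approx \$12.2\ \text{billion in total running costs}. $$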

The value of Bitcoin’s electricity use

Bitcoin’s total operating costs do not differ much from those of intermediated payment networks such as Mastercard and Visa. Yet these card networks facilitate many more transactions than Bitcoin: Digiconomist reports that Bitcoin uses 550,000 times as much electricity per transaction as Visa.

However, the number of transactions is a poor standard for judging the value exchanged on competing networks. Mastercard and Visa handle large numbers of small-dollar exchanges, whereas Bitcoin transactions average $16,000. The slow speed of the Bitcoin network and the large fluctuations in average transaction fees make low-value exchanges unattractive. Moreover, unlike card networks, cryptocurrencies are still not generally accepted and are therefore used more as a store of value than as a medium of exchange.

With that in mind, if we compare Bitcoin and the card networks by the volume of transactions processed, a different picture emerges. The volume of Bitcoin transactions over the 24 hours to August 27 was $3.6 billion, which is not an outlier. That yields annual transaction volume of $1.33 trillion. This is below Mastercard’s approximate $6 trillion and Visa’s $7.8 trillion in payments volume over 2017. But it is not orders of magnitude below.
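
For reference, the annualization is simply the cited daily volume scaled up to a year:

$$ \$3.6\ \text{billion per day} \times 365\ \text{days} \approx \$1.3\ \text{trillion per year}, $$

consistent with the figure above.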

In fact, as ARK Invest reported over the weekend, Bitcoin has already surpassed the smaller card network Discover, and online payments pioneer Paypal, in online transactions volume. Overtaking the most successful payment network of the internet era is quite a milestone.

Source: ARK Invest newsletter, Aug. 26

Long-term prospects for the Bitcoin network

As mentioned above, comparisons between Bitcoin and intermediated payment networks must be conducted with caution, because most transactions on Mastercard, Paypal and Visa are for the exchange of goods and services, whereas much of the dollar value of Bitcoin transactions has to do with speculative investment in the cryptocurrency and the mining of new bitcoins. Only a fraction of Bitcoin payments involve goods and services.

However, the fact that people are eager to get hold of bitcoins today shows that some firmly believe Bitcoin has the potential to become more widely demanded.

The prospects for Bitcoin as an investment, on the other hand, are perhaps more questionable than its proponents assume. After all, the value of a medium of exchange is given by the equation MV = PQ, where M is the money supply, V the velocity at which money units change hands, P the price level and Q the real volume of transactions.

Bitcoin bulls posit, quite plausibly, that Q will only grow in coming years. But unless Bitcoin becomes a store-of-value cryptocurrency that is not frequently exchanged, V will also grow as Bitcoin users transact more on the network. This will push up the price level P and depress the value of individual bitcoins. Thus, Bitcoin’s very success as a medium of exchange may doom it as an investment.
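
To spell out the mechanism (a standard rearrangement of the equation of exchange, not anything specific to the sources cited here), measure the price level P in bitcoin; the purchasing power of one bitcoin is then proportional to 1/P:

$$ MV = PQ \;\Longrightarrow\; P = \frac{MV}{Q} \;\Longrightarrow\; \frac{1}{P} = \frac{Q}{MV}. $$

With the supply M capped by the protocol at 21 million coins, each bitcoin gains purchasing power only to the extent that transaction volume Q outpaces velocity V; if V keeps pace with Q, the gains to holders evaporate, which is the tension described above.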

But that is orthogonal to the policy discussion as to whether Bitcoin’s admittedly large power requirements are a matter of concern. Whereas competing payment systems rely on many inputs, from physical buildings to a skilled workforce to reputational and financial capital, Bitcoin’s primary input is electricity. What Bitcoin illustrates is that achieving successful governance without the role of an intermediary is costly – costly enough that Bitcoin may struggle to outcompete payment intermediaries.

On the other hand, there are efforts afoot to introduce innovations into the Bitcoin network to increase its energy efficiency. The Lightning Network project, which seeks to enable transactions to happen outside the Bitcoin blockchain over the course of (for example) a trading day, and to record only the starting and closing balance, is such an initiative. Other, more controversial ideas are to rapidly increase the maximum size of a transaction block, which would speed up transaction processing but might not make much of a dent in power usage. Others have addressed the issue more directly, by building renewable generation capacity specifically aimed at cryptocurrency mining.

Is Bitcoin’s electricity use socially wasteful?

Behind claims like Carstens’ that Bitcoin is “an environmental disaster” lies the veiled accusation that the cryptocurrency’s electricity use is somehow less legitimate, or socially less valuable, than electricity use by schools, hospitals, households and offices. Is there any truth to this claim?

Economists have known since at least Pigou that the only way to determine wastefulness in resource use is by examining whether an activity has unpriced externalities which might lead agents to over- or underuse the resource. In those instances, these social costs must be incorporated into the price of the resource to motivate efficient production.

What this means for Bitcoin is that the cryptocurrency itself cannot be “socially wasteful.” The environmental impact of electricity use does not depend on the purpose of that use: whether electric power is consumed for the mining of cryptocurrency or the production of cars has no bearing on the environmental effects. Therefore, the impact of Bitcoin depends on two factors over which the network has no control: the way in which power is generated and how electricity is priced.

Both vary widely across jurisdictions. Iceland, which thanks to its comparatively low power costs and cool climate is a favorite location for Bitcoin miners, generates nearly all of its electricity from renewable geothermal sources, which also emit much lower amounts of carbon than coal- or gas-fired plants. Iceland participates in the European Union’s emissions trading system, which despite its imperfect design does a good job of internalizing the social cost of power generation.

Canada, like Iceland a cold jurisdiction, uses hydroelectric power to generate 59 percent of its electricity. In the crypto-favorite province of Quebec, 95 percent of power is hydroelectric, and prices are particularly low. While the overall environmental impact of hydropower is contested, there is agreement that its carbon footprint is a fraction of those of gas- and coal-fired plants. Canada has also recently implemented a nationwide cap-and-trade scheme in a bid to price carbon emissions.

China, the largest jurisdiction for mining, offers a less encouraging picture, as it still generates close to half its electricity from coal. However, this is drastically down from 72 percent in 2015 and has dropped even in absolute terms. The People’s Republic’s attempts to reduce its carbon footprint per unit of GDP have relied more on command-and-control shifts from coal to other sources than on market forces.

Whichever way one looks at it, however, the environmental impact of Bitcoin and other electricity-intensive cryptocurrencies is a function not of their software architecture, but of the energy policies in the countries where miners operate.

Much ado about nothing

We can only conclude that reports of cryptocurrencies’ wreaking environmental havoc have been greatly exaggerated. An examination of transaction volumes shows that Bitcoin’s power use is not outside the league of intermediated payments systems. Moreover, it will be in the interest of Bitcoin miners to reduce the per-transaction electricity cost of mining, as otherwise the network will struggle to grow and compete with incumbents. Finally, there is no evidence that cryptocurrencies have environmental externalities beyond those that can be ascribed to any electricity user wherever electricity is inefficiently priced. But public policy, not cryptocurrency innovation, is at fault there.

[Cross-posted from Alt-M.org]

As a practicing physician I have long been frustrated with the Electronic Health Record (EHR) system the federal government required health care practitioners to adopt by 2014 or face economic sanctions. This manifestation of central planning compelled many doctors to scrap electronic record systems already in place because the planners determined they were not used “meaningfully.” They were forced to buy a government-approved electronic health system and conform their decision-making and practice techniques to algorithms the central planners deem “meaningful.”  Other professions and businesses make use of technology to enhance productivity and quality. This happens organically. Electronic programs are designed to fit around the unique needs and goals of the particular enterprise. But in this instance, it works the other way around: health care practitioners need to conform to the needs and goals of the EHR. This disrupts the thinking process, slows productivity, interrupts the patient-doctor relationship, and increases the risk of error. As Twila Brase, RN, PHN ably details in “Big Brother in the Exam Room,” things go downhill from there.

With painstaking, almost overwhelming detail that makes the reader feel the enormous complexity of the administrative state, Ms. Brase, who is president and co-founder of Citizens’ Council for Health Freedom (CCHF), traces the origins and motives that led to Congress passing the Health Information Technology for Economic and Clinical Health (HITECH) Act in 2009. The goal from the outset was for the health care regulatory bureaucracy to collect the private health data of the entire population and use it to create a one-size-fits-all standardization of the way medicine is practiced. This standardization is based upon population models, not individual patients. It uses the EHR design to nudge practitioners into surrendering their judgment to the algorithms and guidelines adopted by the regulators. Along the way, the meaningfully used EHR makes practitioners spend the bulk of their time entering data into forms and clicking boxes, providing the regulators with the data needed to generate further standardization.

Brase provides wide-ranging documentation of the way this “meaningful use” of the EHR has led to medical errors and the replication of false information in patients’ health records. She shows how the planners intend to morph the Electronic Health Record into a Comprehensive Health Record (CHR), through the continual addition of new data categories, delving into the details of lifestyle choices that may arguably relate indirectly to health: from sexual proclivities, to recreational behaviors, to gun ownership, to dietary choices. In effect, a meaningfully used Electronic Health Record is nothing more than a government health surveillance system.  As the old saying goes, “He who pays the piper calls the tune.” If the third party—especially a third party with the monopoly police power of the state—is paying for health care it may demand adherence to lifestyle choices that keep costs down.

All of this data collection and use is made possible by the Orwellian-named Health Insurance Portability and Accountability Act (HIPAA) of 1996.  Most patients think of HIPAA as a guarantee that their health records will remain private and confidential. They think all those “HIPAA Privacy” forms they sign at their doctor’s office are there to ensure confidentiality. But, as Brase points out very clearly, HIPAA gives numerous exemptions to confidentiality requirements for the purposes of collecting data and enforcing laws. As Brase puts it,

 It contains the word privacy, leaving most to believe it is what it says, rather than reading it to see what it really is. A more honest title would be “Notice of Federally Authorized Disclosures for Which Patient Consent Is Not Required.”

It should frighten any reader to learn just how exposed their personal medical information is to regulators in and out of government. Some of the data collected without the patients’ knowledge is generated by what Brase calls “forced hospital experiments” in health care delivery and payment models, also conducted without the patients’ knowledge. Brase documents how patients remain in the dark about being included in payment model experiments, including whether they are being cared for by an Accountable Care Organization (ACO).

Again quoting Brase, 

Congress’s insistence that physicians install government health surveillance systems in the exam room and use them for the care of patients, despite being untested and unproven—and an unfunded mandate—is disturbing at so many levels—from privacy to professional ethics to the patient-doctor relationship. 

As the book points out, more and more private practitioners are opting out of this surveillance system. Some are opting out of the third party payment system (including Medicare and Medicaid) and going to a “Direct Care” cash pay model, which exempts them from HIPAA and the government’s EHR mandate. Some are retiring early and/or leaving medical practice altogether. Many, if not most, are selling their practices to hospitals or large corporate clinics, transferring the risk of severe penalties for non-compliance to those larger entities.

Health information technology can and should be a good thing for patients and doctors alike. But when the government, rather than individual patients and doctors, decides what kind of technology that will be and how it will be used, health information technology can become a dangerous threat to liberty, autonomy, and health.

“Big Brother In The Exam Room” is the first book to catalog in meticulous detail the dangerous ways in which health information technology is being weaponized against us all.  Everyone should read it. 

It has been a whirlwind week of negotiations on the North American Free Trade Agreement (NAFTA), ending on Friday in apparent deadlock. Canada was not able to reach a deal with the United States on some of the remaining contentious issues, but that did not stop President Trump from submitting a notice of intent to Congress to sign a deal with Mexico that was agreed to earlier this week. This action allows the new trade agreement to be signed by the end of November, before Mexican President Enrique Pena Nieto leaves office. While a high degree of uncertainty remains, it is premature to ring the alarm for the end of NAFTA as we know it.

Why? First, there is still some negotiating latitude built into the Trade Promotion Authority (TPA) legislation, which outlines the process for how the negotiations unfold. The full text of the agreement has to be made public thirty days after the notice of intent to sign is submitted to Congress. This means that the parties have until the end of September to finalize the contents of the agreement. What we have now is just an agreement in principle, which can be thought of as a draft of the agreement, with a lot of little details still needing to be filled in. Therefore, it is not surprising that the notice submitted to Congress today left open the possibility of Canada joining the agreement “if it is willing” at a later date. Canadian Foreign Minister Chrystia Freeland will resume talks with U.S. Trade Representative Robert Lighthizer next Wednesday, and this should be seen as a sign that the negotiations are far from over.

Relatedly, TPA legislation does not provide a clear answer as to whether the President can split NAFTA into two bilateral deals. The original letter of intent to re-open NAFTA, which was submitted by Amb. Lighthizer in May 2017, notified Congress that the President intended to “initiate negotiations with Canada and Mexico regarding modernization of the North American Free Trade Agreement (NAFTA).” This can be read as signaling that not only were the negotiations supposed to be with both Canada and Mexico, but also that Congress only agreed to this specific arrangement.  In addition, it could be argued that TPA would require President Trump to “restart the clock” on negotiations with a new notice of intent to negotiate with Mexico alone. The bottom line, however, is that it is entirely up to Congress to decide whether or not it will allow for a vote on a bilateral deal with Mexico only, and so far, it appears that Congress is opposed to this. 

In fact, Congress has been fairly vocal about the fact that a NAFTA without Canada simply does not make sense. Canada and Mexico are the top destinations for U.S. exports and imports, with total trade reaching over $1 trillion annually. Furthermore, we don’t just trade things with each other in North America, we make things together. Taking Canada out of NAFTA is analogous to putting a wall in the middle of a factory floor. It has been estimated that every dollar of imports from Mexico includes forty cents of U.S. value added, and for Canada that figure is twenty-five cents for every dollar of imports—these are U.S. inputs in products that come back to the United States.

While President Trump may claim that he’s playing hardball with Canada by presenting an offer they cannot reasonably accept, we should approach such negotiating bluster with caution. In fact, the reality is that there is still plenty of time to negotiate, and Canada seems willing to come back to the table next week. At a press conference at the Canadian Embassy in Washington D.C. after negotiations wrapped up for the week, Minister Freeland remarked that Canada wants a good deal, and not just any deal, adding that a win-win-win was still possible. Negotiations are sure to continue amidst the uncertainty, and it will be a challenging effort to parse the signal from the noise. However, we should remain optimistic that a trilateral deal is within reach and take Friday’s news as just another step in that direction.

A Massachusetts statute prohibits ownership of “assault weapons,” the statutory definition of which includes the most popular semi-automatic rifles in the country, as well as “copies or duplicates” of any such weapons. As for what that means, your guess is as good as ours. A group of plaintiffs, including two firearm dealers and the Gun Owners’ Action League, challenged the law as a violation of the Second Amendment. Unfortunately, federal district court judge William Young upheld the ban.

Judge Young followed the lead of the Fourth Circuit’s decision in Kolbe v. Hogan (in which Cato filed a brief supporting a petition to the Supreme Court), which misread a shred of the landmark 2008 District of Columbia v. Heller decision to mean that the test for whether a class of weapons can be banned is whether it is “like an M-16,” contravening the core of Heller—that all weapons in common civilian use are constitutionally protected. What’s worse is that Judge Young seemed to go a step further, rejecting the argument that an “M-16” is a machine gun, unlike the weapons banned by Massachusetts, and deciding that semi-automatics are “almost identical to the M16, except for the mode of firing.” (The mode of firing is, of course, the principal distinction between automatic and semi-automatic firearms.)

The plaintiffs are appealing to the U.S. Court of Appeals for the First Circuit. Cato, joined by several organizations interested in the protection of our civil liberties and a group of professors who teach the Second Amendment, has filed a brief supporting the plaintiffs. We point out that the Massachusetts law classifies the common semi-automatic firearms used by police officers as “dangerous and unusual” weapons of war, alienating officers from their communities and undermining policing by consent.

Where for generations Americans needed to look no further than the belt of their local deputies for guidance in selecting a defensive firearm, Massachusetts’ restrictions prohibit civilians from owning these very same arms. The firearms selected by experts for reliability and overall utility as defensive weapons would be unavailable for the lawful purpose of self-defense. According to Massachusetts, these law enforcement tools aren’t defensive, but instead implements of war designed to inflict mass carnage.

Where tensions between police and policed are a sensitive issue, Massachusetts sets up a framework where the people can be fired upon by police with what the state fancies as an instrument of war, a suggestion that only serves to drive a wedge between police and citizenry.

Further, the district court incorrectly framed the question as whether the banned weapons were actually used in defensive shootings, instead of following Supreme Court precedent and asking whether the arms were possessed for lawful purposes (as they unquestionably were). This skewing of legal frameworks is especially troublesome where the Supreme Court has remained silent on the scope of the right to keep and bear arms for the last decade, leading to a fractured and unpredictable state of the law.

Today, the majority of firearms sold in the United States for self-defense are illegal in Massachusetts. The district court erred in upholding this abridgment of Bay State residents’ rights. The Massachusetts law is unconstitutional on its face and the reasoning upholding it lacks legal or historical foundation.

Last weekend the Federal Reserve Bank of Kansas City hosted its annual symposium in Jackson Hole. Despite being the Fed’s largest annual event, the symposium has been “fairly boring” for years, in terms of what can be learned about the future of actual policy. This year’s program, Changing Market Structures and Implications for Monetary Policy, was firmly in that tradition—making Jerome Powell’s speech, his first there as Fed Chair, the main event. In it, he covered familiar ground, suggesting that the changes he has begun as Chair are likely to continue.

Powell constructed his remarks around a nautical metaphor of “shifting stars.” In macroeconomic equations, a variable carries a star superscript (*) to indicate that it is a fundamental structural feature of the economy. In Powell’s words, these starred values in conventional economic models are the “normal,” or “natural,” or “desired” values (e.g. u* for the natural rate of unemployment, r* for the neutral rate of interest, and π* for the optimal inflation rate). In these models the actual data are supposed to fluctuate around these stars. However, the models require estimates for many star values (the exception being desired inflation, which the Fed has chosen to be a 2% annual rate) because they cannot be directly observed and must therefore be inferred.

These models then use the gaps between actual values and the starred values to guide—or navigate, in Powell’s metaphor—the path of monetary policy. The most famous example is, of course, the Taylor Rule, which calls for interest rate adjustments depending on how far the actual inflation rate is from desired inflation and how far real GDP is from its estimated potential. Powell’s thesis is that as these fundamental values change, and particularly as the estimates become more uncertain—as the stars shift, so to speak—using them as guides to monetary policy becomes more difficult and less desirable.
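
For readers who want to see how the “stars” enter as inputs, the Taylor Rule in its standard 1993 form is

$$ i_t = r^* + \pi_t + 0.5\,(\pi_t - \pi^*) + 0.5\,(y_t - y^*), $$

where i_t is the recommended policy rate, π_t actual inflation, and y_t − y* the gap between real GDP and its estimated potential. Every starred term on the right-hand side is an estimate, so mismeasured stars feed directly into the prescribed rate, which is Powell’s point.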

His thesis echoes a point he made during his second press conference as Fed Chair when he said policymakers “can’t be too attached to these unobservable variables.” It also underscores Powell’s expressed desire to move the Fed in new directions: less wedded to formal models, open to a broader range of economic views, and potentially towards using monetary policy rules. To be clear, while Powell has outlined these new directions it remains to be seen how and whether such changes will actually be implemented.

A specific example of a new direction—and to my mind the most important comment in the Jackson Hole speech—was Powell’s suggestion that the Fed look beyond inflation in order to detect troubling signs in the economy. A preoccupation with inflation is a serious problem at the Fed, and one that had disastrous consequences in 2008. Indeed, Powell noted that the “destabilizing excesses,” (a term that he should have defined) in advance of the last two recessions showed up in financial market data rather than inflation metrics.

While Powell is more open to monetary policy rules than his predecessors, he’s yet to formally endorse them as anything other than helpful guides in the policymaking process. At Jackson Hole he remarked, “[o]ne general finding is that no single, simple approach to monetary policy is likely to be appropriate across a broad range of plausible scenarios.” This was seen as a rejection of rule-based monetary policy by Mark Spindel, noted Fed watcher and co-author of a political history of the Fed. However, given the shifting stars context of the speech, Powell’s comment should be interpreted as saying that when the uncertainty surrounding the stars is increasing, the usefulness of the policy rules that rely on those stars as inputs is decreasing. In other words, Powell is questioning the use of a mechanical rule, not monetary policy rules more generally.

Such an interpretation is very much in keeping with past statements made by Powell. For example, in 2015, as a Fed Governor, he said he was not in favor of a policy rule that was a simple equation for the Fed to follow in a mechanical fashion. Two years later, Powell said that traditional rules were backward looking, but that monetary policy needs to be forward looking and not overly reliant on past data. Upon becoming Fed Chair early this year, Powell made it a point to tell Congress he found monetary policy rules helpful—a sentiment he reiterated when testifying on the Hill last month.

The good news is that there is a monetary policy rule that is forward looking, not concerned with estimating the “stars,” and robust against an inflation fixation. I am referring, of course, to a nominal GDP level target: a monetary policy rule that has been gaining advocates.

Like in years past, there was not a lot of discussion about the future of actual monetary policy at the Jackson Hole symposium. But if Powell really is moving the Federal Reserve towards adopting a rule, he is also beginning to outline a framework that should make a nominal GDP rule the first choice.

[Cross-posted from Alt-M.org]

It would have been natural to assume that partisan gerrymandering would not return as an issue to the Supreme Court until next year at the earliest, the election calendar for this year being too far advanced. But yesterday a federal judicial panel ruled that North Carolina’s U.S. House lines were unconstitutionally biased toward the interests of the Republican Party and suggested that it might impose new lines for November’s vote, even though there would be no time in which to hold a primary for the revised districts. Conducting an election without a primary might seem like a radical remedy, but the court pointed to other offices for which the state of North Carolina provides for election without a preceding primary stage.

If the court takes such a step, it would seem inevitable that defenders of the map will ask for a stay of the ruling from the U.S. Supreme Court. In June, as we know, the Court declined to reach the big constitutional issues on partisan gerrymandering, instead finding ways to send the two cases before it (Gill v. Whitford from Wisconsin and Benisek v. Lamone from Maryland) back to lower courts for more processing. 

In my forthcoming article on Gill and Benisek in the Cato Supreme Court Review, I suggest that with the retirement of Justice Anthony Kennedy, who’d been the swing vote on the issue, litigators from liberal good-government groups might find it prudent to refrain for a while from steering the question back up to the high court, instead biding their time in hopes of new appointments. After all, Kennedy’s replacement, given current political winds, is likely to side with the conservative bloc. But a contrasting and far more daring tactic would be to take advantage of the vacancy to make a move in lower courts now. To quote Rick Hasen’s new analysis at Election Law Blog, “given the current 4-4 split on the Supreme Court, any emergency action could well fail, leaving the lower court opinion in place.” And Hasen spells out the political implications: “if the lower court orders new districts for 2018, and the Supreme Court deadlocks 4-4 on an emergency request to overturn that order, we could have new districts for 2018 only, and that could help Democrats retake control of the U.S. House.”

Those are very big “ifs,” however. As Hasen concedes, “We know that the Supreme Court has not liked interim remedies in redistricting and election cases close to the election, and it has often rolled back such changes.” Moreover, Justices Breyer and Kagan in particular have lately shown considerable willingness to join with conservatives where necessary to find narrow grounds for decision that keep the Court’s steps small and incremental, so as not to risk landmark defeats at the hands of a mobilized 5-4 conservative court. It would not be surprising if one or more liberal Justices join a stay of a drastic order in the North Carolina case rather than set up a 2019 confrontation in such a way as to ensure a maximally ruffled conservative wing.

Some of these issues might come up at Cato’s 17th annual Constitution Day Sept. 17 – mark your calendar now! – where I’ll be discussing the gerrymandering cases on the mid-afternoon panel.

In the first of this series of posts, I explained that the mere presence of fractional-reserve banks itself has little bearing on an economy’s rate of money growth, which mainly depends on the growth rate of its stock of basic (commodity or fiat) money. The one exception to this rule, I said, consists of episodes in which growth in an economy’s money stock, defined broadly to include the public’s holdings of readily-redeemable bank IOUs as well as its holdings of basic money, is due in whole or in part to a decline in bank reserve ratios.

In a second post, I pointed out that, while falling bank reserve ratios might in theory be to blame for business booms, a look at some of the more notorious booms shows that they did not in fact coincide with any substantial decline in bank reserve ratios.

In this third and final post, I complete my critique of the “Fractional Reserves lead to Austrian Business Cycles” (FR=ABC) thesis, by showing that, when fractional-reserve banking system reserve ratios do decline, the decline doesn’t necessarily result in a malinvestment boom.

Causes of Changed Bank Reserve Ratios

That historic booms haven’t typically been fueled by falling bank reserve ratios, meaning ratios of commercial bank reserves to commercial bank demand deposits and notes, doesn’t mean that those ratios never decline. In fact they may decline for several reasons. But when they do change, commercial bank reserve ratios usually change gradually rather than rapidly. Central banks, in contrast, and fiat-money issuing central banks especially, can and occasionally do expand their balance sheets quite rapidly, if not dramatically. It’s for this reason that monetary booms are more likely to be fueled by central bank credit operations than by commercial banks’ decisions to “skimp” more than usual on reserves.

There are, however, some exceptions to the rule that reserve ratios tend to change only gradually. One of these stems from government regulations, changes in which can lead to reserve ratio changes that are both more substantial and more sudden. Thus in the U.S. during the 1990s changes to minimum bank reserve requirements and the manner of their enforcement led to a considerable decline in actual bank reserve ratios. In contrast, the Federal Reserve’s decision to begin paying interest on bank reserves starting in October 2008, followed by its various rounds of Quantitative Easing, caused bank reserve ratios to increase dramatically.

The other exception concerns cases in which fractional reserve banking is just developing. Obviously as that happens a switch from 100-percent reserves, or its equivalent, to some considerably lower fraction, might take place over a relatively short time span. In England during the last half of the 17th century, for example, the rise first of the goldsmith banks and then of the Bank of England led to a considerable reduction in the demand for monetary gold, its place being taken by a combination of paper notes and readily redeemable deposits.

Yet even that revolutionary change involved a less rapid increase in the role of fiduciary media, with even less significant cyclical implications, than one might at first suppose, for several reasons. First, only a relatively small number of persons dealt with banks at first: for the vast majority of people, “money” still meant nothing other than copper and silver coins, plus (for the relatively well-heeled) the occasional gold guinea. Second, bank reserve ratios remained fairly high at first — the best estimates put them at around 30 percent or so — declining only gradually from that relatively high level. Finally, the fact that the change was as yet limited to England and one or two other economies meant that, instead of resulting in any substantial change in England’s money stock, level of spending, or price level, it led to a largely contemporaneous outflow of now-surplus gold to the rest of the world. By allowing paper to stand in for specie, in other words, England was able to export that much more precious metal. The same thing occurred in Scotland over the course of the next century, only to a considerably greater degree thanks to the greater freedom enjoyed by Scotland’s banks. It was that development that caused Adam Smith to wax eloquent on the Scottish banking system’s contribution to Scottish economic growth.

Eventually, however, any fractional-reserve banking system tends to settle into a relatively “mature” state, after which, barring changes to government regulations, bank reserve ratios are likely to decline only gradually, if they decline at all, in response to numerous factors including improvements in settlement arrangements, economies of scale, and changes in the liquidity or marketability of banks’ non-reserve assets. For this reason it’s perfectly absurd to treat the relatively rapid expansion of fiduciary media in a fractional-reserve banking system that’s just taking root as illustrating tendencies present within established fractional-reserve banking systems.

Yet that’s just what some proponents of 100-percent banking appear to do. For example, in a relatively recent blog post Robert Murphy serves up the following “standard story of fractional reserve banking”:

Starting originally from a position of 100% reserve banking on demand deposits, the commercial banks look at all of their customers’ deposits of gold in their vaults, and take 80% of them, and lend them out into the community. This pushes down interest rates. But the original rich depositors don’t alter their behavior. Somebody who had planned on spending 8 of his 10 gold coins still does that. So aggregate consumption in the community doesn’t drop. Therefore, to the extent that the sudden drop in interest rates induces new investment projects that wouldn’t have occurred otherwise, there is an unsustainable boom that must eventually end in a bust.

Let pass Murphy’s unfounded — and by now repeatedly-refuted — suggestion that fractional reserve banking started out with bankers’ lending customers’ deposits without the customers knowing it. And forget as well, for the moment, that any banker who funds loans using deposits that the depositors themselves intend to spend immediately will go bust in short order. The awkward fact remains that, once a fractional-reserve banking system is established, it cannot go on being established again and again, but instead settles down to a relatively stable reserve ratio. So instead of explaining how fractional reserve banking can give rise to recurring business cycles, the story Murphy offers is one that accounts for only a single, never-to-be-repeated fractional-reserve-based cyclical event.

Desirable and Undesirable Reserve Ratio Changes

Finally, a declining banking system reserve ratio doesn’t necessarily imply excessive money creation, lending, or bank maturity mismatching. That’s because, notwithstanding what Murphy and others claim, competing commercial banks generally can’t create money, or loans, out of thin air. Instead, their capacity to lend, like that of other intermediaries, depends crucially on their success at getting members of the public to hold on to their IOUs. The more IOUs bankers’ customers are willing to hold on to, and the fewer they choose to cash in, the more the bankers can afford to lend. If, on the other hand, instead of holding onto a competing bank’s IOUs, the bank’s customers all decide to spend them at once, the bank will fail in short order, and will do so even if its ordinary customers never stage a run on it. All of this goes for the readily redeemable bank IOUs that make up the stock of bank-supplied money no less than for IOUs of other sorts. In other words, contrary to what Robert Murphy suggests in his passage quoted above, it matters a great deal to any banker whether or not persons who have exchanged basic money for his banks’ redeemable claims plan to go on spending, thereby drawing on those claims, or not.

Furthermore, as I show in part II of my book on free banking, in a free or relatively free banking system, meaning one in which there are no legal reserve requirements and banks are free to issue their own circulating currency, bank reserve ratios will tend to change mainly in response to changes in the public’s demand to hold on to bank-money balances. When people choose to increase their holdings of (that is, to put off spending) bank deposits or notes or both, the banks can profitably “stretch” their reserves further, making them support a correspondingly higher quantity of bank money. If, on the other hand, people choose to reduce their holdings of bank money by trying to spend them more aggressively, the banks will be compelled to restrict their lending and raise their reserve ratios. The stock of bank-created money will, in other words, tend to adjust so as to offset opposite changes in money’s velocity, thereby stabilizing the product of the two.

This last result, far from implying a means by which fractional-reserve banks might fuel business cycles, suggests on the contrary that the equilibrium reserve ratio changes in a free banking system can actually help to avoid such cycles. For according to Friedrich Hayek’s writings of the 1930s, in which he develops his theory of the business cycle most fully, avoiding such cycles is a matter of maintaining, not a constant money stock (M), but a constant “total money stream” (MV).
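
Put in symbols, purely as an illustration of the claim (not Hayek’s own notation): holding the “total money stream” constant means

$$ MV = \text{constant} \;\Longrightarrow\; \frac{\Delta M}{M} \approx -\frac{\Delta V}{V}, $$

so when the public chooses to hold on to bank money longer (V falls), a banking system that expands M in proportion leaves total spending MV, and hence Hayek’s criterion for avoiding the cycle, undisturbed.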

Voluntary and Involuntary Saving

Hayek’s view is, of course, distinct from Murray Rothbard’s, and also from that of many other Austrian critics of fractional reserve banking. But it is also more intuitively appealing. For the Austrian theory of the business cycle attributes unsustainable booms to occasions when bank-financed investment exceeds voluntary saving. Such booms are unsustainable because the unnaturally low interest rates with which they’re associated inevitably give way to higher ones consistent with the public’s voluntary willingness to save. But why should rates rise? They rise because lending in excess of voluntary savings means adding more to the “total money stream” than savers take out of that stream. Eventually that increased money stream will serve to bid up prices. Higher prices will in turn raise the demand for loans, pushing interest rates back up. The increase in rates brings the boom to an end, launching the “bust” stage of the cycle.

If, in contrast, banks lend more only to the extent that doing so compensates for the public’s attempts to accumulate balances of bank money, the money stream remains constant. Consequently the increase in bank lending doesn’t result in any general increase in the demand for or prices of goods. There is, in this case, no tendency for either the demand for credit or interest rates to increase. The investment “boom,” if it can be called that, is not self-reversing: it can go on for as long as the increased demand for fiduciary media persists, and perhaps forever.

As I’m not saying anything here that I haven’t said before, I have a pretty darn good idea what sort of counterarguments to anticipate. Among others, I expect to see claims to the effect that people who hold onto balances of bank money (or fiduciary media or “money substitutes” or whatever one wishes to call bank-issued IOUs that serve as regularly-accepted means of exchange) are not “really” engaged in acts of voluntary saving, because they might choose to part with those balances at any time, or because a bank deposit balance or banknote is “neither a present nor a future good,” or something along these lines.

Balderdash. To “save” is merely to refrain from spending one’s earnings; and one can save by holding on to or adding to a bank deposit balance or redeemable banknote no less than by holding on to or accumulating Treasury bonds. That persons who choose to save by accumulating demand deposits do not commit themselves to saving any definite amount for any definite length of time does not make their decision to save any less real: so long as they hold on to bank-issued IOUs, they are devoting a quantity of savings precisely equal to the value of those IOUs to the banks that have them on their books. As Murray Rothbard himself might have put it — though he certainly never did so with regard to the case at hand — such persons have a “demonstrated preference” for not spending, that is, for saving, to the extent that they hold bank IOUs, where “demonstrated preference” refers to the (“praxeological”) insight that, regardless of what some outside expert might claim, people’s actual acts of choice supply the only real proof of what they desire or don’t desire. According to that insight, so long as someone holds a bank balance or IOU, he desires the balance or IOU, and not the things that could be had for it, or any part of it. That is, he desires to be a creditor to the bank against which he holds the balance or IOU.

And so long as banks expand their lending in accord with their customers’ demonstrated preference for such acts of saving, and no more, while contracting it as their customers’ willingness to direct their savings to them subsides, the banks’ lending will not contribute to business cycles, Austrian or otherwise.

Of course, real-world monetary systems don’t always conform to the ideal sort of banking system I’ve described, issuing more fiduciary media only to the extent that the public’s real demand for such media has itself increased. While free banking systems of the sort I theorize about in my book tend to approximate this ideal, real-world systems can and sometimes do create credit in excess of the public’s voluntary savings, occasionally without, though (as we’ve seen) most often with, the help of accommodative central banks. But that’s no reason to condemn fractional reserve banking. Instead it’s a reason for looking more deeply into the circumstances that sometimes allow banking and monetary systems to promote business cycles.

In other words, instead of repeating the facile cliché that fractional reserve banking causes business cycles, or condemning fiduciary media tout court, Austrian economists who want to put a stop to such cycles, and to do so without undermining beneficial bank undertakings, should inquire into the factors that sometimes cause banks to create more fiduciary media than their customers either want or need.

[Cross-posted from Alt-M.org]
