Thursday, August 16, 2018

Is the Fed's Floor Falling Out?


David Beckworth has a piece up at Alt-M arguing that Trump's fiscal policy is causing problems for the "floor" system the Fed now uses to implement monetary policy.  Under the floor system, the Fed sets the rate of interest it pays on bank reserves, and it can then keep the banking system flush with cash without overstimulating the economy, since banks will prefer to hold excess cash as reserves.  It only works if the interest rate on reserves is above the rate banks can earn in the private market; otherwise they will drain their reserves to buy other assets.  A loose fiscal policy, David argues, is raising market rates through Treasury borrowing, thereby making it hard to maintain the floor system.

I take a different view: the Fed's current predicament, I would argue, says only a little about current fiscal policy but a lot about past monetary policy.  In particular, it says that, contrary to much conventional market wisdom at the time, monetary policy was too tight during the zero interest rate period.  But I'll get to that.

First, let's ask: is the floor system really in danger? I would say, no, or rather, it is only in danger if the Fed doesn't really like it.  In principle there is no particular problem with continuing the floor system if the Fed wishes to do so.

The floor system always depends on having an adequate supply of base money.  If there's not enough base money, the market rate rises, and the intended floor is no longer binding.  How much base money is enough?  That's something the Fed has to make an educated guess about.  If it's expecting a normal fiscal policy and instead gets a big tax cut paired with a lot of new spending, then it will find that its educated guess was wrong and the required amount of base money is more than it expected.  That's what's happening now.
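
The mechanics can be sketched with a toy model.  Everything here is hypothetical: the satiation level, the 1.75% IOR, and the linear scarcity premium are made-up stand-ins for actual reserve demand, not estimates.

```python
# Toy floor system (hypothetical numbers: reserves in trillions of dollars,
# rates in percentage points).  When reserves are abundant relative to the
# satiation level, the overnight market rate sits on the IOR floor; when
# they fall short, a scarcity premium pushes the market rate above it.
def market_rate(reserve_supply, ior, satiation_level, slope=2.0):
    shortfall = max(0.0, satiation_level - reserve_supply)
    return ior + slope * shortfall / satiation_level

# Abundant reserves: the floor binds.
on_floor = market_rate(2.2, ior=1.75, satiation_level=2.0)    # 1.75
# Fiscal expansion raises the required amount of base money; the same
# supply no longer suffices, and the market rate rises off the floor.
off_floor = market_rate(2.2, ior=1.75, satiation_level=2.5)   # > 1.75
```

The two remedies in the text map directly onto the function's arguments: open market operations raise `reserve_supply`, while raising the IOR raises `ior` until it again exceeds the market rate.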

But the situation is easily remedied. If the Fed wishes to continue the floor system, it simply has to create more base money through open market operations.  Or alternatively, it can raise the interest rate on reserves so that it is no longer in danger of being below the market interest rate.  But the Fed doesn't seem inclined to do the latter: there is no particular sign that interest rate pressure from fiscal policy is causing it to adopt a faster schedule for raising rates.

And the Fed has good reason not to raise rates more quickly.  We can debate about the particulars, but overall the Fed sees no strong sign that the economy is about to overheat, and even if it did, there's a case for allowing the inflation rate to rise temporarily to compensate for years of undershooting the target.  But surely fiscal policy is stimulating the US economy.  Presumably the Fed feels that this stimulus is appropriate for now, and in the absence of such fiscal stimulus, the Fed would be providing its own stimulus by slowing the increases in the interest rate it pays on reserves.

And in that case it would end up in the same predicament it is in now.  Interest rates overall would be lower, but market interest rates would still be rising more quickly than the Fed's floor rate.

So what is special about the Fed's current predicament?  Why do people suggest that the floor system is in trouble, rather than just saying the Fed needs to create more base money through open market operations, as one would normally do when the current floor is becoming problematic?

I think the answer is clear: the Fed created so much base money via open market operations during the zero interest rate period—via the Large Scale Asset Purchases (LSAPs), or Quantitative Easing (QE), as they were commonly known—that it figured it wouldn't have to create much more any time in the near future. It's kind of like when I spend $400 at the supermarket and then think, "I'll never have to go shopping again." It always turns out that I have to resume my market operations sooner than expected.

The upshot is this: the Fed thought it was doing a huge, huge thing with those LSAPs, but with the benefit of today's hindsight, it turns out they weren't so huge.  (Despite my $400 grocery purchase, I'm already running out of sandwich bread, and I don't have time to go shopping today.)  The Fed could have done a lot more.  And maybe if it had done more, the recovery would have been faster.

Tuesday, June 6, 2017

Record Job Openings Not As Impressive As It Sounds


I'm seeing a lot of headlines about a record high number of job openings.  While it's technically true that the number of job openings (as reported in the Job Openings and Labor Turnover Survey from the US Bureau of Labor Statistics) is at a record high, this statement needs lots and lots and lots of qualification.

The most important qualification is that the data only go back to December 2000.  If you were paying attention at the time, you remember that the dotcom bubble had already burst, and the US economy was heading into a recession.  And the subsequent housing boom wasn't really a broad hiring boom, so the JOLTS data don't give us an appropriate comparison for record job openings.

The second qualification is that the US economy has gotten larger over time, and the raw number of job openings largely reflects this scale increase rather than a boom in hiring.  The job openings rate is 4%, which is high for the series but has already been hit twice (July 2015 and July 2016).

The third qualification is that the jump in job openings in April was mostly in hotel and restaurant businesses, so it isn't a broad-based increase.

To put April's report in perspective, I estimated the job openings rate going back to 1951.  Prior to 1997, the estimates are based on the Conference Board's newspaper Help Wanted Index, normalized by total non-farm payrolls.  From 1997 to 2000, I adjusted the Help Wanted Index to reflect the increasing market share of online job advertising.  And I linked these data with the JOLTS data (which I use since December 2000) so that the earlier data can be interpreted in terms of a job openings rate.
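
The splicing step can be sketched as follows.  The numbers are invented placeholders (annual, for brevity); the real exercise uses monthly data and anchors to JOLTS from December 2000.

```python
# Illustrative splice of a normalized Help Wanted Index with the JOLTS
# openings rate.  All values below are made up for the example.
hwi_norm = {1998: 0.9, 1999: 1.0, 2000: 1.1}   # Help Wanted Index / payrolls
jolts = {2000: 3.3, 2001: 2.6, 2002: 2.4}      # JOLTS job openings rate, %

# Scale the older series so it matches JOLTS over the overlap period.
overlap = hwi_norm.keys() & jolts.keys()
factor = sum(jolts[y] for y in overlap) / sum(hwi_norm[y] for y in overlap)

# Use the rescaled older series wherever JOLTS is unavailable.
spliced = {year: value * factor for year, value in hwi_norm.items()}
spliced.update(jolts)
```

As the text notes, a different choice of scaling (or a different overlap window) would link the data differently and give somewhat different historical levels.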





From this chart it looks like today's 4% is historically typical.  Others may link the data differently and have somewhat different results, but in any case we're not in an unprecedented job opening boom.

Monday, September 8, 2014

Despite Tasci and Ice, I Still See an Increase in Structural Unemployment


[Update: Corrected an error in my analysis, though it oddly turns out to have very little effect on the relevant results.  Corrections are in italics.  I had fit the equation with AR(1) for the wrong time period, ending in 2005 instead of 2014.  Note that all fits should now be 12/2000 to 6/2014, the period of available JOLTS data.]


Back in 2010, there was a jump in US job openings (from an extremely low level) that was not accompanied by a commensurate decline in the unemployment rate.  Some saw this pattern as an indication of increased structural unemployment, with job openings becoming harder to fill from a given pool of unemployed.  At the time, I argued that it was not so:  job openings arise, and it takes time for them to reduce the unemployment rate; necessarily, there is a period when the unemployment rate remains higher than what would earlier have been associated with that number of job openings.   Then in 2012, I changed my mind.  A closer look at the data, including the additional two years that had passed, showed that, for a given number of job openings, the amount of hiring had declined.  That shift in the “matching function” suggested a change in the underlying relationship between unemployment and job openings, not just a temporary dynamic effect associated with the time it takes to fill new openings.

Recently some research has come out of the Cleveland Fed (cited approvingly by Paul Krugman) purporting to show that I was right the first time.  Specifically, Murat Tasci and Jessica Ice conclude that “there is no shift” in the Beveridge curve (the empirical relationship between job openings and unemployment).  They show that, in the years since that initial jump in job openings, the unemployment rate has fallen faster, and vacancies (job openings) have risen more slowly, ostensibly leading them back to the relation they had before the apparent shift in 2010.

I must say, first of all, that I don’t quite see their charts even appearing to show what they claim.  It’s true that, in vacancy-unemployment space, the point for 2014Q2 is very close to the point for 2008Q3; so, in a sense, any shift that was purported to have happened after 2008Q3 would now seem to have been an illusion.  But when I look at their chart, it looks like the shift actually happened between 2008Q2 and 2008Q3, when the unemployment rate rose and the vacancy rate failed to fall.  For the first two quarters of 2008, the not-yet-Great Recession looked much like the previous recession; then in 2008Q3 it appeared to shift to a new locus.  That apparent shift has not been reversed.

Comparing recent experience to the previous business cycle, it’s clear that we’re seeing a very different pattern this time around, not just in the intensity of the recession but also in the relationship between vacancies and unemployment.  Tasci and Ice have perhaps succeeded in demolishing the view that the large increase in vacancies in early 2010 represented a shift in the underlying relationship, but to my mind, that view has always been a straw man.  In any case it’s not a view that I ever held:  my research seemed to suggest changes in August 2006 and July 2010, the latter happening just after the alleged jump, not before or during.

Anyhow, in the light of the recent work, I decided to update my research, and, as my title suggests, my overall view hasn’t changed from 2012, though the details are a little different.  In my 2012 post I presented a model of the Beveridge curve, and my updated results can be described in terms of that model, but for the sake of generality I’m going to present them in a more agnostic way.

Start with the conventional “matching function,” which gives new hires as a function of unemployment and vacancies.  Using the JOLTS data (and using the absolute levels of hires, openings, and unemployment, as I did in my 2012 post), we can try to fit a matching function of the form lnH = a + b*lnV + c*lnU.  When I do this, I invariably get a negative value for c (regardless of specification details such as the inclusion of terms for autocorrelated residuals).  No plausible theory of the matching function gives a negative value for c.  (Surely it’s easier, not harder, to find people to hire if there are more people looking for jobs.)  So I re-fit, leaving out the U term.  (To put this another way, I’m fitting the equation with the constraint that c is non-negative, and I find the constraint to be binding.)

So I fit the equation with c=0, and I get a=4.52 and b=0.48, which would imply that hires are approximately proportional to the square root of vacancies, the same result I got in 2012.  Also as in 2012, I find that the residuals are autocorrelated (a Durbin-Watson statistic of 0.5, far from the ideal 2.0), presumably because the relationship has shifted over time.  So again I fit with an AR(1) term, but this time it is not sufficient to get rid of the Durbin-Watson problem.  (The Durbin-Watson statistic goes to 2.9, still far from where it should be.)  So I added an MA(1) term and an AR(2) term, and this finally seems to be enough to handle the serial correlation problem.  This time, with the ARMA(2,1) terms, a=5.68 (although this isn’t very meaningful, because the value of “a” is effectively shifted by the ARMA(2,1) terms), and b=0.33, which would imply that hires are approximately proportional to the cube root of vacancies (but in a way that shifts over time).
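
For readers who want to reproduce the flavor of this exercise, here is a minimal sketch of the log-linear fit and the Durbin-Watson diagnostic.  It is not my actual estimation (that one included AR and MA error terms, fit in standard econometric software), and the data below are made up, generated exactly from the fitted relationship so that the sketch recovers the coefficients quoted above.

```python
import math

def ols(x, y):
    """Simple OLS of y on x with an intercept, in closed form; returns (a, b)."""
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    b = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
         / sum((xi - xbar) ** 2 for xi in x))
    return ybar - b * xbar, b

def durbin_watson(e):
    """Durbin-Watson statistic of a residual series: near 2 means no
    first-order serial correlation; near 0 (or 4) means strong positive
    (or negative) autocorrelation."""
    return (sum((e[t] - e[t - 1]) ** 2 for t in range(1, len(e)))
            / sum(et ** 2 for et in e))

# Hypothetical vacancy levels; hires generated exactly from lnH = a + b*lnV.
V = [1.0, 2.0, 4.0, 8.0]
H = [math.exp(4.52) * v ** 0.48 for v in V]
a, b = ols([math.log(v) for v in V], [math.log(h) for h in H])  # ~(4.52, 0.48)
```

With real data the residuals would then be fed to `durbin_watson` to decide whether additional AR/MA terms are needed, which is exactly the sequence of steps described above.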

The interesting part, though, is what the residuals look like, so here they are:




[Note: This is the corrected chart.  The picture that originally appeared here is still up on Twitter and looks pretty similar.]

A couple of things are pretty clear about this picture.  First, there was a shift that took place between 2005 and 2008.  (The shift seems to be gradual, but given the amount of noise, it’s plausible that the shift could have happened in certain particular months, or even in just one particular month.  From the chart, the most decisive part of shift seems to happen between November 2007 and January 2008, which, probably not coincidentally, was also the turning point of the business cycle.)  Second, the shift does not appear to have been reversed.  (If you look closely, you might see another shift in 2010, which then seems to be reversed in 2013, but both the shift and the reversal could easily be noise, and in any case the original 2005-2008 shift has clearly not reversed.)

So here’s my conclusion:  something really did happen to make the Beveridge curve shift, and it was a persistent change.  Whether it was genuinely “permanent” we of course don’t know yet (since the current business cycle, one hopes, has a way to go, and the shift could be reversed later in the cycle), and whether it was truly a “structural” change is a question that is above my pay grade.  But I’m going to go with my best guess based on the available data and say that it looks like there was an increase in structural unemployment associated with the 2008 recession (or with what preceded and/or followed it).



DISCLOSURE: Through my investment and management role in a Treasury directional pooled investment vehicle and through my role as Chief Economist at Atlantic Asset Management, which generally manages fixed income portfolios for its clients, I have direct or indirect interests in various fixed income instruments, which may be impacted by the issues discussed herein. The views expressed herein are entirely my own opinions and may not represent the views of Atlantic Asset Management. This article should not be construed as investment advice, and is not an offer to participate in any investment strategy or product.

Friday, December 13, 2013

Stochastic Dynamic Inefficiency, Secular Stagnation, and the Natural Discounted Growth Rate


(I could have put a "wonkish" warning at the top, but come on, just look at the title!)

Let's start with the concept of the natural real interest rate, which is already ubiquitous. The real interest rate is the interest rate corrected for expected inflation, and the natural real interest rate is the real interest rate consistent with price stability.  (I leave the definition of price stability ambiguous because it depends on your theory of the price level and inflation, which is a whole 'nother blog post.)

Now market monetarists like Scott Sumner and Nick Rowe point to a deficiency in the natural real interest rate concept: the natural real interest rate depends on the expected real growth rate. And monetary policy affects the expected real growth rate, so if policymakers try to set the actual interest rate equal to the natural rate, they are chasing a moving target. To me, this criticism suggests that the appropriate approach is to adjust the nominal interest rate not for expected inflation but for expected nominal growth. (While we're at it, we can also replace "price stability" with "nominal growth stability" and be done with our futile attempts to measure the aggregate price level.)

Most economists think that the natural real interest rate is normally positive. I have my doubts, but never mind, because I'm ditching the whole concept. Once we start correcting for expected nominal growth rather than expected inflation, we are clearly not dealing with a natural rate concept that can be presumed to be normally positive.  If we are talking about a risk-free interest rate, then the need for physical capital returns to compensate for risk would make it very hard to achieve an equilibrium with the interest rate as high as the growth rate, let alone higher. To come up with a number that's usually positive, I suggest that we reverse the sign. Instead of talking about a "natural growth-adjusted interest rate," let's talk about a "natural discounted growth rate."

We can also talk about an "actual discounted expected growth rate."  (The discount is "actual," determined by the observable interest rate, but the associated growth rate is only "expected" because it is not known with any confidence when the interest rate is set.) If the actual rate equals the natural rate, you get a normal employment level and nominal income growth stability (or price stability, if you insist). If the actual rate is higher, you get accelerating inflation. If the natural rate is higher, you get depressed economic conditions, with excess unemployment and deflationary pressure, if not actual deflation. (Wicksellians, please recall that I have reversed the signs compared to the usual natural rate theory.)

Now that I've defined the natural discounted growth rate, I can define "secular stagnation" in the context of an NGDP target. Secular stagnation means that the natural discounted growth rate exceeds the growth rate of the NGDP target path.  In other words, the target path would require a negative nominal interest rate.  Under a level targeting regime, an attempt to pursue such a path will result in either monetary instrument instability or a "zigzag" growth pattern in which recessions alternate with inflationary catch-up periods. Under a growth rate targeting regime, you'll just keep missing the target from below, much like most of the developed world's central banks today. 
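
In symbols (my notation): write i for the nominal interest rate, g^e for expected nominal growth, and g_target for the growth rate of the NGDP target path.  Then the definitions above amount to:

```latex
% n  = actual discounted expected growth rate (set by the observable i)
% n* = natural discounted growth rate (the value of n consistent with
%      stable nominal growth and normal employment)
\begin{align*}
  n &= g^{e} - i \\
  n > n^{*} &\;\Rightarrow\; \text{accelerating inflation} \\
  n < n^{*} &\;\Rightarrow\; \text{depressed conditions} \\
  \text{secular stagnation:}\quad
  n^{*} > g_{\text{target}}
  &\;\Longleftrightarrow\;
  i_{\text{required}} = g_{\text{target}} - n^{*} < 0
\end{align*}
```

The last line is just the statement in the text: hitting the target path with n = n^{*} and g^{e} = g_{\text{target}} would require a negative nominal interest rate.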

But if you play with the numbers, you can probably convince yourself that secular stagnation, by my definition, seems unlikely. A reasonable NGDP target path growth rate is maybe 5%.  Do we really think that the potential growth rate exceeds the natural interest rate by more than five percentage points? It's remotely conceivable, but my guess would be that our problem today is not secular stagnation (in this sense) but a flawed monetary policy regime.

Now if the target nominal growth rate (and the associated possibility of secular stagnation) is one of our bookends for the natural discounted growth rate, the other bookend is pretty clear: it's zero. Some readers will immediately recognize zero as the criterion for dynamic efficiency. Ignoring the issue of risk for a moment, an economy with a strictly positive natural discounted growth rate would be dynamically inefficient. Overall welfare could be improved by instituting a stable Ponzi scheme that transfers consumption backward across generations.

Does my assertion that the natural discounted growth rate is almost certainly strictly positive imply that we actually live in a dynamically inefficient economy? In an important sense, I think it does.  The thorny issue here is risk, and some will argue that the relevant interest rate for dynamic inefficiency is not the risk-free rate.  But I disagree.  The US government can produce assets that are considered virtually risk free, and a stable Ponzi scheme operated by the US government could presumably produce such assets yielding any amount up to the growth rate.  At today’s Treasury interest rates, which are clearly less than expected growth rates, marginal investors are (we can presume, since the assets are freely traded) indifferent between these low-yielding Treasury securities and investments that represent newly created capital.  So, given the risk preferences of the marginal investor, the government could, by operating a stable Ponzi scheme, be producing assets that have a higher risk-adjusted return than newly created capital.  Given the risk preferences of the marginal investor, it’s inefficient for the government not to be producing such assets.

It’s important to recognize that, under a typical scenario, the marginal investor will end up worse off, in a material sense, from earning the growth rate, compared to earning the actual return on capital.  In a material sense, the economy is dynamically efficient.  But the concept of risk preferences models a subjective good – call it “security” – and we’re not producing enough of this good.  The history of interest rates and growth rates suggests that we have seldom produced enough security, but the deficiency today is clearly worse than usual.  




Friday, July 5, 2013

Taper Paradox

Suppose you get invited to a party.  You don’t expect it to be a very good party, because the mood in town is pretty bleak:  most people likely won’t even show up, and those that do won’t be much fun.  So you decide, “Unless I hear something good about this party, I’m just going to stay home.”  

Then you get a call from a friend who is at the party, and he convinces you that it’s better than expected and you should come.  You assure him you’ll be there.  So you start getting ready, but you’re still in no hurry to get to the party.  

Then you get a call from the host:  “Joe tells me you’re coming.  That’s great.  Can I ask you a huge favor?  If you have some rum, can you bring it?  The party is starting to get lively, but we’re going to run out of rum soon, and the liquor stores are closed.  If someone doesn’t bring some rum, we may have to take away the punch bowl earlier than expected.”  You don’t have any rum, unfortunately, but you know you can get to the party quickly if you hurry.  Assuming you like rum punch, does the knowledge that the punch bowl might be taken away early make you more or less eager to get to the party quickly?

I may have strained the traditional punch bowl metaphor here, so let me try to tell the more complicated story in economic terms.  According to some theories, national economies (and the world economy, perhaps) exhibit multiple equilibria.  If everyone expects everyone else to spend a lot, then it’s rational to spend a lot (e.g. buy a new car in anticipation of keeping or getting a job, build a factory in anticipation of selling a lot of output, etc.).  If everyone expects everyone else not to spend a lot, then it’s not rational to spend a lot.  So depression and recovery become alternative self-fulfilling prophecies.  As FDR famously put it, “The only thing we have to fear is fear itself.”

Normally, though, at least during the period from 1940 to 2007 in the US, fear itself isn’t a real problem.  Why not?  Because these equilibria depend on the interest rate, and the Fed controls the interest rate (at least in the short run), and usually the Fed can make the interest rate so low that the “bad,” low-demand equilibrium is no longer feasible.  A lot of projects that normally wouldn’t be worth doing (or purchases that wouldn’t be worth making) when demand is weak, become worthwhile even with low demand when they can be financed very cheaply.   But if people do these projects and make these purchases, demand won’t be weak.  So if the Fed keeps interest rates low enough, or even just credibly threatens to keep interest rates low enough, the low-demand equilibrium reduces itself to absurdity.
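
Here is a deliberately stylized sketch of that logic.  All the payoffs are made up: a project returns `revenue_high` per dollar if aggregate demand turns out high and `revenue_low` if it turns out low, and it is undertaken only if the return covers the gross funding cost 1 + i.

```python
def equilibria(i, revenue_high=1.10, revenue_low=1.03, cost=1.0):
    """Which self-fulfilling outcomes survive at funding rate i?
    (Toy model: invest iff revenue covers funding cost in the demand
    state that everyone expects.)"""
    funding = cost * (1 + i)
    eqs = []
    if revenue_high >= funding:   # if everyone invests, demand is high,
        eqs.append("boom")        # so investing is individually rational
    if revenue_low < funding:     # if no one invests, demand is low,
        eqs.append("slump")       # so not investing is individually rational
    return eqs

# At i = 5%, both prophecies are self-fulfilling; at i = 2%, even
# low-demand revenue covers the funding cost, so the slump is infeasible.
both = equilibria(0.05)      # ["boom", "slump"]
boom_only = equilibria(0.02) # ["boom"]
```

The Fed's usual trick is the second case: push i low enough that the "slump" branch can no longer sustain itself.  The zero bound is precisely the situation where no feasible i eliminates the slump.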

But recently we have faced the problem that interest rates can’t go below zero.  So we are back in FDR’s “fear itself” world, with multiple feasible equilibria.  In the low-demand equilibrium, the Fed struggles by keeping interest rates as low as it can get them, but that isn’t enough.  Barring more aggressively creative policies than the Fed has been willing to implement (retroactive NGDP level path targeting, anyone?), it just has to wait until people get more optimistic.  Or until people expect other people to get more optimistic.  Or until people expect other people to expect other people to get more optimistic.  Or until…well, you get the idea.  There’s reason to expect this optimism to come eventually, because capital depreciates (e.g. cars wear out, a growing population needs new places to live, etc.), so there will eventually be reason to expect higher demand.  But, as the Japanese have learned, the wait can be a very long one.

There’s a trick here, if you’re a prescient investor/entrepreneur.  Someday the demand will be back.   Someday the optimism will be back.  And the Fed will no longer have to struggle by keeping interest rates at levels that seem ridiculously low but still aren’t low enough.  But right now interest rates are still very low.  Suppose you could guess when the economy was about to recover and finance a project at the low “bad equilibrium” interest rates while subsequently benefitting from the demand that will come when the economy recovers.  You’d stand to make a lot of money.

So if, for whatever reason – even if it’s for no real reason at all – there’s a shift toward optimism, it’s like yelling fire in a crowded theater.  (If you don’t like my “punch bowl” cliché, I have plenty of others.)  Everyone wants to be that prescient investor/entrepreneur who finances cheaply in the bad equilibrium and gets windfall demand in the good equilibrium.  Once you make your mind up to go to the party, you want to make damn sure you get there before the punch runs out.  As we say in wonkspeak, systems with multiple equilibria often exhibit highly nonlinear dynamics.  There is a tipping point, a straw that breaks the camel’s back.  (Really, plenty of others.)

What happens, then, when the Fed starts to talk about tapering its bond purchases?  It depends.  If we take the camel’s back to represent economic depression, there are (at least) three possibilities.  If the camel’s back is already clearly broken, the tapering talk should already have been anticipated, and it will have little effect.  If the camel’s back is still strong, then the tapering talk will make it even stronger, fortifying the depression against the already ineffectual straws of optimism.  But suppose the camel’s back is just in the process of breaking: some people have concluded that it’s definitely not going to hold, some are getting very close to that conclusion, and some still need convincing.  What happens then?

I’m pretty sure it’s ambiguous and depends on how you set up and calibrate the model, etc.  I imagine someone has tried to model this formally, but I’m too lazy for that (and being a private sector economist, rather than an academic, I don’t get paid to do theoretical modeling).  In any case, it seems quite plausible to me that, under reasonable conditions that may approximate those we have faced over the past month, tapering talk could accelerate the shift from the bad equilibrium to the good one.  That acceleration would be consistent with the observation that the dramatic moves in the bond market have had only a little apparent impact on the stock market.  (If people are discounting the same cash flows at a much higher discount rate, stock prices should have gone down considerably, but they’ve barely declined at all, which suggests that expected cash flows have risen.)

Note that tapering talk implies that (1) the Fed, which may have better information than we do, is more optimistic than we thought and (2) if you were nearly convinced that the depression is over, you had to make up your mind and act on your belief as quickly as possible, or you would lose the opportunity to profit from it.  Given the existence of multiple equilibria, it’s quite possible that the Fed’s tapering talk has had the paradoxical effect of accelerating the recovery, which would explain why markets now seem to expect the Fed to start raising short-term rates sooner than the Fed itself has implied.

Do I think the Fed did the right thing by strategically engaging in verbal tightening at just the time that it would have a paradoxical effect?  No.  For one thing, the Fed obviously didn’t anticipate this response, and in any case the interpretation I’ve suggested here is highly speculative.  And even if my interpretation is right, and even if the Fed is cleverer than we think and actually intended it this way, I still don’t think they did the right thing.  Accelerating the recovery is a good thing, all other things equal, but it’s not the most important thing.  The most important thing is for the Fed to assure us, in no uncertain terms, that it will continue to support the recovery until there is no ambiguity left.  My guess is that, given what I imagine the Fed’s preferences to be, starting to tighten (verbally) now will turn out to have been the right thing to do.  But my guess, even if it is the best guess based on the information I have, is subject to a lot of uncertainty.  From the point of view of the recovery, mentioning the taper last week was a risky move, and even if the risk pays off, I don’t think it’s a risk the Fed should have taken.






Wednesday, February 13, 2013

Why Doves Are Really Hawks


Machismo is a type of commitment mechanism. 

If you’re a perfectly rational nerd, people will always expect you to do the rational thing.  You won’t be able to make credible threats unless it would be rational to carry out the threat.  And it seldom will be.  After all, how often is it really rational to whoop someone’s ass?

On the other hand, if you’re a tough, macho badass, people will always expect you to do the tough, macho badass thing.  You’ll always be able to make credible threats, because carrying out threats is always the tough, macho badass thing to do.  And since the threats are credible, you mostly won’t have occasion to carry them out.

This principle has a traditional application to monetary policy.  If your central banker is a perfectly rational nerd, he’s going to let the inflation rate get too high, because he won’t be able to make a credible threat to cause a recession.  People won’t expect him to carry out the threat, because in most cases it won’t be rational to carry out the threat.  After all, how often is it really rational to cause a recession?

On the other hand, if your central banker is a tough, macho badass, he’s not going to let the inflation rate get too high, because he will be able to make a credible threat to cause a recession.  People will expect him to carry out the threat, because causing a recession is the tough, macho badass thing to do (for a central banker).  And since the threat is credible, people will keep their prices down, and he won’t have to carry it out.  (OK, the economics is a little more complicated, but that’s the general idea.)

So what kind of central banker do you want if you hope to keep the inflation rate from getting too high?  Obviously you want a tough, macho badass.  You want the kind of central banker that likes to pick up small animals in his talons so that he can crush them to death and serve them for dinner.  You sure as hell don’t want the kind who just likes to fly around looking pretty and making cute cooing noises.

That theory made a lot of sense in the 1980’s, but the world has changed.  The inflation rate hasn’t been too high for 20 years.  We are in the middle of a minor depression, and the way to get out of it is to threaten inflation.  Tell people they had damn well better start doing something useful with their cash or else, as soon as you get a chance, you’re going to make its purchasing power start evaporating.  Of course, when the time comes, it won’t be rational to carry out that threat.  If you’re a perfectly rational nerd, the threat won’t be credible.

So what kind of central banker do you want if you hope to get out of this depression?  Obviously you want a tough, macho badass.  You want the kind of central banker that likes to pick up small animals in his talons so that he can crush them to death and serve them for dinner.  I’m certainly no expert in ornithology, but it just seems to me that “dove” is not the right term for that kind of central banker.



DISCLOSURE: Through my investment and management role in a Treasury directional pooled investment vehicle and through my role as Chief Economist at Atlantic Asset Management, which generally manages fixed income portfolios for its clients, I have direct or indirect interests in various fixed income instruments, which may be impacted by the issues discussed herein. The views expressed herein are entirely my own opinions and may not represent the views of Atlantic Asset Management. This article should not be construed as investment advice, and is not an offer to participate in any investment strategy or product.

Wednesday, January 9, 2013

Inflation vs. Price Level Targeting


I don’t have time for a real blog post, but here’s a quickie in an attempt to keep this blog alive.

Dave Altig and Mike Bryan of the Atlanta Fed’s Macroblog argue here that it wouldn’t make much difference if the Fed were doing price level targeting (in which the future target path stays fixed even when you miss a target, so you need catch-up inflation or catch-up disinflation) rather than inflation targeting.  Their evidence is mostly from a chart like this (my replication using monthly data, which you can confirm looks fairly similar to theirs, which appears to use annual data):



Quoting from their blog post:
Consider the first point on the graph, corresponding to the year 1993…. This point on the graph answers the following question:

By what percent would the actual level of the personal consumption expenditure price index differ from a price-level target that grew by 2 percent per year beginning in 1993?

The succeeding points in the chart answer that same question for the years 1994 through 2009.

In my case, as I said, it’s monthly, and it goes all the way to 2012, but the idea is the same.
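For concreteness, here's the computation behind each point of the chart, as a sketch (my own code, not Altig and Bryan's; `demo` below is a fabricated stand-in for the monthly PCE price index). Each plotted point for a given starting date is just `pct_deviation(index, start)[-1]`.

```python
import numpy as np

def pct_deviation(index, start, annual_target=0.02, periods_per_year=12):
    """Percent by which the index differs from a path growing at the target
    annual rate, anchored at observation `start`."""
    g = (1 + annual_target) ** (1.0 / periods_per_year) - 1
    path = index[start] * (1 + g) ** np.arange(len(index) - start)
    return 100.0 * (index[start:] / path - 1)

# Sanity check on a made-up index that grows at exactly 2% per year:
# the deviation from a 2% target path anchored anywhere is essentially zero.
demo = 100.0 * 1.02 ** (np.arange(240) / 12.0)
dev = pct_deviation(demo, start=60)
```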

OK, fine.  So here it looks like a price level target would have produced roughly the same results as the Fed’s (unofficial until January 2012) inflation target, and whether it would have undershot or overshot depends on when you start the target path.  In particular, people who argue that we are undershooting right now don’t seem to have much of an argument unless they start the target path in 2008 or later.

BUT…

The problem with this chart is that it uses the headline PCE price index, whereas during most of this time (until January 2012 when the official inflation targeting policy was introduced), the Fed was perceived to be targeting core PCE inflation (excluding food and energy, that is), not headline inflation.  Price-setters were making their decisions largely under that assumption.  It makes no sense to go back to 1993 and set up a target path using the headline price index when that index was irrelevant to the policy that the Fed seemed to be following at the time.

Moreover, targeting the price level using the headline price index is a bad idea anyhow.   If you're going to use a price level target (and I do think it would be better than an inflation target), then you don't want to use a price index that will be subject to shocks that are volatile but persistent. A one-time increase in the price of oil should not require a subsequent compensating decline in other prices to offset it (nor should a one-time decrease in the price of oil require a subsequent burst of inflation to offset it). Theoretical arguments would suggest using an index of sticky prices, but the core is a reasonable approximation.

Here’s what my chart above looks like when you use the core PCE price index instead of the headline index.


Very different.  By this measure we are undershooting now no matter when you start the target path.  And unless you cherry-pick the starting point in 2003 or 2011, the size of the undershoot is not insignificant.  If you compare to the 1990’s, the Fed was already slightly behind when the Great Recession began, and they have fallen further and further behind since then.  Price level targeting, using the core price index, would require the Fed to promise a significant amount of catch-up inflation in the coming years.




Sunday, September 23, 2012

James Medoff, Stagflation, the Phillips Curve, and the Greenspan Boom


James Medoff, my thesis advisor in graduate school and later my collaborator and business associate, died on Saturday, September 15 after a long struggle with multiple sclerosis.  In the field, he was probably best known for his work on labor market institutions, and particularly for his work with Richard Freeman on the impact of unionization.  But by the time I started working with him, most of that was in the past.  I was a student of macroeconomics, not labor economics, but I was intrigued by a paper he had written with Katharine Abraham entitled “Unemployment, Unsatisfied Demand for Labor, and Compensation Growth, 1956-1980.” It seemed to provide a critical missing piece in the puzzle of macroeconomics.

Why was there stagflation (stagnation and inflation at the same time) in the 1970’s?  When I was in graduate school, there were two popular (complementary) explanations.  First, the Fed had been too easy because it didn’t adequately account for the way inflation expectations would become ingrained.  (In the cartoon version of this idea, the Fed goes from believing in a static downward-sloping Phillips curve to realizing – much too late in the game – that the long-run Phillips curve is vertical, but in reality there were certainly some steps in between.)  Second, there were oil shocks, shocks to aggregate supply which drove prices up and employment down.  A third explanation you might also hear was that the Fed had responded to political pressure from the Johnson and Nixon (and possibly Carter) administrations and loosened at the wrong times.

Doubtless there was something to all of these ideas, but the Medoff-Abraham paper suggested a completely different explanation.  Essentially what it said was that there was not nearly as much “stag” in the stagflation as we thought.  The labor market, it suggested, had been booming during much of the 1970’s despite the appearance of high unemployment.  The implication was that the unemployment of the 1970’s was largely “structural” (at least that’s the term used in debates about today’s unemployment), and once you realized that, the accompanying inflation shouldn’t surprise you.

When I took James’ graduate course in 1989, this idea was particularly important, because the situation was beginning to reverse itself.  The plateau in structural unemployment lasted from maybe 1975 to maybe 1987, and after that it began to decline.  By the time I graduated, in 1994, this decline was well underway, and our data were suggesting that the US economy could support considerably lower unemployment rates without sparking inflation.  James was invited to the Fed’s meeting of academic consultants that year to make the case for lower unemployment, and I went along to help and observe the discussion.  As I recall, there were about 10 people at the table, and James was the only one saying that it was OK to keep interest rates low and let the unemployment rate fall further.

Naturally, rather than listen to a single maverick, the Fed kept raising rates, and maybe that was for the best, since the recovery turned out to be stronger than most people (including us) had expected.  But over the next few years, something unusual happened.  The unemployment rate kept coming down, and the inflation kept not happening, and now it was Alan Greenspan himself, not some out-of-the-mainstream labor economist from Harvard, who was insisting (against some substantial resistance) that it was OK to keep interest rates low and let the unemployment rate continue falling.

Did James Medoff ultimately influence monetary policy, and was he therefore partly responsible for the boom of the late 1990’s?  Who knows?  If I had Alan Greenspan’s ear, I might ask him.  At his funeral, James’ daughter Susanna said that, in fifth grade, she had wanted to dress up as her father for “Dress as Your Hero Day,” but the teacher wouldn’t let her, so she dressed as Alan Greenspan instead.  In those days, a lot of adults may have considered Greenspan a hero, but I doubt many other fifth graders did.  For the record, I still think Greenspan did a remarkable job with the macroeconomic aspects of monetary policy, and his hero status (since rescinded by most commentators) was not without justification.

In any case, I feel that the research I mentioned is relevant today in a couple of ways.  For one thing, the saga of structural-vs-cyclical unemployment goes on today.  Using techniques similar to those used by Abraham and Medoff, my best guess is that, after the long decline that began around 1988, structural unemployment reached a trough in 2005 and has been rising since then.  (However, I see no particular evidence of a discontinuous increase during the Great Recession, or immediately before or after, and the increase since 2005 has not been particularly rapid, so I don’t buy the view that “our problem is structural.”)

Another way the research is relevant is that it reframes the 1970’s.  By the end of the 1970’s, most economists were convinced that, as textbooks put it during my undergraduate years and maybe still do, “the long-run Phillips curve is vertical.”  In other words, there is no long-run tradeoff between unemployment and inflation.  In the minds of most economists this conclusion was necessitated by the experience of the 1970’s, during which it seemed to become obvious that higher inflation was not generally associated with lower unemployment.  But if much of the problem of the 1970’s was structural, then the conclusion is not so obvious.  Perhaps, conditioning on the structure of the labor market, a downward-sloping Phillips curve still exists, even in the long run.  Indeed, more recent evidence suggests that there is such a tradeoff after all, at least at low inflation rates.

This is important because the US seems to be in the middle of that tradeoff right now.  If you believe there is no tradeoff, if you believe the long-run Phillips curve is vertical, then it's hard to explain how there's still any inflation at all after almost 5 years during which we had first an extremely deep recession and then a painfully slow recovery that has left output still well below any reasonable estimate of the economy's potential.  After 5 years, we should surely be making our way toward the long run, that vertical Phillips curve at full employment. If we're not, it must be because demand is astonishingly weak, and that astonishingly weak demand should be associated with an inflation rate that falls lower and lower until it becomes negative. (This is the flip side of an overheated economy that produces ever-accelerating inflation.) But that isn't happening. Instead we're seeing something that looks a little bit like the old-fashioned downward-sloping static Phillips curve, where low, but not necessarily falling, inflation rates are associated with persistent excess unemployment.
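A toy simulation (my own, with made-up numbers) makes the contrast concrete: with accelerationist, adaptive expectations, a persistent unemployment gap drives inflation ever lower and eventually negative; with anchored expectations you get exactly the pattern we observe, low but stable inflation alongside persistent excess unemployment.

```python
def inflation_path(anchored, years=5, pi0=2.0, gap=3.0, slope=0.5, anchor=2.0):
    """pi_t = pi_e - slope*gap, where the expectation pi_e is either anchored
    at `anchor` or adaptive (last period's inflation).  All numbers are
    illustrative, not estimates."""
    pi, path = pi0, []
    for _ in range(years):
        pi_e = anchor if anchored else pi
        pi = pi_e - slope * gap
        path.append(pi)
    return path

print(inflation_path(anchored=False))  # [0.5, -1.0, -2.5, -4.0, -5.5]: deflation
print(inflation_path(anchored=True))   # [0.5, 0.5, 0.5, 0.5, 0.5]: low but stable
```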

I admit this isn't what I expected. I wrote a blog post a couple of years ago predicting deflation. Even after having questioned the conventional wisdom, I had found it too strong to resist. The vertical long-run Phillips curve, I thought, might not be quite right, but it was "a close enough approximation," and if I denied this, I'd face excommunication from the Church of Macroeconomics. Deflation was coming, I thought.  I was wrong.

I never got a chance to discuss this question with James. During his last years he found it increasingly difficult to think and express himself clearly, so it's unlikely we could have had a productive discussion. But I can imagine what he would have said 20 years ago. He would have talked about his contacts in industry and how they weren't about to destroy morale by cutting wages, even if the economy stayed weak for several years. After some discussion I think we would have come to the conclusion that the vertical long-run Phillips curve was actually a pretty crummy approximation. That's certainly what I think now. I'm going to have to pay more attention in the future to what Hypothetical James Medoff has to say. He lives on.










Monday, August 27, 2012

The Fed and Fiscal Responsibility


If the US goes off the fiscal cliff – that is, if tax increases and spending cuts go into effect in 2013 as currently scheduled – can monetary policy actions offset the macroeconomic impact?  Ben Bernanke doesn’t think so – indeed he’s certain they can’t – and he has said as much.

But on some level he must be wrong.  True, it’s hard to think of any feasible monetary policy action that would both be strong enough and have a sufficiently quick impact to offset the fiscal cliff directly.  But what matters more for monetary policy is not the direct effect but the effect on expectations.  Surely the Fed could alter expectations of future monetary policy in such a way that the resulting increase in private spending would be enough to offset the decreased spending due to fiscal tightening.  Just think, for example, if the Fed were to increase its long-run inflation target.  If nothing else, a sufficiently large increase in long-run US inflation expectations would make the dollar sufficiently unattractive to result in an export boom that would offset the fiscal tightening.  More important, perhaps, it would make currency and Treasury securities less attractive to Americans and encourage them to do other things with their wealth, such as buying houses and durable goods and investing in productive capacity.

Of course that isn’t going to happen.  To get the Fed to do something as drastic as increasing its long-run inflation target, we’d need more than a fiscal cliff; we’d probably need something like a repeat of the 1930’s.  But at this point the Fed has a substantial amount of flexibility even within the confines of its long-run target, because it hasn’t specified how that target would best be implemented.  It hasn’t said, for example, whether the target should be interpreted as a growth rate target – where policy constantly begins with a clean slate, ignoring previous missed targets – or a level path target – where policy always attempts to compensate for earlier misses and regain the original target path.  If the latter case prevails, the Fed hasn’t said whether the target path would be retroactive and if so how far back it would be retroactive (for example, choosing 2007 as a base year for the target path instead of 2012).  Moreover, while the Fed has affirmed its commitment to its dual mandate, it hasn’t said how its inflation targeting approach would interact with its employment mandate.
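Mechanically, the growth-rate-vs-level-path distinction comes down to what you rebase on after a miss (a sketch of my own, not any official formulation): a growth-rate target starts fresh from the actual outcome, while a level-path target keeps compounding the original path, which is what forces catch-up.

```python
def targets_after_miss(level_path, actual0=98.0, target0=100.0,
                       rate=0.02, years=3):
    """Target sequence after a one-time miss (actual 98 vs. target 100),
    assuming the target is then hit in every later year.  Illustrative."""
    targets, prev_target, prev_actual = [], target0, actual0
    for _ in range(years):
        base = prev_target if level_path else prev_actual  # rebase on the miss?
        t = base * (1 + rate)
        targets.append(t)
        prev_target = prev_actual = t
    return targets

print(targets_after_miss(level_path=True))   # compounds from 100: forces catch-up
print(targets_after_miss(level_path=False))  # compounds from 98: miss is forgiven
```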

One way to implement the long-run inflation target would be as follows.  First, estimate the economy’s potential output path that was, as of 2007, consistent with maximum employment.  Then add to this a 2% inflation path starting from the 2007 price level.  Express the result as a target path for nominal GDP, and project that path into the future at the estimated future growth rate of potential output plus 2%.  Pursue this path as a level path target.

Because nominal GDP has fallen so far below the path that would, in 2007, have been consistent with 2% inflation at estimated potential output, this approach implies a very dramatic period of catch-up.  Essentially, the Fed would be committing to follow a very aggressive pro-growth, pro-inflation policy over the medium run as soon as it is able to get some traction on the economy.  But it would be doing so in a way that is consistent with its 2% long-run inflation target.
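As a sketch, the path I'm describing could be constructed like this (illustrative code of my own; the potential-output figures are placeholders, not estimates):

```python
def ngdp_target_path(potential_real_gdp, base_price_level=1.0, infl=0.02):
    """Level-path target: nominal GDP consistent with `infl` inflation from
    the base year (2007 in the text) at estimated potential real output.
    potential_real_gdp[0] is the base-year estimate."""
    return [y * base_price_level * (1 + infl) ** t
            for t, y in enumerate(potential_real_gdp)]

# With potential growing 2.5%/yr (a placeholder), the nominal target path
# compounds at about 4.55%/yr; actual NGDP far below it implies catch-up.
potential = [15.0 * 1.025 ** t for t in range(6)]   # trillions, hypothetical
path = ngdp_target_path(potential)
```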

The effect on expectations would be quick and dramatic.  By promising either growth or inflation or both, the Fed would make hoarding cash (or other safe assets) look like a clearly losing proposition.  Depending on whether you expect inflation or growth, either your money will lose its purchasing power, or you will miss out on a lot of profits as real assets recover.  My guess is that, with this change in the medium-run outlook, the resulting increase in private spending over the short run would more than offset the fiscal cliff.  Your guess may be different, but in any case we’re talking about an impact considerably larger than what can be accomplished with the kind of changes in its balance sheet that the Fed typically contemplates now when it thinks about trying to stimulate the economy.  If Ben Bernanke were contemplating anything like what I am suggesting, he clearly wouldn’t be justified in being certain of his inability to offset the fiscal cliff.

OK, this isn’t going to happen either.  At least it’s highly unlikely.  Ben Bernanke isn’t going to have his “Volcker moment,” as Christina Romer called it, just in time to offset a huge tightening in fiscal policy.  And, with any luck, the tightening in fiscal policy won’t be as huge as current law prescribes:  after the election, hopefully, either one party will be in power, or Democrats and Republicans will be able to come to enough of an agreement to prevent disaster.

But the sad thing is that preventing disaster almost certainly means putting the US back on an unsustainable fiscal path – because there’s very little chance that Congress will be able to agree on a credible long-run fiscal plan at the same time that it agrees on a way to avoid going over the cliff in the short run.  Assuming that we do go over the cliff and that the Fed doesn’t offset the impact, the long-run fiscal results may not be much better, because the growth impact of the fiscal shock – allowing for hysteresis effects – will undo at least part of the improvement in the budget.  For those whose primary concern is fiscal sustainability, the best-case scenario would be that we do go over the cliff and that the Fed acts aggressively to offset the macroeconomic impact.

Again, it isn’t going to happen.  And that’s kind of sad.  The Fed’s timidity is creating a situation where the only realistic choices – for the moment anyhow – are economic disaster and fiscal irresponsibility.  Doesn’t that mean that the Fed bears some responsibility for the fiscal problems that are eventually likely to emerge?




Monday, April 23, 2012

An Ultraminimalist Model of the Beveridge Curve, or, How I Learned to Start Worrying and Love Structural Unemployment


Where do businesses find people to hire?  A few new employees – graduating students, for example – are recruited from outside the labor force, but I’m going to ignore them (as later I will also ignore retirees, figuring that they roughly offset each other).  Most new employees come either from among the unemployed or from other firms.  Hiring the unemployed is easy inasmuch as they’re usually knocking at your door asking for jobs.  On the other hand, the selection process might be difficult, since they aren’t doing a job now, so you have to make an educated guess as to whether they’ll be good at the job for which you’re hiring.  Hiring people from other firms is difficult in that you have to go out and actively recruit them, as well as making an offer that justifies leaving their old job, but the selection process is easier, because all you have to do is find someone who is already doing a job similar to the one for which you’re hiring.

So there is a tradeoff.   Presumably the terms of this tradeoff depend on how many people are unemployed:  if only a few people are unemployed, then the number of qualified unemployed applicants will be low, and they’ll be in demand from other firms, so you’ll have to pay them well, so you might as well just try to poach someone directly from another firm; if a lot of people are unemployed, the number of qualified unemployed applicants will be high, and they’ll be willing to accept less attractive offers, so poaching might not be worth it.  So here’s the crux of my model:  the fraction of new hires that comes from the unemployed depends on how many unemployed there are.  Using “H” for total hires, “He” for “hires out of employment,” and “Hu” for “hires out of unemployment,” we have Hu/H=f(U), where f() is some increasing function, and for adding-up, we have H=He+Hu.

Now, why do people quit their jobs?  Some retire, but I’ve already said I’m going to ignore them.  Some quit for other personal reasons, and I’m going to ignore them too.  A few people quit, especially when the labor market is strong, because they don’t like their job and figure it will be easy enough to find a new one.  I’m also going to ignore them.  Most people who quit, I believe, quit because they already have another job lined up.  In other words, if we ignore all the categories I’m ignoring, then the number of quits equals the total number of new hires minus the number of new hires that are hired out of unemployment, or Q=H-Hu.  Putting this proposition together with the one at the end of the last paragraph and solving simultaneously, we get Q=H*(1-f(U)).

OK, what about layoffs?  It may sound crazy at first, but I’m essentially going to ignore layoffs.  I’m going to assume that they happen at a constant rate.  We do know that layoffs tend to spike during the early part of a recession (or in the case of the recent recession, in the middle, when the “Great Recession” took over from the “little recession” that was already in progress).  But the typical spike is fairly small compared to the total number of layoffs.  (We notice those layoffs more because they result in significant spells of unemployment, whereas non-recession layoffs often result in just changing jobs, or in brief spells of unemployment that often aren’t long enough to justify filing an unemployment claim.)  So the “constant layoffs” assumption isn’t too far from the truth.  Also, layoff spikes are clearly “disequilibrium” phenomena that induce changes in the unemployment rate rather than explaining how a given unemployment rate is maintained.  In thinking about the Beveridge curve, I’m interested in the equilibrium relationship between unemployment and job openings.

And here’s the equilibrium condition.  I’ll ignore longer run changes in the labor force and the capital stock and define equilibrium as constant total employment (which implies constant total unemployment, since I’m ignoring labor force changes).  Constant employment implies that hires equal separations.  I’ll ignore “other separations” and assume all the separations are either quits or layoffs.  Then we have H=Q+L (where L stands for “layoffs,” not “labor”).

Since I’ve assumed that layoffs are constant, we have three variables here, U, H, and Q.  We’re more interested in hires than quits, so we can solve to eliminate Q, and we get H=L/f(U).  Since f() is an increasing function, this gives us an inverse equilibrium relationship between hires and unemployment.
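The algebra is easy to check numerically once you write down an assumed functional form for f (everything below is illustrative; f(U) = c*U^d is just my stand-in for "some increasing function"):

```python
def f(U, c=0.2, d=0.37):
    """Assumed share of hires that come from unemployment (illustrative)."""
    return c * U ** d

L_ = 1.8    # constant layoffs, millions per month (made up)
U = 9.0     # unemployed, millions (made up)

H = L_ / f(U)     # equilibrium hires implied by H = L/f(U)
Hu = H * f(U)     # hires out of unemployment: Hu/H = f(U)
Q = H - Hu        # quits: everyone who quits has a new job lined up

assert abs(H - (Q + L_)) < 1e-9   # the equilibrium condition H = Q + L holds
```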

Ultimately we want to relate unemployment to job openings, since that’s what the Beveridge curve is about.  How do hires relate to job openings?  One traditional approach is to fit a “matching function” in which hires are an increasing function of both job openings and unemployment.  The theory is that it should be easier to fill job openings when there are a lot of unemployed people looking for jobs.  I tried fitting such a function using JOLTS data, and the coefficient on unemployment consistently came out with the wrong sign, no matter how many polynomial time trends or dummy variables I put in, and even when I included an interaction term between unemployment and the availability of extended unemployment benefits.  Actually, that result is what motivated this model.  While obviously a high unemployment rate will reduce the number of people who quit their jobs in order to fill job openings, it does not apparently result in those openings being filled any more quickly.  So my matching function is a one-variable function: H=m(V), where V (“vacancies”) is the number of job openings.

Empirically, I fit H=m(V) as H=a*V^b, where a and b are fitted constants.  (Why do I use that form? Tradition, I suppose:  it just seemed reasonable.  It allows for the intuitive special case where b=1, so that job openings fill at a constant rate, but one casual look at the data will tell you that b<1 in reality: openings fill more quickly when there are fewer of them.)  The fit is pretty good (“log V” explains 78% of the variance in “log H” with a slope coefficient of about 0.5, implying that H is proportional to the square root of V), but there is an obvious pattern in the residuals.  (The Durbin-Watson statistic is a mere 0.7 – in case this post isn’t wonkish enough already.)  The cumulative sum of the residuals peaks in July 2006, suggesting that there may be a structural break in August.  A casual look at the residuals strongly suggests another structural break in July 2010.  Both purported structural breaks go in the same direction:  a decline in the number of hires associated with any given number of job openings.  So, contrary to what I said in 2010, it does look like we are seeing more structural unemployment now than in the past.  (In my defense, the first break occurs long before the recession, so I was right to assert that recession had not produced an increase in structural unemployment; and the second break occurs just when I was making that assertion, so I had no data from after the break.)
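The fit itself is just a log-log regression. Here's the mechanics on synthetic data standing in for the JOLTS series (the data are fabricated, so only the procedure, not the 78%-of-variance or 0.5-slope results, carries over):

```python
import numpy as np

rng = np.random.default_rng(0)
V = rng.uniform(2.0, 5.0, 120)    # job openings, millions (made up)
H = 2.5 * V ** 0.5 * np.exp(rng.normal(0.0, 0.03, 120))   # true b = 0.5

b, log_a = np.polyfit(np.log(V), np.log(H), 1)   # fit log H = log a + b*log V
resid = np.log(H) - (log_a + b * np.log(V))
# b should land near 0.5 here; on the real data, a Durbin-Watson-style check
# on resid is what flags the structural breaks (absent here by construction).
```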

After I had done all this pseudo-theorizing, I decided to do a little pseudo-test of my pseudo-model, and it actually holds up surprisingly well (allowing the function f(U) to have the same form as m(V), because don’t all functions have that form, damn it!).   There is a nice, linear-looking, downward-sloping relationship between the log of hires and the log of unemployment.  Log unemployment explains almost 90% of the variance in log hires, with a slope coefficient of -0.37, and the coefficients are robust to the inclusion of an ARMA(1,1) residual process that results in a Durbin-Watson statistic of precisely 2.0.  (Ooooh, talk nerdy to me, Baby!)  There is no obvious pattern in the residuals.  Surprisingly, there are only 3 significant outliers (March 2003, November 2008, and May 2010; call them “Iraq War,” “Post-Lehman,” and “Census”).  At least I find that surprising, because this is an equilibrium model of a system that obviously, in real life, is subject to shocks that move it out of equilibrium – as we know from the fact that the unemployment rate changes a lot.  If you take this test at face value, it suggests that the equilibrating forces (which I haven’t tried to model) are very strong.

So what does all this imply about the natural rate of unemployment?  To answer that question we need a model of aggregate supply, and I happen to have one up my sleeve.  Here’s my model:  there’s a constant natural rate of job openings.  That’s it.  If firms have an unusually large number of positions to fill, they bid up wages, and you get accelerating inflation.  If firms have an unusually small number of positions to fill (like right now, but even more like three years ago), they start to let wages erode, and you get decelerating inflation (although research now suggests that it’s very difficult to erode wages that already aren’t rising, so this won’t work very well unless there is some substantial inflation or productivity growth to begin with – but all that belongs in another post).  Somewhere there’s a happy medium rate of job openings, such that wages tend to continue rising at a rate consistent with the expected rate of inflation.  That’s the natural rate of job openings.  Or the Non-Accelerating Inflation Rate of Job Openings (NAIRJO).  Or the Non-Accelerating Inflation Rate of Vacancies (NAIRV).

If the relationship between hiring and unemployment is stable, as it appears to be, then my model implies that shifts in the matching function will determine a shifting relationship between the (assumed constant) NAIRV and the NAIRU (Non-Accelerating Inflation Rate of Unemployment, a.k.a. the natural rate of unemployment).  For what it’s worth, my estimates suggest that the hypothesized August 2006 and July 2010 shifts in the matching function would, collectively, increase the NAIRU by a factor of about one and a third.  So if the NAIRU was 4.5% (my best guess, which happens to be conveniently divisible by 3) in July 2006, it is 6% now.  Of course, by the time the unemployment rate gets down to 6%, there’s a good chance that the matching function will have shifted again, but as for which direction and how far, your guess is as good as mine.
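The "factor of about one and a third" falls out of the two fitted relations. With a constant NAIRV, a downward shift in the matching function lowers equilibrium hires, and H ~ U^-0.37 converts that into a NAIRU multiplier (the 10% shift below is my back-of-envelope stand-in for the two breaks combined, not a number from the post):

```python
def nairu_multiplier(match_shift, hires_unemp_slope=-0.37):
    """If the matching function shifts so that any V yields `match_shift`
    times as many hires, equilibrium U scales by match_shift**(1/slope),
    since H ~ U**slope implies U ~ H**(1/slope)."""
    return match_shift ** (1.0 / hires_unemp_slope)

print(round(nairu_multiplier(0.9), 3))  # → 1.329, i.e. roughly 4.5% -> 6%
```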

UPDATE: Posted scatter of log hires vs log unemployment on Twitter.

UPDATE2: Posted graph of series hires/sqrt(openings) on Twitter.



Thursday, March 15, 2012

Federal Funds and the Paradox of Conditional Promises

The Fed’s Open Market Committee met on Tuesday and issued a statement. What changed in this statement compared to the Fed’s previous statement? Here’s what I think is the most important change. On January 25, the Fed said:

In particular, the Committee decided today to keep the target range for the federal funds rate at 0 to 1/4 percent and currently anticipates that economic conditions--including low rates of resource utilization and a subdued outlook for inflation over the medium run--are likely to warrant exceptionally low levels for the federal funds rate at least through late 2014.


On March 13, the Fed said:

In particular, the Committee decided today to keep the target range for the federal funds rate at 0 to 1/4 percent and currently anticipates that economic conditions--including low rates of resource utilization and a subdued outlook for inflation over the medium run--are likely to warrant exceptionally low levels for the federal funds rate at least through late 2014.


See the change? What, you don’t? You must have forgotten to put on your X-ray vision goggles!

There is a change, but it isn’t visible to the naked eye. At least it’s not visible if you just look at the words. What we have here is a case of the shifting relationship between signifier and signified.

Suppose your spouse calls from work and says, “I’ll be home in two hours.” Then an hour later, your spouse calls again and says, “I’ll be home in two hours.” The words are the same, but the meaning has changed: changed enough, perhaps, to make the difference between a hot dinner and a cold one. The phrase, “in two hours,” is the same, but the time to which it refers has changed.

In the Fed’s statement, what has changed is the referent for the word “conditions.” I just looked at a chart of the Citigroup Economic Surprise Index, and one thing I note is that, for the past six weeks (and for some months before that), it has remained consistently positive, and indeed consistently above +35, indicating that we have been receiving positive economic surprises. A rational forecaster will not be expecting the same conditions between now and 2014 as they had been expecting on January 25. Logically, if the conditions expected today are likely to warrant the same thing as conditions expected in January were likely to warrant, then the Fed must have changed its idea of what kind of conditions would warrant that.

By repeating the language in its earlier statement, the Fed has in effect announced a change in its reaction function. If the Fed had an explicit economic target for the next three years, it would have to change that target in order to continue being consistent with the language in its statement. By apparently doing nothing, the Fed has eased monetary policy.

Now the effect of an easing of monetary policy is that the economy is likely to be stronger than what was likely before the easing. After all, that’s the whole point of easing monetary policy. And that’s where things get tricky.

What does the Fed’s statement, implying that it expects to keep the federal funds rate low, mean about the likely actual future path of the federal funds rate? It means that the federal funds rate is likely to rise sooner than you previously expected. By promising – quite sincerely – to keep the federal funds rate low, the Fed is increasing the chance that the economy will call its bluff and force it to raise the federal funds rate. This is the paradox of a conditional promise.

It’s similar to the argument I made a couple of years ago with respect to bond yields (and which, in that case, the subsequent experience of QE2 seemed to bear out). Somewhat like the way the leader of a cartel can push prices up by threatening to cut prices if anyone defects, a central bank can raise bond yields by threatening to cut them. A similar logic applies to the federal funds rate, even though, in this case, the rate is under the Fed’s (almost) direct control. By specifying a more stringent criterion for raising the rate, the Fed actually increases the chance that the criterion will be met.

Think about it this way. Suppose the Fed had an explicit economic target such as nominal GDP. The Fed’s repetition of its “likely to warrant” language, in the face of an improved outlook, is like an increase in its nominal GDP target. If the Fed had such a target, and if it increased the target, what would you expect the effect to be on interest rates two-and-a-half years hence? Surely a higher target would mean that future interest rates are likely to be higher rather than lower.

If you ask me for my best guess, I still expect that the Fed will most likely end up sticking to the late 2014 timetable. After all, we did have the worst recession in 70 years and have barely started to recover even three years later. And under current law, federal fiscal policy is scheduled to drive directly into a brick wall next year. But the Fed’s repetition of its “likely to warrant” language, because it makes me a little more confident in the economy, makes me a little less confident in my prediction about the federal funds rate.

UPDATE (4/25/2012):  Fed projections (PDF) issued today call my whole argument into question.  Looking at the participants' assessments for the "appropriate timing of policy firming," while the median date is the same as in January, as reflected in the statement, the average has declined, with some of the 2016 "ultra-doves" having moved back to 2014 and 2015.  Meanwhile, the improved outlook for 2012 is largely offset by a weaker outlook for 2013 and 2014.  So it is not clear that there is any change in the Fed's reaction function.




Thursday, October 27, 2011

Kelly Evans on NGDP Targeting and Sustainable Growth

Kelly Evans of The Wall Street Journal has taken a lot of heat from advocates of nominal GDP targeting over her Monday column on the subject. (To her credit, she has engaged with Scott Sumner on the subject in the comments section of his blog post responding to her column.) While I’m also an advocate of NGDP targeting, and I agree with many of their criticisms, I think there are certain points on which her argument is being too quickly dismissed. In particular, both Scott Sumner and Karl Smith point to the following passage:

One worrying aspect of GDP growth prior to 2007 was that it came even as real household incomes stagnated. Assuming that boom-era growth rates were sustainable, and not fueled by a surge in house prices and a credit boom that simply pulled forward demand from the future, is a huge leap in logic.

I think there is some confusion on both sides regarding this point, and to clear it up we need to make a distinction between the demand side and the supply side. Usually when economists talk about “sustainable” growth, they’re referring to the supply side: some growth rates are not sustainable because they deplete the supply of resources too quickly. (In particular, an output growth rate is not sustainable if it exceeds the sum of population growth and labor productivity growth, because we would eventually run out of willing and qualified workers and end up in a wage-price spiral.) But here Kelly Evans seems to be referring to demand sustainability rather than supply sustainability.

Is demand sustainability, in this aggregate sense, a meaningful concept? Many economists would say no, because aggregate demand – demand for everything except money itself – is really just the inverse of the demand for money (or for financial assets in general), and there is no limit on the sustainability of the supply of money: we can always print more. And indeed we can always print more money, but the problem is, will we? Aggregate demand sustainability isn’t meaningful in an absolute sense, but it is meaningful if we condition on the growth of some nominal quantity such as the money supply, the price level, or nominal GDP. A certain level of aggregate demand may not be sustainable at a given rate of inflation, or at a given rate of NGDP growth, and thus there is no guarantee that the trajectory of nominal aggregate demand prior to 2007 was sustainable.

When Kelly Evans refers to a “boom that simply pulled forward demand from the future,” Karl Smith interprets this to mean that people were living above their means. But this is a supply-side interpretation: their means (supply) were not sufficient to sustain the pattern of consumption. I believe that the relevant interpretation is a demand-side one: people were choosing (demanding) a certain pattern of consumption based on false information. To say that their demand was “pulled forward from the future” is to say that they would, had they known the truth, have preferred to consume in the future rather than in the present (or in some cases, that their lenders, had they known the truth, would have preferred that the borrowers consume in the future instead of borrowing from them and consuming in the present).

The underlying problem over the past decade is excessive patience: everyone (by which I mean, mostly, the Chinese) wants to defer their expenditures into the future at the same time. But everyone can’t do that at the same time. In a perfect world, we would solve this problem by allowing prices to drop temporarily, far enough to convince enough people to take advantage of the low prices by spending today instead of in the future. But in the real world, price adjustment doesn’t happen quickly, and it often causes more problems than it solves.

So how do you get people to shift their expenditures into the present? One way is by fooling them. Make them think they’re richer than they really are. Make them think there are ultra-safe assets available to safeguard their future spending capacity. Find the people who want to spend today but don’t have any money, and make someone else think it’s safe to lend them money. But this solution is…unsustainable.

The sustainable solution, in theory at least, is to generate an expected inflation rate high enough that – at some positive interest rate – enough people will choose to spend money today instead of in the future. But that solution may not be on the table. Inflation rates much higher than 2% are heavily frowned upon by…just about everyone, it seems, except a few economists. Is 2% high enough? Who knows?
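The logic here is just the Fisher equation combined with the zero floor on nominal interest rates. A minimal sketch (the -3% “natural” real rate below is an illustrative number of my own, not an estimate):

```python
# Fisher-equation sketch: real rate = nominal rate - expected inflation.
# With the nominal rate floored at zero, the lowest achievable real rate
# is minus the expected inflation rate.
def lowest_real_rate(expected_inflation, nominal_floor=0.0):
    """Lowest real rate attainable given the expected inflation rate."""
    return nominal_floor - expected_inflation

# If the real rate needed to induce enough current spending were, say,
# -3%, then a 2% inflation target leaves the real-rate floor at -2%,
# still too high; a 4% target would put the floor at -4%, low enough.
print(lowest_real_rate(2.0))  # -2.0
print(lowest_real_rate(4.0))  # -4.0
```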

NGDP targeting is another solution, but is it sustainable? As I discussed at the end of my last blog post, and as Nick Rowe expands upon, NGDP (level) targeting would eventually succeed in raising demand, because every time it failed, it would then promise a yet more aggressive (and therefore more inflationary) policy. But what happens after it succeeds? Unless people have become less patient, we’re back where we started: everyone tries to shift expenditures into the future at the same time. The economy gets depressed again, and the cycle repeats.





Monday, October 24, 2011

Can Knut Wicksell Beat Up Chuck Norris?

Nick Rowe argues that NGDP targeting is a way of dealing with coordination failure. Businesses don’t want to hire if nobody’s buying, and households don’t want to buy if nobody’s hiring. So they’re all hoarding money instead. The way to fix it is that you have Chuck Norris threaten to beat up anyone who hoards money. Then businesses start hiring and households start buying (or else they both buy riskier assets, and the people who sold those assets do the hiring and buying, because they also don’t want to be beat up for hoarding the proceeds).

In the simplest version of the argument, beating people up is a metaphor for inflation. But if you don’t believe the Fed can produce more inflation (as many economists believe that the Bank of Japan has tried and failed to produce a positive inflation rate over the past 20 years), you can take beating people up as a metaphor for reducing asset returns. Even if the Fed can’t produce inflation, it can bid down the returns on a lot of assets until people get fed up and start buying riskier assets that can finance new expenditures. Some people don’t even think the Fed can do that, because maybe people have such a strong need for safety that they will only hoard more cash if other safe asset returns go down. I’m not 100% sure myself, but, for the sake of argument, I’m going to assume that the Fed can, if it is aggressive enough in buying safe assets, convince people to buy enough risky assets to get the economy going again.

Nick’s point, though, is that the Fed can do this without actually reducing the return on safe assets (and presumably without producing a lot of inflation either). Chuck Norris can clear a room without actually beating anyone up. The threat is enough. Similarly, in Nick’s view, the Fed can fix a coordination failure by threatening to reduce the return on safe assets, but it won’t have to carry out that threat if it’s credible. In fact, asset returns will go up, because the improved economy will make businesses more profitable, thus raising the return on risky assets and inducing people to abandon safe assets even if the yields go up. Paradoxically, by credibly threatening to push asset returns down, the Fed succeeds in pushing them up.

OK, fine. I’ll note that Ben Bernanke is no Chuck Norris, but perhaps President Romney will replace him with Chuck Norris, or with the antimatter counterpart of Paul Volcker (who was the Chuck Norris of inflation fighting). I’ll also note that Chairman Norris will enter with a considerable handicap, given that many are uncertain about the Fed’s ability to succeed in convincing people to abandon safe assets. If the threat were credible and everyone knew it to be credible, then everyone would know that stock prices are going up and they really ought to sell their bonds as quickly as they can, and we’d immediately be on the path to the good equilibrium. But even with Chuck Norris as Fed Chairman, a lot of people are going to think, “What if the Fed fails? At least cash is safe.” The threat alone quite possibly won’t be enough: Chuck Norris may well have to beat up a bunch of people – QE, Walker style – before the room clears.

But OK, I’m not opposed to violence, when it’s the only way to get something done. Only here’s my concern: how do we know that coordination failure is the real problem?

Flash back to 2006. There was no coordination failure then. Firms were hiring. Households were buying. Commerce was functioning smoothly. Very smoothly, too, in the sense that the economy was neither overheating (no rising inflation, no labor shortage) nor driving interest rates abnormally high. (The 10-year TIPS yield ranged from 1.95% to 2.68% in 2006, consistently below the perceived long-run real growth rate of the US economy.)

Yet that smoothness was based on being completely out of touch with reality – or at least out of touch with what most people today regard the reality to have been. By most accounts, housing prices were inflated, making people feel wealthier than they really were, and lots of seemingly safe assets were available, which, as it turned out, were not at all safe. Even with interest rates relatively low, this deception was apparently necessary in order to get households to buy and firms to hire in sufficient quantities to achieve full employment. Since the deception is no longer feasible, interest rates will presumably have to be a lot lower – even if we rule out coordination failure – in order to induce enough buying and enough hiring today to achieve full employment.

But how much lower? We can’t say exactly. Today 10-year TIPS are yielding close to zero. Is that low enough, if it weren’t for coordination failure? Maybe. Maybe not. Your guess is as good as mine. I can certainly imagine that we could fix the coordination failure (if there is one) and still end up producing well below our capacity.

That’s where Knut Wicksell comes in. Wicksell was the early 20th century economist who argued that prices would tend to go up or down depending on whether the interest rate was below or above its “natural” level (which varied over time). Modern interpretations allow for sticky prices and wages, so instead of falling prices, you get unemployment when the interest rate is too high. As I suggested in a post last year, and in the paragraph above, the “natural interest rate” could be negative, in which case a higher inflation rate is the only way to achieve full employment.

For practical purposes I advocate the same policy that Nick does – nominal GDP targeting – but I’m a bit less optimistic about how quickly and smoothly we could approach the target. And, given a choice, I’d probably favor a more aggressive target than Nick would. One of the implications of the Wicksellian analysis (which is not so clear if you think coordination failure is the only problem) is that more aggressive targets are easier to hit, because they imply higher inflation rates and therefore a lower floor on the real interest rate.

The important thing is to set a target path and stick to it even if you keep missing the first few targets by larger and larger margins. If the natural interest rate is negative, the early targets may be impossible to hit, but if you continue trying to hit the subsequent targets, those targets will imply higher inflation rates. Suppose your target path rises by 5% per year. A 10% increase in NGDP over two years may not imply enough inflation to get the real interest rate down to its natural level, but if NGDP doesn’t rise at all in those two years, the target path will now imply 20% NGDP growth over the subsequent two-year period. That would require a lot of inflation – certainly enough to be consistent with a very negative natural real interest rate. Chuck Norris may take his hits in the first few years, but Knut is eventually going down.
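The catch-up arithmetic in that example can be sketched as follows (I use simple percentage-point sums rather than compounding, which is how the 10% and 20% figures come out):

```python
# NGDP *level* targeting: the target path rises 5% per year, and misses
# are made up rather than forgiven.
target_growth = 5.0   # percent per year along the target path

# After two years, the target level is ~10% above the starting level:
target_after_2y = target_growth * 2                  # 10.0

# If actual NGDP doesn't rise at all in those two years, returning to
# the path by year four requires four years' worth of target growth
# packed into the subsequent two-year period:
required_over_next_2y = target_growth * 4 - 0.0      # 20.0
print(f"Required NGDP growth over years 3-4: {required_over_next_2y:.0f}%")
```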





Friday, May 13, 2011

Fixing What’s Wrong with the Taylor Rule

I see four problems with the original Taylor rule:

  1. It’s not really a rule at all. The Taylor rule depends on an estimate of potential output. In practice, most of the discretion that goes into central banking is in the estimate of potential output. Even “discretionary” central bank policy is effectively constrained by the consensus of what would be considered reasonable policy actions, and any of those actions can be rationalized by changing your assumption about potential output. Usually, a central bank that has committed to following a “strict” Taylor rule has roughly the same set of options available as one that is ostensibly operating entirely on its own discretion.

  2. It doesn’t self-correct for missed inflation rates. Since the inflation rate in the Taylor rule is over the previous four quarters, the rule “forgets” any inflation that happened more than four quarters ago. This is a problem for four reasons:

    • It leaves the price level indeterminate in the long run, thus interfering with long-term nominal contracting and decisions that involve prices in the distant future.

    • It leaves the central bank without an effective tool to reverse deflation when the expected deflation rate exceeds the natural interest rate.

    • It reduces the credibility of central bank attempts to bring down high inflation rates, because the bank always promises to forgive itself when it fails.

    • It aggravates the “convexity” problem described below, because the central bank effectively ignores small deviations from its inflation target, even when they accumulate.

  3. It doesn’t allow for convexity in the short-run Phillips curve. If the estimate of potential output is too low, for example, and the coefficient on output is sufficiently low, then, if the short-run Phillips curve is convex, the central bank will allow output to persist below potential output for a long time before “realizing” that it has made an error. In the extreme case, where the short-run Phillips curve is L-shaped, the central bank may allow actual output to be permanently lower than potential output. More generally, the convexity problem can be aggravated by hysteresis effects, in which lower actual output leads to lower potential output, so that the central bank’s wrong estimate of potential output becomes a (permanently) self-fulfilling prophecy.

  4. It can prescribe a negative interest rate target, which is impossible to implement. This appears to have been the case for at least part of 2009 and 2010, although there is disagreement about the details.

So how do we fix these problems? I suggest the following solutions:

  1. Adopt a fixed method for estimating potential output. (One might allow future changes to the method, but they should be implemented only with a long lag: otherwise, they’ll interfere with the central bank’s credibility, since they can be used to rationalize discretionary policy changes.) Since I like simplicity, I suggest the following method: take the level of actual output in the 4th quarter of 2007 (when most estimates have the US near its potential) and increase it at an annual rate of 3% (the approximate historical growth rate of output) in perpetuity.

  2. Replace the target inflation term with a target price level term. In other words, express it as a deviation from a target price level that rises over time by the target inflation rate. To be clear what I mean by the “target inflation term,” take Taylor’s original equation
    r = p + .5y + .5(p - 2) + 2 (where p refers to the inflation rate)
    and note that I am referring to the “p – 2” term but not to the initial “p” term, which is not really a target but part of the definition of the instrument (an approximation of the real interest rate). In my new formulation, “p – 2” becomes “P – P*,” where “P” is (100 times the log of) the actual price level and “P*” is (100 times the log of) the target price level (i.e., what the price level would be if the inflation rate had always been on target since the base period).

  3. Increase the coefficient on output. If you wish, in order to avoid a loss in credibility, you can also increase the coefficient on the price term by the same amount. What we have then is a more aggressive Taylor rule. It doesn’t solve the convexity problem completely, but it does assure that, when output is far from target, the central bank will take aggressive action to bring it back (unless the price level is far from target in the other direction). That way at least you don’t end up with a long, unnecessary period of severe economic weakness. (John Taylor claims that, according to David Papell’s research, there is “no reason to use a higher coefficient, and…the lower coefficient works better.” But that research only looks at changing the coefficient on the output term without either changing the coefficient on the inflation term or replacing it with a price term, as I suggest above. Having a too-small coefficient on the output term, as in the original rule, is only a second-best way of achieving the results that those other changes would achieve.)

  4. “Borrow” basis points from the future when there are no more basis points available today. In other words, if the prescribed interest rate is below zero, the central bank promises to undershoot the prescribed interest rate once it rises above zero again, such that the number of basis-point-years of undershoot exactly cancel the number of basis-point-years of (unavoidable) overshoot. This method will only work, of course, if the market knows what rule the central bank is following, hence (among other reasons) the need for a rule that really is a rule. If the rule is well-defined, the overshoot will be well-defined, the market will expect the central bank to “pay back” the “borrowed” basis points, and the central bank will be obliged to do so in order to maintain its subsequent credibility.
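Putting suggestions 1 through 4 together, here is a rough sketch of how the modified rule might be mechanized. The base levels, the coefficients, and the slump scenario at the end are illustrative choices of mine, not part of the proposal itself, and for brevity I collapse the basis-point-years accounting into a simple carried balance that is repaid as soon as rates are positive again:

```python
def potential_output(quarters_since_base, base_level=100.0, trend=3.0):
    """Suggestion 1: (100 times the log of) actual output in 2007 Q4,
    grown at a fixed 3% annual rate thereafter."""
    return base_level + (trend / 4.0) * quarters_since_base

def target_price_level(quarters_since_base, base_level=100.0, target_inflation=2.0):
    """Suggestion 2: the price path implied by always-on-target inflation."""
    return base_level + (target_inflation / 4.0) * quarters_since_base

def modified_taylor_rate(p_infl, P, P_star, y_gap, carried_debt=0.0,
                         a_price=0.5, a_output=0.5):
    """Suggestions 2-4: a price-LEVEL gap in place of the inflation gap,
    adjustable coefficients, and a zero floor whose shortfall is
    'borrowed' and repaid once rates are positive again.

    Returns (rate actually set, new carried shortfall)."""
    prescribed = p_infl + a_output * y_gap + a_price * (P - P_star) + 2.0
    prescribed -= carried_debt       # pay back borrowed basis points first
    if prescribed < 0.0:
        return 0.0, -prescribed      # borrow the shortfall from the future
    return prescribed, 0.0

# A slump two years (8 quarters) after the base period: 1% inflation,
# price level 1 point below its target path, output 6 points below
# potential. The prescribed rate is negative, so the rule sets zero and
# records the shortfall:
q = 8
P_star = target_price_level(q)   # 104.0
y_star = potential_output(q)     # 106.0
rate, debt = modified_taylor_rate(p_infl=1.0, P=P_star - 1.0,
                                  P_star=P_star, y_gap=100.0 - y_star)
print(rate, debt)    # 0.0 0.5

# Later, in recovery, the borrowed half-point is repaid by undershooting:
rate2, debt2 = modified_taylor_rate(p_infl=2.0, P=P_star, P_star=P_star,
                                    y_gap=0.0, carried_debt=debt)
print(rate2, debt2)  # 3.5 0.0
```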

OK, let’s look at the big picture. What have I proposed? I have proposed nominal GDP targeting (along with a specific method for how to implement it). When the price level term and the output term have the same coefficient and both are specified as a deviation from target, the Taylor rule can be simplified by combining the price level target with the output target. Combining Taylor’s original 2% inflation target (re-expressed as a price level path target as per my suggestion) with my suggested method for estimating potential output, we arrive at a 5% nominal output growth path as the target.

If you wish, you can go further by making the rule forward-looking (using a forecast of nominal GDP instead of a lagged observation) and increasing the coefficient to a very high number. And you can enforce the credibility of the forecast by requiring the central bank to use the forecast implicit in a publicly traded nominal GDP futures contract, so that the market is putting its money where the central bank’s mouth is. You end up with the proposal that Scott Sumner has already made. People seem to think that Scott Sumner’s ideas about monetary policy are far out of the mainstream. But I’m not proposing anything radical here, just trying to fix some problems with the very orthodox Taylor rule.



