Price level stability as a driver of inequality

Generally speaking, people think about inflation in terms of purchasing power: $1 from 1950 is worth roughly $10 today. What that $10 represents matters a great deal, however. Under true (real) inflation, it would take $10 today to purchase the exact same good that cost $1 in 1950. That is not always the case in the real world. Rather, $10 today purchases an enhanced version of the 1950 good (or, more likely, a good that didn't exist in 1950).

So what exactly is going on? To understand inflation with respect to the price level, it is necessary to introduce some macrofoundations into the micro analysis (a phrase borrowed from David Glasner). In an economy with productivity gains, a steady price level is not the absence of inflation. Productivity growth means producing the same good for less, and so, absent an offsetting increase in demand, it exerts deflationary pressure. That demand offset is where the stance of monetary policy comes into play.

If a monetary authority targets a steady price level (or 2% inflation, as is the case with the Fed), it must necessarily counteract the deflationary pressure induced by productivity gains. That means, more or less, increasing the money supply so that aggregate demand offsets the downward shift in the aggregate supply curve. The alternative, allowing prices to fall as productivity rises, is what George Selgin has called the "productivity norm".
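
To see the mechanics, here is a minimal sketch in terms of the equation of exchange (MV = PY, a standard identity; the numbers below are hypothetical): holding the price level P steady while productivity raises real output Y requires the money supply M (or velocity V) to grow in proportion.

```python
# Equation of exchange: M * V = P * Y (hypothetical numbers).
V = 2.0                  # velocity of money, held constant for the sketch
Y0, Y1 = 100.0, 110.0    # real output rises 10% on productivity gains
P_target = 1.0           # steady price level target

M0 = P_target * Y0 / V   # money supply consistent with P = 1 before: 50.0
M1 = P_target * Y1 / V   # and after the productivity gain: 55.0
print(M0, M1)            # money supply must grow 10% to hold P steady
# With M fixed instead (and V constant), P would have to fall to 1/1.1.
```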

[Note: Importantly, the situation I have described above is the offsetting of a supply-induced deflation. This must be distinguished from a demand-induced deflation of the price level. If demand is falling, that is an indication of (or tautologically equivalent to) falling national income. There may be other reasons (see the Great Depression) to think about that situation differently from deflation caused by general increases in productivity.]

Maintaining a steady price level, however, does not automatically require inflationary tactics by the Fed. As a society becomes more productive, once-used resources can be redeployed into other income-generating activities, a process I will label dynamic growth. Dynamic growth expands the composite basket of goods in the economy and raises national income, as more and more output is produced from the same stock of capital (which then grows itself). This increase in demand not only alleviates deflationary pressure but also alters the nature of the price level.

What do I mean by the nature of the price level? To be more precise, I mean that a historic price level corresponds to an entirely different basket of goods than a price level in the future. A 1950s car is not the same as a 2016 car, even though their prices may be the same. Furthermore, the price level may incorporate 150 models of cars in 2016 compared to 5 in 1950 (hypothetical numbers, for argument's sake). It is therefore difficult to conceptualize a comparative price level when the nature of the underlying basket of goods changes so fundamentally.

However, we think of inflation as comparing price levels over time. And our current method of calculating the price level errs on the side of a static set of goods. For purposes of determining the price level, the average car in 2016, with all of its bells and whistles, is considered more or less the same as the average car in 1950. So if a 2016 Chevy costs $20,000 and the 1950 Chevy cost $10,000, we have a 100% price increase for purposes of calculating inflation.
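
To make the point concrete, here is a minimal sketch (using the hypothetical Chevy prices above and an invented quality-adjustment factor) of how measured inflation depends on whether the statistician treats the two cars as the same good:

```python
# Hypothetical prices from the example above.
price_1950 = 10_000   # 1950 Chevy
price_2016 = 20_000   # 2016 Chevy, a much better car

# Static-basket comparison: treat the two cars as the identical good.
naive_inflation = price_2016 / price_1950 - 1
print(f"No quality adjustment: {naive_inflation:.0%}")        # 100%

# Hedonic-style adjustment: assume (purely for illustration) the 2016
# car delivers twice the "car services" of the 1950 model.
quality_factor = 2.0
adjusted_inflation = (price_2016 / quality_factor) / price_1950 - 1
print(f"With quality adjustment: {adjusted_inflation:.0%}")   # 0%
```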

This seemingly arbitrary calculation, the difference in price between entirely different sets of goods in different years, has real effects on the stance of monetary policy and, in turn, on the distribution of income. Consider the following three scenarios:

Scenario #1: New product at higher price

This is the scenario that I think actually corresponds to much of the economic progress we have seen in recent decades. A new product replaces an old one but comes in at the same or a higher price. Even if it starts at a higher price, the impact on the price level is minimal, as innovation brings the new good down to the original price.

For example, tube TVs were replaced by plasma and LCD TVs. The new plasma TVs were quite expensive compared to tube TVs, but they were of a completely different quality. After several years, the price of TVs came down to the old tube-TV price level, whereas tube TVs now go for cents on the dollar compared to 1990 prices.

From the perspective of a monetary authority seeking to maintain a price level, this scenario creates limited problems. The price level may rise initially but returns to the old level quite quickly. Furthermore, the purchasers of TVs are spending more on TVs and less on other goods, which offsets any increase in the aggregate price level.

Embedded in this scenario, however, is the deflation of the original good. The tube TV is far less costly than it was before. What is important to see is that the surplus generated is a product of demand, not supply. The tube TV is not any cheaper to make; rather, it is no longer demanded by buyers. A surplus exists only for the people who still buy tube TVs and would have been willing to pay more for them, in other words, a small population.

By substituting plasma TVs for tube TVs in the basket of goods, there is virtually no impact on the price level and therefore no observed inflation or deflation. This is not to say that individuals are not better off; the new TV is certainly better than the old. But for purposes of the monetary authority, its job is accomplished by the mere swap of goods in the CPI basket under the heading "TV".

Does this scenario have any impact on the income distribution? At the very least, it would seem to benefit the bottom end of the distribution more than the top. Assuming incomes are static, the price of a TV remains the same but the quality of the TV is higher than before. As such, the consumption benefit (as a percentage of income) would be greater for the bottom rungs of the income distribution.

Scenario #2: New product without an existing substitute

A new product is developed, it does not substantially replace any existing products in the CPI basket, and it is highly demanded by the public. The laptop computer may fit this description nicely. The introduction of the laptop will lower demand for all other goods in the economy, unless the laptop makes everyone that much richer and more productive (unlikely). This will tend to reduce prices (through demand) unless the CPI basket is updated to include the brand-new technology, which would likely take a substantial period of time; as an interesting historical aside, it took 15 years for the cell phone to be included in the CPI after its invention. This reduction in demand for other goods, due to the introduction of the laptop, would generally show up as deflation in the CPI index.
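
A minimal numeric sketch (hypothetical budget shares, and a deliberately strong simplification that basket prices scale with the spending flowing to them) of why spending diverted to a good outside the CPI basket shows up as measured deflation:

```python
# Hypothetical two-period economy. In period 1 households spend their
# whole budget on CPI-basket goods; in period 2 they divert some
# spending to a new good (the laptop) not yet in the basket.
budget = 1000.0
laptop_spending = 200.0

# Strong simplification: basket prices move one-for-one with the
# spending directed at them in the short run.
basket_spending_t1 = budget
basket_spending_t2 = budget - laptop_spending

measured_cpi_change = basket_spending_t2 / basket_spending_t1 - 1
print(f"Measured CPI change: {measured_cpi_change:.0%}")  # -20%
# Households are not worse off (they chose the laptop), but the
# fixed-basket index records falling prices.
```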

As such, a monetary authority intending to maintain a steady price level would be required to generate inflation to offset this general demand-induced deflation. Importantly, however, this deflation is not a "real" deflation. It is a measurement error, and one that would induce the monetary authority to generate inflation to maintain the price level.

This type of inflation has the desired effect of keeping the prices of all CPI goods at their previous level and raising the price of the new good. Secondary distributional effects would occur, however, if inflation is not distributed equally across the economy. And there is good reason to doubt equal distribution, namely the observed and theoretical winners (debtors/owners of capital assets) and losers (creditors/sticky-wage employees) from inflation.

Therefore, a miscalculation of the price level due to new technological progress could induce real, yet generally unobserved, inflation, and that inflation could be driving some residual inequality.

Scenario #3: Existing product at a cheaper price

An existing product is produced at a much cheaper price. Here there is price deflation that is consistent with the productivity norm. The new price level is lower because the same good is being made at a lower cost, provided the extra income surplus does not lead to a corresponding increase in the prices of other goods in the CPI basket.

The potential for measured deflation is even higher here than in scenario #2. Specifically, the extra income from a price decrease in one CPI good could be spent on goods not included in the CPI basket. This measured, and potentially real, short-term deflation must be offset by a monetary authority targeting a price level.

Once again, we have deflation offset by inflation, and the critical question is whether that inflation is distributed evenly throughout the economy. If the answer is no, then we have growing inequality baked into the system due to a price-level target.

Conclusion

Overall, a steady price-level target by the Fed actually produces inflation along one dimension that generally goes unseen. Counterintuitively, a steady price level is not the absence of inflation. As George Selgin points out, whenever an economy has productivity growth, the natural price of specific goods should fall. Maintaining the price level then requires either (1) a corresponding increase in demand (through growth and the purchase of other goods), which in turn requires the right demand elasticities, or (2) growth in the money supply to offset falls in the price level. The latter is almost always necessary if an economy is slow to adjust.

As such, the baked-in inflation residual of price-level targeting may have implications for inequality that generally go unaccounted for. There is also reason to think that future innovation, at least in the areas of software and AI, will correspond more closely with the second or third scenario discussed above. New technologies will either be a different type of product or will make existing products much cheaper in a real sense. Both of those advances require inflation to maintain a price level for the economy.


EMH and Monetary Policy

Market Monetarists, such as Scott Sumner, go to great lengths to defend the efficient market hypothesis (EMH). The EMH, if correct, means that markets provide the best available real-time information about inflation expectations and other monetary variables. The TIPS spread, for example, can give the Fed instant and constant feedback on inflation expectations. The obvious objection to the Market Monetarist theory (or at least to the parts of it that rely on targeting market indicators) is that the EMH is incorrect in reality; and there is good evidence that at least the strong versions of the EMH are not quite right.
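
As a concrete illustration, the TIPS spread is simply the gap between nominal and inflation-protected Treasury yields of the same maturity (the calculation below is standard; the yields are made up):

```python
# Hypothetical 10-year yields, in percent.
nominal_10y = 2.3   # conventional Treasury yield
tips_10y = 0.6      # inflation-protected (TIPS) yield, i.e. a real yield

# The "breakeven" rate: the average inflation at which both bonds pay
# the same. It serves as a market-implied inflation expectation
# (ignoring risk and liquidity premia).
breakeven_inflation = nominal_10y - tips_10y
print(f"Market-implied 10-year inflation expectation: {breakeven_inflation:.1f}%")
```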

However, the Market Monetarists have forced themselves into a corner they do not need to occupy, for two reasons. The first relates simply to the scope of the argument. The second is a broader point about the Market Monetarist policy prescription.

(1) Instead of arguing for the EMH, they could simply narrow their argument to "the EMH compared to what?" Sure, there may be individuals who can beat the market in some scenarios. Granting that is quite different from arguing that Federal Reserve employees, operating under political constraints and pressure, are well positioned to beat market forecasts. The same is equally true of pension fund managers.

(2) The EMH enters market monetarism through its preferred target variables, specifically an NGDP futures market. A theoretical NGDP futures market would provide the Fed with information about the market's perceived path of aggregate demand. However, this is merely an extra feedback mechanism for the Fed: changes in the futures price for NGDP signal that the Fed should alter policy. But the Fed simply stating that its NGDP target is 5% is by far the more powerful force in the model. Having the target means that the Fed will have to perform fewer actual open market operations, not more. And even without the benefit of immediate market feedback on the path of NGDP, it is reasonable to assume that the Fed could reconstruct NGDP through other economic tools and proxy variables within a reasonable timeframe.

More to the point, the value of an NGDP target is that it sets a long-term expectation for the marketplace, and an NGDP futures market may make it more likely that the Fed adheres to the target. Alternatively, an NGDP futures market may reduce confidence in the Fed's ability to hit the target, due to delayed reactions to movements in the market and the market's knowledge of that inaction. (For example, NGDP futures may rise over the course of a day while Fed policymakers wait for a sustained jump before altering open market operations; awareness of that delay, made visible by the market's existence, may itself reduce confidence.)

Overall, whether an NGDP futures market helps or hurts confidence in the Fed under an NGDP-targeting regime is not as obvious as Market Monetarists make it out to be, and their policy prescription is not entirely dependent on an efficient NGDP futures market.

Raising Interest Rates

Many people are calling for the Fed to raise rates. Part 1 of this post addresses the arguments for raising interest rates. Part 2 will address whether the Fed even has the ability to raise rates in the way that supporters imagine. Importantly, the second question has normative consequences for the first. The purpose of this post is not to provide a complete analysis of these issues but rather to highlight some of the underlying theoretical positions in the debate.

Part 1:

The argument to raise rates comes from a couple of sources. First, the John Taylor (Taylor Rule) camp has come out for higher interest rates under a Taylor rule model, a formula that produces an ideal interest rate from macroeconomic inputs. The formula looks like this: i = r* + pi + 0.5(pi - pi*) + 0.5(y - y*), where r* is the equilibrium real fed funds rate, pi is inflation, pi* is the target inflation rate, y is real output, and y* is potential output. The starred variables are estimates within the model. Importantly, the real fed funds rate r* is generally assumed to be 2%. Under that assumption, nominal interest rates have been artificially low for some time and, the argument goes, the Fed should raise rates to normalize the economy. However, it is entirely possible that the real interest rate in the economy is lower than in other periods due to a combination of demand- and supply-side factors (this is consistent with the secular stagnation story). If that is true, the Taylor Rule would call for lower interest rates under its own model. So which world are we living in?
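
A worked example (a sketch with hypothetical inputs; the 0.5 coefficients are Taylor's standard weights) shows how the entire disagreement turns on the assumed value of r*:

```python
def taylor_rule(r_star, pi, pi_star, output_gap):
    """Taylor rule: i = r* + pi + 0.5*(pi - pi*) + 0.5*(y - y*)."""
    return r_star + pi + 0.5 * (pi - pi_star) + 0.5 * output_gap

# Hypothetical inputs: inflation 1.5%, 2% target, 1% output shortfall.
pi, pi_star, gap = 1.5, 2.0, -1.0

# Taylor-camp assumption: the equilibrium real rate is still 2%.
print(taylor_rule(2.0, pi, pi_star, gap))  # 2.75 -> current rates look too low

# Secular-stagnation assumption: the equilibrium real rate has fallen to 0%.
print(taylor_rule(0.0, pi, pi_star, gap))  # 0.75 -> near-zero rates look about right
```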

One litmus test of whether interest rates are artificially low is whether inflation has taken off or the economy is growing at a faster-than-usual pace. Inflation for 2014 and 2015 was slightly below the Fed's 2% target, and NGDP growth has been sluggish at best. Put differently, Taylor's case for raising rates relies in part on the idea that unnaturally low interest rates will lead to inflation and high growth rates, which we have not yet observed. Further, it is difficult to hold the view, as Taylor does, that the economic recovery has been weak and that the Obama administration has imposed serious supply-side restraints on the economy, while at the same time calling for rates to rise out of fear of explosive nominal growth.

The second reason for raising rates comes from the Neo-Fisherian camp. Their argument proceeds as follows: low interest rates over a long period create an expectation of low inflation; therefore, raising rates would increase inflation expectations and grow aggregate demand. The logic for this position flows from the basic idea that nominal interest rates incorporate inflation expectations and from the observation that many periods of high inflation are accompanied by high interest rates. The Neo-Fisherian story is that the economy adjusts to an expected interest rate target in the long run, and they assert that the US has currently adjusted to a low-interest-rate world. In other words, low interest rates cause low inflation and low aggregate demand in the long run. This is an alternative to secular stagnation, in which low growth is a byproduct of long-term low interest rates rather than of real factors in the economy.
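
The mechanics turn on the Fisher equation, i = r + expected inflation (the equation is standard; the numbers below are hypothetical). If the Fed pegs the nominal rate and the real rate eventually settles at its equilibrium value, long-run expected inflation must absorb the difference:

```python
# Fisher equation (approximation): i = r + expected_inflation.
# Neo-Fisherian reading: peg the nominal rate i, let the real rate r
# converge to equilibrium, and expected inflation adjusts to fit.
r_equilibrium = 2.0  # hypothetical long-run real rate, in percent

for i_peg in (2.5, 4.0):  # two hypothetical nominal-rate pegs
    expected_inflation = i_peg - r_equilibrium
    print(f"Nominal peg {i_peg}% -> long-run expected inflation {expected_inflation}%")
# A 2.5% peg implies 0.5% inflation; raising the peg to 4% implies 2%
# inflation -- the Neo-Fisherian case for raising rates.
```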

There is a basic logical appeal to the Neo-Fisherian position, but it is necessary to recognize the causal assumption in the story: interest rates lead inflation expectations. There are two potential problems with this theory. First, the causation could run in reverse: inflation expectations could cause interest rate movements. Where we have observed hyperinflation, this is the story people generally assume; high interest rates are not causing the hyperinflation but are rather trying to keep pace with high inflation expectations. Second, the Neo-Fisherian story is arguably a long-run adjustment story, and it is unclear whether the long-term expectation of higher interest rates is baked into the inflation calculus immediately. In other words, if inflation expectations across the economy significantly lag the rate increase, then you could face deflationary pressure in the short run, with a high nominal rate of interest set by the Fed sitting above a low equilibrium real rate in the economy. Interestingly, the first critique relies on the argument that the economy sets interest rates and can override Federal Reserve attempts. The second critique instead assumes that the Fed sets interest rates for a non-trivial period and that the economy is slow to adjust.

Overall, it is important to recognize how these two arguments are conceptually distinct. The first set of proponents (the Taylor camp) argues that rates are artificially low given the current state of the economy and that continued low rates risk overheating it. The second set (the Neo-Fisherians) sees a need to raise aggregate demand and relies on interest rates as a long-run peg for inflation expectations. The two stories diverge not only on the current view of the economy but also on the long-term effect of low interest rates. The first imagines an economy with a real interest rate of 2% (and nominal rates held low by the Fed) that will overheat if we continue on the current path. The second imagines an economy that has adjusted to a low real interest rate and will remain there for the foreseeable future until interest rates are moved upward.

Part 2 Preview:

WSJ: The Fed’s Interest Rate Machine; Fed not responsible for low rates

Interestingly, both of the arguments outlined in Part 1 presume that the Fed can raise or lower rates. This is not an unusual assumption, and it generally aligns with popular belief. However, it is important to describe the Fed's interest-rate-setting mechanism and the causal influence of the overnight rate in order to assess the theories in question.


WSJ: Uber and School Choice

WSJ: James Courtovich on Uber and School Choice

The regulatory battles underway for Uber are not unique but rather indicative of a much larger struggle between entrenched interests and innovation. Government-induced monopolies reach widely across our economy, and the process of creative destruction is stifled when regulators block market entry.

Tyler Cowen on the Gender Gap

Upshot

Cowen argues that as female participation levels grow, positive reinforcement mechanisms help alter the norms of business interaction and participation. As such, Cowen is not convinced that experimental evidence showing disparate negotiating tactics and results between men and women will hold over time.

Robert Litan: TED Talk Event


Supply Side Restrictions in Healthcare

John Cochrane highlights how “certificate of need” regulations place supply-side constraints on the healthcare market. 

Externalities vs. Transaction Costs

After listening to Terry Anderson on Russ Roberts' EconTalk this week, I am reminded of an important distinction between externalities and transaction costs. Too often, the term 'externality' is used on its own to describe a situation in which an efficient outcome cannot be reached because multiple parties are impacted by a particular action. What 'externality' really means in this context is that transaction costs are too high for the parties to reach private solutions that internalize the cost of the externality. Importantly, the presence of an externality is not the problem itself. The problem is that impacted stakeholders cannot reach a coordinated solution on their own (often due to an inability to negotiate) and, as a result, an inefficient allocation of resources occurs. This distinction between 'externality' and 'transaction costs' underlies the Coase theorem and should be remembered when diagnosing problems that seem to require regulatory solutions. It is not enough to say that an externality exists and, therefore, some type of intervention is required.
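
A minimal numeric sketch of the Coasean logic (the payoffs are invented): with low transaction costs the parties bargain to the efficient outcome regardless of who holds the legal right; with high transaction costs the inefficient status quo persists and we call the result an 'externality problem'.

```python
# Hypothetical factory/neighbor example with invented payoffs.
factory_profit_polluting = 100   # factory's gain from polluting
neighbor_damage = 150            # neighbor's harm from the pollution

# The efficient outcome is to stop polluting (damage exceeds profit).
# If the factory holds the right to pollute, the neighbor can pay it to
# stop whenever bargaining costs leave room for mutual gain.
def bargain_succeeds(transaction_cost):
    surplus = neighbor_damage - factory_profit_polluting  # 50
    return surplus > transaction_cost

print(bargain_succeeds(transaction_cost=10))  # True: efficient deal struck
print(bargain_succeeds(transaction_cost=80))  # False: inefficiency persists
```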

Smart House Devices and the Insurance Industry

Forbes profiles SmartThings

“SmartThings, the firm he founded a year after the Colorado trip, is now doing a brisk business selling a $100 hardware hub with a smartphone app and cloud service that connects thousands of gadgets. It also sells more expensive kits crammed with third-party sensors and devices for home security, temperature control and water detection.”

Just as cars have sensors that activate upon certain malfunctions, SmartThings gives your home the ability to send warning signals in order to mitigate damage. The cost of insurance is integrally connected to the expected cost of damage. In a world where catastrophic damage can be limited through warning signals or automated responses, the cost of insurance should fall accordingly. It may not be long before home insurance companies begin to offer cheaper coverage bundled with smart-home system installations.
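
The underlying arithmetic is simple expected-loss pricing (a sketch; every number below is invented for illustration):

```python
# A fair premium is roughly expected loss times a loading factor.
p_event = 0.01        # annual probability of a major water/fire event
avg_loss = 50_000     # average cost when the event happens
loading = 1.3         # insurer's overhead and profit margin

def premium(prob, loss):
    return prob * loss * loading

print(premium(p_event, avg_loss))         # 650.0 per year

# If smart-home sensors catch problems early and halve the average
# loss, the actuarially fair premium falls in proportion.
print(premium(p_event, avg_loss * 0.5))   # 325.0 per year
```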

Cash for Clunkers Auto Industry Impact

NBER: Mark Hoekstra, Steven Puller, and Jeremy West argue that Cash for Clunkers was a net negative for the auto industry due to the program's countervailing policy goals.

“Cash for Clunkers was a 2009 economic stimulus program aimed at increasing new vehicle spending by subsidizing the replacement of older vehicles. Using a regression discontinuity design, we show the increase in sales during the two month program was completely offset during the following seven to nine months, consistent with previous research. However, we also find the program’s fuel efficiency restrictions induced households to purchase more fuel efficient but less expensive vehicles, thereby reducing industry revenues by three billion dollars over the entire nine to eleven month period. This highlights the conflict between the stimulus and environmental objectives of the policy.”