The View From Here

 
Our analyses of topical issues relating to the economy and financial markets.

A High-Stakes Numbers Game - Can we trust our economic data?

July 7, 2014


There are three types of lies: lies, damned lies and statistics.
- Benjamin Disraeli

Bring up the subject of economic data with any investment professional, and a roll of the eyes is the likely response. Everyone in the financial community has a favorite war story, among them:

  • In late 1979, when Federal Reserve Chairman Paul Volcker had everyone taking the money supply very seriously, the Thursday afternoon release of M1 was each week’s flash point. Late that year, an unexpected jump of $3.7 billion was reported, leading to a sell-off in bonds and stocks. As it turned out, though, the “gain” was actually a $700 million fall: a clerk at a large bank was on vacation, and a replacement had mistakenly added a zero to his submission to the Fed.

  • The slogan, “It’s the economy, stupid!” was a linchpin of Bill Clinton’s 1992 presidential campaign. Harping on slow growth and the loss of employment, the Democrats wrested control of the White House for the first time in 12 years. One year later, the Bureau of Labor Statistics reported that it had overstated the number of jobs lost during the election year by 540,000, but the revision came too late to influence the polls.

These episodes are laughing matters for some but a source of serious concerns for others. Economic information drives decisions in the public and private sectors and serves as the basis for trillions of dollars in financial transactions. Instability in the figures can create an unwanted source of uncertainty for markets and policy-makers. Most recently, the huge revision to gross domestic product (GDP) growth in the United States for the first quarter of 2014 stunned markets and left economists scrambling for explanations.

It therefore seems like a good time to review the ways in which readings on business conditions are assembled and offer some commonsense rules for using the data. Understanding the strengths, weaknesses and limitations of statistics is critical to using them properly. And we must be careful not to expect too much precision from this inherently imprecise discipline.

Getting Things to Add Up

Economic measurement is many centuries old. Accounting for output and price goes back to the dawn of the market mechanism. Historians have been able to construct surprisingly lengthy histories on certain concepts, spanning both time and geography.

But systems that capture activity comprehensively arose less than a century ago. National income and product accounts (NIPAs), which feed the calculation of GDP, were first formulated during the Great Depression. The process provided readings in “near time” that could be used to assess conditions and guide decisions.

It was a remarkable achievement, given the limitations researchers faced at the time. Incomplete data collection, long time lags and calculation tools that were primitive by today’s standards were among the obstacles to overcome. There are no natural laws in economics that establish immutable relationships among variables; human behavior can be quite volatile in the short run and prone to significant secular transitions in the longer run.

In one sense, the originators had the advantage of studying an economy that was somewhat easier to account for than today’s. Industries where quantification is relatively straightforward, like agriculture and manufacturing, have lost share to health care and financial services, which are more complicated to assess.

The output and price associated with a service are hard to gauge. How can you measure the production of a teacher? Or a doctor? Or a cellular service carrier? Or a bank? In these cases, statisticians are able to collect information on payments made to providers, but determining what services and service levels have been provided is not a simple task.

Ironically, the expanding employment of technology in our economy has made life both easier and harder for economic statisticians. Modern computers have certainly simplified the compilation and analysis of economic data, and we get more information sooner than we ever did before. But the shift toward digital industry has complicated efforts to account for output.

Creating a complete depiction of economic activity was a significant endeavor when begun, and it has only gotten more complicated over time. It is worth keeping this background in mind as we assess the value of the output; we may be asking more from economic statistics than they are able to give.

Revisions, Restatements and Rebasing

Understanding how the various indicators are assembled can help place their movements into proper perspective. The process starts with some basic data, which can take time to accumulate. Staggered availability of information contributes to the revisions that are often made to initial releases. Because of the lags involved with collecting component parts, GDP progresses through advance, preliminary and final stages. Updated information on medical expenditures drove the latest large revision to U.S. first-quarter 2014 GDP.

Much of this information is captured through surveys. This technique relies on polling a carefully selected sample that represents a broader population. There is always the risk that this linkage can change over time, rendering the sample less representative. And securing consistency in survey responses is an elusive goal in the social sciences, no matter how detailed the instructions are.

From there, a multitude of assumptions are required to complete the picture. Some are based on statistical study, while others are established judgmentally. Once annually, changes in measurement methodology are applied to past readings to create what are known as benchmark revisions.

At times, these updated views significantly alter the reported course of the economy. Employment readings that initially disappointed markets later turn out to be much better than expected. Initial reports of GDP growth have been reversed upon further review. Decisions taken on the first release can therefore look erroneous in retrospect. 

In light of this, some have suggested that the numbers are being cooked up by analysts whose skills and objectivity are questionable. Nothing could be further from the truth: economic statisticians are not political appointees; they are incredibly skilled, and they take the work very seriously.

The agencies that produce economic data are very transparent about what they are doing. (Those who really want to get deeply into the detail of how GDP is put together can find it in the NIPA Handbook published by the Bureau of Economic Analysis.) Yet as earnest as they are, the statisticians struggle with the inherent limitations of the discipline. This can be well-illustrated by two areas that are particularly important to monetary policy.

How Much Do Things Really Cost?

Twice this spring, I was asked if the Federal Reserve was manipulating the inflation rate in order to justify continued monetary accommodation. Since the Fed does not compute the inflation rate, I could confidently deny those allegations. But the mere suggestion reflects ongoing controversy over the measurement of the price level.

Collecting the needed data is a massive effort. Representatives visit retail outlets and service establishments to collect approximately 80,000 price quotations every month. Establishing that products are comparable from period to period can be challenging; a dozen eggs may not change all that much, but computers do. Analysts try to adjust for this using hedonic modeling, which reduces a product to its component features. A stable retail price for an improving product will therefore be seen as deflationary, since you’re getting more for your money.
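
To make the mechanics concrete, here is a minimal sketch of a hedonic regression. The laptop features, prices and coefficients below are invented for illustration; this is the textbook idea, not the BLS’s actual models. Log price is regressed on characteristics, and the fitted model prices the quality improvement so it can be stripped out of the observed price change.

```python
# A minimal hedonic-adjustment sketch. All figures are made up.
import numpy as np

# Period-1 sample of laptop models: columns are CPU speed (GHz) and RAM (GB).
X1 = np.array([[2.0, 4], [2.5, 8], [3.0, 8], [3.5, 16]])
p1 = np.array([500.0, 650.0, 700.0, 900.0])

# Fit log(price) = b0 + b1*GHz + b2*RAM by ordinary least squares.
A = np.column_stack([np.ones(len(X1)), X1])
beta, *_ = np.linalg.lstsq(A, np.log(p1), rcond=None)

# Period 2: the $700 model is replaced by a better machine at the same price.
old_specs = np.array([1.0, 3.0, 8.0])    # intercept, GHz, GB
new_specs = np.array([1.0, 3.5, 16.0])

# Implied value of the quality improvement under the fitted model.
quality_uplift = np.exp(new_specs @ beta) / np.exp(old_specs @ beta)

# Quality-adjusted price relative: an unchanged sticker price for a
# better machine registers as a price decline.
adjusted = (700.0 / 700.0) / quality_uplift
print(f"quality uplift: {quality_uplift:.3f}")     # about 1.30
print(f"adjusted price relative: {adjusted:.3f}")  # about 0.77, i.e. deflationary
```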

Services present their own challenges for inflation calculations. A prominent example is something called owners’ equivalent rent (OER), which is the cost of lodging that homeowners implicitly pay to themselves. To assess this, analysts try to find a rented dwelling that is similar to an owned property; making a perfect match, though, is difficult. (Even as housing prices were rising rapidly 10 years ago, OER was subdued.) OER accounts for about 25% of the Consumer Price Index (CPI), so the accuracy of this component has a significant influence.
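
The weight arithmetic behind that claim is simple; the snippet below assumes the roughly 25% OER share to show how a hypothetical one-percentage-point error in measured OER inflation would flow through to the headline index.

```python
# Back-of-the-envelope pass-through of an OER measurement error to
# headline CPI. The weight is approximate and the error is hypothetical.
oer_weight = 0.25
oer_error_pp = 1.0                       # mismeasurement, in percentage points
headline_impact = oer_weight * oer_error_pp
print(f"headline CPI impact: {headline_impact:.2f} percentage points")  # 0.25
```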

Public concern over the measurement of inflation typically centers on two aspects. The first is the basket of goods and services used in the exercise and how the components should be weighted. The second is the tendency of policy-makers to focus on “core” measures that exclude food and energy prices. Different definitions of inflation can create pretty significant differences in outcomes. Since pay and benefits are indexed to inflation in many places, these distinctions are of more than academic concern.

[Chart: measures of U.S. consumer inflation, highlighting the volatility of the food and energy components]

The Federal Reserve favors the deflator for Personal Consumption Expenditures (PCE) partly because it adjusts dynamically to what people are buying. If expensive beef leads people to switch to chicken, the PCE will pick this up early and reflect a more limited increase in what households are paying for food. The CPI, by contrast, uses a fixed basket of goods that is updated only periodically.
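
A stylized two-good example makes the substitution effect concrete. The prices and quantities below are invented, and the formulas are the textbook Laspeyres, Paasche and Fisher indexes rather than the agencies’ full methodologies; the Fisher form is the building block of chained measures in the spirit of the PCE deflator.

```python
# Beef jumps in price and households substitute toward chicken.
# All prices (p) and quantities (q) are made-up illustrations.
beef  = {"p0": 4.00, "p1": 6.00, "q0": 10, "q1": 4}
chick = {"p0": 3.00, "p1": 3.00, "q0": 10, "q1": 16}

# Fixed-basket (Laspeyres) index: period-0 quantities priced at both dates.
laspeyres = ((beef["p1"] * beef["q0"] + chick["p1"] * chick["q0"]) /
             (beef["p0"] * beef["q0"] + chick["p0"] * chick["q0"]))

# Paasche index: the new, post-substitution quantities instead.
paasche = ((beef["p1"] * beef["q1"] + chick["p1"] * chick["q1"]) /
           (beef["p0"] * beef["q1"] + chick["p0"] * chick["q1"]))

# Fisher index: geometric mean of the two.
fisher = (laspeyres * paasche) ** 0.5

print(f"Laspeyres (fixed basket): {laspeyres:.3f}")  # 1.286, ignores substitution
print(f"Paasche (new basket):     {paasche:.3f}")    # 1.125
print(f"Fisher (chained-style):   {fisher:.3f}")     # 1.203, less measured inflation
```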

The justification for focusing on core inflation is sometimes more difficult to explain. After all, most of us spend a good deal of our incomes on food and energy. (These two categories account for almost 23% of the CPI.) Nonetheless, the costs of these elements are quite volatile, as shown in the chart above. To avoid getting “whipsawed” by frequent peaks and valleys in food and energy prices, monetary policy focuses on core measures.

Whatever definition is used, it is highly unlikely that the aggregate basket of goods and services used will exactly match what a given individual is buying. Some pensioners will receive more-generous cost-of-living adjustments than they might need, while others will feel shortchanged. There is little that can be done about this.

Over time, the various inflation measures typically move in sync. Trends seen in one can often foreshadow trends in another. Each of them has strong points and vulnerabilities. So developing a sense of inflation requires giving each of them due consideration; relying on a single indicator can be myopic.

How Many People Are Really at Work?

Employment would seem like one of the easiest things to account for: either you’re working or you’re not. Yet the two government surveys (known as the household and establishment series) that attempt to measure this don’t always agree with one another. It is not unusual to wake up on the first Friday of the month (when the new employment readings come out) and learn that the unemployment rate went down even though few new jobs were created. That happens because the two numbers come from different sources.

The household survey covers about 60,000 homes and is the basis for the unemployment rate we see every month. Part-time workers are counted as employed (even as they aspire to full-time positions) and those “discouraged” workers who have not been looking actively are not considered part of the labor force. These factors tend to understate unemployment in the current environment.
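
A stylized calculation shows how much these classification choices matter. The figures below are invented, sized roughly to a 155-million-person labor force, and the “broader rate” simply adds discouraged workers back in, in the spirit of the BLS’s wider U-series measures.

```python
# Made-up figures, in millions of people.
employed_full_time = 118.0
employed_part_time = 27.0   # counted as employed, even if seeking full-time work
unemployed         = 10.0   # jobless and actively searching
discouraged        = 2.0    # want work but stopped looking: out of the labor force

# Headline-style rate: discouraged workers are excluded entirely.
labor_force = employed_full_time + employed_part_time + unemployed
headline_rate = unemployed / labor_force

# Broader rate: discouraged workers join both numerator and denominator.
broader_rate = (unemployed + discouraged) / (labor_force + discouraged)

print(f"headline rate: {headline_rate:.1%}")  # 6.5%
print(f"broader rate:  {broader_rate:.1%}")   # 7.6%
```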

The sample size for the household survey is very small, considering that the American labor force includes nearly 155 million people. The geographic distribution of these surveys may not be ideal for national reporting, given the need to estimate joblessness for individual states. Consistency of responses is a challenge; some who are between full-time opportunities may report themselves to be self-employed, while others will indicate that they are not in the labor force.

The establishment, or payroll, survey encompasses 140,000 businesses and a substantial fraction of American workers (and is consequently considered more reliable). It can, however, be skewed toward large companies and accounts poorly for entrepreneurs.

Like many economic series, the employment indicators attempt to adjust for seasonal factors. Construction employment, for example, peaks in the spring and summer and wanes when the weather worsens. Past patterns are used for this purpose, but when seasonal extremes deviate from their moving averages (as they did during the Polar Vortex winter of 2014), the adjusted figures can show unusual variation.
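
For a feel for the mechanics, here is a textbook ratio-to-moving-average adjustment applied to invented quarterly construction employment. Official series use far more elaborate procedures (the Census Bureau’s X-13 family, for instance), so treat this purely as a conceptual sketch.

```python
import numpy as np

# Two years of made-up quarterly construction employment (thousands),
# with a recurring spring/summer bulge.
raw = np.array([520, 610, 640, 540, 530, 625, 655, 550], dtype=float)

# A centered 2x4-quarter moving average approximates the trend.
kernel = np.array([0.5, 1.0, 1.0, 1.0, 0.5]) / 4.0
trend = np.convolve(raw, kernel, mode="valid")   # aligns with raw[2:-2]

# Seasonal factor for each quarter: average ratio of raw data to trend.
ratios = raw[2:-2] / trend
factors = np.ones(4)
for q in range(4):
    matches = [ratios[i] for i in range(len(ratios)) if (i + 2) % 4 == q]
    factors[q] = np.mean(matches)
factors /= factors.mean()                        # normalize to average one

# Divide each quarter by its factor to strip the seasonal pattern.
adjusted = raw / factors[np.arange(len(raw)) % 4]
print(np.round(adjusted, 1))                     # a far smoother series
```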

Results from the two employment surveys tend to move in the same manner over time, but monthly variations in reported job creation between them can be sizeable.

The employment report is arguably the most important one we receive each month. The challenges inherent in assessing labor supply and utilization are especially topical today, as the Federal Reserve attempts to determine how far away we are from full employment.

A Survivor’s Guide

It is said that we’re best off not knowing how legislation and sausage are made. The assembly of economic metrics might fall into the same category. The litany of measurement problems described above could certainly lead some to dismiss the data as unreliable.

But this really isn’t an option for investors. So here are some tips for making the best of what we have to work with.

  • Don’t overreact to a single release. Look at variables over time, and place them into perspective. Some announcements come with error bands around the data point, which are worth reviewing.

  • Be aware of paradigm shifts. When technology advances, new taxes or regulations take hold, or natural disasters strike, discontinuities arise in data series. It can take time for a new normal to be established; in the interim, we need to mentally adjust for the distortion.

  • Connect the dots. If something is truly trending, we’ll see confirmation across economic indicators and corroboration from anecdotal information. Until the picture is complete, though, caution should be applied before reaching a conclusion.

  • Understand the source. Some economic data, and a lot of interpretation of the data, is offered by people or organizations with a particular point of view. Carefully selected indicators, reviewed over carefully selected time periods, can support a wide range of conclusions. Best to latch onto impartial sources wherever possible.

It is sometimes scary that so much is riding on information that has so many flaws. Additional funding for economic data would certainly help (we spend a pittance on this effort in the United States), and efforts to capitalize on what “big data” might do to help in this area should be extended. Perfect precision is unattainable, but progress is certainly possible.

It’s been said that the definition of delusion is an economist who issues forecasts with decimal points. The same is true of anyone who trusts short-term movements in economic indicators. Yet the data can be extremely useful when placed into the proper context. We’ll look forward to continuing our efforts to help you separate the signal from the noise.

The opinions expressed herein are those of the author and do not necessarily represent the views of The Northern Trust Company. The Northern Trust Company does not warrant the accuracy or completeness of information contained herein; such information is subject to change and is not intended to influence your investment decisions.