With President Biden’s proposed $1.9 trillion package looking likely to pass through budget reconciliation, it’s worth considering the macroeconomic consequences of such massive spending. This is on top of the $2.2 trillion CARES Act in the spring of 2020 and a further $900 billion of pandemic-related relief passed right at the end of 2020. The bottom line is that this new spending is unlikely to cause an inflationary event or meaningfully replace private sector investment.

Full Employment

To start, it’s helpful to understand the framework that the Federal Reserve, Congressional Budget Office, and most mainstream economists use to think about when an economy is “overheating” and when we should fear inflation.

Economists have a concept called “full employment” – the point where everyone who wants a job and is capable of working has found a job. Full employment does not mean an unemployment rate of 0% because it includes both “frictional” and “structural” unemployment: Those who are in between jobs in the natural churn of a dynamic labor market, and those whose skills do not match the needs of the economy right now (we’re not too worried, rightly or wrongly, about people whose skillset is exclusively making VCRs). When the unemployment rate is higher than full employment, economists say the labor market has “slack.” Slack is idle labor sitting around, not fulfilling its full potential. These are people who could be productive and want to work, but for whatever reason are not employed. During a recession or financial crisis, you can easily observe ample slack: Tons of people are out of work who only recently were very capable of working.

A recession leads to what Keynes called the “paradox of thrift,” where an economic slump causes businesses and households to be risk-averse and cut back on spending and investment. This means less aggregate spending in the economy, which leads to fewer people being employed, and it becomes a vicious cycle. But when there’s slack in the labor market, government spending can put these people to work. This increased spending will, in theory, get more people back to work and get the labor market back to full employment. It will also give people more confidence to spend again. The Federal Reserve can also use monetary policy to increase investment. In the simplest explanation: The Fed makes it cheaper to borrow and easier for banks to lend out to households and businesses.

But what happens if we try to go past full employment? This is the concern from skeptics of the latest $1.9 trillion fiscal package. When everyone who wants a job and is capable of working has a job, businesses need to compete with each other for workers by raising wages. In a lot of ways, this is great! When we have a “tight labor market,” bargaining power shifts to workers and they can demand higher wages. When there’s a lot of slack, workers have less bargaining power because employers have a lot of readily willing people to choose from and don’t have to compensate as well. But when we’re at full employment and there continues to be an increase in spending, the economy is then “overheating.” Businesses raise wages and then raise their prices. When both of these are going up, we have what is popularly called “inflation.”

Inflation

It’s important to take a slight detour here to describe what inflation is and what it is not. Usually in the press we see inflation defined by the year-over-year increases in the Consumer Price Index, which is a price index of a basket of stuff that your typical American buys. But a big jump in CPI on its own is not inflation. Sometimes it’s because of mathematical quirks, sometimes it’s because of a relative price change. For inflation to be true inflation, it needs to be what was described above: A persistent acceleration of both wages and prices.

You might remember that in the spring of 2020, oil prices on some financial markets went below zero. This is important because when CPI is reported for March or April in 2021, it will be expressing a change compared to the same month in 2020. If energy is insanely cheap or basically free in 2020, it’s going to come across as a big spike in the 2021 numbers. Not because March 2021 has oil spiraling upward to unreasonable numbers, but because it’s being compared to such a low number. This isn’t a purposely deceptive trick anyone’s pulling; it’s just the math of it. But if oil in March 2022 is $60/barrel (roughly where it is now), the year-over-year CPI for March 2022 will show little effect from oil, because the comparison base will be back to a normal price. In this way, it’s helpful to think of a big shock like the drop in oil in the spring of 2020 as a one-time event and not true inflation.
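
To make the base-effect arithmetic concrete, here’s a minimal sketch – with made-up index values, not actual BLS data – of how a depressed 2020 reading inflates the 2021 year-over-year number even when prices have merely returned to normal:

```python
# Hypothetical energy price index levels (illustrative only, not actual BLS data).
energy_index = {
    "2019-04": 100.0,   # normal
    "2020-04": 55.0,    # pandemic collapse in oil prices
    "2021-04": 100.0,   # back to roughly pre-pandemic levels
    "2022-04": 100.0,   # unchanged a year later
}

def yoy_change(index, month, prior_month):
    """Year-over-year percent change between two index readings."""
    return (index[month] / index[prior_month] - 1) * 100

print(yoy_change(energy_index, "2020-04", "2019-04"))  # -45.0%: the collapse
print(yoy_change(energy_index, "2021-04", "2020-04"))  # +81.8%: the "spike" is just the low base
print(yoy_change(energy_index, "2022-04", "2021-04"))  #   0.0%: by 2022 the base effect is gone
```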

Many economists like to look at “Core CPI,” which takes out the more volatile components of food and energy. The motivation, I believe, is to get a better sense of price levels by stripping out the noisier aspects. But there can still be some elements that, again, are one-time events and not true inflation. For example, the CARES Act in the spring of 2020 imposed moratoriums on evictions. Rent is a solid chunk of the CPI basket – after all, it’s a big portion of the cost of living for pretty much everyone. But during this time, people who were not paying rent for whatever reason were showing up in the data as paying zero in rent. The cost of rent was not zero in any meaningful sense of how we measure prices, but it did cause downward pressure on that part of the Consumer Price Index. When the moratorium is lifted, it might appear that rent for those people goes from zero to $1000/month, or whatever they had always been charged. This could cause the appearance of rent skyrocketing, bringing up CPI and giving an impression of inflation. But it’s not! It’s just a change in policy. So a one-time oil shock or a policy initiative like rent moratoriums are two examples of things that can cause illusory spikes in the metrics we tend to use to measure inflation. Economists and many financial market participants are aware of these things, but it’s still important to acknowledge that looking purely at CPI is not a good gauge of the level or existence of inflation.
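
A similarly minimal sketch of the rent mechanic – using a stylized two-component index with made-up weights and prices, not the actual CPI methodology – shows how a heavily weighted component snapping back from a moratorium-induced zero can move the headline number without any true inflation:

```python
# Illustrative two-component price index: rent and everything else.
# Weights and prices are invented for exposition, not actual CPI weights.
RENT_WEIGHT, OTHER_WEIGHT = 0.33, 0.67

def index_level(rent_price, other_price, base_rent=1000.0, base_other=1000.0):
    """Weighted index relative to a base period (base period = 100)."""
    return 100 * (RENT_WEIGHT * rent_price / base_rent +
                  OTHER_WEIGHT * other_price / base_other)

before = index_level(rent_price=1000, other_price=1000)  # 100.0
during = index_level(rent_price=0,    other_price=1000)  #  67.0: measured rent of zero drags the index down
after  = index_level(rent_price=1000, other_price=1000)  # 100.0: moratorium lifted, rent "returns"

print((after / during - 1) * 100)  # ~49% apparent jump, driven entirely by the policy change
```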

Finding Full Employment

But back to our normal programming. It’s difficult to know exactly where full employment is. How do we know when everyone who wants a job and can work has one? It might sound obvious, but it’s not. There are endless sources of noise that can complicate “everyone who wants one” and “everyone can work.” For example, a weak labor market can make people who feel their options are sub-optimal go back to school. Or someone out of work in their early 60s may be nudged into early retirement because they give up on trying to work a few more years. Because both of these groups aren’t actively “looking for work,” they are not considered unemployed. And how do we really measure who is in school for “lack of job” reasons and who’s there despite good job opportunities? Same with early retirement – did someone do this because they’re financially healthy and are happy to retire now, or because they have no good option? In most instances, these groups have a combination of both forces. And usually the last people to get hired in a tight labor market are those who are seen, for whatever reason, as less desirable to employ – think of people who have been out of the labor force for a long time or the previously incarcerated. In a tight labor market, an employer will bet on hiring these groups that are seen as riskier hires. Otherwise, these groups can become discouraged and stop looking for work altogether, further complicating the goal of knowing when “everyone who wants a job can get one.”

There are a lot of signs that previous estimates of what unemployment rate constitutes “full employment” were very wrong. In December 2019 (pre-pandemic world), the Fed had thought the unemployment rate associated with full employment was 4.1%. And this was in contrast to their projection exactly two years earlier of 4.6%. The historical idea of what unemployment rate represented full employment had to be continually revised downward because we kept getting the unemployment rate lower without any spike in prices. In theory, any decrease in the unemployment rate past that should have led to overheating, and an inflationary event where businesses have no choice but to raise wages and prices in response to there being no one new to hire. But it didn’t happen. The US got its unemployment rate down to 3.5% in February 2020 with CPI and PCE (a different measure of the price level) never going above 2.5%. Furthermore, wages started to tick up (this is good!), suggesting a tightening labor market that wasn’t being accompanied by consumer price increases.

We can look at a few indicators that, in hindsight, show us that the labor market still had room to tighten at this point in time, that there was still slack. First, consider the “Employment to Population” ratio, which is a measure of what percentage of people above the age of 16 are employed.

You can see that February 2020, at 61.1%, was lower than the pre-financial crisis peak in January 2007 of 63.3%, and in fact lower than anytime between February 1987 and December 2008. Remember that underlying that trend is a steady increase since 1950 in the female participation rate – the percentage of women above 16 who have a wage-paying job or are actively looking for one. The female participation rate more or less plateaued around 2000, and we can probably interpret that as the changing cultural forces of women working reaching somewhat of a ceiling (for now, maybe).

But the increasing participation of women means that those aggregate employment-to-population data include a huge decrease in the employment-to-population ratio for men. Take a look at these charts:

Is this because lots of men have decided to stay home and take care of the kids? Highly doubtful there’s strong data to support that explanation for such a huge drop in the last half century. Are more men deciding to attend higher education? Again, no. Any statistic will be an incomplete snapshot of whatever it is you’re trying to measure, but I think the main takeaway here is that a lot of men were not employed who wanted to be. (This fits well into the narratives suggesting lower marriage rates are in part due to a shortage of “marriageable men,” and also “deaths of despair.”)

In fact, I’d like to put forth that we haven’t been at full employment or full capacity for quite some time. Remember that, in this framework, pushing past full employment should produce an acceleration of prices. And the Federal Reserve’s target for price increases is 2% annual changes. This means that with prices increasing at sustained levels faster than that, they’d start to freak out and raise interest rates to stop the economy from overheating. Up until that point, they’re (in theory) happy to let the economy keep getting hotter. So take a look at this chart of the yearly changes of the relevant price index the Fed likes to use.

There were little momentary blips above 2% in the last few decades but nothing we’d consider an inflationary event. In fact, the last time Core PCE was at 2.5% was the first quarter of 1991. There are a number of theories on why the economy hasn’t run hot enough to get there for so long – I take a stab at it later on in this post – but the main takeaway is that we don’t have good data suggesting the economy has operated at full capacity at any point in a really long time.

This is all important because when economists try to consider what level of increased government spending will lead to inflation, they look at the “output gap” – the difference between what our economy is producing and what the economy is capable of producing. The Congressional Budget Office, the Fed, and others have an estimate for what “full capacity” GDP is, now typically benchmarked off of where we were in February 2020. The problem is that if we weren’t at full employment or full capacity in February 2020 – or perhaps at any point in the last couple of decades – then trying to get back to February 2020 levels of employment/GDP will still leave us short of what we want. February 2020 and the summer of 2007 may have been peaks in their respective cycles, but that doesn’t mean we were at full capacity then. So trying to fill the output gap relative to where our economy would have been on trend after February 2020 without the pandemic is still going to be an undershoot of full capacity.
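
Here’s a quick sketch of the output-gap arithmetic, using round hypothetical numbers rather than actual CBO estimates. The point is that if “potential” GDP is benchmarked to a February 2020 economy that itself had slack, the measured gap understates how much room there really is:

```python
# Hypothetical GDP figures in trillions of dollars (illustrative, not CBO numbers).
actual_gdp              = 20.0   # what the economy is currently producing
potential_feb2020_trend = 21.0   # "full capacity" benchmarked to the Feb 2020 trend
potential_true          = 22.0   # capacity if Feb 2020 itself was short of full employment

def output_gap(actual, potential):
    """Output gap as a percent of potential GDP (negative = economy below capacity)."""
    return (actual - potential) / potential * 100

print(output_gap(actual_gdp, potential_feb2020_trend))  # about -4.8%
print(output_gap(actual_gdp, potential_true))           # about -9.1%: bigger gap if the benchmark was too low
```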

How to Know It’s Too Much

The financial markets now are expecting inflation in 5 years to be just over 2%.

This might look like a sign to sound the alarm, but it’s just above 2%. And this is likely pricing in the $1.9 trillion package passing, which means that almost $5 trillion in extra government spending from the pandemic – about 25% of the annual output of the United States – has made a pretty minor blip in the inflation expectations rate.
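
The market expectation cited above is typically read off the “breakeven” inflation rate – the gap between nominal Treasury yields and inflation-protected (TIPS) yields of the same maturity – and the 25%-of-output figure is simple division against GDP. A minimal sketch, using illustrative yields (not actual quotes) and a rough $21 trillion GDP figure:

```python
# Illustrative yields (assumed for exposition, not actual market quotes) at a 5-year horizon.
nominal_5yr_yield = 0.0060   # 0.60% nominal Treasury yield
tips_5yr_yield    = -0.0150  # -1.50% real (TIPS) yield

# Breakeven inflation: the inflation rate at which the two bonds pay the same.
breakeven = nominal_5yr_yield - tips_5yr_yield
print(f"5-year breakeven inflation: {breakeven:.2%}")   # ~2.10%, i.e. "just over 2%"

# Rough size of pandemic fiscal spending relative to annual output.
pandemic_spending_trillions = 2.2 + 0.9 + 1.9   # CARES + Dec 2020 relief + proposed package
us_gdp_trillions = 21.0                         # rough annual US GDP
print(f"Share of annual output: {pandemic_spending_trillions / us_gdp_trillions:.0%}")  # ~24%
```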

Another cause for concern from spending trillions of dollars from government borrowing is the potential for “crowding out.” The idea here is that government spending through borrowing uses resources that would otherwise go to private investment or consumption. If the government is taking money saved by households or businesses, money to invest becomes more scarce, and so the cost of borrowing for everyone else becomes more expensive – interest rates go up. But in the case of a slack labor market, where the “paradox of thrift” rules, households and businesses have idled resources to save for a rainy day. Government spending puts those resources back to work.

The political discourse has always been mindful of this crowding out, so we’ve always felt like we’ve gotten to the brink of it without really getting there. The Obama stimulus in 2009 was less than $800 billion and it was enough to give rise to the Tea Party. The dire circumstances of the pandemic have given us an experiment to see just how far we can go. Not since World War II has there been such urgency to get money out the door in such a big way. What’s then amazing is how small $800 billion is compared to what the Federal Government has done already, even without the newest potential $1.9 trillion package. And yet…the world hasn’t ended! Interest rates have budged up a little when you look at this chart showing the interest rate on 10-year bonds over the last two years:

Whoa there! That’s an upturn for sure. But then zoom out for the longer-run picture and that blip up is…not much.

I think the takeaway here is that there has always been a huge amount of excess saving in the world, and we never knew how much room there was for the government to borrow because we never wanted to test where the edge was. Turns out there was a lot more money willing to go to the US government than we expected. But admittedly, looking purely at nominal interest rates is an oversimplified way of gauging the level of crowding out or pressure on the public fiscal situation. A discussion for another day, too, is that government spending is not an apples-to-apples comparison to private spending: The discretion of the different actors can change the quality/value of what that money is being spent on. But to me it seems very likely that wherever the “edge” is for when government borrowing will lead to crowding out or inflation, we are not at that point right now.

This Time is Different: Capacity Constraints And Household Balance Sheets

What’s indeed very different about our economy right now compared to 2008, 2001, or the vast majority of economic downturns is that our capacity is being restricted by the pandemic. Businesses and households are fully capable of doing things they aren’t doing, not because anything in the economic machinery is broken, but because they or the government have deemed the public health risk is too great for them to operate as they would otherwise. And this is the most legitimate concern for those worried about inflation: Our capacity to produce is limited by the pandemic, so previous levels of income chasing less production will lead to a spiral upward in prices. But this is overstated for a number of reasons.

There’s been a lot of suffering – economic, mental, and of course in terms of public health – in the last year. But it’s important to consider the national aggregates for things like savings and income. Because the suffering has not fallen on everyone equally, the national picture can look quite surprising, namely because those in the top quintile have more or less kept their jobs while saving a lot more money. Take a look at this chart from tracktherecovery.org about the employment picture for top earners versus the lowest earners.

The number of jobs has actually gone up for the highest earners. And then take a look at the national savings data:

Wow, that’s wild! This national number shows that a) people who maintained pre-pandemic income ended up saving a lot of money because they weren’t able to spend on vacations, restaurants, and a number of expensive things they’d otherwise enjoy; b) the fiscal packages so far have in part given money to a decent number of people who don’t have an immediate need to spend it; c) it’s a once-in-a-century pandemic with a huge recession, so people are going to save in the event that they may lose their job sometime soon.

In fact, nationally, income is looking pretty good!

Pent-Up Demand

So back to the matter at hand: With 2 trillion dollars in personal savings, will there be a mad rush of spending once the economy opens back up later this year, once government restrictions ease up, vaccinations get to a critical point, and (hopefully) most of the pandemic is in the rear view mirror? Well, let’s consider where this “pent-up demand” will go. To the extent that people will spend their saved money, where could it go? The industries most impacted by the pandemic aren’t going to be back to where they were in February 2020. Restaurants won’t all reopen, lots of places have gone out of business, supply chains have been decimated. So if people all try to spend like they used to on industries that exist at 50% of their previous levels, those industries can’t keep up with the demand and they’ll have to raise prices. This is what the inflation-hawks are worried about.

While people may splurge on going out to eat a bit more, going on a vacation, or going to a concert, they’re not going to make up for a year’s worth of lost time entirely. They’re not going to the bar to get 10 drinks instead of 2 or get 4 haircuts to make up for the only one they got in 2020. They’re not going to go on every vacation they would have otherwise taken between March 2020 and August 2021. Or see a year’s worth of movies in one month. And if they did, think about what this would mean: movie tickets that are usually $15 would now be $50, and restaurants with $7 beers would now charge $15. But: Most of the places with remaining capacity issues have goods and services with what economists call elastic demand. You’re not going to spend $50 on a movie ticket. Personally, I’ll probably be willing to spend more on some travel just to get the hell out of New York. But for an inflation event to occur, you’d need to see these industries have a huge run-up in prices. More likely, people will shift to spending more on a new streaming service rather than pay $50 for a movie ticket. They’ll have a backyard bbq with some grocery store beers instead of having a $15 beer at a bar. Or they’ll get a Peloton instead of a more expensive gym membership. So the bottom line is that we’d see people shifting their consumption to industries that do not have the supply constraints of the damaged industries. Contrast that with the 1970s: when oil prices surged, people weren’t going to stop heating their homes or driving to work. They probably cut back on gas consumption a little, but their demand was “inelastic,” and thus their consumption was not very sensitive to price changes.
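
The elastic-versus-inelastic distinction can be made concrete with the standard price-elasticity-of-demand formula. Here’s a minimal sketch, where the quantity responses are purely assumed numbers for illustration, not estimates from any study:

```python
def price_elasticity(pct_change_quantity, pct_change_price):
    """Price elasticity of demand: % change in quantity demanded / % change in price."""
    return pct_change_quantity / pct_change_price

# Elastic good (movie tickets): buyers switch to streaming when prices rise.
# Assumed response: a 20% price increase cuts tickets sold by 50%.
print(price_elasticity(-50, 20))   # -2.5  -> |elasticity| > 1, demand is elastic

# Inelastic good (1970s gasoline): people still drive to work and heat their homes.
# Assumed response: a 50% price increase cuts consumption by only 10%.
print(price_elasticity(-10, 50))   # -0.2  -> |elasticity| < 1, demand is inelastic
```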

Another element is that, just like every other economic downturn, people will be skittish about spending all their savings. There’s real psychological scarring from a recession and I would think that, more than previous recessions, this one was really in your face in terms of lifestyle and news, no matter what your income situation has been.

What’s the deal with all these inflation worries?!

If you look back at the charts showing the year-over-year changes in price levels, you might wonder why policymakers have been so worried about inflation. It’s true that in the typical framework, once inflation gets going, stopping that process can be pretty painful. So getting to the edge and then realizing it’s already happening means…it’s too late. But I think that too many of the powers that be have been scarred by the stagflation of the 1970s, when low growth and high inflation wreaked havoc on the economy. (As far as I can tell, not nearly as painful as a financial crisis recession, or the aggregate misery from having an economy with millions of underemployed people.) The Federal Reserve hiked rates tremendously, causing a really bad recession – though seeming to choke off inflation – in the early 1980s and making central bankers vow never to relive the episode again. And many of the people who lived through that were the PhD advisers to the people who are now our philosopher-kings of economics. But for the current generation of economists, there’s just no lived experience that justifies being scarred by a painful inflationary episode. A 40-year-old would have been a newborn the last time core PCE year-over-year changes were at 3%. In a sense, the older generation of economists is too busy fighting the last war, and the younger one is much more concerned with inequality, student debt, and sluggish growth. To this extent, one should be optimistic about the future in terms of recalibrating inflation fears.

And the Federal Reserve has made it clear it will tolerate inflation above 2%, so don’t expect interest rate hikes any time soon. Jerome Powell, the current Fed Chair, has testified that he acknowledges the Fed has been wrong about its estimate for full employment. Lael Brainard, who is on the Federal Reserve Board of Governors, has also stated that the Fed is baking a revised estimate of full employment into its projections. On the monetary policy front, we can be hopeful that we’ll be closer to running the economy as hot as it can get without inflation fears.

On the fiscal front, this $1.9 trillion may not be perfect in its composition – nor an exact filler for any output gap – but we shouldn’t fear the package leading to harmful inflation or meaningful crowding out of private investment.

I wrote a review of Matt Yglesias’s recent book One Billion Americans for the LA Review of Books. Check it out.

Excerpt:

One Billion Americans has a lot of excellent policy ideas: increasing the American population through Yglesias’s prescriptive policies would be beneficial and morally just. And the advantages of population growth themselves should be given more attention. But for those already in agreement with Yglesias, it’s difficult to find much fundamentally new in his arguments or framing. For those in opposition, the book is unlikely to convert.

Yglesias waffles between arguments he believes are pragmatic and those that are encouraging “thinking bigger.” Both of these have value, but he switches between them at random times, operating seemingly at whatever level is the path of least resistance. Are we meant to take his idea of a billion Americans seriously but not literally? His ideas can support a much higher population even if it’s not a billion, but he struggles to establish when it’s time to think pragmatically and when to just think bigger.

I recently finished Money: The True Story of a Made-Up Thing by Jacob Goldstein, co-host of NPR’s Planet Money podcast. As with Planet Money, the book has something of value for people well-versed in economic history/know-how as well as those completely new to anything economics. The book also shares Planet Money’s uncanny ability to use (often quirky) stories to make a point. I found Money to be a short, digestible, and – given its length – surprisingly comprehensive look at the history behind society’s evolving definition of, use of, and attempts to rein in the power of money.

The biggest takeaway for a reader of any background is found in the book’s conclusion: “…the reminder that there’s nothing natural or inevitable about the way money works now. We know money will be very different in the future; we just don’t know what kind of different it will be.” We all have so intensely internalized what “money” looks like, what it can and cannot be, what government actions will cause inflation, that even people swimming daily in finance and economics forget how malleable the nature of “money” has been.

This valuable lesson comes from the book’s rich history of what societies have used for money and how they’ve dealt with certain difficulties along the way. The barter economy Adam Smith described in Wealth of Nations, the book points out, never really existed. People always exchanged their stuff for what they wanted from other people. There wasn’t any currency as we think of it at first, but there was reciprocal gift-giving. “A power move, like insisting on paying the check at a restaurant,” as Goldstein puts it.

I found it notable that, despite not having the bustling marketplaces we see everywhere in the world today, it was still commerce that brought the earliest communities together: The Greek agora was meant to act as a meeting place for civic discussion, with a marketplace on the side where people could exchange their wares. “In the long run, shopping won out over public discourse.” I’ve always thought people underestimate the power of commerce to bring communities together, and this seems to be an illustration of that power. (Sadly perhaps, people have shown themselves to be much more interested in going to the farmer’s market than town hall meetings.)

For as long as groups have declared power over others, they have collected taxes. But without a standard representation of value like money, the government in China around the 11th century collected things like cloth and grain. So everyone had the annoyingly inefficient responsibility to weave or grow to some extent just to pay taxes. The arrival of coin currency from the invading Mongols allowed Chinese people to specialize in their crafts and pay taxes by focusing their time on what they did best. This allowed China to flourish centuries before the future economic powerhouses of Europe.

But the man who drove out the Mongols and eventually founded what became the Ming Dynasty wanted to Make China Great Again, and that involved getting rid of the “money” system that had allowed China to be the world’s most advanced nation by the late 14th century. Soon China went back to the cloth-and-grain system and regressed tremendously. The removal of money from China is not exactly the entire story here, but the book shows an interesting experiment about the economic impacts of introducing and then removing money.

The great lesson from this time is that China had, for centuries, thrived under a system where money was not tied to anything like a precious commodity. Money was worth something because everyone else just believed it was worth something. And this is essentially what defines our monetary system today. The dollar cannot be eaten or boiled down into jewelry. Short of the government accepting it as payment for taxes, it wouldn’t have any value if everyone decided one day that it no longer had value.

But historically the idea that money was tied to nothing but the government’s enforcement (sometimes by death) of its value typically didn’t sit well with people. Linking currencies to gold or silver was thought to establish credibility, and somewhat limit the ability of warmongering sovereigns to inflate their way out of any problem. (Of course even under such regimes, people would fudge the weight of their coins, and sovereigns often spent their way to fiscal ruin.) The United States dollar was no exception. The convertibility rate of the dollar to gold was essentially fixed, giving predictability to the international financial system and confidence to the users of dollars.

Fast forward to the late 1920s. A banking “panic” – where people all at one time were worried their banks would fail and their deposits would be wiped out – started as any of them did around then. People decided they’d convert all of their dollar deposits into gold. This was before the FDIC insured bank deposits, so people thought the safest place for their savings was out of the bank and into gold. But banks only have so much gold, and as the gold supply dwindled, it created a vicious feedback loop where people’s fear fueled a run on the banks’ gold reserves.

The Federal Reserve – only recently created in an effort to smooth over panics and financial crises like this – decided to raise interest rates. This was what the playbook said to do for countries on the gold standard. Raising interest rates was a way to incentivize people to keep their money in banks and prevent the impending bank runs that would cause a financial crisis. The higher rates meant depositors would get higher returns. But raising interest rates also makes investment more difficult. And when there’s a contraction of the money supply, economic activity shrinks.

And this, dear readers, is how the Federal Reserve turned an otherwise garden-variety recession into the Great Depression. FDR, against all the doomsday predictions of his advisors, took the dollar off of gold. It allowed for a speedier recovery and showed, yet again, that our assumptions about what money needs to be can make us quite dogmatic about what will happen when its nature is changed. FDR, according to Goldstein, stopped the bank runs with his comforting fireside chats. Indeed, when it came to bank runs, the only thing we had to fear really was fear itself. Once everyone thought banks weren’t at risk, they stopped pulling out their money and it became a self-fulfilling prophecy. Yet again, the shared trust in the credibility of the dollar and banks was what gave the system its value.

The history of central banks has shown societies’ delicate experiments with how to best prevent financial crises, and the United States is no exception. Alexander Hamilton pushed for a “national bank” at the country’s outset. Populist Andrew Jackson thought that banks and the coastal elites who ran them had too much control over the inland farmers and countrymen he claimed to represent. So he got rid of the national bank.

What dominated for decades was a period of “free banking,” where any bank could print its own currency. At one point, there were 8,370 different kinds in circulation. A helpful reference book would tell merchants the value of each bank’s bill and, in an effort to prevent counterfeiting, the appearance of each bank’s currency. It sounds like total chaos, and in some ways it was. The system led to financial panics every decade or so, and for good reason the US went back to a national bank – the Federal Reserve – in the early 20th century.

But the frictions in the system weren’t as catastrophic as you might think. As Goldstein says, “Travelers typically lost around 1 or 2 percent when they exchanged paper money, in the same ballpark as the fee I pay today when I can’t get to my bank and have to use another bank’s ATM.” Also, perhaps surprisingly, there weren’t all that many totally fraudulent banks. Today, imagining 8,000 different corporations printing their own Monopoly money sounds insane. But it worked better than you’d think!

Today, the Federal Reserve is made up of 12 regional banks and a Board of Governors, and it steers short-run interest rates through open market operations – buying and selling Treasury bonds. That might sound like a lot of confusing word soup to someone not totally in the mix of finance and economics. But just know that what the Federal Reserve does today is a little different than what it did pre-Financial Crisis, more transparent than what it did before the 1980s, and much different than what it did when the dollar was still chained to gold before the 1930s.

There is an important distinction between “money” and “currency.” Currency is the coins and bills we can hold in our hands. But money can be numbers on a screen and what banks do. Put another way, there is a concrete, tangible amount of currency, but the amount of money in the system is always changing. One of my dollars deposited in a bank can be lent out to someone looking to start a business. That businessperson takes the dollar and pays a construction worker to build an office. I still have a dollar in my account, the businessperson has a new office going up, and the construction worker will do something with the dollar they were paid. This “money multiplier” is what causes economic activity to thrive, and is in part what stopped China from reaching its full potential during the Ming Dynasty. But this distinction, or at least the ability to have the distinction, is pretty counter-intuitive and seemed primed for disaster for most of history.
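
To make the “money multiplier” concrete, here’s a minimal sketch of the textbook fractional-reserve arithmetic. The 10% reserve ratio is purely an assumption for illustration, not a claim about current regulation:

```python
def total_deposits_created(initial_deposit, reserve_ratio):
    """Textbook fractional-reserve arithmetic: each round the bank keeps
    `reserve_ratio` of a deposit as reserves and lends out the rest; the
    loan is spent and eventually comes back as a new deposit somewhere."""
    total, deposit = 0.0, initial_deposit
    while deposit > 0.01:                 # stop once the next round is negligible
        total += deposit
        deposit *= (1 - reserve_ratio)    # only the lent-out portion is redeposited
    return total

initial = 100.0        # $100 deposited at the bank
reserve_ratio = 0.10   # assumed 10% reserve ratio, for illustration only

print(total_deposits_created(initial, reserve_ratio))  # ~999.9: approaches the limit below
print(initial / reserve_ratio)                         # closed-form limit: the "money multiplier" = $1,000
```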

Even our definition of a “bank” should be flexible. Today, as with most of history, a bank does two things: it takes in and holds people’s money, and it lends money out. Why do banks need to play both roles? Money describes a scenario where some institutions are essentially “money warehouses” that store your money, while others take risks and lend to borrowers using the money of investors. The mismatch between people wanting to safely deposit their money and the risks banks take is essentially where many frictions in the financial industry lie.

The arrival of cryptocurrencies has shown a potential disruption to the idea that only the Federal government can control currency. If our dollar bills have value only because everyone else agrees they do, why would numbers on a ledger be any different? So far, the issue is that the fluctuating value of Bitcoin makes it hard to use as a “store of value.” I’ll admit that Bitcoin has been around much longer than I predicted, but it still has too many of the problems of historic attempts at money. Blockchain technology and cryptocurrencies may have some value for society in the future, even if they currently look unlikely to replace the fiat money system we have now. If there’s one lesson from monetary history and Money, it’s that you should never count out changes that at first glance look pretty absurd.

Money has been printed by governments and it’s been printed by private banks. It’s been backed by precious commodities like gold and silver and it’s been backed by nothing. It’s been represented as coins and bills and it’s been represented merely as numbers on a screen. Its confusing nature has caused financial panics even when there was nothing wrong. Our relationship with, control over, and treatment of the concept of money will continue to change. A book like Money shows us how we should learn to accept and accommodate that inevitable change.

Slavery in the United States is one of its many original sins. How can such an abhorrent institution be consistent with the ideals that America claims to uphold? Does the existence of slavery for centuries – including a large presence at the time of America’s founding documents – negate or supersede all other factors of US history?

The 1619 Project is a collection of written and audio pieces from The New York Times that aims to re-examine the history of the United States through its legacy of slavery. Beginning in mid-August and continuing slowly onward, the project takes its name from the year the first enslaved Africans were brought to the colonies that would become the United States.

The material in The 1619 Project should be seen as a valuable addition to the typical US history curriculum. To the extent that conservative critics are right that 1619 leaves out some important context, it’s only because the project’s value is not as a comprehensive overview of US history, and “what people are taught in schools” is so direly minimal anyway.

So how is it possible that slavery – something so universally abhorred today – can somehow make us fall into our partisan tendencies?

The Contributions

I read everything available at this point, listened to the handful of podcast episodes that have been released, and read a select number of the criticisms. There’s a lot to be learned from the material in the 1619 Project. The history of slavery casts a long shadow, in subtle ways, on facets of daily American life today, from music and traffic jams to healthcare and our democratic processes.

The lead essay of 1619, written by the project’s leader Nikole Hannah-Jones, tells of Hannah-Jones’s father always flying an American flag in their yard when she was growing up. She wondered for a long time why her father seemed so committed to America when the America of his childhood was “an apartheid state that subjugated its near-majority black population through breathtaking acts of violence.” Even after slavery was legally abolished, blacks fought tremendous legal and social forces whose sole intention was to keep them down. Yet against this context, her father believed in hard work and sought to shape America in a way that fulfilled its stated ideals.

This story sets the tone of 1619 and should be viewed as the project’s underlying message: As Hannah-Jones said on Ezra Klein’s podcast, “Those who would have the most right to hate this country have the most abiding love for it.” The rest of the project’s material should thus not be viewed through a lens of scathing criticism and a relentless quest to bring down the country’s credibility. The pieces in the project are stories from slaves and their descendants that show how this wretched institution impacts America’s music, healthcare system, and ethos to this day. Yet through all those centuries of injustice, slave descendants continue to embrace their America. Conservative critics miss the point by thinking any story that sheds a more honest light on our past is motivated by a desire to tarnish its reputation.

My memory of my US history curriculum from high school is not perfect. My recollection is that my schooling mentioned slavery’s role in our country’s history, gave a nod to the obvious contradictions between slavery and the Founding Fathers, and of course described its role in the US Civil War. But I don’t think I ever had a vivid picture of the brutality of slavery. And I don’t remember one story from the viewpoint of a slave.

By focusing more on the lives and experiences of the slaves and descendants themselves, 1619 changes the lens by which the reader sees the country’s past. History in school is narrated from a very politics-centric lens that looks at changes through the vehicle of who was in power and how those changes came to be. We heard a lot more about how Andrew Johnson screwed up Reconstruction than how the slaves themselves were actually living.

Exploring the long shadow of slavery’s history through a variety of verticals helps paint a more vivid picture of slavery’s legacy in America. The social safety net for blacks ran parallel to the superior system for whites through a variety of mechanisms: jobs that were disproportionately worked by blacks were left out of the Social Security Act, the American Medical Association barred black doctors, and medical schools excluded black students.

Kevin Kruse draws a connection from emancipation to a traffic jam today in Atlanta: “Once they had no need to keep constant watch over African-Americans, whites wanted them out of sight. Civic planners pushed them into ghettos, and the segregation we know today became the rule.” One need not look far to see how NIMBYism and hyper-restrictive construction laws have deep racial roots that stem from the desire to racially segregate, either explicitly or implicitly.

One piece by Wesley Morris, complemented by a solid podcast episode, tells how black music first crept into the American mainstream. Morris’s piece starts the history with T.D. Rice singing a song he first heard sung by a black man grooming a horse owned by a man with the last name Crow. The anecdote is meant to express the beginning of proto-gospel and other musical characteristics brought over from Africa creeping into the American mainstream, whereas previous white music derived mainly from Irish harmonies and classical music from Continental Europe. Rice’s lyrics are also where the term “Jim Crow” was born. It’s not hard to find traces of black music across all genres now, but the story shows just how deep these forgotten roots are.

The emphasis on black history also uncovers stories that are woefully under-appreciated: How many people know there was a “Black Wall Street” in Tulsa, Oklahoma that was burned down by white rioters in 1921? And how can one deny the parallel of voter disenfranchisement today with the ugly history of suppression found in Jamelle Bouie’s essay?

Industries run on the backs of slaves had a tremendous presence in the United States economy for centuries. In her podcast interview with Ezra Klein, Hannah-Jones notes, “By the eve of the Civil War, the Mississippi Valley was home to more millionaires per capita than anywhere else in the United States…the combined value of enslaved people exceeded that of all the railroads and factories in the nation.” Did you know that? Also under-appreciated compared to King Cotton: Queen Sugar was so profitable that in the antebellum era Louisiana was the second-wealthiest state in the country.

The Critics

I have taken a look at a select handful of the criticisms of the 1619 Project, mostly from conservatives who think it doesn’t give America due credit for what it did right and how it’s unique. In more ways than I predicted, I’m fairly sympathetic to some of these criticisms. By and large, we would benefit from knowing a greater context for a lot of these stories, especially to compare America to the history of other countries. Some of the causal connections between slavery and modern-day systems are at least oversimplified. Where the criticisms go wrong is in arguing that a lack of full context negates the stories or the points the pieces in 1619 make. Instead, the stories are a new angle from which to view the country’s past, despite not being a completely comprehensive look.

One way to interpret the framework of most critics is this: a) America was one of many – most – countries that had slavery, so slavery is not a unique identifier; b) the emphasis on slavery thus overlooks the things that actually made us unique. Rich Lowry leans on this thinking heavily in his numerous critiques of 1619, arguing that slavery was ubiquitous in world history and many countries had the slave trade longer and more intensely than the United States did. One point specifically, which I think most people don’t acknowledge, was that in the Western Hemisphere it was actually Brazil that was the most common destination for African slaves and the country that abolished the practice the latest. 95% of slaves transported across the Atlantic, he notes, went to countries south of the United States. And in Lowry’s words, “Both Brazil and the United States had slavery; only one of them had the Constitution and Declaration of Independence.”

Indeed, most people casually make the connection between the existence of free labor and America’s prosperity today. But because the institution was so ubiquitous across the Western hemisphere, it’s a leap to say the explanatory variable for America’s current riches is its history with slavery. In fact, if slavery were indeed the significant characteristic in the long-run for the bottom line of a country’s economic output, a country like Brazil would be the richest of them all, and Canada incredibly poor.

Slavery was more likely a force against a more productive economic order in the South. In Acemoglu and Robinson’s terminology, slavery is the ultimate “extractive” institution – a set of rules that does not plant the seeds of broad prosperity in the long run like “inclusive” institutions do, but instead soaks up revenues from a resource with no propensity towards increasing productivity. A focus on exploiting slave labor in order to sell natural resources meant the South invested less in things like public education and other factors that are better for the sustained productivity gains that ensure prosperity in the long run.

A counter to this, of course, is to note that slavery can cast a shadow on other countries too and still have an impact on our daily lives. So perhaps the right phrasing is to say that it is inaccurate to think of slavery as a unique American identifier, but rather as a very significant part of our history.

Andrew Sullivan takes issue with 1619’s statement that “our democracy’s ideals were false when they were written.” In his mind, focusing on the contradictions and hypocrisies of the Founding documents glosses over the ways they represented a seismic shift in how anyone would even think about the ideals of liberty and equality under the law. To his credit, I would say that we rightfully credit the Athenian ideals of democracy, more than two thousand years ago, for theorizing and developing a model of how democracy works. Of course, Athens had slaves, women had no rights, and whom Athenian leaders considered entitled to these ideals was very limited. We should hold a nation only centuries old to higher standards, but the point remains: Discrediting the philosophy of America’s founding because of the contradictions at the time is a step too far.

In fact, Hannah-Jones even proves the power of the ideals of the United States – no matter how contradictory or unrealized at the time of the founding – by showing that her father, a man not treated with the respect he deserves by American society, still perseveres to get America to the point that it lives up to its ideals. The ideals under the umbrella of American Exceptionalism – though still not fully realized – are at least powerful enough to make her father think this way. The contradictions are present and we must acknowledge them, but it’s hard to argue that the American ethos is not unusual compared to other countries.

Sullivan believes, though, that explaining everything through the lens of the history of slavery overlooks everything else. Indeed, the story of why America doesn’t have universal healthcare right now may be in part because of racist intentions, as argued in 1619, but a larger part of it is because of an odd history of price controls leading to employers offering insurance, and then status quo inertia making it politically difficult to scrap the system.

Sullivan:

Take a simple claim: no aspect of our society is unaffected by the legacy of slavery. Sure. Absolutely. Of course. But, when you consider this statement a little more, you realize this is either banal or meaningless. The complexity of history in a country of such size and diversity means that everything we do now has roots in many, many things that came before us. You could say the same thing about the English common law, for example, or use of the English language: no aspect of American life is untouched by it.

What Sullivan is thus arguing is not that he denies how slavery’s past works itself into every facet of our lives today, but that he views the 1619 Project as overemphasizing the role of slavery – perhaps even treating it as essentially monocausal – relative to every other force in our history. There may be some truth to what he’s saying, but I don’t view 1619 as trying to argue that its stories are the defining script of American history; I see them as a fresh set of perspectives on stories that have so far gone untold.

Hannah-Jones says in her interview with Ezra Klein that “Capitalism is not a system of morality,” and Klein responded that there are many countries that are just as capitalistic as the US but have different manifestations. I take issue with the idea that capitalism is inherently an amoral or immoral system. Read literally, yes, capitalism is indeed not a system of morality – our set of moral values and culture come separately from our economic system. Rather, I would argue that the market economy is a set of rules and institutions whose endpoint for production and morality is a result of the interaction between those legal frameworks and the culture found in the system. I imagine the conservatives who have criticized the 1619 Project and many proponents of the market economy take statements like this to be throwing a grenade into any hopes of finding common ground.

I’d be much more sympathetic to the arguments these conservative critics are making if they demonstrated more support for remedying the things we “all agree are terrible atrocities.” I believe it is possible to think simultaneously that a) the Founding documents were a huge leap in how the world thought about equality under the law and b) the racial wealth gap is the product of centuries of injustice that needs to be corrected. But too often writers like Lowry and Sullivan will, in the same sentence, talk about how terrible the treatment of blacks was in American history and then downplay the need for, or justness of, ways to correct those injustices.

The Brutality of American Capitalism

One piece by Matthew Desmond argues that an unusually high degree of brutality in American capitalism has its roots in slave plantations. This piece in particular irked me. He makes a lot of weakly supported connections and faulty analysis linking slavery to the contemporary American economy.

Desmond writes, “Like today’s titans of industry, planters understood that their profits climbed when they extracted maximum effort out of each worker.” This is a widespread economic fallacy that I can’t ignore. Adam Smith put this to rest as far back as 1776 in the Wealth of Nations. Smith’s contemporaries thought it natural that a firm would pay its workers only at subsistence level, believing workers’ struggles would motivate them to be more productive for fear of losing the ability to survive. Smith recognized that no, this was not the profitable approach: Paying workers more increases productivity because they can take better care of themselves, it reduces turnover, and workers won’t accept such direly low wages when they have outside options. Firms need to compete to some extent even for the lowest skilled workers. In modern-day terminology, “efficiency wages” suggest that paying workers more is a profitable venture. In this regard, slavery, for all its brutality and the poor health it imposed on the slaves, can be argued to have been counter-productive. In any case, the American workplace is not one where capitalists are soaking all the spirit out of their workers. Supervisors know that morale, culture, and overall health impact workers’ productivity.

I also wonder how tenuous the connections are between corporate practices today and their supposed roots in the slavery industry centuries ago. The point Desmond hopes to make is that our corporate structure and financial system still resemble the times of slavery in nontrivial ways. He claims that asset depreciation and even spreadsheets have their roots in the American plantation, and that workplace hierarchies displayed “a level of complexity equaled only by large government structures, like that of the British Royal Navy.” And that “enslaved people were used as collateral for mortgages centuries before the home mortgage became the defining characteristic of middle America.”

Does the use of Excel to increase efficient resource use, or having multiple layers of management at large corporations, really deserve to be tied to the history of slavery? And does the use of collateral for loans in our financial system today really mean we have a brutal system because slaves used to be held as collateral? Slave-run industries like cotton and sugar were a huge part of the economy for centuries. It only makes sense that changes to corporate governance were not excluded from these industries, and yes, slave-owners tried to be efficient. Steel was also used to keep slaves in bondage, but is it fair to connect our use of steel today to its use during slavery? Indeed, any industry that co-existed with slavery can be tied to it in some way; does that mean that our economy today has traces of its brutality? If train lines were accelerated to support the slave trade, is it fair to make a moral connection between train tracks now and the institution of slavery?

These connections are thus unconvincing to me, and they leave the reader asking why collateralized debt existed long before American slavery, or why other countries with long experiences of slavery don’t have the “brutal” economies America supposedly does. America may have rugged individualism in its national fabric – “brutality” in Desmond’s mind – but this ethos existed before and independent of the existence of slavery.

Overall

A deeper story needs to be told about America’s experience with slavery than “slavery is really bad!” 1619 makes a significant contribution by telling stories – with the personal touch of real people from history – of how slaves and their descendants experienced life, and how that legacy continues to impact America through various channels today.

One 1619 piece by Khalil Gibran Muhammad mentions that tourists to a local sugar plantation museum in Louisiana are warned by locals that the museum is misrepresenting the past. Earlier this year, stories were told of how tourists consistently complain about being reminded so much of the history of slavery. Against this context, it’s hard to argue that Americans wouldn’t benefit by learning more about slavery’s role in US history and confronting its brutality more vividly. It’s an inconvenient history for many people, but it’s vital to understanding our past.

I did not grow up around guns at all. As far as I remember, no one I knew in my extended network owned a gun. We did not hunt, we did not feel the need to have one for self-defense, and we didn’t go to a shooting range for recreation. I think I asked my immediate family in the last few years how many of them had even shot a gun before, and it was only about half. In a country of over 300 million people, I imagine there are a lot of people who grew up just like me. And it’s in this context that so many Americans are absolutely perplexed by the support for gun rights even after events like mass shootings. I’ve tried to understand the passion for gun rights that so many Americans have that leads to a stalemate in the legislative process. In the end, one’s view of whether gun ownership is a fundamental right is probably the determinant of how one looks at the entirety of gun legislation.

Imagine a right that you consider to be very sacred. Now imagine that 1) a bunch of people are harmed by an incident and a legislative abridgment of that right is thought to be a meaningful way to reduce future harm. Or consider 2) a slight abridgment of the right – a law that imposes restrictions on your sacred right only in very exceptional circumstances, but those exceptional circumstances can also be seen as the most egregious instances of this right.

Rather than continue in the abstract, I’ll apply the thinking in the above paragraph to actual issues. If you consider the civil liberties of privacy and protection from unwarranted government surveillance to be really important, will a terrorist attack or an increase in organized crime change how much you value them? Was the Patriot Act justified because of September 11th? Probably not. For people who view the 4th amendment as particularly sacred, a terrorist attack that killed thousands of people does not change their fundamental right to protection from intrusive government surveillance. People who are a little more waffley on the 4th amendment argued that we’d need to compromise a little for the sake of security. I mean, we have to do something. This applies to point number 1 in the paragraph above.

For point #2, consider this logic in the context of abortion access, specifically partial birth abortion. Partial birth abortion is incredibly rare, and reproductive rights advocates often point to it as a distraction from the overall issue, saying that it paints a misleading picture of what typical abortions really look like. And if one is of the mindset that abortion choice is a fundamental right, a partial birth abortion ban is chipping away at that right with a slippery slope towards bigger and more meaningful restrictions.

Maybe you can see where I’m going here with the analogue to gun rights. In scenario 1, a mass shooting does not sway gun rights enthusiasts away from their sacred right to bear arms. Other people’s abuse of guns does not change their right to own one, just as people planning a terrorist attack under the guise of privacy from FBI surveillance does not change my right to privacy. To them, even statistics showing how gun ownership leads to accidental deaths in the home or more suicides are irrelevant. The rush to “do something” in both circumstances – tighter gun control or the Patriot Act – was viewed by opponents as a knee-jerk reaction that encroached on fundamental liberties.

In scenario 2, one can see parallels between a partial birth abortion ban and an assault weapons ban. Most gun deaths are not caused by assault weapons, even though they seem like the least defensible guns on self-defense or hunting grounds. To gun rights advocates, legislation restricting or banning them is still an encroachment on a fundamental right to own a gun. And it’s only a matter of time before laws keep chipping away at which guns people are allowed to own until everything falls into a banned category.

I think there’s also some truth to the idea that those seeking partial birth abortion bans do see them as a roadmap to outlawing abortion entirely. Similarly, many of those seeking assault weapons bans would be open to the idea of outlawing gun ownership entirely – or at least they’re comfortable moving in that direction.

In these scenarios, the important difference is where your “square one” is. If your square one is that gun ownership is a fundamental right, all logic flows from that: restrictions on gun ownership are almost always dubious, and a huge burden of proof is placed on any legislation that restricts it. If your square one does not include a particularly passionate defense of gun ownership, then you probably see all gun control as completely reasonable.

I of course am in the latter camp, struggling to see why people are willing to tolerate a society that allows so much gun violence. But the more important point is that I don’t see why gun ownership is held so sacred by so many people. And if the starting point for such a critical mass of Americans is holding this right so close to their hearts, is there much room for meaningful overlapping compromise? If the 2nd amendment is so important to so many in the electorate, will any significant effort to limit ownership be seen as an unjustified violation of rights? Maybe people are less extreme than we assume they are, and the binary of “pro-choice” and “pro-life” is as exaggerated as the “gun rights” vs “gun restrictions” dichotomy. I hope that’s the case.

Tyler Cowen tries to argue in his latest book Big Business that big businesses are in reality not the villains they’re often made out to be and, in fact, deserve our praise. While the book presents strong counter-intuitive arguments about the good that big business does in America, I suspect readers skeptical of large private enterprise will walk away unconvinced. In the big picture, critics of big business are still likely to assume that some combination of more regulation, smaller businesses, and public ownership would be a superior alternative to the status quo.

In a huge ecosystem of large corporations, Cowen emphasizes that a fair assessment needs to look at the “net net” of big business’s total impact and not just the worst offenses. He acknowledges the salesmen swindling low-information customers, the dentists recommending more appointments than necessary, and the pharmaceutical companies striking shady deals with doctors to dish out addictive drugs. But his underlying thesis is that we need to weigh all of this against the good.

The Good

While admitting these egregious offenses, Cowen claims that the propensity of business to commit fraud is essentially just an extension of the propensity of people to commit fraud. He points to a survey showing that 53% of people admitted to lying in their online dating profile. One study estimated that we tell an average of nearly two lies per day, most often to the people we are closest to rather than to total strangers. Another showed 31% of people having completely fabricated information on their resumes and 76% having “embellished the truth.” Indeed, when we look at big business in the modern economy, we often evaluate it as it is and imagine the alternatives as we wish them to be. It’s worth considering the possibility that big business is no more dishonest than we are as individuals.

In fact, Cowen argues that big business is incentivized to be even more honest than individuals or small companies. Because they have an (inter)national brand to uphold, big businesses have more reason to avoid the PR disasters that come from mistreating customers in a world of viral social media. Further, there is evidence that big businesses treat their workers better than their mom-and-pop counterparts do.

The NFL can shamefully exclude Colin Kaepernick because of his politics, but often overlooked is the idea that the profit motive can be a positive force for social justice. Cowen points to our national reckoning with sexual assault to argue that private business can be a better force for good in these regards than the public alternative. Allegations against men in the entertainment industry were met with swift action – think of Kevin Spacey, Jeffrey Tambor, etc. – while a man with a long history of unambiguously immoral treatment of women sits in the Oval Office. Roy Moore only barely lost the Alabama Senate race. Market forces can be seen as a villainous pressure to cut corners and exploit people unfairly, but they can also be a force for social justice under some circumstances. As Sam Hammond argued in Liberal Currents, corporate capitalism and social justice are not always opposing forces.

Too many arguments in favor of scrapping the entire system assume that a radically redefined economic system and culture will mold to their ideal reality. But what happens when we put government in control of every industry and Donald Trump is the one running that government? Fox News is an easy target for the ills of profit-driven media, but would an entirely publicly-owned media landscape just mean Trump hires Roger Ailes to run PBS?

Cowen spends the majority of the book tackling the common criticisms of big business: CEO pay, the financial industry, big tech companies, and corporate influence over government. His responses need to be read through his “net net” framework, and they add counter-intuitive arguments to the conversation, even if they’re not always entirely convincing.

CEOs today work in a more demanding environment, he argues, needing to steer through a globalized economy full of public relations issues, foreign investment, and regulatory know-how. How important is leadership to a company’s performance? The top 4 percent of corporate performers are responsible for the entire increase in the U.S. stock market since 1926, and Cowen offers evidence that good management drives performance: Chinese firms could improve their productivity by 30 to 50 percent by bringing management quality up to the American standard, and Indian firms by 40 to 60 percent. One study estimates that a company’s leader accounts for 5 to 6 percent of the value of the company. Against this backdrop, Cowen believes higher pay is warranted by the greater demands.

An important stylized fact is that the main driver of inequality is not changing pay scales within firms, but changing pay scales between firms. In other words, superstar firms that are torching the competition with higher productivity pay all of their workers better, and Cowen believes the rise of superstar firms is thanks in large part to good CEOs.

The benefits of the financial industry are not always obvious to the typical citizen, but Cowen tries to paint a brighter picture. He points to the role of credit in supporting the country’s biggest projects and to the strong correlation between countries’ prosperity and the health of their financial sectors. American venture capital, he believes, is the envy of the world and funds some of our greatest success stories – without ever expecting a bailout. The American banking system is more fragmented than that of any other high-income country in the world, and the proliferation of smaller banks during the Great Depression shows that “breaking up the banks” is no guarantee against catastrophe.

Contemporary tech companies give us unparalleled power at our fingertips, often for free. The cost to our privacy has become the common public rallying cry, but Cowen still believes their value to each and every one of us far exceeds that cost. We’ve become so accustomed to free email, free mapping, one-day shipping, and reliable spreadsheets that it’s easy to focus only on what appears to be corrupting market power. But not long ago, companies like Kodak, Myspace, General Motors, IBM, AOL, and Blackberry also seemed too dominant. The image of too-powerful tech titans crowds out our appreciation for the value of these companies, in Cowen’s mind. The common criticism of brain-rot through the internet and smartphones is strikingly similar to the doomsday predictions of yesteryear about the opera, rock and roll, and the novel.

The election of Donald Trump shows the hold of big business on government is not nearly as strong as portrayed, Cowen believes. Business leaders most often state their priorities to be predictability, more open immigration, and free trade – the clear opposite of Trump’s policies. The $3 billion companies spend annually on lobbying is pennies compared to the $200 billion they spend on advertising. Farm subsidies – one of the most offensive instances of crony capitalism in the federal budget – account for only about $20 billion a year out of a $4.4 trillion budget.

In Search of a Better Alternative

But for all the good that businesses do, a skeptical outsider would rightly point out that these realities exist within the current system. What if we lie on our resumes because it’s a brutal rat race economy? Or lie on our dating profiles because the market economy conditions us to be self-interested and cut corners to get ahead? It’s true that Monsanto supplies the food that keeps me alive, tech giants allow me to communicate with my family, and big pharmaceutical companies produce drugs that fight infections. Every prosperous society has indeed depended on a well-oiled financial system. And the dignity that employers give us through work is indeed important. But why assume any of this is being done in the best possible way?

Feudal lords could be given credit for the food given to peasants or the dignity their work provided, tyrannical leaders for military protection, and the DMV for making sure our roads are safe. Skeptics of the market economy believe we could have a world that is more prosperous, more egalitarian, and more ethical under a different regime. Just as an anarcho-capitalist would wave away gratitude for roads or a public school education with “well, the private sector could do it better,” the critic of the status quo asserts a superior alternative to big business.

Incrementalists who criticize big business may just want more regulation or more support for small business, while radicals prefer more public ownership. I sense that many of Cowen’s observations on the good that big business provides will fall on deaf ears for skeptics whose prior belief is that we could have an even better regime.

Of course, Cowen is up against an insurmountable foe in many of these skeptical arguments. Critics of the status quo struggle to offer strong counterfactuals to prove there is a better system out there. Saying that “culture and the economy would shift under a different system to one where we’d all be moral, not run the rat race, cut corners, or tolerate pollution” is an argument that is tough to prove or disprove because it is so hypothetical.

In Cowen’s (wonderful) podcast, he always asks the guest about their “production function” – the habits and routines they rely on to ensure their highest productivity. In a recent Ezra Klein Show episode about workism, Ezra brings up how an inevitable part of capitalism is the encouragement to always maximize productivity…even treating something like meditation or wellness as a means to counteract the toxins of modern life. But it’s still within a framework of “optimizing” time. Can this cultural reliance on “productivity” actually make us miss the point, even when we appear to be cognizant of mindfulness? For an infovore like Cowen, the current culture and system give him every opportunity to learn and explore new things. But for the vast majority of us, are smartphones instead just giving us a bigger portfolio of addictive distractions from more important matters?

As a response to skeptics, Cowen points to data he believes reveals that – despite our self-reported disdain for tech and work – we love our smartphones and love working. He says the fact that Americans work longer hours now than they did in 1950 shows we must like our jobs better. But what if we are just motivated to “keep up with the Joneses” and none of the extra work is actually making us better off? Similarly, he argues that so few people actually leaving Facebook, despite all the public criticism, shows people like it a lot more than they let on. But the powerful network effects and addictive qualities of social media are not the easiest things to shake off. It seems a far jump to assume these facts reveal strong-willed, rational decision-making. It’s not encouraging that the people who designed the notification mechanisms for phone apps don’t let their own children use them.

So Why the Hate?

The last chapter of Big Business addresses a lingering question: If big business is so good, why does everyone seem to hate it? While the vast majority of the population loathes the post-Citizens United notion that “corporations are people,” Cowen believes we do indeed anthropomorphize corporations. Projecting human qualities onto the outside world is how we have long attempted to understand and relate to it. Throughout recorded history, civilizations have told stories of the weather and natural forces as gods with faces, arms, and legs. “When it comes to our cars, our ships, and our pets, we give them names, talk about their loyalty, and feel abandoned or let down if they disappoint us.”

It is this humanizing instinct that leaves us inevitably disappointed by corporations. We want them to be our fuzzy friends that take care of us, but in the end they are actually just … “faceless” corporations. The upshot is that we will never be grateful enough for what big businesses do for us. Cowen says hating corporations is like hating your parents – the people who give you everything but also enforce the rules. This might be true…but again, couldn’t oppressive feudal landlords fit the same description?


It’s important to view any analysis of big business in “net net” terms, focusing not only on the most outrageous failures but also on the tremendous good big business brings to our lives. On these points, Cowen does a service by providing under-appreciated defenses against the most common criticisms of big business. I agree with Cowen’s point of view and think big business deserves more appreciation. In the end, skeptics may be impossible to sway when they rely on non-falsifiable hypotheticals. But engaging more directly with their strongest arguments would likely leave a deeper impression on the critics of big business.

Finland had been experimenting with a scheme resembling a universal basic income since 2016 but recently axed the funding for it. The initial 800 euros a month in unconditional income had been pared down to 560, and the program eventually lost the political support it needed to keep going. UBI advocates were excited by the Finnish policy, in part because it would give researchers another solid dataset for analyzing the effects of unconditional income payments. While I too am disappointed the Finnish government has pulled the plug, I am slowly drifting into the “Jobs Guarantee” camp as time goes on. Giving cash to people has its benefits, but I don’t think it adequately addresses the pressures high-income countries face from increased automation and globalization.

A Universal Basic Income – UBI – is a sum of money given to every citizen in a polity regardless of income and with no conditions attached. Though the amount of money and all the nuances can differ, that’s the basic idea. Unlike food stamps, the money does not need to be spent on food. Unlike TANF, the money is given to everyone regardless of income or need. The idea has diverse bipartisan support in a time when finding bipartisan support for anything is hard to come by. Those on the political left tend to support UBI because it gives everyone a base level of income, liberates many from the drudgery of bad jobs, and can give individuals the opportunity to pursue creative work or take risks they’d otherwise shy away from when struggling to pay rent. Those on the political right tend to support UBI because many UBI proposals get rid of all the clunkiness of the welfare state and replace it with one check: forget the administrative costs and bureaucracy involved in running dozens and dozens of agencies with different goals; all of that is eliminated and replaced by a simple transfer mechanism.

I was initially drawn to the idea of UBI for the reasons beloved by both the political right and left. It promises a simple, clean solution that’s more empowering and less paternalistic than telling people how they need to spend their money. Most importantly, the success thus far of charities like GiveDirectly in lower-income countries, compared with old-fashioned targeted aid, confirms a basic tenet of UBI: give people money and they can be trusted to spend it well.

But high-income countries like the United States and the Scandinavian states are not like the beneficiaries of GiveDirectly. In many ways, high-income countries today are lands of plenty. We have no widespread shortage of basic material necessities like housing, infrastructure, food, or clean drinking water. One could argue the most significant economic pressures affecting high-income countries today are felt most strongly by those left behind by increasing automation and globalization. Liberalized trade and capital policies are a net positive for the world, I still believe, but the downsides for the “losers” of globalization were underestimated. As robots replace old middle-class jobs and rust belt work is sent overseas, many people are left with few work alternatives and can turn to opioids or scorched-earth politicians to soothe the pain.

The loss these communities feel is one of income, sure. But it’s also a loss of the myriad non-income benefits of employment: identity, community, purpose, and meaning. Middle Eastern Gulf countries, where citizens have significantly higher incomes because they are given a share of oil revenues, don’t experience life satisfaction levels as high as their incomes would suggest. In fact, some evidence suggests their happiness is more closely related to the income they have “earned” (in the traditional sense of the word) than to the income they simply have access to. To me, this shows how much the policy commentariat has neglected the importance of employment to life satisfaction, independent of income.

I’m not advocating the elimination of the welfare state, or arguing against some sort of UBI altogether. Instead, I think UBI fans need to realize it is not the panacea they make it out to be. Economists specifically are too stuck in models that treat utility as purely a function of income. Models of the labor market that imply people frictionlessly move from the coal mines to pink-collar healthcare jobs, or from manufacturing to computer programming, totally neglect these aspects. Labor market equilibrium is a beautiful idea on the blackboard, but people do not effortlessly move across the country or the world like electronic money. Among other issues, old-fashioned rust-belt American men will not suddenly become nurses because “that’s where the work is.” Their identity, sometimes for generations, is tied strongly to a certain type of employment, and coal miners will struggle with the identity change of becoming a male nurse. In the same way, the professional class in America will shun working at a hardware store or cleaning houses during a recession and instead prefer unemployment, largely because that work does not fit their identity. And despite the caricature of Welfare Queens or lazy people on the dole, the vast majority of people get purpose and meaning out of their job. Giving them a UBI check will help them pay the rent and put food on the table (nontrivial matters to take care of), but it won’t give them the community or purpose that showing up to work gives them.

A public Jobs Guarantee – the hot topic right now on Economics Twitter – has varied manifestations, but all of the proposals seemingly target these benefits of employment. I’m skeptical on many grounds, though I have hope that a good proposal will emerge. A JG can complement a UBI, so I don’t mean to dismiss the benefits of UBI entirely or frame this as one-or-the-other. Instead, I think UBI cheerleaders need to realize that giving money will be a significantly incomplete substitute for the purpose, community, and identity that a job gives.

The disturbing effect “Fox and Friends” has on our public discourse has brought into focus the power of corporate media in the era of Fake News. The President has repeatedly tweeted opinions immediately after they’ve been discussed on Fox and Friends. The tendency of Fox News in general to pander to its audience – seemingly spewing whatever nonsense its viewers want to hear – naturally makes one question whether profit-driven media is part of the problem these days, and whether public broadcasting could be a solution. But like many leftist dreams of government correcting the ills of the market, the idea that public broadcasting would necessarily be an improvement looks even less plausible in the age of Trump.

In the recent Bruenig-Caplan debate on socialism vs capitalism, my strongest takeaway was perhaps this: one of the most underrated arguments against socialism is that socialists argue for policy outcomes rather than for institutions that will lead to those desired outcomes while safeguarding against abuses of power. If the drive for profits is what keeps organizations like Fox News, Breitbart, and tabloid magazines spewing outright lies and propaganda instead of cold-hard facts, a public or non-profit alternative must be an improvement…right? Here’s the thing with any leftist idea that a government solution will be an improvement on the current market landscape: the theorizing is based on the idea that a benevolent bureaucrat is in power. It never seems to account for the probability (and now reality) that an unbelievably incompetent and shameless bigot like Donald Trump will be the one controlling the government. If our media industry consisted entirely of BBC- or PBS-type organizations, the officials our country elected would now be dictating what they broadcast as “news.” Imagine if Donald Trump and/or a Republican-controlled Congress decided who was heading the state-run media agencies. The Fox News of today would appear harmless compared to the filth and propaganda the government would broadcast with taxpayer dollars. A likely outcome would be a Roger Ailes type installed as Media Chairman; hopes for a technocratic appointment along the lines of the Federal Reserve Chair can’t be guaranteed.

In other times, the libertarian scaremongering about government-run media looked simply overdramatic. “If we have the government run TV stations and newspapers, we’ll turn out like North Korea or Saudi Arabia!” The BBC, for one, is a fine organization, and any plausible criticism of its bias or skew falls within a reasonable margin of error for how far a media organization can stray from the ideal. There’s no reason to think that just because the government takes over certain operations, we’ll end up with an authoritarian regime. But government-run institutions are accountable, directly or indirectly, to the officials we elect, and when those elected officials really suck, the mediocrity trickles downward. With Donald Trump in office and an acquiescent Congress behind him, that libertarian scaremongering is not so farfetched.

One could argue that a choice between government-run media and totally profit-driven media is a false dichotomy. Could an independent oversight board make sure public media companies don’t venture into lunacy? Well, who appoints those boards? Even the Congressional Budget Office is under threat from partisan hackery these days. I haven’t seen a plausible policy scheme that would convincingly keep a public media organization from becoming a wing of taxpayer-funded propaganda under the Trump administration. Are there other regulations, subsidies, or vouchers that could offer a better alternative? Maybe there’s one out there; I just haven’t heard of it.

The unfortunate irony of many leftists complaining about Trump’s abuse of Executive power is those same people’s silence during the Obama administration. It was totally fine when Obama did it because it aligned with their preferred policies. In an age of immeasurable and unbelievable outrage, it can be hard to keep a clear head and accurately critique government actions. But consider this whenever Trump does something you deem terribly awful: if what he is doing is within his legal scope as head of the Executive branch, should we reconsider how much power we give the Executive branch, or do you just not like how he’s using that power? Put another way, policy proposals are really easy to get enthusiastic about when the only reality you can envision is one where Your Person is in power. But the next time you consider government presence as an alternative to market forces, try to picture what a government run by The Other Team could do with that power.

It gets back to a theme I find particularly relevant in this day and age, one that applies to every public policy debate: when are individuals and institutions best held accountable through market forces, and when are they best held accountable through the democratic process? As toxic as Fox News is for our culture, I’d take our profit-driven landscape over a Trump-run state media any day.

I’ve been thinking lately about the power of language and the role of government- and society-enforced censorship. Many people – including yours truly – hold seemingly contradictory views on the power of words and liberal public policy. When is it permissible/optimal for the government and society to enforce norms on what is “ok” regarding language and rhetoric?

I’ll start by saying that the youthful me believed censorship to be almost always wrong. Books like Fahrenheit 451 and 1984 framed censorship as a tool for suppressing radical ideas, free thought, the questioning of authority, and artistic works that make people uncomfortable. Music and video games, of course, were common targets for censorship.

Here’s a clip of Frank Zappa testifying in front of Congress, claiming that words in music are only words. Essentially, a “sticks and stones can break my bones but words can never hurt me” argument.

I don’t think Zappa was telling 100% of the story. Words of course can hurt. Language is incredibly powerful. Deirdre McCloskey believes a change in rhetoric was a huge impetus for the Industrial Revolution. Using the n-word or any other racial slur should not be tolerated. We should be conscientious about using inaccurate words like “Indians” to describe Native Americans, and it goes without saying that we shouldn’t use offensive terms like “savages.” Pronoun usage is an important consideration for people who identify as trans or non-binary.

Much of “censorship” comes not from a Parental Advisory sticker on an album cover as much as from the social norm of people telling their peers “yo, that’s not ok” when they use language that is not deemed permissible. So when is censorship, or more broadly “socially enforced norms on language,” acceptable? The liberal tradition is based on the idea of people being able to live together even if they don’t live the same lives; it’s about accepting differences in preferences, tastes, and values.

[I think the question of when to give certain views a platform in the name of “diversity of thought” is a slightly different conversation. It has been a hot topic recently, with Kevin Williamson fired from The Atlantic for extreme anti-abortion comments and climate-denier Bret Stephens hired for the NYT opinion pages. Giving a platform to flat-earthers and holocaust-deniers to “hear both sides” is not the ideal we’re striving for, but where this boundary lies I am not sure. Again, though, I think this is a different conversation.]

The standard liberal recipe for free speech is that offending someone is fine, but you can’t threaten or slander them. What happens if the standard for “threat” is as low as writing “Trump” on a sidewalk at Emory University? Is using an incorrect gender pronoun really considered a threat or slander? Is showing a gay couple on TV considered a threat? Consider your opinion on these three matters and what your reasoning is. Is your reasoning consistent about when it is “ok” to do something even if you disagree with it? Remember that the formal legal system is often uninvolved in these judgments about which behavior to tolerate.

This debate is interesting to me because, as with many topics, we have our own intuition about what is right, use boilerplate rhetoric to defend our position, and yet never fully consider the roots of our view. I’m not going to shame someone for being in favor of gun rights, but I probably will shame them for denying slavery existed. My shaming is a form of censorship, even if it’s not government-imposed. In most situations, people are in favor, to some extent, of disapproving of certain beliefs. We all recognize the power of language.

The point I want to get across is that if one holds the view that racial slurs are harmful to our social fabric, one implicitly recognizes the power of language and expression in certain contexts and needs to acknowledge that art and music can be powerful too. To this extent, I naturally tend towards the position that people’s views on censorship or political correctness are very likely to fall in line with their own preferences and beliefs rather than with a well-grounded philosophy on when or when not to censor. I will call out use of the n-word but not push for censorship of music. Why? Well, it probably has a lot to do with the fact that I don’t like racism but I really like the kind of music that is usually the target of censorship. What needs to be fully recognized is that my opposition to music censorship cannot rest on the claim that lyrics are harmless. I could argue that I don’t trust the government to make that judgment for us, but I don’t think I can use the Zappa defense that the words are meaningless.
