Why The Minimum Wage Creates Jobs

When Seattle became the first major city in the country to enact a $15/hour minimum wage back in 2014, mainstream economists and the business community predicted that it would end up hurting the very low-wage workers it was meant to help. The cost of living would increase dramatically as businesses raised prices to pass on their higher labor costs. Jobs and working hours in the service industry would be cut, leaving thousands of workers unemployed.

Five years have passed since then, and we now have a lot of data on the impact the new minimum wage has had on Seattle workers. The evidence is clear: the wage hike has overwhelmingly benefited workers in Seattle and the city’s economy as a whole. Employment in the food service sector has increased steadily since 2010, with no discernible slowdown after the minimum wage increase, even though restaurants tend to have some of the highest labor costs of any industry.

Seattle employment

The wage hike seems to have had little to no impact on the cost of living in Seattle, with consumer prices rising an average of 2.3% from 2014 to 2017, compared to 1.9% from 2011 to 2014. This could easily be statistical noise, but even if it isn’t, low-wage workers’ roughly 57% wage increase (from $9.57 an hour before the increase to $15 today) more than makes up for the rising cost of living.
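To make that comparison concrete, here is a rough back-of-the-envelope check in Python. It assumes the 2.3% figure above is an annual average and takes this paragraph’s wage numbers at face value; it is an illustration, not an official statistic.

```python
# Rough real-wage check for Seattle's minimum wage workers.
# Assumptions: the 2.3% inflation figure is an annual average, and the
# $9.57 -> $15.00 hourly wage change quoted above is taken as-is.

old_wage, new_wage = 9.57, 15.00
annual_inflation = 0.023
years = 3  # 2014 to 2017

nominal_gain = new_wage / old_wage - 1                   # about 57%
price_level_rise = (1 + annual_inflation) ** years - 1   # about 7%
real_gain = (1 + nominal_gain) / (1 + price_level_rise) - 1

print(f"Nominal wage gain: {nominal_gain:.1%}")
print(f"Price level rise:  {price_level_rise:.1%}")
print(f"Real wage gain:    {real_gain:.1%}")
```

Even under these rough assumptions, the inflation-adjusted wage gain comes out to roughly 46%, which is what it means to say the raise more than makes up for the higher cost of living.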

Seattle isn’t the only city that has increased its minimum wage in recent years, and the data from those municipalities tell the same story. But have you ever wondered why minimum wages don’t lead to unemployment and major price increases? This is the question I’d like to answer.

The argument against the minimum wage

The basic argument that mainstream economists make against the minimum wage is quite simple; it’s based on a naive supply-and-demand model of the labor market. These economists argue that when the price of any commodity goes up, the demand for that commodity will go down. Since labor is a commodity under capitalism, it is assumed that firms will demand less labor if the price of labor (the wage) is propped up to an artificially high level. You may remember these supply and demand diagrams if you ever took an introductory economics class in high school:

Neoclassical min wage

The market wage is plotted on the vertical axis, and the number of jobs offered is plotted on the horizontal axis. Demand for labor is assumed to be “downward-sloping” because less labor is demanded as the wage increases. It’s argued that a minimum wage set above the market-clearing level reduces the quantity of labor demanded without reducing the quantity supplied, and the resulting gap is unemployment.
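For reference, here is a minimal sketch of that textbook model (my own notation, not taken from any particular textbook). Write $L^{D}(w)$ for the quantity of labor demanded at wage $w$ (falling in $w$) and $L^{S}(w)$ for the quantity supplied (rising in $w$):

$$
L^{D}(w^{*}) = L^{S}(w^{*}), \qquad
w_{\min} > w^{*} \;\Rightarrow\; \underbrace{L^{S}(w_{\min}) - L^{D}(w_{\min})}_{\text{predicted unemployment}} > 0 .
$$

The Keynesian critique below attacks the hidden assumption that $L^{D}$ can be treated as independent of workers’ spending power.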

The Keynesian critique

keynes

The flaw in this point of view was exposed by the British economist John Maynard Keynes in the 1930s. The thirties were a time of mass unemployment, and mainstream economists were using the very same supply-and-demand model of the labor market to argue that mass unemployment was the result of wages being too high. Workers, on this view, were simply too prideful to accept lower wages in the midst of the Great Depression, and they were getting in the way of the market’s automatic adjustment back to full employment.

But Keynes pointed out that there is a kind of feedback loop between workers’ wages and employment. Businesses will only employ more workers if they need to boost production in response to increasing demand. But most demand comes from workers’ wages— if wages fall, demand will also fall, causing businesses to lay off even more workers, in a downward spiral. On the other hand, if wages rise, demand will rise, causing businesses to hire more workers. This is a fundamental instability in capitalism. Once the economy starts going downhill, market forces will tend to make the downturn even worse. Government intervention is needed to prop up demand during recessions and get the economy out of slumps.
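This feedback can be captured with a deliberately simple toy model (my own illustrative sketch, not Keynes’s formulation, and it ignores the cost side entirely). Suppose demand equals the wage bill plus some autonomous spending $A$, and firms hire in proportion to demand:

$$
D_t = w N_t + A, \qquad N_{t+1} = a D_t
\;\;\Longrightarrow\;\; N^{*} = \frac{aA}{1 - aw} \quad (0 < aw < 1).
$$

In this sketch a higher wage $w$ raises, rather than lowers, steady-state employment $N^{*}$, which is the intuition behind the next section.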

Effects of the minimum wage

If we apply this Keynesian reasoning to the minimum wage, we find that a minimum wage increase should increase consumer demand, and thereby create jobs rather than destroy them. Of course, there are limits to this. If the minimum wage were increased to some very high level, say $100/hour, prices would have to increase dramatically to keep up with costs, and the chaos and uncertainty involved would likely cause a recession.

Additionally, if a state with a lot of manufacturing jobs tries to boost its wages much higher than surrounding states, companies will likely start to move those jobs to lower-wage states. Service jobs are very unlikely to leave an area in response to wage increases, because they basically have to locate themselves wherever the customers are. The same is not true of manufacturing or tech companies, which is why it’s important for the federal government to implement strong labor protections and to pursue a trade policy that protects American workers from the global “race to the bottom.”

Automation McDs

It is sometimes argued that higher minimum wages encourage the automation of low-wage jobs, because they make hiring human workers more expensive relative to robots. Over the long run, there is actually some truth to this, but it is a good thing. Here’s why. First of all, technological progress will eventually lead to the automation of most low-wage jobs anyway, so higher minimum wages simply speed up an inevitable process. Furthermore, in the context of the high consumer demand created by a minimum wage hike, workers laid off by automation are likely to find other, better jobs relatively quickly. Besides, the Left should want to speed up the automation of low-wage jobs: these are mundane, boring jobs that most people don’t want. The key thing is to use government policy to ensure that those who lose their jobs to automation are able to find better, higher-paying, more fulfilling jobs quickly. Free college and job training programs, along with aggressive stimulus programs to keep the economy running at full employment, can ensure that all workers benefit from automation.

American workers deserve a raise

To sum up, minimum wage increases have four positive effects:

  1. Low-wage workers’ incomes increase, lifting many households out of poverty;
  2. New jobs are created, due to increasing consumer demand;
  3. Pressures to automate increase, eliminating the most menial jobs over time;
  4. Inequality is reduced, as income is redistributed from profits to wages.

LPR
Labor force participation rate, 2008-2018

American workers could certainly use a substantial minimum wage increase. Income inequality is high, and the labor force participation rate, which measures the proportion of working-age adults who are either working or looking for work, has never recovered from the Great Recession. This means that there are millions of Americans out there who would like to work, but have given up the job search. Increasing the federal minimum wage to $15/hour would reduce income inequality, and would help to employ discouraged workers by stimulating the creation of new living wage jobs. Pegging the minimum wage to the cost of living and productivity gains would also help to ensure that workers share in economic growth going forward.

The next time an Econ 101 student tries to tell you that the minimum wage kills jobs, you can tell them that they simply don’t understand how the economy works.

Elizabeth Warren Doesn’t Deserve Your Vote

Today, Elizabeth Warren announced that she will be forming an exploratory committee to consider a presidential run in 2020. That means she’s almost certainly running for president.

Certain progressive groups are celebrating Warren’s announcement, hailing her as a champion of “bold, inclusive populist ideas.” Even many committed Bernie Sanders supporters view the announcement as a positive development, since it guarantees that at least one progressive candidate will be in the race.

Warren’s Problematic Past

It’s true that Elizabeth Warren has worked hard in the last few years to cultivate a reputation as a strong progressive leader, in the same vein as Senator Bernie Sanders. But if we look underneath her populist façade, we will find that her basic political philosophy is profoundly neoliberal and committed to free market capitalism.

The most striking evidence of this is the fact that Warren spent much of her adult life as a member of the Republican Party. When asked about this in 2011, she explained:

“I was a Republican because I thought that those were the people who best supported markets. I think that is not true anymore,” Warren said. “I was a Republican at a time when I felt like there was a problem that the markets were under a lot more strain. It worried me whether or not the government played too activist a role.”

In these telling remarks, Warren makes it clear that her most fundamental political commitment is the protection of free markets and private property. In fact, her reasoning for becoming a Democrat in 1995 was that Reagan’s neoliberal agenda had actually undermined markets, rather than protecting them. She apparently failed to realize the tremendous harm that Reagan’s policies were inflicting on workers, the environment, and the poor while Reagan was in office. Indeed, when she was asked whether she voted for Ronald Reagan in 1980 and 1984, she declined to comment.

Warren’s deep, enduring commitment to capitalism is the common thread that connects her early days as a Reaganite Republican with her liberal progressivism today. In a recent interview, she reaffirmed her support for free markets, declaring that she is “capitalist to the bone.” This is a fundamentally right-wing and neoliberal perspective, because it prioritizes markets and private property over human needs. No progressive, let alone democratic socialist, should support a candidate with views like this.

Elizabeth Warren’s long history of conservatism stands in stark contrast to Bernie Sanders’s record. Senator Sanders has been an outspoken socialist for over 50 years. As a student at the University of Chicago, Sanders was a member of the youth wing of the Socialist Party of America, and was deeply involved in civil rights activism throughout the 1960s. In the 1970s, he ran for office on the Liberty Union Party ballot line multiple times, championing socialist and anti-war causes. In 1981 he was elected mayor of Burlington, Vermont as an open socialist, and spent the rest of the decade battling developer interests in the city and building affordable housing. As a US Representative in the 1990s and early 2000s, he consistently opposed Bill Clinton’s right-wing policies, and was an early opponent of the Iraq War and the Patriot Act.

An Unreliable Progressive

Since Warren joined the Democratic Party in the 1990s, she has been a very unreliable supporter of progressive causes. During her 2012 campaign for the US Senate, she refused to endorse Medicare for All— a shortcoming for which her primary challenger, Marisa DeFranco, criticized her on numerous occasions. When she was asked about her views on Medicare for All in June 2012, she explained:

“I think right now what we have to do — I’m serious about this — I think you’ve got to stay with what’s possible. And I think what we’re doing – and look at the dust-up around this – we really need to consolidate our gains around what we’ve got on the table [the Affordable Care Act].”

This quote is very telling about her overall political philosophy: Warren is an unwavering pragmatist, focused on incremental improvements to existing institutions, rather than radical change. In this respect, Elizabeth Warren is much closer to Hillary Clinton than to Bernie Sanders.

It wasn’t until 2017, after Bernie’s presidential campaign popularized Medicare for All, that Warren publicly endorsed the idea. Even today, it’s not clear how committed she is to the principle of publicly provisioned healthcare for all Americans. She has repeatedly proposed halfway measures that would actually expand the subsidized private health insurance market that Obamacare created. Her commitment to pragmatism means that a Warren administration would, at most, carry out a modest expansion of the Affordable Care Act’s programs. Like Obama, Warren would likely weaken her bargaining position from the outset by conceding the “political unacceptability” of Medicare for All, and instead advocate for more subsidies and tougher regulations on private insurers.

Tellingly, Senator Warren refused to endorse Bernie Sanders during the 2016 Democratic primaries, instead assuming a position of “neutrality.” Presumably Warren was concerned about maintaining her strong relationship with the establishment wing of the party. But the 2016 primaries were not a contest that any principled progressive could sit out. It was the most high-profile struggle yet between the two major wings of the Democratic Party: the neoliberal establishment wing, and the insurgent, social democratic wing. If Warren had endorsed Sanders, it likely would have tipped the scales in his favor during the Massachusetts primary, which he ended up losing by just 1.4 points. Warren’s cowardice during the historic 2016 primary race is simply inexcusable.

Warren Would Lose to Trump

Furthermore, we have good reason to believe that if Elizabeth Warren were to win the Democratic nomination for president in 2020, she would likely lose the general election to Donald Trump. At the very least, she would be much less competitive against Trump than other potential Democratic nominees, especially Senator Bernie Sanders.

She underperformed in her home state

One sign of Warren’s poor electability is her weak performance in her re-election campaign for the US Senate. In November, Elizabeth Warren won re-election by 24 points. That may sound like a lot, until you realize that Hillary Clinton managed to win Massachusetts by 27 points in 2016, a much less favorable year for Democrats overall. In fact, Harry Enten of FiveThirtyEight has shown that Warren was one of the worst-performing Democratic Senate candidates of 2018. Once the demographics and overall partisanship of Massachusetts are taken into account, her vote share was 7 points lower than what would be expected from a generic Democratic candidate.

One potential reason for Warren’s weak performance in her re-election campaign is the massive public relations blunder that she made in October, when she released the results of a DNA test that supposedly proved that she has some Native American heritage. Native leaders quickly denounced this PR stunt, pointing out that DNA is irrelevant to the legal and cultural criteria for Native American heritage that are accepted by all Native tribes in the United States.

She’s unpopular

Warren’s cynical ploy to gain media attention and recognition for her alleged Native identity clearly backfired on her. A recent Politico/Morning Consult poll found that just 30% of voters view Warren favorably, while 38% view her unfavorably.

Warren favorability

It seems that Americans like Warren less, the more they get to know her. Back in August 2017, a Gallup poll found that 34% of voters viewed Warren favorably, compared to 31% viewing her unfavorably. In other words, Elizabeth Warren’s net favorability rating has gone down by 11 points in just over a year. This is a terrible sign for Warren’s general election prospects, if she were to win the nomination in 2020.

Compare these dismal poll numbers with those of Senator Bernie Sanders. Gallup has found that 53% of voters view him favorably, compared to 38% viewing him unfavorably. These numbers have stayed quite stable since September of 2016:

Sanders favorability

Senator Sanders’s 15-point net favorability rating should speak for itself. Sanders enjoys a much broader appeal than Warren does, especially among those independent voters that we need to win over in order to have any chance of defeating Trump in 2020. Among independents, Sanders has a 54% favorability rating, compared to a dismal 22% for Warren (see here, pg. 351). Given these numbers, nominating Elizabeth Warren would be suicidal.

She’s a spoiler candidate

In short, progressives and socialists should not be happy about Elizabeth Warren’s candidacy. Warren has been sliding in recent polling among likely Democratic primary voters, and she’s not likely to get very far, but it’s still important that she drop out of the race as quickly as possible. A prolonged Warren primary campaign would pull valuable funds, volunteers, and votes from Bernie Sanders, effectively splitting the progressive wing of the party and benefiting the more explicitly establishment candidates, like Joe Biden.

The 2020 elections offer a historic opportunity to make an avowed democratic socialist president of the United States. Urgent social democratic programs, like Medicare for All, free college tuition, and a Green New Deal would have a real chance of being enacted under a Sanders administration. By entering the Democratic primary race, Elizabeth Warren is getting in the way of all of this. She simply does not deserve your vote.

Government Debt is Actually Good

There’s a common narrative about the national debt that is repeated over and over in the media. It goes like this:

Over the last 50 years, the national debt has increased dramatically. This is a very bad thing, because the government will need to repay this debt someday. We are passing on a huge burden to our children and grandchildren, who will have to live with greatly restricted government spending while we pay down our $21 trillion national debt. We need to start cutting public spending and raising taxes now before something very bad happens. We wouldn’t want the United States to become the next Greece, now would we?

Elected officials and pundits across the mainstream political spectrum buy into this narrative to one degree or another. It’s a very powerful tool in the hands of the Right, because they can use it to argue for massive cuts to public services. The good news, however, is that this narrative is completely wrong in every way.

The mainstream narrative is based on assumptions that can be traced back to the gold standard system, which the United States abandoned in 1971. Under the gold standard, the federal government voluntarily agreed to restrict its own spending by promising to exchange US dollars for gold at a fixed price. Since the supply of gold was finite, and much smaller than the total value of all US dollars outstanding, the government had to avoid creating too much money. There was always the risk that investors might try to exchange too many dollars for gold all at once, thereby depleting the stock of gold held by the government. The government was forced to restrict its own spending well before that point in order to avoid a crisis.

The government can never run out of money

Today, however, the American government no longer promises to exchange US dollars for gold. This is a dramatic change, which greatly increases the policy space available to us. It means that, because the American government is the monopoly issuer of the US dollar, it can never run out of money or become insolvent. The Treasury spends by electronically crediting bank accounts, and taxes by debiting them. There is nothing to “run out of.” Unlike the governments of many European countries, which have agreed to use the euro as a common currency, the American government has complete control over the supply of dollars and can therefore deficit spend without limit.

This may seem like an extreme claim, but you don’t have to take it from me. Listen to former Federal Reserve chairman Alan Greenspan, a Reagan appointee, answer a question about the solvency of Social Security:

As Greenspan said, government spending is only limited by the real resources that are available. If the government spends too much, it could put a strain on the productive capacity of the economy, thereby driving up prices and creating inflation. But as long as the workers, the raw materials, and the tools are available, the government can “afford” anything. And even if the economy is at full capacity at the moment, the government may decide to use taxes or inflation to bring resources into public use that are currently being used by the private sector.

In fact, while taxes aren’t strictly needed to fund spending, they can be used to prevent inflation. If the government simply spent $4 trillion into the economy each year without taxing any of that money back out, the likely result would be a high rate of inflation. In other words, while the government doesn’t need your money in order to spend, it needs you not to have it, in order to prevent excessive inflation.

Public debt is private savings

But if the government doesn’t need to “get” money from the private sector before it can spend, this raises the question: why does the government bother with issuing debt at all?

Treasury bond
When the government deficit spends, it creates new Treasuries out of nothing, auctions them off, and spends the proceeds back into the economy. The new Treasuries add to private savings.

The reality is that the practice of selling government debt in the form of Treasury bonds is largely a relic of the gold standard era, when the government offered Treasuries to investors as an alternative to exchanging dollars for gold. This is a public policy decision, which Congress could change at any time. Instead of issuing Treasuries, the government could directly create new bank reserves to finance its deficits. This would have the benefit of ending the $400 billion in interest payments on Treasury securities each year, which flow disproportionately to investors with high incomes. On the other hand, there are some good reasons for the government to continue issuing these interest-bearing bonds. Treasury securities are a virtually risk-free asset that the whole world relies on to hedge against uncertainty in the market. Without Treasuries, capitalists might be tempted to invest their money in riskier assets with higher returns, which could increase financial volatility.

Those who ring the alarm about the growing national debt tend to overlook the fact that any debt, whether public or private, is necessarily someone else’s savings. If you owe your bank $1000, your debt to the bank is an asset from the bank’s point of view. Similarly, all $21 trillion of the federal government’s debt counts as savings for someone in the private sector. In fact, because the government will almost certainly never pay off its debts, it’s quite misleading to call government liabilities debts at all. It would be just as correct, and arguably more appropriate, to call the national debt “net private savings.” The national debt is the sum total of all the dollars that the US government has spent into the economy without taxing them back. Government deficits increase private savings, while government surpluses reduce them.
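Stated as an accounting identity (standard notation, ignoring interest and valuation details), the debt outstanding is just the running total of past deficits:

$$
\text{Debt}_{T} \;=\; \sum_{t \le T} \left( G_{t} - T_{t} \right),
$$

which is the same stock of financial assets the paragraph above calls “net private savings.”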

Public deficits are private sector surpluses

Deficits Since 1968

Furthermore, history shows that expanding private savings is good public policy. The US government has run a budget deficit for all but four years since 1969. Those four years of surplus came during Bill Clinton’s presidency, and they were immediately followed by a recession. This is not a coincidence. The private sector has a strong desire to save in the aggregate, in order to hedge against uncertainty and plan for the future. But if the private sector as a whole is saving, spending less than its income, then some other sector must be spending more than its income. This is usually the government (although a trade surplus, in which the foreign sector spends more than its income here, can also make up the difference).
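Written out, the identity behind this reasoning is the standard sectoral-balances equation from national accounting, with $S$ private saving, $I$ private investment, $T$ taxes, $G$ government spending, $X$ exports, and $M$ imports:

$$
(S - I) \;+\; (T - G) \;+\; (M - X) \;=\; 0 ,
$$

so if the private sector is net saving ($S > I$), then the government must run a deficit ($G > T$), the country must run a trade surplus ($X > M$), or some combination of the two.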

Sectoral Balances Since 1990
The financial balances of the public sector, the private sector, and the foreign sector must sum to zero.

Because of the private sector’s desire to save, there will always be demand for new Treasury securities. Even in the event that there is a shortage of demand for Treasuries, the central bank can buy them as a last resort, as it did during the 2008 financial crisis. This means that the government can run a deficit indefinitely. When the public sector tries to run a surplus, this usually forces the private sector into deficit, as it did during the Clinton administration. Neoliberal economists have it exactly backwards: public sector deficits are sustainable, private sector deficits are not.

The national debt will never be repaid

Now that we’ve established that there is always demand for new Treasury bonds, it should be clear why the idea of repaying the national debt is nonsensical. Barring some massive national catastrophe, bondholders will never try to “call in” their Treasuries en masse. And even if this did happen, the central bank could simply buy up the bonds as needed. United States Treasury securities are literally the most trusted financial asset in the world. If investors start to seriously question the full faith and credit of the US government, the national debt will be the least of our problems.

Furthermore, any attempt to repay the national debt would wreak havoc on the economy well before we could get anywhere near full repayment. The government would have to commit itself to unprecedented budget surpluses, on the order of $1 trillion a year, for decades. Just as budget deficits are a stimulus to the economy, budget surpluses are a contractionary force. If the United States made a serious attempt to repay its debt, it would experience a severe recession within the first few years. At that point, the political pressure to resume deficit spending and stimulate the economy would be intense and irresistible. The national debt cannot and will not be repaid.
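The “decades” figure follows from simple division, taking the $21 trillion debt figure quoted earlier at face value:

$$
\frac{\$21\ \text{trillion}}{\$1\ \text{trillion per year}} \;\approx\; 21\ \text{years of uninterrupted surpluses},
$$

and that is before accounting for the recessions such surpluses would trigger along the way.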

Even if it could be achieved, paying back the national debt would mean destroying the entirety of the net savings that the private sector has accumulated since 1836 (the last time the US government was debt-free). Owners of US Treasury securities don’t even want the US to repay all of its debts, since they would have to give up their risk-free, interest-bearing assets! The entire global financial system depends on Treasury securities. In a world where the United States had eliminated its debts, we could expect financial markets to be substantially more volatile than they are today. The idea that the national debt must be, should be, or could be repaid is totally absurd.

We need more deficit spending, not less

We’ve now established that governments with their own sovereign currencies should run budget deficits almost all of the time. The question is not whether to run a deficit, but how big the deficit should be. This depends largely on the amount of unused capacity in the economy, and particularly the unemployment rate. Maintaining full employment should be the primary goal of any government’s fiscal policy, because unemployment causes suffering for those without jobs, and it makes us all poorer by failing to fully utilize the productive capacity of the economy.

LPR
Labor force participation rate in the US, 2008-2018 (source)

While the unemployment rate in the US has been steadily falling since the 2008 financial crisis (it now stands at 3.7%), we know that there are still millions of Americans who would like a job but can’t find one. The labor force participation rate, which measures the proportion of working-age adults who are employed or actively looking for work, fell from 66% in 2008 to 63% in 2014, and has stayed constant since then. This means that, while the official unemployment rate is down, this is due in large part to unemployed people giving up on the job search and dropping out of the labor force. This is a human tragedy, which could be ended tomorrow with substantially more deficit spending targeted at creating new good-paying jobs. This could include a “Green New Deal” to rebuild our crumbling infrastructure and move toward renewable energy, or a federal job guarantee program. Despite what neoliberal economists may say, the federal budget deficit is actually too small, not too large. We really can afford nice things.

Why the Soviet Union Failed

For over a century, socialists all over the world have been haunted by the legacy of the Russian Revolution, and the Communist state that it created. The Soviet project began with noble intentions: it aimed to create an egalitarian socialist republic of workers and peasants, where exploitation and oppression would end once and for all. With hindsight, however, we can say definitively that this revolution utterly failed to achieve its purpose.

The Russian Revolution was a failure in two respects. Firstly, it categorically failed to create the kind of free and equal society it claimed to be fighting for. Instead of ending oppression, the Soviet Union pioneered an entirely new form of oppression, one more totalizing than any that had come before it. In doing so, it did irreparable damage to the ideas of “socialism” and “communism” that animated the imaginations of so many reformers, idealists, and revolutionaries at the beginning of the 20th century.

Boris Clinton

The second and final failure of the Soviet project was its inability to overcome its shortcomings and transform itself into a freer, more open society. The democratic and market reforms of Soviet leader Mikhail Gorbachev in the late 1980s quickly led to the total unraveling of Communism, first in the Soviet satellite states of Eastern Europe, and then in Russia itself. The radical pro-capitalist reformer Boris Yeltsin was elected Russian president in 1991, and he implemented a program of neoliberal shock therapy, rapidly privatizing public enterprises and dismantling the Soviet welfare state. The Soviet Union officially dissolved itself later that year, signaling the final restoration of capitalism in Russia and Eastern Europe.

It’s very important for democratic socialists in the 21st century to understand the underlying causes of the failure of Communism, so that we can be sure to not make the same mistakes in the decades ahead. The Soviet project did not fail simply because of the incompetence of this or that Soviet leader. Rather, the Soviet Union’s failure can be traced back to the Russian Revolution itself.

The revolution betrayed

The leaders of the Russian Revolution, the Bolsheviks, were profoundly committed to the cause of socialism. In this respect they were arguably no different from the leaders of the more moderate socialist parties, like the German Social Democratic Party. The thing that distinguished the Bolsheviks from most other socialist parties in 1917 was their deep conviction that global socialist revolution was right around the corner, rather than a more distant prospect, and their willingness to do anything to achieve their aims.

lenin

Out of all the Bolshevik leaders, none was more instrumental in securing the Bolshevik seizure of power than Vladimir Lenin. As an orthodox Marxist, Lenin understood that Russia was not economically, culturally, or institutionally ready for socialism. Nevertheless, he was deeply convinced that it was necessary for socialists in Russia to seize state power, and hold onto it at all costs, in order to inspire the working class in industrialized Germany to start a socialist revolution there.

Unfortunately for the Bolsheviks, the German revolution never came. Isolated from the outside world, the new Soviet state instituted increasingly brutal and repressive measures in order to cling to power. It banned all opposition parties one by one, and eliminated every independent democratic institution in the country. The Bolshevik seizure of power also led to a convulsive and bloody civil war that left some 10 million dead. By the end of the war, the Bolsheviks presided over a country devastated by war and famine; no one was left who could offer serious opposition to the new Soviet state.

The Soviet economic model

The newly established Soviet Union quickly consolidated itself into a totalitarian dictatorship under Joseph Stalin. The USSR was a one-party state, where any form of dissent or protest was quickly crushed by the secret police. The Communist Party would tolerate no internal opposition, and it brought the entire economy under direct state control. This highly centralized form of state ownership and planning led to severe economic problems throughout the Soviet Union’s existence: chronic shortages of commodities, bureaucratic inefficiencies, and a lopsided model of development that prioritized heavy industry and armaments over consumption goods.

But despite these inefficiencies, Soviet Communism was a coherent, viable social system whose components reproduced and reinforced one another over time. And once the Soviet Union had established itself as a rising great power and a viable alternative to capitalism, movements for social justice and national liberation consciously modeled themselves off of the Soviet example. In this way, Communism became a global phenomenon. Throughout the 20th century, wherever a Communist Party took power, it tried to establish the same basic kind of social system that existed in the Soviet Union.

János Kornai’s analysis of Communism

Kornai

One of the best and most well-known analyses of the Communist system was formulated by the Hungarian economist János Kornai in the early 1990s. Kornai started his intellectual career as a reformer, making policy proposals that he believed could make the Communist system more efficient and dynamic. But over time, he became more and more convinced that Communism could not be reformed. As the Soviet puppet states of Eastern Europe were collapsing, Kornai wrote his most famous book, The Socialist System: The Political Economy of Communism.

Kornai breaks down the Communist system into five “blocks” of institutional characteristics, ordered from the most to the least fundamental. He argued that the undivided power of the Communist party was the most essential feature of the Communist system, from which all else followed.

Base and superstructure
The theoretical model of society used by orthodox Marxists

For many socialists, this is a radical notion. It is at odds with the traditional Marxist view that economic relationships form the “base” of society, which shapes the political “superstructure.” But the history of Communist regimes should make it clear that the seizure of state power by the Communist party precedes the nationalization of the major industries. The Communist party despises private ownership and sees it as a threat to its power. Because the Party enjoys a monopoly on the use of violence in its territory, it has all the power it needs to shape property relations to suit its interests. As Kornai writes:

“It is not the property form state ownership that erects the political structure of classical socialism over itself. Quite the reverse: the given political structure brings about the property forms it deems desirable.”

This is an important point, which we will revisit later. The “original sin” of Communism was not the nationalization of the means of production per se, but rather the departure from democracy.

Kornai Coherence
Kornai’s model of Communism

The shortage economy

Since virtually the entire economy is in state hands, the Communist Party must centrally determine prices and production targets for every commodity and service in the country. This means that prices are irrational, and have little to do with the cost of production or the scarcity of commodities. It also means that the managers of state-owned enterprises are remunerated based on how effectively they carry out the directives of the plan, rather than how well they serve consumers. They have no incentive to improve product quality or to produce above production targets in order to meet excess demand, since consumers have no choice but to buy products from the state.

breadline
Soviet citizens queuing up for limited food rations in 1990

This lack of incentive to “chase after” consumer demand creates a seller’s market, as opposed to the buyer’s market that usually prevails in capitalist economies. In a seller’s market, the producer does not need the consumer, but the consumer does need the producer. Because the state decides what is produced and in what quantities, consumers are systematically frustrated by chronic shortages of the commodities they desire. They are coerced into making forced substitutions between consumption goods, substituting those goods that are available for the ones they actually want.

Another important feature of Communist economies is the soft budget constraint: state enterprises are not constrained by the threat of going out of business if they don’t balance their books. The state will almost always bail out loss-making firms, since the bankruptcy of an enterprise would impose substantial costs on the state, and would reflect poorly on the higher-level bureaucrats who allowed the firm to fail. The soft budget constraint contributes to the pervasive inefficiencies and waste of resources that the Soviet Union was famous for.

Was there anything good about Communism?

Many socialists, while criticizing the totalitarian and anti-democratic nature of the Soviet Union, like to point out the positive aspects of Communism. To be sure, there were a few important achievements. Communist countries usually had very generous welfare provisions, guaranteeing healthcare, education, housing, and old-age pensions to all citizens. Most Communist states completely abolished unemployment, one of the persistent evils of capitalist society. Finally, many Communist states were able to achieve very high levels of economic growth.

But of these three achievements, only the abolition of unemployment can really be said to be something unique to Communism. Capitalist economies need unemployment to discipline workers and thereby prevent spiraling inflation. Under Communism, unemployment is simply seen as wasteful. The state does not need unemployment to discipline labor, since it is able to use overt violence and coercion to accomplish the same goal. Independent trade unions are almost always banned, and strikes are brutally suppressed. The abolition of unemployment under Communism certainly comes at a steep price for workers. And as far as welfare provisions go, many capitalist countries have built welfare states that rival or even surpass those of Communist regimes. It certainly isn’t necessary to bring the entire economy into state control before everyone can be given access to healthcare and education.

Command economies aren’t great for growth

The high levels of economic growth seen in Communist countries were due to the Communist party’s ability to use its undivided political power to enforce austerity on consumers and funnel a large portion of national income into investment projects. In other words, Communist states were able to exploit their workers even more thoroughly than capitalists can in order to force economic growth. But this growth was very unbalanced: heavy industry and military production were overly prioritized, while light industry and production for direct consumption were left lagging behind.

Soviet per capita GDP growth, 1950-1991 (source)

Furthermore, the growth rates achieved by Communist countries were not exceptional. During the 20th century, we’ve seen examples of mixed-economy, state capitalist countries achieving consistently higher growth rates than Communist countries ever could. For example, South Korea, Taiwan, and Japan all sustained per capita GDP growth rates in excess of 6% for decades during the postwar period. Over the same period, the Soviet Union’s per capita growth rate averaged somewhere between 3 and 5%.
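To see why a one- or two-point gap in growth rates matters so much over decades, here is a quick compounding illustration in Python. The rates are stylized figures within the ranges quoted above, not actual historical series.

```python
# Compounding a constant annual per-capita growth rate over the postwar decades.
# The rates below are stylized, drawn from the ranges quoted in the text.

def total_growth(rate: float, years: int) -> float:
    """Factor by which GDP per capita grows at a constant annual rate."""
    return (1 + rate) ** years

YEARS = 40  # roughly the postwar period discussed above
for label, rate in [("East Asian 'tigers' (~6%)", 0.06),
                    ("Soviet Union (~4%)", 0.04)]:
    print(f"{label}: {total_growth(rate, YEARS):.1f}x over {YEARS} years")
```

At these stylized rates, the faster growers end up roughly ten times richer per person after four decades, versus roughly five times for the Soviet case: a seemingly small annual gap compounds into a very large absolute one.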

This seems to show that, given our current level of technological development, the best recipe for growth is a strong, interventionist state managing a market economy, rather than the abolition of markets and private property altogether. The main reason for this is that investments are utilized very inefficiently in Communist countries compared to capitalist countries. State bureaucrats under Communism face little or no personal risk if a major investment project fails, while capitalists risk financial ruin if they make the wrong investment decisions. While the bureaucracy may confidently push forward with investment projects that may or may not constitute a prudent use of resources, capitalists must tread more cautiously, leading to higher efficiency.

Per capita growth
Many capitalist economies, including Japan and Spain, were able to surpass Soviet per capita economic growth rates during the postwar period

These statistics should make it clear that in economics as well as in politics, the Soviet model was a failure. Needless to say, it is not something that democratic socialists should be trying to replicate in the decades ahead.

Friedrich Hayek’s failed prediction

Hayek

Now that we’ve seen the serious failures of the Communist model, we can turn to the issue of how to avoid a replay of Communism in the 21st century. I have argued that the key is that we reaffirm our commitment to democracy. But we should be prepared to answer criticisms by right-wing anti-Communist authors, such as Friedrich Hayek, who have argued that it was state planning of the economy itself which led to Communist totalitarianism. For Hayek, socialism and democracy are simply incompatible. Hayek wrote in his book The Road to Serfdom:

Planning leads to dictatorship because dictatorship is the most effective instrument of coercion and, as such, essential if central planning on a large scale is to be possible. There is no justification for the widespread belief that, so long as power is conferred by democratic procedure, it cannot be arbitrary; it is not the source of power which prevents it from being arbitrary; to be free from dictatorial qualities, the power must also be limited.

Hayek, who was writing just before the end of World War II, made a bold empirical prediction. He predicted that the expansion of state intervention into the economy in liberal democracies like the United Kingdom would inevitably lead to the destruction of democracy in those societies. His message was that the road of social democratic reform was a road to serfdom.

Luckily for the socialist project, however, the history of the 20th century has conclusively proven that Hayek was wrong. Countries like France, which had widespread state ownership of industry and “indicative planning” of investment throughout the postwar period, have shown no signs of sliding into authoritarian rule. India, for all its problems, has maintained a strong democratic tradition while centrally directing its economy through Five-Year Plans. The most extreme case is Portugal, which nationalized most of its economy during its transition to democracy in the mid-1970s, but never showed signs of returning to dictatorship.

Furthermore, we know that societies with muscular welfare states and redistributive taxation, like Sweden and Norway, actually have healthier democracies than countries without such protections. As long as we socialists maintain our commitment to peaceful, incremental social reform through the institutions of representative democracy, there is no risk that our policies will inadvertently lead to Communist totalitarianism.

The alternative: piecemeal social engineering

Karl Popper

The idea that liberal democracy and civil liberties are essential for protecting against tyranny was forcefully advocated by the famous Austrian philosopher Karl Popper. In Popper’s magnum opus on politics, entitled The Open Society and Its Enemies, Popper sought to diagnose the underlying causes of both Nazi and Stalinist totalitarianism. He pointed out that authoritarian ideologies tend to have in common the idea that they have unlocked the key to understanding history, and that they know what the future utopian society ought to look like in great detail. Because totalitarian ideologues are deeply convinced that they know the inevitable destiny of humanity, they feel justified in taking state power, by violence if necessary, and imposing their philosophy on their fellow citizens without any accountability or appeal.

Popper called this point of view historicism. Marxism-Leninism is a prime example of historicism, because it professes to hold the one true “scientific” method for understanding and predicting history. It concludes from this that it has the right to take state power, by violence if necessary, and to hold onto power indefinitely until it has established its vision of the Communist utopia. Nazism is another example. Nazis believe that all history can be explained by the struggle of different races for power. They further believe that the Aryan race is the master race, superior to all others, which will inevitably prove its supremacy by conquering Europe and ultimately the world.

Democratic socialists should not be historicists. We know that both capitalism and Communism have serious flaws, but we know very little about what alternative, superior models could emerge in the future. Socialists should recognize the fundamental uncertainty of our political project. We want to create a world where exploitation is abolished once and for all, but no such society has ever existed. We therefore have no rational grounds for being confident in any specific model for a future socialist society. All we can do is experiment with different ideas, expanding upon policies that have already been proven— like a robust welfare state, worker codetermination on corporate boards, and the nationalization of monopolies. We can try new things too, like a job guarantee, promoting worker-owned cooperatives, developing a prize system to spur innovation without private property, and much more. But what we can’t do is engage in utopian social engineering, confidently imposing an untested blueprint for society on our fellow citizens.

The alternative to the historicist approach is what Popper calls piecemeal social engineering. The idea is to focus on concrete social ills, and to formulate incremental reforms that can address these ills. If the new policy has negative effects, we can always reverse it through the institutions of representative democracy. Over time, we may be so successful at solving social problems that we will approach something that might be called a utopia. But the key is that we don’t know what the utopia will look like in advance, or how it will come about.

Democracy is essential for piecemeal social engineering because it is the only political system that allows rulers to be removed from power by the masses without violence. Through democracy, we can reverse bad policies and sack would-be dictators. Freedom of expression, association, and the press are also essential, because they allow the masses to criticize the policies of the government and organize against them. The combination of democracy and civil liberties constitutes what Popper calls an open society. Democratic socialists should be the foremost proponents and defenders of the open society. The freedoms that the open society affords are a firm foundation on which to add the positive freedoms socialists are fighting for (guarantees of healthcare, housing, employment, and so on).

The abject failure of Communism has led many to believe that we will never be able to transcend capitalism. But we have good reason to believe that the abundance necessary to build our socialist utopia is coming. Technological development over the next century promises to automate most involuntary work, making the employer-employee relationship and therefore capitalism obsolete. Let’s just not impose our utopia on society before it’s ready.

How to Avoid Human Extinction

I want to talk about the very long term future of the human race. Most people only think about what the future might be like for themselves and their children’s generation. But the fact of the matter is that, if things go well for humanity over the next one or two centuries, there isn’t really anything stopping our civilization from thriving for billions of years into the future.

Yes, our Sun will swell into a red giant in a few billion years and engulf the Earth, but before then we will have plenty of time to figure out interstellar travel and find a new home for Earth’s inhabitants. Even after the last star dies out, our distant descendants could power their civilization using black holes. If there is any chance at all that we could build a happy, thriving civilization that could take advantage of these vast, cosmological expanses of time, we had better start figuring out how.

The emergence of existential risk

Bostrom

Now that I’ve got you thinking about the far future, it’s time to think about how human civilization could go wrong in ways that stop us from reaching our potential. The philosopher Nick Bostrom has introduced the concept of existential risk to refer to any threat that would either lead to the extinction of humanity, or would permanently and drastically curtail our potential as a species. While we’re not used to thinking about it, existential risk is really, really important. Actually, almost by definition, it’s by far the most important thing anyone could ever worry about.

For most of human history, we didn’t have to worry about existential risk. Of course, there has always been some small risk that a natural event (like an asteroid impact) could cause humanity to go extinct. But for most of human history, we simply didn’t have the technology necessary to destroy ourselves as a species. And our communication, coercion, and surveillance technology wasn’t good enough to allow any person or group to enforce a dystopian social system on the rest of humanity indefinitely.

As our technology has improved, however, we have had to face the specter of human-caused existential disaster. We can never put this genie back in its bottle. For every year that our civilization continues to exist, there is some small but nonzero probability that we will destroy ourselves one way or another. Over a decade or a century, that small probability compounds roughly ten or a hundred times over. If we want our civilization to survive for billions of years, we will have to make the probability of catastrophe vanishingly small, and keep it that way.
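To make “compounded ten or a hundred times over” precise: if the annual probability of self-destruction is some small $p$, roughly constant and independent from year to year (a simplifying assumption), then the cumulative risk over $n$ years is

$$
P(\text{catastrophe within } n \text{ years}) \;=\; 1 - (1 - p)^{n} \;\approx\; n\,p \quad \text{for small } p .
$$

An annual risk of just 0.1%, for example, compounds to about 1% over a decade and roughly 9.5% over a century.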

Nuclear holocaust

Trinity

Scientists and policymakers first began to worry about human extinction with the advent of nuclear weapons. Soon after July 1945, when the United States Army detonated its first nuclear weapon, scientists raised serious concerns that this technology would enable wars of destruction and death on a scale never before seen in human history. And when the USSR carried out its first nuclear test in 1949, this risk became very real. There were now two hostile powers on Earth, each with the capacity to initiate nuclear war. So far, humanity has been very lucky. We’ve narrowly escaped catastrophe on several occasions: during the Cuban Missile Crisis, for example, President Kennedy reckoned that the probability of war was “between 1 in 3 and even.”

While tensions between nuclear powers aren’t nearly as high now as they were during the Cold War, nuclear war remains a real possibility as long as there are multiple competing states with large nuclear arsenals. The only real long term solution is to concentrate all the world’s nuclear weapons in the hands of some transparent, democratic global institution like the United Nations. This way, the incentive for arms races would be eliminated.

Synthetic biology

While the results of nuclear war would be truly catastrophic, it’s not actually clear that the most likely outcome of such a war would be human extinction. There are many ways in which small numbers of humans could survive the nuclear winter and gradually re-establish civilization. Synthetic biology, on the other hand, presents a more serious scenario for the total annihilation of humanity.

Using genetic engineering techniques, governments as well as terrorist groups will be able to design ultra-deadly, highly communicable viruses and release them into the ecosystem, starting a global pandemic. This scenario is all the more worrying because the expertise and equipment needed to design such a virus will likely not be very great. Already, you can buy all the equipment needed for CRISPR-Cas9 gene editing online for $159, and middle schoolers are using the technology in their science classrooms. It will be impossible for governments to keep these technologies away from bad actors.

The solution here is to fight fire with fire. States will need to develop rapid-response systems that can develop and distribute vaccines to protect against synthetic pathogens in a matter of days. Since it’s virtually impossible to stop the spread of pathogens across national borders, these protective measures will be much more effective if they are implemented at the global, rather than national level.

Nanotechnology

An even more serious threat than synthetic biology is nanotechnology. Nanotechnology is the ability to precisely manipulate atoms and molecules in order to build nano-sized machines on a mass scale. Nanomachines have enormous promise: they could ultimately be used to clean up the environment, roam our bloodstreams to protect against disease, create ultra-powerful computers, and much more. But they are also incredibly dangerous, especially if they become self-replicating. Governments could deploy swarms of self-replicating nanomachines as deadly weapons, capable of killing millions and wreaking havoc on ecosystems and infrastructure. As nanotechnology becomes cheaper and more widely available, rogue actors could also use it to inflict tremendous harm. As the nanotechnologist Eric Drexler writes:

“Early assembler-based replicators could beat the most advanced modern organisms. ‘Plants’ with ‘leaves’ no more efficient than today’s solar cells could out-compete real plants, crowding the biosphere with an inedible foliage. Tough, omnivorous ‘bacteria’ could out-compete real bacteria: they could spread like blowing pollen, replicate swiftly, and reduce the biosphere to dust in a matter of days. Dangerous replicators could easily be too tough, small, and rapidly spreading to stop — at least if we made no preparation. We have trouble enough controlling viruses and fruit flies.”

Self-replicating nanotechnology therefore qualifies as an existential risk— the nightmare scenario would lead to the annihilation of the human race. The solution is to develop a rapid-response “nanotechnological immune system” that could use satellites to detect swarms of dangerous nanomachines and neutralize them before they become too powerful. This immune system would need to be global in order to be effective— if any region isn’t sufficiently protected, the problem could get out of control before other governments can react. It’s also important to concentrate offensive nanotechnological capabilities in the hands of a global institution because, without this, there will be a very strong incentive for states to engage in deadly arms races that could lead to war.

Artificial intelligence

The dangers of artificial intelligence have been getting a lot more media attention in the past few years, and for good reason. A recent survey of AI researchers found that most experts in the field believe there is at least a 70% chance that artificial intelligences will exceed human abilities in all domains before the year 2100. This means that superintelligences, AIs that dramatically outperform humans in all domains, could very well become a serious threat during this century. The problem with superintelligence is that, once these systems become sufficiently powerful, they will effectively replace human beings as the dominant life form on Earth. It will be immensely important for us to develop the capability to ensure that these superintelligences have value systems that are aligned with our own. This is referred to as the goal alignment problem.

The goal alignment problem is all the more worrying when you consider the fact that a superintelligence would, by definition, be better than human beings at AI development. This could lead to a recursive self-improvement loop, where the system modifies itself to make itself more intelligent, thereby making it more capable of making further improvements to itself, and so on, many times over. This scenario is referred to as an intelligence explosion. If this seems like an implausible idea, we should consider that an AGI would be able to copy itself onto millions of computers over the Internet, thereby increasing its raw computational power by several orders of magnitude in a matter of days. Such a system would have all of human knowledge at its disposal, and it would have the processing power to understand it all, find patterns in the chaos, and make plans based on its findings. It would be nearly unstoppable.

Whichever organization kicks off an intelligence explosion first would quickly open up a very large lead over other research teams. The superintelligence would therefore have no peer competitors to keep it in check. It could use manipulation, coercion, and advanced technologies to shape the future of humanity in accordance with its preferences, which may or may not match those of its designers. If it becomes widely known that artificial general intelligence is just around the corner, corporations and states might be motivated to engage in an arms race to become the first organization to start an intelligence explosion. The stakes involved would be astronomically large: indefinite world domination. Such an arms race could also lead to pre-emptive war in an effort to delay the research progress of rivals.

The solution here is global political integration and public oversight of artificial intelligence research. Governments should start investing public research funds into the problem of AI goal alignment. Ideally, public research into AI should be done at a global level, to reduce the incentive for arms races. AI experts disagree about the likelihood of an intelligence explosion, but we had better be prepared for the worst case scenario. If such an explosion does occur, it needs to happen under the careful oversight of a transparent, democratic, and benevolent international organization. That way, we can ensure that the immense benefits of superintelligence are shared with all of humanity.

Climate change

The climate crisis will almost certainly not lead to complete human extinction, but it is nevertheless a very serious problem, and one that requires a coordinated global response. This will be especially true if geoengineering— the deliberate engineering of the environment to counteract climate change— becomes necessary. Governments might unilaterally embark on their own efforts to change the composition of the atmosphere, starting feuds that could quickly lead to war. Global political integration would allow for binding international emissions regulations, and coordinated global investment in renewable energies. It seems unlikely that the climate crisis can be solved without much more political integration than we now have.

How to avoid catastrophe: a political solution

We’ve seen that every kind of existential risk we face could be mitigated much more effectively with global political integration. What we need is a democratic United Nations with real teeth: a world state that could put an end to arms races and take steps to protect all humanity. As long as there is fragmentation and anarchy at the international level, our species will not be able to survive for the long term. Humanity needs to be united; it needs a single voice.

But we will have to avoid the pitfalls that have plagued regional attempts at political integration, like the European Union. Europe is in severe crisis right now because it attempted economic integration (free trade and a single currency) before implementing political integration (a central government with the power to tax and spend). This model can only lead to a race to the bottom, and it won’t do anything to address the very real existential risks that our species will face this century. Neoliberal free trade deals are not what we need— we need a democratic world state, empowered to take bold action on the most pressing issues of our time.

Integration will be a gradual process, and it will require bold political leadership in the rich countries to ensure that it happens. Nations will need to be prepared to sacrifice some of their sovereignty in exchange for security. As the effects of climate change continue to compound, we can be hopeful that there will be some movement in this direction. None of this will happen automatically, however. The political Left in particular has a duty to make global political integration one of its long-term priorities. We should begin to argue for integration on security grounds: climate change, nuclear weapons, and emerging technologies are all serious threats to public safety, and they can only be tackled at the international level. Once established, the world state could implement worker-friendly policies and set global labor standards, since corporations will have nowhere else to go. Unlike individual nation-states, it will not have to implement austerity in order to achieve “competitiveness.”

The specter of totalitarianism

Critics will argue that, by ending the competition between states, global political integration would open the door for a global totalitarianism. The concern is that if a power-hungry demagogue were ever elected as the global head of state, they could quickly consolidate power, ending democratic elections and establishing a global autocracy from which there would be no appeal. This is clearly a concern that should not be taken lightly.

The problem is that totalitarianism will increasingly become a threat in the future, with or without a world state. Currently, elected leaders in parliamentary democracies don’t usually become dictators because they know that the bureaucracy, the police, and the military won’t follow orders that are clearly unconstitutional or illegal. But as more and more of the military is automated and replaced with autonomous weapons, there is a real risk that power-hungry leaders could ignore the rule of law and use their totally obedient “droid army” to coerce everyone into following their commands. If autonomous weapons and modern surveillance technology were used to enforce a global, indefinitely stable totalitarianism, this itself would qualify as an existential catastrophe, arguably no better than extinction.

There are technical and institutional solutions to this problem, but we will have to be proactive in implementing the proper security protocols. Autonomous weapons systems should be designed to require the approval of many different state officials in order to be fully deployed, so as to ensure that one president or rogue general couldn’t use them to carry out a one-man coup d’état. Once we develop the right security protocols, we will be able to use them to protect against despotism both at the national and international levels. Global political integration won’t make the risk any more serious than it already is. In fact, a world state could actually be our greatest defense against regional totalitarianism, allowing us to ensure that civil liberties and democratic elections are protected in all member states.

Grow or die: the need for space colonization

Once we’ve established a well-intentioned, democratic world state, we can start planning to hunker down for the long haul. We will need to reduce the risk of species-wide catastrophe to negligible levels— and the best way to do that is to become a multi-planetary species.

Right now, if a catastrophe occurs on Earth, there is no other world that humanity can turn to. And since the catastrophes we’ve discussed are likely to strike suddenly, without advance warning, there will be no time to establish a self-sufficient colony on the Moon or Mars once disaster is already underway. This is why it’s imperative for our species to establish a self-sufficient presence on another world now— it would give us a back-up if anything goes horribly wrong on Earth. And our first destination should be Mars.

While there has been much fanfare in recent years about Elon Musk’s successful forays into private space travel, it is very important that the first colonies on Mars are established by governments, not corporations. This is the only way to ensure that Mars is a new frontier open to all humanity, not a playground for billionaires. And to the greatest extent possible, Mars colonization should be undertaken by global coalitions of governments, not individual states. We will need to minimize the tendency for nation-states to fight over Martian territory and resources.

Over time, it is inevitable that Martian society will start to assert its political independence from Earth— the communication and travel delays are simply too great to maintain a strong centralized state encompassing both Earth and Mars. But this need not be a bad thing. As long as Earth and Mars each have strong, democratic, planetary governments that can keep advanced technologies under control in their own jurisdictions, there will be little to worry about. The long distances between planets (let alone between star systems) will strongly discourage war. And if war does break out, no one planet will be strong enough to annihilate or conquer all the others. Once humanity spreads out across the galaxy, the species will truly be secure. The vast distances of space will ensure that humanity will once again be unable to destroy itself— even if it wanted to.

Why Automation Will Kill Capitalism Forever

In the past few years, journalists, scientists, and tech CEOs alike have begun to sound the alarm about the disruptive effects that upcoming advances in robotics and artificial intelligence will have on the job market. The introduction of self-driving cars alone will result in over 4 million job losses over the next two decades, as truckers and bus drivers are replaced by autonomous vehicles. Robots and computer kiosks are already replacing jobs in food service and retail, and machine learning algorithms are even starting to replace skilled white-collar workers like accountants, middle managers, and programmers. When it comes to automation, there really is no place to hide.

Automation in historical context

Of course, automation isn’t a new phenomenon— technology has led to dramatic job losses before, particularly in industries like agriculture and manufacturing. We can roughly group the history of automation into three major “waves.”

First, the Industrial Revolution led to a precipitous decline in the share of the population working on farms. For all of world history up until the 18th century, well over half the workforce was directly employed in food production. But the introduction of machinery into agriculture dramatically increased productivity, freeing up farm laborers to work in manufacturing.

The next wave came after World War II. Technological advances greatly increased the productivity of factories, which freed up industrial workers to work in new service jobs. These service jobs largely consist of mental labor, such as reading, writing, and planning; interpersonal labor, such as interacting with customers; and light physical labor that requires dexterity, such as preparing food.

The problem is that services can be automated, too. Contemporary advances in robotics and artificial intelligence are taking aim at just those skills which service jobs require: planning and pattern recognition, interacting with humans, and manipulating objects in complex and changing environments. This really will be the final wave of automation. Once machines come to dominate the service sector, humans simply won’t have any useful skills left that can’t be done more efficiently and more cheaply by machines.

Now, many reasonable people want to hold onto the idea that humans are irreplaceable. Can machines really become as creative, intelligent, and sophisticated as human beings?

The answer to this question is yes. Science tells us that at the end of the day, humans are machines. We’re immensely complex, fleshy, biological machines, but we are machines nonetheless. There’s nothing a human can do that can’t ultimately be replicated by a machine, given enough engineering and research effort. It’s precisely the profound intelligence and creativity of human beings that allows us to understand the secrets behind our own capabilities, and design machines that can surpass us in many ways.

Peak automation

While machines will likely become more capable than humans in all domains by the end of this century, we won’t have to wait that long to see immense disruptions in the job market and society as a whole due to automation. So far, whenever automation has led to job losses in one sector, markets have adjusted by introducing new jobs in another sector. The problem is that we know this pattern cannot continue indefinitely. There will be a point at which the further introduction of automation technology will result in long-term net job losses for the economy as a whole.

We can call this point “peak automation.” Firms will lay off workers, and many of these workers will simply find that there is no employment to be had for them. All available job openings will require skills that they do not possess, and cannot afford to acquire. The long-term unemployed population will gradually increase, and this will in turn lead to a reduction in consumption spending and aggregate demand. Declining demand will prompt further layoffs, leading to further reductions in demand, in a downward spiral. Investor confidence will collapse, and a deep recession or depression will result.
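
To see why this spiral would be self-reinforcing rather than self-correcting, here is a minimal simulation sketch in Python. The coefficients are entirely arbitrary assumptions, not estimates of any real economy; what matters is the shape of the loop, in which an initial automation shock cuts employment, lost wages drag down demand, and weak demand triggers a second round of layoffs larger than the first.

```python
# Toy model of the post-"peak automation" downward spiral. All numbers are
# arbitrary illustrative assumptions, not forecasts.

employment = 100.0      # index of employed workers
automation_shock = 5.0  # initial layoffs caused directly by automation

employment -= automation_shock
for quarter in range(1, 7):
    demand = 0.9 * employment + 10      # most demand comes out of wages
    employment -= 0.5 * (100 - demand)  # firms cut staff as sales fall
    print(f"Quarter {quarter}: employment index {employment:.1f}, demand index {demand:.1f}")
```

In this sketch each quarter’s layoffs are larger than the last, and nothing inside the loop arrests the decline. That is precisely the gap the state interventions discussed below are meant to fill.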

As always, states will find that the best way to get the economy up and running again is Keynesian deficit spending. But fiscal stimulus alone will not solve this crisis. Unacceptably high levels of unemployment will be a stubborn feature of the new economy, because unskilled laborers simply will not be needed in large numbers, and there will be diminishing returns on productivity gains from adding additional skilled workers. Bringing the economy to full capacity will require more substantial state interventions into the market. These will include state-mandated reductions in the working week, expanded social programs to prop up demand, and tuition-free higher education and job training programs.

Of course, many states will be reluctant to take such left-wing measures to address the crisis— the wealthy will certainly lobby strongly against them. But countries that take a more left-wing approach will tend to economically outperform those that do nothing. And pressure from the electorate will become intense as more and more workers are laid off.

The unemployable population

As automation continues, states will face competitive pressure to keep as much of their population as possible gainfully employed in those jobs that remain— research scientists, engineers, and managers. Demands of efficiency will favor the nationalization of the most thoroughly automated industries. Higher education will take up a very large portion of GDP, as the government tries to funnel as many workers as possible into STEM-related fields. But we must recognize that not everyone is cut out for or interested in becoming a scientist, an engineer, or a manager. The state will have to figure out what to do with the rest of the population— the people who no longer need to work.

Of course, as decent, reasonable people, we would all like to make sure that the unemployable population has all of its basic needs taken care of, rather than being left to starve on the streets. Luckily there will be a strong economic rationale for the state to do the right thing here, since it will be necessary to prop up consumer demand. We can imagine, however, that some states might opt for a much darker solution to this problem.

There is an uncomfortable truth here, though. As long as it’s necessary for some skilled workers to be employed, these workers will need to be given privileges or advantages over the unemployable population in order to incentivize them to work. We cannot count on the idea that pure altruism or a sense of national duty will sufficiently motivate the scientists, engineers, and managers of the future. This means that a new kind of class division might emerge, between those who work and are given privileges for doing so, and those who live off of the state. A caring, left-wing government should do its best to minimize the inequality between these classes and facilitate a high degree of mobility between them— while working to accelerate progress in automation in order to hasten the end of class divisions once and for all.

Why a UBI isn’t good enough

It’s striking that even many of the wealthiest businesspeople in the world, like Elon Musk and Richard Branson, have recognized that a radical change in the economy will be necessary in order to adapt to the next wave of automation. The most popular policy prescription is a universal basic income (UBI), a government program that would provide a livable income to every citizen regardless of employment or financial means. Most of these pro-UBI billionaires hope that this policy would allow markets and private ownership of capital to continue indefinitely— effectively a band-aid solution to adapt capitalism to an increasingly jobless world.

But there are good reasons to believe that capitalism and a UBI can’t coexist for long. Such an arrangement would likely lead to a great amount of civil unrest and social instability. Class divisions would be made much more obvious and grotesque in such a scenario, and the unemployable majority would look at their trillionaire overlords with envy and disgust. It would quickly become clear to everyone that the owners of capital are not providing anything of use to society, and are simply extracting rents at the expense of the general population.

The capitalists, on the other hand, wouldn’t approve of being taxed at high rates in order to give handouts to the unemployable. They would prefer a situation in which the wealthy could simply trade amongst themselves. Even today, most of the tech entrepreneurs who are speaking out in favor of a UBI don’t want to fund it by raising taxes on themselves— they’re advocating to replace the entire existing social safety net with a meager cash payment.

But the wealthy cannot hold onto power forever. The rich may seem invincible now, but they only have power so long as the state continues to enforce their claims on property. If the institutions of parliamentary democracy and universal suffrage survive the turmoil, the masses will use them to wrest power away from big business. We will use state power to bring society’s resources and machinery into public ownership, so that they can be managed democratically to further the interests of all humanity. Everyone will be provided everything they need— not just to live, but to thrive and pursue their dreams and passions in a world of freedom and abundance. This utopian, Star Trek-like future is called democratic socialism. It is the stage in history when humanity will finally grow out of its infancy.

Labor-Based Parties Are Illegal in the US — Good Thing We Don’t Need One

The United States is unique among advanced capitalist nations in that it never spawned a mass labor-based political party. Instead, early 20th century American labor unions opted for a non-partisan strategy of “pure and simple unionism” in which organized labor would lobby major political parties from the outside. Today, American labor has come to largely align itself with the Democratic Party, a loose coalition that includes wealthy donors and powerful business interests. The unfortunate result is that the American working class lacks an unapologetic political voice.

What is a labor party, anyway?

Many leftists want to remedy this situation by building a new labor-based party in the United States, modeled on those in Europe. Traditional labor-based parties, such as the British Labour Party, are founded by labor unions for the purpose of furthering the interests of working people. Unions formally affiliate to these parties, providing financial and organizational support in exchange for a large degree of control over the selection of party candidates. Labor parties also rely on a dues-paying party membership, which is given a binding say over candidate selection and the overall policy of the party. These parties can also revoke the membership of sitting elected officials who stray too far from the party platform— as happened, famously, to Labour prime minister Ramsay MacDonald in 1931. These are externally organized parties, where ordinary people come together and recruit their own representatives to contest elections in order to gain power they don’t already have.

(Party logos: the British Labour Party, the Australian Labor Party, and the New Democratic Party)

But externally organized parties never really took off in the United States, for various structural reasons. In the US, property restrictions on voting rights were removed early in the 19th century, much earlier than in most other parts of the world. This meant that universal white male suffrage preceded the rise of the labor movement in the US. Elected officials felt the need to establish mass-oriented political parties which could mobilize voters to elect their allies to office. These internally organized parties were built from inside the state, downward into civil society. They were designed to serve the interests of the elected officials who created them. And once these parties gained a foothold, they created partisan divisions among workers that made it more difficult for labor unions to unite their members around a single labor-based party.

The hollowing out of American political parties

Initially, these internally organized parties were more or less controlled by elected officials and party bosses. Decisions about candidate selection were made behind closed doors by party insiders. But over the decades, pressure from popular movements began to break through this entrenched, corrupt political machine. The Progressive Era of the early 20th century saw the introduction of the first primary elections— but these were sporadic and usually non-binding. It wasn’t until the chaotic 1968 Democratic convention that the modern American primary system took shape. State governments established primary elections all across the country, and the results of these primaries were made binding. American political parties effectively relinquished their control over their own ballot lines.

Since the opening up of the primary system in the 1970’s, a new conception of political parties has entrenched itself in the minds of American voters as well as in the law. American political parties have come to be seen as state-regulated public utilities that are open to all who wish to enter, rather than private associations of voters and candidates. These “public” parties have remarkably little power, beyond making non-binding endorsements and coordinating fundraising efforts.

State laws ban traditional labor-based parties

The public utility model of political parties is legally imposed onto any party that seeks to gain ballot access in the United States. As Seth Ackerman wrote in a popular article in Jacobin magazine,

“Normally, democracies regard political parties as voluntary associations entitled to the usual rights of freedom of association. But US state laws dictate not only a ballot-qualified party’s nominating process, but also its leadership structure, leadership selection process, and many of its internal rules…” – A Blueprint for a New Party

In most states (around 47 out of 50) there are laws on the books which require political parties to participate in state-run primary elections and abide by the results. Georgia is one example:

“…all nominees of a political party for public office shall be nominated in the primary preceding the general election in which the candidates’ names will be listed on the ballot.” – Georgia Code § 21-2-151

This means that those leftists who want to launch a new labor-based political party in the United States won’t be able to escape the primary system. Neoliberal and right-wing elements could easily enter the primary race of a new labor party and use their fundraising advantage to take the party’s nomination. This isn’t just a theoretical problem— it’s something that the Green Party has actually struggled with for years.

(Screenshot: the Green Party can’t control its ballot line, either)

In fact, just this year, the far-right activist James Condit, Jr. was able to enter the Green Party primary for the 2nd congressional district in Ohio. He ran unopposed and won the party’s nomination with just 43 votes. The Ohio Green Party has publicly condemned Condit and is encouraging its supporters to vote against him— but they have no legal authority to stop him from appearing on the general election ballot as the Green nominee.

This problem is even more acute in states like California, Louisiana, or Washington. These states have a “top-two” primary system, where candidates from all political parties run together in a non-partisan primary, and the top two vote-getters advance to the general election. One side effect of this system is that candidates can identify themselves on the ballot with any registered political party they choose, as a matter of self-identification. The parties have absolutely no control. Do we really want to pour resources into getting a new labor party legally recognized, only to have blockchain startup CEOs and “transhumanist lecturers” running on its ballot line?

Given the legal structure that exists in the United States today, the project of building a new mass political party with control over its own ballot line, whose candidates are selected by dues-paying party members and unions, is simply impossible. Labor party activists would have to embark on an ambitious project of electoral reform in almost every state in the Union, fighting for legislation that would empower political parties at the expense of primary voters. This would be seen by most working people as an anti-democratic move. Leftists shouldn’t be fighting to strengthen parties— instead, we should be fighting alongside Our Revolution activists to weaken the party system even more, by establishing open primaries and eliminating superdelegates.

America’s weak party system means that we will have to work especially hard to keep our elected officials accountable. Accountability involves keeping politicians reliant on and fearful of the movements and organizations that got them elected in the first place. In countries where traditional labor-based parties are legal, the state makes it easy to maintain a modicum of accountability by allowing parties to simply revoke the party membership of those who stray from the party platform. But the American state won’t make it so easy for us. If the Left is going to build power in the United States, we will have to get very good at winning primaries, and unseating those who stray too far from our preferred policies.

A new party of a new type?

In his article A Blueprint for a New Party, Seth Ackerman rightly points out that the Left shouldn’t be obsessed with having our own, independent ballot line— what matters is that we can build up a powerful coalition of civil society organizations that can recruit and throw its weight behind left-wing candidates for public office. For Ackerman, the choice of ballot line would be a pragmatic decision, based on the local conditions.

One problem with Ackerman’s article, however, is that he doesn’t quite acknowledge that, in nearly all cases, the pragmatic choice is to run left-wing candidates as Democrats. Working people usually vote based on party identification, so running on a third-party or independent ballot line simply makes the campaign much more difficult, with no obvious benefit. In effect, Ackerman’s “party of a new type” would be a membership organization inside the Democratic Party, seeking to capture the Democrats by winning primary elections. We should be honest about this— the project of capturing the Democratic Party is nothing to be ashamed of.

Many argue that, even if we run candidates as Democrats today, any new party should have a long-term goal of developing its own ballot line and completely breaking from the Democrats. But there’s no obvious reason why this would be a good or necessary thing. As we’ve already established, state laws mandate that the Democratic Party must abide by the results of its primary elections. In most states, Democratic leaders couldn’t close up their primaries even if they wanted to— even if they felt threatened by an insurgent left-wing movement to capture the party. Democratic lawmakers would have to try to push an electoral reform bill through the state legislature in order to end primaries, a blatantly anti-democratic move that would provoke a strong media backlash. Now that the primary system has been opened up, it will be nearly impossible for party elites to close it back up again.

Building power without a party

“[R]ather than dismissing the Democrats and pinning our hopes on a third party, the American left must rethink which kinds of goals can be accomplished in the realm of American party politics, and which cannot… The burden of the American left is to build the power of the working class without the assistance of [a] working-class party.”
— Adam Hilton, Left Challenges Inside the Democratic Party

The Democratic Party is a hollow bureaucratic shell that cannot be transformed into a labor-based party. But we can’t build a new labor party from scratch, either, because American electoral law makes it impossible. The good news is that we don’t need a traditional labor-based party. We can establish an unapologetic political voice for working people by building a network of civil society organizations that can project power inside the Democratic Party. This movement would secure its hegemony by consistently winning a solid majority of Democratic primary elections across the country.

American socialists should look to the left wing of the British Labour Party as a model. Labour has been effectively captured by socialists in the last few years— and it didn’t take a “party within a party” to accomplish this. Rather, the Corbynite wing of the Labour Party consists of a loose network of civil society organizations and labor unions, informally led by a group named Momentum. Given the success of the Corbynite movement, it should be even easier for a left-wing coalition to take the reins of the Democratic Party, which is much more open and porous than Labour has ever been.

As I discussed in a previous post, the reason the Democratic Party hasn’t been captured by a Momentum-like organization yet is that the overall political conditions haven’t been favorable since the 1970’s, when the primary system first opened up. The neoliberal crisis of capitalism, the defection of Southern Dixiecrats to the Republican Party, and an eight-year Reagan presidency shifted the entire political discourse far to the right, in ways that we are only beginning to recover from. Today, however, working people are hungry for a new kind of politics that truly represents their interests. The conditions are ripe for the Left to capture the Democratic Party. We simply have to recognize that this is in fact our aim, and dedicate resources to achieving it.