Five Things I Learned from the Second Machine Age

I’ve resolved to (a) start reading more non-fiction and (b) start writing about what I read, so I hope this is the first of a continuing series in which I share a few of the things I’ve learned in my recent reading.

Erik Brynjolfsson and Andrew McAfee’s (hereafter “BM”) The Second Machine Age is one of the most talked-about tech and economics books of the last year. Like Zero to One or The Lean Startup, it feels like a must-read for entrepreneurs, investors and policy makers.

There are plenty of summaries out there on the web, but I wanted to talk briefly about the five ideas that had the biggest impact on me:

  1. Things look impossible for a long time, until they’re suddenly not
  2. “Combinatorial innovation” means there are more ideas than ever, but finding the good ones is challenging
  3. Innovation has a measurement problem
  4. We’re in a difficult time for human labour
  5. Humans need to rethink how they work with machines

1. Things look impossible for a long time, until they’re suddenly not

Some of the most extraordinary technological advances of the last few years (e.g. self-driving cars, bipedal robots, human-level computer vision) “have taken place in areas where improvement has been frustratingly slow for a long time and where the best thinking often led to the conclusion that it wouldn’t speed up” (p37).

How come? BM note that innovation today is (a) exponential, (b) digital and (c) combinatorial (p37) – i.e. it’s changing faster, touching more sectors and becoming more transferable than ever before. Each layer of innovation enables another: “The exponential, digital and recombinant powers of the second machine age have [made possible] two of the most important one-time events in our history: the emergence of real, useful artificial intelligence and the connection of most of the people of the planet via a common digital network” (p89).

2. “Combinatorial innovation” means there are more ideas than ever, but finding the good ones is challenging
Combinatorial innovation “is an innovation-as-building-block view of the world, where both the knowledge pieces and the seed ideas can be combined and recombined over time” (p80).

Prosaically, you might blame this idea for “Facebook for pets” and a hundred other “X for Y” startup ideas – but it’s also the driver that allows barely funded startups to use libraries like Torch, rather than having to re-implement deep learning from scratch.

The sheer number of these building blocks, though, means that there are a mind-blowing number of potential combinations: “‘Is growth over?’… Not a chance. It’s just being held back by our inability to process all the new ideas fast enough” (p81).

The good news is that innovation is increasingly distributed; there is no central authority whose ability to assess innovation is the bottleneck. “With more eyeballs, more powerful combinations will be found” (p82). For example, BM highlight InnoCentive – whose users solve around 30% of scientific problems posted by organisations that had previously been stumped – and Affinnova, which allows the crowd to evaluate very large numbers of combinations of building blocks quickly. They also note that mechanisms like MOOCs unlock talent (and eyeballs) that would otherwise be hidden. When Stanford ran its AI course online, the “top performer in the course at Stanford was only the 411th best among all online students” (p199).

3. Innovation has a measurement problem
Innovation seems obvious, but is hard to measure. “The official GDP numbers miss the value of new goods and services added to the tune of 0.4 percent of additional growth each year” – equivalent to a fifth of long-term average productivity growth (p118).
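To make the scale of that claim concrete, here is my own back-of-the-envelope arithmetic (the numbers beyond the quoted 0.4 percent and “a fifth” are implied, not stated in the book):

```python
# Back-of-the-envelope check of the measurement claim (my arithmetic, not BM's).
missed_growth = 0.4          # percentage points of annual growth missed by official GDP
share_of_productivity = 1/5  # BM say this is roughly a fifth of productivity growth

# Implied long-term average productivity growth, in percent per year
implied_productivity_growth = missed_growth / share_of_productivity
print(implied_productivity_growth)  # → 2.0
```

In other words, the quote implies long-run productivity growth of roughly 2 percent a year, with a fifth of it invisible to the official statistics.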

It’s also hard to quantify the new kinds of capital that dominate the second machine age: “intellectual property, organisational capital, user-generated content, and human capital” (p118).

However, BM see technology as part of the solution and point to new non-governmental macro-level statistics, such as efforts to measure inflation using online prices; attempts to use satellite mapping of electric light usage at night to estimate economic growth; and the use of search term volumes to learn more about unemployment (p122). This seems a fascinating space; at EF, I’d love to fund people who are interested in solving this measurement problem*.

4. We’re in a difficult time for human labour
In the US, 1999 (!) was the peak year for real median household income – $54,392 – and by 2011 it was still only $50,054 (p128). Between 1983 and 2009, the top 20 percent of the income distribution gained more than 100 percent of the national increase in wealth in the period; “the bottom 80 percent of the income distribution actually saw a net decrease in their wealth” (p130). More broadly, labour’s share of national income has fallen, from an average of 64.3% between 1947 and 2000 to 57.8% by 2010 (p143).

There’s no single-factor explanation, but automation appears to be playing some role. Employment optimists often argue that entrepreneurs will always find ways to use human labour, so even if existing jobs are automated, new jobs will be created. That is, “as long as there are unmet needs and wants in the world, unemployment is a loud warning that we simply aren’t thinking hard enough about what needs doing” (p181). However, BM also argue that replacement “has happened to many other inputs to production that were once valuable, from whale oil to horse labour. They are no longer needed in today’s economy even at zero price” (p178). This is the scary stuff: horse labour plummeted in the first two decades of the 20th century; human labour might in the 21st.

5. Humans need to rethink how they work with machines
However, the story is much more subtle than one of wholesale automation leading to redundancy of human labour. Chess, as one of the fields where computers first and most visibly outpaced human mental labour, holds a number of lessons.

BM note that in chess, it’s now clear that “Weak human + machine + better process is superior to a strong computer alone and, more remarkably, superior to a strong human + machine + inferior process” (p188). This is another area I’d like to fund at EF*.

A few areas seem promising for humans, particularly “ideation, large-frame pattern recognition, and the most complex forms of communication”. BM quote Garry Kasparov on the idea that computers are still bad at generating “a new idea” and note, “As we look across examples of things we haven’t seen computers do yet, this idea of the ‘new idea’ keeps recurring” (p190).

It’s an interesting and perhaps encouraging thought, though it feels as though what counts as a “new idea” is likely to become blurred over time. Are Google’s deep dream images ‘original’? In one sense, it’s very obvious where they come from – we can see their non-AI origins; in another, no human has – or even could have – generated them. As BM say, “one man’s creativity is another machine’s brute-force analysis” (p202).

Their conclusion is that “You’ll be paid in future based on how well you work with robots” (p191). That’s going to radically shake up the hierarchy of jobs – and has profound implications for everything from education to welfare policy.

There are many, many more fascinating ideas in the book, but I want to keep these posts relatively short. I strongly recommend reading the book. It’s thoughtful and provocative. If you’re in London and would be interested in reading and discussing it as a group, let me know.

*If you’re interested in either of these ideas, I’d love to talk. Just send me an email.
