
This is a short piece cross-posted from the new Global Priorities Project website. Paul Christiano has explored the case for self-driving cars as a target for philanthropy based on direct societal effects; we ask what a longer-term perspective might mean.

By Owen Cotton-Barratt and Sebastian Farquhar

Self-driving cars are coming. A review of existing legislation by the Department for Transport, released Wednesday, said the cars could be tested legally on any roads in Britain, so long as there was a human driver in the car who could assume control and would take responsibility for any accident. Google’s prototypes have been driving unaided in pre-mapped environments. Google entered discussions last month with major car manufacturers about getting production lines rolling within the next two to five years.

Pundits gush about the potential benefits, but one effect of driverless cars is usually neglected: they will push us to build a legislative framework for future automation technologies. Though often seen as a challenge, this may be a huge opportunity.

Many of the benefits of self-driving cars are well known. First, car crashes kill a lot of people. Around 1,700 people die in traffic accidents each year in Britain, and worldwide the total is 1.2 million¹, almost as many as die from HIV/AIDS. These deaths are overwhelmingly caused by human error, and it seems likely that the technology could soon become reliable enough to eliminate a large proportion of them.

Second, people spend a lot of time driving. Some of this is enjoyed, but much is merely endured. Freeing people to use their travel time in work, study or leisure would be a substantial boost to the economy and to wellbeing. Moreover, the effects on congestion, emissions and accessibility could be substantial.

But there is another big reason to push ahead with adopting driverless cars, and it tends to get neglected. We are living in a world of increasing automation, and we need to adapt to that: technically, legally, and socially. Self-driving cars seem like a big step today, but we can expect much greater automation in our lifetimes, for example in medical diagnosis. A recent paper estimates that around half of current jobs in developed economies may be automated away in the coming decades.

We have a lot to learn. How can we produce systems which are robust to unknown errors in their code? Which are resistant to remote sabotage? How can we structure our laws so that liability is clear when things fail? How can we structure incentives so that it is in everyone's interest to prevent such failures? What new areas of employment can we create to take advantage of this liberation of human time and expertise?

All of these will take time, and will almost certainly not be done right from the beginning. The longer we wait to grapple with the practical issues that driverless cars present, the longer it will be before our society has the opportunity to adapt to the next great force shaping humanity. That will leave us less prepared for future increases in automation, which may be much more dramatic and more essential to get right. If this were the last time we had to face this challenge, it might make sense to take things slowly. But because there is so much yet to come, the societal knowledge we gain from experimentation is very valuable.

Slowing things down now might make it easier for us in the short term. It is likely, however, that it will make it harder for society to adapt to a more sudden and alarming change in the future, when we can no longer hold back the rising tide of technology.

Endnotes:

1 The UK figure, from the Department for Transport, is from their 2013 annual report, as 2014 data have not yet been finalised. The worldwide figure, provided by the World Health Organisation, is for the year 2010. It is likely that this represents a significant underestimate, as car ownership in less economically developed nations, which often have poor road safety, has risen since then.


Comments (10)

Are there any ways we as average individuals can presently aid in such efforts?

I don't know of any particular opportunities. My personal guess is that this is good enough to be worth being open to good opportunities, but not good enough to put much work into seeking them out.

I just finished reading David Owen's book 'The Conundrum', which is an exploration of unintended consequences and the macroeconomic effects of Jevons' Paradox. It's too long for me to properly summarize right now, but he makes what seemed to me strong arguments that: there are many situations where efficiency gains open up technology frontiers which lead to more consumption (transistors were not just "much more efficient vacuum tubes"); and that the automobile has been one of the most damaging technologies of the 20th century, leading to vast sprawl in North America, with all its environmental issues.

Certainly it's possible that automated cars may have their own frontier effect (decentralized fleets of micro cars allowing people to more easily live in dense urban areas?) but a very obvious effect of auto-autos is that they'll basically be "cheaper" to own in many ways, which means there will be more of them consumed and potentially a lot more environmental impact from them.

I think the argument that robo-cars represent a test bed for dealing with widespread automation is a pretty interesting one, but it's not at all clear to me that robotic cars are a technology that, on balance, is going to make things better in the short/medium term.

Anyway, you'll probably find The Conundrum interesting. I found it via Russ Roberts' excellent EconTalk podcast, where he had a discussion with Owen a couple of years ago.

Hi Owen and Sebastian,

The assumption behind your argument seems to be that slowing (resp. accelerating) progress in automation will result in faster (resp. slower) changes in the future rather than e.g. uniform time translation. Can you explain the reasoning behind this assumption in more detail?

That isn't the assumption that's meant to be driving the argument. I think there are two main factors:

(i) Pushing self-driving cars relative to other automation is likely to increase societal wisdom regarding automation faster. They are very visible and have macro-level effects, and will require us to develop new frameworks for dealing with them. In contrast, better AI in computer games has to a first approximation none of these effects, but could also feed into long-term automation capabilities.

(ii) Pushing for adoption of self-driving cars is useful relative to pushing for improvements in the underlying automation technology, because it will give us longer to deal with these issues for a given automation level (because we can assume that improvements in automation will continue regardless of adoption; although note that adoption may well speed up automation a bit too).

I actually think the assumption you mention is probably true too -- because the rest of the economy is likely to continue to grow it will be cheaper relative to wealth to improve automation later, so it could go faster. But this effect seems rather smaller to me, and as increasing automation isn't the only driver of increasing societal wisdom, I'm much more sceptical about whether it's good to speed automation as a whole.

Thx for replying!

I'm still not sure I follow your argument in full. Consider two scenarios:

  1. Self-driving cars are adopted soon. Progress in automation continues. Automation is eventually adopted in other areas as well.

  2. Self-driving cars are adopted later. Progress in automation still continues, in particular through advances in other fields such as computer game AI. Eventually, self-driving cars and automation in other areas are adopted.

In each of these scenarios, we can consider the time at which a given type/level of automation is adopted. You claim that in scenario 2 these times will be more densely spaced than in scenario 1. However, a priori it is possible to imagine that in scenario 2 all of these times occur later but with the same spacing.

What am I missing?

I agree that it's possible that your scenario 2 just shifts everything back uniformly in time, but think in expectation the spacing will be denser.

Toy model: look at the spacing between self-driving cars and some future automation technology X. A major driver of the time at which X is adopted is technological sophistication. Whether or not we adopt self-driving cars now won't have much effect on when we reach the level of technological sophistication required for X. If we were in the same social position either way, this would mean that we would adopt X at roughly the same time regardless of when we adopt self-driving cars. Of course, social views might differ depending on what happened with self-driving cars.

If we want to maximise the time between self-driving cars and X, we'd be best adopting the cars as soon as possible (given technological constraints), and pushing back adoption of X as long as possible.
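To put that in rough symbols (the notation here is purely illustrative): write t_C for the time self-driving cars are adopted and t_X for the time X is adopted, with t_X set mainly by technological sophistication and hence roughly independent of t_C. The learning window society gets is W = t_X − t_C, and since t_X is approximately fixed, W is maximised by making t_C as early as is technically feasible and pushing t_X as late as possible.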

Your toy model makes sense. However, if instead of considering the future automation technology X we consider some past (already adopted) automation technology Y, the conclusion would be the opposite. Therefore, to complete your argument you need to show that, in some sense, the next significant step in automation after self-driving cars is closer in time than the previous significant step was.

I see what you're thinking. We break the symmetry not by thinking that the next step is going to be closer in time, but by thinking that the next step(s) are going to be more important to get right than either self-driving cars or earlier automation.

In a way, the two are interchangeable: if we define "steps" as changes of given magnitude then faster change means more densely spaced steps.

There is another effect that has to be taken into account. Namely, some progress in understanding how to adapt to automation might be happening without the actual adoption of automation, that is, progress that occurs because of theoretical deliberation and broader publicity for the relevant insights. This sort of progress creates an incentive to move all adoption later in time.