Agricultural research and development


Crossposted from the Giving What We Can blog

Foreword: The Copenhagen Consensus and other authors have highlighted the potential of agricultural R&D as a high-leverage opportunity. This was enough to get us interested in understanding the area better, so we asked David Goll, a Giving What We Can member and a professional economist, to investigate how it compares to our existing recommendations. – Owen Cotton-Barratt, Director of Research for Giving What We Can

 

Around one in every eight people suffers from chronic hunger, according to the Food and Agricultural Organisation’s most recent estimates (FAO, 2013). Two billion suffer from micronutrient deficiencies. One quarter of children are stunted. Increasing agricultural yields and therefore availability of food will be essential in tackling these problems, which are likely to get worse as population and income growth place ever greater pressure on supply. To some extent, yield growth can be achieved through improved use of existing technologies. Access to and use of irrigation, fertilizer and agricultural machinery remains limited in some developing countries. However, targeted research and development will also be required to generate new technologies (seeds, animal vaccines and so on) that allow burgeoning food demand to be met.

Agricultural research and development encompasses an extremely broad range of activities and potential innovations. A 2008 paper issued by the Consultative Group on International Agricultural Research (von Braun et al., 2008), an international organization that funds and coordinates agricultural research, identifies 14 ‘best bets’. These include developing hybrid and inbred seeds with improved yield potential, better resistance to wheat rust, increased drought tolerance and added nutritional value, but also encompass the development of new animal vaccines, better fertilizer use and improved processing and management techniques for fisheries.

Notable successes in seed development seem to have generated immense social benefit. The high-yielding varieties that spread through the ‘Green Revolution’ are often credited with driving a doubling of rice and wheat yields in Asia from the late 60s to the 90s, saving hundreds of millions of people from famine (see, for instance, Economist, 2014). Given the prevalence of hunger and the high proportion of the extremely poor that work as farmers, agricultural research and development seems to offer a potential opportunity for effective altruism.

Existing benefit-cost estimates are promising, though not spectacular. The Copenhagen Consensus project ranked R&D to enhance agricultural yields as the sixth most valuable social investment available, behind deworming and micronutrient interventions but ahead of popular programmes such as conditional cash transfers for education (Copenhagen Consensus, 2012).

The calculations that fed into this decision were based on two main categories of benefit. First, higher yield seeds allow production of larger quantities of agricultural output at a lower cost, bolstering the income of farmers. Around 70 per cent of the African labour-force work in agriculture, many in smallholdings that generate little income above subsistence (IFPRI, 2012). Boosting gains from agriculture could clearly provide large benefits for many of the worst off. Second, decreased costs of production lead to lower prices for food, allowing consumers to purchase more or freeing up their income to be spent elsewhere.

Projecting out to 2050, these two types of benefit alone are expected to outweigh the costs of increased R&D by 16 to 1 (Hoddinott et al., 2012). By comparison, the benefit-cost ratios estimated within the same project for salt iodization (a form of micronutrient supplement) range between 15 to 1 and 520 to 1, with the latest estimates finding a benefit-cost ratio of 81 to 1 (Hoddinott et al., 2012), and most of the estimates reported to the Copenhagen Consensus panel for the benefit-cost ratio of conditional cash transfers for education fall between 10 to 1 and 2 to 1 (Orazem, 2012). Using a very crude method, we can also convert the benefit-cost ratios into approximate QALY terms. Using a QALY value of three times annual income and taking the income of the beneficiaries to be $4.50 a day (around average income per capita in Sub-Saharan Africa), agricultural R&D is estimated to generate a benefit equivalent to one QALY for every $304.
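The crude QALY conversion can be reproduced in a few lines. This is only a sketch of the arithmetic described above, not the Copenhagen Consensus methodology itself; the small gap between the roughly $308 it produces and the $304 quoted presumably reflects rounding in the underlying inputs.

```python
# Crude conversion of a benefit-cost ratio into an approximate cost per QALY.
# Assumptions (from the text): one QALY is valued at three times annual income,
# beneficiaries earn around $4.50 a day, and the benefit-cost ratio is 16 to 1.
daily_income = 4.50                       # USD/day, ~Sub-Saharan African average
annual_income = daily_income * 365
qaly_value = 3 * annual_income            # dollar value assigned to one QALY

benefit_cost_ratio = 16                   # Hoddinott et al. (2012), to 2050
cost_per_qaly = qaly_value / benefit_cost_ratio   # ~308 USD per QALY
```

At these inputs the calculation gives about $308 per QALY, in line with the ballpark figure in the text.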

Other types of benefit were not tabulated in the Copenhagen Consensus study, but should also be high. Strains that are resistant to drought, for instance, could greatly reduce year-to-year variation in crop yields. More resilient seeds could mitigate the negative effects of climate change on agriculture. Lower food prices may lead to better child nutrition, with life-long improved health and productivity. Finally, higher yields may decrease the potential for conflict due to the pressure on limited land, food and water resources resulting from climate change and population growth. Each of these benefits alone may justify the costs of research and development but, with our limited knowledge, they are not easily quantified.

The high benefit-cost ratio found by the Copenhagen Consensus team is broadly consistent with other literature. Meta-analysis of 292 academic studies on this topic has found that the median rate of return of agricultural R&D is around 44% (Alston et al., 2000). A rate of return, in this sense, indicates the discount rate at which the costs of an investment are equal to the benefits – rather like the interest rate on a bank account. More recent studies, focusing on research in Sub-Saharan Africa, have found aggregate returns of 55% (Alene, 2008).

Unfortunately, the rate of return on investment is not directly comparable to a benefit-cost ratio; the methodology applied often deviates from the welfare-based approach applied by the Copenhagen Consensus team, and the two numbers cannot be accurately converted into similar terms. Nonetheless, a crude conversion method can be applied to reach a ballpark estimate of the benefit-cost ratio implied by these studies. Assuming the cost of a marginal increase in spending on research is borne upfront and that the research generates a constant stream of equal benefits each year from then on, the benefit-cost ratio for an investment with a 44% rate of return at a 5% discount rate is 9 to 1.
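Under those assumptions the conversion is simple: if the rate of return r is the discount rate at which the perpetual benefit stream exactly repays the upfront cost, the annual benefit must equal r times the cost, and discounting that same perpetuity at rate d gives a benefit-cost ratio of r/d. A minimal sketch:

```python
# Convert an internal rate of return into a benefit-cost ratio, under the
# text's assumptions: the cost C is paid upfront, and the investment then
# yields a constant benefit B every year in perpetuity.
#
# The rate of return r is the discount rate at which NPV = 0, i.e.
# B / r = C, so B = r * C. Discounting the same perpetuity at rate d
# instead values the benefits at B / d = (r / d) * C, hence BCR = r / d.

def benefit_cost_ratio(rate_of_return, discount_rate):
    return rate_of_return / discount_rate

ratio = benefit_cost_ratio(0.44, 0.05)   # ~8.8, i.e. roughly 9 to 1
```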

There are, however, at least two reasons to treat these high benefit-cost estimates with skepticism.

First, estimating the effect of research and development is difficult. One problem is attribution. Growth in yields can be observed, as can spending on research and development, but it is much more difficult to observe which spending on research led to which increase in yields. If yields grew last year in Ethiopia, was this the result of research that occurred two years ago or ten years ago? Were the improved yields driven by spending on research within Ethiopia, or was it a spillover from research conducted elsewhere in the region or, even, research conducted on another continent? Estimating the effect of R&D spending requires researchers to adopt a specific temporal and spatial model dictating which expenditures can affect which yields in which countries. Teasing out causality can therefore be tricky, and some studies have suggested that inappropriate attribution may have led to systematic bias in the available estimates (e.g. Alston et al., 2009).

Another problem is cherry picking. Estimates garnered from meta-analysis are likely to be upwardly biased because studies are much more likely to be conducted on R&D programmes that are perceived to be successful. Failed programmes, on the other hand, are likely to be ignored and, as a result, the research may paint an overly optimistic picture of the potential impact of R&D.

Second, for new technologies to have an impact on the poor, they need to be widely adopted. This step should not be taken for granted. Adoption rates for improved varieties of crops remain low throughout Africa; farmer-saved seeds, which are unlikely to be improved, account for around 80 per cent of planted seeds in Africa compared to a global average of 35 per cent (AGRA, 2013). To some extent, this is because previous research has been poorly targeted at regional needs. The high-yield varieties developed during the Green Revolution require irrigation or very high levels of rainfall. New seed development was focused on wheat and rice, rather than alternative crops such as sorghum, cassava and millet. High yielding varieties required extensive fertilizer use. All of these features rendered them unsuitable for the African context, and explain why it was not easy to replicate the Asian success story elsewhere (Elliot, 2010).

However, there are more structural features of many developing countries that will limit adoption. Lack of available markets for surplus production can mean that smallholders can see limited benefit from larger harvests, especially when new seeds are costly and require additional labour and expensive fertilizer. Weak property rights undermine incentives to invest, given that farmers may be unable to hold on to their surplus crop or sell it at a fair price. Unavailability of credit means that, even when it makes good economic sense for farmers to invest in improved seeds, they may not be able to raise the initial capital required. The benefit-cost estimates discussed above, based on a synthesis of evidence from a diverse set of contexts, may underestimate the difficulties with adoption in more challenging countries.

Even in Asia during the Green Revolution, high-yield varieties were adopted first and foremost by large agricultural interests rather than smallholders (Wiggins et al., 2013). If this was the case for newly developed seeds, the impact on the poorest would be more limited than suggested in the Copenhagen Consensus study. They could still benefit from lower food prices and increased employment in the agricultural sector, but in extreme scenarios smallholders may even lose out due to low cost competition from larger farms that adopt new seeds.

In combination, the difficulties with estimating the effects of R&D and the potential barriers to adoption suggest that the estimated benefit-cost ratios reported earlier are likely to be upwardly biased. The benefit-cost ratios estimated are also lower than those associated with Giving What We Can’s currently recommended charities. For instance, the $304 per QALY estimate based on the Copenhagen Consensus benefit-cost ratio, which appears to be at the higher end of the literature, compares unfavourably to GiveWell’s baseline estimate of $45 to $115 per DALY for insecticide treated bednets (GiveWell, 2013). The benefit-cost ratios also appear to be lower than those associated with micronutrient supplements, as discussed earlier. While there are significant benefits that remain unquantified within agricultural R&D, the same is also true for interventions based on bednet distribution, deworming and micronutrient supplements. As a result, while this area could yield individual high impact opportunities, the literature as it stands does not seem to support the claim that agricultural R&D is likely to be more effective than the best other interventions.

References

  • Food and Agricultural Organisation, ‘The State of Food and Agriculture 2013’ (2013)
  • von Braun, J., Fan, S., Meinzen-Dick, R., Rosegrant, M. and Nin Pratt, A., ‘What to Expect from Scaling Up CGIAR Investments and ‘Best Bet’ Programs’ (2008)
  • Copenhagen Consensus, ‘Expert Panel Findings’ (2012)
  • Hoddinott, J., Rosegrant, M. and Torero, M. ‘Investments to reduce hunger and undernutrition’ (2012)
  • Orazem, P. ‘The Case for Improving School Quality and Student Health as a Development Strategy’ (2012)
  • Alliance for Green Revolution in Africa, ‘Africa Agriculture Status Report 2013: Focus on Staple Crops’, (2013)
  • International Food Policy Research Institute, ‘2012 Global Food Policy Report’, (2012)
  • Elliot, K., ‘Pulling Agricultural Innovation and the Market Together’, (2010)
  • Wiggins, S., Farrington, J., Henley, G., Grist, N. and Locke, A. ‘Agricultural development policy: a contemporary agenda’ (2013)
  • GiveWell, ‘Mass distribution of long-lasting insecticide-treated nets (LLINs)’ (2013), http://www.givewell.org/international/technical/programs/insecticide-treated-nets, retrieved July 10th 2014
  • The Economist, ‘A bigger rice bowl’, May 10th 2014

How to treat problems of unknown difficulty


Crossposted from the Global Priorities Project

This is the first in a series of posts which take aim at the question: how should we prioritise work on problems where we have very little idea of our chances of success? In this post we’ll see some simple models-from-ignorance which allow us to produce some estimates of the chances of success from extra work. In later posts we’ll examine the counterfactuals to estimate the value of the work. For those who prefer a different medium, I gave a talk on this topic at the Good Done Right conference in Oxford this July.

Introduction

How hard is it to build an economically efficient fusion reactor? How hard is it to prove or disprove the Goldbach conjecture? How hard is it to produce a machine superintelligence? How hard is it to write down a concrete description of our values?

These are all hard problems, but we don’t even have a good idea of just how hard they are, even to an order of magnitude. This is in contrast to a problem like giving a laptop to every child, where we know that it’s hard but we could produce a fairly good estimate of how many resources it would take.

Since we need to make choices about how to prioritise between work on different problems, this is clearly an important issue. We can prioritise using benefit-cost analysis, choosing the projects with the highest ratio of future benefits to present costs. When we don’t know how hard a problem is, though, our ignorance makes the size of the costs unclear, and so the analysis is harder to perform. Since we make decisions anyway, we are implicitly making some judgements about when work on these projects is worthwhile, but we may be making mistakes.

In this article, we’ll explore practical epistemology for dealing with these problems of unknown difficulty.

Definition

We will use a simplifying model for problems: that they have a critical threshold D such that the problem will be completely solved when D resources are expended, and not at all before that. We refer to this as the difficulty of the problem. After the fact the graph of success with resources will look something like this:

Of course the assumption is that we don’t know D. So our uncertainty about where the threshold is will smooth out the curve in expectation. Our expectation beforehand for success with resources will end up looking something like this:

Assuming a fixed difficulty is a simplification, since of course resources are not all homogenous, and we may get lucky or unlucky. I believe that this is a reasonable simplification, and that taking these considerations into account would not change our expectations by much, but I plan to explore this more carefully in a future post.

What kind of problems are we looking at?

We’re interested in one-off problems where we have a lot of uncertainty about the difficulty. That is, the kind of problem we only need to solve once (answering a question for the first time can be Herculean; answering it a second time is trivial), and which may not easily be placed in a reference class with other tasks of similar difficulty. Knowledge problems, as in research, are a central example: they boil down to finding the answer to a question. The category might also include trying to effect some systemic change (for example by political lobbying).

This is in contrast to engineering problems, which can be reduced down, roughly, to performing a known task many times. Then we get a fairly good picture of how the problem scales. Note that this includes some knowledge work: the “known task” may actually be different each time. For example, no two pages of text are quite the same to proofread, but we have a fairly good reference class, so we can estimate moderately well the difficulty of proofreading a page of text, and quite well the difficulty of proofreading a 100,000-word book (where the length helps to smooth out the variance in estimates of individual pages).

Some knowledge questions can naturally be broken up into smaller sub-questions. However these typically won’t be a tight enough class that we can use this to estimate the difficulty of the overall problem from the difficulty of the first few sub-questions. It may well be that one of the sub-questions carries essentially all of the difficulty, so making progress on the others is only a very small help.

Model from extreme ignorance

One approach to estimating the difficulty of a problem is to assume that we understand essentially nothing about it. If we are completely ignorant, we have no information about the scale of the difficulty, so we want a scale-free prior. This determines that the prior obeys a power law. Then, we update on the amount of resources we have already expended on the problem without success. Our posterior probability distribution for how many resources are required to solve the problem will then be a Pareto distribution. (Fallenstein and Mennen proposed this model for the difficulty of the problem of making a general-purpose artificial intelligence.)
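The update described above is easy to make concrete. Under a power-law (scale-free) prior on the difficulty D, conditioning on having spent z resources without success gives a Pareto posterior with scale z. A minimal sketch, where the shape parameter alpha is an assumed illustrative value (the next paragraph notes it is hard to pin down):

```python
# Sketch of the 'extreme ignorance' model: a scale-free (power-law) prior
# over the difficulty D, updated on having spent z resources without success.
# The posterior is a Pareto distribution with scale z and shape alpha.

def posterior_survival(d, z, alpha):
    """P(D > d | D > z) under a power-law prior: (z / d) ** alpha."""
    assert d >= z, "we already know the difficulty exceeds z"
    return (z / d) ** alpha

# Having spent 10 units of resources without success, the chance the problem
# needs more than 100 units in total (with an assumed alpha of 0.5):
p = posterior_survival(100, 10, alpha=0.5)   # ~0.32
```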

There is still a question about the shape parameter of the Pareto distribution, which governs how thick the tail is. It is hard to see how to infer this from a priori reasons, but we might hope to estimate it by generalising from a very broad class of problems people have successfully solved in the past.

This idealised case is a good starting point, but in actual cases, our estimate may be wider or narrower than this. Narrower if either we have some idea of a reasonable (if very approximate) reference class for the problem, or we have some idea of the rate of progress made towards the solution. For example, assuming a Pareto distribution implies that there’s always a nontrivial chance of solving the problem at any minute, and we may be confident that we are not that close to solving it. Broader because a Pareto distribution implies that the problem is certainly solvable, and some problems will turn out to be impossible.

This might lead people to criticise the idea of using a Pareto distribution. If they have enough extra information that they don’t think their beliefs represent a Pareto distribution, can we still say anything sensible?

Reasoning about broader classes of model

In the previous section, we looked at a very specific and explicit model. Now we take a step back. We assume that people will have complicated enough priors and enough minor sources of evidence that it will in practice be impossible to write down a true distribution for their beliefs. Instead we will reason about some properties that this true distribution should have.

The cases we are interested in are cases where we do not have a good idea of the order of magnitude of the difficulty of a task. This is an imprecise condition, but we might think of it as meaning something like:

There is no difficulty X such that we believe the probability of D lying between X and 10X is more than 30%.

Here the “30%” figure can be adjusted up for a less stringent requirement of uncertainty, or down for a more stringent one.

Now consider what our subjective probability distribution might look like, where difficulty lies on a logarithmic scale. Our high level of uncertainty will smooth things out, so it is likely to be a reasonably smooth curve. Unless we have specific distinct ideas for how the task is likely to be completed, this curve will probably be unimodal. Finally, since we are unsure even of the order of magnitude, the curve cannot be too tight on the log scale.

Note that this should be our prior subjective probability distribution: we are gauging how hard we would have thought it was before embarking on the project. We’ll discuss below how to update this in the light of information gained by working on it.

The distribution might look something like this:

In some cases it is probably worth trying to construct an explicit approximation of this curve. However, this could be quite labour-intensive, and we usually have uncertainty even about our uncertainty, so we will not be entirely confident with what we end up with.

Instead, we could ask what properties tend to hold for this kind of probability distribution. For example, one well-known phenomenon which is roughly true of these distributions but not all probability distributions is Benford’s law.

Approximating as locally log-uniform

It would sometimes be useful to be able to make a simple analytically tractable approximation to the curve. This could be faster to produce, and easily used in a wider range of further analyses than an explicit attempt to model the curve exactly.

As a candidate for this role, we propose working with the assumption that the distribution is locally flat. This corresponds to being log-uniform. The smoothness assumptions we made should mean that our curve is nowhere too far from flat. Moreover, it is a very easy assumption to work with, since it means that the expected returns scale logarithmically with the resources put in: in expectation, a doubling of the resources is equally good regardless of the starting point.

It is, unfortunately, never exactly true. Although our curves may be approximately flat, they cannot be everywhere flat — this can’t even give a probability distribution! But it may work reasonably as a model of local behaviour. If we want to turn it into a probability distribution, we can do this by estimating the plausible ranges of D and assuming it is uniform across this scale. In our example we would be approximating the blue curve by something like this red box:

Obviously in the example the red box is not a fantastic approximation. But nor is it a terrible one. Over the central range, it is never out from the true value by much more than a factor of 2. While crude, this could still represent a substantial improvement on the current state of some of our estimates. A big advantage is that it is easily analytically tractable, so it will be quick to work with. In the rest of this post we’ll explore the consequences of this assumption.

Places this might fail

In some circumstances, we might expect high uncertainty over difficulty without everywhere having local log-returns. A key example is if we have bounds on the difficulty at one or both ends.

For example, if we are interested in X, which comprises a task of radically unknown difficulty plus a repetitive and predictable part of difficulty 1000, then our distribution of beliefs about the difficulty of X will only include values above 1000, and may be quite clustered there (so not even approximately logarithmic returns). The behaviour in the positive tail might still be roughly logarithmic.

In the other direction, we may know that there is a slow and repetitive way to achieve X, with difficulty 100,000. We are unsure whether there could be a quicker way. In this case our distribution will be uncertain over difficulties up to around 100,000, then have a spike. This will give the reverse behaviour, with roughly logarithmic expected returns in the negative tail, and a different behaviour around the spike at the upper end of the distribution.

In some sense each of these is diverging from the idea that we are very ignorant about the difficulty of the problem, but it may be useful to see how the conclusions vary with the assumptions.

Implications for expected returns

What does this model tell us about the expected returns from putting resources into trying to solve the problem?

Under the assumption that the prior is locally log-uniform, the full value is realised over the width of the box in the diagram. This is w = log(y) – log(x), where x is the value at the start of the box (where the problem could first be plausibly solved), y is the value at the end of the box, and our logarithms are natural. Since it’s a probability distribution, the height of the box is 1/w.

For any z between x and y, the modelled chance of success from investing z resources is equal to the fraction of the box which has been covered by that point. That is:

(1) Chance of success before reaching z resources = log(z/x)/log(y/x).

So while we are in the relevant range, the chance of success is equal for any doubling of the total resources. We could say that we expect logarithmic returns on investing resources.
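Equation (1) and the equal-returns-per-doubling property are easy to check directly. This is a sketch of the box model with assumed illustrative bounds (x = 10, y = 100,000 resource units):

```python
import math

# Log-uniform ('box') model of problem difficulty between bounds x and y.
# The bounds are illustrative: the problem could plausibly be solved with
# anywhere from 10 to 100,000 units of resources.
x, y = 10.0, 100_000.0

def p_success_by(z):
    """Equation (1): chance the problem is solved by the time z resources
    have been spent, for x <= z <= y."""
    return math.log(z / x) / math.log(y / x)

# Each doubling of total resources adds the same chance of success:
gain_low = p_success_by(20) - p_success_by(10)       # doubling from 10 to 20
gain_high = p_success_by(8000) - p_success_by(4000)  # doubling 4000 to 8000
```

Both gains come out equal to log(2)/log(y/x), regardless of the starting point.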

Marginal returns

Sometimes of greater relevance to our decisions is the marginal chance of success from adding an extra unit of resources at z. This is given by the derivative of Equation (1):

(2) Chance of success from a marginal unit of resources at z = 1/(zw).

So far, we’ve just been looking at estimating the prior probabilities — before we start work on the problem. Of course when we start work we generally get more information. In particular, if we would have been able to recognise success, and we have invested z resources without observing success, then we learn that the difficulty is at least z. We must update our probability distribution to account for this. In some cases we will have relatively little information beyond the fact that we haven’t succeeded yet. In that case the update will just be to curtail the distribution to the left of z and renormalise, looking roughly like this:

Again the blue curve represents our true subjective probability distribution, and the red box represents a simple model approximating this. Now the simple model gives slightly higher estimated chance of success from an extra marginal unit of resources:

(3) Chance of success from an extra unit of resources after z = 1/(z(ln(y) - ln(z))).
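Equations (2) and (3) can be sketched together, again with assumed illustrative bounds for the plausible range of D. The only difference between them is that in (3) the box has been curtailed at z and renormalised, which can only raise the marginal estimate:

```python
import math

x, y = 10.0, 100_000.0            # assumed plausible range of the difficulty D
w = math.log(y) - math.log(x)     # width of the box on the log scale

def marginal_prior(z):
    """Equation (2): marginal chance of success per unit of resources at z,
    before any work has been done."""
    return 1.0 / (z * w)

def marginal_after(z):
    """Equation (3): the same marginal chance after z resources have been
    spent without success (box curtailed at z and renormalised)."""
    return 1.0 / (z * (math.log(y) - math.log(z)))

# Updating on failure so far narrows the remaining box, raising the estimate:
assert marginal_after(1000.0) > marginal_prior(1000.0)
```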

Of course in practice we often will update more. Even if we don’t have a good idea of how hard fusion is, we can reasonably assign close to zero probability that an extra $100 today will solve the problem today, because we can see enough to know that the solution won’t be found imminently. This looks like it might present problems for this approach. However, the truly decision-relevant question is about the counterfactual impact of extra resource investment. The region where we can see little chance of success has a much smaller effect on that calculation, which we discuss below.

Comparison with returns from a Pareto distribution

We mentioned that one natural model of such a process is as a Pareto distribution. If we have a Pareto distribution with shape parameter α, and we have so far invested z resources without success, then we get:

(4) Chance of success from an extra unit of resources = α/z.

This is broadly in line with equation (3). In both cases the key term is a factor of 1/z. In each case there is also an additional factor, representing roughly how hard the problem is. In the case of the log-linear box, this depends on estimating an upper bound for the difficulty of the problem; in the case of the Pareto distribution it is handled by the shape parameter. It may be easier to introspect and extract a sensible estimate for the width of the box than for the shape parameter, since it is couched more in terms that we naturally understand.
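The comparison can be made concrete: both formulas share the 1/z factor, so under either model doubling past investment halves the marginal return, and they differ only in the second factor. A sketch, with an assumed shape parameter and upper bound:

```python
import math

def marginal_pareto(z, alpha):
    """Equation (4): marginal chance of success per unit of resources after
    spending z without success, under a Pareto posterior with shape alpha."""
    return alpha / z

def marginal_box(z, y):
    """Equation (3): the log-uniform ('box') analogue, with upper bound y."""
    return 1.0 / (z * math.log(y / z))

# Both scale as 1/z: doubling the past investment halves the marginal return.
half_1 = marginal_pareto(2000, 0.5) / marginal_pareto(1000, 0.5)
half_2 = marginal_box(2000.0, 100_000.0) / marginal_box(1000.0, 100_000.0)
```

The first ratio is exactly 1/2; the second is close to it, pulled slightly up because the remaining box has also narrowed.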

Further work

In this post, we’ve just explored a simple model for the basic question of how likely success is at various stages. Of course it should not be used blindly, as you may often have more information than is incorporated into the model, but it represents a starting point if you don’t know where to begin, and it gives us something explicit which we can discuss, critique, and refine.

In future posts, I plan to:

  • Explore what happens in a field of related problems (such as a research field), and explain why we might expect to see logarithmic returns ex post as well as ex ante.
    • Look at some examples of this behaviour in the real world.
  • Examine the counterfactual impact of investing resources working on these problems, since this is the standard we should be using to prioritise.
  • Apply the framework to some questions of interest, with worked proof-of-concept calculations.
  • Consider what happens if we relax some of the assumptions or take different models.

Ben Kuhn on the effective altruist movement


Ben Kuhn is a data scientist and engineer at a small financial technology firm. He previously studied mathematics and computer science at Harvard, where he was also co-president of Harvard College Effective Altruism. He writes on effective altruism and other topics at his website.


Pablo: How did you become involved in the EA movement?

Ben: When I was a sophomore in high school (that’s age 15 for non-Americans), Peter Singer gave his The Life You Can Save talk at my high school. He went through his whole “child drowning in the pond” spiel and explained that we were morally obligated to give money to charities that helped those who were worse off than us. In particular, I think at that point he was recommending donating to Oxfam in a sort of Kantian way where you gave an amount of money such that if everyone gave the same percentage it would eliminate world poverty. My friends and I realized that there was no utilitarian reason to stop at that amount of money–you should just donate everything that you didn’t need to survive.

So, being not only sophomores but also sophomoric, we decided that since Prof. Singer didn’t live in a cardboard box and wear only burlap sacks, he must be a hypocrite and therefore not worth paying attention to.

Sometime in the intervening two years I ran across Yvain’s essay Efficient Charity: Do Unto Others and through it GiveWell. I think that was the point where I started to realize Singer might have been onto something. By my senior year (ages 17-18) I at least professed to believe pretty strongly in some version of effective altruism, although I think I hadn’t heard of the term yet. I wrote an essay on the subject in a publication that my writing class put together. It was anonymous (under the brilliant nom de plume of “Jenny Ross”) but somehow my classmates all figured out it was me.

The next big update happened during the spring of my first year of Harvard, when I started going to the Cambridge Less Wrong meetups and met Jeff and Julia. Through some chain of events they set me up with the folks who were then running Harvard High-Impact Philanthropy (which later became Harvard Effective Altruism). After that spring, almost everyone else involved in HHIP left and I ended up becoming president. At that point I guess I counted as “involved in the EA movement”, although things were still touch-and-go for a while until John Sturm came onto the scene and made HHIP get its act together and actually do things.

Pablo: In spite of being generally sympathetic to EA ideas, you have recently written a thorough critique of effective altruism.  I’d like to ask you a few questions about some of the objections you raise in that critical essay.  First, you have drawn a distinction between pretending to try and actually trying.  Can you tell us what you mean by this, and why do you claim that a lot of effective altruism can be summarized as “pretending to actually try”?

Ben: I’m not sure I can explain better than what I wrote in that post, but I’ll try to expand on it. For reference, here’s the excerpt that you referred to:

By way of clarification, consider a distinction between two senses of the word “trying”…. Let’s call them “actually trying” and “pretending to try”. Pretending to try to improve the world is something like responding to social pressure to improve the world by querying your brain for a thing which improves the world, taking the first search result and rolling with it. For example, for a while I thought that I would try to improve the world by developing computerized methods of checking informally-written proofs, thus allowing more scalable teaching of higher math, democratizing education, etc. Coincidentally, computer programming and higher math happened to be the two things that I was best at. This is pretending to try. Actually trying is looking at the things that improve the world, figuring out which one maximizes utility, and then doing that thing. For instance, I now run an effective altruist student organization at Harvard because I realized that even though I’m a comparatively bad leader and don’t enjoy it very much, it’s still very high-impact if I work hard enough at it. This isn’t to say that I’m actually trying yet, but I’ve gotten closer.

Most people say they want to improve the world. Some of them say this because they actually want to improve the world, and some of them say this because they want to be perceived as the kind of person who wants to improve the world. Of course, in reality, everyone is motivated by other people’s perceptions to some extent–the only question is by how much, and how closely other people are watching. But to simplify things let’s divide the world up into those two categories, “altruists” and “signallers.”

If you’re a signaler, what are you going to do? If you don’t try to improve the world at all, people will notice that you’re a hypocrite. On the other hand, improving the world takes lots of resources that you’d prefer to spend on other goals if possible. But fortunately, looking like you’re improving the world is easier than actually improving the world. Since people usually don’t do a lot of due diligence, the kind of improvements that signallers make tend to be ones with very good appearances and surface characteristics–like PlayPumps, water-pumping merry-go-rounds which initially appeared to be a clever and elegant way to solve the problem of water shortage in developing countries. PlayPumps got tons of money and celebrity endorsements, and their creators got lots of social rewards, even though the pumps turned out to be hideously expensive, massively inefficient, prone to breaking down, and basically a disaster in every way.

So in this oversimplified world, the EA observation that “charities vary in effectiveness by orders of magnitude” is explained by “charities” actually being two different things: one group optimizing for looking cool, and one group optimizing for actually doing good. A large part of effective altruism is realizing that signaling-charities (“pretending to try”) often don’t do very much good compared to altruist-charities.

(In reality, of course, everyone is driven by some amount of signalling and some amount of altruism, so these groups overlap substantially. And there are other motivations for running a charity, like being able to convince yourself that you’re doing good. So it gets messier, but I think the vastly oversimplified model above is a good illustration of where my point is coming from.)

Okay, so let’s move to the second paragraph of the post you referenced:

Using this distinction between pretending and actually trying, I would summarize a lot of effective altruism as “pretending to actually try”. As a social group, effective altruists have successfully noticed the pretending/actually-trying distinction. But they seem to have stopped there, assuming that knowing the difference between fake trying and actually trying translates into ability to actually try. Empirically, it most certainly doesn’t. A lot of effective altruists still end up satisficing—finding actions that are on their face acceptable under core EA standards and then picking those which seem appealing because of other essentially random factors. This is more likely to converge on good actions than what society does by default, because the principles are better than society’s default principles. Nevertheless, it fails to make much progress over what is directly obvious from the core EA principles. As a result, although “doing effective altruism” feels like truth-seeking, it often ends up being just a more credible way to pretend to try.

The observation I’m making here is roughly that EA seems not to have switched entirely to doing good for altruistic rather than signaling reasons. It’s more like we’ve switched to signaling that we’re doing good for altruistic rather than signaling reasons. In other words, the motivation didn’t switch from “looking good to outsiders” to “actually being good”–it switched from “looking good to outsiders” to “looking good to the EA movement.”

Now, the EA movement is way better than random outsiders at distinguishing between things with good surface characteristics and things that are actually helpful, so the latter criterion is much stricter than the former, and probably leads to much more good being done per dollar. (For instance, I doubt the EA community would ever endorse something like PlayPumps.) But, at least at the time of writing that post, I saw a lot of behavior that seemed to be based on finding something pleasant and with good surface appearances rather than finding the thing that optimized utility–for instance, donating to causes without a particularly good case that they were better than saving or picking career options that seemed decent-but-not-great from an EA perspective. That’s the source of the phrase “pretending to actually try”–the signaling isn’t going away, it’s just moving up a level in the hierarchy, to signaling that you don’t care about signaling.

Looking back on that piece, I think “pretending to actually try” is still a problem, but my intuition is now that it’s probably not huge in the scheme of things. I’m not quite sure why that is, but here are some arguments against it being very bad that have occurred to me:

  • It’s probably somewhat less prevalent than I initially thought, because the EAs making weird-seeming decisions may be making them for reasons that aren’t transparent to me and that get left out by the typical EA analysis. The typical EA analysis tends to be a 50,000-foot average-case argument that can easily be invalidated by particular personal factors.
  • As Katja Grace points out, encouraging pretending to really try might be optimal from a movement-building perspective, inasmuch as it’s somewhat inescapable and still leads to pretty good results.
  • I probably overestimated the extent to which motivated/socially-pressured life choices are bad, for a couple reasons. I discounted the benefit of having people do a diversity of things, even if the way they came to be doing those things wasn’t purely rational. I also discounted the cost of doing something EA tells you to do instead of something you also want to do.
  • For instance, suppose for the sake of argument that there’s a pretty strong EA case that politics isn’t very good (I know this isn’t actually true). It’s probably good for marginal EAs to be dissuaded from going into politics by this, but I think it would still be bad for every single EA to be dissuaded from going into politics, for two reasons. First, the arguments against politics might turn out to be wrong, and having a few people in politics hedges against that case. Second, it’s much easier to excel at something you’re motivated by, and the category of “people who are excellent at what they do” is probably as important to the EA movement as “people doing job X” for most X.

I also just haven’t noticed as much pretending-stuff going on in the last few months, so maybe we’re just getting better at avoiding it (or maybe I’m getting worse at noticing it). Anyway, I still definitely think there’s pretending-to-actually-try going on, but I don’t think it’s a huge problem.

Pablo: In another section of that critique, you express surprise at the fact that so many effective altruists donate to global health causes now.  Why would you expect EAs to use their money in other ways–whether donating now to other causes, or donating later–and what explains, in your opinion, this focus on causes for which we have relatively good data?

Ben: I’m no longer sure enough of where people’s donations are going to say with certainty that too much is going to global health. My update here comes from a combination of being overconfident when I wrote the piece, and what looks like an increase in waiting to donate shortly after I wrote it. The latter was probably due in large part to AMF’s delisting and perhaps the precedent set by GiveWell employees, many of whom waited last year (though others argued against it). (Incidentally, I’m excited about the projects going on to make this more transparent, e.g. the questions on the survey about giving!)

The giving now vs. later debate has been ably summarized by Julia Wise on the EA blog. My sense from reading various arguments for both sides is that I more often see bad arguments for giving now. There are definitely good arguments for giving at least some money now, but on balance I suspect I’d like to see more saving. Again, though, I don’t have a great idea of what people’s donation behavior actually is; my samples could easily be biased.

I think my strongest impression right now is that I suspect we should be exploring more different ways to use our donations. For instance, some people who are earning to give have experimented with funding people to do independent research, which was a pretty cool idea. Off the top of my head, some other things we could try include scholarships, essay contest prizes, career assistance for other EAs, etc. In general it seems like there are tons of ways to use money to improve the world, many of which haven’t been explored by GiveWell or other evaluators and many of which don’t even fall in the category of things they care about (because they’re too small or too early-stage or something), but we should still be able to do something about them.

Pablo: In the concluding section of your essay, you propose that self-awareness be added to the list of principles that define effective altruism. Any thoughts on how to make the EA movement more self-aware?

Ben: One thing that I like to do is think about what our blind spots are. I think it’s pretty easy to look at all the stuff that is obviously a bad idea from an EA point of view, and think that our main problem is getting people “on board” (or even “getting people to admit they’re wrong”) so that they stop pursuing obviously bad ideas. And that’s certainly helpful, but we also have a ways to go just in terms of figuring things out.

For instance, here’s my current list of blind spots–areas where I wish there were a lot more thinking and idea-spreading going on than there currently is:

  • Being a good community. The EA community is already having occasional growing pains, and this is only going to get worse as we gain steam, e.g. with Will MacAskill’s upcoming book. And beyond that, I think that ways of making groups more effective (as opposed to individuals) have a lot of promise for making the movement better at what we do. Many, many intellectual groups fail to accomplish their goals for basically silly reasons, while seemingly much worse groups do much better on this dimension. It seems like there’s no intrinsic reason we should be worse than, say, Mormons at building an effective community, but we’re clearly not there yet. I think there’s absolutely huge value in getting better at this, yet almost no one is putting in a serious concerted effort.
  • Knowing history. Probably as a result of EA’s roots in math/philosophy, my impression is that our average level of historical informedness is pretty low, and that this makes us miss some important pattern-matches and cues. For instance, I think a better knowledge of history could help us think about capacity-building interventions, policy advocacy, and community building.
  • Fostering more intellectual diversity. Again because of the math/philosophy/utilitarianism thing, we have a massive problem with intellectual monoculture. Of my friends, the ones I now most enjoy talking about altruism with are largely the ones who associate least with the broader EA community, because they have more interesting and novel perspectives.
  • Finding individual effective opportunities. I suspect that there’s a lot of room for good EA opportunities that GiveWell hasn’t picked up on because they’re specific to a few people at a particular time. Some interesting stuff has been done in this vein in the past, like funding small EA-related experiments, funding people to do independent secondary research, or giving loans to other EAs investing in themselves (at least I believe this has been done). But I’m not sure if most people are adequately on the lookout for this kind of opportunity.

(Since it’s not fair to say “we need more X” without specifying how we get it, I should probably also include at least one anti-blind spot, something I think we should be spending fewer resources on at the margin: object-level donations to e.g. global health causes. I feel like we may be hitting diminishing returns here. Probably donating some is important for signalling reasons, but I think it doesn’t have a very high naive expected value right now.)

Pablo: Finally, what are your plans for the mid-term future?  What EA-relevant activities will you engage in over the next few years, and what sort of impact do you expect to have?

Ben: A while ago I did some reflecting and realized that most of the things I did that I was most happy about were pretty much unplanned–they happened not because I carefully thought things through and decided that they were the best way to achieve some goal, but because they intuitively seemed like a cool thing to do. (Things in this category include starting a blog, getting involved in the EA/rationality communities, running Harvard Effective Altruism, getting my current job, etc.) As a result, I don’t really have “plans for the mid-term future” per se. Instead, I typically make decisions based on intuitions/heuristics about what will lead to the best opportunities later on, without precisely knowing (or even knowing at all, often) what form those opportunities will take.

So I can’t tell you what I’ll be doing for the next few years–only that it will probably follow some of my general intuitions and heuristics:

  • Do lots of things. The more things I do, the more I increase my “luck surface area” to find awesome opportunities.
  • Do a few things really well. The point of this heuristic is hopefully obvious.
  • Do things that other people aren’t doing–or more accurately, things that not enough people are doing relative to how useful or important they are. My effort is most likely to make a difference in an area that is relatively under-resourced.

I’d like to take a moment here to plug the conference call on altruistic career choice that Holden Karnofsky of GiveWell held, which makes some great specific points along these lines.

Anyway, that’s my long-winded answer to the first part of this question. As far as EA-relevant activities and impacts, all the same caveats apply as above, but I can at least go over some things I’m currently interested in:

  • Now that I’m employed full-time, I need to start thinking much harder about where exactly I want to give: both what causes seem best, and which interventions within those causes. I actually currently don’t have much of a view on what I would do with more unrestricted funds.
  • Related to the point above about self-awareness, I’m interested in learning some more EA-relevant history–how previous social movements have worked out, how well various capacity-building interventions have worked, more about policy and the various systems that philanthropy comes into contact with, etc.
  • I’m interested to see to what extent the success of Harvard Effective Altruism can be sustained at Harvard and replicated at other universities.

I also have some more speculative/gestational interests–I’m keeping my eye on these, but don’t even have concrete next steps in mind:

  • I think there may be under-investment in healthy EA community dynamics, which could head off common failure modes like unfriendliness, resistance to new ideas, groupthink, etc.–though I can’t say for sure because I don’t have a great big-picture perspective of the EA community.
  • I’m also interested in generally adding more intellectual/epistemic diversity to EA–we have something of a monoculture problem right now. Anecdotally, there are a number of people who I think would have a really awesome perspective on many problems that we face, but who get turned off of the community for one reason or another.

Crossposted from Pablo’s blog

Audio recordings from Good Done Right available online


This July saw the first academic conference on effective altruism. The three-day event took place at All Souls College, one of the constituent colleges of the University of Oxford. The conference featured a diverse range of speakers addressing issues related to effective altruism in a shared setting. It was a fantastic opportunity to share insights and ideas from some of the best minds working on these issues.

I’m very pleased to announce that audio recordings from most of the talks are now available on the conference website, alongside speakers’ slides (where applicable). I’m very grateful to all of the participants for their fantastic presentations, and to All Souls College and the Centre for Effective Altruism for supporting the conference.

Crossposted from the Giving What We Can blog

‘Special Projects’ at the Centre for Effective Altruism


This is a short overview of a talk that I gave alongside William MacAskill and Owen Cotton-Barratt at the Centre for Effective Altruism Weekend Away last weekend.  This post does not contain new information for people familiar with the Centre for Effective Altruism’s work.  

New projects at the Centre for Effective Altruism are incubated within the Special Projects team.  We carry out a number of activities before choosing which ones to scale up.  The projects that we are currently working on are listed below.

The Global Priorities Project is a joint research initiative between the Future of Humanity Institute at the University of Oxford and the Centre for Effective Altruism.  It attempts to prioritise between the pressing problems currently facing the world in order to establish in which areas we might have the most impact.  You can read more about the project here.

Through the Global Priorities Project we are also engaged in policy advising for the UK Government.  Our first report to be published under this initiative is on unprecedented technological risk.  Our team regularly visits Government departments and No. 10 Downing Street to discuss policy proposals that we are developing as part of this work.

We are also scaling up our effective altruism outreach.  As part of this work we are developing EffectiveAltruism.org into a landing page for people new to effective altruism.  We are also developing outreach activities to coincide with the release of multiple books on effective altruism in 2015, including one by our co-founder William MacAskill, which will be published by Penguin in the USA and by Guardian Faber (the publishing house of the national newspaper) in the UK.

We have also launched Effective Altruism Ventures, a commercial company that will hold the rights to William MacAskill’s upcoming book and will also engage in outreach activities related to effective altruism.  This company is not part of the Centre for Effective Altruism.

If you have any questions about any of these projects, please do not hesitate to contact me at firstname.lastname@centreforeffectivealtruism.org or in the comments below.

The timing of labour aimed at reducing existential risk


Crossposted from the Global Priorities Project

Work towards reducing existential risk is likely to happen over a timescale of decades. For many parts of this work, the benefits of that labour are greatly affected by when it happens. This has a large effect when it comes to strategic thinking about what to do now in order to best help the overall existential risk reduction effort. I look at the effects of nearsightedness, course setting, self-improvement, growth, and serial depth, showing that there are competing considerations which make some parts of labour particularly valuable earlier, while others are more valuable later on. We can thus improve our overall efforts by encouraging more meta-level work on course setting, self-improvement, and growth over the next decade, with more of a focus on the object-level research on specific risks to come in decades beyond that.

Nearsightedness

Suppose someone considers AI to be the largest source of existential risk, and so spends a decade working on approaches to make self-improving AI safer. It might later become clear that AI was not the most critical area to worry about, or that this part of AI was not the most critical part, or that this work was going to get done anyway by mainstream AI research, or that working on policy to regulate research on AI was more important than working on AI. In any of these cases she wasted some of the value of her work by doing it now. She couldn’t be faulted for lack of omniscience, but she could be faulted for leaving herself unnecessarily at the mercy of bad luck. She could have achieved more by doing her work later, when she had a better idea of what was the most important thing to do.

We are nearsighted with respect to time. The further away in time something is, the harder it is to perceive its shape: its form, its likelihood, the best ways to get purchase on it. This means that work done now on avoiding threats in the far future can be considerably less valuable than the same amount of work done later on. The extra information we have when the threat is up close lets us more accurately tailor our efforts to overcome it.

Other things being equal, this suggests that a given unit of labour directed at reducing existential risk is worth more the later in time it comes.

Course setting, self-improvement & growth

As it happens, other things are not equal. There are at least three major effects which can make earlier labour matter more.

The first of these is if it helps to change course. If we are moving steadily in the wrong direction, we would do well to change our course, and this has a larger benefit the earlier we do so. For example, perhaps effective altruists are building up large resources in terms of specialist labour directed at combatting a particular existential risk, when they should be focusing on more general purpose labour. Switching to the superior course sooner matters more, so efforts to determine the better course and to switch onto it matter more the earlier they happen.

The second is if labour can be used for self-improvement. For example, if you are going to work to get a university degree, it makes sense to do this earlier in your career rather than later as there is more time to be using the additional skills. Education and training, both formal and informal, are major examples of self-improvement. Better time management is another, and so is gaining political or other influence. However this category only includes things that create a lasting improvement to your capacities and that require only a small upkeep. We can also think of self-improvement for an organisation. If there is benefit to be had from improved organisational efficiency, it is generally better to get this sooner. A particularly important form is lowering the risk of the organisation or movement collapsing, or cutting off its potential to grow.

The third is if the labour can be used to increase the amount of labour we have later. There are many ways this could happen, several of which give exponential growth. A simple example is investment. An early hour of labour could be used to gain funds which are then invested. If they are invested in a bank or the stock market, one could expect a few percent real return, letting you buy twice as much labour two or three decades later. If they are invested in raising funds through other means (such as a fundraising campaign) then you might be able to achieve a faster rate of growth, though probably only over a limited number of years until you are using a significant fraction of the easy opportunities.
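The "twice as much labour two or three decades later" figure above is just compound interest. As a rough check (the real-return rates below are illustrative assumptions, not figures from the post), a minimal sketch of the doubling time at a few percent real return:

```python
# Rough doubling-time check: how long does an investment take to
# double at a given annual real return? The rates are illustrative.
import math

def doubling_time(real_rate: float) -> float:
    """Years for invested funds to double at the given annual real return."""
    return math.log(2) / math.log(1 + real_rate)

for rate in (0.02, 0.03, 0.05):
    print(f"{rate:.0%} real return -> doubles in {doubling_time(rate):.1f} years")
```

At 2-3% real return the doubling time comes out to roughly 23-35 years, matching the "two or three decades" in the text.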

A very important example of growth is movement building: encouraging other people to dedicate part of their own labour or resources to the common cause, part of which will involve more movement building. This will typically give an exponential improvement with the potential for double digit percentage growth until the most easily reached or naturally interested people have become part of the movement, at which point it will start to plateau. An extra hour of labour spent on movement building early on could very well produce a hundred extra hours of labour to be spent later. Note that there might be strong reasons not to build a movement as quickly as possible: rapid growth could involve lowering the movement’s signal-to-noise ratio, or changing its core values, or making it more likely to collapse, and this would have to be balanced against the benefits of growth sooner.
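One simple way to picture this grow-then-plateau dynamic is a logistic curve. In the sketch below, every parameter (starting size, growth rate, movement capacity) is an illustrative assumption rather than an estimate:

```python
# Logistic sketch of movement building: roughly exponential growth at
# first, plateauing once the most easily reached people have joined.
# All parameters are illustrative assumptions, not estimates.

def grow(members: float, rate: float, capacity: float) -> float:
    """Advance one year of logistic growth toward a fixed capacity."""
    return members + rate * members * (1 - members / capacity)

members = 100.0
for year in range(40):
    members = grow(members, rate=0.3, capacity=10_000.0)

# Early on growth runs at close to 30% per year; after a few decades
# the movement has nearly saturated its capacity and growth stalls.
print(f"members after 40 years: {members:.0f}")
```

The same machinery illustrates the point in the next paragraph: before the plateau the process looks like growth, after it the gain is better thought of as a one-off improvement in capacity.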

If the growth is exponential for a while but will spend a lot of time stuck at a plateau, it might be better in the long term to think of it like self-improvement. An organisation might have been able to raise $10,000 of funds per year after costs before the improvement, and $1,000,000 per year afterwards — only before it hits the plateau does the process have the exponential structure characteristic of growth.

Finally, there is a matter of serial depth. Some things require a long succession of stages, each of which must be complete before the next begins. If you are building a skyscraper, you will need to build the structure for one story before you can build the structure for the next. You will therefore want to allow enough time for each of these stages to be completed and might need to have some people start building soon. Similarly, if a lot of novel and deep research needs to be done to avoid a risk, this might involve such a long pipeline that it could be worth starting it sooner to avoid the diminishing marginal returns that might come from labour applied in parallel. This effect is fairly common in computation and labour dynamics (see The Mythical Man-Month), but it is the factor that I am least certain of here. We obviously shouldn’t hoard research labour (or other resources) until the last possible year, and so there is a reason based on serial depth to do some of that research earlier. But it isn’t clear how many years ahead of time it needs to start getting allocated (examples from the business literature seem to have a time scale of a couple of years at most) or how this compares to the downsides of accidentally working on the wrong problem.

Consequences

We have seen that nearsightedness can provide a reason to delay labour, while course setting, self-improvement, growth, and serial depth provide reasons to use labour sooner. In different cases, the relative weights of these reasons will change. The creation of general purpose resources such as political influence, advocates for the cause, money, or earning potential, is especially resistant to the nearsightedness problem, as they have more flexibility to be applied to whatever the most important final steps happen to be. Creating general purpose resources, or doing course setting, self-improvement, or growth are thus comparatively better to do in the earlier times. Direct work on the cause is comparatively better to do later on (with a caveat about allowing enough time for the required serial depth).

In the case of existential risk, I think that many of the percentage points of total existential risk lie decades or more in the future. There is quite plausibly more existential risk in the 22nd century than in the 21st. For AI risk, in the recent FHI survey of 174 experts, the median estimate for when there would be a 50% chance of reaching roughly human-level AI was 2040. For the subgroup of those who are part of the ‘Top 100’ researchers in AI, it was 2050. This gives something like 25 to 35 years before we think most of this risk will occur. That is a long time and will produce a large nearsightedness problem for conducting specific research now and a large potential benefit for course setting, self-improvement, and growth. Given a portfolio of labour to reduce risk over that time, it is particularly important to think about moving types of labour towards the times where they have a comparative advantage. If we are trying to convince others to use their careers to help reduce this risk, the best career advice might change over the coming decades from help with movement building or course setting, to accumulating more flexible resources, to doing specialist technical work.

The temporal location of a unit of labour can change its value by a great deal. It is quite plausible that due to nearsightedness, doing specific research now could have less than a tenth the expected value of doing it later, since it could so easily be on the wrong risk, or the wrong way of addressing the risk, or would have been done anyway, or could have been done more easily using tools people later build etc. It is also quite plausible that using labour to produce growth now, or to point us in a better direction, could produce ten times as much value. It is thus pivotal to think carefully about when we want to have different kinds of labour.

I think that this overall picture is right and important. However, I should add some caveats. We might need to do some specialist research early on in order to gain information about whether the risk is credible or which parts to focus on, to better help us with course setting. Or we might need to do research early in order to give research on risk reduction enough academic credibility to attract a wealth of mainstream academic attention, thereby achieving vast growth in terms of the labour that will be spent on the research in the future. Some early object level research will also help with early fundraising and movement building — if things remain too abstract for a long time, it would be extremely difficult to maintain a movement. But in these examples, the overall picture is the same. If we want to do early object-level research, it is because of its instrumental effects on course setting, self-improvement, and growth.

The writing of this document and the thought that preceded it are an example of course setting: trying to significantly improve the value of the long-term effort in existential risk reduction by changing the direction we head in. I think there are considerable gains here and as with other course setting work, it is typically good to do it sooner. I’ve tried to outline the major systematic effects that make the value of our labour vary greatly with time, and to present them qualitatively. But perhaps there is a major effect I’ve missed, or perhaps some big gains by using quantitative models. I think that more research on this would be very valuable.

On ‘causes’


Crossposted from the Global Priorities Project

This post has two distinct parts. The first explores the meanings that have been attached to the term ‘cause’, and suggests my preferred usage. The second makes use of these distinctions to clarify the claims I made in a recent post on the long-term effects of animal welfare improvements.

On the meaning of ‘cause’

There are at least two distinct concepts which could reasonably be labelled a ‘cause’:

  1. An intervention area, i.e. a cluster of interventions which are related and share some characteristics. It is often the case that improving our understanding of some intervention in this area will improve our understanding of the whole area. We can view different-sized clusters as broader or narrower causes in this sense. GiveWell has promoted this meaning. Examples might include: interventions to improve health in developing countries; interventions giving out leaflets to change behaviour.
  2. A goal, something we might devote resources towards optimising. Some causes in this sense might be useful instrumental sub-goals for other causes. For example, “minimise existential risk” may be a useful instrumental goal for the cause “make the long-term future flourish”. When 80,000 Hours discussed reasons to select a cause, they didn’t explicitly use this meaning, but many of their arguments relate to it. A cause of this type may be very close to one of the first type, but defined by its goal rather than its methods: for example, maximising the number of quality-adjusted life-years lived in developing countries. Similarly, one could think of a cause as a problem one can work towards solving.

These two characteristics often appear together, so we don’t always need to distinguish. But they can come apart: we can have a goal without a good idea of what intervention area will best support that goal. On the other hand, one intervention area could be worthwhile for multiple different goals, and it may not be apparent what goal an intervention is supposed to be targeting. Below I explain how these concepts can diverge substantially.

Which is the better usage? Or should we be using the word for both meanings? (Indeed there may be other possible meanings, such as defining a cause by its beneficiaries, but I think these are the two most natural.) I am not sure about this and would be interested in comments from others towards finding the most natural community norm. Key questions are whether we need to distinguish the concepts, and if we do then which is more frequently the useful one to think of, and what other names fit them well.

My personal inclination is that when the meanings coincide of course we can use the one word, and that when they come apart it is better to use the second. This is because I think conversations about choosing a cause are generally concerned with the second, and because I think that “intervention area” is a good alternate term for the first meaning, while we lack such good alternatives for the second.

Conclusions about animals

In a recent post I discussed why the long-term effects of animal welfare improvements in themselves are probably small. A question we danced around in the comments is whether this meant that animal welfare was not the best cause. Some felt it did not, because of various plausible routes to impact from animal welfare interventions. I was unsure because the argument did appear to show this, but the rebuttals were also compelling.

My confusion stemmed, at least in part, from the term ‘cause’ being overloaded. Now that I see that more clearly, I can explain exactly what I am and am not claiming.

In that post, I contrasted human welfare improvements, which have many significant indirect and long-run effects, with animal welfare improvements, which appear not to. That is not to say that interventions which improve animal welfare do not have these large long-run effects, but that the long-run effects of such interventions are enacted via shifts in the views of humans rather than directly via the welfare improvement.

I believe that the appropriate conclusion is that “improve animal welfare” is extremely unlikely to be the best simple proxy for the goal “make the long-term future flourish”. In particular, it is likely dominated by the proxy “increase empathy”. So we can say with confidence that improving animal welfare is not the best cause in the second sense (whereas it may still be a good intervention area). In contrast, we do not have similarly strong reasons to think “improve human welfare” is definitely not the best approach.

Two things I am not claiming:

  • That improving human welfare is a better instrumental sub-goal for improving the long-term future than improving animal welfare.
  • That interventions which improve animal welfare are not among the best available, if they also have other effects.

If you are not persuaded that it’s worth optimising for the long-term rather than the short-term, the argument won’t be convincing. If you are, though, I think you should not adopt animal welfare as a cause in the second sense. I am not arguing against ‘increasing empathy’ as possibly the top goal we can target (although I plan to look more deeply into making comparisons between this and other goals), and it may be that ‘increase vegetarianism’ is a useful way to increase empathy. But we should keep an open mind, and if we adopt ‘increasing empathy’ as a goal we should look for the best ways to do this, whether or not they relate to animal welfare.

Will we eventually be able to colonize other stars? Notes from a preliminary review


Crossposted from the Global Priorities Project

Summary

I investigated this question because of its potential relevance to existential risk and the long-term future more generally. There are a limited number of books and scientific papers on the topic and the core questions are generally not regarded as resolved, but the people who seem most informed about the issue generally believe that space colonization will eventually be possible. I found no books or scientific papers arguing for in-principle infeasibility, and believe I would have found important ones if they existed. The blog posts and journalistic pieces arguing for the infeasibility of space colonization are largely unconvincing due to lack of depth and failure to engage with relevant counterarguments.

The potential obstacles to space colonization include: very large energy requirements, health and reproductive challenges from microgravity and cosmic radiation, short human lifespans in comparison with great distances for interstellar travel, maintaining a minimal level of genetic diversity, finding a hospitable target, substantial scale requirements for building another civilization, economic challenges due to large costs and delayed returns, and potential political resistance. Each of these obstacles has various proposed solutions and/or arguments that the problem is not insurmountable. Many of these obstacles would be easier to overcome given potential advances in AI, robotics, manufacturing, and propulsion technology.

Deeper investigation of this topic could address the feasibility of the relevant advances in AI, robotics, manufacturing, and propulsion technology. My intuition is that such investigation would lend further support to the conclusion that interstellar colonization will eventually be possible.

Note: This investigation relied significantly on interviews and Wikipedia articles because I’m unfamiliar with the area, there are not many very authoritative sources, and I was trying to review this question quickly.

Why did I look into this question?

  • If people are likely to eventually colonize space, then it increases the potential scale and duration of civilization. This could affect arguments about the importance of trying to affect the long-run future of civilization. For example, Nick Bostrom[1] and I[2] have appealed to the possibility of interstellar space colonization in our arguments for the importance of reducing existential risk or otherwise affecting distant future generations.
  • In correspondence, some people have tried to resist arguments for the extreme importance of the distant future by rejecting the claim that there is a reasonable chance of colonizing space in the future. I don’t believe these arguments essentially depend on the feasibility of space colonization. However, I believed that the evidence for the feasibility of space colonization was strong enough to carry that argument, and I wanted to test that assumption.

Clarifying the question

Will we eventually be able to colonize other stars? I focused on a version of this question assuming unlimited time horizons and “business as usual” (no major catastrophes or unexpected reversals of global trends), with the aim of establishing settlements which could function independently of Earth-based civilization. My version of “business as usual” could be disputed, especially by people who believe ecological constraints may spell an end to innovation and economic development in the coming century or two, leaving insufficient time for the technological developments necessary to make a project of this kind feasible. Apart from noting the potential problem, that is a discussion best left for another investigation.

Range of current opinion on this topic

A recent, lengthy report by the National Research Council discussing rationales and approaches stated that it currently isn’t known whether we’ll eventually be able to create self-sufficient off-Earth settlements,[3] but only two of its 285 pages were devoted to this issue. People who have done the most in-depth work on the feasibility of space colonization generally believe it is possible. For instance, I found several books and academic papers published on this topic, and they are all written by people who think it is possible,[4] but I found no books or academic papers arguing that interstellar colonization is impossible. I discuss below whether this is due to a selection effect. I found some journalistic pieces and magazine articles arguing for the impossibility of space colonization because of worries about microgravity[5] or getting enough energy,[6] but I generally found them very unconvincing due to lack of depth and failure to discuss obvious counterarguments. For example, the articles arguing from the challenges of microgravity to the impossibility of space colonization failed to consider the most obvious solution: rotating the spaceship to induce artificial gravity. For the most part, the articles arguing from energy/propulsion challenges didn’t discuss any specific problems with existing proposals to get enough energy using alternatives to today’s chemical rockets, such as nuclear fission, fusion, beam propulsion, solar sails, or antimatter (more on this below). An exception was a brief essay written for an Edge.org competition by Ed Regis, which did quickly discuss some of these potential propulsion methods[7] (in addition to several of the other challenges I discuss below). However, Regis makes the strange choice of highlighting the most speculative and unlikely propulsion methods, and does not address fission or fusion. It’s possible that the scientists quoted have more developed arguments that the journalists have failed to communicate, which would suggest that better critiques along these lines are possible, but it isn’t clear to me how such critiques would be developed based on what has been said.

Of the four people I interviewed on this topic, only one (Charles Stross) was pessimistic about our prospects for colonizing the stars. I sought out Charles Stross specifically because he had been referenced by other sources as a notable critic of the feasibility of space colonization[8] and because his essay on the topic had the best pessimistic arguments I had seen. From my perspective, his most compelling concerns were about motivation to try in light of large expenses and scale-related challenges for getting a civilization off the ground (more on these below). However, even Stross believes that, if we develop advanced AI/robotics, we are likely to colonize the stars. I asked the four people I interviewed for more references from people who believe we can’t colonize the stars, and their comments[9] suggest I am not missing any major critiques of the feasibility of interstellar colonization. Comments from these interviews also suggested to me that people publishing work on this topic generally believe interstellar colonization is feasible.[10]

Perhaps only people who think space colonization is likely to be feasible get excited enough to write books and careful scientific articles on the subject. But, for a few reasons listed below, it seems this could only partly explain the distribution of informed opinion on the feasibility of space colonization. First, according to Geoffrey Landis, informed experts in nearby fields, such as researchers at NASA, generally haven’t thought very much about whether interstellar colonization is possible,[11] and don’t have opinions one way or the other. If there were good arguments for the infeasibility of space colonization that just weren’t being considered by the people I spoke with, I would expect that this would be the group that was aware of them. Second, I have cited a reasonable number of journalistic articles and blog posts arguing for the infeasibility of space colonization, and these articles are surprisingly bad if you think some informed people have convincing arguments for skepticism about the feasibility of space colonization. Third, some claims about interstellar colonization–e.g. by Hawking–have received significant interest from journalists and the public, and if skeptics had other convincing arguments that we probably wouldn’t ever be able to colonize the stars, I would expect someone to make these arguments more generally known.

What are the potential obstacles to interstellar colonization?

Big picture, interstellar colonization requires success at each of the following steps (though not necessarily on the first attempt):

  1. Attempt to colonize space
  2. Get everything you need to build a civilization into a spaceship
  3. Get the spaceship going fast in the right direction
  4. Have enough of what you need to build a civilization survive/remain intact during the voyage
  5. Slow the spaceship down when you’re getting close enough to your target location
  6. Build a civilization at your target location

This review will discuss these steps out of order, because doing so helps clarify what we would need to bring on a trip of this kind and what motivation would be needed to make the attempt.

Speeding up and slowing down

Proxima Centauri—the closest star outside our solar system—is 4.2 light years away.[12] Currently, Voyager 1 is moving away from the sun at 17 km/s, faster than any other human-made object.[13] At its current speed, it would take over 70,000 years to travel that distance (though it isn’t going in that direction).[14] The voyage would take decades even at a significant fraction of the speed of light. The longer the trip, the harder it is to ensure that all critical parts of the system (including passengers or descendants of passengers) survive the journey. These issues are discussed in the next sections. This section will focus on propulsion methods and energy requirements.
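The quoted travel-time figure follows from simple division; here is a quick check (assuming roughly 9.46 × 10^12 km per light year, a constant not stated in the text above):

```python
# Back-of-the-envelope check of the Voyager travel-time figure.
LIGHT_YEAR_KM = 9.461e12            # kilometres per light year (assumed value)
distance_km = 4.2 * LIGHT_YEAR_KM   # distance to Proxima Centauri
speed_km_s = 17                     # Voyager 1's speed away from the Sun

seconds_per_year = 365.25 * 24 * 3600
years = distance_km / speed_km_s / seconds_per_year
print(f"{years:,.0f} years")        # roughly 74,000 years
```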

Charles Stross estimated that it would take at least 10^18 joules of energy to accelerate a 2000 kg spacecraft to 10% of the speed of light, and an equal amount to slow it down—making generous assumptions such as 100% efficient energy conversion and no reaction mass. This was equivalent to five days of human civilization’s total electrical energy production in 2007,[15] or about one day of civilization’s whole energy production.[16] However, there has been a long history of exponential growth in energy production and economic productivity.[17] If these trends continue long enough, interstellar colonization will become much more affordable.[18]
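Stross’s figure is just the non-relativistic kinetic energy of the craft; a minimal sketch of the arithmetic, under the same generous assumptions (a 2000 kg craft, 100% efficient conversion, no reaction mass):

```python
# Kinetic energy required to accelerate a small craft to 10% of light speed.
mass_kg = 2000
speed_m_s = 0.10 * 3.0e8            # 10% of the speed of light
kinetic_energy_j = 0.5 * mass_kg * speed_m_s**2
print(f"{kinetic_energy_j:.1e} J")  # ~9e17 J, i.e. on the order of 10^18 J
```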

It might be objected that there are some physical reasons to be skeptical that this kind of exponential growth in energy output will continue for long enough to carry this argument. For example, Tom Murphy has argued—based on thermodynamic principles—that hundreds of years of increases in energy production at this exponential rate would have absurd consequences, such as raising the Earth’s surface temperature to the boiling point of water.[19] However, Murphy’s calculations suggest that it would be possible to have 100 times as much energy output on Earth as we currently do without substantially affecting Earth’s temperature,[20] which would make the energy cost of an interstellar mission not unreasonable on assumptions like Stross’s. More importantly though, an interstellar mission could be launched from interplanetary space, substantially weakening the original line of argument.

Some proposed propulsion technologies—such as nuclear fission, fusion, anti-matter, beam propulsion, and solar sails—might address these challenges. I encountered disagreement about whether these proposed technologies were known to be feasible or not. Robert Zubrin claimed in our interview that there were specific technologies which were known to be physically possible, and could allow us to travel at a few percent of the speed of light.[21] Ed Regis said we didn’t know whether some of these technologies were feasible,[22] but he didn’t discuss fission, fusion, beam propulsion, or solar sails, which seem much less speculative than the technologies whose feasibility he questioned. I have not looked at these proposals closely, but the Orion project, in which Freeman Dyson played a leading role, received substantial funding and attention from serious people.[23] My intuition is that nuclear propulsion is the least speculative, and would be sufficient for bringing a spacecraft to a significant fraction of the speed of light.

Surviving the journey: challenges for human passengers

Microgravity

The human body is adapted to the level of gravitational force normally experienced on the surface of the Earth, and extended exposure to a zero-g environment has various adverse health effects, including loss of bone and muscle mass, retinal damage, redistribution of bodily fluids toward the upper half of the body, balance disorders, and loss of taste and smell.[24] According to Wikipedia, we have very limited knowledge about the potential effects on the very young and the elderly.[25] In addition, attempts to breed mice and fish eggs in space have come out badly, and the prevailing view is that microgravity is the source of the problem.[26] This may be a problem because—given the distance to other stars—human reproduction may be necessary in transit.

A simple solution to this problem is to induce artificial gravity by rotating the vessel/habitat,[27] though some other solutions have been proposed as well.[28]

Cosmic radiation

Extended exposure to cosmic radiation damages DNA, and might cause cancer or other negative health effects. Earth’s atmosphere and/or magnetic field prevent these problems near Earth’s surface. Cosmic radiation could also damage electronic equipment.[29]

Some proposed solutions to this problem include mass shielding and magnetic shielding. Mass shielding would definitely work, but it increases the energy requirements for travel.[30] Magnetic shielding would require less mass, but it is less certain to work[31] and may create other health risks.[32]

Zubrin suggested that cosmic radiation was not a significant concern for interstellar travel. In support of this, he argued that we’ve had people working close to nuclear reactors on nuclear submarines for over 50 years without major problems.[33] There is some additional background on this obstacle on Wikipedia, “Health threat from cosmic rays.”

Human lifespan

Interstellar voyages would take decades even at a significant fraction of the speed of light, and centuries or even millennia at more modest speeds. Therefore, it may be impossible to complete the trip within a human lifetime.

Proposed solutions to this problem include “generation ships” designed to rear children and train them to continue the voyage mid-flight, extending the human lifespan, and suspended animation with re-animation upon arrival.[34]

Surviving the journey: general physical challenges

Interstellar medium

From notes from a conversation with Charles Stross:

When travelling at a few percent of the speed of light, collisions with interstellar matter could cause significant damage to a vessel. Like radiation, this is something that might be overcome with appropriate shielding, though there are trade-offs between mass from shielding and the amount of energy necessary for propulsion.

The other people I spoke with about this issue also believed that this challenge could be overcome with appropriate shielding.[35] However, in his brief critique of the feasibility of interstellar missions, Ed Regis claimed that “a high-speed collision with something as small as a grain of salt would be like encountering an H-bomb,” suggesting he did not find it clear that this challenge could be overcome.[36] Some simple calculations, however, suggest that Regis’s claim could only be realistic under very extreme assumptions. Assuming that 100% of the kinetic energy of the salt grain were converted to an explosion, a one-milligram mass could only produce a one-megaton explosion if the spacecraft were travelling at extremely close to light speed. At 10% of light speed the impact would be equivalent to about 100 kg of TNT, which is about 10 million times smaller. At 1% of light speed, the impact would be equivalent to about 1 kg of TNT.[37] These smaller explosions would not be negligible, but would probably be within the range of explosions that some bunkers and armored vehicles are capable of withstanding today.
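The TNT-equivalence figures above can be reproduced directly from the grain’s kinetic energy (assuming a one-milligram grain and the standard convention of 4.184 MJ per kilogram of TNT, a constant not stated in the text):

```python
# Kinetic energy of a 1 mg grain at various fractions of light speed,
# expressed as kg-of-TNT equivalent (1 kg TNT = 4.184e6 J by convention).
TNT_J_PER_KG = 4.184e6
C_M_S = 3.0e8
grain_kg = 1e-6                      # one milligram

for frac in (0.10, 0.01):
    ke_j = 0.5 * grain_kg * (frac * C_M_S) ** 2
    print(f"{frac:.0%} of c: about {ke_j / TNT_J_PER_KG:.0f} kg TNT")
# 10% of c: about 108 kg TNT; 1% of c: about 1 kg TNT
```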

In the paper I saw which investigated this issue most deeply, the authors recommended that, to be conservative, interstellar missions conducted at ≥ 0.1c prepare for the possibility of hitting dust particles of 0.01 mg. These collisions would carry only 1% of the kinetic energy discussed in the paragraph above.[38]

Mechanical integrity over a long voyage

From notes from a conversation with Geoffrey Landis:

Interstellar colonization may require making machines that will work for hundreds, thousands, or tens of thousands of years. Few machines can do this now, but perhaps appropriate repair systems would solve the problem. This is something that has not been done, and it’s not clear that it could be done.

This concern was echoed by Anders Sandberg, though, in his estimation, this is a solvable engineering problem.[39] As with collisions with interstellar matter, this problem could potentially be addressed through trial and error if multiple colonization or exploration attempts are made.

Voyager 1, mentioned above, was launched on September 5, 1977, has had no critical mechanical failures for over 36 years, and is expected to continue to function until 2025.[40] Therefore, designing a spacecraft which would operate without a critical mechanical error for decades would not be wholly unprecedented.

Building a civilization on the other end

Genetic diversity

A small population establishing a space colony would risk genetic damage due to inbreeding. This challenge might be addressed by bringing tens or hundreds of people (at increased energy costs) or bringing a few people plus enough embryos/gametes.[41]

Hospitable location

According to Wikipedia, “There are 59 known stellar systems within 20 light years from the Sun, containing 81 visible stars,” with some of the most appealing targets for interstellar travel in that range.[42]

Stross argued that, throughout most of Earth’s history, most places on the planet would not have been very hospitable to humans, which raises questions about how hard it would be to find a hospitable planet elsewhere.[43]

Potential solutions to this challenge include terraforming, searching through many planets, living in space habitats rather than on planets, and colonizing with advanced AI, robotics, and manufacturing. Conceivably, such advances would allow a civilization to thrive in space, eliminating the need to find habitable planets.

Infrastructure and scale requirements

From a conversation with Charles Stross:

If the goal of space colonization is to create another civilization that is viable independently of the continued success of Earth-based civilization, there will be enormous challenges putting in place the large infrastructure necessary for an industrial civilization. Our current industrial civilization—including systems for manufacturing, science, education, entertainment, support systems for people who aren’t working, etc.—probably requires at least a billion people. It is extremely hard to see how to create a space colony capable of replacing our current industrial civilization without at least hundreds of thousands of people.

This objection has received relatively little attention from people optimistic about space colonization. An exception to this is the science fiction novel Learning the World by Ken MacLeod, which focuses on a “generation ship” which has been travelling for thousands of years.

Possible solutions to this problem include bringing a large number of people, bringing a few people with a plan to build up a large infrastructure, and creating self-replicating robots that would be capable of building a civilization.[44]

I did not see much detailed discussion of how this could be done,[45] but my intuition is that with advances in AI/robotics that would create machines capable of doing essentially all the tasks humans do, it should be possible for machines to build a civilization, even in a highly hostile environment. Stross expressed a similar view, though he had more uncertainty about whether such advances would ever be made.[46] This is an area where someone with different background assumptions would be especially likely to disagree with me, and I’d have to do substantially more work to convince them of my position.

Unknown obstacles

Two people I spoke with, Sandberg[47] and Zubrin,[48] thought that space colonization could turn out to be impossible only if new physics were discovered. Landis thought it was possible that there were unknown obstacles that could make space colonization impossible, and pointed to Fermi’s Paradox as a reason that one might expect this.[49] Some possible explanations of Fermi’s Paradox would point to other reasons that space colonization is impossible, but others would not. For example, crucial technologies could fail for some reason we can’t appreciate, we might be living in a computer simulation without real stars outside our solar system, or some aliens could be preventing interstellar colonization. On the other hand, some step we’ve already passed (e.g. the creation of the first life, multicellular organisms, intelligent life, industrialization) may happen only extremely rarely.[50]

Will to do it

People I spoke with disagreed about whether people would ever attempt interstellar colonization. Stross argued that interstellar colonization could only have a very uncertain and very long-term economic payoff, which would make it very hard to finance by appeal to economic motives.[51] However, Carl Shulman has a forthcoming blog post in which he argues that there will probably eventually be strong economic incentives for interstellar colonization.

Other potential motives for colonizing space include curiosity and adventure, increasing civilization’s ability to survive global catastrophes,[52] or increasing the size of civilization. More generally, in a world with great diversity of motivations, it may not be too hard to find someone who wants to try to colonize space.[53] Historically speaking, Gerard O’Neill’s L5 Society—which had the aim of colonizing (interplanetary) space—had over 10,000 members when it merged with the National Space Institute.[54]

On the opposing side, something that could gather the energy necessary for interstellar travel might be weaponized, and this could conceivably create political opposition to interstellar travel.[55] Some further relevant notes from my conversation with Stross:

“However, other ideological perspectives—such as perspectives emphasizing the sacredness of nature—might oppose space colonization. Opposition from these perspectives could prevent an optimistic minority from colonizing space even if it were feasible.”

Implications of advanced AI, robotics, and manufacturing for the feasibility of interstellar colonization

Advances in AI, robotics, or manufacturing could reduce many of the challenges of space colonization noted above.[56] Spelling this out a bit more, microgravity, cosmic radiation, human lifespan, genetic diversity, and habitability are either unproblematic or much less problematic with robots instead of humans. They could also reduce the amount of mass required for the mission, and therefore reduce the energy requirements. However, the use of advanced AI/robotics was rarely discussed in the materials I read, and was not emphasized by the people I interviewed, with the exception of Anders Sandberg.

Conclusions and questions for further investigation

My impression is that the most informed people thinking about these issues believe that space colonization will eventually be possible, and that they believe this for reasons that make sense to me. In my view, the most uncertain step is the part where we build a civilization upon arriving at another star system. However, my intuition—as stated above—is that advances in AI and robotics will make it possible for machines to substitute for humans in building a civilization, even in environments that would be very inhospitable to humans.

Further investigation into this topic might focus on the following questions:

  1. Does this list contain all significant known obstacles to interstellar colonization?
  2. How likely is it that there are unknown obstacles that might make interstellar colonization impossible?
  3. Given sufficiently advanced AI, robotics, and molecular manufacturing, is interstellar colonization definitely feasible?
  4. Given business as usual, is it likely that these advances in AI, robotics, and molecular manufacturing will eventually be made?
  5. Do we know, as Zubrin claimed, that some physically possible propulsion technologies could get us to several percent of the speed of light? If so, which are they and how do we know this?

My process

This research draws on interviews with Anders Sandberg, Geoffrey Landis, Robert Zubrin, and Charles Stross. Charles Stross was the most credible skeptic of the feasibility of space colonization that I could find. He was cited as a critic of The High Frontier on the book’s Wikipedia page, and multiple people I spoke with referred me to Stross as a skeptic who was worth speaking to.

A list of many, but not all, of the sources I considered, how I found them, and how closely I looked into them is available here. I began by looking for articles arguing for or against the feasibility of space colonization, searching Google, Google Scholar, and Amazon.com with terms like “space colonization,” “space settlement,” “possible,” “impossible,” “feasible,” and “infeasible,” and then followed up on articles referenced in the most relevant sources I found. None of the people I interviewed were aware of notable articles arguing for the impossibility of space colonization that I had missed, though I asked all of them specifically about this. I spent 36 hours on this project.

I am grateful to Robin Hanson, Pablo Stafforini, Carl Shulman, Anders Sandberg, and Toby Ord for feedback on a draft of this review.

Sources

Armstrong, Stuart and Anders Sandberg. 2013. “Eternity in six hours: Intergalactic spreading of intelligent life and sharpening the Fermi paradox.” Acta Astronautica 89: 1-13. URL: http://www.sciencedirect.com/science/article/pii/S0094576513001148.

Beckstead, Nick. 2013. “On the Overwhelming Importance of Shaping the Far Future.” PhD Thesis. Department of Philosophy, Rutgers University. URL: http://www.nickbeckstead.com/research.

Beckstead, Nick. 2014. Notes from a conversation with Geoffrey Landis. URL: http://www.nickbeckstead.com/conversations/landisapr2014.

Beckstead, Nick. 2014. Notes from a conversation with Anders Sandberg. URL: https://docs.google.com/viewer?a=v&pid=sites&srcid=ZGVmYXVsdGRvbWFpbnxuYmVja3N0ZWFkfGd4OjY4YTcyZjM3ZDFlYTc4NDc.

Beckstead, Nick. 2014. Notes from a conversation with Charles Stross. URL: http://www.nickbeckstead.com/conversations/stross.

Beckstead, Nick. 2014. Notes from a conversation with Robert Zubrin. URL: http://www.nickbeckstead.com/conversations/zubrinmar2014.

Bostrom, Nick. 2003. “Astronomical Waste,” Utilitas 15(3): 308-314. URL: http://www.nickbostrom.com/astronomical/waste.html.

Dyson, Freeman. 1965. “Death of a Project,” Science 149:141-144. URL: http://www.patrickmccray.com/wp/wp-content/uploads/2013/11/1965-Dyson-Death-of-a-Project.pdf

Finkel, Alan. 2011. “Forget space travel: it’s just a dream,” Cosmos Magazine. URL: http://cosmosmagazine.com/planets-galaxies/the-future-space-travel/.

Freitas Jr, Robert A., and William Zachary. 1981. “A self-replicating, growing lunar factory.” Princeton/AIAA/SSI Conference on Space Manufacturing. Vol. 35. URL: http://www.rfreitas.com/Astro/GrowingLunarFactory1981.htm.

Gilster, Paul. 2004. Centauri Dreams: Imagining and Planning Interstellar Exploration. Springer.

Landis, Geoffrey. 1991. “Magnetic Radiation Shielding: An Idea Whose Time Has Returned?,” Space Manufacturing 8: Energy and Materials from Space 383-386. URL: http://www.islandone.org/Settlements/MagShield.html.

Landis, Geoffrey. 2004. “Interstellar flight by particle beam.” Acta Astronautica 55:931-934. URL: http://www.sciencedirect.com/science/article/pii/S009457650400133X.

Mallove, Eugene F., and Gregory L. Matloff. 1989. The Starflight Handbook: A Pioneer’s Guide to Interstellar Travel.

McLellan, Heather. 2011. “Microgravity Makes Interstellar Travel Impossible, Say Experts,” Escapist Magazine. URL: http://www.escapistmagazine.com/news/view/113507-Microgravity-Makes-Interstellar-Travel-Impossible-Say-Experts.

Murphy, Tom. 2011. “Galactic-Scale Energy,” Do the Math. URL: http://physics.ucsd.edu/do-the-math/2011/07/galactic-scale-energy/.

National Research Council. 2014. Pathways to Exploration: Rationales and Approaches for a U.S. Program of Human Space Exploration. Washington, DC: The National Academies Press. URL: http://nap.edu/catalog.php?record_id=18801.

O’Neill, Gerard. 1977. The High Frontier: Human Colonies in Space. William Morrow and Company.

O’Neill, Ian. 2008. “Bad News: Interstellar Travel May Remain in Science Fiction,” Universe Today. URL: http://www.universetoday.com/17044/bad-news-insterstellar-travel-may-remain-in-science-fiction/.

Parker, Eugene. 2006. “Shielding Space Travelers.” Scientific American 294(3): 40-47. URL: http://engineering.dartmouth.edu/~d76205x/research/shielding/docs/Parker_06.pdf.

Piersma, Theunis. 2010. “Why space is the impossible frontier,” New Scientist. URL: http://www.newscientist.com/article/mg20827860.100-why-space-is-the-impossible-frontier.html#.U4ig4Pk7t8F.

Regis, Ed. 2013. “Being Told That Our Destiny Is Among The Stars,” in What Should We Be Worried About? URL: http://edge.org/responses/q2013.

Sato, Rebecca. 2007. “The “Hawking Solution”: Will Saving Humanity Require Leaving Earth Behind?,” The Daily Galaxy. URL: http://www.dailygalaxy.com/my_weblog/2007/05/the_hawking_sol.html.

Stross, Charles. 2007. “The High Frontier, Redux,” Charles’s Diary. URL: http://www.antipope.org/Charles/blog-static/2007/06/the-high-frontier-redux.html.

Wikipedia, “Artificial gravity.” URL: http://en.wikipedia.org/wiki/Artificial_gravity.

Wikipedia, “Effect of space flight on the human body.” URL: http://en.wikipedia.org/wiki/Effect_of_spaceflight_on_the_human_body.

Wikipedia, “Fermi’s Paradox.” URL: http://en.wikipedia.org/wiki/Fermi_paradox.

Wikipedia, “Health threat from cosmic rays.” URL: http://en.wikipedia.org/wiki/Health_threat_from_cosmic_rays.

Wikipedia, “The High Frontier.” URL: http://en.wikipedia.org/wiki/The_High_Frontier:_Human_Colonies_in_Space.

Wikipedia, “Interstellar travel.” URL: http://en.wikipedia.org/wiki/Interstellar_travel.

Wikipedia, “L5 Society.” URL: http://en.wikipedia.org/wiki/L5_Society.

Wikipedia, “Proxima Centauri.” URL: http://en.wikipedia.org/wiki/Proxima_Centauri.

Wikipedia, “Self-replicating Spacecraft.” URL: http://en.wikipedia.org/wiki/Self-replicating_spacecraft.

Wikipedia, “Space colonization.” URL: http://en.wikipedia.org/wiki/Space_colonization.

Wikipedia, “Voyager 1.” URL: http://en.wikipedia.org/wiki/Voyager_1.

[1] “As a rough approximation, let us say the Virgo Supercluster contains 10^13 stars. One estimate of the computing power extractable from a star and with an associated planet-sized computational structure, using advanced molecular nanotechnology, is 10^42 operations per second. A typical estimate of the human brain’s processing power is roughly 10^17 operations per second or less. Not much more seems to be needed to simulate the relevant parts of the environment in sufficient detail to enable the simulated minds to have experiences indistinguishable from typical current human experiences. Given these estimates, it follows that the potential for approximately 10^38 human lives is lost every century that colonization of our local supercluster is delayed; or equivalently, about 10^29 potential human lives per second.

While this estimate is conservative in that it assumes only computational mechanisms whose implementation has been at least outlined in the literature, it is useful to have an even more conservative estimate that does not assume a non-biological instantiation of the potential persons. Suppose that about 10^10 biological humans could be sustained around an average star. Then the Virgo Supercluster could contain 10^23 biological humans. This corresponds to a loss of potential equal to about 10^14 potential human lives per second of delayed colonization.” Bostrom 2003, “Astronomical Waste.”
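
As a sanity check, the conservative (biological) figures quoted above can be multiplied out directly. This is a sketch in Python; the century-to-seconds conversion is mine, everything else is from the quote.

```python
# Checking the conservative arithmetic in Bostrom's footnote above.
SECONDS_PER_CENTURY = 100 * 365.25 * 24 * 3600   # ~3.16e9 seconds

stars = 1e13              # stars in the Virgo Supercluster (quoted estimate)
humans_per_star = 1e10    # biological humans sustainable per star (quoted)

total_humans = stars * humans_per_star            # 1e23 potential lives
per_second = total_humans / SECONDS_PER_CENTURY   # lives lost per second of delay
# per_second ≈ 3e13, which Bostrom rounds to "about 10^14 ... per second"
```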

[2] “The lion’s share of the expected duration of our existence comes from the possibility that our descendants colonize planets outside our solar system. There are many stars that we may be able to reach with future technology (about 10^13 in our supercluster). Some of them will probably have planets that are hospitable to life, perhaps many of these planets could be made hospitable with appropriate technological developments. Some of these are near stars that will burn for much longer than our sun, some for as much as 100 trillion years (Adams, 2008, p. 39). If multiple locations were colonized, the risk of total destruction would dramatically decrease, since it would take independent global disasters, or a cosmological catastrophe, to destroy civilization. Because of this, it is possible that our descendants would survive until the very end, and that there could be extraordinarily large numbers of them.” Beckstead 2013, “On the Overwhelming Importance of Shaping the Far Future,” p. 57.

[3] When listing potential rationales for space exploration, they wrote:

“Human Survival. It is not possible to say whether off-Earth settlements could eventually be developed that would outlive human presence on Earth and lengthen the survival of our species. This is a question that can only be settled by pushing the human frontier in space.”

National Research Council, “Pathways to Exploration: Rationales and Approaches for a U.S. Program of Human Space Exploration,” p. S-2. Further discussion on pp. 2-26 to 2-27.

[4] Examples include O’Neill 1977, Mallove and Matloff 1989, Landis 2004, Gilster 2004, and Armstrong and Sandberg 2013.

[5] “‘Giving birth in zero gravity is going to be hell because gravity helps you [on Earth],’ said Athena Andreadis, a biologist from the University of Massachusetts Medical School. ‘You rely on the weight of the baby.’

All of this means that we’re not going anywhere, perhaps not even Mars, until we master either artificial gravity or some seriously speedy travel methods. Although this news won’t come as a surprise to anybody who’s put serious thought into interstellar travel, it is humbling to be reminded of these things from time to time. Humans are perfectly adjusted for life on Earth; as Andreadis noted, we’ll have to adapt to both the journey and the destination if we’re ever to leave.” McLellan 2011. “Microgravity Makes Interstellar Travel Impossible, Say Experts.”

“Hawking, Obama and other proponents of long-term space travel are making a grave error. Humans cannot leave Earth for the several years that it takes to travel to Mars and back, for the simple reason that our biology is intimately connected to Earth.

To function properly, we need gravity. Without it, the environment is less demanding on the human body in several ways, and this shows upon the return to Earth.” Piersma 2010. “Why space is the impossible frontier.”

[6] “Already there are huge challenges facing the notion of travelling to Proxima Centauri, but in a recent gathering of experts in the field of space propulsion, there are even more insurmountable obstacles to mankind’s spread beyond the Solar System. In response to the idea we might make the Proxima trek in a single lifetime, Paulo Lozano, an assistant professor of aeronautics and astronautics at MIT and conference delegate said, “In those cases, you are talking about a scale of engineering that you can’t even imagine.”

OK, so the speed simply isn’t there for a quick flight over 4.3 light years. But there is an even bigger problem than that. How would these interstellar spaceships be fuelled? According to Brice N. Cassenti, an associate professor with the Department of Engineering and Science at Rensselaer Polytechnic Institute, at least 100 times the total energy output of the entire world would be required for the voyage. “We just can’t extract the resources from the Earth,” Cassenti said during his conference presentation. “They just don’t exist. We would need to mine the outer planets.”” O’Neill 2008, “Bad News: Interstellar Travel May Remain in Science Fiction.”

“Human expansion across the Solar System is an optimist’s fantasy. Why? Because of the clash of two titans: physics versus chemistry.

In the red corner, the laws of physics argue that an enormous amount of energy is required to send a human payload out of Earth’s gravitational field to its deep space destination and back again.

In the blue corner, the laws of chemistry argue that there is a hard limit to how much energy you can extract from the rocket fuel, and that no amount of ingenuity will change that.” Finkel 2011, “Forget space travel: it’s just a dream.”

[7] “But traveling at significantly faster speeds requires prohibitive amounts of energy. If the starship were propelled by conventional chemical fuels at even ten percent of the speed of light, it would need for the voyage a quantity of propellant equivalent in mass to the planet Jupiter. To overcome this limitation, champions of interstellar travel have proposed “exotic” propulsion systems such as antimatter, pi meson, and space warp propulsion devices. Each of these schemes faces substantial difficulties of its own: for example, since matter and antimatter annihilate each other, an antimatter propulsion system must solve the problem of confining the antimatter and directing the antimatter nozzle in the required direction. Both pi meson and space warp propulsion systems are so very exotic that neither is known to be scientifically feasible.” Regis 2013, “Being Told That Our Destiny Is Among The Stars.”

[8] “It’s hard to think of notable pessimists. Charles Stross wrote a good essay arguing for pessimism about space colonies, and he might be a good person to talk to. (Nick raised Stross as the most credible pessimist he was aware of.)” Notes from a conversation with Geoffrey Landis.

“Science fiction writer Charles Stross wrote a critical essay with a similar title on the feasibility of interstellar space travel and making practical use of various moons and planets in our own Solar System: The High Frontier: Redux.” Wikipedia, “The High Frontier.”

In our conversation on this topic, Anders Sandberg recommended that I speak with Charles Stross.

[9] “Stross has kept his eye on this literature over the years. He was disappointed to know that I didn’t find any critiques of the feasibility of space colonization that were superior to his, though he wasn’t aware of any more developed critiques.” Notes from a conversation with Charles Stross.

[10] “The people in the above groups would probably have opinions that are generally along the same lines as Dr. Landis. Some would be more optimistic, and some would be less optimistic. They would generally agree that space colonization is possible in principle, and most of the disagreement would be about how hard it is.” Notes from a conversation with Geoffrey Landis.

[11] “Most people at NASA generally haven’t thought deeply about this question.” Notes from a conversation with Geoffrey Landis.

[12] “Proxima Centauri (Latin proxima, meaning “next to” or “nearest to”) is a red dwarf about 4.24 light-years from the Sun, inside the G-cloud, in the constellation of Centaurus. It was discovered in 1915 by Scottish astronomer Robert Innes, the Director of the Union Observatory in South Africa, and is the nearest known star to the Sun…” Wikipedia, “Proxima Centauri.”

[13] “Travelling at about 17 kilometers per second (11 mi/s) it has the fastest heliocentric recession speed of any human-made object.” Wikipedia, “Voyager 1.”

[14] “Voyager 1 was traveling at 17,043 m/s (38,120 mph) relative to the Sun (about 3.595 AU per year). It would need about 17,565 years at this speed to travel a complete light year. To compare, Proxima Centauri, the closest star to the Sun, is about 4.2 light-years (or 2.65×10^5 AU) distant. Were the spacecraft traveling in the direction of that star, 73,775 years would pass before reaching it.” Wikipedia, “Voyager 1.”
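
These travel-time figures can be reproduced from the quoted speed alone. The light-year length and year length below are standard values I am supplying, not numbers from the quote, so the results differ slightly from Wikipedia's.

```python
# Reproducing the Voyager 1 travel-time figures quoted in [14].
speed_m_s = 17_043.0                 # m/s, quoted heliocentric speed
light_year_m = 9.4607e15             # metres per light year (standard value)
seconds_per_year = 365.25 * 24 * 3600

years_per_light_year = light_year_m / speed_m_s / seconds_per_year
# ≈ 17,600 years, close to the quoted 17,565

years_to_proxima = 4.2 * years_per_light_year
# ≈ 74,000 years, close to the quoted 73,775
```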

[15] “It’s going to be pretty boring in there, but I think we can conceive of our minimal manned interstellar mission as being about the size and mass of a Mercury capsule. And I’m going to nail a target to the barn door and call it 2000kg in total….Now, let’s say we want to deliver our canned monkey to Proxima Centauri within its own lifetime. We’re sending them on a one-way trip, so a 42 year flight time isn’t unreasonable. (Their job is to supervise the machinery as it unpacks itself and begins to brew up a bunch of new colonists using an artificial uterus. Okay?) This means they need to achieve a mean cruise speed of 10% of the speed of light. They then need to decelerate at the other end. At 10% of c relativistic effects are minor — there’s going to be time dilation, but it’ll be on the order of hours or days over the duration of the 42-year voyage. So we need to accelerate our astronaut to 30,000,000 metres per second, and decelerate them at the other end. Cheating and using Newton’s laws of motion, the kinetic energy acquired by acceleration is 9×10^17 Joules, so we can call it 2×10^18 Joules in round numbers for the entire trip….our entire planetary economy runs on roughly 4 terawatts of electricity (4×10^12 watts). So it would take our total planetary electricity production for a period of half a million seconds — roughly 5 days — to supply the necessary va-va-voom.” Stross 2007, “The High Frontier, Redux.”
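
Stross's back-of-envelope numbers check out when re-run with the figures he quotes (classical kinetic energy, as he uses; the day conversion is mine):

```python
# Re-running Stross's calculation from [15].
c = 3.0e8                      # m/s, speed of light (rounded, as Stross does)
m = 2000.0                     # kg, the capsule
v = 0.1 * c                    # cruise speed: 10% of c

ke = 0.5 * m * v**2            # ≈ 9e17 J, matching the quoted figure
trip_energy = 2 * ke           # accelerate, then decelerate at the other end

world_electricity = 4e12       # W, Stross's figure for planetary electricity
days = trip_energy / world_electricity / 86400
# ≈ 5 days, matching Stross's "roughly 5 days"
```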

[16] “In 2008, total worldwide energy consumption was 474 exajoules (132,000 TWh). This is equivalent to an average power use of 15 terawatts (2.0×10^10 hp).”

[17] “Since the beginning of the Industrial Revolution, we have seen an impressive and sustained growth in the scale of energy consumption by human civilization. Plotting data from the Energy Information Agency on U.S. energy use since 1650 (1635-1945, 1949-2009, including wood, biomass, fossil fuels, hydro, nuclear, etc.) shows a remarkably steady growth trajectory, characterized by an annual growth rate of 2.9%.” Murphy 2011, “Galactic-Scale Energy.”

[18] “In 1968, Freeman Dyson looked at the economics of interstellar colonization, and it was prohibitively expensive. But he pointed out that there has been a long history of exponential growth in economic productivity and energy production. If these trends continue long enough, interstellar colonization will become much more affordable. On the other hand, exponential trends will not last forever.” Notes from a conversation with Geoffrey Landis.

[19] This is not clearly summarized in the text, but it is clear in a graph in the post (not reproduced here), with the following caption:

“Earth surface temperature given steady 2.3% energy growth, assuming some source other than sunlight is employed to provide our energy needs and that its use transpires on the surface of the planet. Even a dream source like fusion makes for unbearable conditions in a few hundred years if growth continues. Note that the vertical scale is logarithmic.”

and clarificatory remarks:

“[The Earth] absorbs abundant energy from the sun—far in excess of our current societal enterprise. The Earth gets rid of its energy by radiating into space, mostly at infrared wavelengths. No other paths are available for heat disposal. The absorption and emission are in near-perfect balance, in fact. If they were not, Earth would slowly heat up or cool down. Indeed, we have diminished the ability of infrared radiation to escape, leading to global warming. Even so, we are still in balance to within less than the 1% level. Because radiated power scales as the fourth power of temperature (when expressed in absolute terms, like Kelvin), we can compute the equilibrium temperature of Earth’s surface given additional loading from societal enterprise.”

Murphy 2011, “Galactic-Scale Energy.”
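
Murphy's T^4 argument can be sketched in a few lines. The 288 K baseline and the ~1.2×10^17 W of absorbed sunlight are rough standard figures I am supplying, not numbers from the quoted text.

```python
# A sketch of Murphy's equilibrium-temperature argument in [19].
T0 = 288.0               # K, current mean surface temperature (assumption)
solar_absorbed = 1.2e17  # W, sunlight absorbed by Earth after albedo (assumption)

def equilibrium_temp(societal_watts):
    # Radiated power scales as T^4, so temperature scales as (total power)^(1/4).
    return T0 * ((solar_absorbed + societal_watts) / solar_absorbed) ** 0.25

today = equilibrium_temp(15e12)      # ~15 TW today: shift well under 0.1 K
future = equilibrium_temp(1.5e17)    # after ~400 years of 2.3% growth (×10^4):
                                     # equilibrium rises by tens of kelvin
```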

[20] Eyeballing the graph in the above footnote, there is not a substantial temperature increase after 200 years, and at that point we have 100 times as much energy on his model. “This post provides a striking example of the impossibility of continued growth at current rates—even within familiar timescales. For a matter of convenience, we lower the energy growth rate from 2.9% to 2.3% per year so that we see a factor of ten increase every 100 years.” Murphy 2011, “Galactic-Scale Energy.”
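
The factor-of-ten-per-century round number can be checked directly:

```python
# Checking the round numbers in [20]: 2.3% annual growth compounds to
# roughly a factor of ten per century.
growth_factor = 1.023 ** 100   # ≈ 9.7, which Murphy rounds to 10
```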

[21] “He said that there are identifiable propulsion technologies which would work—given the laws of physics as they are currently known—and these could get us up to a few percent of the speed of light, allowing us to get to the nearest stars in decades rather than millennia.” Notes from a conversation with Robert Zubrin.

[22] “champions of interstellar travel have proposed “exotic” propulsion systems such as antimatter, pi meson, and space warp propulsion devices. Each of these schemes faces substantial difficulties of its own: for example, since matter and antimatter annihilate each other, an antimatter propulsion system must solve the problem of confining the antimatter and directing the antimatter nozzle in the required direction. Both pi meson and space warp propulsion systems are so very exotic that neither is known to be scientifically feasible.” Regis 2013. “Being Told That Our Destiny Is Among The Stars.”

[23] “Orion is a project to design a vehicle which would be propelled through space by repeated nuclear explosions occurring at a distance behind it. The vehicle may be either manned or unmanned; it carries a large supply of bombs, and machinery for throwing them out at the right place and time for efficient propulsion; it carries shock absorbers to protect the machinery and the crew from destructive jolts, and sufficient shielding to protect against heat and radiation. The vehicle has, of course, never been built. The project in its 7 years of existence was confined to physics experiments, engineering tests of components, design studies, and theory. The total cost of the project was $10 million, spread over 7 years, and the end result was a rather firm technical basis for believing that vehicles of this type could be developed, tested, and flown. The technical findings of the project have not been seriously challenged by anybody. Its major troubles have been, from the beginning, political. The level of scientific and engineering talent devoted to it was, for a classified project, unusually high.” Dyson 1965, “Death of a Project,” p. 141.

[24] “Microgravity causes bone decalcification and liquid redistribution in humans.” Notes from a conversation with Anders Sandberg.

“Short-term exposure to microgravity causes space adaptation syndrome, a self-limiting nausea caused by derangement of the vestibular system. Long-term exposure causes multiple health problems, one of the most significant being loss of bone and muscle mass. Over time these deconditioning effects can impair astronauts’ performance, increase their risk of injury, reduce their aerobic capacity, and slow down their cardiovascular system. As the human body consists mostly of fluids, gravity tends to force them into the lower half of the body, and our bodies have many systems to balance this situation. When released from the pull of gravity, these systems continue to work, causing a general redistribution of fluids into the upper half of the body. This is the cause of the round-faced ‘puffiness’ seen in astronauts. Redistributing fluids around the body itself causes balance disorders, distorted vision, and a loss of taste and smell.” Wikipedia, “Effect of Space Flight on the Human Body.”

“Because weightlessness increases the amount of fluid in the upper part of the body, astronauts experience increased intracranial pressure. This appears to increase pressure on the backs of the eyeballs, affecting their shape and slightly crushing the optic nerve. This effect was noticed in 2012 in a study using MRI scans of astronauts who had returned to Earth following at least one month in space. Such eyesight problems may be a major concern for future deep space flight missions, including a manned mission to the planet Mars.” Wikipedia, “Effect of Space Flight on the Human Body.”

[25] “If off-world colonization someday begins, many types of people will be exposed to these dangers, and the effects on the elderly and on the very young are completely unknown.” Wikipedia, “Effect of Space Flight on the Human Body.”

[26] “One potential obstacle to human survival in space is the effect of microgravity on health and reproduction. Microgravity causes bone decalcification and liquid redistribution in humans. In addition, attempts to breed mice and fish eggs in space have come out badly; the prevailing view is that microgravity is the source of the problem.” Notes from a conversation with Anders Sandberg.

[27] “One potential obstacle to human survival in space is the effect of microgravity on health and reproduction….However, this problem could be solved by rotating the station or vessel and thereby inducing artificial gravity.” Notes from a conversation with Anders Sandberg.

[28] See Wikipedia, “Artificial Gravity.”

[29] “Space radiation would damage both humans and electronic equipment left in space for a long time. This problem gets more severe for very long-distance travel at relativistic speeds.” Notes from a conversation with Anders Sandberg.

[30] “It would be possible to prevent the negative consequences of radiation with sufficiently thick shielding on the space vessel, though this increases the mass of the vessel and the amount of resources required to travel. Error-correcting codes and other measures like shielded electronics could prevent fatal damage to computers.” Notes from a conversation with Anders Sandberg.

[31] “This looks like a problem that could be addressed through shielding. Dr. Landis believes this problem can be solved by creating a strong enough magnetic field.” Notes from a conversation with Geoffrey Landis.

“One solution to the problem of shielding crew from particulate radiation in space is to use active electromagnetic shielding. Practical types of shield include the magnetic shield, in which a strong magnetic field diverts charged particles from the crew region, and the magnetic/electrostatic plasma shield, in which an electrostatic field shields the crew from positively charged particles, while a magnetic field confines electrons from the space plasma to provide charge neutrality. Advances in technology include high-strength composite materials, high temperature superconductors, numerical computational solutions to particle transport in electromagnetic fields, and a technology base for construction and operation of large superconducting magnets. These advances make electromagnetic shielding a practical alternative for near-term future missions.” Landis 1991, “Magnetic Radiation Shielding: An Idea Whose Time Has Returned?” abstract.

[32] “A spherical shell of water or plastic could protect space travelers, but it would take a total mass of at least 400 tons—beyond the capacity of heavy-lift rockets. A superconducting magnet would repel cosmic particles and weigh an estimated nine tons, but that is still too much, and the magnetic field itself would pose health risks. No other proposed scheme is even vaguely realistic.” Parker 2006, “Shielding Space Travelers,” p. 42.

[33] “Dr. Zubrin didn’t think space dust was a serious concern, and argued that damage from radiation could clearly be prevented with adequate shielding. He pointed to the fact that we’ve had people working close to nuclear reactors on nuclear submarines for over 50 years without major problems.” Notes from a conversation with Robert Zubrin.

[34] “…if interstellar travel must proceed more slowly, some possible solutions would include (i) a generation ship, (ii) freezing people and re-animating them upon arrival, and (iii) travelling with machines or uploads, possibly creating biological humans upon arrival.” Notes from a conversation with Geoffrey Landis.

See “Slow Missions” under Wikipedia, “Interstellar Travel.”

[35] “There are good theoretical reasons to think there isn’t much millimeter-sized gravel in interstellar space. But if there are sand-sized particles, that would be a challenge for interstellar travel at relativistic or near/relativistic speeds. Shielding may be a solution to this.” Notes from a conversation with Geoffrey Landis.

“There is interstellar dust. If objects are travelling at a high speed, hitting a very small piece of dust could do substantial damage. Travelling at a high speed is desirable for interstellar or intergalactic travel. The larger the vessel, the worse the problem. Intergalactic dust is less common, and would be much less of a problem.

This problem could be averted by creating adequate shielding. At a conservative end, we know that space dust does not prevent comet nuclei from doing interstellar travel at a few tens of km/s. A vessel could simply dig into a comet and use it as shielding (for a very slow trip). Anders believes that other shields could be constructed out of metal and graphene foils which would make interstellar and intergalactic space colonization possible at higher speeds and with less mass.” Notes from a conversation with Anders Sandberg.

“I mentioned a couple of the potential obstacles to interstellar colonization that came up in my interview with Anders Sandberg: space dust (which might damage vessels moving close to the speed of light) and radiation. Dr. Zubrin didn’t think space dust was a serious concern, and argued that damage from radiation could clearly be prevented with adequate shielding.” Notes from a conversation with Robert Zubrin.

[36] “Even if by some miracle suitable propulsion systems became available, a starship traveling at relativistic speeds would have to be equipped with sophisticated collision detection and avoidance systems, given that a high-speed collision with something as small as a grain of salt would be like encountering an H-bomb.” Regis 2013, “Being Told That Our Destiny Is Among The Stars.”

[37] Calculations were based on WolframAlpha’s formula for relativistic kinetic energy. I am grateful to Anders Sandberg and Carl Shulman for help selecting appropriate formulas and challenging Regis’s calculation.

[38] “Observations over the last decade have revealed an unexpected high-mass tail to the local interstellar grain size distribution. Individual particles with masses as high as 10^-12 kg (4.5 μm radius) are almost certainly present. Moreover, radar detections of interstellar dust particles entering the Earth’s atmosphere ([27] and references therein) imply a population of interstellar grains with masses > 3×10^-10 kg (corresponding to radii > 30 μm). While there are difficulties reconciling the presence of such large grains with other astronomical observations [36], a conservative planning assumption would be that particles as large as 100 μm radius (10^-8 kg), and perhaps larger, might be encountered in the course of a several light-year journey through the LISM. The kinetic energy of such large particles striking an interstellar space vehicle with a relative velocity of 0.1c are considerable (4.5×10^6 J), and some kind of active dust detection and mitigation system may need to be considered.”
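
The dust-grain energy quoted in [38] can be reproduced, and compared with the full relativistic kinetic energy mentioned in [37]. This sketch uses the grain mass and speed from the quote; the comparison is mine.

```python
# Reproducing the dust-grain kinetic energy in [38].
import math

c = 2.998e8            # m/s, speed of light
m_grain = 1e-8         # kg, the 100 μm planning-assumption grain
v = 0.1 * c            # relative velocity from the quote

ke_classical = 0.5 * m_grain * v**2                 # ≈ 4.5e6 J, as quoted
gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
ke_relativistic = (gamma - 1.0) * m_grain * c**2    # ≈ 4.53e6 J
# At 0.1c the relativistic correction is under 1%, so the quoted
# classical figure is a good approximation.
```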

[39] “Someone might question whether it’s possible to create electronic devices that would work for very long periods of time in the presence of radiation, as would be necessary for space colonization. Anders believes this is a solvable engineering problem.” Notes from a conversation with Anders Sandberg.

[40] “Voyager 1 is a 722-kilogram (1,592 lb) space probe launched by NASA on September 5, 1977 to study the outer Solar System. Operating for 36 years, 8 months and 17 days as of 22 May 2014, the spacecraft communicates with the Deep Space Network to receive routine commands and return data. At a distance of about 127.74 AU (1.911×10^10 km) from the Earth as of May 8, 2014, it is the farthest human-made object from Earth….

On September 12, 2013, NASA announced that Voyager 1 had crossed the heliopause and entered interstellar space on August 25, 2012, making it the first human-made object to do so….The probe is expected to continue its mission until 2025, when it will be no longer supplied with enough power from its generators to operate any of its instruments.” Wikipedia, “Voyager 1.”

[41] “In 2002, the anthropologist John H. Moore estimated that a population of 150–180 would allow normal reproduction for 60 to 80 generations — equivalent to 2000 years.

A much smaller initial population of as little as two women should be viable as long as human embryos are available from Earth. Use of a sperm bank from Earth also allows a smaller starting base with negligible inbreeding.

Researchers in conservation biology have tended to adopt the “50/500” rule of thumb initially advanced by Franklin and Soule. This rule says a short-term effective population size (Ne) of 50 is needed to prevent an unacceptable rate of inbreeding, whereas a long‐term Ne of 500 is required to maintain overall genetic variability. The Ne = 50 prescription corresponds to an inbreeding rate of 1% per generation, approximately half the maximum rate tolerated by domestic animal breeders. The Ne = 500 value attempts to balance the rate of gain in genetic variation due to mutation with the rate of loss due to genetic drift.” Wikipedia, “Space colonization.”
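
The claim in [41] that Ne = 50 corresponds to roughly 1% inbreeding per generation follows from the standard population-genetics relation ΔF = 1/(2·Ne); a minimal check:

```python
# Inbreeding rate per generation as a function of effective population size,
# using the standard relation dF = 1 / (2 * Ne).
def inbreeding_rate(effective_population):
    return 1.0 / (2.0 * effective_population)

short_term = inbreeding_rate(50)    # 0.01, i.e. 1% per generation
long_term = inbreeding_rate(500)    # 0.001, i.e. 0.1% per generation
```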

[42] Wikipedia, “Interstellar Travel.” See the article for more detail.

[43] “Finding a hospitable location: If you consider all the past and future locations on Earth over time, only a very small fraction of them (probably less than 0.1%) would be habitable for humans. Why? 80% of the Earth’s surface is covered by water, much of the Earth is too cold, there has only been enough oxygen for humans during the last 600 million years, a billion years from now the Earth will no longer be habitable for humans, and so on.” Notes from a conversation with Charles Stross.

[44] For a discussion of the last possibility, see Wikipedia, “Self-replicating Spacecraft.”

[45] While commenting on this paper, Anders Sandberg pointed me to Freitas and Zachary 1981, “A self-replicating, growing lunar factory,” which attempted to outline something along these lines. I have not yet looked closely at this paper, and feel it would take a substantial effort on my part to tell whether the idea would hold up. But someone who wanted to explore these issues further might look at this paper and other papers citing it to get some idea of what kind of thinking has already been done on this topic.

[46] “Given the existence of mind uploading or advanced AI, Stross sees no insurmountable obstacle to interstellar colonization. In this scenario, it seems that a near-light-speed colonization wave could occur roughly along the lines envisioned by Robert Bradbury.

In Stross’s view, it is not settled whether mind uploading or advanced AI are feasible in principle. If the mind is inherently analog or quantum, digital uploads may be impossible.” Notes from a conversation with Charles Stross.

[47] “This would be very surprising to Anders. In his view, it would probably require learning new physics for us to learn that space colonization is in principle infeasible for some reason not listed here.” Notes from a conversation with Anders Sandberg.

[48] "I asked Dr. Zubrin whether he could imagine anything we could learn—consistent with everything we currently know about physics—that would mean space colonization would be impossible. He said he could not think of anything remotely plausible fitting the description. It's just a question of how fast we can get there." Notes from a conversation with Robert Zubrin.

[49] “It’s possible that there are unknown obstacles. Most of the obstacles we’ve discussed seem like they could be overcome. If they can and these are the only obstacles, that makes the Fermi Paradox more puzzling. Someone could argue that this suggests that interstellar colonization is impossible for some unknown reason.” Notes from a conversation with Geoffrey Landis.

[50] See Wikipedia, “Fermi’s Paradox” for further discussion.

[51] “Creating a civilization of this size outside of the Earth would require a very large economic investment, and one that could (at best) have only a very long-term payoff. We currently do not have institutions which function well for highly unprecedented ventures which would require decades—or perhaps even centuries—to pay off. This issue is further explored in Stross’s novel Neptune’s Brood.” Notes from a conversation with Charles Stross.

[52] “Professor Stephen Hawking, celebrated expert on the cosmological theories of gravity and black holes, believes that traveling into space is the only way humans will be able to survive in the long-term. He has said, “Life on Earth is at the ever-increasing risk of being wiped out by a disaster such as sudden global warming, nuclear war, a genetically engineered virus or other dangers … I think the human race has no future if it doesn’t go into space.”” Sato 2007, “The “Hawking Solution”: Will Saving Humanity Require Leaving Earth Behind?”

[53] “Anders can see various reasons people might want to do this eventually: diversity of preferences, desire to do research, desire to continue civilization.” Notes from a conversation with Anders Sandberg.

“There is a lot of diversity in the world, and Landis’s intuition is that enough people would want to do it.” Notes from a conversation with Geoffrey Landis.

[54] "In 1986 the Society, which had grown to about 10,000 members, merged with the 25,000 member National Space Institute, founded by German rocket engineer and Project Apollo program manager Wernher von Braun of NASA's Marshall Space Flight Center to form the present-day National Space Society." Wikipedia, "L5 Society."

[55] “A possible issue is that because it might require so much energy to do interstellar colonization, anything that could produce that much energy could probably be weaponized. Conceivably, there could be opposition from a larger group of people uninterested in space colonization to creating something that could be used to create a dangerous weapon.” Notes from a conversation with Geoffrey Landis.

[56] “Given the existence of mind uploading or advanced AI, Stross sees no insurmountable obstacle to interstellar colonization. In this scenario, it seems that a near-light-speed colonization wave could occur roughly along the lines envisioned by Robert Bradbury.

In Stross’s view, it is not settled whether mind uploading or advanced AI are feasible in principle. If the mind is inherently analog or quantum, digital uploads may be impossible.” Notes from a conversation with Charles Stross.

“Space colonization is especially likely to be possible if advanced AGI and molecular manufacturing are possible—as Anders believes they are—though he also thinks it is possible if they are impossible.” Notes from a conversation with Anders Sandberg.

Human and animal interventions: the long-term view

,

Crossposted from the Global Priorities Project

This post discusses the question of how we should seek to compare human- and animal-welfare interventions. It argues: first, that indirect long-term effects mean that we cannot simply compare the short term welfare effects; second, that if any animal-welfare charities are comparably cost-effective with the best human-welfare charities, it must be because of their effect on humans, changing the behaviour of humans long into the future.

In the search for the most cost-effective interventions, one major area of debate is whether the evidence favours focussing on human or animal welfare. Some have argued for estimates showing animal welfare interventions to be much more cost-effective per unit of suffering averted, with an implication that animal welfare should perhaps therefore be prioritised. However, this reasoning ignores a critical difference between the two types of intervention: the long-term impact. There are good arguments that the long-term future has a large importance, and that we can expect to be able to influence it.

The intention here is not to attack the cost-effectiveness estimates, which may well be entirely correct as far as they go. However, like most such assessments they only consider the immediate, direct impact of interventions and compare these against each other. For example, disease relief schemes would be compared by looking at the immediate welfare benefits to the humans or animals cured of disease.

What this process misses out is that interventions to improve human welfare have ongoing positive effects throughout society. It has been argued that healthy humans with access to education, food and clean water are far more likely to be productive, and contribute to the economic development of their society, with knock-on improvements for everyone who comes after them. Also, not having to worry about their basic needs may free them up to spend more time considering and improving the circumstances of others around them.

The upshot is that interventions in human welfare, as well as immediately relieving suffering and improving lives, are also likely to have a significant long-term impact. This is often more difficult to measure, but the short-term impact can generally be used as a reasonable proxy.

By contrast, no analogous mechanism ensures that an improvement in the welfare of one animal results in the improvements in the welfare of other animals. This is primarily because animals lack societies, or at least the sort of societies capable of learning and economic development. Even though many animals are able to learn new skills, they are extremely limited in their ability to share this knowledge or even pass it on to their offspring.

The result is that short-term welfare benefits to animals cannot be used as even a rough proxy for long-term improvements in the same way as they can for humans. So even if the short-term estimates suggest that animal welfare interventions are more cost-effective, it is certainly questionable whether this aspect dominates when considering the overall benefits.

This does not, of course, rule out the possibility that the most effective interventions could have an animal welfare element. For example, a shift in society towards vegetarianism would reduce the number of animals kept in poor conditions today, as well as improving human welfare in numerous ways (such as using fertile land more efficiently to grow crops rather than cattle). Moreover, if it could achieve a lasting improvement in societal values, it might have a large benefit in improved animal welfare over the long-term.

A push towards vegetarianism is one sort of value-shifting intervention. It is possible that this or another such intervention could be more effective than direct improvements to human welfare, but in order to assess this we need to model how changing societal values today will influence the behaviour of future generations. This should be a target for further research.

Humans are uniquely placed to pass on the benefits of interventions to the rest of society and to future generations, and if we ignore these future-compounding effects we may achieve less good than we could have. For many types of human welfare intervention, we can use the short-term benefits to humans as a proxy for ongoing improvements in a way that is not possible – and may be misleading – when it comes to improvements to animal welfare. Although it is difficult to quantify, this hidden benefit may be enough to make the best human-focussed interventions more cost-effective than the best animal-focussed ones, even if the reverse seems true looking at the short run.

Apples and oranges? Some initial thoughts on comparing diverse benefits

,

An important part of effective altruism is comparing the value of different altruistic endeavors. Many altruistic endeavors bring about different kinds of good things, for instance protection of children from incapacitating diseases, and extra years of quality education. To find the best causes, we need some way of evaluating these next to one another. How much extra education in the developing world is worth the same as an extra year of healthy life?

Answering such questions is notoriously tricky, and GWWC faces an even harder problem of answering them in such a way that other people are happy to use our judgements, and thus our recommendations. One can’t just opt out of answering them, for the same reason one shouldn’t choose one’s partner by their social security number just because it’s hard to weigh up a good sense of humor against kind eyes.

So how can we make such evaluations? GWWC research has been looking into it a bit, and this blog post will tell you some of what we think. We’ll look at various methods from economics and the social sciences, and discuss the advantages and disadvantages of different approaches.

Any attempt to compare the value of two things has two basic parts. The first step is often to parcel out all of the main ways each item might have benefits. For instance, if a person in Malawi doesn’t contract malaria next year, this will translate to various good things: less suffering and more joy for that person, less stress for their family, less congestion at the local hospital, a few thousand dollars of productive work, slightly different expectations among people nearby about how well their lives are likely to go.

This can go on for more steps – a few thousand dollars of productive work might be factored out in terms of increased prosperity for the family, and more prosperity spread among the wider world. Prosperity for the family might in turn be factored out as better nutrition for them, more education for their children, some investments that will bring them further prosperity, etc.

Another thing that often happens at this stage is that items are made more abstract. For instance one particular girl’s recovery from schistosomiasis might be equated to some number of ‘quality adjusted life years’ or ‘QALYs’. This makes evaluation more tractable. Now we can evaluate many similar things at once, a little bit inaccurately, instead of doing thousands of different evaluations. For instance if we convert many different health interventions into QALYs saved, then we can compare a QALY to a year of school, and we automatically have a comparison of lots of different health interventions.
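The arithmetic behind this kind of abstraction can be sketched in a few lines. The following toy example (all figures, and the QALY-per-school-year exchange rate, are invented purely for illustration) shows how converting many health interventions into a common unit means a single judgement call then compares all of them to education at once:

```python
def qalys_per_1000_dollars(cost_dollars, qalys):
    """Cost-effectiveness expressed in the common abstract unit."""
    return qalys / cost_dollars * 1000


def school_year_equivalents(qalys, qalys_per_school_year=0.4):
    """One exchange rate (here a made-up 0.4 QALYs per school year)
    converts any health benefit into an education-equivalent figure."""
    return qalys / qalys_per_school_year


# Two hypothetical programmes, each costing $50,000:
for name, qalys in [("deworming", 120), ("bednets", 300)]:
    print(f"{name}: {qalys_per_1000_dollars(50_000, qalys):.1f} QALYs/$1,000, "
          f"~{school_year_equivalents(qalys):.0f} school-year equivalents")
```

The point is not the particular numbers but the structure: once everything is denominated in QALYs, comparing a QALY to a school year settles thousands of pairwise comparisons in one step, at the cost of some inaccuracy in each.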

The second step is to actually compare the value of the items you end up with. This might involve for instance seeing how much money a person is willing to pay to avoid some suffering. You could do this by observing how much money and effort they invest in avoiding the flu, in flu season. Or you might ask them directly, 'if you could pay $1000 to avoid this surgery, would you take that offer?'. Or instead of looking at money they will sacrifice, you might just ask about how they feel. For instance you might ask a blind person repeatedly over a period how satisfied they are with their life, and do the same with a person who is not blind.

Comparing value can be done immediately without the previous step of converting things, or after so many iterations of the previous step as to turn health and education into unrecognizable buckets of value. Many ways of comparing the value of A against B have been developed, especially in the social sciences. I have collected some of them into a menu, which describes their upsides and downsides, and suggests when we might expect them to be most appropriate.

Note that the methods in the menu are generally comparing benefits to identifiable groups of people. We may care about people for whom it’s hard to make these measurements, such as future generations, but we’ll need different or further methodology to know how to compare these effects to the direct benefits we can measure. Nonetheless it’s valuable to understand how we should compare diverse benefits today, and this may be a key component in a more general analysis.

Also, we won't usually be able to choose freely from the list of methods. GWWC will probably not have the resources to go and find out how much an additional parent improves the life outcomes for a person in rural China. The hope is rather that this list might help guide choices of comparison when a few different methods happen to be available. I'll give you a few examples of evaluation methods on the list, then in the rest of this post I'll describe some of the key choices it offers and mention some of the factors that might suggest one choice over another.

Appetizers

To give you a concrete idea of what we are talking about, here are some examples of how we might use different methods from the list to figure out how good something is for a person:

Willingness to pay: if a person is willing to pay $100 for a textbook, you can infer that they prefer having the textbook to having $100, and therefore to other things they could buy for $100.

Instantaneous reports of subjective wellbeing: a person's phone pings them throughout the day and asks something like, 'zero to ten, how good do you feel right now?' Using this method, we can build a hedonic profile for different individuals and (hopefully) discover the correlates and causes of happiness, as well as their relative importance.

The standard gamble: A person is asked what probability of dying they would be willing to accept for an intervention that would improve their health in some way.

Averting behavior method: Use the total costs people pay to avoid extra sunburn as a lower bound for how much a healthy ozone layer is worth to them.
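Of these, the standard gamble has a particularly clean logic, which can be made explicit. In the standard textbook formulation (with death assigned utility 0 and full health utility 1), the indifference probability the person reports is itself the utility weight of their health state. A minimal sketch, assuming that formulation:

```python
def standard_gamble_utility(p_indifference):
    """A person in some impaired health state chooses between
    (a) remaining in that state for certain, and
    (b) a treatment giving full health with probability p, death with 1 - p.
    At the indifference probability p*, expected utilities are equal:
        u(state) = p* * u(full health) + (1 - p*) * u(death)
                 = p* * 1 + (1 - p*) * 0
                 = p*
    so the elicited p* is the utility weight of the state directly."""
    if not 0.0 <= p_indifference <= 1.0:
        raise ValueError("probability must lie in [0, 1]")
    return p_indifference


# Someone indifferent at a 90% chance of success implicitly rates the
# state at 0.9: a year lived in it counts as 0.9 QALYs rather than 1.
print(standard_gamble_utility(0.9))
```

The worse the state, the larger the risk of death the person will accept to escape it, and so the lower the implied utility weight.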

What to elicit: preferences, happiness, or another proxy of good?

Whether you ultimately care to optimize for good feelings, people getting what they want, or something else is a controversial moral question. I won’t address this question here except to note that different answers can lead us to prefer different measurement techniques. For instance, it is relatively easy to observe a person’s preference by looking at which things they choose. Their happiness in different situations is harder to observe, so you might do better just asking them directly if that’s what you care about. On the other hand, there are many reasons to suspect people’s answers to ‘how happy are you’ across time and between people don’t exactly correspond to the true landscape of happiness, so even if you cared about happiness you might take a person’s preferences seriously, if you thought they liked to be happy.

When the situation you want to evaluate will have ramifications for people other than the one(s) directly affected, the preferences and feelings of the people involved will be a poor proxy for the overall good or bad done. For instance, suppose you are considering educating a young woman in a developing country. This should help her, but it is thought to also have large effects on her children and others in her community. If you are hoping to eventually lift her region out of poverty through this type of action, you are relying mostly on good from such indirect effects. In this case, discovering how much the woman would like to be educated is not very useful. Even if she wouldn't like to be educated at all, it may be a good intervention. Similarly, if a person does not understand or care very much about their own future, their views on the value of improving it are not very useful. Asking a child how much they would like to be dewormed is not very useful.

One might deal with this in the earlier step, by converting 'one child dewormed' into a small amount of health, education, and physical development. A child might know better how much they dislike being sick, or adults may be able to shed light on the long-term costs of malnourishment. Or you might just abstract a child's illness to generic illness, and ask how much better-informed people wish to avoid such illness.

How to elicit: ask or observe?

In many situations what people think or say they value differs from what they choose given the opportunity. There are many possible reasons for this, from hypocrisy to difficulty envisaging the situation of interest accurately and without a biased focus. To the extent one believes a person’s behavior better indicates their ‘real’ preferences, observing choices will be more accurate than asking. However a person’s speech may better reflect their preferences on reflection than their behavior, which usually comprises a combination of considered preferences and unendorsed urges. If you wish to respect only the former, this is some reason to ask instead of observing.

Asking also has the benefit of making it relatively easy to isolate the issue of interest; natural situations rarely allow one to pin a preference to a specific factor, as so many factors change. For instance, if you see that parents choose to send their children to school at substantial cost, you might like to infer that they expect the education to benefit their children substantially. But it could also be that they expect their children to look poor if they don’t go to school, and be treated worse. Then if the school didn’t exist their children would not be worse off – they are just worse off if the school does exist, and they don’t go to it.

Who to elicit it from?

The best people to evaluate A and B would seem to be people who have experienced both, and are currently either experiencing both or neither. This way they will hopefully be familiar with both items, and not too biased by the details of one being more salient at the moment.

Often it will be hard to find people in exactly that situation, though. You will often face a trade-off between people who are familiar with one item but not the other (so they know more but might be biased) and people who are familiar with neither (so they know little, but are probably less biased).

This is all only an issue if you are interested in evaluating A and B for the person directly involved. If you want to know about the overall social benefits of fewer people being blind for instance, compared to fewer people being crippled, you may be better off asking someone who is neither blind nor crippled.

Comparison or separate evaluation?

Suppose you want to know whether it is better to get a five year old to go to school for an extra year, or to avoid them getting malaria for one year, and you intend to figure it out by asking their parents. One way you could do it is to ask each parent ‘do you think your child will be better off if she goes to school this year and gets malaria, or if she doesn’t go to school or get malaria this year?’. Another is to ask some parents how valuable they think going to school is, and other parents how valuable they think avoiding malaria is, then comparing those values.

If you ask a person to compare two things at once, you will often get different answers than if you ask them to evaluate each one separately. One reason is that when they can see things side by side, they compare on the characteristics that seem salient. When they can only see one thing, they tend to ignore a characteristic if they don't feel they can get any grasp on how good its actual value is, even if it seems important. In our earlier example, suppose that for both the school and the health interventions, the parents are told how much extra income their child is likely to have in the future as a result of the project, and zero-to-ten how much their child will like it at the time. Suppose education will bring about much more future income than the health intervention on offer, but makes the child 2/10 happy instead of 8/10 happy. Then when the parents look at both interventions together, they might weigh up the present costs and future gains and choose the education intervention. When they look at the projects separately, though, they might have fairly similar mental images of two different largish gains in future income, whereas happiness looks quite different on the given scale. So implicitly they focus more on the difference in happiness, and the comparison could come out the other way.

***

These have been several of the considerations which make some evaluation methods more appropriate than others, at different times. The social science literature has a lot more to say. Hopefully our menu nonetheless summarizes enough important points to strengthen our research comparing interventions, and can be built upon in the future as we carry out such comparisons.

Crossposted from the Giving What We Can blog