Parenthood and effective altruism



The choice to become a parent is one of the biggest and most binding life decisions we can make. How might that decision go if you want to incorporate principles of effective altruism in the big decisions of your life? In considering the costs and attractions of parenthood, I will make the case that having children can indeed be consistent with those principles. I’ll also explain why I think the EA movement should be open to the idea of parenthood as part of an EA life.

How does parenthood fit into a life based on effective altruism?

I am not a parent myself: I have to make my estimation of the costs and some of the benefits from the data I can gather, including the experiences and opinions of parents. However, this decision can only be made from such a point of inexperience. Indeed, one of the reasons this is a difficult decision is that there are few realistic options for changing one’s mind after one has had a child, and for women the option is only open for a limited (but unknown) timeframe.

What are we asking?

It can be useful to consider big decisions abstractly, to determine the ideal sequence of actions to take in our life without consideration for our individual psychologies. This can make us aware of the decision space, and encourage us to cultivate helpful desires. However, when we make decisions we must consider who we are. Within the effective altruist movement we seek to ask how we as individuals can achieve the most good possible. If a proposed action would render a particular person miserable, it’s highly unlikely they will be able to stick to it. Moreover, the question of potential parenthood is usually being asked by two people whose relationship offers them mutual support. The well-being and psychological needs of both those people must factor in the analysis.

What are the costs of a child?

Estimates of the financial costs of child-raising range from the considerable to the enormous, but the figures available are not robust or free from bias. A UK insurance group has estimated £220,000 to raise a child from birth to 21[1], with childcare (£60k) and education (£70k) forming the bulk of this. The number has been criticised on a number of grounds[2] by economist Tim Harford,[3] who also pointed out the obvious benefit to a company which sells life insurance of inflating the accepted cost of parenthood. The Joseph Rowntree Foundation and the Child Poverty Action Group made a more conservative estimate of £148,000 for a couple raising one child to the age of 18.[4] This figure is derived from their minimum income standard (or MIS[5]), which is their calculation of the minimum support needed for two adults and one child. Currently set at £468 per week, the MIS outstrips the minimum wage two people would earn in the UK, suggesting that two adults earning minimum wage would be unable to support any children. The fact that many do suggests the MIS is not actually a minimum.[6]

While meeting the needs of your child will cost a significant amount – especially paid childcare in the early years – the manner in which you do it has some flexibility. As adults, as part of an EA life, we have chosen to limit our consumption and live more simply. These simpler lives may certainly exceed a ‘minimum standard’ – they can be downright lovely. Some of the reasons for this include cultivating tastes for less expensive pleasures and recreations, as well as leveraging non-financial resources such as education, or proximity to friends and family. The same principles, I believe, can be extended to our children without deprivation.

Bryan Caplan’s excellent book Selfish Reasons to Have More Kids[7] reviews the evidence from 40 years of adoption and twin studies with a frankly liberating result: barring actual deprivation or trauma, children are largely who they are going to be as a result of their genetic makeup. In long-term measures of well-being, education and employment, parental influence exerts a temporary effect which disappears when we are no longer living with our parents. So costly added extras (music lessons, coaching and tutoring, private school fees) are probably not going to change your child’s life in the long term. (However, data on the antenatal environment suggests benefits from taking iodine, and from avoiding ice-storms and licorice, during pregnancy.[8]) Sharing time together and finding common interests can build a good relationship and help a child develop without major costs.

Taking this information into account, I estimate that for two parents living in the UK, the cost of raising a child to independence while providing what’s needed for their health, education and wellbeing will be between £150,000 and £200,000. If we estimate 18-21 years for them to reach such independence, this gives an approximate range of £3500-£4800 per parent per year. But we don’t ‘lose’ our children when they are no longer dependent upon us: it might be better to consider the cost as a lifetime cost, so that if you become a parent at 30 and expect to live till you are 80, the cost is more like £2000 per parent per year over one’s remaining years of life.
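For readers who want to check the arithmetic, a quick back-of-envelope sketch (using the essay’s own figures; the 21-year and age-30-to-80 assumptions are simply the endpoints used above):

```python
# Back-of-envelope check of the figures above: total cost split between
# two parents, first over the years of dependence, then over a parent's
# remaining lifetime. Figures are the essay's estimates, not data.
total_low, total_high = 150_000, 200_000    # GBP, raising one child
years_to_independence = 21                  # upper end of the 18-21 range

per_parent_year_low = total_low / (2 * years_to_independence)    # ~3,600
per_parent_year_high = total_high / (2 * years_to_independence)  # ~4,800

# Spread instead over remaining life: becoming a parent at 30, living to 80.
remaining_years = 80 - 30
midpoint = (total_low + total_high) / 2
per_parent_lifetime = midpoint / (2 * remaining_years)  # 1,750, i.e. ~2,000
```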

This considerable cost can be put into perspective with other spending decisions. The annualised lifetime cost of parenthood is less than many spend annually on a car and petrol. It is less than the cost differences at stake in deciding which city to live in, or whether to live rurally. It’s much less than the difference in wages between two potential careers. For this reason I think it is a real mistake to consider the financial costs of parenthood as being categorically different to many of the other lifestyle choices we make. It would seem irrational to choose a profession that would have a significant adverse effect on your well-being for the sake of a £2000 p.a. increase in wages.

In addition to straightforward financial outlay, parenthood comes with costs of time and opportunity. Loss of flexibility and leisure mean you won’t be able to take all opportunities (like taking on extra work to make more money or advance your career). Late notice travel is unlikely to be possible. You will probably be sleep deprived for a large part of the first year or more of your child’s life, and this may impact on your work performance. The work of parenting will take time, though some of it may be outsourced at the cost of increased financial outlay.

So, this baby is going to cost you about £2000 a year and take a variable but large amount of your time, which will equate in the end to another chunk of money. For parents taking parental leave or working less than full time to provide childcare, there may be a delay to career progression as well as to income. Does this represent an unacceptably large sum of money and time to be compatible with the goal of maximising our impacts for the good?

I would argue it’s possible to integrate this cost into one’s personal rather than charitable budgets. When deciding if we would try to start a family, my partner Toby and I planned a ‘baby budget’: we would each contribute a portion of our income to the fund for our child’s needs. By doing this we can still meet our giving pledges, but adjust other spending patterns. In effect, we would channel money from ourselves to this other person, without reducing our commitments to give. If the costs turn out to be higher than we can absorb this way, I may reduce my giving, but am confident of still being able to give over £1,000,000 in the course of my career.

Likewise, we would plan to continue to work in our careers full time (after a period of parental leave). We envisage that our personal time budgets would change pretty dramatically. I don’t work all the hours in a day (even if it sometimes feels like it!) and neither does my partner. We spend time talking over coffee, we listen to music, we read, we walk, we hang out with friends, I knit and sometimes I watch a lot of crappy television. How might that time budget change? There would be fewer sleep-ins, and more wakeful nights. I would probably exchange bad TV for audiobooks while breastfeeding. Our walks would be shorter and slower, and would be more likely to end in a playground than a pub.

Some of this – the sleeplessness, bits of pregnancy and the process of childbirth – would not be fun. Again, I don’t think this represents a categorical difference from the sacrifices we elect to make in pursuing other long-term goals. In my case, I’ve worked 84 hours of night shifts in a week, a feat which requires several days to recover physically and mentally, because it was integral to training in my career. I’ve studied for hours at either end of a full-time working day because I want to progress in my specialty.[9] If the payoff is worth it, I am prepared to do this too.

So what’s the gain?

First, there can be personal benefits. While parenting is work, it’s also a challenge and a discovery. To observe and take part in the growth of a human from a ball of cells to someone who can talk, think, reason and love sounds to me like a most amazing journey. I expect that parenthood will challenge me to the end of my character. I expect to learn and change and grow.

I don’t think you have to become a parent to learn these lessons, but nor do I think I’ll learn them just by working at my paid job. I might learn them from traveling widely, from climbing mountains, from learning another language, by arguing with strangers in a bar at 3am, by reading all of Derek Parfit’s works, by singing songs with friends in the backyard on a sunny afternoon. For most of us, living a life means doing a lot of things that don’t necessarily earn money or prestige. Some people might be able to work 80 hours a week, to just work and sleep and thrive on that. My observation is that even among highly motivated and talented coworkers, such people are the exception rather than the rule. If I did that I am reasonably sure I’d be burned out and unhappy within 5 years, making such a plan not merely demanding but directly self-defeating.

The desire to be a parent – to have children, either biologically or by other means – does not always arise from a clear-eyed appraisal of the potential benefits. The (faintly derisive) term ‘baby fever’ was coined to denote the intense desire for children that many people experience without being able to fully explain, and possibly in the face of their own analysis of the arguments for and against having a child. Some preliminary psychological research suggests the phenomenon has complex origins and is observable in both men and women.[10]

It is clear that the longing for children that some people experience cannot be overcome by clearly viewing the obstacles to and pitfalls of parenthood. This desire can be so strong that, even when achieving parenthood seems impossible, people’s wish to become parents will drive them to extraordinary efforts. Fertility clinics treat patients prepared to endure years of waiting[11], followed by uncomfortable and invasive testing, difficult procedures and at least a 65% chance of failure[12] in an effort to become parents. If they were able to rationalise themselves out of wanting children, they would stop before exhausting every possible resource – medical, emotional and financial – in efforts to start a family that might span a decade. In the light of this reality, the rationalist suggestion I have encountered – that one guard against a desire to become a parent by pre-emptively being sterilised before the desire has arisen – seems a recipe for psychological disaster.

I don’t have the answer to the origin of the longing for children that many experience. It’s almost certainly due to a complex mixture of biological and social factors. It might even be an evolutionary trick. However, the fact remains that this desire is real and difficult to manage if unfulfilled. It can’t be simply discounted or argued away as irrational. It needs to form part of our considerations in whether or not we choose to (attempt to) become parents, because we must consider how tenable it is to sacrifice our chance to fulfil these desires.

Finally, we may ask whether parenthood – and the resulting person created – will benefit the wider world. This is a harder good to calculate or rely upon. The inheritance of specific character traits is difficult to predict. It’s certainly not guaranteed that your offspring will embrace all of your values throughout their lifetime. The burden of onerous parental expectations is extensively documented, and it would appear foolish to have children on the expectation they will be altruistic in the same way you are. However, your child is likely to resemble you in many important respects. By adulthood, the heritability of IQ is between 0.7 and 0.8,[13] and there is evidence from twin studies of significant heritability of complex traits like empathy.[14] This would give them a high probability of adding significant net good to the world.

For EAs making this decision, there is a further benefit in changing how the world sees the EA life. A core message of Giving What We Can is that many people can do a significant amount of good. We are so comparatively wealthy that without significant sacrifice, we can help thousands of people. If the way we live implies that to make this difference you must sacrifice parenthood, this will drastically narrow the range of people who can consider doing the same.

How do we weigh these up?

It’s a complicated ledger.

On the one hand: the vast bulk of your leisure time for perhaps five years and a significant portion for the next 13, a less flexible work and home life, the emotional cost of knowing your heart may never be your own again, and finally your share of the financial cost of £150,000-£200,000. Two parents might be able to save 70 lives by donating this money to the Against Malaria Foundation.

On the other hand: we demonstrate that the sacrifice required to make the world significantly better does not demand a dramatic deviation from what most people consider ‘a normal life’; we gain all the good that might be contained in one life; two parents grow and develop and enjoy a lifetime with their child, and, for some, there is the fulfilment of a deep desire.

The nature of an individual will almost certainly play a determining role. For myself, I could theoretically cut my personal spending anyway, work more and be able to give more money, to save more lives. I could live just to work, earning to give. But I know such a life plan would be self-defeating and would not last. I’m happy donating 50% of my income over my life, but if I also chose not to have a child simply to raise that amount to 55%, then that final 5% would cost me more than all the rest, to the extent I don’t believe I could continue to do so. Julia Wise writes beautifully about how it changed her outlook on life to allow herself the possibility of children,[15] and it’s a feeling that I totally understand because I’ve felt the same. I believe that by making this decision to spend my personal money and time budgets in this way, I’m deciding to meet a major psychological need and to plan a life I can continue to live in the long term. I think this decision will also benefit my future child, and I think there is a significant chance it will benefit the wider world.

Is EA “family friendly”? Why does it matter?

Within the EA movement I’ve sometimes encountered a fairly dismissive attitude to parenthood in the abstract. Sometimes the best on offer is a sort of resigned tolerance, with EAs advocating we not “shun” people just for having children.[16] At other times I’ve seen parenthood characterised as foolish, selfish or both, to be discouraged with great zeal. I genuinely wonder where this hostility comes from. Is it simply that the hostile attitudes I’ve encountered have been expressed by people quite early in their lives? The onset of a desire to have a family may post-date one’s third decade (creating difficulties for the half of our species with a limited reproductive lifespan). Possibly it’s so prevalent because in our society women are more likely to express a desire for children than men are, and men dominate some internet EA conversations.

I hope these attitudes aren’t representative. Providing some counter evidence is that since announcing that we are expecting a baby, my husband Toby and I have both experienced a universally warm and excited response from friends and colleagues in the EA community. I’m reasonably sure – and I certainly hope – they aren’t just deciding not to shun us.

I think it’s vital that as a movement, EA enthusiastically embraces parents and potential parents. In order to spread EA values, and to build a robust movement, these values must be tenable as part of a whole life. We are not machines who can spend every waking moment working or earning money to give. We are not able to ignore our fundamental needs, or to eliminate the desires that divert us from spending every moment in pursuit of a single goal. We need to take our psychological needs into account as we set the goals and paths for our lives.

Finally, I believe we need to recognise that to understand and engage with our complex world, we need to encompass a range of experiences and perspectives. Recent criticisms of the EA movement have raised the concern that we risk cultivating a monoculture.[17] Parenthood is only one axis of variation we can embrace; education, gender and ethnicity are several others, as we continue to build a movement that really does strive to achieve the best it can in the world.


[2] For instance, more than £50,000 of their figure assumes parents will pay university fees for their adult offspring (rather than take a universally available fee loan, repayable when their earnings reach a threshold which is currently set above the median wage).

[3] Harford, Tim (presenter) More or Less (audio podcast), 31 Jan 2014. [Accessed: 31 Mar 2014]

[4] Meikle J “Cost of raising child in UK increases to nearly £150,000” The Guardian. [Accessed: 23 Jan 2014]

[5] “Minimum Income Standard” [Accessed: 23 Jan 2014]

[6] As advocates for increased welfare support in the UK, we might expect the JRF to have some bias when deciding which costs to include as a ‘minimum’.

[7] Caplan B, Selfish Reasons to Have More Kids: Why Being a Great Parent is Less Work and More Fun Than You Think Basic Books, 2011

[8] “The Biodeterminist’s Guide to Parenting” [blog] December 12, 2012. [Accessed: 20 Jan 2014]

[9] Training as a doctor has also involved some even less pleasant things, but it’s been suggested to me that most people don’t enjoy reading about them in essays like this.

[10] Brase, GL and Brase, SL, “Emotional regulation of fertility decision making: What is the nature and structure of “baby fever”?” Emotion, Vol 12(5), Oct 2012, 1141-1154.

[11] “NHS Choices: Can I get IVF treatment on the NHS” [Accessed 23 January 2014]

[12] “NHS Choices: IVF” [Accessed 23 January 2014]

[13] Plomin R, Pedersen NL, Lichtenstein P, McClearn GE. Variability and stability in cognitive abilities are largely genetic later in life. Behav Genet. 1994 May;24(3):207-15. PubMed PMID: 7945151.

[14] Davis MH, Luce C, Kraus SJ. The heritability of characteristics associated with dispositional empathy. J Pers. 1994 Sep;62(3):369-91. PubMed PMID: 7965564.

[15] Wise, J. “Cheerfully” Giving Gladly [blog] June 8, 2013. [Accessed: 01 Apr 2014]

[16] Tomasik, B “The Cost of Kids” Essays on Reducing Suffering, August 4, 2012. [Accessed: 02 Apr 2014]

[17] Kuhn, B “A critique of Effective Altruism” [blog]. [Accessed: 21 Jan 2014]

Thanks to Michelle Hutchinson and Toby Ord for their useful feedback on earlier drafts of this piece.

Will MacAskill on normative uncertainty


Will MacAskill recently completed his DPhil at Oxford University and, as of October 2014, will be a Research Fellow at Emmanuel College, Cambridge.

He is the cofounder of Giving What We Can and 80,000 Hours. He’s currently writing a book, Effective Altruism, to be published by Gotham (Penguin USA) in summer 2015.

Luke Muehlhauser: In MacAskill (2014) you tackle the question of normative uncertainty:

Very often, we are unsure about what we ought to do… Sometimes, this uncertainty arises out of empirical uncertainty: we might not know to what extent non-human animals feel pain, or how much we are really able to improve the lives of distant strangers compared to our family members. But this uncertainty can also arise out of fundamental normative uncertainty: out of not knowing, for example, what moral weight the wellbeing of distant strangers has compared to the wellbeing of our family; or whether non-human animals are worthy of moral concern even given knowledge of all the facts about their biology and psychology.

…one might have expected philosophers to have devoted considerable research time to the question of how one ought to take one’s normative uncertainty into account in one’s decisions. But the issue has been largely neglected. This thesis attempts to begin to fill this gap.

In the first part of your thesis you argue that when the moral theories to which an agent assigns some credence are cardinally measurable (as opposed to ordinal-scale) and they are intertheoretically comparable, the agent should choose an action which “maximizes expected choice-worthiness” (MEC), which is akin to maximizing expected value across multiple uncertain theories of what is desirable.
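When theories are cardinal and intertheoretically comparable, MEC works just like expected value, with moral theories playing the role of states of the world. A minimal sketch (the theories, credences and choice-worthiness numbers here are invented purely for illustration):

```python
# Maximise Expected Choice-Worthiness (MEC): weight each theory's
# cardinal choice-worthiness by your credence in it, then pick the
# option with the highest expectation. All figures are illustrative,
# and scores are assumed to be on a common intertheoretic scale.
credences = {"utilitarianism": 0.6, "deontology": 0.4}

choice_worthiness = {
    "utilitarianism": {"donate": 10, "abstain": 0},
    "deontology":     {"donate": 2,  "abstain": 5},
}

def expected_choice_worthiness(option):
    return sum(c * choice_worthiness[t][option] for t, c in credences.items())

best = max(["donate", "abstain"], key=expected_choice_worthiness)
# donate: 0.6*10 + 0.4*2 = 6.8 beats abstain: 0.6*0 + 0.4*5 = 2.0
```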

I suspect that result will be intuitive to many, so let’s jump forward to where things get more interesting. You write:

Sometimes, [value] theories are merely ordinal, and, sometimes, even when theories are cardinal, choice-worthiness is not comparable between them. In either of these situations, MEC cannot be applied. In light of this problem, I propose that the correct metanormative theory is sensitive to the different sorts of information that different theories provide. In chapter 2, I consider how to take normative uncertainty into account in conditions where all theories provide merely ordinal choice-worthiness, and where choice-worthiness is noncomparable between theories, arguing in favour of the Borda Rule.

What is the Borda Rule, and why do you think it’s the best action rule under these conditions?

Will MacAskill: Re: “I suspect that result will be intuitive to many.” Maybe in your circles that’s true! Many, or even most, philosophers get off the boat way before this point. They say that there’s no sense of ‘ought’ according to which what one ought to do takes normative uncertainty into account. I’m glad that I don’t have to defend that for you, though, as I think it’s perfectly obvious that the ‘no ought’ position is silly.

As for the Borda Rule: the Borda Rule is a type of voting system, which works as follows. For each theory, an option’s Borda Score is equal to the number of options that rank lower in the theory’s choice-worthiness ordering than that option. An option’s Credence-Weighted Borda Score is equal to the sum, across all theories, of the decision-maker’s credence in the theory multiplied by the Borda Score of the option, on that theory.

So, for example, suppose I have 80% credence in Kantianism and 20% credence in Contractualism. (Suppose I’ve had some very misleading evidence….) Kantianism says that option A is the best option, then option B, then option C. Contractualism says that option C is the best option, then option B, then option A.

The Borda scores, on Kantianism, are:

A = 2

B = 1

C = 0

The Borda scores, on Contractualism, are:

A = 0

B = 1

C = 2

Each option’s Credence-Weighted Borda Score is:

A = 0.8*2 + 0.2*0 = 1.6

B = 0.8*1 + 0.2*1 = 1

C = 0.8*0 + 0.2*2 = 0.4

So, in this case, the Borda Rule would say that A is the most appropriate option, followed by B, and then C.
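The worked example above can be computed mechanically; a small sketch reproducing the 80%/20% Kantianism/Contractualism case:

```python
# Credence-weighted Borda Rule for merely ordinal theories, reproducing
# the Kantianism/Contractualism example above. An option's Borda Score
# under a theory is the number of options ranked strictly below it.
rankings = {                       # best-to-worst orderings
    "Kantianism":     ["A", "B", "C"],
    "Contractualism": ["C", "B", "A"],
}
credences = {"Kantianism": 0.8, "Contractualism": 0.2}

def borda_score(option, ranking):
    # options ranked strictly lower than `option` in this ordering
    return len(ranking) - 1 - ranking.index(option)

def credence_weighted_borda(option):
    return sum(c * borda_score(option, rankings[t])
               for t, c in credences.items())

scores = {o: credence_weighted_borda(o) for o in "ABC"}
# A ≈ 1.6, B ≈ 1.0, C ≈ 0.4: A is the most appropriate option
```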

The reason we need to use some sort of voting system is that I’m considering, at this point, only ordinal theories: theories that tell you that it’s better to choose A over B (alt: that “A is more choice-worthy than B”), but won’t tell you how much more choice-worthy A is than B. So, in these conditions, we have to have a theory of how to take normative uncertainty into account that’s sensitive only to each theory’s choice-worthiness ordering (as well as the degree of credence in each theory), because the theories I’m considering don’t give you anything more than an ordering.

The key reason why I think the Borda Rule is better than any other voting system is that it satisfies a condition I call Updating Consistency. The idea is that increasing your credence in some particular theory T1 shouldn’t make the appropriateness ordering (that is, the ordering of options in terms of what-you-ought-to-do-under-normative-uncertainty) worse by the lights of T1.

This condition seems to me to be very plausible indeed. But, surprisingly, very few voting systems satisfy that property, and those others that do have other problems.

Luke: One problem for the Borda Rule is that it is, as you say, “extremely sensitive to how one individuates options” — a problem analogous to the problem of clone-dependence in voting theory. You tackle this problem by modifying the Borda Rule to include a measure over the set of all possible options. Could you explain how that works? Also, is this modification to the Borda Rule novel to your thesis?

Will: A measure is a way of giving sense to the size of a space. It allows us to say that some options represent a larger part of possibility space than others. This is an intuitive idea: ‘drinking tea tomorrow’ represents a larger portion of possibility space than ‘drinking tea with my left hand tomorrow at 3pm’. With the idea of a measure on board, we can rewrite our definition of the Borda Rule, as follows (I’ll ignore the possibility of equally choice-worthy options, as that makes the definition a little more complicated):

For each theory, an option’s Borda Score is equal to the sum total of the measure of the options that rank lower in the theory’s choice-worthiness ordering than that option. An option’s Credence-Weighted Borda Score is equal to the sum, across all theories, of the decision-maker’s credence in the theory multiplied by the Borda Score of the option, on that theory.

So, suppose that, according to some theory Ti, A>B. On the old definition of the Borda Rule, A gets a Borda Score of 1. But if option B is split into options B’ and B”, such that A>B’=B”, then A gets a Borda score of 2. The fact that A’s score has changed just because of how you’ve individuated options is problematic.

But let’s use the new definition, which incorporates a measure, and suppose that the measure of A is 0.5 and the measure of B is 0.5. If so, then, when the decision-maker faces options A and B, then A gets a Borda Score of 0.5, on Ti. But when option B is split into options B’ and B”, then the measure is split, too. Suppose that B’ and B” are equally large. If so, then B’ would have a measure of 0.25 and B” would have a measure of 0.25. So A’s Borda score, on Ti, would be 0.5, just as before.
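A quick sketch of the individuation-invariance point above (the identifiers B1 and B2 stand in for B′ and B″; the 0.5/0.25 measures follow the worked example):

```python
# Measure-weighted Borda Score: the total measure of the options ranked
# strictly below a given option. Splitting B (measure 0.5) into B1 and
# B2 (0.25 each) leaves A's score unchanged, as in the example above.
def measure_borda(option, ranking, measure):
    below = ranking[ranking.index(option) + 1:]
    return sum(measure[o] for o in below)

before = measure_borda("A", ["A", "B"], {"A": 0.5, "B": 0.5})
after = measure_borda("A", ["A", "B1", "B2"],
                      {"A": 0.5, "B1": 0.25, "B2": 0.25})
assert before == after == 0.5  # A's score is individuation-invariant
```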

This modification to the Borda Rule is novel, though the idea was given to me by Owen Cotton-Barratt, so I can’t take credit! I guess the reason it hasn’t been suggested in the voting theory literature is because it might seem obvious that every candidate gets the same measure. But perhaps you could think of the ‘space’ of possible political positions (which would be easy if it were really a left-right spectrum), and assign candidates a measure based on how much of this space they take up. That could possibly allow for the Borda Rule to avoid problems to do with clone-dependence. But I think that for actual voting systems, the Schulze method is better than the Borda Rule. It’s clone-independent even without a measure, and is much less vulnerable to strategic voting than the Borda Rule is.

Luke: What is the relevance of Arrow’s impossibility theorem to your suggested use of a modified Borda Rule for handling normative uncertainty?

Will: I suggest that, in conditions of ordinal theories, we should exploit the analogy with voting. But that analogy with voting means that we’ll run into an analogue of Arrow’s Impossibility Theorem: the result that no voting system can satisfy all of a number of highly desirable properties.

There are a few ways to formulate the impossibility result. The strongest, in my view, is to say that any voting system that satisfies other more essential properties must violate Contraction Consistency, where Contraction Consistency is defined as follows:

Let A be the option set, and M be the set of maximally appropriate options within A. Let S be a subset of A that contains all members of M. The condition is: an option is maximally appropriate, given option set S, iff it is a member of M.

It’s a condition that you’ve got to be careful how to formulate. I don’t go into that in my thesis. But some violations of it are intuitively clearly irrational. E.g. imagine you’ve got the options of blueberry ice cream or strawberry ice cream. You currently prefer blueberry. But then you discover that the restaurant also serves chocolate ice cream, and so you switch your preference from blueberry to strawberry, even though your assessment of the relative values of blueberry and strawberry hasn’t changed. That seems irrational – e.g. it would suggest that you should spend resources trying to find out if you have available to you an option that you know you won’t take.

I think that violating Contraction Consistency is a problem for the Borda Rule. But it’s a problem that affects all voting system analogues. So it’s something that we’ve got to live with – it’s just unfortunate that we have (or ought to have) credence in merely ordinal theories.

There is a second response as well, which is to distinguish the Narrow and Broad versions of the Borda Rule. The Narrow version assigns Borda Scores within an option-set. The Broad version assigns Borda Scores across all possible options. It’s only the Narrow version that violates Contraction Consistency. But the Broad version has its own weirdnesses. Suppose that you’ve got a situation:

99%: T1: A>B

1%: T2: B>A

Where T1 and T2 are merely ordinal theories. It might seem obvious that you should pick A in that situation. But you can’t infer that from that case, according to the Broad Borda Rule. Instead, you’ve got to look at how A and B rank in T1’s and T2’s orderings of all possible options. If A and B are very close on T1 but very far apart on T2, then B might be the most appropriate option. So the Broad Borda Rule is very difficult to use. And it gives results that seem wrong to me – as if you’re ‘faking’ cardinality where there is none.

So my general view on this is that any account you have will have deep problems. Endorsing a particular view involves carefully weighing up different strengths and weaknesses; there’s no obviously correct position. (This becomes a theme when you start working on normative uncertainty. To an extent, this should be expected: we’re dealing with messy nonideal agents, who don’t have perfect access to their own values or to the normative truth).

Luke: Your thesis covers many other interesting topics; we won’t try to cover them all here. How would you summarize the other major “takeaways” you’d most want people to know about from your thesis?

Will: The “Maximise Expected Choice-Worthiness” approach to moral uncertainty is the best approach. It is able to respond to a number of objections that have been levelled against it.

If you think that you can’t compare choice-worthiness across theories, then you should normalise different theories at their variance. But I think that the arguments for intertheoretic incomparability don’t work. Instead, you should feel comfortable about using your intuitions about how different theories compare.

We can make sense of the idea of two theories T1 and T2 having exactly the same value-ordering over options, but T1 thinking that everything is twice as important as T2 does. So ‘utilitarianism’ really refers to a class of theories, each with a different level of amplification.

Most of our intuitions about different Newcomb and related problems can be captured by maximising expected choice-worthiness over uncertainty about whether evidential or causal decision theory is true (with much higher credence in causal decision-theory than evidential decision theory). Taking decision-theoretic uncertainty into account puts causal decision theory on pretty strong grounds — you can respond to the intuitive and “Why Ain’cha Rich?” arguments in favour of evidential decision theory.
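As a rough sketch of how that works in the standard Newcomb case: the predictor accuracy, payoffs, and credences below are all illustrative assumptions, and the sketch assumes choice-worthiness is comparable across the two theories in dollar terms:

```python
acc = 0.99               # predictor accuracy (illustrative)
M, k = 1_000_000, 1_000  # opaque-box and transparent-box payoffs

# Choice-worthiness of one-boxing / two-boxing under each theory:
edt = {"one": acc * M, "two": (1 - acc) * M + k}  # EDT: act is evidence
p = 0.5                  # CDT: box already filled with some probability p
cdt = {"one": p * M, "two": p * M + k}            # two-boxing dominates by k

cred = {"cdt": 0.9, "edt": 0.1}  # much higher credence in CDT
ecw = {a: cred["cdt"] * cdt[a] + cred["edt"] * edt[a] for a in ("one", "two")}
# One-boxing maximises expected choice-worthiness here: the ~$1M at stake
# under EDT dwarfs the $1k that CDT puts on two-boxing.
```

Even with 90% credence in causal decision theory, the high-stakes asymmetry recovers the one-boxing intuition, which is the sense in which the framework "captures" both sets of intuitions.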

Moral philosophy provides a bargain in terms of gaining new information: doing just a bit of philosophical study or research can radically alter the value of one’s options. So individuals, philanthropists, and governments should all spend a lot more resources on researching and studying ethics than they currently do.

Even if you think that continued human survival is net bad, you should still work to prevent near-term human extinction, in case the human race gets evidence to the contrary over the next few centuries. (Well, this is true given a few fairly controversial premises.)

Luke: Thanks, Will!

Good Done Right: a conference on effective altruism


Registration for Good Done Right, a conference on effective altruism, is now open.

The conference will take place from the 7th to the 9th of July at All Souls College, one of the constituent colleges of the University of Oxford. It seeks to bring together leading thinkers to address issues related to effective altruism in a shared setting. Speakers will include Derek Parfit (Oxford), Thomas Pogge (Yale), Rachel Glennerster (MIT Poverty Action Lab), Nick Bostrom (Oxford), Norman Daniels (Harvard), Toby Ord (Oxford), Jeremy Lauer (WHO), William MacAskill (Cambridge), Larissa MacFarquhar (the New Yorker), Nick Beckstead (Oxford), and Owen Cotton-Barratt (Oxford). The speakers are drawn primarily from within moral philosophy, but there will also be academics who specialize in development and health economics. There will also be a conference dinner on the 8th in the Hall at All Souls.

You can learn more about the conference at its official website, which also contains instructions for how to register and suggestions for finding accommodation.

2014 Weekend Away


The Centre for Effective Altruism (CEA) will be holding the 2014 Weekend Away at Atlantic College in Wales from 27th to 29th of June. (For a review of the previous Weekend Away, see this post by Holly Morgan.)

The weekend will be filled with seminars on various EA topics and outdoor activities, as well as ample opportunities for socialising and networking. The price is yet to be determined, but will simply cover the cost of rent and food for the weekend. There will be a group travelling from Oxford on Friday 27th, which participants are welcome to join. You are also free to make your own travel arrangements.

CEA hopes to be able to accommodate everyone who is interested in attending, but depending on demand this may not be possible.

Please use this Google form to RSVP. Please note that in the event of oversubscription an RSVP does not guarantee a place, but does increase the likelihood of securing one.

Paul Christiano on cause prioritization


Paul Christiano is a graduate student in computer science at UC Berkeley. His academic research interests include algorithms and quantum computing. Outside academia, he has written about various topics of interest to effective altruists, with a focus on the far future.  Christiano holds a BA in mathematics from MIT and has represented the United States at the International Mathematical Olympiad. He is a Research Associate at the Machine Intelligence Research Institute and a Research Advisor at 80,000 Hours.

Pablo: To get us started, could you explain what you mean by ‘cause prioritization’, and briefly discuss the various types of cause prioritization research that are currently being conducted?

Paul: I mean research that helps determine which broad areas of investment are likely to have the largest impact on the things we ultimately care about. Of course a huge amount of research bears on this question, but I’m most interested in research that addresses its distinctive characteristics. In particular, I’m most interested in:

  1. Research that draws on what is known about different areas in order to actually make these comparisons. I think GiveWell Labs and the Copenhagen Consensus Center are the two most salient examples of this work, though they have quite different approaches. I understand that the Centre for Effective Altruism (CEA) is beginning to invest in this area as well. I think this is an area where people will be able to get a lot of traction (and have already done pretty well for the amount of investment I’m aware of) and I think it will probably go a very long way towards facilitating issue-agnostic giving.
  2. Research that aims to understand and compare the long-term impacts of the short-term changes which our investments can directly bring about. For example, research that clarifies and compares the long-term impact of poverty alleviation, technological progress, or environmental preservation, and how important that long-term impact is. This is an area where answers are much harder to come by, but even slight improvements in our understanding would have significant importance for a very broad range of decisions. It appears that high-quality work in this area is pretty rare, though it’s a bit hard to tell if this is due to very little investment or if this is merely evidence that making progress on these problems is too difficult. I tend to lean towards the former, because (a) we see very little public discussion of process and failed attempts for high-quality research on these issues, which you should expect to see even if they are quite challenging, and (b) this is not an area that I expect to receive a lot of investment except by cause-agnostic altruists who are looking quite far ahead. I think the most convincing example to date is Nick Bostrom’s astronomical waste argument and Nick Beckstead’s more extensive discussion of the importance of the far future, which seem to take a small but reasonably robust step towards improving our understanding of what to do.

There are certainly other relevant research areas, but they tend to be less interesting as cause prioritization per se. For example, there is a lot of work that tries to better understand the impact of particular interventions. I think this is comparably important to (1) or (2), but it receives quite a lot more attention at the moment, and it’s not clear that a cause-agnostic philanthropist would want to change how it is being done. More tangentially, efforts to improve forecasts more broadly have significant relevance for philanthropic investment, though they are even more important in other domains, so prima facie it would be a bit surprising if these efforts ought to be a priority by virtue of their impact on improving philanthropic decision-making.

Pablo: In public talks and private conversation, you have argued that instead of supporting any of the object-level interventions that look most promising on the evidence currently available, we should on the current margin invest in research on understanding which of those opportunities are most effective.  Could you give us an outline of this argument?

Paul: It seems very likely to me that more research will lead to a much clearer picture of the relative merits of different opportunities, so I suspect in the future we will be much better equipped to pick winners. I would not be at all surprised if supporting my best guess charity ten years from now was several times more impactful than supporting my best guess charity now.

If you are this optimistic about learning more, then it is generally better to donate to your best guess charity in a decade, rather than donating to your current best guess. But if you think there is room for more funding to help accelerate that learning process, then that might be an even better idea. I think this is the case at the moment: the learning process is mostly driven by people doing prioritization research and exploratory philanthropy, and total investment in that area is not very large.

Of course, giving to object level interventions may be an important part of learning more, and so I would be hesitant to say that we should avoid investment in object-level problems. However, I think that investment should really be focused on learning and exploring (in a way that can help other people make these decisions as well, not just the individual donor) rather than for a direct positive impact.  So for example I’m not very interested in scaling up successful global health interventions.

The most salient motivation to do good now, rather than learning or waiting, is a discount rate that is much steeper than market rates of return.

For example, you might give now if you thought your philanthropic investments would earn very high effective rates of return. I think this is unlikely for the kinds of object-level investments most philanthropists consider–I think most of these investments compound roughly in line with the global growth rate (which is smaller than market rates of return).

You might also have a high discount rate if you thought that the future was likely to have much worse philanthropic opportunities; but as far as I can tell a philanthropist today has just as many problems to solve as a philanthropist 20 years ago, and frankly I can see a lot of possible problems on the horizon for a philanthropist to invest in, so I don’t find this compelling.

Sometimes “movement-building” is offered as an example of an activity with very high rates of returns. At the moment I am somewhat skeptical of these claims, and my suspicion is that it is more important for the “effective altruism” movement to have a fundamentally good product and to generally have our act together than for it to grow more rapidly, and I think one could also give a strong justification for prioritization research even if you were primarily interested in movement-building. But that is a much longer discussion.

Pablo: I think it would be interesting to examine more closely the object-level causes supported by EAs or proto-EAs in the past (over, say, the last decade), and use that examination to inform our estimates about the degree to which the value of future EA-supported causes will exceed that of causes that EAs support today.  Off the top of my head, the EAs I can think of who have donation records long enough to draw meaningful conclusions all have in the past supported causes that they would now regard as being significantly worse than those they currently favour.  So this would provide further evidence for one of the premises in your argument: that cause prioritization research can uncover interventions of high impact relative to our current best guesses.

The other premise in your argument, as I understand it, is that the value of the interventions we should expect cause prioritization research to uncover is high relative to the opportunity cost of current spending. Can you elaborate on the considerations that are, in your opinion, relevant for assessing this premise?

Paul: Sorry, this is going to be a bit of a long and technical response. I see three compelling reasons to prefer giving today to giving in the future. But altogether they don’t seem to be a big deal compared to how much more we would expect to know in the future. Again, I think that using giving as an opportunity to learn stands out as an exception here–because in that case we can say with much more confidence that we will need to learn more at some point, and so the investment today is not much of a lost cause.

  1. The actual good you do in the world compounds over time, so it is better to do good sooner than later.
  2. There are problems today that won’t exist in the future, so money in the future may be substantially less valuable than money today.
  3. In the future there will be a larger pool of “smart money” that finds the best charitable opportunities, so there will be fewer opportunities to do good.

Regarding (1), I think that the vast majority of charitable activities people engage in simply do not compound that quickly. To illustrate, you might consider the case of a cash transfer to a poor family. Initially such a cash transfer earns a very high rate of return, but over time the positive impact diffuses over a broader and broader group of people. As it diffuses to a broader group, the returns approach the general rate of growth in the world, which is substantially smaller than the interest rate. Most other forms of good experience a very similar pattern. So if this were the only reason to give sooner, then I think that you would actually be better served by saving and earning prevailing interest rates for an extra year, and then donating a year later–even if you didn’t expect to learn anything new.

A mistake I sometimes see people make is using the initial rates of return on an investment to judge its urgency. But those returns last for a brief period before spreading out into the broader world, so you should really think of the investment as giving you a fixed multiplier on your dollar, after which the long-term returns go like growth rates. It doesn’t matter whether that multiplier accrues instantaneously or over a period of a few years during which you enjoy excess returns. In either case the magnitude of the multiplier is not relevant to the urgency of giving – just whether the multiplier is going up or down.
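The arithmetic behind this can be shown in a few lines. All the numbers here are purely illustrative: suppose an intervention gives a one-off multiplier m on your dollar, after which the benefit compounds at the growth rate g, while savings earn a market rate r > g:

```python
g = 0.03   # long-run growth rate (illustrative)
r = 0.06   # market interest rate (illustrative)
m = 5.0    # one-off multiplier from the intervention (illustrative)
years = 30

give_now = m * (1 + g) ** years                    # donate $1 today
give_later = (1 + r) * m * (1 + g) ** (years - 1)  # save one year at r, then donate

# The ratio give_later / give_now is (1 + r) / (1 + g), independent of m
# and of the horizon, so waiting a year wins whenever r > g.
```

Note that the multiplier m cancels out entirely, which is the point of the paragraph above: a large initial multiplier makes giving more valuable, but not more urgent.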

A category of good which is plausibly exceptional here is creating additional resources that will flexibly pursue good opportunities in the future. I’m aware that some folks around CEA assign very high rates of return, in excess of 30% / year, to investment in movement-building and outreach. I think this is an epistemic error, but that would probably be a longer discussion so it might be easier to restrict attention to object-level interventions vs. focusing on learning.

Regarding (2), I don’t really see the evidence for this position. From my perspective the problems the world faces today seem more important–in particular, they have more significant long-term consequences–than the problems the world faced 200 years ago. It looks to me like this trend is likely to continue, and there is a good chance that further technological development will continue to introduce problems with an unprecedented potential impact on the future. So with respect to object-level work I’d prefer to address the problems of today than the problems of 200 years ago, and I think I’d probably be even happier addressing the problems we’ll face in 50 years.

Regarding (3), I do see this as a fairly compelling reason to move sooner rather than later. I think the question is one of magnitudes: how long do you expect it will take before the pool of “smart money” is twice as large? 10 times larger? I think it’s very easy to overestimate the extent to which this group is growing. It is only at extremely exceptional points in history that this pool can grow 20% faster than economic growth. For example, if you are starting from a baseline of 0.1% of total philanthropic spending, that can only go up 20% per year for 40 years or so before you get to 100% of spending. On the flip side, I think it is pretty easy to look around at what is going on locally and mistakenly conclude that the world must be changing pretty rapidly.
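The “40 years or so” figure can be checked directly; the 0.1% starting share and the 20% relative growth rate are the numbers given above:

```python
share = 0.001  # "smart money" starts at 0.1% of philanthropic spending
years = 0
while share < 1.0:  # growing 20%/year faster than the whole pool
    share *= 1.2
    years += 1
print(years)  # 38 – roughly the "40 years or so" quoted above
```

So the 20%-relative-growth regime is self-limiting on a timescale of about four decades, which is why sustained claims of very high movement growth rates deserve scrutiny.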

I think most of what we are seeing isn’t a changing balance between smart money and ordinary folk; it’s continuously increasing sophistication on the part of donors collectively–and this is a process that can go on for a very long time. In that case, it’s not so clear why you would want to give earlier, when you are one of many unsophisticated donors, rather than later, when you are one of many sophisticated donors, even if you were only learning as fast as everyone else in the world. What drove the discount rate earlier was the belief that other donors were getting sophisticated faster than we are, so that our relative importance was shrinking. That no longer seems to apply when you look at it as a community increasing in sophistication.

So overall, I see the reasons for urgency in giving to be relatively weak, and I think the question of whether to give or save would be ambiguous (setting aside psychological motivations, the desire to learn more, and social effects of giving now) even if we weren’t learning more.

Pablo: Recently, a few EAs have questioned whether charities vary in cost-effectiveness to the degree that is usually claimed within the EA community.  Brian Tomasik, for instance, argues that charities differ by at most 10 to 100 times (and much less so within a given field). Do you think that arguments of this sort could weaken the case for supporting research into cause prioritization, or change the type of cause prioritization research that EAs should support?

Paul: I think there are two conceptually distinct issues here which should be discussed separately, at least in this context.

One is the observation that a small group looking for good buys may not have as large an influence as it seems, if they will just end up slightly crowding out a much larger pool of thoughtful money. The bar for “thoughtful” isn’t that high, it just needs to be sensitive to diminishing returns in the area that is funded. There are two big reasons why this is not so troubling:

  • Money that is smart enough to be sensitive to diminishing marginal returns–and moreover which is sufficiently cause-agnostic to move between different fields on the basis of efficiency considerations–is also likely to be smart enough to respond to significant changes in the evidence and arguments for a particular intervention. So I think doing research publicly and contributing to a stock of public knowledge about different causes is not subject to such severe problems.
  • The number of possible giving opportunities is really quite large compared to the number of charitable organizations. If you are looking for the best opportunities, in the long-term you are probably going to be contributing to the existence of new organizations working in areas which would not otherwise exist. This is especially true if we expect to use early investigations to help direct our focus in later investigations. This is very closely related to the last point.

This issue is most severe when we consider trying to pursue idiosyncratic interests, like an unusually large degree of concern for the far future. So this consideration does make me a bit less enthusiastic about that, which is something I’ve written about before. Nevertheless, I think in that space there are many possible opportunities which are simply not going to get any support from people who aren’t thinking about the far future, so there still seems to be a lot of good to do by improving our understanding.

A second issue is that broad social improvements will tend to have a positive effect on society’s ability to resolve many different problems. So if there is any exceptionally impactful thing for society to do, then that will also multiply the impact of many different interventions. I don’t think this consideration says too much about the desirability of prioritization: quite large differences are very consistent with this observation, these differences can be substantially compounded by uncertainty about whether the indirect effects of an intervention are good or bad, and there is substantial variation even in very broad measures of the impact of different interventions. This consideration does suggest that you should pay more attention to very high-impact interventions even if the long-term significance of that impact is at first ambiguous.

Pablo: Finally, what do you think is the most effective way to promote cause prioritization research?  If an effective altruist reading this interview is persuaded by your arguments, what should this person do?

Paul: One conclusion is that it would be premature to settle on an area that currently looks particularly attractive and simply scale up the best-looking program in that area. For example, I would be hesitant to support an intervention in global health (or indeed in most areas) unless I thought that supporting that intervention was a cost-effective way to improve our understanding of global health more broadly. That could be because executing the intervention would provide useful information and understanding that could be publicly shared, or because supporting it would help strengthen the involvement of EAs in the space and so help EAs in particular improve their understanding. One could say the same thing about more speculative causes: investments that don’t provide much feedback or help us understand the space better are probably not at the top of my priority list.

Relatedly, I think that global health receives a lot of attention because it is a particularly straightforward area to do good in; I think that’s quite important if you want your dollar to do as much good directly as possible, but that it is much less important (and important in a different way) if you are paying appropriate attention to the value of learning and information.

Another takeaway is that it may be worth actively supporting this research, either by supporting organizations that do it or by giving on the basis of early research. I think Good Ventures and GiveWell Labs are currently the most credible effort in this space (largely by virtue of having done much more research in this space than any other comparably EA-aligned organization), and so providing support for them to scale up that research is probably the most straightforward way to directly support cause prioritization. There are some concerns about GiveWell Labs capturing only half of marginal funding, or about substituting with Good Ventures’ funding; that would again be a longer and much more complicated discussion. My view would be that those issues are worth thinking about but probably not deal-breakers.

I hear that CEA may be looking to invest more in this area going forward, and so supporting CEA is also a possible approach. To date they have not spent much time in this area and so it is difficult to predict what the output will look like. To the extent that this kind of chicken-and-egg problem is a substantial impediment to trying new things faster and you have confidence in CEA as an organization, providing funding to help plausible-looking experiments get going might be quite cost-effective.

A final takeaway is that the balance between money and human capital looks quite different for philanthropic research than for many object-level interventions. If an EA is interested in scaling up proven interventions, it’s very likely that their comparative advantage is elsewhere and they are better served by earning money and distributing it to charities doing the work they are most excited about. But if you think that increasing philanthropic capacity is very important, it becomes more plausible that the best use of time for a motivated EA is to work directly on related problems. That might mean working for an “EA” organization, working within the philanthropy sector more broadly, or pursuing a career somewhere else entirely. Once we are talking about applying human capital rather than money, ability and enthusiasm for a particular project becomes a very large consideration (though the kinds of considerations we’ve discussed in this interview can be another important input).

Announcing a forthcoming book on effective altruism


I’m delighted to officially announce my forthcoming book on effective altruism. The current working titles are Effective Altruism for the US edition, and Doing Good Better for the UK edition. The US edition is being published with the Gotham imprint of Penguin USA, and the UK edition is being published with Guardian Faber (a collaboration between the Guardian newspaper and Faber). The book will be pitched to publishers in other countries after the manuscript has been written.

The book will be an accessible introduction to the idea of effective altruism, presented for a popular audience. It’ll be in two parts. The first part explains the key principles behind effective altruism, illustrating them with reference to a wide variety of issues — charity and career choice, which are heavily discussed by EAs, but also issues that are often discussed in mainstream media like Fair Trade, socially responsible investing, and sweatshop labour. The second part takes those principles and applies them to the questions of how to spend your money, how to spend your time, what are the most important causes, and what is the highest-value activity to be doing right now.

I’d really like this to be the go-to resource for people interested in, but still new to, effective altruism. And, as I’ve been discovering, writing well for a popular audience is extremely difficult — there are a huge number of decisions that require judgment calls, like how colloquial to be, how philosophical, how academic, how much to include, which topics to include… So I’m planning to gather as much feedback on drafts of the book as I possibly can. If you’re interested in providing feedback, please e-mail me to let me know!

The timeline for the book is that I hand in the final manuscript on the 1st August; it’ll then come out one year later. You can expect to not hear much from me until August, but then to hear a lot from me after then, as I’ll be spending some time writing articles and op-eds.

The motivation for my taking on this project is that I think that growing the EA movement is among the best things I can be doing with my time: every new involved member of the EA community means another 80,000 hours devoted to doing the most good. And that’s a big deal!

Effective altruism blogs


What follows is a list of blogs of potential interest to effective altruists. We tried to include all blogs that either feature EA content on a regular basis or are officially linked to an existing EA organization. If you think we are missing something, please let us know.

See also the .impact EA RSS feed.



EA transparency update: September to March 2014


I have collated reports published by EA organizations from 13 September 2013 to 12 March 2014. I commend all of these organizations on their transparency and quote Holden Karnofsky:

I believe that nonprofits sometimes mimic for-profits in ways that don’t make sense given their missions… They keep information confidential rather than publishing it as a public good. And they exaggerate successes and downplay shortcomings, while being more honest would help the rest of the world learn and thus ultimately promote their mission (if not their organization).

This post begins with some highlights from the activities of EA organizations and concludes with a list of links to their reports.


Meta Effective Altruism

  • 80,000 Hours has pivoted toward performing case-studies on a smaller number of effective altruists
  • The Center for Applied Rationality has delivered workshops more frequently


  • Effective Fundraising has rebranded to the Greatest Good Foundation, and has stopped grant-writing in order to diversify its operations, including fundraising from high net-worth individuals
  • Giving What We Can has concluded that it is effectively generating donations
  • The Life You Can Save has generated more funding since Peter Singer’s TED talk

Charity Evaluation

  • Animal Charity Evaluators have rebranded and are evaluating the impact of leafleting, humane education and US animal organizations
  • GiveWell is moving more money, cooperating with Good Ventures, and diversifying its research through GiveWell Labs

GCR Research

  • The Centre for the Study of Existential Risk has attained funding, launched, delivered some lectures and done some media
  • The Future of Humanity Institute has done significant media, hosted and attended conferences, liaised with government and started the Global Priorities Project with the Centre for Effective Altruism
  • The Global Catastrophic Risk Institute has published, presented and held an online lecture series
  • The Machine Intelligence Research Institute has published a lot of research and interviews, delivered presentations and held workshops



80,000 Hours

Center for Applied Rationality


Greatest Good Foundation

Giving What We Can

The Life You Can Save


Animal Charity Evaluators


Global Catastrophic Risk Research

Centre for the Study of Existential Risk

Future of Humanity Institute

Global Catastrophic Risk Institute

Machine Intelligence Research Institute


I have included all reviews, newsletters and global assessments that I could easily find online, but I probably missed some. Please contact me to have them included.

Crossposted from Ryan Carey’s blog

Effective Altruism Summit 2014


In 2013, the Effective Altruism movement came together for a week-long Summit in the San Francisco Bay Area. Attendees included leaders and members from all the major effective altruist organizations, as well as effective altruists not affiliated with any organization. People shared strategies, techniques, and projects, and left more inspired and more effective than when they arrived.

Following last year’s success, this year’s Effective Altruism Summit will comprise two events. The Summit will be a conference-style event held on the weekend of August 2-3, followed by a smaller Effective Altruism Retreat from August 4-9. To accommodate our expanding movement and its many new projects, this year’s Summit will be bigger than the last. The Retreat will be similar to last year’s EA Summit, providing a more intimate setting for attendees to discuss, to learn, and to form lasting connections with each other and with the community.

We’re now accepting applications for the 2014 events. Whether you’re a veteran organizer trying to keep up with Effective Altruism’s most exciting developments, or you’re new to the movement and want to meet the community, we’d love for you to join us.

The history of the term ‘effective altruism’


A few people have expressed interest recently in the origins of the effective altruism community. I realized that not that many people know where the term ‘effective altruism’ came from, nor that there was a painfully long amount of time spent deciding on it. And it was fun digging through the old emails. So here’s an overview of what happened!

The need to decide upon a name came from two sources:

First, the Giving What We Can (GWWC) community was growing. 80,000 Hours (80k) had soft-launched in February 2011, moving the focus in Oxford away from just charity and onto ethical life-optimisation more generally. There was also a growing realization among the GWWC and 80k Directors that the best thing for us each to be doing was to encourage more people to use their life to do good as effectively as possible (which is now usually called ‘movement-building’).

Second, GWWC and 80k were planning to incorporate as a charity under an ‘umbrella’ name, so that we could take paid staff (decided approx. Aug 2011; I was Managing Director of GWWC at the time and was pushing for this, with Michelle Hutchinson and Holly Morgan as the first planned staff members). So we needed a name for that umbrella organization (the working title was ‘High Impact Alliance’). We were also just starting to realize the importance of good marketing, and therefore willing to put more time into things like choice of name.

At the time, there were a host of related terms: on 12 March 2012 Jeff Kaufman posted on this, listing ‘smart giving’, ‘efficient charity’, ‘optimal philanthropy’, among others. Most of these terms referred to charity specifically. The one term that was commonly used to refer to people who were trying to use their lives to do good effectively was the tongue-in-cheek ‘super-hardcore do-gooder’. It was pretty clear we needed a new name! I summarized this in an email to the 80k team (then the ‘High Impact Careers’ team) on 13 October 2011:

We need a name for “someone who pursues a high impact lifestyle”. This has been such an obstacle in the utilitarianesque community – ‘do-gooder’ is the current term, and it sucks.

What happened, then, is that there was a period of brainstorming – combining different terms like ‘effective’, ‘efficient’, ‘rational’ with ‘altruism’, ‘benevolence’, ‘charity’. Then the Directors of GWWC and 80k decided, in November 2011, to aggregate everyone’s views and make a final decision by vote. This vote would decide both the name for the type of person we wanted to refer to and the name of the organization we were setting up.

Those who voted were as follows (I think, but am not certain, that this is complete):

  • Will MacAskill (then ‘Crouch’)
  • Toby Ord
  • Nick Beckstead
  • Michelle Hutchinson
  • Holly Morgan
  • Mark Lee
  • Tom Ash
  • Matt Wage
  • Ben Todd
  • Tom Rowlands
  • Niel Bowerman
  • Robbie Shade
  • Matt Gibb
  • Richard Batty
  • Sally Murray
  • Rob Gledhill
  • Andreas Mogensen

Tom Rowlands, who was then Director of Communications for both GWWC and 80k, sent round the following email on 3 December 2011:

I’ve been through all the suggestions on the umbrella name – thanks.

The names that have arisen mostly reflect two components: an ethical position i.e. ‘good’ and optimizing this i.e. ‘maximisation’. We might also want a name for ‘group’. [I’ve deliberately used the above words as they didn’t arise in the suggestions, to avoid bias.]

For these reasons, I’ve split the voting into three parts, based on these categories – to do otherwise would make it almost incoherent. The downside is this doesn’t really account for acronyms and combinations (you might like three of the terms in isolation, but don’t like them as a group).

So, please consider the options in the three categories, before coming up with up to three names you like together: e.g. Good Maximisation Group


a) altruist
b) do-gooder
c) utilitarian
d) humanist
e) empathetic
f) philanthropist
g) consequentialist
h) positive
i) benetarian


a) hardcore
b) dedicated
c) rational
d) professional
e) optimal
f) high impact
g) evidence-based
h) effective
i) biggest


a) alliance
b) group
c) centre
d) community
e) institute
f) network
g) association

You might not think all three components are necessary, in which case just use the ones you think are e.g. Good Maximisers.

If you completely disagree with the methodology, please say so and I’ll come up with another. I did spend some time considering this!

Sorry we haven’t got to the voting yet, but it seemed like this is a necessary step on the way there.

Please send me your ideas by 2100 Sunday. I’ll then send another email with a shortlist to vote on. [Michelle – I hope this meets the deadline; sorry if not]

From non-snowy Val Thorens,


And on 5 December 2011 there was a vote on what the name of the new umbrella organization should be. The shortlist was:

  • Rational Altruist Community RAC
  • Effective Utilitarian Community EUC
  • Evidence-based Charity Association ECA
  • Alliance for Rational Compassion ARC
  • Evidence-based Philanthropy Association EPA
  • High Impact Alliance HIA
  • Association for Evidence-Based Altruism AEA
  • Optimal Altruism Network OAN
  • High Impact Altruist Network HIAN
  • Rational Altruist Network RAN
  • Association of Optimal Altruists AON
  • Centre for Effective Altruism CEA
  • Centre for Rational Altruism CRA
  • Big Visions Network BVN
  • Optimal Altruists Forum OAF

(There were actually two other votes, too: one on whether to use ‘Effective’ rather than ‘rational’ or ‘strategic’, and one on whether to use ‘Centre’ rather than anything else. Tom expressed how arduous the name-decision process had been – after this list he said “Again, apologies for the way this process has gone. I’ll try to keep the last couple of days relatively pain-free.” I remember Matt Wage commenting that he thought this whole process was a really ineffective use of time. But it seems to have been worth it in retrospect!)

In the vote, CEA won, by quite a clear margin. Different people had been pushing for different names. I remember that Michelle preferred “Rational Altruism”, the Leverage folks preferred “Strategic Altruism”, and I was pushing for “Effective Altruism”. But no-one had terribly strong views, so everyone was happy to go with the name we voted on. GiveWell was using “rational altruism” for a while after that point (e.g. here and here), before switching to “effective altruism”.

We hadn’t planned for ‘effective altruism’ to take off in the way that it did. ‘Centre for Effective Altruism’ was intended not to have a public presence at all, but just to be a legal entity. I had thought that effective altruism was too abstract an idea to really catch on, and had a disagreement with Mark Lee and Geoff Anders about this. Time proved them correct on that point!

After that, the term was used progressively more, as 80,000 Hours started using it (e.g. this was the go-to page on effective altruism for quite a while, published 5th March 2012) and THINK was set up to promote effective altruism specifically. Ruairí Donnelly set up the Effective Altruists Facebook group in November 2012. Then I think what really solidified the term was Peter Singer’s TED talk, which was filmed in March 2013, and posted on-line in May 2013.