Against Effective Altruism
Well, not so much 'against' as 'nuanced disagreement', but who wants that in a title?
For those who aren't familiar with it, Effective Altruism is a loosely organised movement that focuses on trying to ensure that charitable giving - and other actions - are as 'effective' as possible, according to various much-debated criteria. It has grown over the years and now has conferences, books written about it, organisations such as GiveWell and 80,000 Hours, and various concepts, such as the Giving What You Can pledge, or the idea that people should take up altruistic careers. A lot of what they do involves detailed calculations of whether - for example - giving out bed nets or curing parasitic worms will save more lives in Africa, but various people also focus on odder things, such as preventing the existential risk of advanced Artificial Intelligence killing us all.
I first heard about this from a friend who told me about it in the context of him doing a sponsored event of some sort to raise money for the Schistosomiasis Control Initiative, a charity aiming to cure a specific type of parasitic worm which at that time (and quite probably still) came out highly in terms of lives saved per pound. I have a couple of other friends who engage seriously with it, as well as a couple of blogs I follow, so I've been aware of it for a while, but never quite felt entirely comfortable about buying into all of their underlying assumptions, without thinking too much about why. Then, in 2022, Scott Alexander posted a Tower of Assumptions, challenging those who disagreed with Effective Altruism to set out exactly where and why they disagreed with it. This post is me taking up that challenge.
To be clear, before proceeding, this is disagreement in the sense of 'I don't agree with all of the assumptions of this movement and so am not going to throw myself behind it or live by it'; not disagreement in the sense of 'these people are evil and the movement should be razed to the ground and the land sowed with salt'. To the extent that Effective Altruism has encouraged a lot of people to give more to charity - which I think it has - and to the extent that a lot of this money has gone to save lives in the developing world - which again, I think it has - it's a good thing and the world is better off for it, even if some people end up doing things which I think are somewhat wacky.
So, without further ado, what is this 'Tower of Assumptions'? I've reproduced it here, taken from Scott's blog. I should be clear, before continuing, that while Scott is one of the better-known advocates of Effective Altruism, he isn't any form of official 'leader', nor does he hold a place in its hierarchy (I don't think it has formal leaders), so there will no doubt be Effective Altruists who disagree with this framing. That being said, from the outside it seems like a very fair summary of the core beliefs of the movement, whilst recognising that individuals may differ in various elements.
The top two tiers - Specific Projects and Institutional Credibility - are only really relevant if you're pretty bought into the bottom three tiers. If you don't agree with, for example, the Less Basic Assumptions, you're not going to spend time researching Will MacAskill or Specific Project ABC, any more than you're going to spend time researching which denomination of church to go to, or the preaching styles of local ministers, if you don't accept Christianity(1). This post is therefore focused on the bottom three tiers, which in any case are those which more directly address the assumptions and beliefs of Effective Altruism, while the top two tiers are more about how they're currently being applied in practice.
Before we get into the post proper, I should also add that I am not going to take cheap shots about recent events - I don't consider the collapse of FTX, due to various forms of alleged improper and fraudulent behaviour, to say anything meaningful about whether EA is a good philosophy. All movements attract their rogues and 'wanting to get rich through dodgy means' is a common failure mode. The fact that various EA charities accepted charitable donations from FTX - from a man who spoke positively about their cause, running an apparently legitimate company - also doesn't say anything bad about them. Until it collapsed, FTX had clearly deceived a wide variety of investors, customers, financial institutions and regulators, most of whom would have had a greater ability and responsibility to detect rogue practices than a charity. Perhaps it's opened people's eyes in the community to the fact that they are not immune to rogues living amongst them, but it doesn't change the fundamentals.
Foundational Assumptions
I agree with all of these. To take each in turn:
We should help other people - Yes, unless one is a psychopath, this is a fairly unobjectionable and obviously true statement, under almost all moral codes.
We should strongly consider how much effort we devote to this - Less straightforwardly true, and less widely subscribed to, but yes, I do agree with this; otherwise I wouldn't be writing this post. Your approach to charity and helping others is something that will be relevant throughout your life and it's worth properly considering it at least once, and ideally more than that.
Some methods of helping people are more effective than others - Also trivially and obviously true.
So, I agree with all the Foundational Assumptions and have said I've not looked in any detail at the top two tiers. As we'll see, my major disagreements are in the second and third tiers: Less Basic Assumptions and Cause Prioritisations.
Less Basic Assumptions
I largely disagree with all three of these, though, as we will see, in different ways and for different reasons. I'll also acknowledge at least a kernel of truth in each, at least in certain circumstances.
We should donate at least 10% of income or choose an altruistic career
To start with, why 10%? EA has a superficially good answer to this - in Scott's words, "I think we should believe it because if we reject it in favor of “No, you are a bad person unless you give all of it,” then everyone will just sit around feeling very guilty and doing nothing. But if we very clearly say “You have discharged your moral duty if you give ten percent or more,” then many people will give ten percent or more." The point is also made that 10% is the traditional figure in Judaism and Christianity (though in Islam, it should be noted, the figure is 2.5%, so 10% is hardly universal).
As a psychological approach, this is hard to argue with - I've no doubt that it works better than either 'give anything you want' or 'give everything', as churches and other religious institutions have also found over the centuries. But we're talking here about the moral aspect.
What the references to historical or religious charitable duties miss is that we - or at least I, and I expect the vast majority of my readers - live in a western society with historically high levels of taxation and a generous welfare state. Now, I like many elements of this welfare state, including healthcare that is free at the point of delivery, a state pension and the idea that people who are out of work are not just left to starve on the streets. I also believe that, overall, the tax burden in the UK - at around 35% of GDP, the highest in over 70 years - is too high, with too much unnecessary spending within public services and on benefits, and that we would benefit both as a country and as individual households if the state were smaller and took a lower share of national income in taxation. You may or may not agree with that statement. But whether you think we should be taxing more or less, the fact remains that in the UK, we are already paying over a third of our national income to be spent on what would once have been paid for by charity. This fact has to be taken into account when considering whether '10%' is the right figure.
The state, in fact, is currently the major spender on many, many areas of what would once have been the responsibility of charity, from alms for the poor and homeless to medical research to overseas aid. Some of these (*cough* the first two) I see as more legitimate uses of state-coordinated taxpayer funding than others. But regardless, the money is being spent.
Now, does that mean that giving money to charity does no good? On the contrary, there are clearly very many good causes in the world where this money could do good. But in a western, highly taxed nation, I would see further charitable donations in the general case as supererogatory, rather than obligatory. In other words, they go above what is required, and are praiseworthy and good deeds. More broadly, I would add that I consider the concept of things being supererogatory to be a tremendously underrated one in discussions of morality - without it, too often everything devolves into being either mandatory or forbidden(2).
While this is the general case, I would argue that there may be specific cases where individuals have a moral obligation to give their time or money. Common examples might be: if a person believed that something that should be funded was being significantly underfunded by the government to which they paid taxes; obligations owed for things they were fortunate enough to receive when in need, or that benefitted them (where the response could be to either repay directly, or to 'pay it forward', depending on which seems more appropriate); as members of a particular community or organisation to which they have made explicit or implicit commitments; or to friends or family members to whom they have bonds of kinship or friendship. Examples of each of these, respectively, might include a disease which received comparatively little funding (dementia vs cancer, for example); someone who had received mentoring or an educational opportunity; a donation of time or money to a local society or organisation of which one was a member; or helping a family member or friend who had lost their job or home. Some items may come into more than one of these categories: I give money to my university because I consider it underfunded by Government, because of the obligation I consider I owe it for the opportunities it gave me (and wanting others to have the same opportunity in future), and as a member of its community, though the second is the most important.
We'll discuss these more in the next section, but it is worth saying that one of the major failings of EA, in my view, is that it has no time for these sorts of obligations, replacing them with a general utilitarian obligation.
Utilitarian Calculus (e.g. QALYs)
At the most basic level, of course this is useful. If I've decided I want to give money to prevent malaria, I'd like to give it to the charity which will use it most effectively to save lives. If I want to use it to educate children in a poor country, I'd like to educate as many children as well as possible. That might mean looking at the charities concerned to see how much of each donation they spent on the front line, as well as considering whether, for example, bed nets or medicines were more effective at saving lives from malaria and donating accordingly. It might also mean deciding to give money to deworming rather than malaria, if the evidence showed this saved more lives in a similar region. The work the EA community does in evaluating charities on this basis - and the pressure that in turn puts on charities to up their game - is inarguably valuable and useful.
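The kind of comparison described above can be sketched in a few lines of code. All intervention names and figures below are hypothetical placeholders invented purely to illustrate the 'cost per life saved' calculation; they are not real charity data:

```python
# A minimal sketch of charity cost-effectiveness comparison.
# NOTE: all names and numbers here are hypothetical, for illustration only.

def cost_per_life_saved(total_cost: float, lives_saved: int) -> float:
    """Crude effectiveness metric: money spent per life saved."""
    return total_cost / lives_saved

# Hypothetical interventions, each receiving the same total donation.
interventions = {
    "bed_nets":  {"cost": 500_000, "lives_saved": 100},
    "medicines": {"cost": 500_000, "lives_saved": 60},
    "deworming": {"cost": 500_000, "lives_saved": 125},
}

# Rank interventions from most to least cost-effective.
ranked = sorted(
    interventions.items(),
    key=lambda item: cost_per_life_saved(item[1]["cost"], item[1]["lives_saved"]),
)

for name, data in ranked:
    metric = cost_per_life_saved(data["cost"], data["lives_saved"])
    print(f"{name}: £{metric:,.0f} per life saved")
```

With these invented numbers, deworming would rank first at £4,000 per life saved. The real evaluations done by GiveWell and similar organisations are of course far more sophisticated (confidence intervals, room for more funding, QALY weightings), but the underlying logic is this kind of ranking.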
Measurement
Problems arise, however, on multiple levels. At the most basic, even if we accept that we should be maximising QALYs (which I don't), there are measurement difficulties. Measuring lives saved by bed nets is relatively straightforward, but using these forms of metrics to evaluate scientific research is much more problematic. Using obviously measurable metrics - the number of citations or publications - may actually distort the research and lead to worse outcomes. And should we prefer research to find an effective malaria vaccine - which could happen in a decade or, alternatively, never - to saving lives now? Tackling corruption, influencing policy or a host of other aims - many of which may do more long-term good in lifting communities or nations out of poverty, or in tackling global challenges - is even harder to measure.
The failure of QALYs
More broadly, I disagree that maximising QALYs is the goal of charity. What of art, music or culture? What of the natural environment? It is hard to argue that saving a life, or lives, is not the ultimate goal, but perhaps the easiest way to see this is to look back in history and see what others have done.
The bequests in the 18th century that founded and developed the British Museum, and those which established similar museums and galleries in Britain and elsewhere, were made at a time of tremendous hardship, poverty and suffering. And yet the collections they established are seen and valued by millions - six million people visit the British Museum alone each year - and their educational and cultural impact is felt well beyond that. Would we rather, today, that Hans Sloane and his contemporaries had created and preserved these marvellous institutions for posterity, or that their wealth had been used to save lives amongst the rural and urban poor? Which has greater benefit to the world, our society and humanity as a whole today? Scientific progress, commerce and the march of democracy ultimately lifted our society out of poverty, which we should be glad of, but a vital role for charity is in creating and preserving that which makes life valuable.
For those who don't care about culture, consider the natural world. At any point, rainforests could be bulldozed or habitats destroyed to save lives. Yet do we wish our children to inhabit a world rich in nature, or one which is inhabited by the largest number of humans? I think most people know the answer to this.
Man cannot live by bread alone. There was a poem we studied at GCSE by John Betjeman that - as most of the poems we studied did - left me cold at the time, but that later in life has increasingly resonated.
Cut down that timber! Bells, too many and strong,
Pouring their music through the branches bare,
From moon-white church towers down the windy air
Have pealed the centuries out with Evensong.
Remove those cottages, a huddled throng!
Too many babies have been born in there,
Too many coffins, bumping down the stair,
Carried the old their garden paths along.

I have a Vision of the Future, chum,
The workers' flats in fields of soya beans
Tower up like silver pencils, score on score:
And Surging Millions hear the Challenge come
From microphones in communal canteens
"No Right! No Wrong! All's perfect, evermore!"

The Planster's Vision - John Betjeman
This ethos, of clearing away all that is right and good to maximise QALYs sits right at the heart of EA and similar approaches to altruism or charity(2). It is one of the chief reasons why - no matter how much good it may do as a small movement - I cannot wish for it to become a dominant force.
Obligations
We discussed obligations above, but it bears restating here. Doing good is not simply about maximising outcomes, but about fulfilling commitments and repaying obligations. A person who received unlooked-for mentoring in their youth has more obligation to repay that - in this case, perhaps, by 'paying it forward', for their own benefactor may not need it - than to buy bed nets in Africa. If one's brother(3) has been cast out of his house, or a friend is in need, one has a greater obligation to help them than to help a stranger. If one is part of a community, one should support it.
I am not suggesting here that I, or anyone else, always successfully rise to the level of these obligations; simply that they exist, and are more compelling and real than an abstract requirement to minimise global suffering.
My sense is that there is a feeling amongst many in the EA community that, through rationalism, they have somehow moved past or beyond such 'sentimental' views, and that by taking a coldly logical approach they are maximising the amount of good they do far more effectively. I would suggest they have moved beyond them only in the way that the scientists of the N.I.C.E., in Lewis's That Hideous Strength, have moved beyond 'sentimental' concepts such as justice - not beyond, but below. Humans are social creatures; it is the myriad ties of relationship, community, kinship, obligation and reciprocity that create, bind and preserve our society, turning us from isolated creatures into communities, societies, nations and civilisation as a whole - a web of interlocking relationships at every level.
We do not progress by ignoring these ties; rather, we regress.
Wackiness
I do not want to be too hard on EA here - it is a movement still in its infancy, one that is still finding its feet and experimenting with its methodology. Also, every movement attracts some people with wacky beliefs.
Nevertheless, the apparently large number of people in the EA community who find themselves distracted by suggestions that we should annihilate all predators to prevent prey species from suffering, or worrying about the suffering of insects, or about AI basilisks, or arguing that infanticide is justified - and the degree of seriousness with which these appear to be taken within the community - would suggest that this is not the best tool with which to direct altruistic endeavours, but rather something that is, at best, prone to malfunction when used as a reasoning tool and, at worst, built on some deeply immoral axioms.
Helping 3rd World is More Effective than helping 1st (e.g. bed nets vs alumni donations)
Let's not get distracted by alumni donations - I've set out above that the rationale for these rests on repaying obligations, not pure effectiveness. Nor by revisiting the arguments that communities matter - I've set these out above, and you'll either agree with them or you won't. At the broader level, this tier is arguing that you can save, feed, house, clothe and educate more people, for less money, in the developing world than by, for example, donating to a homeless centre or food bank in your local community.
I'm something of an aid sceptic. I'm not convinced that most international aid does much good at all - the countries that have lifted themselves out of poverty seem to have done it primarily through good economic policies and building strong civic institutions, rather than through aid. I'm currently unsure whether aid is simply ineffective, or actively counter-productive: distorting local economies towards the public sector, channelling the most able people into jobs that involve maximising aid received rather than those which would stimulate economic growth, and empowering or exacerbating the influence of corrupt or power-hungry elites - or just corruption generally. But either way, the long-term benefits of most aid seem dubious.
That said, there does seem to be some aid which is genuinely effective. Disaster relief following an earthquake or similar can repair immediate damage and allow a community or region to get back on its feet; this is where the majority of the charity I've given outside Britain has gone, over the years. Efforts to eliminate diseases or control an epidemic, such as smallpox, polio or Ebola, definitely do long-term good. Work to repair badly damaged nations and bring them into a broader community of nations - such as the Marshall Plan, or the EU's work in Eastern Europe - has a relatively good track record of success, though you have to be a powerful state rather than an individual for this to be a valid option. So if you're choosing your charities, you can presumably choose these - or others that you think are effective. And even if it doesn't do any long-term good, saving lives is still a very good thing.
Ultimately I guess this is the 'less basic assumption' I have most time for. If you're looking in utilitarian or QALY terms, and aren't convinced about some of the broader objections to a utilitarian approach then yes, you probably can do more good donating to the developing world than the developed.
Cause Prioritisations
The most effective causes are probably global health, animal welfare and preventing X-Risks
Let's take these in turn.
Global health
OK, no real objections here. Saving lives is a good thing and we can save a lot of lives in various ways here, whether that's developing vaccines, giving out bed nets or deworming people. Improving people's health may also help economic development and long-term prosperity by reducing parasitic load, or increasing the number of people who can contribute economically, or letting children get more of an education. As discussed above, I'm a bit sceptical about how large some of these factors are absent a well-functioning government, society and economy, but regardless, saving lives is good in itself.
I'm happy to agree that global health is a perfectly legitimate aim for altruism.
Animal welfare
This would definitely not make my top three, nor my top ten. I don't see eye-to-eye at all with the EA movement here: while, like most people, I'm opposed to needless cruelty to animals, I am entirely comfortable with a natural law perspective that says we evolved to eat animals and there is nothing wrong in doing so. I see no moral weighting from an animal welfare perspective in being vegetarian(4).
I definitely don't agree that there is any negative moral calculus to the suffering of animals caused by the operation of the natural world - for example, a lion eating a gazelle - or any moral imperative to reduce this. Again, from the perspective set out above, that we should seek to preserve that which is right and good and beautiful, I would see a moral imperative to preserve the environment, to not make species extinct and to ensure that elements of wilderness and nature survive; but that is because these are good things in and of themselves, which bring joy and meaning to many (simply knowing they exist, as well as seeing them - including in nature programmes, not necessarily in real life), not from an animal welfare perspective.
I also see no sensible moral calculus which weights 10, or 100, or however many cows against a human life, still less 1000 or 10,000 fleas or mosquitos.
Preventing X-Risks
By X-Risks EA means risks that could wipe out all of humanity, such as a nuclear apocalypse. My views on this sit somewhere between global health and animal welfare.
On the one hand, I definitely agree that humanity becoming extinct would be very bad, and we should work to prevent this. On the other hand, I'm not sure how much good most individuals can do about most of these risks, still less that the causes that most EAs appear to devote themselves to are that effective.
On balance, while I don't really prioritise these myself, campaigning for global nuclear disarmament, working on AI safety or trying to get NASA to set up an asteroid deflection programme don't seem like fundamentally unreasonable things to do.
The most concerning X-Risks are nuclear war, pandemics and AI
I've actually got a fair amount of time for each of these.
Nuclear War
I know it's not fashionable to worry about this anymore(5) but, if we're honest, we probably should. The US and Russia between them still possess thousands of warheads and, even if that's a lot fewer than at the height of the Cold War, it's still probably enough to cause a nuclear apocalypse, kill billions and plunge us back into barbarism, and potentially full-on extinction. It's very easy to see how a nuclear exchange between lesser nuclear powers - India and Pakistan - could also kill hundreds of millions.
Putin reminded us last year that war has not been banished. The number of times we came close to a nuclear war during the Cold War is scary - do read the link, if you're not familiar with it. The global geopolitical environment could easily get more, rather than less tense, with new, competing hostile power blocs - if not in the next decade, then in the next century. Ultimately, as long as we still have thousands of these things sitting around, we might use them.
I guess unless you're actively involved in the military, foreign policy or defence environment, I'm not sure what you can usefully do here, but I certainly agree it's a valid X-Risk.
Pandemics
With memories of COVID still vivid, this also doesn't seem to need much justification. I think extinction due to a natural disease is unlikely - but I know that various people in the EA community (and in many other places) are actively worried about gain of function research, which seems an eminently sensible thing to worry about.
Also, even if it doesn't wipe us out, reducing the impact of the next pandemic - or stopping one getting started - would be massively worthwhile. A really not-very-deadly disease still managed to kill millions of people globally, force massive lockdowns and do economic damage we're still recovering from. It could easily be much worse.
There are also some really clear things that one could be doing here, from biomedical research to straightforward, realistic things to lobby government to do (most countries remain woefully unprepared for the next pandemic, just as we were for the last one).
AI
I'm probably most sceptical here. I feel some of the EA scenarios regarding fast-take-offs, the Singularity and super-intelligent AI rely on a whole range of scenarios that seem really very unlikely. I'm not convinced that AIs will be able to keep designing more and more intelligent AIs without hitting various constraints and limits; I also don't see any justification for the idea that superintelligence equates to an infinite Charisma score that would allow it to talk people into doing anything they want.
That being said, humans are incredibly dumb, and I could imagine us letting loose large numbers of killer robots without proper controls or alignment, or programming some kind of rapidly evolving agentic virus, much more easily than some of the Singularity scenarios - and that could end very badly, given how fragile the modern technological world is. AI is evolving very rapidly, it will surely have massively disruptive societal impacts sooner or later, and figuring out how to make programmes do what we actually mean them to do is likely to be a useful thing to do. And, as I understand it, some of the people involved think extinction here is very unlikely (Scott Aaronson says 2%), but worth spending some time worrying about anyway, which seems hard to argue with.
In the meantime you can make some paperclips.
Conclusion
When there's a movement, or ideology, that you find compelling in some ways but flawed in others, I think it's a constructive approach to systematically work out where, why, and on what levels you differ. And I imagine there are others like me who are in some ways familiar with this movement, but who haven't been able to put a finger on why they don't agree with it.
To those who do support it - I hope this won't be taken as a personal attack. I agree with Scott: 'I think the effective altruists are genuinely good people.' I think they, both individually and as a movement, do a great deal of good. And, unlike some charities or movements(6), I don't think that the more unusual causes they support do - at least at the current level of political power the movement possesses - any real harm(7).
But overall, I disagree with some of the core assumptions, particularly at Tier 2, and at the conclusions that flow from that. I don't think that's a problem: there's a lot of good that needs doing in the world, and a lot of space for different people to approach it in different ways. Overall, I'm happy to look positively at the good that EA does, without the need for full agreement or subscribing to all of its premises myself.
(1) The fact that I have views on Arminianism as opposed to Calvinism does not disprove this general point.
(2) I would argue that it underlies much of progressive thinking also, underpinning its ultimate barrenness, but that is another story.
(3) Literal, not metaphorical, brother.
(4) There may be environmental or climate change considerations.
(5) I'll be honest, I spend approximately zero amount of my time worrying about this.
(6) For example, I will not donate to NSPCC, Unicef or similar, even at the level of 'giving to a colleague doing a sponsored event', due to their hostility towards the family and other political views I strongly disagree with.
(7) I would accept this could change significantly if the movement gained real influence or elements of its priorities began being adopted by major political parties or pressure groups.