17 August 2022
We explore why, despite there being plenty to admire about Effective Altruism, many are still uneasy about the movement’s ideas and influence.
There has been a spate of articles about Effective Altruism (EA) recently. Some have been good, like these New Yorker and Time magazine profiles of Will MacAskill, or this Dylan Matthews Vox piece about Sam Bankman-Fried and the evolution of EA. And some have been very much not good, like this rather execrable Wall Street Journal opinion piece. All of them, however, have given rise to a lot of discussion about EA – which has prompted me to pick up some ideas I have had sitting around in draft form for a while now about my own views of EA and why, despite finding plenty to admire about the movement and its ideas, I just can’t quite get on board the EA train.
I should make it clear that the question posed in the title of the article is a genuine one. The nature of online debate these days tends to imply that if you don’t agree with something you are immediately in the critical camp, and that your job is merely to whoop appreciatively when someone posts the latest “take-down” of whatever the other camp says. But that’s not really how I feel about EA, and the reason I didn’t go for the more obviously clickbait title of “Why I am not an Effective Altruist” is that in all honesty I am still trying to put my finger on precisely what it is about EA that I disagree with or find off-putting. (And I feel as though if I am going to reject it or be critical, I need to be clear with myself about why).
This whole question has added personal relevance for me, as there is a definite “Sliding Doors” scenario in which I could have easily become involved in EA. Back in the mid-2000s, when many of the subsequent key figures of EA (people like Will MacAskill and Toby Ord) were at Oxford University formulating their ideas and getting to know one another, I was also there doing a postgraduate philosophy BPhil. I didn’t have any real interest in philanthropy at that point (my philosophical interests back then were more focussed on the theoretical foundations of set theory because, oh yes, that’s how cool I am). I was also somewhat preoccupied with having a slow-motion mental health crisis at the time (definitely a story for another day…) However, I did go as far as attending some of the early seminars organised by Nick Bostrom’s Future of Humanity Institute, so it isn’t much of a stretch to imagine that I might have crossed paths with some of these people and ended up falling into the origin story of EA. And I suspect that if I had come across the movement at that point, I might well have become an ardent true believer.
What puzzles me, however, is that I have subsequently become very interested in philanthropy (almost to a fault, some might say…), yet I don’t necessarily feel drawn to the EA movement, despite the fact that it sits squarely in my wheelhouse and would almost certainly have appealed greatly to a younger version of me.
In this piece I want to unpick some of the reasons that this might be the case. This obviously runs the risk of coming across as self-indulgent wallowing, but hopefully all the things I am saying also function as useful broader points! (I should also point out that whilst this article may go into more than enough depth for most readers, I am also aware that there is a lot more complexity to many of these issues and that adherents of EA may – quite justifiably – feel as though there is a lot more to be said about them!)
EA doesn’t reflect the reality of philanthropy
Right, let’s get this one out of the way up front. One of the surface arguments often put forward against EA is that it doesn’t reflect the practical reality of how people make decisions about philanthropy, or what motivates them to give, and as such it is little more than an intellectual parlour game (albeit an interesting one). However, this criticism is clearly misguided. EA is, by its own admission, normative rather than descriptive: it is not trying to analyse how philanthropy is currently done, but instead outlining a view of how it should be done in an ideal world. Hence the fact that most philanthropy doesn’t accord with EA principles is, it is claimed, not so much a failure of EA as a failure of current approaches to giving.
There is nothing wrong with taking a normative view of philanthropy: like most people, I have my own views about what giving “should” be like or how it could be “better” and these unavoidably reflect my own worldview and preferences. It is also worth pointing out that a small but increasingly significant minority of giving does now follow EA principles in any case, so the theoretical framework of EA arguably does reflect philanthropic practice (albeit only a subset of it).
EA doesn’t reflect the reality of how moral decisions are made
A more compelling critique of EA (to my mind) centres not on its failure to reflect the reality of philanthropic practice, but on the more fundamental idea that it fails to reflect the reality of how we, as humans, make moral decisions. This goes back to a highly influential critique of Utilitarianism by the philosopher Bernard Williams (since, as a form of Utilitarian theory, EA is susceptible to these arguments). Williams’s basic argument is that Utilitarian theories – particularly purely consequentialist ones that treat the outcomes of our actions as the sole measure of their moral worth and which demand that we make the objective maximisation of overall wellbeing our only goal – set an unrealistic and unattainable bar in asking us to adopt an impersonal perspective. This is not, Williams argues, how we actually approach questions of morality, since our own role as an agent and our own proximity to issues are inextricable factors. Furthermore, trying to deny this reality by forcing people to “remove themselves from the decision-making process” (as EA asks us to do) doesn’t just go against the grain of human nature, but represents an “attack on our integrity”.
The point often made by those who have sympathy with Williams’s critique is that if we take Utilitarianism seriously, it requires that we be willing to accept that we should simply abandon our own projects and commitments if the Utilitarian calculation shows that the most overall good would be delivered by someone else’s actions – and that this is unreasonable when those projects and commitments might represent the very foundations of our identity as individuals. Hence Williams himself asks:
“How can a man, as a utilitarian agent, come to regard as one satisfaction among others, and a dispensable one, a project or attitude round which he has built his life, just because someone else’s projects have so structured the causal scene that that is how the utilitarian sum comes out?”
And he goes on to argue that:
“This is to alienate him in a real sense from his actions and the source of his action in his own convictions. It is to make him into a channel between the input of everyone’s projects, including his own, and an output of optimific decision; but this is to neglect the extent to which his projects and his decisions have to be seen as the actions and decisions which flow from the projects and attitudes with which he is most closely identified. It is thus, in the most literal sense, an attack on his integrity.”
Now, of course, it is open to the EA advocate to respond that they are happy to take such an impersonal perspective, and that they would be comfortable with abandoning their own projects or commitments in favour of others’, if faced with evidence that this was the optimal course of action. However, I also think that this should be seen as an active decision on their part to adopt a particular (and arguably rather unnatural) mode of moral decision-making, and that any such EA advocate should accept that it is unreasonable to expect or demand that everyone follows suit. (I would suspect that many moderate EAs would be willing to accept this, but there are definitely some evangelicals for whom the only acceptable position seems to be that everyone adopts an EA approach and this, to me, seems unreasonable).
Some versions of EA are normative about how you give, but not about whether you give in the first place
Another potential issue (which I will outline here, as it is also heavily rooted in the philosophical underpinnings of EA) is that whilst all EA is normative to some extent when it comes to how people give, not all of it is normative about whether they should give. Some of it is, of course: most notably, anyone who subscribes to the arguments put forward by EA intellectual godfather Peter Singer in influential papers and articles such as “Famine, Affluence and Morality” or “What Should a Billionaire Give, and What Should You?” is likely to believe that there is a strong moral obligation on us to give as a way of addressing suffering around the world (and presumably to do so in a way that reflects EA principles when we do). However, other prominent EA figures such as Will MacAskill have at times seemed much more ambivalent on this question. Yes, they argue, we should ideally adopt an EA approach when it comes to choosing what to give to, but whether or not we choose to give at all is a prior question that is left up to the individual.
Some have taken things even further, arguing that it is in fact worse to give but to do so “badly” (from an EA perspective) than it is not to give at all. This is a view rooted in the work of the moral philosopher Derek Parfit, who argued that in some cases an “all or nothing” principle applies: it is acceptable to do nothing at all, but not acceptable to take action except in a way that produces the maximal possible outcome. The philosopher Theron Pummer has formulated an EA version of this principle, arguing that:
“In many cases it would be wrong of you to give a sum of money to charities that do less good than others you could have given to instead, even if it would not have been wrong of you not to give the money to any charity at all.”
This may seem deliberately contrarian, but EAs subscribing to this sort of view are in fact (but perhaps without realising it) tapping into a rich historical tradition of believing that giving in the “wrong way” is worse than not giving at all, due to the negative unintended consequences it produces. J A Hobson, for instance, argued that:
“It is more socially injurious for the millionaire to spend his surplus wealth in charity than in luxury, for by spending it in luxury, he chiefly injures himself and his immediate circle, but by spending it in charity he inflicts a graver injury upon society… It substitutes the idea and the desire of individual reform for those of social reform, and so weakens the capacity for collective self-help in society.”
Likewise Andrew Carnegie claimed in his highly influential essay “Gospel of Wealth” that “of every thousand dollars spent in so-called charity today…it is probable that $950 is unwisely spent; so spent, indeed as to produce the very evils which it proposes to mitigate or cure,” while the economist William Stanley Jevons went as far as to argue that “much of the poverty and crime which now exist have been caused by mistaken charity in past times.”
My problem with this is that the sort of ideological absolutism which dictates that people either give in the “right” way or don’t do it at all seems like a clear case of throwing the baby out with the bath water. As someone who believes firmly that whilst we do need to make philanthropy “better” in various ways, we also just need more giving full stop, I think we need to have a certain amount of pragmatism about working with the world as we find it (even if our eventual aim is to change that world in order to make it closer to how we think it should be).
Is EA just another in a long line of attempts to “rationalise” philanthropy?
The dose of historical perspective at the end of the last section brings me to another one of my issues with EA: a nagging suspicion that it is in fact just another in a very long line of efforts to make philanthropy more “rational” or “effective” throughout history. The C18th and early C19th, for instance, saw efforts to impose upon charity the principles of political economy (the precursor to modern economics which focused on questions of production, trade and distribution of national wealth – as exemplified in the work of writers such as Adam Smith, Thomas Malthus and David Ricardo). Then in the C19th and early C20th the Charity Organisation Society and Scientific Philanthropy movements waged war on the perceived scourge of emotionally driven “indiscriminate giving”.
This perhaps bothers me more than most people because I spend so much of my time noodling around in the history of philanthropy. It also isn’t a reason to dismiss EA out of hand: the fact that it might have historical precedents doesn’t invalidate it, it just means that we should be more critical in assessing claims of novelty and uniqueness. It also suggests to me that there would be value in providing greater historical context for the movement and its ideas. Doing so may well show that EA is genuinely novel in at least some regards (the idea of total cause agnosticism, for instance, is something that one might struggle to find in previous attempts to apply utilitarian thinking to philanthropy). But the other thing the history of philanthropy tends to show is that everyone thinks at the time that their effort to make giving “better” or “more rational” is inherently and objectively right, and it is often only with the benefit of hindsight that it becomes clear quite how ideologically driven and of their time they actually are. For my money, it is still an open question as to whether future historians will look back on EA in the same way that we look back on the Charity Organisation movement today.
The other thing that historical perspective brings is the ability to trace longer-term consequences. And this is particularly important here, because efforts to make charity more “rational” have historically had an unfortunate habit of producing unintended consequences. The “scientific philanthropy” movement of the early 20th century, for instance (which counted many of the biggest donors and foundations of the era among its followers) had its roots in the 19th century charity organisation societies, which were primarily concerned with addressing inefficiency and duplication of charitable effort at a local level, and ensuring that individual giving was sufficiently careful to distinguish between ‘deserving’ and ‘undeserving’ cases (as outlined further in this previous article). Over time, however, the influence of new ideas about applying Darwin’s theories of evolution by natural selection to human societies led to a growing number of scientific philanthropists flirting with (or, in some cases, outright embracing) theories of eugenics; including deeply problematic, pseudo-scientific ideas about race and approaches such as forced sterilisation. Which is not to say that Effective Altruism will follow the same template, but it should provide a warning from history that “improving” philanthropy can be a dangerous business. (And some would argue that EA has already started to develop problematic consequences of its own, as we shall see in the next section).
Longtermism
A lot of the recent focus on EA has centred on the growing influence of Longtermism: the idea that we should dramatically extend the time horizons over which we consider the implications of our actions, and accord potential future lives equal moral value to currently existent ones. Longtermism as a theory (and, some would say, an ideology) is distinct from EA: however, both share strong intellectual roots in the University of Oxford’s philosophy department and Longtermism has come to exert a clear hold over at least a portion of the EA community (to the extent that some now characterise EA as a movement that has evolved from one primarily concerned with buying mosquito nets in the developing world to one that is primarily concerned with addressing existential risks such as the risk of superintelligent AI deciding to enslave humanity).
EA’s embrace of Longtermism has brought controversy. Although thinking about the future and taking a longer-term view seems like an intuitively good thing to do, critics argue that the combination of utilitarian ethics and extreme time horizons can lead to dangers. In particular, there is the risk of what is known as “Pascal’s Mugging” – where potential events that are highly improbable, but which would have wide-reaching consequences if they were to happen, are deemed to be more deserving of our attention than other less severe, but far more likely, events. So, for instance, it may work out on some EA calculations that addressing future risks such as collisions with extra-planetary objects or the possible development of Artificial General Intelligence (AGI) is more important than addressing immediate challenges such as global poverty or hunger; since whilst the latter may affect millions of present-day lives, the former can be argued to affect billions (if not trillions) of potential future lives.
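To make the arithmetic behind this worry concrete, here is a minimal sketch of the kind of naive expected-value comparison critics have in mind. The probabilities and numbers of lives below are invented purely for illustration – they are not drawn from any actual EA cost-effectiveness estimate.

```python
# A deliberately crude sketch of the expected-value logic behind "Pascal's Mugging".
# All numbers are invented purely for illustration, not taken from any real EA estimate.

def expected_lives_affected(probability: float, lives_at_stake: float) -> float:
    """Naive expected value: probability of the event times the lives it would affect."""
    return probability * lives_at_stake

# A near-certain, present-day problem affecting millions of people (hypothetical figures).
poverty = expected_lives_affected(probability=0.99, lives_at_stake=10_000_000)

# A highly speculative existential risk, but with astronomically many potential
# future lives notionally at stake (again, hypothetical figures).
agi_catastrophe = expected_lives_affected(probability=0.000001, lives_at_stake=10**15)

print(f"Present-day poverty:      {poverty:.3g} expected lives")         # ~9.9 million
print(f"Speculative AGI scenario: {agi_catastrophe:.3g} expected lives")  # ~1 billion

# On this naive calculus the speculative scenario dominates by a factor of roughly 100,
# which is exactly the pattern that critics of Longtermism worry about.
```

The point of the sketch is simply that a tiny probability multiplied by an astronomically large number of potential future lives can swamp a near-certain benefit to millions of people alive today – which is why so much hinges on how (and whether) those speculative probabilities can be justified.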
Many within the EA movement are well aware of the dangers of Pascal’s Mugging; and there are probably few who would take such a full-blooded approach to Longtermism that they would subscribe to the sort of thinking I have just outlined. However, even a more moderate version of Longtermism can bring challenges: the EA organisation 80,000 Hours, for instance, found itself at the centre of online controversy earlier this year after a post it published appeared to downplay the importance of climate change (relative to other areas EA donors could focus on). The post itself was well-researched and relatively considered, and it is likely (as with so many online debates and controversies) that many people were taking issue with it based on the headline rather than the contents. However, the reaction shows that plenty of people feel uncomfortable with the conclusions that Longtermism can lead to and are concerned that it provides a licence for people to dismiss or minimise the challenges facing communities around the world today on the basis of theoretical speculation about the future.
This tension reminds me of a game I used to play as a kid, where I would prime myself to zoom out to a galactic perspective and timescale (usually by thinking about something like Asimov’s Foundation series, or Olaf Stapledon’s Last and First Men, since I was a massive retro sci fi nerd). From this viewpoint my concerns as an individual, and even the concerns of large swathes of the present human race, usually seemed somewhat trivial. I would then force myself to flip back to a human-level perspective in the immediate present (perhaps by thinking of someone dealing with family trauma or having to survive in a difficult situation such as a conflict zone or a famine) and realise that my ability to dismiss their suffering seemed unbelievably heartless and cold (and probably the product of a decent amount of privilege, though I wouldn’t necessarily have thought in those terms at that point).
I found the effect of the shift from extreme macro to micro dizzying and kind of fun, but it feels like this tension between these two different perspectives is now at the heart of a lot of the debate over longtermism. My unease stems from the feeling that it was one thing for me to experiment with taking a galactic view and thereby dismissing large swathes of humanity as a slightly weird 11-year-old who was bored on long car journeys, but it is another thing when some of the most powerful people in the world appear to be letting the same principles guide their approach to life. (See, for instance, Elon Musk’s recent appreciative tweet about EA guiding light Will MacAskill’s new book on Longtermism, What We Owe the Future).
Worth reading. This is a close match for my philosophy. https://t.co/cWEM6QBobY
— Elon Musk (@elonmusk) August 2, 2022
Partly as a result of MacAskill’s book, Longtermism seems set to get a lot more attention in the near future. And that is by no means a bad thing – there is plenty to admire about the willingness of Longtermists to think about (erm) long-term consequences and what they might imply for our actions in the present. I, for one, am very much looking forward to reading what MacAskill has to say on the subject, as I suspect it might be more balanced than some of the discussion around Longtermism to date. However, we should also heed the growing number of critical voices, who raise concerns about Longtermism’s ideas and influence.
EA fails to address structural issues (and arguably glorifies the structural status quo)
One of the long-standing critiques of EA is that it is a movement rooted in the status quo, since it asks us only to maximise the good we can do through giving within the existing systems and structures of our societies, rather than seeking to change those structures and systems. For those who believe that only fundamental structural reform can address issues such as global inequality this is a major failing of EA. As the philosopher Amia Srinivasan wrote in a widely read critique of Will MacAskill’s 2015 book Doing Good Better:
“Effective Altruism doesn’t try to understand how power works, except to better align itself with it. In this sense it leaves everything just as it is. This is no doubt comforting to those who enjoy the status quo – and may in part account for the movement’s success.”
In many ways this reflects a wider critique of philanthropy: namely that it is inherently a reflection of existing inequalities and power structures, and therefore will always be part of the problem rather than part of the solution when it comes to addressing such issues at a fundamental level. There are those within the world of philanthropy who are trying to overcome these challenges, by finding models and approaches that allow for genuine structural reform rather than simply addressing the symptoms of structural issues (perhaps through supporting social movements or embracing participatory methods which empower recipients). Whether EA can do something similar (and, indeed, whether its followers even want to) very much remains to be seen. There are some in the EA community who clearly acknowledge these challenges and have tried to adapt EA frameworks to make them more amenable to supporting efforts to drive fundamental social change. This is difficult, however, as EA’s core focus on measurement demands that the value of these sorts of efforts must be quantified in such a way that they come out better in whatever adapted utilitarian calculus we are using than interventions whose impact is clearer and more obviously measurable, and many social change efforts don’t necessarily lend themselves to this sort of measurement. (Although one of the potentially positive things about the growing influence of Longtermism on EA is that it might open up more space for thinking about how to assess the value of “upstream” interventions like campaigning and political advocacy – indeed, from what I have read this is an important part of Will MacAskill’s new book so I will be very interested to see what he has to say).
The pragmatic solution to this problem (which you may be unsurprised to hear at this point appeals to me) is for EA advocates (and indeed all other philanthropists) to seek a balance between addressing the symptoms of problems in society through direct interventions and addressing their underlying causes through more fundamental reform efforts, rather than seeing them as some sort of zero-sum game. This is hardly a new realisation, but in her paper “Severe Poverty as an Unjust Emergency” the philosopher Elizabeth Ashford puts it neatly in terms of fundamental “duties of justice” to address the causes of poverty and “backup duties” to give to charity in order to address its symptoms in the meantime:
“The Victorian philanthropist should have used his wealth and influence to support the impetus for structural reform that eventually led to the Factory Acts. In the meantime, he should also have acknowledged the importance of donating to organizations that supported destitute children and enabled them to attend school and so on.
Affluent agents should recognize that the persistence of severe poverty constitutes an ongoing structural human rights violation, which imposes on us an urgent shared general duty of basic justice to implement the structural reforms that would achieve its abolition. Until this is achieved, we also have urgent duties to support NGOs in providing for a vast number the only available opportunity to avoid a drastic and cheaply preventable harm that is likely to blight or altogether destroy their lives.”
EA seems increasingly out of step with the focus on rebalancing power dynamics in philanthropy
Related to the point above about whether EA is too heavily rooted in the status quo is a concern that, at a time when the rest of philanthropy seems to be coming to terms with the need to address some of the fundamental asymmetries of power in the traditional relationship between donor and recipient, EA very much represents a top-down (and arguably highly paternalistic) approach to giving. Many argue that the traditional models and approaches we have for philanthropy put too much emphasis on the donor’s wishes and ability to choose, and give little or no recognition to the voices of recipients. As such, the danger is that they result in paternalistic (albeit often well-intentioned) decisions being taken about certain people and communities (particularly ones who have historically been marginalised), rather than decisions being taken with them or by them.
These problems are exacerbated when the donors and recipients come from clearly distinct communities, as is often the case, because the power dynamics tend to be even more skewed. This was historically the case when White philanthropists in the US in the first half of the 20th century gave to Black-led organisations (as detailed in the work of scholars like Megan Ming Francis, Erica Kohl-Arenas and Maribel Morey). It is also still too often the case when large funders and philanthropists from the global north try to fund smaller organisations in the global south. And EA doesn’t really fare very well from this perspective, since the movement is not exactly diverse or representative. The most recent survey of EA demographics in 2020 found that 76% of EA followers were white, 71% were male, 82% were 34 or younger and a disproportionate number had attended an elite university. (Although I will say in Effective Altruism followers’ favour that the reason I know this is that they bother to self-examine their own failings in a way that many other parts of the philanthropy world would not – and I suspect many traditional philanthropy organisations wouldn’t fare that much better in terms of being representative). This leaves us with the sense of EA as a movement in which a cadre of youthful (mostly white, mostly male) philosopher-kings get to decide for the rest of humanity what its most pressing problems are and how to address them. As Gideon Lewis-Kraus rather wryly notes in his New Yorker piece:
“It does seem convenient that a group of moral philosophers and computer scientists happened to conclude that the people most likely to safeguard humanity’s future are moral philosophers and computer scientists.”
I don’t know about you, but this makes me just a tad uncomfortable.
EA is too cult-like (and too influential)
Perhaps more fundamental than any of the principled points outlined above when it comes to explaining my lack of attraction to EA, however, is a sense that it just feels a bit… well… cult-like. I’m aware that this may seem even more broad-brush than everything I have said already, and that the EA community contains a wide range of viewpoints (even if it isn’t that diverse in other ways), but I just can’t shake this feeling.
As a movement that counts among its numbers a high proportion of tech bros and Oxford/Stanford philosophy students, there is a perhaps unsurprising glorification of a type of showy, contrarian academic intelligence that I used to find deeply alluring when I was 23, but which I increasingly just find quite tiresome. Some EA advocates I have come across (though by no means all) also have a kind of missionary zeal and a willingness to write off all other approaches to philanthropy as wrong-headed (in much the same way as the Charity Organisation Society used to back in the 19th century, which led to it gaining as many fierce critics as it had supporters). And there are even whistleblower accounts of the inner workings of the EA community, with rumours of secret Google docs and WhatsApp groups in which the leaders of the movement discuss how to position themselves and how to hide their more controversial views or make them seem palatable. I have no idea how much of this is true, and how much is overblown conspiracy theory, but it certainly doesn’t make the whole thing feel any less cult-like.
The cultishness might not even have been that much of a problem if EA had remained largely the preserve of postgraduate philosophy seminars, with limited intrusion into the real world. But its influence is growing considerably: the ideas of EA and Longtermism now exert a powerful grip on many prominent figures in the tech world, and are slowly creeping into policymaking as well. Meanwhile Sam Bankman-Fried, perhaps the most prominent (and wealthiest) EA donor, is spending millions trying to influence US elections, in part in order to get EA-sympathetic candidates into positions of power. Put together, this does slightly make Effective Altruism seem like SPECTRE in the recent Bond movies (i.e. a shadowy network that has infiltrated global institutions at all levels in order to influence world events…)
Attempts to address critiques make EA more acceptable, but less distinctive
It is worth saying that many of the questions and challenges I have raised here are ones that various sections of the EA community have themselves recognised and are trying to address. For some this might mean putting more emphasis on finding ways to measure the value of upstream interventions such as social change campaigning, whilst for others it might mean softening EA’s demand that we take an entirely neutral stance and allowing individuals some element of choice about which cause areas to focus on. The problem I find with all of these efforts is that the more they try to broaden EA’s appeal, the less distinctive it becomes. I can certainly get on board with an approach to philanthropy which allows people some element of personal choice about which causes are important to them, then suggests that when they have chosen they should try (as far as possible) to give their money to more effective organisations, measured in such a way as to encompass a broad definition of “effectiveness”. But then again, it is quite hard to see who could disagree with such an approach. As Amia Srinivasan puts it:
“If Effective Altruism is simply in the business of getting us to be more effective when we try to help others, then it’s hard to object to. But in that case it is also hard to see what it’s offering in the way of fresh moral insight, still less how it could be the last social movement we’ll ever need.”
EA crowds out other philosophical thinking on philanthropy
One of my other gripes about EA (though admittedly a niche one) is that it has become so dominant within the small (perhaps not even fully existent) field of philosophy of philanthropy that it risks crowding out other philosophical perspectives. This isn’t really EA’s fault, and perhaps we shouldn’t blame it for its own success, but it does present an issue. Whilst the overall amount of focus on philanthropy from a philosophical perspective may have increased significantly due to the advent of EA (both in terms of developments of its ideas and critiques of them), the range of questions being addressed is framed by the issues that EA deems to be important. The problem is that this doesn’t represent the full spectrum of issues that have historically been of importance to philosophers when it comes to considering philanthropy, so we may be limiting our horizons. (For examples of some of the types of questions we might be overlooking, check out this Philanthropisms podcast episode on the philosophy of philanthropy).
I resent EA because I am not willing to make those kinds of sacrifices myself
The other thing I need to consider, of course, is the possibility that the fault lies not with EA, but with me. Is part of my problem with EA that I find it awkward because it makes demands of its followers that I am not willing to entertain? The movement asks (but does not demand) that its followers commit to give away a significant proportion of their income during their lifetimes to organisations chosen according to EA principles, and many choose to do so. Which is highly admirable; and ensures that EA is not just a talking shop, but a movement based on action.
Some EA adherents make even more dramatic sacrifices: a growing number, for instance (including Vox editor Dylan Matthews), have voluntarily chosen to donate one of their kidneys, on the basis that the personal cost to them as a donor is outweighed by the value of saving someone else’s life (when it is possible to live perfectly well with one kidney). It is hard not to see this as an incredibly selfless and altruistic course of action, yet I am keenly aware that I am sitting here with two healthy kidneys and no short-term plans for elective surgery to donate either of them. I am tempted to fall back on the excuse that many of the sacrifices EA demands are easier to make when you are a 20-something graduate than they are when you are a 40-year-old father of two – and there is certainly some truth in this – but ultimately that feels like a cop-out. There is also the temptation to mock those who have made these kinds of sacrifices, or to caricature them as wacky outsiders in order to deflect attention from our own lack of action. Which reminds me of Henry Wadsworth Longfellow’s observation that:
“We often excuse our own want of philanthropy by giving the name of fanaticism to the more ardent zeal of others.”
And this is one of the reasons I do my best not to lapse into snarkiness even when critiquing EA. If its followers are willing to make sacrifices that I myself am not willing to make, on the basis of their adherence to EA principles, then that deserves my respect even if I don’t necessarily agree with all of their arguments. (And, of course, I am sure that plenty of EA followers are not quite so self-sacrificing in any case…)
Ultimately, I think I have to concede that EA advocates who have made significant commitments to give during their lifetimes (whether that is in the form of cash or bodily organs) just are more generous and altruistic than me. However, it does not necessarily follow that their views about how best to give should be accorded greater moral worth or seen as immune from criticism. I can take their sacrifice as a constant challenge to myself to do better, whilst still taking respectful issue with their ideas.
CONCLUSION
For a complex mix of all the reasons in this article, I wouldn’t count myself as a follower of Effective Altruism. But I also wouldn’t consider myself an out-and-out critic either. I’m certainly wary of EA’s growing influence and dominance – partly because I would be wary about any one viewpoint holding such sway, and partly because of some of the specific concerns about EA highlighted above. However, I also think that some of the best and most interesting work in philanthropy at the moment is being done by EA organisations (perhaps because they have such seemingly vast resources!) It is certainly one of the most considered and consistent intellectual frameworks for thinking about how to do philanthropy well, and I have a lot of time, too, for the willingness of many EA followers I have encountered (though not all, it should be said) to analyse their own failings and to engage with criticism of the movement.
For my money, even if (like me) you don’t feel as though you can wholeheartedly subscribe to EA, there is still a huge amount of value in applying some of its perspective and tools to other philanthropic approaches. And even if you disagree with everything about EA, it does at the very least provide an important benchmark or challenge for other approaches to giving; as it forces us to answer the question “if I choose not to do this in an EA way, why not?” You also have to respect the dedication of many EA followers, and the commitments they are willing to make as a result of their beliefs.
So, I will probably continue to sit in the “agnostic, erring towards politely sceptical” camp for now when it comes to EA. At least until the superintelligent AI takeover of humanity, of course; at which point I will repent and beg forgiveness from the EA gods just like everyone else.