Philanthropy & Civil Society in a Post-Work Future?

18th October 2024

In this article, we consider what role philanthropy and civil society might play in a “post work” future where AI and automation have fundamentally altered our society and economic models.

Dario Amodei, the CEO of Anthropic, recently published a blog post entitled “Machines of Loving Grace” (a title taken from the 1967 poem “All Watched Over by Machines of Loving Grace” by Richard Brautigan). In it, Amodei takes seriously the prospect of the emergence of so-called “powerful AI” (i.e. AI that outperforms human capabilities in almost all regards and exhibits something like generalised superintelligence), and puts forward what he sees as some of the potential benefits for humanity – as a counterweight to the fact that this kind of scenario is often presented as a dystopian nightmare to be avoided at all costs.

Image created by Microsoft Copilot

I like the fact that Amodei explicitly tries to avoid what he calls “sci fi baggage”, and instead focuses on potential real-world upsides of powerful AI. I also like the fact that he displays an unusual level of intellectual humility for someone from the tech industry – acknowledging that his thoughts might be totally wrong, and suggesting that it would be good to bring together a group of cross-disciplinary experts to discuss them in a more informed way. I’m not sure, however, that I can be as relaxed as he is about the prospect of powerful AI. This bit of his concluding argument in particular rang some alarm bells for me:

“Basic human intuitions of fairness, cooperation, curiosity, and autonomy are hard to argue with, and are cumulative in a way that our more destructive impulses often aren’t. It is easy to argue that children shouldn’t die of disease if we can prevent it, and easy from there to argue that everyone’s children deserve that right equally. From there it is not hard to argue that we should all band together and apply our intellects to achieve this outcome. Few disagree that people should be punished for attacking or hurting others unnecessarily, and from there it’s not much of a leap to the idea that punishments should be consistent and systematic across people. It is similarly intuitive that people should have autonomy and responsibility over their own lives and choices. These simple intuitions, if taken to their logical conclusion, lead eventually to rule of law, democracy, and Enlightenment values. If not inevitably, then at least as a statistical tendency, this is where humanity was already headed. AI simply offers an opportunity to get us there more quickly—to make the logic starker and the destination clearer.”

That’s all obviously great and lovely, but it does seem to amount to saying, “we should just chill out about the risks of developing powerful AI, because these systems will probably decide to adopt values and approaches that broadly mirror the way we would have done things anyway.” Would they, though? I’m pretty doubtful. For one thing, I would be wary about making any assumptions about what an artificially created superintelligence is likely to think or how it will act, even if it is guided by a concern for the “best interests” of humans. I also think the assumption that an AI superintelligence would naturally gravitate towards “Enlightenment values” reflects a Whiggish interpretation of historical progress and almost Steven Pinker-ish levels of belief in the necessary triumph of rationalism. I don’t think I would choose to be so sanguine.

 

However, this is all slightly tangential to my main interest, which is section 5 of Amodei’s blog, where he considers “work and meaning”. The basic premise here is the assumption that powerful AI has evolved to such a degree that it is possible to automate the vast majority of tasks that currently require humans to undertake paid work in order to complete them. Which is an idea (or a fear) that people have been entertaining for a long time. As a 2016 report by the UK Government’s Science and Technology Select Committee noted, “Concerns about machines ‘taking jobs’ and eliminating the need for human labour have persisted for centuries.” Most notable in this regard, of course, were the original Luddites (although before anyone feels the need to point it out, I am aware that the Luddites were protesting more against the improper use of new technology than against the technology itself, so it is not a perfect example). And famously, in 2013, researchers at Oxford University’s Martin School produced a paper ranking over 700 named professions in terms of how likely they were to be replaced by computers. (The BBC even produced a handy tool that allows you to type in your own profession and find out how likely it is that your job will be taken by a robot in the future …).

If you accept the premise, the key question then is: “in an imagined future in which AI does everything for us and we don’t need to work, what do we all do?” Amodei acknowledges that this is probably “more difficult than the other questions” he considers, since it goes to the heart of some pretty fundamental issues about what it means to be human, and I agree with him. I also agree (strongly) with him when he argues that “meaning comes mostly from human relationships and connection, not from economic labor”, and when he concludes that “at [the point where powerful AI emerges] our current economic setup will no longer make sense, and there will be a need for a broader societal conversation about how the economy should be organized.” Which all brings me to my point: what is the role of philanthropy and civil society in all of this? Civil society is founded on notions of association, care, agency, connection; and philanthropy (arguably) is a reflection of the basic human drive towards altruism, as well as providing an alternative model for meeting needs that is separate from either the state or the market. So they both feel very relevant to this debate. However, what a post-work future might mean for philanthropy and civil society tends to be (at best) a sidebar in wider discussions. So what might we be able to say about this question? Here are a few ideas (all of which, admittedly, need a lot more development!)

 

Will we even need philanthropy?

Let’s assume for a moment that the techno-optimists are right and that we arrive at the sunlit uplands of an automated future in which none of us need to work (in a traditional sense) and we have also transformed our economic models (potentially through some form of Basic Income, of which more in a moment). Let’s further assume that this is a world in which there is little or no poverty, inequality or ill health. The question then is: would there still be any work left for charity or philanthropy to do?

The idea that “in an ideal world, there would be no philanthropy or charity” is one with a long history (and which I have explored in detail in a previous article). For most of that history, of course, it has been little more than a thought experiment offered up by utopian idealists, since …*checks history books*… we haven’t actually managed to create a perfect society at any point that I am aware of. In these thought experiments, the state is often posited as the force that will obviate the need for philanthropy: the argument being that through a sufficient level of taxation and state-provided welfare services, our needs would be taken care of “from cradle to grave”, so there would be no need to rely on voluntary giving by individuals for anything. This is certainly the argument put forward by some of the figures involved in the creation of the UK welfare state shortly after WWII, such as Aneurin Bevan, who claimed that the birth of this welfare state would mean the death of what he saw as philanthropy’s “patchwork of local paternalisms”, and rejoiced in the fact. I think this is misguided for at least two reasons. The first is that, in practical terms, that isn’t how things panned out. Granted, there was a decline in philanthropy in the UK in the 1950s, but then, as civil society organisations adapted their role and their work in light of expanded state provision, it became clear that there was still a great need for them, and levels of philanthropic giving rose accordingly. In part, this was down to a growing realisation of the limitations of the state, and that there were still gaps in provision and places where the state needed to be challenged or where there was value in trying different approaches. If we assume for a moment that our automated future is anything less than entirely utopian, then presumably there will be a similar need for CSOs and philanthropy to cover gaps, add value, and challenge provision here?

Some might claim that this analogy is fallacious, because unlike human governments our benevolent AI overlords will be infallible and thus able to realise (at last) the vision of a genuine full welfare state, so there really will be no need for philanthropy. However, even if this is the case, I would argue that there will still be a need for philanthropy: firstly, we might decide that there is sufficient value in giving people a sense of agency and allowing them to participate that it makes sense to deliver certain public goods through voluntary means even if it would be more efficient to let the AI do it for us. And secondly, even if you don’t buy this, I would argue that there are things which are currently in the domain of philanthropy and civil society which any sane person would never want an AI system (or the state) to take over responsibility for. What about an organisation dedicated to preserving archaic musical instruments, for instance, or my local community garden, or a patient support association for a rare medical condition? Assuming these things have value to at least some people, we would want them to exist, but would we want the state (or a powerful AI system) to operate them on our behalf for our benefit? (I am aware that those who believe in the full-fat version of communism might answer with a hearty “yes!” at this point, but I’m going to assume most people’s response will be some version of “no thank you, we’re fine”).

As William Beveridge eloquently put it in 1948, when addressing the question of whether there would still be a role for philanthropy and voluntary action in the newly-formed welfare state:

“Voluntary Action is needed to do things which the state should not do, in the giving of advice, or in organising the use of leisure. It is needed to do things which the state is most unlikely to do. It is needed to pioneer ahead of the state and make experiments. It is needed to get services rendered which cannot be got by paying for them.”

The really interesting question to my mind is how much of this holds true in the context of a potential post-work automated future?

 

UBI & Philanthropy

If the future sees many jobs – and even entire industries – being replaced by artificial intelligence and automation then it may no longer be possible for those who lose out simply to “get a different job”. The question then, given that our current economic models (in most countries, at least) are largely predicated on capitalism and the notion of earned income, is how do we all survive when we are no longer working?

Image by Scott Santens, CC BY-SA 2.0

 

An idea that is often touted as at least part of the solution to this problem is Universal Basic Income (UBI). In this context, UBI is mooted as a mechanism by which governments (or perhaps supranational tech oligopolies…?) could ensure the welfare needs of citizens in a future where the majority of labour is automated and thus most people do not have traditional jobs (see e.g. this article for more on this concept). We can obviously envision the introduction of UBI along a spectrum, from where it is merely a small-scale add-on to income that people are still able to earn, right through to a scenario in which people earn no income apart from what they get through UBI. On the assumption that all of our welfare needs are also being met much more effectively by our utopian AI guardians, we will presumably have far less need for personal income to purchase necessities, so levels of UBI would certainly not have to match those of standalone wages.

The pertinent question for this article is what would this mean for charitable giving? And the answer will depend very largely on the actual design and implementation of the UBI system. For example, if UBI is merely introduced to a system in which people have the ability to earn their own wealth relatively easily, then it would seem unlikely to make a huge difference, as many people will still have disposable wealth.

However, let’s consider for a moment the more radical scenario in which automation has exceeded a tipping point and the majority of citizens are reliant on some form of UBI for their income, with little or no means to earn their own money in addition. In this scenario there are at least three possibilities:

1) Charitable giving dramatically declines, or even disappears. This could happen either if UBI is set at such a level that it caters for people’s basic needs but does not leave them with any disposable assets, or if the payments are made in such a way that there is little discretion in how to spend them (i.e. they are required to be spent on specific welfare items and services).

2) Charitable giving increases. This could happen if there was a sufficiently generous UBI with no restrictions on how it was spent. Given that people would have more time to focus on social action (a point we will come back to shortly) and the money would be unearned, it is possible that people’s willingness to give it away would be greater than it is currently and hence the overall level of giving might increase.

3) Charitable giving is factored into the system of UBI. If there were a situation closer to 1) than 2), it is still possible that charitable giving could continue to exist if it were deliberately built into the design of the system. This might be by ensuring that at least a certain portion of the UBI is free to be spent however an individual chooses, on the assumption that maintaining a sense of personal agency is important and at least some people will choose to give it away for the benefit of others. Or it might be by directly stipulating that a certain portion of the UBI has to go to charity but allowing individuals to retain discretion over the choice of beneficiary. This may seem slightly far-fetched, but in fact is very similar to the model of “percentage philanthropy” that has been in place in some countries (primarily former Eastern bloc ones) for a number of years. It is also not far off the intriguing idea of a “giving wage” proposed by Amy Schiller in her excellent book The Price of Humanity. (Which you can hear me discuss with Amy on an episode of the Philanthropisms podcast).

 

Volunteering & Participation

In discussions of automation and UBI, one positive argument that is often put forward is that we would all be freed up to focus on pursuits other than work, and thereby unleash an explosion in creativity and learning, as people have more time to dedicate to artistic and academic pursuits. This is in essence a turbo-charged version of the idea put forward by Beth Kanter and Allison Fine in The Smart Nonprofit, where they argue that AI could bring a “human dividend” if employees are no longer required to do many tasks that can be easily automated, and are instead free to focus on those tasks that bring genuine value. The difference is that in Kanter and Fine’s version, we are still talking about people having to do employed work within the context of an organisation of some kind (albeit much more rewarding work); whereas if there is a system of UBI and we no longer need to earn, we would be free to do pretty much whatever we want.

The problem, some critics point out, is that most of us aren’t very good at self-motivating to do things when we don’t need to. So rather than a tech-enabled renaissance where we all swan around like 18th century gentlemen of letters, conducting ‘philosophic experiments’ and writing bad poetry, we might well end up instead with a moribund dystopia in which we all become increasingly listless and morose, and sit around watching reruns of Friends or looking for things to spend our UBI money on that we don’t really need. Which is precisely why I think the principles and values of philanthropy and civil society will be so important, since they are all about giving people a sense of purpose and connection – either to address a cause (assuming there are still causes that need addressing), or to pursue shared interests and common goals. And this doesn’t even have to be with some lofty ambition in mind – as Kurt Vonnegut so memorably said about the value of doing things badly:

“I don’t think being good at things is the point of doing them. I think you’ve got all these wonderful experiences with different skills, and that all teaches you things and makes you an interesting person, no matter how well you do them.”

Assuming that we all have to work a lot less (or not at all) in the future and that we turn to civil society as a place to find purpose, volunteering would obviously change dramatically in scale. It would also change in nature, as volunteering would no longer be something that people had to fit around work commitments or wait until retirement to get fully involved in. So our model of civil society would potentially evolve from one where the dominant paradigm is that of expert organisations looking to fundraise from supporters or involve them in pre-determined, discrete volunteer opportunities to a model in which supporters are involved in directing and delivering far more substantive, long-term voluntary services and campaigns, and the lines between what we might once have considered “paid work” and “volunteering” are so blurred as to have no meaning.

 

What Now?

Artificial intelligence is not going to replace most of our jobs tomorrow, or even next year (probably…). Likewise, it doesn’t seem likely that governments across the world are suddenly going to start introducing Universal Basic Income for their citizens. However, it is clear that the use of AI is increasing at an accelerating rate, and we are also seeing a growing number of people advocating the idea of UBI as a long-term solution to the challenges that automation poses for the future of work, as well as various pilots of UBI around the world. Given how fundamentally the interplay of these two developments could reshape our society and how profound their impact on philanthropy might be, I would really like to see this become part of the wider debates we are starting to have about AI and philanthropy.
