
OpenAI and the challenges of combining profit with purpose

23rd November 2023

What lessons might the recent high-profile chaos at OpenAI hold for philanthropy and civil society?

The recent drama at OpenAI sent shockwaves through the technology world. The surprise ouster of talismanic CEO Sam Altman, the chaos of the ensuing fallout (“He’s joining Microsoft!” “Wait… no he isn’t!”), and then his hastily-announced return would have strained the credulity of most soap opera plots, except that it was happening for real at one of the world’s most high-profile tech companies. This meant that even people who previously had no interest in AI found themselves engrossed, rapidly getting up to speed with a cast of characters who have come to play key roles in shaping the development of this powerful technology.

Sam Altman in 2019

Image by TechCrunch, CC BY 2.0 license

 

A lot of words have already been written about this story – at the moment you can barely move for hot-takes on “What REALLY happened at OpenAI” and LinkedIn posts about “5 Lessons from OpenAI for [insert niche industry subsector here]” – and I suspect we will hear a lot more about it in the weeks to come. In many ways I am loath to add to this growing mountain of musings: I certainly don’t have any sort of inside track on what has really gone on, and I am well aware that any article framed around OpenAI at the moment risks coming across as a brazen attempt at ‘engagement farming’ – piggybacking on a story that lots of people are already talking about in order to garner clicks. That being said, I have worked at the intersection of AI and philanthropy for quite a few years now, and I do think there are a few angles to this story that have potentially important relevance for civil society, but which risk getting lost in the wider heat and noise, so I’m going to overcome my reservations in order to explore them briefly.

The angles in question are:

  • Do OpenAI’s problems potentially illustrate something important about the challenges of trying to combine profit with purpose, and about the upsides and downsides of enshrining a social mission in a company’s legal structure or governance?
  • Do the arguments put forward by those involved in OpenAI reflect a wider tendency in the technology world to believe that the development of technology is in itself a social good, and that commercial activities are therefore a form of “philanthropy” (and arguably even a better form of philanthropy than traditional giving)? What does this say about who gets to determine what counts as “social good”?
  • The prominence of Effective Altruism in the OpenAI story is a reminder that philanthropy is already playing an influential role in the development of AI, but is it playing the role that we want it to play? Or is there a danger that the EA focus on existential risks skews attention and money away from more immediate AI issues that need addressing?

Tensions between profit and purpose

One element of the OpenAI story that has been identified by many commentators as important is the unusual hybrid nature of the company. If you are only aware of it thanks to the huge success of ChatGPT, you would probably assume that OpenAI is yet another tech ‘unicorn’ – a commercial startup poised to grow rapidly and take its place alongside existing players such as Microsoft, Google and Amazon (or, more likely, get bought out by one of them, as that seems to be the normal course of events). It may, therefore, come as a surprise to learn that OpenAI actually started life as a non-profit, with a mission to “advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return”. And it will probably come as even more of a surprise to learn that the commercial part of OpenAI is still technically owned and controlled by that original nonprofit (and, crucially for all the recent upheaval, its board).

The point of this hybrid structure (which is not unprecedented, but is very unusual in the context of high-growth tech companies) was to ensure that the organisation’s original mission and purpose remained the guiding star for its strategy and activities, and could therefore counterbalance any commercial drivers and temptations. However, many observers have argued that it was growing tension between this social mission (or the board’s interpretation of it, at least) and Altman’s decisions as head of the company that led to his dramatic firing. (It should be said that this is still somewhat speculative: the OpenAI board have yet to offer any clarification as to why they took the decision to fire Altman beyond their original, slightly gnomic statement that “he was not consistently candid in his communications with the board”, so in all honesty we don’t really know.)

In the case of OpenAI, it may not be as simple as ‘profit’ and ‘purpose’ pulling in opposite directions. The complicating factor here (we are told) is that even on the “purpose” side there are some fundamental ideological differences over what our goals should be when it comes to the development of AI (differences which reflect deep divisions not just within OpenAI, but in the tech sector more broadly). We shall explore this a bit in the next two sections, but for now let’s put aside the issue of how we view the overall goals of AI development and assume that we are talking, to at least some extent, about a straightforward tension between a supposed social mission and the desire to make money. (Which, at least some more cynical critics claim, is a closer reflection of reality: they argue that all the talk of ideological differences is overblown, and that those within OpenAI who have aligned themselves with Sam Altman’s fast-growth vision are doing so less because they believe deeply in the potential for AI to transform society and more because they stand to make a filthy amount of money if the company continues to grow and eventually goes to IPO…)

The interesting question here from a philanthropy perspective, then, is whether the situation at OpenAI tells us anything useful about the broader challenges of balancing a profit motive with a social mission. The idea that this is possible is not a new one (although it is sometimes presented as if it is). In fact, when you look back historically, people have long blurred the lines between business and philanthropy. As far back as the 17th century we find figures like the London merchant Thomas Firmin, who was a prolific charitable giver but also used his businesses to achieve his social goals: he established a series of “projects for the imploying of the poor”, which he ran as loss-making companies, where the reduced profits that came from paying fair wages and ensuring decent working conditions were seen as a form of “thrifty philanthropy”. By 1710 the trend for combining profit and purpose was widespread enough that we find James Hodges complaining that:

“any proposals for publick Benefit, at the Expense of private Purses, without any visible Return, must probably make but a small progress”

Then, in the late 19th and early 20th centuries, figures such as Octavia Hill and Edward Guinness became well-known for applying a “percentage philanthropy” model in their efforts to build and maintain affordable housing for the working classes: they would charge rents below market rates, and view the forgone percentage as a form of philanthropy.

Octavia Hill

Clearly, then, the notion of balancing profit and purpose is not new. The idea that there are firm dividing lines between philanthropy and business seems to have come about in the 20th century, perhaps because the business world became more corporate and companies became more likely to be accountable to non-executive boards and shareholders, which meant that the idea of an individual owner-entrepreneur controlling all aspects of a company and potentially using it to further their own philanthropic aims became less commonplace. (There is an interesting question as to whether in some parts of the world, such as India, where the model of the owner-entrepreneur has remained more prevalent, the lines between individual (or family) philanthropy and corporate philanthropy have remained more blurred – and whether this applies in the tech sector, where the template of the dominant founder-owner once again seems to be in vogue.) The apotheosis of the idea that we can draw strict dividing lines between profit and purpose is to be found in the arguments put forward by Milton Friedman in the 1970s, when he claimed that the whole idea that companies have social responsibilities is misbegotten, and that it is illegitimate for a company to pursue any purpose other than maximising its profits (and its returns to shareholders). Friedman eventually rowed back on these views to some extent, but for a while in the latter half of the 20th century they were highly influential – to the extent that, when people did start to ask once more whether companies could combine their profit-seeking with other social and environmental goals, this seemed to many like a radical new idea, rather than a rediscovery of something that we had only relatively recently forgotten.

The idea of combining profit and purpose may well be more of a rediscovery than a new phenomenon, but one distinctive feature of its most recent incarnation is the use of legal structures to enshrine the hybrid social and financial aims of a company in its governance (rather than simply leaving it up to the individual or family to ensure the balance is maintained). In the case of OpenAI this was done by using a traditional nonprofit structure (a 501(c)(3)) and giving it oversight of a commercial subsidiary. Which, it is worth noting, is again not a wholly new idea – in fact, there is a fairly long history of nonprofit ownership of companies, as we explored in this previous WPM article about Yvon Chouinard’s decision to transfer Patagonia into nonprofit ownership. In the past this was primarily driven by a desire to ensure the long-term governance of family companies, or by a desire to exploit loopholes in the tax code (loopholes which, in the case of the US at least, were subsequently closed). In the modern examples we are now seeing where companies are placed in nonprofit ownership, however, the rationale seems to be more about protecting a broader social mission (or at least, that is the stated reason). There are also genuinely new legal structures, such as the benefit corporation, which have been designed specifically to allow the creation of companies that have both financial goals and a stated social mission.

When we look back at the history of efforts to blend philanthropy and business, and at the situation right now at OpenAI, one common question emerges: is it ever possible to achieve a true balance between profit and purpose? The evidence certainly suggests it is difficult: there are many examples of companies that start out trying, in one way or another, to produce both financial and social returns, but over time drift more and more towards the former until they are essentially just commercial entities. Why, we might ask, is this? Is the profit motive simply stronger than the philanthropic one, so that it will always prove dominant over the long term? Or is it more to do with an asymmetry in how easily we are able to define and measure these competing ambitions – where financial returns are universally understood and easily measurable in agreed ways, but social returns are continuously contested and may not lend themselves to easy measurement of any kind? There is almost certainly truth in both of these, and the challenges are likely to be greater the more people are involved. In the case of a single dominant founder-owner, if they want to combine profit and purpose within their company then the philanthropic goals can reflect their own views and priorities, and they can choose what will count as satisfactory evidence of social impact. If, however, we are talking about a company where shareholders, non-executive directors, managers or employees have some sort of stake, then achieving consensus (and maintaining it) on what the company’s purpose should be, and how it should be measured, may prove significantly more difficult. (It should, of course, still be fairly easy to reach agreement on what financial return looks like, although there may still be disagreement about how this is balanced against the purpose-driven elements.)

As we shall see in the next section, difficulties in achieving consensus over purpose, and then balancing that purpose against commercial imperatives, certainly seem to have played a big part in the OpenAI story.

 

What purpose, whose purpose?

In much of the analysis of the situation at OpenAI, the root cause has been identified as an ideological schism between two “tribes” with conflicting views about the development of AI. On the one hand are the techno-optimists (or “boomers”), who broadly believe that the rapid development of AI is a good thing for society and humanity, and who consequently gathered behind Sam Altman and his fast-growth vision. On the other hand are the techno-pessimists (or “doomers”), who have concerns about the risks posed by the emergence of artificial general intelligence (or “AGI”), and who therefore rallied around members of the OpenAI board who were keen to stick to the organisation’s original mission of ensuring the safe development of AI. (Indeed, Reuters has reported that the recent power struggles at OpenAI may have been sparked by an internal memo in which a number of senior employees expressed concerns that recent breakthroughs in a project known as Q* may have brought them dangerously close to the development of AGI.)

Image by Ronald Douglas Frazier, CC BY 2.0 license

 

There are a couple of things to say about this from a philanthropy/civil society point of view. The first is that it would be easy to portray the situation as a classic tension between purpose and profit, of the kind we have outlined above, but I’m not sure it is quite as straightforward as that. Oh sure, I don’t doubt that at least some of the people who have sided with Sam Altman have done so because they stand to make a lot of money if OpenAI focuses on growth and capitalising on the existing success of tools such as ChatGPT. However, I’m also willing to bet that many of them sided with Altman because they genuinely believe that pursuing his approach of pushing ahead at a relentless pace is in the best interests of society. For these people, accelerating the development of AI is just as much of a ‘purpose’ as slowing it down is for the pessimists who take the opposing view. And this reflects a broader phenomenon that we see across the tech world, in which those building products, tools and platforms often frame their work in terms of its supposed positive impact on society, rather than just its commercial success. Some tech figures have even suggested that this makes their business not only a form of philanthropy, but a better form of philanthropy than any type of traditional nonprofit approach (as Felix Salmon highlighted in a recent piece for Axios).

Another important point is that if this is genuinely about an ideological divide between two different views of how we should proceed when it comes to developing AI, then it is not only about OpenAI, since that division runs right through the entire technology world. OpenAI may have acted as a lightning rod for it, and brought it to wider public attention, but the divide has been apparent for some time now. One question is whether the way in which the OpenAI drama has resolved itself – with Sam Altman reinstated and his doomer antagonists seemingly kicked out – tells us anything about how this debate is going to play out more widely. Is the relentless march of AI progress going to prove unstoppable, and are those who urge caution and a slowing of the pace going to find themselves increasingly fighting an uphill battle? Is it, more particularly, also another blow for the Effective Altruism movement? An important element of EA in recent years has been the advocacy of “Longtermism” and the need to put resources into addressing “existential risks”, including the threat posed by the emergence of superintelligent AI. So will the fact that the faction holding these views seems to have quite clearly lost the battle at OpenAI set alarm bells ringing more broadly for the EA movement?

Now, I am not particularly a champion for EA or Longtermism (I outlined my reservations about the former in this WPM article, and my thoughts on the latter in this review of Will MacAskill’s book What We Owe the Future. More frivolously, I also poked fun at Longtermism in this sketch I made last year). However, in a straightforward choice between careering headlong into whatever the development of AI brings, and paying at least some heed to the risk that we might inadvertently wipe ourselves out in the process, I would definitely opt for the latter. But the other crucial point to make is that of course these aren’t the only two sides in this debate, even if they have been positioned as such in some of the reporting. There are plenty of people who argue that we should be worried about the impact of AI, but that we should be worried about the real-world impact it is already having – such as creating new challenges around misinformation, bias and inequality – and that all the talk of the threat posed by AGI is a red herring, as that remains entirely hypothetical (and perhaps highly unlikely).

The fact that more immediate concerns about AI’s impact on existing people and communities have been entirely sidelined in all the discussion surrounding OpenAI is worrying. Since this is where the majority of focus has been so far for civil society organisations engaging with these issues (except, of course, for those organisations that are aligned with the EA/X-risk agenda), it highlights the fact that getting a meaningful seat at the table in discussions about the development of AI is a continuing uphill struggle for civil society. This may get better as a result of initiatives like the European AI Fund or the recently-announced $200m partnership on ethical AI launched by 10 US foundations, but there is still a long way to go to get the voice of CSOs properly heard. Given that many of these organisations represent people and communities that are already marginalised, and are likely to be hit earliest and hardest by the negative unintended consequences of AI, this should be a real source of concern.

The final thing to say (for now) about OpenAI is that all along the story has been framed as one about a single individual, and the impact his firing (and re-hiring) will have on the development of AI; as such, it reinforces a bias towards a “great man theory of history” interpretation of what is going on. The challenge when it comes to AI, it seems to suggest, is not that we need to engage with complex systemic issues and what they demand in terms of new regulatory and political responses (“Boo! That’s boring!”), but rather that we just need to make sure we pick the right nerdy-yet-vaguely-messianic Stanford drop-out to lead us into the promised land of technological utopia (“Hooray! Look at him on the cover of Fortune!”). This is a problem that philanthropy faces too: what I like to call “the myth of the philanthropic lone saviour” has long exerted a strong allure when it comes to how we understand the role philanthropy plays in driving change – for the simple reason that the notion of a single individual coming in and effecting change through the sheer brilliance of their ideas and actions appeals to a natural human desire for heroic narratives, and is therefore more appealing than acknowledging that most meaningful change requires collaboration, consensus and a willingness to engage with the messy reality of structural solutions. The only problem with this lone saviour narrative, of course, is that it is very rarely (if ever) true. And that applies to AI just as much as it does to philanthropy: the sooner we can get away from the soap opera lure of eccentric founders and charismatic iconoclasts, the sooner we might actually start making some progress towards deciding together as a society how we want this technology to develop so that it brings benefits, rather than harm, to as many people as possible.
