AI pioneer Geoffrey Hinton’s departure from Google and his warnings about the dangers of AI have caused quite a stir. Veteran critics of large language models and the companies that make and control them now accuse Hinton of ignoring and downplaying the already existing problems that large language models create. This is exemplified by Hinton’s lack of support for the AI ethicist Timnit Gebru when she was fired by Google. In a television interview, Hinton justified this by saying that the criticism from Gebru and her colleagues was less existential than the fears that are now driving him.
What is striking, however, is that in his interviews Hinton uses terms and lines of thought that can be traced to “effective altruism”, a movement with considerable influence in the USA but one that is also highly controversial. For example, Hinton speaks of the “existential threat” to humanity posed by AI. Thinking about existential risks (xrisk), a term coined by the British philosopher Nick Bostrom, is part of this movement’s repertoire, as is the argument that a sufficiently intelligent AI will try to manipulate people in order to achieve its goals.
After studying physics, Wolfgang Stieler switched to journalism in 1998. He worked at c’t until 2005, after which he became editor of Technology Review. There he oversees a wide range of topics from artificial intelligence and robotics to network policy and questions of future energy supply.
Distributing the “scarce commodity” of aid as “profitably” as possible
Effective altruism (EA) is, at its core, a school of thought that tries to combine neoliberal economics and ethics. The basic premise: there is too much misery, too many problems in the world, and not all of them can be solved. So how can the “scarce commodity” of possible aid be used as “profitably” as possible? This question leads to a series of further, increasingly far-fetched, conclusions.
One of them is “Earn to Give”. The idea: because each person has only limited time and energy to invest, it is ethically imperative to make as much money as possible as quickly as possible in order to then donate part of it. Traditional ethical reservations such as “financial speculation is driven by greed and is wrong” are overruled by this principle.
Unsurprisingly, then, EA has developed since the early 2000s into a movement, especially in Silicon Valley, that has a lot of money and therefore some influence, because it attracted tech figures like Peter Thiel, Elon Musk and Sam Bankman-Fried. At the same time, it provides not only organizational structures but also an ideological core that frames the actions of this group as justified, logical and ethically impeccable.
Longtermism: Secure the existence of mankind
While the movement initially focused on “evidence-based” aid projects, an ideological branch called “longtermism” gained prominence. The idea behind it: because significantly more people will live in the future than have lived up to now, maximizing human happiness means first and foremost securing the existence of mankind; according to Nick Bostrom, the destiny of mankind lies in spreading intelligence throughout the cosmos. EA thus stands in the tradition of technological utopias such as transhumanism.
However, longtermism should not be confused with long-term thinking. Anyone who assumes that thinking about existential risks leads to a determined fight against climate change is mistaken: since climate change is unlikely to cause human extinction, it is not considered an existential threat in EA circles. A nuclear war, a man-made pandemic, the eruption of a supervolcano, cascading system failures and, of course, a superintelligence that has spun out of control, on the other hand, do count as existential risks and must therefore be averted at all costs, if humanity can manage it. According to Hinton, a sufficiently intelligent AI can and will manipulate people in such a way that it gains more autonomy, an idea that comes from the so-called AI box experiment discussed in xrisk circles since the 2000s.
Do these connections bolster Hinton’s arguments or make them seem dubious? At the moment, only one thing seems certain: the discussion about the opportunities and risks of artificial intelligence, which has been going on for more than 50 years, is far from over. On the contrary: it has only just picked up speed.