The effective altruism (E.A.) movement grew out of the idea that philanthropists should maximize the impact of every dollar they spend, bringing a practical discipline to a field where good intentions often crowd out rational decision-making. A new generation of effective altruists, many shaped by Silicon Valley thinking, has embraced this data-driven, evidence-based philosophy, turning good intentions into measurable benefits for society.
However, the rise of artificial intelligence (AI) has exposed a flaw within the movement: a faction of doomsayers who not only misdirect philanthropic effort but also advocate for regulatory agencies with alarming levels of authority.
More recently, the E.A. movement has shifted its focus toward longtermism, a more extreme strain of utilitarianism that weighs potential future lives as heavily as present ones. Because human extinction would foreclose an astronomically large number of those future lives, longtermists treat any extinction event, no matter how unlikely, as carrying effectively infinite cost, and so prioritize reducing existential risks above nearly everything else.
Some E.A. proponents argue that highly capable AI systems pose exactly such a risk. Notably, the Machine Intelligence Research Institute (MIRI), a prominent E.A. organization, has stated that its objective is to persuade major governments to halt the development of advanced AI systems. MIRI founder Eliezer Yudkowsky has gone further, controversially arguing that governments should be prepared to take drastic measures, up to and including military strikes on noncompliant data centers, to prevent an AI catastrophe.
Extremist fringes are not unique to AI debates; they surface in movements from environmentalism to religious fundamentalism. Within the E.A. community, however, radical voices carry real weight. Sam Bankman-Fried, for instance, channeled money from his cryptocurrency empire, later exposed as fraudulent, toward E.A. causes, including his own Future Fund.
Despite these controversies, AI doomsayers have attracted substantial financial backing from the E.A. community. Their influence now extends to legislation: the proposed Responsible Advanced Artificial Intelligence Act (RAAIA) and California’s Senate Bill 1047, both closely linked to E.A. and longtermist organizations, show how directly the movement is shaping AI policy.
The RAAIA in particular has drawn criticism for the sheer breadth of power it would grant. The bill would establish a federal agency with sweeping authority over a wide range of AI systems, including the power to halt research it deems too advanced. Its emergency provisions go further still, granting the agency and its appointed administrator unprecedented authority to impose strict regulations and penalties.
California’s S.B. 1047, by contrast, takes a milder approach to AI regulation but would still concentrate significant control in a newly created Frontier Model Division (FMD). The division would oversee the most computationally intensive “frontier” models and impose stringent requirements on their developers, potentially chilling innovation in the AI sector.
Both bills underscore the influence of radical E.A. factions on AI policy, and both raise concerns about unchecked government authority and the stifling of technological progress. Legislators should scrutinize these proposals closely to ensure they balance legitimate AI oversight against the freedom to innovate.