The Effective Altruism (EA) movement, based on the premise that philanthropists should spend every dollar doing the greatest good, injects pragmatism into an arena where good intentions can trump rational, efficient number-crunching. A generation of highly effective altruists—many of them schooled in Silicon Valley thinking—has since embraced this metrics-driven, impartial philosophy and turned their good intentions into great work.
However, artificial intelligence (AI) has exposed a flaw in the movement: a powerful faction of doomsayers. The result is not just misplaced philanthropy, but lobbying for institutions of astonishing authority.
For a variety of reasons, the EA movement turned its attention to long-termism—a more radical form of utilitarianism that weighs the value of each potential future life roughly the same as the value of the living. Because any human extinction event, no matter how unlikely, carries infinite costs, long-termists can place enormous moral value on reducing what they perceive to be existential risks.
Some supporters of EA believe that sufficiently smart artificial intelligence poses such a risk. In fact, one of the oldest and most influential EA organizations, the Machine Intelligence Research Institute (MIRI), recently announced that its “goal is to convince major countries to stop developing cutting-edge artificial intelligence systems around the world.” Eliezer Yudkowsky, MIRI’s founder, is notorious for calling on the United States to bomb “rogue” data centers and to threaten nuclear war against countries that refuse to halt AI research.
Extremism is not unique to the AI debate. Environmentalism has Just Stop Oil and its obnoxious demonstrations, religion has its violent extremists, and even Luddites have their Unabomber. But in EA, the radicals are prominent. Sam Bankman-Fried claimed his cryptocurrency fraud was a Machiavellian scheme to funnel tens of millions of dollars to EA organizations, including through his own Future Fund.
Tarnished reputation or not, AI doomers still command hundreds of millions of EA dollars in support. And despite their proclamations, the extremists had rarely put forward legislation showing just how far they are willing to go, until now.
Enter two proposed bills: the federal Responsible Advanced Artificial Intelligence Act (RAAIA), drafted by the Center for AI Policy, and California Senate Bill 1047, sponsored by the Center for AI Safety (CAIS). Both bills and their backers are closely tied to EA and long-termist funding and organizations.
In short, the RAAIA’s authoritarianism is appalling. The bill would create a new federal agency, run by a presidentially appointed administrator, to govern a vast range of artificial intelligence systems, from weather forecasting to weapons. Companies would have to obtain a license before developing covered software, and the agency could impose whatever licensing conditions it wishes. If an approved model proves too capable, the agency could halt further research on it. Open-source projects would somehow have to verify and track the identities of all their users and ensure that each has a “legitimate, socially beneficial interest.”
The emergency powers the RAAIA grants to the president and the administrator are authoritarian. The administrator could, on their own authority, shut down the entire frontier AI industry for six months. If the president declares an AI emergency, the administrator could seize and destroy hardware and software, direct guards to “remove any unauthorized personnel from designated facilities,” and “take full possession and control of designated locations or equipment.” The administrator could enlist the FBI and federal marshals and direct other federal law enforcement officers, and would also hold prosecutorial and enforcement powers to subpoena witnesses, compel testimony, conduct raids, and demand any evidence deemed relevant, even in speculative “proactive” investigations.
In addition, the RAAIA would establish a registry of all high-performance AI hardware. If you “buy, sell, give away, receive, trade or transport” even one covered microchip without filing the required paperwork, you would be committing a crime. The bill creates criminal liability for other violations as well, and agency employees could be criminally prosecuted for willfully refusing to perform their duties under the bill.
The bill also includes devices meant to insulate the administrator from future administrations and from the rest of government. For example, a one-way ratchet provision empowers the administrator to update the rules but makes it difficult to “weaken or relax” them. The bill seeks to narrow the standards of judicial review, compress appeal deadlines, and exempt the administrator from certain litigation obligations and from the Congressional Review Act, among other things.
Predictably, the agency would be funded by the fines and fees it levies. This creates an incentive to extract revenue, limits congressional budget oversight, and demonstrates the proponents’ disdain for democratic checks and balances.
While California’s SB 1047 is milder in tone, CAIS and state Sen. Scott Wiener (D-San Francisco) have drafted a bill that could have similarly authoritarian effects.
SB 1047 would create a new Frontier Model Division (FMD) to regulate organizations that train AI models exceeding certain computing-power or cost thresholds (thresholds the FMD itself would set). Cloud computing providers would be required to implement a kill switch to shut down covered AI models if problems arise, and the governor would be granted additional emergency powers.
At its core, though, SB 1047 requires AI developers to prove a negative to hostile regulators before proceeding. Specifically, developers of certain high-cost models must somehow demonstrate in advance that their products can never be used to cause “critical harm.”
Variations of the word “reasonable” appear more than 30 times in SB 1047, yet “reasonable” is never defined. Other slippery terms include “material,” “integrity,” and “reasonably foreseeable.” Wiener and his co-authors hide their authoritarianism in this vague and arbitrary language.
FMD staff will likely be EA-influenced AI doomers, much like the bill’s drafters. If the FMD doesn’t like an AI research proposal, it can impose custom conditions on it or block it entirely. Even if the FMD approves a plan, it can decide after the fact that the plan was unreasonable and punish the company. All of this will inevitably chill the development of new models, which is probably the point.
The seemingly benign language of SB 1047 is part of the reason the bill has passed the California Senate and is now before the state Assembly. The RAAIA, for its part, currently lacks a congressional sponsor. But both bills warrant caution. They are the product of a radical EA faction bent on policing speculative threats, one willing to blindly empower government through unaccountable agencies, vague requirements, a presumption of guilt, and unrestricted emergency powers.