On Wednesday, in a policy paper, former Google CEO Eric Schmidt, Scale AI CEO Alexander Wang, and Center for AI Safety Director Dan Hendrycks stated that the U.S. should not pursue a Manhattan Project-style push to develop AI systems with “superhuman” intelligence, also known as AGI.
The paper, titled “Superintelligence Strategy,” asserts that an aggressive bid by the U.S. to exclusively control superintelligent AI systems could prompt fierce retaliation from China, potentially in the form of a cyberattack, which could destabilize international relations.
The co-authors wrote, “[A] Manhattan Project [for AGI] assumes that rivals will acquiesce to an enduring imbalance or omnicide rather than move to prevent it,” adding, “What begins as a push for a superweapon and global control risks prompting hostile countermeasures and escalating tensions, thereby undermining the very stability the strategy purports to secure.”
The paper, co-authored by three prominent figures in America’s AI industry, comes just a few months after a U.S. congressional commission proposed a Manhattan Project-style effort to fund AGI development, modeled after America’s atomic bomb program in the 1940s.
Secretary of Energy Chris Wright recently said the U.S. is at “the start of a new Manhattan Project” on AI while standing in front of a supercomputer site alongside OpenAI co-founder Greg Brockman.
According to Schmidt, Wang, and Hendrycks, the U.S. is in an AGI standoff not dissimilar to mutually assured destruction. In the same way that global powers do not seek monopolies over nuclear weapons, which could trigger a preemptive strike, Schmidt and his co-authors argue that the U.S. should be cautious about racing toward dominance over extremely powerful AI systems.
Instead, the three propose a shift from “winning the race to superintelligence” to developing methods that deter other countries from building superintelligent AI. They also argue that the government should “expand [its] arsenal of cyberattacks to disable threatening AI projects” controlled by other nations, and should limit adversaries’ access to the latest generation of AI chips and open-source models.
The paper proposes a measured approach to developing AGI that prioritizes defensive strategies, TechCrunch wrote.