Super Intelligence Warning

A recent open letter calling for a prohibition on the development of superintelligent AI was announced with the signatures of more than 700 celebrities, AI scientists, faith leaders, and policymakers.  Among the signatories are five Nobel laureates; two so-called “Godfathers of AI”; Steve Wozniak, a co-founder of Apple; Steve Bannon, a close ally of President Trump; Paolo Benanti, an adviser to the Pope; and even Harry and Meghan, the Duke and Duchess of Sussex.[1]

The open letter says, in full:  “We call for a prohibition on the development of superintelligence, not lifted before there is broad scientific consensus that it will be done safely and controllably, and strong public buy-in.”

The letter was coordinated and published by the Future of Life Institute (FLI), a nonprofit that in 2023 published a different open letter calling for a six-month pause on the development of powerful AI systems.  Although widely circulated, that letter did not achieve its goal.

Organizers said they decided to mount a new campaign, with a more specific focus on superintelligence, because they believe the technology, which they define as a system that can surpass human performance on all useful tasks, could arrive in as little as one to two years.  “Time is running out,” said Anthony Aguirre, the FLI’s executive director, in an interview with TIME.  The only thing likely to stop AI companies barreling toward superintelligence, he said, “is for there to be widespread realization among society at all its levels that this is not actually what we want.”

Superintelligence is generally defined as an artificial intelligence system that can surpass human performance in all meaningful and useful tasks.  This means that such a system would not only match but exceed human capabilities across a wide range of activities, including problem-solving, reasoning, creativity, and decision-making.  The concept is significant because it represents a threshold where AI could potentially operate beyond human control or understanding, raising both opportunities and profound risks for society.

No single individual or organization leads superintelligence research, but many look to FLI as a key entity coordinating public discourse on the topic.  FLI has expressed concern about the rapid development of superintelligent AI systems, emphasizing the urgency of societal awareness and regulatory action.  The organization has been instrumental in organizing open letters and campaigns to address the risks associated with superintelligence, bringing together prominent figures from various fields to advocate for caution and oversight.  Several notable AI scientists, including the two “Godfathers of AI,” are among the signatories of the open letter, indicating their active involvement in the debate.

Among the most prominent scientists in artificial intelligence are Geoffrey Hinton and Yoshua Bengio, both often referred to as "Godfathers of AI" for their groundbreaking contributions to deep learning and neural networks.  Yann LeCun, another key figure, has played a central role in the development of convolutional neural networks and is currently Chief AI Scientist at Meta.  These individuals have shaped the direction of modern AI research and are widely recognized for their influence in advancing the field.

Other notable scientists include Demis Hassabis, co-founder and CEO of DeepMind, whose work in reinforcement learning and AI-driven problem-solving has led to major breakthroughs such as AlphaGo.  Fei-Fei Li, known for her work in computer vision and as a former director of Stanford’s AI Lab, has made significant strides in image recognition technologies.  Collectively, these scientists have helped drive both the theoretical and practical evolution of artificial intelligence.

This article is shared with permission at no charge for educational and informational purposes only.

Red Sky Alliance is a Cyber Threat Analysis and Intelligence Service organization.  We provide indicators of compromise information via a notification service (RedXray) or an analysis service (CTAC).  For questions, comments or assistance, please contact the office directly at 1-844-492-7225, or feedback@redskyalliance.com    

Weekly Cyber Intelligence Briefings:

REDSHORTS - Weekly Cyber Intelligence Briefings

https://register.gotowebinar.com/register/5207428251321676122

[1] https://www.msn.com/en-us/news/other/open-letter-calls-for-ban-on-superintelligent-ai-development/ar-AA1OWlpx/
