AI to Create Bioweapons?

A trio of influential artificial intelligence leaders testified at a congressional hearing on 25 July 2023, warning that the frantic pace of AI development could lead to serious harms within the next few years, such as rogue states or terrorists using the tech to create bioweapons.

See:  https://redskyalliance.org/xindustry/ai-and-its-hazards

Yoshua Bengio, an AI professor at the University of Montreal who is known as one of the fathers of modern AI science, said the United States should push for international cooperation to control the development of AI, outlining a regime similar to international rules on nuclear technology. Dario Amodei, the chief executive of AI start-up Anthropic, said he fears cutting-edge AI could be used to create dangerous viruses and other bioweapons in as little as two years. And Stuart Russell, a computer science professor at the University of California at Berkeley, said the way AI works means it is harder to fully understand and control than other powerful technologies.

“Recently I and many others have been surprised by the giant leap realized by systems like ChatGPT,” Bengio said during the Senate Judiciary Committee hearing. “The shorter timeline is more worrisome.”

The hearing demonstrated how concerns about AI surpassing human intelligence and getting out of control have quickly gone from the realm of science fiction to the mainstream. For years, futurists have theorized that one day AI could become smarter than humans and develop its own goals, potentially leading it to harm humanity.

But in the past six months, a handful of prominent AI researchers, including Bengio, have moved up their timelines for when they think “supersmart” AI might be possible from decades to potentially just a few years. Those concerns are now reverberating around Silicon Valley, in the media and in Washington, and politicians are referencing those threats as one of the reasons governments need to pass legislation.

Sen. Richard Blumenthal (D-Conn.), the chair of the subcommittee holding the hearing, said humanity has shown itself capable of inventing incredible new technologies that people never thought would be possible at the time. He compared AI to the Manhattan Project to build a nuclear weapon, or NASA’s efforts to put a man on the moon.  “We’ve managed to do things that people thought unthinkable,” he said. “We know how to do big things.”

Not all researchers agree with the aggressive timelines for super-smart AI outlined at the hearing, and skeptics have pointed out that hyping up the potential of AI tech could help companies sell it. Other prominent AI leaders have said those who talk about existential fears like an AI takeover are exaggerating the capabilities of the technology and needlessly spreading fear.

At the hearing, senators also raised the specter of potential antitrust concerns.  Sen. Josh Hawley (R-Mo.) said one of the risks is Big Tech companies like Microsoft and Google developing a monopoly over AI tech. Hawley has been a firebrand critic of the Big Tech companies for several years and used the hearing to argue that the companies behind the tech are themselves a risk.  “I’m confident it will be good for the companies, I have no doubt about that,” Hawley said. “What I’m less confident about is whether the people are going to be all right.”

Bengio made foundational contributions throughout the 1990s and 2000s to the science underpinning the techniques that make chatbots like OpenAI’s ChatGPT and Google’s Bard possible. Earlier this year, he joined his fellow AI pioneer, Geoffrey Hinton, in saying that he had grown more concerned about the potential impact of the tech they helped to create.

In March 2023, he was the most prominent AI researcher to sign a letter asking tech companies to pause the development of new AI models for six months so that the industry could agree on a set of standards to stop the technology from getting out of human control. Russell, who has likewise been outspoken about the impact of AI on society and co-authored a popular university textbook on AI, also signed the letter.

Blumenthal framed the hearing as a session to come up with ideas on how to regulate AI, and all three of the leaders gave their suggestions. Bengio called for international cooperation and labs around the world that would research ways to guide AI toward helping humans rather than getting out of our control.

Russell said a new regulatory agency specifically focused on AI will be necessary, predicting that the technology will eventually overhaul the economy and contribute a massive amount of growth to GDP, and will therefore need robust and focused oversight. Amodei, for his part, said he is “agnostic” on whether a new agency is created or existing regulators like the FTC are used to oversee AI, but said standard tests must be created for AI companies to run their tech through to try to identify potential harms.

“Before we have identified and have a process for this, we are, from a regulatory perspective, shooting in the dark,” he said. “If we don’t have things in place that are restraining AI systems, we’re going to have a bad time.”

Unlike Bengio and Russell, Amodei actually runs a working AI company that is pushing the technology forward. His start-up is staffed with former Google and OpenAI researchers, and the company has tried to position itself as a more thoughtful and careful alternative to Big Tech. At the same time, it has taken around $300 million in investment from Google and relies on Google’s data centers to run its AI models.  He also called for more federal funding for AI research to learn how to mitigate the range of risks from AI. Amodei predicted that malicious actors could use AI to help develop bioweapons within the next two or three years, bypassing tight industry controls meant to stop people from developing such weapons. “I’m worried about our ability to do this in time but we have to try,” he said.

This article is presented at no charge for educational and informational purposes only. 
Source:  https://www.washingtonpost.com/technology/2023/07/25/ai-bengio-anthropic-senate-hearing/

Red Sky Alliance is a Cyber Threat Analysis and Intelligence Service organization.  For questions, comments, or assistance, please get in touch with the office directly at 1-844-492-7225, or feedback@redskyalliance.com


Reporting:    https://www.redskyalliance.org/
Website:       https://www.redskyalliance.com/
LinkedIn:      https://www.linkedin.com/company/64265941

Weekly Cyber Intelligence Briefings:

REDSHORTS - Weekly Cyber Intelligence Briefings

https://attendee.gotowebinar.com/register/5993554863383553632  

 
