Ever since generative AI exploded into public consciousness with the launch of ChatGPT at the end of 2022, calls to regulate the technology to stop it from causing undue harm have risen to a fever pitch worldwide. The stakes are high; technology leaders have signed an open letter warning that if government officials get it wrong, the consequence could be the extinction of the human race.
See: https://redskyalliance.org/xindustry/the-future-is-here
While most consumers are just having fun testing the limits of large language models such as ChatGPT, a few worrying stories have circulated about the technology making up supposed facts (also known as "hallucinating") and making inappropriate suggestions to users, as when an AI-powered version of Bing told a New York Times reporter to divorce his spouse.
Tech industry insiders and legal experts also note a raft of other concerns, including the ability of generative AI to enhance threat actors' attacks on cybersecurity defenses; the possibility of copyright and data-privacy violations, since large language models are trained on all sorts of information; and the potential for discrimination as humans encode their own biases into algorithms.
Possibly the biggest area of concern is that generative AI programs are essentially self-learning, demonstrating increasing capability as they ingest data, and that their creators don't know exactly what is happening within them. As former Google AI leader Geoffrey Hinton has said, this may mean that humanity is merely a passing phase in the evolution of intelligence, and that AI systems could develop goals of their own that humans know nothing about.

All this has prompted governments around the world to call for protective regulations. But, as with most technology regulation, there is rarely a one-size-fits-all approach, with different governments looking to regulate generative AI in a way that best suits their own political landscape. “[When it comes to] tech issues, even though every country is free to make its own rules, in the past what we have seen is there’s been some form of harmonization between the US, EU, and most Western countries,” said Sophie Goossens, a partner at law firm Reed Smith who specializes in AI, copyright, and IP issues. “It's rare to see legislation that completely contradicts someone else's legislation.”
While the details of the legislation put forward by each jurisdiction might differ, one overarching theme unites all governments that have so far outlined proposals: how to realize the benefits of AI while minimizing the risks it presents to society. Indeed, EU and US lawmakers are drawing up an AI code of conduct to bridge the gap until formal legislation is passed.
Generative AI is an umbrella term for any automated process that uses algorithms to produce, manipulate, or synthesize data, often images or human-readable text. It is called generative because it creates something that didn’t previously exist. It is not a new technology, and conversations around regulation are not new either.
Generative AI has arguably been around (in a very basic chatbot form, at least) since the mid-1960s, when an MIT professor created ELIZA, an application programmed to use pattern matching and language substitution to issue responses fashioned to make users feel like they were talking to a therapist. But generative AI's recent arrival in the public domain has allowed people who previously had no access to the technology to create sophisticated content on just about any topic from a few basic prompts.
As generative AI applications become more powerful and prevalent, there is growing pressure for regulation. “The risk is higher because now these companies have decided to release extremely powerful tools on the open internet for everyone to use, and I think there is a risk that technology could be used with bad intentions,” Goossens stated.
Although discussions by the European Commission around an AI regulatory act began in 2019, the UK government was one of the first to announce its intentions, publishing a white paper in March this year that outlined five principles it wants companies to follow: safety, security, and robustness; transparency and explainability; fairness; accountability and governance; and contestability and redress.
To avoid “heavy-handed legislation,” the UK government has called on existing regulatory bodies to use current regulations to ensure that AI applications adhere to guidelines rather than draft new laws. Since then, the European Commission has published the first draft of its AI Act, which was delayed due to the need to include provisions for regulating the more recent generative AI applications. The draft legislation includes requirements for generative AI models to reasonably mitigate against foreseeable risks to health, safety, fundamental rights, the environment, democracy, and the rule of law, with the involvement of independent experts.
The legislation proposed by the EU would forbid the use of AI when it could become a threat to safety, livelihoods, or people’s rights, with stipulations around the use of artificial intelligence becoming less restrictive based on the perceived risk it might pose to someone coming into contact with it. For example, interacting with a chatbot in a customer service setting would be considered low risk. AI systems with limited and minimal risks may be used with few requirements. AI systems posing higher levels of bias or risk, such as those used for government social-scoring systems and biometric identification systems, will generally not be allowed, with few exceptions.
Even before the legislation had been finalized, ChatGPT, in particular, had already been scrutinized by several European countries for possible GDPR data protection violations. The Italian data regulator initially banned ChatGPT over alleged privacy violations relating to the chatbot’s collection and storage of personal data, but reinstated the use of the technology after Microsoft-backed OpenAI, the creator of ChatGPT, clarified its privacy policy and made it more accessible, and offered a new tool to verify the age of users.
Other European countries, including France and Spain, have filed complaints about ChatGPT similar to those issued by Italy, although no decisions relating to those grievances have been made. All regulation reflects the politics, ethics, and culture of the society you’re in, said Martha Bennett, vice president and principal analyst at Forrester, noting that in the US, for instance, there’s an instinctive reluctance to regulate unless there is tremendous pressure to do so, whereas in Europe there is a much stronger culture of regulation for the common good. “There is nothing wrong with having a different approach because you do not want to stifle innovation,” Bennett said.

Alluding to the comments made by the UK government, Bennett said it is understandable not to want to stifle innovation. Still, she doesn’t agree with the idea that by relying largely on current laws and being less stringent than the EU AI Act, the UK government can provide the country with a competitive advantage, particularly if this comes at the expense of data protection laws. “If the UK gets a reputation of playing fast and loose with personal data, that’s also inappropriate,” she said.
While Bennett believes that differing legislative approaches can have their benefits, she notes that AI regulations implemented by the Chinese government would be completely unacceptable in North America or Western Europe. Under Chinese law, AI firms must submit security assessments to the government before launching their AI tools to the public, and any content generated by generative AI must align with the country’s core socialist values. Failure to comply with the rules will result in providers being fined, having their services suspended, or facing criminal investigations.
Although several countries have begun to draft AI regulations, such efforts are hampered by the fact that lawmakers constantly have to catch up to new technologies, trying to understand their risks and rewards. “If we refer back to most technological advancements, such as the internet or artificial intelligence, it’s like a double-edged sword, as you can use it for both lawful and unlawful purposes,” said Felipe Romero Moreno, a principal lecturer at the University of Hertfordshire’s Law School whose work focuses on legal issues and the regulation of emerging technologies, including AI. AI systems may also cause harm inadvertently, since the humans who program them can be biased, and the data the programs are trained on may contain biased or inaccurate information. “We need artificial intelligence that has been trained with unbiased data,” Romero Moreno said. “Otherwise, decisions made by AI will be inaccurate and discriminatory.”
He said accountability on the part of vendors is essential, stating that users should be able to challenge the outcome of any artificial intelligence decision and compel AI developers to explain the logic or the rationale behind the technology’s reasoning. (A recent example of a related case is a class-action lawsuit filed by a US man who was rejected from a job because AI video software judged him untrustworthy).
Tech companies need to make artificial intelligence systems auditable so that they can be subject to independent and external checks from regulatory bodies, and users should have access to legal recourse to challenge the impact of a decision made by artificial intelligence, with final oversight always being given to a human, not a machine, Romero Moreno said. Another major regulatory issue that needs to be navigated is copyright.
The EU’s AI Act includes a provision to make creators of generative AI tools disclose any copyrighted material used to develop their systems. “Copyright is everywhere, so when you have a gigantic amount of data somewhere on a server, and you’re going to use that data to train a model, chances are that at least some of that data will be protected by copyright,” Goossens said, adding that the most difficult issues to resolve will be around the training sets on which AI tools are developed. When this problem arose, lawmakers in countries including Japan, Taiwan, and Singapore made an exception for copyrighted material that found its way into training sets, stating that copyright should not stand in the way of technological advancements. Goossens said many of these copyright exceptions are now almost seven years old. The issue is further complicated by the fact that in the EU, while these same exceptions exist, anyone who is a rights holder can opt out of having their data used in training sets. Currently, because there is no incentive to have your data included, huge swathes of people are now opting out, meaning the EU is a less desirable jurisdiction for AI vendors to operate from.
In the UK, an exception currently exists for research purposes. However, the plan to introduce an exception that includes commercial AI technologies was scrapped, with the government yet to announce an alternative plan.
China is the only country that has passed laws and launched prosecutions relating to generative AI. In May 2023, Chinese authorities detained a man in Northern China for allegedly using ChatGPT to write fake news articles.
Elsewhere, the UK government has said that regulators will issue practical guidance to organizations, setting out how to implement the principles outlined in its white paper over the next 12 months. Meanwhile, the EU Commission is expected to vote imminently to finalize the text of its AI Act.
In comparison, the US still appears to be in the fact-finding stages. However, President Biden and Vice President Harris recently met with executives from leading AI companies to discuss the potential dangers of AI. In May 2023, two Senate committees met with industry experts, including OpenAI CEO Sam Altman. Speaking to lawmakers, Altman said regulation would be “wise” because people need to know if they’re talking to an AI system or looking at content (images, videos, or documents) generated by a chatbot. “I think we’ll also need rules and guidelines about what is expected in terms of disclosure from a company providing a model that could have these sorts of abilities we’re talking about,” Altman said.
This is a sentiment Forrester’s Bennett agrees with, arguing that the biggest danger generative AI presents to society is the ease with which misinformation and disinformation can be created. “[This issue] goes hand in hand with ensuring that providers of these large language models and generative AI tools are abiding by existing rules around copyright, intellectual property, personal data, etc., and looking at how we make sure those rules are enforced,” she said.
Romero Moreno argues that education is key to tackling the technology’s ability to create and spread disinformation, particularly among young people or those less technologically savvy. Pop-up notifications that remind users that content might not be accurate would encourage people to think more critically about how they engage with online content, he said, adding that something like the current cookie disclaimer messages that show up on web pages would not be suitable as they are often long and convoluted and therefore rarely read.
Ultimately, Bennett said, regardless of the final legislation, regulators and governments worldwide need to act now. Otherwise, we will end up in a situation where technology has been exploited to fight a battle we can never win.
This article is presented at no charge for educational and informational purposes only.
Red Sky Alliance is a Cyber Threat Analysis and Intelligence Service organization. For questions, comments, or assistance, please contact the office directly at 1-844-492-7225 or feedback@redskyalliance.com.
Weekly Cyber Intelligence Briefings:
- Reporting: https://www.redskyalliance.org/
- Website: https://www.redskyalliance.com/
- LinkedIn: https://www.linkedin.com/company/64265941
REDSHORTS - Weekly Cyber Intelligence Briefings