LLM, GPT & AI

ChatGPT is a large language model (LLM) falling under the broad definition of generative AI.  The sophisticated chatbot was developed by OpenAI using the Generative Pre-trained Transformer (GPT) model to understand and replicate natural language patterns with human-like accuracy.  The latest version, GPT-4, exhibits human-level performance on professional and academic benchmarks.  Without question, generative AI will create opportunities across all industries, particularly those that depend on large volumes of natural language data.

Generative AI as a security enabler - Enterprise use cases are emerging with the goal of increasing the efficiency of security teams conducting operational tasks.  Products such as Microsoft’s Security Copilot draw upon the natural language processing capabilities of generative AI to simplify and automate certain security processes.  This will alleviate the resource burden on information security teams, enabling professionals to focus on technically demanding tasks and critical decision-making.  In the longer term, these products could be key to bridging the industry’s skills gap.

While the benefits are clear, the industry should anticipate that mainstream adoption of AI is likely to occur at a glacial pace.  Research by PA Consulting found that 69% of individuals are afraid of AI and 72% say they don’t know enough about AI to trust it.  Overall, this analysis highlights a reluctance to incorporate AI systems into existing processes.

Generative AI as a cyber security threat - In contrast, there are concerns that AI systems like ChatGPT could be used to identify and exploit vulnerabilities, given their ability to automate code completion, code summarization, and bug detection.  While concerning, the perception that ChatGPT and similar generative AI tools could be used for malware development is oversimplified.

In its current state, the programming capabilities of generative AI are limited, often producing inaccurate code or ‘hallucinations’ when writing functional programs.  Even generative AI tools that are fine-tuned for programming languages show limited potential, performing well on easy Python coding interview questions but struggling with more complex problems.  And while there are examples of malware developed using generative AI, these programs are written in Python, which is impractical for real-world use.  Ultimately, adversaries seeking to develop malware will gain little advantage from generative AI over existing tools and techniques.  The technology is still in its infancy, but the AI arms race being waged by ‘big-tech’ organizations is likely to produce more powerful and reliable models.  Managing this shifting threat landscape requires a proactive and dynamic risk posture.
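To illustrate why hallucinated code is a practical obstacle, here is a minimal sketch of gating generated code behind a test harness before trusting it.  The generated source, the 'solution' entry-point name, and the test cases are all illustrative assumptions, not the output of any specific tool:

```python
# Minimal sketch: sanity-check generated code against known test cases
# before using it. Hallucinated code typically fails at this stage with
# syntax errors, non-existent APIs, or wrong outputs.

def run_generated_function(source: str, test_cases: list[tuple]) -> bool:
    """Compile generated source and verify it against known test cases."""
    namespace: dict = {}
    try:
        exec(source, namespace)        # compile and load the generated code
        func = namespace["solution"]   # assumed entry-point name
        return all(func(*args) == expected for args, expected in test_cases)
    except Exception:
        return False                   # syntax errors, hallucinated calls, etc.

# Example: an 'easy interview question' (reverse a string) with known answers.
generated = "def solution(s):\n    return s[::-1]\n"
print(run_generated_function(generated, [(("abc",), "cba"), (("",), "")]))  # True
```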

Organizations should not completely dismiss today’s security threats posed by ChatGPT and other generative AI models.  LLMs are extremely effective at imitating human conversation, making it challenging to distinguish generative AI-synthesized text from human discourse.  Adversaries could deploy generative AI over WhatsApp, SMS, or email to automate conversations with targets, build rapport, and obtain sensitive information.  That information could be requested directly or gathered by persuading targets to click links to malware.  Generative AI may also be used for fraudulent purposes, such as deepfake videos and AI-powered text-to-speech tools for identity spoofing and impersonation.

A proactive approach for organizations - In 2022, human error accounted for 82% of data breaches; with the advent of generative AI tools, this is likely to increase. But while people may be the weakest link, they can also be an organization’s greatest asset.

In response to the changing threat landscape, organizations must ensure their employees are prepared for more convincing, more sophisticated attacks.  Leaders must be visible advocates of change and ensure their people are well-equipped and informed to manage threats.  By building psychological safety into their cyber culture, organizations will empower individuals to report security events such as phishing without fear of retribution.  This kind of inclusive, transparent cyber culture will be the key differentiator for those with effective cyber security.

Regular corporate communications highlighting emerging threats, case studies, and lessons learned should be supported by regular training that reflects new trends.  For example, now that generative AI can write error-free, colloquial prose, it is no longer possible to identify non-human communication through grammatical errors or robotic sentence structures.  In re-evaluating their approach to scam awareness training, organizations should teach employees to verify the recipients of sensitive or personal information before sharing it.
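As one deliberately simple interpretation of that advice, an outbound check could compare a recipient’s domain against an approved list before sensitive material is sent.  The domains and function below are illustrative assumptions, not a real policy:

```python
# Minimal sketch: verify a recipient address against an approved-domain
# allowlist before sending sensitive data. Look-alike domains, a common
# phishing trick, fail the check.

APPROVED_DOMAINS = {"example.com", "partner.example.org"}  # illustrative

def recipient_is_approved(address: str) -> bool:
    """Return True only if the address has a domain on the allowlist."""
    _, _, domain = address.rpartition("@")
    return bool(domain) and domain.lower() in APPROVED_DOMAINS

print(recipient_is_approved("alice@example.com"))   # True
print(recipient_is_approved("alice@examp1e.com"))   # False: look-alike domain
```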

It is important to keep it simple.  The key to a secure culture is implementing straightforward processes and providing accessible training and guidance.  Practically, this includes automated nudges to warn colleagues of potentially unsafe actions and HR policies that support a culture of ‘better safe than sorry’.
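A minimal sketch of such a nudge, assuming a hypothetical mail hook that passes the outgoing message body to a checker; the known-good domain list and regex are illustrative only:

```python
# Minimal sketch of an automated 'nudge': warn (rather than block) when an
# outgoing message contains links to domains outside a known-good set.

import re

KNOWN_GOOD = {"example.com"}  # illustrative allowlist
URL_RE = re.compile(r"https?://([^/\s]+)", re.IGNORECASE)

def nudge_for_unknown_links(message: str) -> list[str]:
    """Return a gentle warning for each link whose host is not known-good."""
    warnings = []
    for host in URL_RE.findall(message):
        host = host.lower().removeprefix("www.")  # normalize the hostname
        if host not in KNOWN_GOOD:
            warnings.append(f"Heads up: '{host}' is not a recognized domain.")
    return warnings

print(nudge_for_unknown_links("Invoice attached: http://pay-example.net/inv"))
# ["Heads up: 'pay-example.net' is not a recognized domain."]
```

Warning instead of blocking keeps the process simple and preserves the ‘better safe than sorry’ culture: colleagues stay in control but get a prompt to pause.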

The way forward - Organizations are staring deeply into the generative AI kaleidoscope, but they must keep a watchful eye on the potential security, privacy, and societal risks it poses.  They must balance the benefits and threats of introducing AI into their processes and focus on the human oversight and guidelines needed to use it appropriately.

Source: Generative AI – the next biggest cyber security threat? | Computer Weekly

This article is presented at no charge for educational and informational purposes only.

Red Sky Alliance is a Cyber Threat Analysis and Intelligence Service organization.  For questions, comments, or assistance, please get in touch with the office directly at 1-844-492-7225, or feedback@redskyalliance.com

Weekly Cyber Intelligence Briefings:

REDSHORTS - Weekly Cyber Intelligence Briefings
