llm (8)

13104873684?profile=RESIZE_400xResearchers at Google said last week that they have discovered the first vulnerability using a large language model.  In a blog post, Google said it believes the bug is the first public example of an AI tool finding a previously unknown exploitable memory-safety issue in widely used real-world software.  The vulnerability was found in SQLite, an open-source database engine popular among developers.

Google researchers reported the vulnerability to SQLite developers in early October, who fixed it

12987293459?profile=RESIZE_400xI recently saw the title of a Recorded Future podcast regarding AI and police reporting.  I have 28 years of law enforcement experience, 8 years as a uniformed police officer and this title really intrigued me.  So I watched the segment: AI is Writing Police Reports, Should We be Worried?[1]  

The story starts with police body cams, which began somewhat experimentally in 2011 and now has gain acceptance throughout US policing.  The main purpose of demanding police wear body cams was to change po

12960356261?profile=RESIZE_400xDue to economic turbulence and a relentless surge in cyber threats, today's cybersecurity landscape requires enterprises to remain resilient by adapting to security risks.  Many organizations have chosen to adapt to these risks by embracing modern technology such as generative artificial intelligence (GenAI), which can present new risks if not implemented properly.  The speed at which companies innovate and adopt new technology is far outpacing the security measures that must be addressed first.

12390146467?profile=RESIZE_400xIt is no longer theoretical; the world's major powers are working with large language models to enhance offensive cyber operations.  Advanced persistent threats (APTs) aligned with China, Iran, North Korea, and Russia use large language models (LLMs) to enhance their operations.  New blog posts from OpenAI and Microsoft reveal that five prominent threat actors have used OpenAI software for research, fraud, and other malicious purposes.  After identifying them, OpenAI shuttered all their accounts

12215117476?profile=RESIZE_400xThe UK’s National Cyber Security Centre (NCSC) issued a warning this week about the growing danger of “prompt injection” attacks against applications built using AI.  While the warning is meant for cybersecurity professionals building large language models (LLMs) and other AI tools, prompt injection is worth understanding if you use any kind of AI tool, as attacks using it are likely to be a major category of security vulnerabilities going forward.

Prompt injection is a kind of attack against LL

12125883280?profile=RESIZE_400xComputer professionals may be impressed with artificially intelligent Large Language Models (LLMs) like ChatGPT that can write code, create an app, and pass the bar exam.  A large language model (LLM) is a type of artificial intelligence (AI) algorithm that uses deep learning techniques and massively large data sets to understand, summarize, generate and predict new content.  LLMs are capable of processing and generating text, and can be used for a wide range of applications, including language

11421452658?profile=RESIZE_400xChatGPT is a large language model (LLM) falling under the broad definition of generative AI.  The sophisticated chatbot was developed by OpenAI using the Generative Pre-trained Transformer (GPT) model to understand and replicate natural language patterns with human-like accuracy.  The latest version, GPT-4, exhibits human-level performance on professional and academic benchmarks.  Without question, generative AI will create opportunities across all industries, particularly those that depend on l

10873817894?profile=RESIZE_400xRobots are taking over the world.  According to Oxford Economics, there will be 14 million robots in China by 2030 and 20 million worldwide.  In the USA, robots will modify or replace 1.5 million job positions.  Labor shortages due to the COVID-19 pandemic encouraged both manufacturers and warehouse companies to partner with robotic companies to optimize human and robot collaboration.   We have already seen robots build robots, what is next?

Now enter the engineers from Google, they have unveile