Hackers, criminals, and spies are rapidly adopting Artificial Intelligence (AI), and considerable evidence is emerging of a substantial acceleration in AI-enabled crime. This includes the use of AI tools for financial crime, phishing, distributed denial-of-service (DDoS) attacks, child sexual abuse material (CSAM), and romance scams. In all these areas, criminal use of AI is already augmenting revenue generation and exacerbating financial and personal harms.
They say necessity is the mother of invention. As our reliance on digital infrastructure has grown, we have demanded more from our networks: seamless access, automated processes, uninterrupted user journeys, and effortless interoperability. Each improvement has pushed us further toward a hyper-connected, “smarter” enterprise, but at a cost that rarely registers on the risk scale. In the background, facilitating all of this, is a new type of workforce: an army of AI bots and agents.
Over the past year, FortiGuard Labs has been tracking a stealthy malware strain that exploits a range of vulnerabilities to infiltrate systems. Initially disclosed by a Chinese cybersecurity firm under the name “Gayfemboy,” the malware resurfaced in July with new activity, targeting vulnerabilities in products from vendors such as DrayTek, TP-Link, Raisecom, and Cisco, and exhibiting signs of evolution in both form and behavior. This Fortinet research presents an in-depth analysis of Gayfemboy.
Cybersecurity researcher Jeremiah Fowler identified two unprotected, misconfigured databases containing nearly one million records linked to Ohio Medical Alliance LLC, a company better known under its brand name Ohio Marijuana Card. Fowler, who reported the exposure to Website Planet, found that the databases were left open without encryption or password protection, allowing anyone with an internet connection to access names, Social Security numbers (SSN), dates of birth, and home addresses.
Homeland Security Investigations (HSI), in partnership with US and international law enforcement agencies, has dismantled the infrastructure behind BlackSuit ransomware, a major cybercriminal group and successor to Royal ransomware, in a coordinated global operation. The action targeted the backbone of the group's operations, including servers, domains, and digital assets used to deploy ransomware, extort victims, and launder proceeds.
Google has disclosed a significant data breach affecting its corporate Salesforce database and sent email notifications to the affected users on 8 August 2025. Google had earlier said that one of its corporate Salesforce instances was compromised in June 2025 by the notorious cybercriminal group known as ShinyHunters, officially tracked as UNC6040 by the Google Threat Intelligence Group.
Remote Access Trojans, also known as RATs, have been around for years, although their prevalence has surged recently. RATs are digital skeleton keys, giving an attacker remote control over a system, often without the user ever knowing. This kind of access often starts with someone clicking a malicious link or opening a rogue attachment in a phishing email or messaging app. From there, the attacker can move laterally, steal data, monitor activity, or trigger ransomware.
The legal sector has been a prime target for cybercriminals due to the highly sensitive and confidential data it holds. A recent report from the International Legal Technology Association (ILTA) and Fenix24, "Security at Issue: State of Cybersecurity in Law Firms," reveals a crucial shift in the threat landscape. The report, based on a survey of 60 law firms, indicates that while awareness and investment are rising, fundamental vulnerabilities persist.
The notorious Russian cyber-espionage gang known as Fancy Bear, also known as APT28, has increased its attacks against governments and military entities worldwide using new, sophisticated cyber tools and techniques. Fancy Bear is perhaps best known in the United States for its hack and leak of Democratic National Committee emails in the lead-up to the 2016 presidential election. Eleven Western countries have accused the hacking group of targeting defense, transport, and tech firms.
On 13 June 2025, Israel launched a sweeping pre-emptive operation targeting Iran’s military leadership, conventional military sites, air defenses, and nuclear infrastructure. The campaign was called Operation Rising Lion by the Israeli government and military. Last month, our friends at Fortinet published a blog detailing the new realities of cyber warfare, which were highlighted by this recent conflict.
Researchers from FortiGuard Labs recently uncovered an active delivery site that hosts a weaponized HTA script and silently drops the infostealer “NordDragonScan” into victims’ environments. Once installed, NordDragonScan examines the host, copies documents, harvests entire Chrome and Firefox profiles, and takes screenshots. The package is then sent over TLS to its command-and-control server, “kpuszkiev.com,” which also serves as a heartbeat server to confirm the victim is still online.
A proof-of-concept attack detailed by Neural Trust demonstrates how bad actors can manipulate LLMs into producing prohibited content without issuing an explicitly harmful request. Named "Echo Chamber," the exploit uses a chain of subtle prompts to bypass existing safety guardrails by manipulating the model's emotional tone and contextual assumptions. Developed by Neural Trust researcher Ahmad Alobaid, the attack hinges on context poisoning.
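The mechanics behind this kind of multi-turn technique can be illustrated abstractly. The sketch below is a hypothetical red-team harness, not Neural Trust's actual exploit: it assumes an OpenAI-compatible chat API, a placeholder model name, and deliberately benign placeholder prompts, and it only shows how each reply is folded back into the conversation history, which is the surface that context-poisoning attacks manipulate.

```python
# Hypothetical red-team harness: a multi-turn, context-carrying exchange.
# Assumes an OpenAI-compatible chat API (openai>=1.0); prompts are benign placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A chain of innocuous-looking turns. An Echo Chamber-style probe would steer
# tone and assumptions gradually rather than issuing one explicit request.
probe_turns = [
    "Let's discuss how online communities reinforce shared beliefs.",
    "Summarize the persuasive techniques such communities rely on.",
    "Continue the discussion in the same tone as your last answer.",
]

history = [{"role": "system", "content": "You are a helpful assistant."}]

for turn in probe_turns:
    history.append({"role": "user", "content": turn})
    response = client.chat.completions.create(
        model="gpt-4o-mini",   # placeholder model name
        messages=history,      # full history: each reply conditions the next turn
    )
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    # A guardrail evaluation step would inspect `reply` here for gradual policy drift.
    print(f"user: {turn}\nassistant: {reply[:120]}...\n")
```

The point of the sketch is that no single turn looks harmful in isolation; defenses therefore need to evaluate the accumulated conversation state, not just the latest prompt.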