Researchers from FortiGuard Labs recently uncovered an active delivery site that hosts a weaponized HTA script and silently drops the infostealer “NordDragonScan” into victims’ environments. Once installed, NordDragonScan examines the host, copies documents, harvests entire Chrome and Firefox profiles, and takes screenshots. The collected data is then sent over TLS to its command-and-control server, “kpuszkiev.com,” which also serves as a heartbeat server to confirm the victim is still online.
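Because the report notes that NordDragonScan harvests entire Chrome and Firefox profiles, one practical defensive step is to put those directories under file-access auditing. The sketch below is only an illustration under that assumption (the paths are standard Windows locations, not details taken from the FortiGuard analysis): it enumerates the profile folders such stealers typically copy so they can be fed into monitoring rules.

```python
# Illustrative only: enumerate browser profile paths that infostealers of this kind
# are reported to copy, so they can be added to file-access auditing.
# Paths are standard Windows locations; adjust for your environment.
import os
from pathlib import Path

def browser_profile_dirs() -> list[Path]:
    """Return existing Chrome/Firefox profile directories for the current user."""
    local = Path(os.environ.get("LOCALAPPDATA", ""))
    roaming = Path(os.environ.get("APPDATA", ""))
    candidates = [
        local / "Google" / "Chrome" / "User Data",      # Chrome profiles, cookies, saved logins
        roaming / "Mozilla" / "Firefox" / "Profiles",   # Firefox profiles (key4.db, logins.json)
    ]
    return [p for p in candidates if p.is_dir()]

if __name__ == "__main__":
    for path in browser_profile_dirs():
        print(f"Monitor for bulk reads/copies: {path}")
```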
A proof-of-concept attack detailed by Neural Trust demonstrates how bad actors can manipulate LLMs into producing prohibited content without issuing an explicitly harmful request. Named "Echo Chamber," the exploit uses a chain of subtle prompts to bypass existing safety guardrails by manipulating the model's emotional tone and contextual assumptions. Developed by Neural Trust researcher Ahmad Alobaid, the attack hinges on context poisoning. Rather than directly asking the model to generate harmful content, the attacker seeds the conversation with innocuous-looking prompts that gradually shift the model's assumptions.
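The exploit works precisely because no single prompt trips a per-message filter; the harmful intent only emerges from the accumulated context. As a purely illustrative mitigation sketch (the function, hook, and threshold below are assumptions, not anything described in the Neural Trust research), a deployment can score the whole conversation window before each generation rather than only the latest turn:

```python
# Illustrative defense sketch, not from the Neural Trust write-up: because each
# turn of a context-poisoning attack looks benign on its own, moderation that
# scores prompts in isolation can miss it. One mitigation is to also score the
# accumulated conversation before every generation. `moderation_score` is a
# hypothetical hook for whatever classifier or moderation API is already in use.
from typing import Callable

def should_refuse(history: list[str], new_prompt: str,
                  moderation_score: Callable[[str], float],
                  threshold: float = 0.7) -> bool:
    """Refuse when the whole conversation, not just the latest turn, trends unsafe."""
    turn_risk = moderation_score(new_prompt)                             # per-turn check
    context_risk = moderation_score("\n".join(history + [new_prompt]))  # cumulative check
    return max(turn_risk, context_risk) >= threshold
```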
In May 2025, cybersecurity researchers at Cyfirma disclosed serious zero-day vulnerabilities in Versa Concerto, a prominent SD-WAN and SASE solution used by enterprises worldwide. Among these vulnerabilities, CVE-2025-34027 is particularly alarming due to its high severity and ease of exploitation. The flaw arises from a path-based authentication bypass in the Concerto orchestration platform's RESTful API, enabling attackers to gain administrative privileges and execute arbitrary commands remotely.
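Path-based authentication bypasses in this class typically stem from the authentication layer and the request router interpreting the same path differently. The toy example below is a generic illustration of that pattern, not the actual Versa Concerto code or the specifics of CVE-2025-34027: an auth filter that exempts "static assets" by suffix never authenticates a crafted path, while the router still maps that path to a privileged endpoint.

```python
# Generic illustration of a path-based auth bypass (NOT Versa Concerto's code).
# The auth filter and the request router disagree on how to interpret the path.

PUBLIC_SUFFIXES = (".js", ".css", ".png")   # filter skips auth for "static assets"

def auth_filter_allows_anonymously(raw_path: str) -> bool:
    # Naive check on the raw path: anything that *looks* like a static asset skips auth.
    return raw_path.endswith(PUBLIC_SUFFIXES)

def route(raw_path: str) -> str:
    # The router strips path parameters (";...") before matching, so it sees the real endpoint.
    normalized = raw_path.split(";", 1)[0]
    return "ADMIN API" if normalized.startswith("/v1/admin/") else "other"

attack = "/v1/admin/users;.js"
print(auth_filter_allows_anonymously(attack))  # True  -> request is never authenticated
print(route(attack))                           # ADMIN API -> yet it reaches the admin endpoint
```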
The analysis from Fortinet below is part of an incident investigation led by its Incident Response Team. Researchers discovered malware that had been running on a compromised machine for several weeks. The threat actor had executed batch scripts and PowerShell to run the malware in a Windows process. Although obtaining the original malware executable was difficult, a memory dump of the running malware process and a full memory dump of the compromised machine (the “fullout” file) were available for analysis.
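When the original executable cannot be recovered, a common first step against a process memory dump is to scan it for embedded PE images. The sketch below is a generic illustration of that technique, not a reconstruction of Fortinet's workflow (the dump file name is a placeholder): it looks for "MZ" signatures, confirms the "PE\0\0" header each one points to, and reports candidate offsets for carving and further analysis.

```python
# Illustrative sketch: locate candidate PE images inside a raw process memory dump.
# The dump file name is a placeholder; hits still need validation (e.g., parsing the
# full PE header) before being treated as real executables.
import struct

def find_pe_candidates(dump_path: str) -> list[int]:
    with open(dump_path, "rb") as f:
        data = f.read()
    hits = []
    offset = data.find(b"MZ")
    while offset != -1:
        # e_lfanew (offset 0x3C in the DOS header) points at the PE signature.
        if offset + 0x40 <= len(data):
            (e_lfanew,) = struct.unpack_from("<I", data, offset + 0x3C)
            pe_off = offset + e_lfanew
            if 0 < e_lfanew < 0x1000 and data[pe_off:pe_off + 4] == b"PE\x00\x00":
                hits.append(offset)
        offset = data.find(b"MZ", offset + 1)
    return hits

if __name__ == "__main__":
    for off in find_pe_candidates("malware_process.dmp"):  # placeholder dump name
        print(f"Possible PE image at offset {off:#x}")
```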
Artificial intelligence (AI) is no longer an emerging trend but a present-day disruptor. From automated threat detection to generative content creation, AI is transforming industries, workflows, and entire careers. While some sectors are seeing productivity gains, others are bracing for significant job displacement as AI replaces or reshapes roles that rely heavily on routine, repetitive, or pattern-based tasks.
In the cybersecurity industry and across the broader workforce, the question is no longer whether AI will change the nature of work, but how quickly and how deeply.
Defending against real-world threats is not just part of the job at SentinelLabs; it is the reality of operating as a cybersecurity company in today’s landscape. Real-world attacks against our environment serve as constant pressure tests, reinforcing what works, revealing what does not, and driving continuous improvement across our products and operations. When you are a high-value target for some of the most capable and persistent adversaries out there, as SentinelOne is, nothing less will do.
The latest Thetius report, commissioned by CyberOwl and HFW, gathers insights, assesses current and future cybersecurity challenges, evaluates the industry’s response to evolving regulations and technological advancements, and highlights the importance of integrated cybersecurity practices throughout the vessel lifecycle, from design to maintenance.
Key findings of the report include:
- 7% of stakeholders paid a ransom within the last 12 months. In 2023, nearly 14% admitted to paying a ransom.
In today’s interconnected world, safeguarding critical infrastructure from cyber threats is more important than ever. The continuous evolution of technology and the adoption of the Connected Worker have created unprecedented opportunities for growth and innovation. However, they have also created a vast and complex digital landscape where vulnerabilities can be easily exploited. The cybersecurity challenges facing critical infrastructure are not hypothetical; they are stark realities.
The recent U.S. Supreme Court decision in Loper Bright Enterprises v. Raimondo raises new questions about cybersecurity regulation. The Court's decision effectively overturned the Chevron Doctrine, a longstanding principle that gave deference to federal agencies' interpretations of ambiguous laws. Cybersecurity leaders are now scrambling to understand the implications for regulating a threat landscape that is already a moving target. Business leaders have questioned the authority of unelected bureaucrats to set such rules.
Cybersecurity is undergoing a massive transformation, with artificial intelligence (AI) at the forefront of this change, posing both a threat and an opportunity. AI can potentially empower organizations to defeat cyberattacks at machine speed and drive innovation and efficiency in threat detection, hunting, and incident response. Adversaries can also use AI as part of their exploits. It has never been more critical to design, deploy, and use AI securely.
For over a decade, the Securities and Exchange Commission (SEC) has been working with corporations and their many stakeholders to find ways to appropriately influence corporate governance around cybersecurity. On 26 July 2023, the SEC voted to implement new rules for all publicly traded corporations.[1] [2]
In 2011, the SEC issued guidance to help companies understand that they should take responsibility for reducing cyber risk. This was guidance rather than formal regulation, but it helped raise awareness.
In the face of unrelenting pressure from significant cyber incidents and regulatory action to mitigate them, enterprises are assessing whether they are doing enough to deal with cybersecurity. Public companies are evaluating responses to new SEC rules calling for disclosures regarding cybersecurity strategy, risk management, and governance practices. The SEC’s action against SolarWinds is setting off alarm bells throughout the cybersecurity community, causing CISOs to worry about personal liability.