Trend Micro, rebranded as TrendAI, has published findings from a global study of 3,700 business and IT decision makers showing that 67% felt pressured to approve artificial intelligence projects despite security concerns. One in seven described those concerns as extreme yet overrode them anyway to keep pace with competitors and meet internal demands.
Rachel Jin, Chief Platform and Business Officer and Head of TrendAI, commented: “Organizations are not lacking awareness of risk; they’re lacking the conditions to manage it. When deployment is driven by competitive pressure rather than governance maturity, you create a situation where AI is embedded into critical systems without the controls needed to manage it safely.” The research highlights widespread governance inconsistencies and unclear lines of responsibility for AI risk. Security teams often respond reactively to top-down decisions, leading to workarounds and greater reliance on unsanctioned or shadow AI tools.[1]
AI adoption continues to outpace controls. Some 57% of respondents said AI was advancing more quickly than they could secure it, while 64% reported only moderate confidence in their understanding of the legal frameworks governing AI use. Just 38% of organizations have comprehensive AI policies in place, with many still drafting them. A further 41% identified unclear regulation or compliance standards as a key barrier. In practice, AI systems are being brought into operation before rules for their use are fully established.
Confidence in autonomous, agentic AI systems remains limited. Less than half (48%) believe such tools will significantly improve cyber defense in the short term. The leading concerns center on data handling and oversight.
Some 44% of organizations cited AI agents accessing sensitive data as their biggest risk. Over a third (36%) highlighted the danger of malicious prompts compromising security, while 33% each pointed to an expanded attack surface for cybercriminals, abuse of trusted AI status, and risks from autonomous code deployment. Nearly a third (31%) admitted they lacked observability or auditability over these systems. Around 40% supported the introduction of AI “kill switch” mechanisms to shut down systems in the event of failure or misuse, though nearly half remained unsure.
Recent TrendAI threat research shows that attackers are already using AI to automate reconnaissance, accelerate phishing campaigns, and lower the barrier to entry for cybercrime. “Agentic AI is moving organizations into a new risk category. Our research shows the concerns are already clear, from sensitive data exposure to loss of oversight. Without visibility and control, organizations are deploying systems they don’t fully understand or govern, and that risk is only going to increase unless action is taken.”
This article is shared at no charge for educational and informational purposes only.
Red Sky Alliance is a Cyber Threat Analysis and Intelligence Service organization. We provide cyber threat intelligence (CTI), including indicators of compromise, via a notification service (RedXray) or an analysis service (CTAC). For questions, comments, or assistance, please contact the office directly at 1-844-492-7225 or feedback@redskyalliance.com
- Reporting: https://www.redskyalliance.org/
- Website: https://www.redskyalliance.com/
- LinkedIn: https://www.linkedin.com/company/64265941
Weekly Cyber Intelligence Briefings:
- REDSHORTS: https://register.gotowebinar.com/register/5207428251321676122
[1] https://www.cybersecurityintelligence.com/blog/risky-business---too-rapid-ai-deployment--9232.html