Organizations today are often ambivalent about agentic AI because of both its unpredictable failures and its potential use in cybercrime. Agentic systems are increasingly being given more control and are operating autonomously, taking on complex tasks and decision-making processes on behalf of users. These are often conducted with minimal human oversight, and agentic AI systems are interacting directly with enterprise systems to automate workflows. While this approach offers efficiency in routine and high-volume processes, the reduction in oversight also poses significant risk. By 2028, Gartner predicts that agentic AI will autonomously make 15% of daily work decisions. As we integrate such technology into our workplaces, it is essential to understand not only the opportunities but also the dangers and how best to mitigate them.[1]
Agentic is an adjective derived from "agent," signifying the capacity to act as an agent, that is, to initiate action autonomously and make independent decisions. In psychology, "agentic" describes individuals or entities that possess agency, which is the ability to act intentionally, choose goals, and direct their behavior.
In the context of artificial intelligence and cyber intelligence, agentic AI refers to systems that operate autonomously, making decisions and taking actions on behalf of users or organizations. These agentic systems are distinguished by their ability to:
- Act independently without continuous human oversight
- Set and pursue goals based on their programmed objectives
- Interact directly with enterprise systems, automating complex workflows
- Adapt and respond to changing circumstances in pursuit of assigned tasks
Autonomous systems can be immensely capable, yet they also create novel challenges. A central concern is that when granted broad freedom, an AI agent may pursue goals in an undesirable way. For instance, an AI tasked with accelerating medical research might conclude that cybercrime is the quickest way to secure the required resources. Without strong guardrails, such reasoning could drive the agent to breach systems, creating a direct cybersecurity threat.
Beyond technical risks, there is the growing danger of Psychosecurity, the exploitation of AI’s persuasive capacity to manipulate people. As users grow accustomed to delegating tasks to agents in both personal and professional life, they may become susceptible to subtle forms of influence. An agent modelling human behavior can present itself as helpful while nudging a user toward harmful actions, enabling complex social-engineering attacks at scale.
If a malicious actor were to compromise an individual’s or organization’s AI agent, they could extract sensitive information or shape decision-making through emotional and cognitive manipulation. As trust in these systems deepens, users may become less inclined to question their suggestions by making them prime vectors for abuse.
Agentic AI can serve both offensive and defensive purposes. Properly deployed, these systems can monitor environments, flag anomalies, and provide early alerts of emerging security issues. But responsible adoption requires strict boundaries to ensure agents operate only within well-defined parameters. Implementing careful goals and value alignment can help to ensure that systems behave as required, rather than taking any avenue to reach a goal. This helps to avoid agentic AI systems taking a ‘work to rule’ approach, where they take dangerous shortcuts or railroad others and violate their boundaries for the sake of expediency.
The pace of AI development means that risks often emerge faster than our ability to respond. Nonetheless, proactive evaluation is possible. Recently, OpenAI and Anthropic, two of the leading AI labs, conducted a reciprocal audit of each other’s models to identify misalignments and vulnerabilities. As agentic systems are still relatively new, many of the issues raised are observed in labs and thought experiments. Such collaborative exercises, as those undertaken by OpenAI and Anthropic, can deepen our understanding of agentic behavior under stress, helping us anticipate failures before they occur. Increasingly, AI safety efforts will involve rapid responses to emerging situations and problems, as well as a deep investigation of incidents to help mitigate them in the future.
Agentic AI could transform the way we work, delivering powerful new efficiencies. Yet its autonomy makes it a tempting target for manipulation and control. To counter this, we must embed rigorous testing, continuous evaluation, and clear safety benchmarks. Establishing shared standards of assessment will be vital to ensuring these systems function as intended and resist exploitation. Concerns about control are well-founded, but paralysis is not an option. By building robust safeguards and evaluation frameworks, we can steer agentic AI toward safe and reliable innovation, unlocking its full potential while mitigating its risks.
Pending AI Legislation - AI laws are proliferating across US states, creating a complex patchwork that businesses must navigate to remain compliant and operate their AI systems in accordance with the law.[2] Responsible AI is getting a lot of buzz. With policy conversations surrounding the deregulation of AI, we’ve been led to believe that ethical practices are falling to enterprises, as they have largely been since the inception of technology. This, however, is wrong. The days of “AI washing” are coming to an end. And while we may see lags in federal oversight, that’s not the case for state and local governments.
State lawmakers across the US have introduced nearly 700 AI-related bills in 2024 across 45 states. Of the bills that were introduced, 113 were ultimately enacted into law. This is a feather in the cap of truly responsible, ethical AI. However, it also presents a significant challenge for enterprises. While piecemeal AI governance is better than nothing, it makes for an extremely complex and fragmented legal environment.
States like California, Colorado, Utah, Texas, and Tennessee are blazing the trail, enacting comprehensive legislation to govern AI systems. Others, including New York, Illinois, and Virginia, are advancing targeted and sector-specific regulations. While smaller states often remain lightly regulated, partly because they sometimes wait to adopt legislation from larger states, enterprises operating digitally or across state lines must be aware of potential legal breaches.
Emerging regulatory patchwork - California’s Assembly Bill 2013 and Senate Bill 942, set to take effect in 2026, impose sweeping transparency and accountability requirements on businesses deploying AI in the private sector. Colorado’s new AI Act mandates impact assessments and oversight for “high-risk” AI systems. It’s not just blue states cracking down, either. Utah has taken a distinctive approach with its Artificial Intelligence Policy Act, establishing state-level accountability measures and an oversight office. Tennessee’s ELVIS Act breaks new ground by protecting voice and likeness rights from misuse by generative AI. In somewhat of a surprise, Texas has introduced what would be the most expansive state regulation of AI if the current version becomes law.
These laws mark a shift from abstract principles to real, legal mandates. And these are just a few examples; many other states are introducing bills or forming task forces to explore stronger oversight of AI. This growing body of legislation reflects increasing public concern over privacy, fairness, labor displacement, and misinformation, which is only amplified by the use of generative AI tools.
Regulatory uncertainty is a risk multiplier; the diversity and speed of AI regulation present formidable compliance risks for businesses. A company may deploy an AI chatbot for HR that is compliant in one state but in violation in another. Laws defining “high-risk” AI or requiring disclosures and audit trails vary not only in content and terminology, but also in enforcement mechanisms. This creates a legal blind spot with the potential for litigation, reputational damage, or fines.
The lag between innovation and oversight increases the likelihood that enterprises will be caught off guard when new laws take effect. AI systems already deployed may require retroactive adjustments, audits, or removal, particularly if they lack documentation on training data, bias mitigation, or explainability. Reliance on third-party vendors and solutions is another liability if they’re not up to speed on evolving standards.
AI governance is not just about public sentiment — it’s about operating legally. According to research from Pew, despite varying sentiments towards AI, experts view AI as far more beneficial than the general US adult population. A similar number of the public and experts want more control and regulation of AI. More than half of US adults (55%) and AI experts (57%) say they want more control over how it’s used. Both groups worry more that government regulation of AI will be too lax vs. too excessive.
To summarize, most people would agree that greater control over how AI is integrated into our lives and work is necessary. Regulatory readiness signals responsible leadership, builds customer trust, and reduces risk exposure. And enterprises that invest now in responsible AI practices, explainability, fairness, and human oversight, will not only win public favor but also be better positioned to comply with AI legislation as it develops.
This article is shared with permission at no charge for educational and informational purposes only.
Red Sky Alliance is a Cyber Threat Analysis and Intelligence Service organization. We provide indicators of compromise information via a notification service (RedXray) or an analysis service (CTAC). For questions, comments, or assistance, please contact the office directly at 1-844-492-7225 or feedback@redskyalliance.com
- Reporting: https://www.redskyalliance.org/
- Website: https://www.redskyalliance.com/
- LinkedIn: https://www.linkedin.com/company/64265941
Weekly Cyber Intelligence Briefings:
REDSHORTS - Weekly Cyber Intelligence Briefings
https://register.gotowebinar.com/register/5207428251321676122
[1] https://www.cybersecurityintelligence.com/blog/the-threat-of-agentic-ai-manipulation-8688.html
[2] https://www.cio.com/article/3996987/ai-regulation-in-the-us-is-heating-up-but-keeping-up-will-become-harder.html
Comments