Are Humans Smarter than AI?

A recent report by Salt Security highlights a critical warning: without proper Application Programming Interface (API) discovery, governance, and security, the very technology meant to drive smarter customer engagement could open the door to cyber-attacks or data leakage.  The research also reveals a widening trust gap between businesses that deploy agentic AI for external communications and consumers who are wary of sharing personal information due to security concerns.

Because APIs power AI agents, giving them the ability to make requests, retrieve data, and interact across different platforms, API security is the common thread for improving confidence in agentic AI interactions.  The report proposes that once security is strengthened, consumer trust will follow, allowing agentic AI to reach its full business potential.[1]

Censuswide carried out the survey on behalf of Salt Security, polling 1,000 US-based consumers and 250 organizations with 250 or more employees that are already using agentic AI.  The report examined both sides of the agentic AI equation: the organizations already using it and the consumers encountering it.

According to the report, over half (53%) of organizations are already deploying, or planning to deploy, this technology in customer-facing roles.  Adoption of AI agents is broad: nearly half of organizations use between 6 and 20 types of agents, and a notable 19% deploy between 21 and 50 types.  The number of active agents within systems is also high: 37% of organizations report running between 1 and 100 agents, while a substantial 18% host between 501 and 1,000.

Despite this widespread adoption, the report reveals a concerning gap in security and governance.  Only 32% of organizations conduct daily API risk assessments, and a mere 37% have a dedicated API security solution or a data privacy team to oversee their AI initiatives.  In fact, a small but worrying 7% of organizations assess API risk monthly or even less frequently.

Consumer interactions with AI chatbots have significantly increased over the past year, with 64% of people engaging with them more frequently.  A large majority of these users, specifically 80%, shared personal information during these conversations.  This is often due to a sense of pressure, as 44% of consumers admit they have felt compelled to share information just to complete a task.  Despite these frequent interactions, a clear lack of trust remains, with only 22% of consumers feeling comfortable sharing data with AI agents.  This contrasts sharply with the 37% who trust phone interactions and the 54% who feel comfortable sharing data in person.  Additionally, most people, 62%, believe that AI agents are easier to deceive than humans.

“Agentic AI is changing the way businesses operate, but consumers are clearly signaling a lack of confidence,” said Michael Callahan, CMO at Salt Security.  “What many organizations overlook is that the safety and success of AI depends on APIs that power it, and they must be effectively discovered, governed, and secured.  Otherwise, the trust gap will widen, and the risks will escalate.”

APIs form the digital foundation for AI agents, enabling them to retrieve data, trigger actions, and interact across platforms.  However, each connection adds a potential attack surface. As AI agents automate more tasks and handle sensitive information, weaknesses in API authentication, input validation, and access control become high-risk vulnerabilities.
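Two of the weaknesses named above, broken authentication and missing input validation, can be illustrated with a minimal sketch.  The following Python example is purely illustrative and assumes a hypothetical endpoint with an HMAC-signed payload and an allow-listed parameter set; the secret, field names, and size limit are invented for the example.

```python
import hmac
import hashlib

# Hypothetical shared secret; in practice, load from a secrets manager,
# never hard-code it.
API_SECRET = b"example-secret"

def verify_signature(payload: bytes, signature_hex: str) -> bool:
    """Reject requests whose HMAC-SHA256 signature does not match the payload."""
    expected = hmac.new(API_SECRET, payload, hashlib.sha256).hexdigest()
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(expected, signature_hex)

def validate_input(params: dict) -> bool:
    """Allow-list validation: reject unexpected fields and oversized values."""
    allowed = {"customer_id", "query"}
    if set(params) - allowed:
        return False
    return all(isinstance(v, str) and len(v) <= 256 for v in params.values())
```

The design choice here is to reject by default: any field not explicitly allow-listed, and any unsigned or mis-signed request, is refused before the AI agent ever acts on it.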

The report outlines key best practices, including continuous API monitoring and anomaly detection using AI tools, encryption for data in transit and at rest, and regular security testing and developer training.  “As AI agents become more autonomous and embedded in business operations, securing the APIs that power them should be an urgent priority, as this is a problem that will just keep compounding until it’s out of control,” Michael Callahan concluded.  “Securing API infrastructure needs to happen now to reduce risk, improve trust, bolster innovation, and increase overall cyber resilience.”
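The continuous-monitoring practice mentioned above can be sketched with a simple statistical baseline: flag a client whose per-minute request count spikes well above its recent history.  This is a toy z-score detector, not any vendor's actual method; the window size and threshold are assumptions chosen for illustration.

```python
from collections import deque
import statistics

class RateAnomalyDetector:
    """Flag API clients whose per-minute request counts spike above baseline."""

    def __init__(self, window: int = 30, threshold: float = 3.0):
        self.history = deque(maxlen=window)  # recent per-minute counts
        self.threshold = threshold           # z-score cutoff for an alert

    def observe(self, count: int) -> bool:
        """Record a new per-minute count; return True if it is anomalous."""
        anomalous = False
        if len(self.history) >= 5:  # need a minimal baseline first
            mean = statistics.mean(self.history)
            stdev = statistics.pstdev(self.history) or 1.0
            anomalous = (count - mean) / stdev > self.threshold
        self.history.append(count)
        return anomalous
```

Production systems would track many more signals (error rates, payload shapes, geographic spread), but the principle is the same: learn a baseline per client, then alert on deviations rather than fixed limits.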


This article is shared with permission at no charge for educational and informational purposes only.

Red Sky Alliance is a Cyber Threat Analysis and Intelligence Service organization.  We provide indicators of compromise information via a notification service (RedXray) or an analysis service (CTAC).  For questions, comments, or assistance, please contact the office directly at 1-844-492-7225 or feedback@redskyalliance.com    

Weekly Cyber Intelligence Briefings:
REDSHORTS - Weekly Cyber Intelligence Briefings
https://register.gotowebinar.com/register/5207428251321676122


[1] https://www.itsecurityguru.org/2025/08/14/62-of-people-believe-ai-agents-are-easier-to-deceive-than-humans/
