DeepSeek is Not the Right Answer

Like many advanced AI-driven tools, the Chinese DeepSeek AI application offers impressive innovation. However, the sensitive nature of the data it processes, and the regulatory environment it operates under, raise significant privacy concerns. Combining large-scale data collection with advanced AI, particularly in healthcare, surveillance, and financial services, exacerbates those concerns.

See: https://redskyalliance.org/xindustry/banning-deepseek-from-govt-devices

The Australian government recently banned the DeepSeek AI app from government devices over privacy concerns. South Korea's intelligence agency has raised similar concerns, and regulators in Italy have blocked the use of the DeepSeek AI app on mobile devices.
Among the risks and challenges associated with DeepSeek AI are:

  • Excessive data collection is a critical issue. DeepSeek may collect vast amounts of personal data, including location, biometric, behavioral, and sensitive health information, without transparent consent mechanisms. Users may not fully understand what data is collected, how it is processed, or whether it is shared with third parties, raising ethical and regulatory red flags.
  • Data sharing and cross-border transfers are another primary security concern. Under Chinese data protection laws such as the Personal Information Protection Law (PIPL), companies must adhere to strict data localization rules. Even so, DeepSeek data could be shared across borders through global collaborations, raising compliance questions and exposing user data to jurisdictions with weaker privacy protections. Chinese government requirements for data access also heighten surveillance risks: sensitive user data could be accessed for state purposes, raising international concerns about civil liberties and the misuse of personal information.
  • Data security vulnerabilities add to the privacy risks. Advanced AI applications like DeepSeek rely on centralized or cloud-based architectures to process and analyze data, making them targets for cyberattacks.

A lack of robust encryption or security protocols could expose users to data breaches, identity theft, or misuse of personal data.
Threat actors could also exploit DeepSeek AI's algorithms through model inversion or data poisoning attacks (both detailed below), potentially exposing sensitive training or input data. This creates a dual threat: user privacy is compromised, and the reliability and trustworthiness of the AI's outputs are jeopardized.

  • A recurring issue is users' lack of control over their own data. DeepSeek users often have limited visibility into how their data is stored, processed, or retained, and the potential for misuse of personal information once it enters the system is a significant concern. DeepSeek AI's developers must prioritize compliance with privacy laws, enhance transparency, and adopt privacy-by-design principles.

This includes implementing secure data storage practices, encrypting sensitive information (sketched below), and giving users greater control over their data to build trust and ensure ethical AI use. Given the pace of today's cyber threats, understanding how attackers target applications like DeepSeek is urgent: it lets defenders assess which security controls are actually present and verify how robust they are.
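
Before turning to those attack techniques, here is a minimal sketch of one control mentioned above: encrypting sensitive records at rest with a symmetric key, using Python's `cryptography` library. The record fields and key handling are illustrative assumptions, not a description of DeepSeek's actual implementation.

```python
# Minimal sketch: encrypting a sensitive record at rest with a symmetric key.
# Requires the `cryptography` package (pip install cryptography). The record
# fields below are hypothetical, not taken from any real DeepSeek schema.
import json
from cryptography.fernet import Fernet

# In production the key would come from a key-management service, never code.
key = Fernet.generate_key()
fernet = Fernet(key)

record = {"user_id": "u-1001", "location": "redacted", "notes": "sensitive"}
ciphertext = fernet.encrypt(json.dumps(record).encode("utf-8"))

# Only holders of the key can recover the plaintext.
plaintext = json.loads(fernet.decrypt(ciphertext).decode("utf-8"))
assert plaintext == record
```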

  • Attackers can exploit DeepSeek's AI models to infer sensitive training data through model inversion attacks. By analyzing the model's outputs, they can reconstruct input data, such as user profiles, health metrics, or personal identifiers, that was part of the training set. Such attacks can be executed without direct access to the training data, making them stealthy and dangerous (a minimal sketch appears after this list).
  • Applications like DeepSeek, which rely on continuous learning or regular model updates, are vulnerable to data poisoning attacks. Attackers introduce malicious or biased data into the training pipeline by infiltrating external data sources or exploiting insufficient validation mechanisms. This can corrupt the model, causing it to generate harmful or inaccurate outputs (also sketched after this list).
  • Adversarial attacks exploit weaknesses in the app's data processing by feeding it crafted inputs that deceive the model. In DeepSeek AI's case, an attacker could apply subtle perturbations to input data, such as images or text, that cause the model to misclassify, bypass a security feature, or generate misleading outputs (a one-step example is sketched after this list).
  • Open-source AI models like DeepSeek, while offering accessibility and innovation, are increasingly exposed to supply chain attacks, in which adversaries exploit reliance on third-party dependencies, pre-trained models, or public repositories. The consequences can be severe. Adversaries may tamper with pre-trained models by embedding malicious code, backdoors, or poisoned data that compromises downstream applications, or they may target the software supply chain itself by manipulating the dependencies, libraries, or scripts used during model training or deployment (a basic integrity check is sketched after this list).

Either path can corrupt AI functionality across entire systems. For example, malicious code disguised as a DeepSeek package was recently distributed to spread infections through the software supply chain.
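
To make the model inversion bullet above concrete, here is a minimal sketch of the technique: an attacker with query access gradient-ascends on a blank input until a classifier assigns high confidence to a chosen class, recovering a class-representative input. The tiny PyTorch model is a stand-in, not DeepSeek's architecture.

```python
# Minimal sketch of a model-inversion attack: given only a trained classifier,
# optimize an input so the model assigns high confidence to a target class,
# recovering an input representative of that class. Toy model for illustration.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
model.eval()

target_class = 2
x = torch.zeros(1, 16, requires_grad=True)   # attacker-controlled input
optimizer = torch.optim.Adam([x], lr=0.1)

for _ in range(200):
    optimizer.zero_grad()
    loss = -model(x)[0, target_class]        # maximize the target-class logit
    loss.backward()
    optimizer.step()

# `x` now approximates what the model "thinks" class 2 looks like, leaking
# structure of the (possibly sensitive) training distribution.
```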
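The data poisoning risk can be illustrated the same way. In this sketch, flipping the labels of a small fraction of training examples measurably degrades a toy scikit-learn classifier; the dataset and model are illustrative assumptions only.

```python
# Minimal sketch of label-flip data poisoning: an attacker who can inject a
# small fraction of mislabeled examples into the training pipeline degrades
# the resulting model. scikit-learn is used purely for illustration.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clean = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Poison 20% of the training labels by flipping them.
rng = np.random.default_rng(0)
idx = rng.choice(len(y_train), size=len(y_train) // 5, replace=False)
y_poisoned = y_train.copy()
y_poisoned[idx] = 1 - y_poisoned[idx]

poisoned = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
print("clean accuracy:   ", clean.score(X_test, y_test))
print("poisoned accuracy:", poisoned.score(X_test, y_test))
```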
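Adversarial perturbations can likewise be shown in a few lines. The sketch below applies a one-step, FGSM-style perturbation (a small change aligned with the loss gradient) to a toy model's input; the model and tensors are placeholders, not DeepSeek components.

```python
# Minimal sketch of an adversarial (FGSM-style) perturbation: a small,
# gradient-aligned change to the input can flip the model's prediction
# while remaining semantically negligible. Toy model for illustration.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(16, 2))
model.eval()
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(1, 16)
label = torch.tensor([0])

x_adv = x.clone().requires_grad_(True)
loss = loss_fn(model(x_adv), label)
loss.backward()

epsilon = 0.25
x_adv = x + epsilon * x_adv.grad.sign()   # one-step FGSM perturbation

# The perturbed input may now be classified differently from the original.
print("original prediction: ", model(x).argmax(dim=1).item())
print("perturbed prediction:", model(x_adv).argmax(dim=1).item())
```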
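On the supply chain side, one basic defense is to verify a downloaded model or package artifact against a digest published out of band before loading it. The file name and pinned digest below are hypothetical placeholders.

```python
# Minimal defensive sketch: verify the checksum of a downloaded model or
# package artifact against a pinned, out-of-band hash before loading it.
import hashlib
from pathlib import Path

# Hypothetical placeholder; in practice, pin the digest the vendor publishes.
PINNED_SHA256 = "0000...replace-with-published-digest"

def verify_artifact(path: str, expected_sha256: str) -> bool:
    digest = hashlib.sha256(Path(path).read_bytes()).hexdigest()
    return digest == expected_sha256

if not verify_artifact("model_weights.bin", PINNED_SHA256):
    raise RuntimeError("Hash mismatch: possible supply-chain tampering")
```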

  • DeepSeek AI apps rely on APIs and third-party integrations to function efficiently. Attackers can exploit insecure APIs to gain unauthorized access to user data or the app's backend systems. Additionally, if DeepSeek shares data with other applications or services, attackers can intercept or manipulate these exchanges, creating a broader attack surface.
  • Improperly secured API endpoints or insufficient authentication protocols can expose sensitive data to exploitation. Attackers have recently abused open reverse proxy (ORP) instances containing DeepSeek API keys, and the scale of the activity points to widespread exploitation. These compromised ORP servers grant access to commercial LLM services, enabling unauthorized use of AI models while masking the attackers' identities, a practice known as LLMjacking (a baseline server-side defense is sketched after this list).
  • Adversaries can launch a distributed denial-of-service (DDoS) attack against the DeepSeek AI app to overwhelm its infrastructure, rendering the app unavailable to legitimate users and disrupting its services. Because AI applications like DeepSeek depend on real-time data processing and model inference, attackers can exploit these resource demands by flooding the app with fake requests or malicious traffic. This overloads the servers, consuming bandwidth, computational power, and storage capacity, and leads to crashes or delays (a simple rate-limiting countermeasure is sketched after this list).

Such attacks impact the app’s availability and create opportunities for secondary attacks, such as injecting malware or exploiting vulnerabilities during recovery efforts.
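
For the API risks above, a baseline control is server-side key enforcement with constant-time comparison, the kind of check whose absence enables key theft and LLMjacking. The Flask app, header name, and route below are illustrative assumptions, not DeepSeek's actual API.

```python
# Minimal sketch of server-side API-key enforcement with a constant-time
# comparison. App structure and header name are hypothetical.
import hmac
import os
from flask import Flask, abort, jsonify, request

app = Flask(__name__)
# Key loaded from the environment, never hard-coded or committed to a repo.
API_KEY = os.environ.get("SERVICE_API_KEY", "")

@app.before_request
def require_api_key():
    supplied = request.headers.get("X-API-Key", "")
    # hmac.compare_digest avoids timing side channels during comparison.
    if not API_KEY or not hmac.compare_digest(supplied, API_KEY):
        abort(401)

@app.route("/v1/infer", methods=["POST"])
def infer():
    return jsonify({"ok": True})
```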
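Against request flooding, a common first line of defense is rate limiting. Below is a minimal token-bucket sketch; in practice this control sits at the gateway or CDN layer rather than in application code.

```python
# Minimal sketch of a token-bucket rate limiter: each request spends one
# token; tokens refill at a fixed rate, so sustained floods get rejected.
import time

class TokenBucket:
    def __init__(self, rate_per_sec: float, capacity: int):
        self.rate = rate_per_sec
        self.capacity = capacity
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens according to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False   # request rejected: client is over its budget

bucket = TokenBucket(rate_per_sec=5, capacity=10)
print([bucket.allow() for _ in range(12)])   # later calls return False
```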

When AI applications and services like DeepSeek are attacked, data is the primary target. Data is the foundation of AI systems; it drives their functionality, accuracy, and decision-making. Because systems like DeepSeek routinely process sensitive information, such as customer data, proprietary models, and real-time inputs, adversaries focus on exfiltrating that data.

Malicious actors may exploit vulnerabilities to extract this data, exposing organizations to privacy breaches, regulatory violations, and reputational damage.
Professionals must understand these risks and take steps to mitigate them. Attacks targeting the data aspect of AI systems can have far-reaching consequences, including undermining the system's integrity, exposing sensitive information, and corrupting the AI model's behavior.


This article is shared at no charge and is for educational and informational purposes only.

Red Sky Alliance is a Cyber Threat Analysis and Intelligence Service organization. We provide indicators of compromise information via a notification service (RedXray) or an analysis service (CTAC). For questions, comments, or assistance, please get in touch with the office directly at 1-844-492-7225 or feedback@redskyalliance.com

• Reporting: https://www.redskyalliance.org/
• Website: https://www.redskyalliance.com/
• LinkedIn: https://www.linkedin.com/company/64265941

Weekly Cyber Intelligence Briefings:
REDSHORTS - Weekly Cyber Intelligence Briefings
https://register.gotowebinar.com/register/5207428251321676122
