
Users frequently entrust AI assistants with highly sensitive information, including medical records, financial documents, and proprietary business code. Check Point researchers have disclosed a critical vulnerability in ChatGPT's architecture that enables attackers to extract user data covertly. A flaw in ChatGPT's code execution environment demonstrated how a single malicious prompt could quietly exfiltrate sensitive user data without warning or user approval.[1]

The Vulnerability - OpenAI de