Stop Using CoPilot

Data security continues to cause angst, and the US House of Representatives has reportedly banned congressional staffers from using Microsoft’s AI coding assistant, Copilot.  The ban comes just weeks after Microsoft announced the official public release of AI Copilot on 14 March 2024.

The ban, implemented by the House’s Chief Administrative Officer Catherine Szpindor, reportedly stems from concerns about potential data leakage.  According to Axios, Szpindor’s office believes AI Copilot “poses a risk to users due to the threat of leaking House data to non-House approved cloud services.”

Concerns Over Data Security - Copilot, an AI tool integrated within Microsoft’s development environment, analyzes a programmer’s code and suggests completions or entire lines of code. This can significantly boost productivity for developers.  However, the tool relies on a massive dataset of publicly available code, raising concerns about potential security vulnerabilities.
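The mechanics behind this concern can be illustrated with a toy sketch (this is not Microsoft’s implementation; the corpus and function names are hypothetical): an assistant’s suggestions are drawn from code it has already seen, so whatever enters its training corpus can later resurface as a completion for another user.

```python
# Toy model of corpus-based code completion: suggestions come from
# previously seen code, so anything added to the corpus can resurface.
CORPUS = [
    'def connect(host, port):\n    return socket.create_connection((host, port))',
    'def parse_config(path):\n    with open(path) as f:\n        return json.load(f)',
]

def suggest(prefix: str):
    """Return the remainder of the first corpus snippet starting with `prefix`."""
    for snippet in CORPUS:
        if snippet.startswith(prefix):
            return snippet[len(prefix):]
    return None  # no match in the corpus
```

A real assistant uses a statistical model rather than literal prefix matching, but the dependency is the same: the suggestion space is shaped entirely by the code the system has ingested.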

The House’s Office of Cybersecurity reportedly fears that sensitive congressional data could be inadvertently incorporated into this vast codebase, potentially exposing it to unauthorized access.[1]
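One mitigation sometimes discussed for this class of risk is client-side filtering: scrubbing obvious secrets from the code context before it ever leaves the machine for a cloud completion service. A minimal sketch (the patterns and function are illustrative assumptions, not a feature of Copilot):

```python
import re

# Hypothetical client-side filter: redact obvious secret assignments from
# code context before it is sent to any external completion service.
SECRET_PATTERN = re.compile(
    r"(?i)(api[_-]?key|token|password)\s*=\s*['\"][^'\"]+['\"]"
)

def redact(code: str) -> str:
    """Replace the value side of secret-looking assignments with a placeholder."""
    return SECRET_PATTERN.sub(
        lambda m: m.group(0).split("=")[0] + '= "[REDACTED]"', code
    )
```

Pattern-based redaction is best-effort only — it cannot catch sensitive logic, internal hostnames, or secrets that don’t match known shapes, which is why organizations like the House may prefer an outright ban over filtering.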

Microsoft Responds - Microsoft has acknowledged the concerns raised by the House and emphasized its commitment to government user security.  A Microsoft spokesperson, speaking to Reuters, stated, “We recognize that government users have higher security requirements for data.  That’s why we announced a roadmap of Microsoft AI tools, like Copilot, that meet federal government security and compliance requirements that we intend to deliver later this year.”

Wider Implications - The House’s decision to ban AI Copilot reflects a broader trend of increased scrutiny surrounding AI tools that access user data.  While AI offers immense potential for efficiency and innovation, concerns about data privacy and security remain significant.

Commenting on this, Callie Guenther, Senior Manager of Cyber Threat Research at Critical Start, argued that the US government is cautious about regulating AI due to concerns such as data security and bias.  “The ban on congressional staffers’ use of Microsoft AI Copilot highlights the government’s careful approach to AI while trying to regulate it. The risks include data security, potential bias, dependence on external platforms, and opaque AI processes,” Ms. Guenther pointed out.

“The industry must enhance security, improve transparency, develop government-specific solutions, and support ongoing evaluation to address these concerns.  Congress might reconsider its stance if these issues are effectively addressed, especially with government-tailored AI versions demonstrating high security and ethical standards,” she advised.

Uncertain Future - The House’s ban is a strict one, targeting all commercially available versions of AI Copilot.  However, Szpindor’s office did indicate they would “be evaluating the government version when it becomes available and deciding at that time.”  This suggests a potential path forward if Microsoft can address the House’s security concerns.

It remains to be seen whether other government agencies or private companies will follow suit and implement similar restrictions on AI Copilot.  Either way, this incident underscores the need for robust data security practices in a world increasingly reliant on AI tools.[2]

This article is presented at no charge for educational and informational purposes only.

Red Sky Alliance is a Cyber Threat Analysis and Intelligence Service organization.  For questions, comments, or assistance, please get in touch with the office directly at 1-844-492-7225.


Weekly Cyber Intelligence Briefings: REDSHORTS


