In the latest of a series of prohibition-based decisions, the U.S. House of Representatives has banned the use of Microsoft Copilot, Microsoft’s AI chatbot, on all House-issued devices. The generative AI tool will also be blocked on all managed Windows devices. The cited reasons for the decision: data privacy and national security concerns. The announcement was handed down by Catherine Szpindor, the House of Representatives’ Chief Administrative Officer (CAO).
According to reports, the primary concern is that staffers’ use of Copilot and similar large language model (LLM) services will result in leaks of sensitive information, particularly from classified documents. Considering current events, trying to keep classified information under wraps is not a bad idea.
While the ban aims to safeguard against potential data breaches and ensure the protection of government-related data, information and document leaks have been a problem for the U.S. government since long before AI was anything but a fictional movie plot.
The ban, therefore, is simply a stopgap measure. First, LLMs may facilitate data and information leakage, but they don’t precipitate it. Second, although I don’t have access to such data, I love to make inconsequential bets, and in that vein, I’d be willing to bet that 99+% of staffers own and operate non-work-issued devices. With their personal devices, staffers can take whatever information they are privy to and use it as input to Copilot, ChatGPT, Gemini, LLaMA, and the like.
So, even though the electronic transfer and storage of work documents and information should be relegated to approved and managed devices, information will be transferred and shared outside of protocol in much the same way other “workarounds” have occurred, and for many of the same reasons.
Governing AI
For the record, many private companies are also attempting to ban, or apply stricter governance to, the use of LLMs and AI. At a minimum, smart companies are issuing acceptable use policies for the technology. No one wants their data leaked. Many companies don’t want their data used for training purposes either. When it comes to government and national secrets, the stakes are even higher; data protection is paramount. Given the history of government data leaks — and the fact that state information is so highly coveted — it's no wonder Congress is wary.
All of this being said, government officials are aware that LLMs like Copilot have already achieved liftoff, meaning there’s no way to completely stop them from propagating throughout the user base. What’s more, trying to do so would be both an exercise in futility and pure hypocrisy; in October 2023, the Biden Administration issued an executive order explicitly addressing the secure development and use of AI. Congress must also be aware that adversarial governments and cybercriminals are taking full advantage of every technological advancement they can get their hands on. For the U.S. to ignore the advantages AI and LLMs offer would be willful ignorance.
Vendors, too, have recognized that ignoring the needs and requests of the U.S. government is a mistake. In this case, Microsoft has acknowledged the government’s concerns and has announced that it will start to build a government-focused edition of Copilot that incorporates enhanced security controls and compliance requirements (one wonders why the rest of us aren’t afforded such protections, but I digress). Szpindor’s office has not committed to the presumptive tool’s use when it becomes available, but it is a good strategic decision for Microsoft to head down that runway.
The Wrap Up
Whether or not the House allows Copilot — or any other generative AI tool — on government-owned devices or systems is almost irrelevant. Some of you might argue with me and say, “But AI is the future of all technology!” And you are probably right. But at the end of the day, most concerns about AI are really about data security and privacy; we’re just applying data security concepts to a different model. At the heart of AI (and all its subcategories) is data — data repositories, data algorithms, data access controls, etc. Historical principles apply. What this means is that developers do not have to reinvent the wheel when it comes to AI security. It might feel like they do, but really — they don’t. Developers can take years of lessons learned and mold them for AI-based tools. If you were to read cybersecurity funding reports, it would be easy to think this is a new category of security controls. It’s actually just an evolution (though, if you are a vendor trying to raise venture capital, sprinkling “AI” into your pitch materials isn’t a bad way to go).
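To make that concrete, here is a minimal, hypothetical sketch of what “molding” an old control for an AI tool might look like: an ordinary access check and a redaction pass applied to text before it is handed to any LLM. The roles, patterns, and function names are illustrative assumptions on my part, not anything drawn from Copilot or the House’s actual systems.

```python
import re

# Hypothetical sketch: classic data-security controls (access checks and
# redaction) applied to text before it ever reaches an LLM. Role names and
# patterns are illustrative assumptions, not any real Copilot or House policy.

ALLOWED_ROLES = {"public-affairs", "it-admin"}  # assumed roles cleared to use AI tools

SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),               # SSN-like numbers
    re.compile(r"(?i)\b(classified|secret|noforn)\b"),   # classification markings
]


def prepare_prompt(user_role: str, text: str) -> str:
    """Apply ordinary access-control and redaction rules before text goes to an LLM."""
    if user_role not in ALLOWED_ROLES:
        raise PermissionError(f"Role '{user_role}' is not cleared to use generative AI tools")

    redacted = text
    for pattern in SENSITIVE_PATTERNS:
        redacted = pattern.sub("[REDACTED]", redacted)
    return redacted


if __name__ == "__main__":
    print(prepare_prompt("public-affairs", "Draft a summary; member SSN 123-45-6789 omitted."))
    # -> Draft a summary; member SSN [REDACTED] omitted.
```

Nothing in that sketch is new to AI; it is the same allow-list and data-loss-prevention thinking security teams have applied to email and file sharing for years, pointed at a different destination.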
The moral of the story may be this: The House of Representatives is banning a generative AI tool because of concerns about data security and privacy. Though the focus is on the “AI” part, it really comes down to the data: how it’s generated, how it’s used, how it’s processed, how it’s stored, where it’s stored, who has access to it... There is little new in this. AI is slightly different in that developers have to also consider the security of the algorithms used to generate results. But even this isn’t a totally new security concept. Which is all good news.
Except when a powerful entity, in this case, the House of Representatives, calls out a vendor for not developing to the highest security standards. If Microsoft can develop a hardened version of Copilot, it should. Perhaps that was the goal all along — to improve Copilot’s security as new versions are built. Or maybe the goal was always revenue first; freemium versions aren’t afforded the same level of security as paid versions. I get it. We live in a capitalist society.
However, when it comes to LLMs, there are implications beyond commercialization that builders and buyers have to think about. Kudos to the House for stepping forward and demanding better. Good luck to them, though, on keeping Copilot out of the hands of staffers entirely.
I agree with your take on this. I would add just a couple of thoughts. I don't think governments (and maybe especially our House of Representatives, which likely could not explain the difference between Facebook and an LLM) can regulate or contain AI on their own. Like it or not, they have to work with the big tech giants who are the leaders in the GenAI space.
And I think that, in addition to leveraging the same security controls we apply across the board, there are also some AI-specific policies and controls needed. Reviewing what GenAI tools spit out for hallucinations and bias is just one of those.