As a cybersecurity reporter at ProPublica, much of my work over the past two years has focused on how the federal government and its IT contractors, such as Microsoft, have driven major technology changes. Lately, the technology in the news every day is artificial intelligence.
This emerging technology holds promise for everyone: Home users, corporations and the federal government are all rushing to adopt it. President Donald Trump and his Cabinet say AI will change society, making us more productive, efficient and safer, if we can adopt it fast enough.
But this message is not new. President Barack Obama’s administration used similar language a decade and a half ago as the US embarked on a cloud computing revolution.
I’ve studied how the federal government has handled, and mishandled, such transitions over the past two decades, and my reporting offers cautionary tales and important lessons as policymakers encourage the use of AI and federal agencies embrace the technology.
Lesson 1: There is no such thing as a free lunch
Then: In the early 2020s, a series of cyberattacks linked to Russia, China and Iran left the federal government reeling. The Biden administration asked major technology companies to help the US strengthen its security. In response, Microsoft CEO Satya Nadella promised to give the government $150 million in technical services to help improve its digital security. The company also offered “free” security updates for government customers.
Now: Last year, the Trump administration announced a series of agreements with technology companies that were intended to help government agencies “buy commercial AI tools at prices acceptable to the government.” Agencies can use OpenAI’s ChatGPT for $1. Gemini by Google for 47 cents. Grok by xAI for 42 cents. The administration hoped the lower prices would make it “easier for government agencies to access powerful AI capabilities … to improve mission delivery and operational efficiency.”
Takeaway: Be careful with freebies. Our investigation into Microsoft’s seemingly straightforward commitment revealed a complex, profit-driven scheme. After installing the updates, federal customers were effectively locked in, because switching to a competitor after the free trial would be difficult and expensive. At that point, the customer would have no choice but to pay the high subscription fees. The plan worked: One former Microsoft salesman told me “it was more successful than any of us could have imagined.” Responding to questions about the commitment, Microsoft said its sole purpose at the time was to support the administration’s urgent request to improve the security posture of government agencies that were facing advanced threats from nation-state actors.
Organizations looking to buy AI tools at discounted prices today must consider how costs may increase down the road. The General Services Administration warns that AI “use costs can grow rapidly without proper management and control” and advises organizations to “set usage limits and review usage reports regularly.”
Lesson 2: Oversight programs are only as effective as their resources
Then: In the Obama era, the federal government moved its information and computing workloads to data centers owned by private companies. Recognizing the potential risks, the administration created the Federal Risk and Authorization Management Program, or FedRAMP, in 2011 to help ensure the security of the cloud computing services it was encouraging US agencies to use.
But in my most recent reporting on the program, I found that it was no match for Microsoft, which stymied the FedRAMP team for five years as the company sought program approval for a major cloud offering known as GCC High. Despite serious concerns about its cybersecurity, FedRAMP ultimately approved the product, in part because it lacked the resources to keep pushing back. In response to questions, Microsoft told me: “We stand by our products and the comprehensive steps we’ve taken to ensure that all FedRAMP-approved products meet the necessary security and compliance requirements.”
Now: Today, this small office within the General Services Administration has even fewer resources to oversee the cloud technologies the government relies on, including AI. FedRAMP says it is currently operating “with minimal support staff” and “limited customer service.” The program was an early target of the Trump administration’s Department of Government Efficiency.
Takeaway: FedRAMP, which a 2024 White House memo said “must be a professional program that can analyze and verify the security claims” of cloud providers, is now a rubber stamp for the tech industry, former employees told me. As government agencies adopt AI tools that mine sensitive information, the implications of this reduction in cybersecurity protections are far-reaching. A GSA spokesperson defended the program and said FedRAMP now “operates with enhanced regulatory and accountability mechanisms.”
Lesson 3: “Independent” reviews aren’t always independent
Then: The government has long relied on so-called third-party assessors to verify security claims made by cloud service providers such as Microsoft and Google. In theory, these firms are independent experts who advise FedRAMP on whether a product meets federal standards. But in practice, their independence is compromised: They are paid by the very companies they review.
My recent reporting found that this arrangement creates an inherent conflict of interest. In the case of Microsoft’s GCC High, two assessment firms recommended the product for approval even though they had not been able to thoroughly review it, according to a former FedRAMP reviewer. One of those firms did not answer my questions; the other disputed the account.
FedRAMP, we found, is well aware of how financial arrangements between cloud companies and their assessors can distort official findings on cybersecurity issues. The program even created a back channel to encourage assessors to share concerns they might not raise in their formal reports for fear of upsetting their technology clients and losing business.
Now: As FedRAMP has been reduced to a “paper processor,” as one former GSA official put it, these third-party assessment firms have taken on even greater importance in the review process. In response to questions from ProPublica, the GSA said that the FedRAMP system “does not create an inherent conflict of interest for professional auditors who meet ethical and contractual performance expectations.” It did not respond to questions about the program’s back channel.
Takeaway: In effect, the pendulum has swung back to the pre-FedRAMP era, when each federal agency was individually responsible for vetting the products it used. GSA told me that FedRAMP’s job is to “ensure that agencies have enough information to make these risk decisions.” The problem is that agencies often lack the staff and resources to conduct comprehensive reviews, which means the entire system rests on the claims of cloud companies and the third-party firms they pay to audit them.