The adoption of artificial intelligence within U.S. federal agencies has surged in recent years, yet barriers such as talent shortages and public distrust stand in the way of responsibly integrating the technology into government services, according to a new report from the Brookings Institution.
Drawing on AI use case inventories from 2023 to 2025, federal employment data, Office of Management and Budget (OMB) memoranda, and interviews with current and former technologists across eight agencies, the report, released Wednesday, paints a picture of swift progress. By 2025, 41 agencies documented over 3,600 distinct AI applications, 69% more than reported in 2024 and five times the number from 2023. These applications span diverse governmental functions: More than half of the Social Security Administration’s use cases aid service delivery and benefits processing, while over half of the Department of Justice’s inventory bolsters law enforcement efforts.
However, this growth is unevenly distributed. For the past three years, five large agencies have accounted for more than half of all reported AI applications, with these larger entities contributing 76% of the total inventory in 2025. Smaller agencies struggle to keep up: The 11 small agencies reporting in 2025 collectively submitted only 60 use cases, a mere 2% of the overall total.
Several structural barriers hinder wider adoption, and a lack of specialized talent is among the most critical. Of the more than 56,000 technical job postings by the federal government since 2016, just over 1,600 (less than 3%) explicitly mentioned AI capabilities.
A hiring push during the Biden administration was intended to close this gap, but workforce reductions in early 2025 may have undone those efforts. At least 25% of AI-specific job listings were posted in 2024 or later, indicating that many recently hired AI specialists may have been among the easiest to dismiss.
Beyond staffing issues, a deep-seated culture of risk aversion persists within federal agencies. Nearly 60% of all AI applications remain in the pilot or pre-deployment phase, a sign that adoption is still maturing and that agencies need time for education and experimentation, time many of them lack. The report also highlights how the Trump administration’s linking of AI deployment to workforce reductions via the Department of Government Efficiency (DOGE) may have exacerbated this hesitancy.
Accountability gaps are another concern: More than 85% of the high-impact AI applications deployed in 2025 are missing some required information on risk mitigation measures, despite explicit OMB directives.
Public confidence presents an additional challenge. Recent Pew Research Center data reveals that about half of Americans now express more concern than excitement about AI’s growing role, up from 37% four years ago, with only 17% believing AI will positively impact the U.S. in the next two decades.
The report warns that the stakes are significant: Public trust in federal institutions remains near historic lows, with just 16% of Americans saying they trust Washington to do what is right most or nearly all of the time. Against this backdrop, the authors argue that poorly executed AI applications could cause severe damage, whereas well-designed ones focused on tangible service improvements could help restore confidence in government.
To achieve this, Brookings suggests enhancing AI literacy training across agencies, reforming procurement rules designed for static software systems, strengthening transparency practices around high-risk AI use, and prioritizing initiatives that yield clear, beneficial outcomes for the public.