Who Uses AI in Congress?
Everything I learned from running everything Congress has produced through AI-detection software
Artificial intelligence is the most important innovation since cellular respiration. Man, having created a Golem of extravagant and even limitless intelligence, can foresee his own extinction; even if he survives, he is beginning an era we can see through a glass but darkly. The order of the world will be upset, the prideful will be humbled, and the high will be made low (Massenkoff and McCrory, 2026). Soon the mechanical tendrils of life will reach across the galaxy, and the universe will be consumed by order.
I do not study that. Instead, I ask: how has AI been adopted by members of Congress? What effect, if any, has it had on the ideological content of legislation? What effect has it had on the rhetoric? Has it made members more productive? And can we attribute adoption to the members, or to their staff? In brief, AI has been widely adopted, but without any impact on actual policy outcomes. The bills people propose are no different. Members are no more productive, after controlling for preexisting productivity. However, I can show that AI use is substantially driven by the movement of staff from office to office, and that, even after controlling for the ideology of the legislator, AI-written speeches are substantially more socially progressive.
In the three years since the release of ChatGPT, AI has become a widely used tool among members of Congress and their staff. I was curious how much, so I ran the entire Congressional Record, and the full text of all bills proposed, in the 118th and 119th Congresses through Pangram. Pangram is an AI-detection company with exceptional accuracy, making a point of emphasis of having no false positives (Jabarian and Imas, 2025).
Adoption has been substantial. In the past three months of the 119th Congress, fully 25% of documents submitted to the Congressional Record have been AI-generated. Out of pure gossipy interest, here are the largest users of AI in the current Congress.
The floor speeches are what you might expect intuitively from a congressional record: transcripts of things people actually said. However, it used to be that you could correct the record at will, which led to whole speeches being read into the record that were never delivered. This state of affairs was regarded as somewhat ridiculous, and so in 1968 the appendix was formalized as the repository for anything a member would like to submit. It is where you will find congratulations, commendations, editorials, and commentary.
Looking at what is largely AI-written: about 3% of floor speeches and 26% of extensions of remarks are mostly generated by AI.
This masks the fact that legislators are also using substantial AI assistance in speeches they mostly write themselves. Counting any AI usage at all almost doubles the apparent adoption, while the ceremonial remarks are largely either entirely AI-generated or entirely human-written.
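The "largely AI" versus "any AI assistance" distinction comes down to where you put the threshold on a document's detected AI fraction. A minimal sketch of that bucketing; the cutoff values and category names here are my own illustration, not Pangram's:

```python
def classify_doc(ai_fraction, largely_ai_cutoff=0.5, any_ai_cutoff=0.1):
    """Bucket a document by its detected AI-written fraction.

    `ai_fraction` is the share of the text a detector flags as
    AI-generated (0.0 = fully human, 1.0 = fully AI). The cutoffs
    are illustrative assumptions, not Pangram's actual thresholds.
    """
    if ai_fraction >= largely_ai_cutoff:
        return "largely_ai"
    if ai_fraction >= any_ai_cutoff:
        return "ai_assisted"
    return "human"

docs = [0.0, 0.05, 0.3, 0.9]
print([classify_doc(f) for f in docs])
# → ['human', 'human', 'ai_assisted', 'largely_ai']
```

Counting `ai_assisted` together with `largely_ai` is what roughly doubles the apparent adoption in floor speeches, while ceremonial remarks cluster at the two extremes.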
Adoption is much higher in the House. This is partly explained by House members submitting more extensions of remarks to the record, and by their being younger, but the gap remains after accounting for both. Likely the difference lies in the quality of the staff: newer members use more AI.
However, mere adoption is an uninteresting topic. Of course members are going to adopt the tools available. What would be much more interesting is if AI tools were having an actual effect on policy positions or rhetorical emphasis. Unfortunately, we can pretty conclusively rule those out. We do, though, see substantial stylistic changes. A member's AI-written speeches are more verbose than their human-written ones. They refer to "we" in preference to "I". They refer less to specific bills before the House, or to anything specific at all.
To score the ideology of bills and the Congressional Record, I used Claude as a research assistant and asked it to rate how conservative or liberal a given text was along five dimensions: economic, social, spending, the scope of the federal government, and partisanship. Doing this was surprisingly cheap, costing only around 30 dollars, and the scores correlated well with external measures of ideology like DW-NOMINATE. Claude was able to correctly predict the party of the person giving a speech, bearing in mind that many speeches are entirely bipartisan remarks thanking someone for their distinguished service.
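The scoring setup reduces to a prompt plus a parse of the model's JSON reply. This is an illustrative reconstruction, not my exact prompt: the dimension names match the five above, but the wording is invented, and the actual API call to Claude is stood in for by a canned reply.

```python
import json

DIMENSIONS = ["economic", "social", "spending", "federal_scope", "partisanship"]

def build_prompt(text):
    # Illustrative wording; the real prompt differed in detail.
    return (
        "Rate the following congressional text on each dimension, from "
        "-2 (very liberal) to +2 (very conservative). Reply with a JSON "
        f"object keyed by: {', '.join(DIMENSIONS)}.\n\nTEXT:\n{text}"
    )

def parse_scores(reply_text):
    """Turn the model's JSON reply into a dimension -> float dict."""
    raw = json.loads(reply_text)
    return {d: float(raw[d]) for d in DIMENSIONS}

# A mocked reply, standing in for the actual model response:
reply = ('{"economic": -1, "social": -2, "spending": 0, '
         '"federal_scope": -1, "partisanship": 1}')
scores = parse_scores(reply)
print(scores["social"])  # → -2.0
```

Asking for strict JSON keyed by a fixed list of dimensions is what keeps batch scoring cheap: every reply parses the same way, with no per-document handling.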
My main finding is that AI text is more socially progressive within both parties, including after controlling for the ideology of the member and clustering at the member level. This is actually quite substantial, with the difference being equivalent to 30% of the gap between parties.
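"Controlling for the ideology of the member" amounts to comparing AI and non-AI speeches within the same member. Here is a toy sketch of that within-member contrast in pure Python; the data and scale are invented, and the actual estimate comes from a regression with member fixed effects and standard errors clustered at the member level:

```python
from collections import defaultdict
from statistics import mean

# (member, is_ai, social_score) -- invented toy data on a -2..+2 scale,
# where lower scores are more socially progressive.
speeches = [
    ("A", False, 1.0), ("A", True, 0.4),
    ("B", False, -0.5), ("B", True, -1.1),
    ("C", False, 0.8), ("C", True, 0.2),
]

def within_member_ai_effect(rows):
    """Average (AI mean - human mean) social score across members."""
    by_member = defaultdict(lambda: {True: [], False: []})
    for member, is_ai, score in rows:
        by_member[member][is_ai].append(score)
    diffs = [
        mean(g[True]) - mean(g[False])
        for g in by_member.values()
        if g[True] and g[False]  # need both kinds of speech to compare
    ]
    return mean(diffs)

print(round(within_member_ai_effect(speeches), 2))  # → -0.6
```

Because each member's AI speeches are compared only against that same member's human speeches, a negative estimate cannot be explained by more progressive members simply using AI more.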
However, while we see the same effect (though smaller) in bills, it seems to be driven by the resolutions, not the substantive legislation. I am fairly confident this is just picking up on the AI companies' writing style being "woker", plus unobserved selection into which resolutions and extensions are AI-written.
Now, I am not in the business of trying things over and over until I find a positive result, but finding no change in real-world outcomes is of course a bit disappointing. So I tried two more things. First, is the adoption of AI driven by staffers moving from office to office? (This was inspired by Andrew Kao and Sara Ji's working paper "Puppetmasters or Pawns".) Second, does AI increase the raw number of bills produced?
Congress releases complete records of whom and what it pays every quarter, including the names of all staffers. While we do not know the exact day on which people change jobs, we can infer the quarter from when they either disappear from the records or move from one office to another. These lateral moves are frequent enough that we can actually learn something from them.
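The inference step reduces to diffing each staffer's office across consecutive quarters. A minimal sketch with invented names and a simplified schema (the real disbursement files require heavy cleaning of name variants before any of this works):

```python
def infer_moves(records):
    """records: (staffer, quarter, office) tuples, one per quarter a
    staffer appears on an office's payroll. Returns lateral moves as
    (staffer, quarter, from_office, to_office), dated to the first
    quarter the staffer appears in the new office."""
    history = {}
    for staffer, quarter, office in sorted(records):
        history.setdefault(staffer, []).append((quarter, office))
    moves = []
    for staffer, rows in history.items():
        for (q1, o1), (q2, o2) in zip(rows, rows[1:]):
            if o1 != o2:
                moves.append((staffer, q2, o1, o2))
    return moves

records = [
    ("Jane Doe", "2024Q1", "Rep. Smith"),
    ("Jane Doe", "2024Q2", "Rep. Smith"),
    ("Jane Doe", "2024Q3", "Rep. Jones"),  # lateral move inferred here
    ("John Roe", "2024Q1", "Rep. Smith"),  # vanishes after Q1: an exit, not a move
]
print(infer_moves(records))
# → [('Jane Doe', '2024Q3', 'Rep. Smith', 'Rep. Jones')]
```

Staffers who simply disappear from the records generate no move, which is what separates exits from the lateral transitions the analysis relies on.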
<Edit: At this point, I presented some results on the movement of staffers. However, I erred in how I treated fixed effects, and my specification was not actually removing the time trend as I had hoped. Once corrected, there is no evidence that staffers bring their AI usage with them.>
Thus far, I do not think we have to worry that AI is degrading the quality of Congress, or influencing legislation.
This is a first pass at something I hope to turn into a paper. (Not now, though: the thought of working in Overleaf makes me shy away in terror. I need a week to work up the nerve.) I would greatly appreciate comments, opinions, and corrections. I would like to thank Max Spero of Pangramlabs for providing me with a substantial amount of free AI detection, and the fine folks at Anthropic for Claude Code, which made all of this possible.

This is fantastic work — exactly the kind of empirical grounding the conversation about AI on the Hill has been missing.
A couple of things from our work at POPVOX Foundation tracking AI adoption in the legislative branch:
The variation in adoption likely correlates with what's been officially approved institutionally — we've been cataloguing sanctioned tools across the House and Senate, and the approved list varies more than people realize. Worth layering in.
We're also hearing from Hill staff that the Copilot rollout in the House includes filters that occasionally block outputs flagged as "political" — a potential confounder worth knowing about for detection-based analysis.
Finally, I'd push back on bills introduced as the productivity measure. Oversight activity, legislative quality, and constituent casework are all places AI could be having real effects that raw bill counts miss entirely.
Happy to compare notes — caitlin@popvox.org
> Pangram is an AI-detection company with exceptional accuracy, making a point of emphasis of having no false positives.
I’m skeptical there can truly be no false positives. At one point at least I remember seeing challenges to some of Pangram’s claims and their methodology (though this is still a very cool analysis and probably a decent way to measure as a start)