Discussion about this post

Caitlin McNally

This is fantastic work — exactly the kind of empirical grounding the conversation about AI on the Hill has been missing.

A couple of things from our work at POPVOX Foundation tracking AI adoption in the legislative branch:

The variation in adoption likely correlates with what's been officially approved institutionally — we've been cataloguing sanctioned tools across the House and Senate, and the approved list varies more than people realize. Worth layering in.

We're also hearing from Hill staff that the Copilot rollout in the House includes filters that occasionally block outputs flagged as "political" — a potential confounder worth knowing about for detection-based analysis.

Finally, I'd push back on "bills introduced" as the productivity measure. Oversight activity, legislative quality, and constituent casework are all places where AI could be having real effects that raw bill counts miss entirely.

Happy to compare notes — caitlin@popvox.org

Age of Infovores

> Pangram is an AI-detection company with exceptional accuracy, making a point of emphasis of having no false positives.

I’m skeptical there can truly be no false positives. I remember seeing challenges to some of Pangram’s claims and methodology at one point (though this is still a very cool analysis and probably a decent way to start measuring).

