US Races to Build Federal AI Rules Amid State Surge
The United States faces mounting pressure to establish comprehensive federal AI regulation: Congress is debating multiple bills that would require algorithmic bias audits, while states like Colorado forge ahead with their own enforceable frameworks.
A Patchwork Under Pressure
The United States stands at a regulatory crossroads. As artificial intelligence increasingly drives decisions in hiring, lending, healthcare, and criminal justice, Congress is racing to catch up with a wave of proposed federal legislation — even as individual states have begun enforcing their own AI accountability rules.
At the center of the debate sits the Algorithmic Accountability Act of 2025, introduced in both the Senate (S.2164) and House (H.R.5511). The bill would require companies deploying AI in high-stakes decisions to conduct Algorithmic Impact Assessments — systematic evaluations of bias, discrimination, transparency, and data security. Results would be submitted to the Federal Trade Commission, with summaries potentially made public.
The legislation targets "covered entities" with annual revenues exceeding $50 million or data on more than one million consumers. Sectors in scope include employment, credit, housing, healthcare, education, and public benefits — areas where algorithmic errors can devastate lives.
Competing Visions in Congress
On March 18, 2026, Senator Marsha Blackburn (R-TN) released a sweeping 291-page discussion draft known as the TRUMP AMERICA AI Act, which takes a markedly different approach. While it establishes a "duty of care" for AI developers and mandates third-party audits for political viewpoint discrimination, its primary goal is to preempt the growing patchwork of state AI laws with a single federal standard.
The Blackburn draft also incorporates provisions on copyright protection for creators, criminal penalties for AI-enabled child exploitation, and the sunsetting of Section 230 liability protections. Critics, including the Center for Data Innovation, argue the bill prioritizes political concerns over genuine algorithmic accountability.
Neither bill has passed Congress. The Algorithmic Accountability Act remains in committee, and the TRUMP AMERICA AI Act is still a discussion draft seeking stakeholder feedback.
States Aren't Waiting
While federal legislators debate, states have moved decisively. Colorado's AI Act (SB 24-205), set to take full effect on June 30, 2026, requires deployers of high-risk AI systems to conduct annual impact assessments, implement risk management programs, and disclose any discovered algorithmic discrimination to the state attorney general within 90 days.
New York City's Local Law 144 already mandates annual bias audits for AI tools used in employment decisions — a model that has influenced legislation nationwide. Illinois and California have introduced their own AI disclosure and anti-discrimination requirements effective in 2026.
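The bias audits these laws mandate rest on simple arithmetic: for each demographic category, compute the selection rate of an AI tool, then divide by the rate of the most-selected category to get an impact ratio. The sketch below illustrates that calculation with hypothetical applicant data; the group names and counts are invented for illustration, and the 0.8 benchmark reflects the EEOC's four-fifths rule of thumb rather than a threshold set by Local Law 144 itself.

```python
# Minimal sketch of the impact-ratio arithmetic reported in
# employment-AI bias audits (hypothetical data; not legal guidance).

def selection_rates(outcomes):
    """outcomes: {category: (selected, total)} -> {category: selection rate}"""
    return {cat: sel / total for cat, (sel, total) in outcomes.items()}

def impact_ratios(outcomes):
    """Each category's selection rate divided by the highest observed rate."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {cat: rate / best for cat, rate in rates.items()}

# Hypothetical applicant pools screened by an AI hiring tool.
pool = {"group_a": (90, 200), "group_b": (60, 200)}

for cat, ratio in impact_ratios(pool).items():
    flag = " (below 0.8 four-fifths benchmark)" if ratio < 0.8 else ""
    print(f"{cat}: impact ratio {ratio:.2f}{flag}")
# group_a: impact ratio 1.00
# group_b: impact ratio 0.67 (below 0.8 four-fifths benchmark)
```

A real audit layers category definitions, intersectional breakdowns, and sample-size caveats on top of this core ratio, but the disclosure obligation centers on exactly these numbers.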
This state-level momentum is precisely what the Trump administration's December 2025 executive order sought to contain, directing advisers to propose a "uniform" federal framework and creating an AI Litigation Task Force to challenge state laws deemed inconsistent with federal policy.
The Global Context
The US regulatory scramble unfolds against a global backdrop. The EU AI Act, which entered into force in August 2024, will become fully applicable by August 2026, imposing strict pre-deployment assessments on high-risk systems. South Korea's AI Basic Act and Singapore's governance framework for agentic AI both launched in January 2026.
The gap between these comprehensive international frameworks and the fragmented American approach is widening. According to a OneTrust analysis, regulatory divergence will intensify through 2027, creating compliance challenges for multinational companies operating across jurisdictions.
What Comes Next
Whether the US ultimately adopts the accountability-focused approach of the Algorithmic Accountability Act, the preemption-oriented TRUMP AMERICA AI Act, or some hybrid remains uncertain. What is clear: the era of unregulated algorithmic decision-making in America is ending — the only question is who writes the rules.