States Are Quietly Setting the Rules for Clinical AI
By Will Moss · April 2026
Most conversations about AI regulation in healthcare focus on the federal government. That makes sense on paper. Federal agencies oversee medical devices, data privacy, and large parts of the healthcare system. If there were going to be a single, defining framework for AI, it would likely come from that level.
In practice, that is not what is happening.
Instead, states are starting to play a much larger role in shaping how AI is actually used in healthcare. While federal policy remains fragmented and often slow to develop, states have been more willing to pass targeted laws that address specific use cases. These laws tend to be narrower, but they are also more concrete.
This matters because many of the policies with the clearest operational impact are coming from states. A large share of AI-related policy activity remains advisory or focused on governance frameworks, but some of the most direct constraints on how AI can be deployed are emerging at the state level.
The result is a system where the rules for clinical AI are not being set in one place. They are developing unevenly across jurisdictions.
For health systems, this creates immediate challenges. Hospitals often operate across multiple states, and AI tools are not confined to a single jurisdiction. A system that is acceptable in one state may raise compliance concerns in another. As more state-level policies emerge, organizations are forced to track and reconcile differences that were not previously relevant.
For developers and vendors, the implications are similar. Products designed for national deployment now have to account for state-specific requirements, which can affect how systems are built, documented, and marketed. Even relatively narrow policies can shape broader design decisions, as vendors adjust their products to avoid compliance risk in the strictest jurisdictions.
This dynamic also helps explain why governance is shifting to the organizational level. In the absence of a unified federal framework, and with states introducing their own rules, health systems are left to interpret how different policies interact. They have to decide which standards to apply across their operations, even when those standards are not fully defined by regulators.
None of this suggests that states are replacing federal oversight. Federal agencies still shape large parts of the ecosystem, particularly through guidance and standards. But when it comes to more direct and enforceable constraints on AI use, states are increasingly where those rules are emerging.
The shift is gradual, and it is easy to miss. But it has real consequences. AI in healthcare is not being governed by a single, coordinated system. It is being shaped by a growing number of state-level decisions that, taken together, are starting to define the boundaries of clinical use.