The loudest argument in favor of New York’s new AI safety law is that it harmonizes with emerging federal standards. That argument deserves scrutiny — because the timing alone tells a different story.
Governor Kathy Hochul signed the Responsible AI Safety and Education Act — the RAISE Act — on December 19, 2025, just days before a White House executive order explicitly aimed at curbing exactly this kind of state-level AI legislation. The sequence was not coincidental. It was a race to the signature line. What New York framed as a nationally aligned regulatory framework arrived, in practice, as the first line drawn on what is fast becoming a fragmented continental map of AI governance, one that will cost American businesses hundreds of millions in duplicated compliance architecture before the decade is out.
What the Law Actually Does — Stripped of the Press Release
The RAISE Act, effective January 1, 2027, targets developers of frontier AI models — systems trained on computing power above defined thresholds. It mandates safety protocols before and during deployment, requires transparency documentation, establishes a new state oversight office, and creates liability exposure for developers whose systems contribute to “catastrophic” harms.
The final version signed into law is meaningfully narrower than the bill that passed the legislature in June. Penalties were reduced. Scope was trimmed. The provisions most likely to trigger immediate constitutional challenge were softened. As Nelson Mullins notes, the final law also addresses AI companion models separately — an acknowledgment that emotional-manipulation risks require distinct treatment from raw computational safety thresholds.
The law’s architecture reflects genuine policy thought. The problem is not the ideas. The problem is the jurisdiction.
The Fragmentation Math No One Is Running Publicly
New York is not legislating in a vacuum. California attempted its own sweeping AI safety law — SB 1047 — before Governor Newsom vetoed it in 2024, citing precisely the fragmentation concern. Colorado has sector-specific AI obligations already on the books. Illinois, Texas, and Virginia are each at various stages of AI-adjacent rulemaking. The European Union’s AI Act is simultaneously pulling multinational firms toward a third compliance standard.
What emerges for any company developing or deploying frontier AI systems is not a regulatory environment. It is a regulatory obstacle course with different rules at each checkpoint.
| Jurisdiction | Primary Mechanism | Scope Trigger | Effective / Status | Preemption Risk |
|---|---|---|---|---|
| New York (RAISE Act) | Developer safety protocols + oversight office | Compute threshold, frontier models | January 1, 2027 | High — conflicts with federal EO direction |
| California (SB 1047) | Safety obligations + kill-switch mandates | Training cost > $100M | Vetoed Sept 2024 | N/A — but successor bills advancing |
| Colorado (SB 205) | Algorithmic discrimination disclosure | High-risk AI decisions | February 2026 | Moderate |
| EU AI Act | Risk-tiered compliance + conformity assessments | Market deployment in EU | Phased 2024–2027 | Extraterritorial by design |
| Federal (U.S.) | Executive Order — anti-state-law stance | TBD — no enacted statute | Active but non-binding | Aspirational only |
The table above is not an academic exercise. For a company like Anthropic, OpenAI, or any hyperscaler with frontier model operations, each row represents a distinct legal team, a distinct documentation standard, and a distinct liability exposure. The compliance stack does not grow linearly with each added regime; every new jurisdiction also multiplies the interfaces that must be reconciled against the others.
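That compounding can be made rough and explicit. Treat each jurisdiction as carrying a standalone compliance cost and each pair of regimes as carrying a reconciliation cost for mapping one documentation standard onto another. A back-of-the-envelope model, with both cost parameters purely illustrative:

```latex
% Illustrative cost model: n jurisdictions, standalone cost a per regime,
% reconciliation cost b per pair of regimes that must be kept consistent.
C(n) = a\,n + b\binom{n}{2} = a\,n + \frac{b\,n(n-1)}{2}
```

The linear term is the one companies budget for. The pairwise term is the one that bites: moving from two regimes to five takes the number of reconciliation interfaces from one to ten.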
Albany’s Timing Problem Is Actually Washington’s Failure
It would be convenient to cast New York as the villain in this story. That reading misses the structural cause. States are legislating because Congress has not. The federal government has produced executive orders, directives that shift with administrations, but no durable national statute on AI safety. Into that vacuum, states will always move. It is what legislatures do.
“The absence of federal AI legislation is not a neutral condition. It is an active invitation for states to fill the space — and they will fill it differently, because their constituent pressures are different.”
New York’s financial services concentration means Albany’s instinct is to protect systemic risk. California’s instinct is to protect consumers from algorithmic harm. Texas’s instinct is to prevent what it perceives as content over-moderation. These are not compatible regulatory philosophies. A federal framework would force a single negotiated outcome. Without one, every state produces its own answer to the same question — and companies must satisfy all of them simultaneously.
Skadden’s analysis of the RAISE Act notes pointedly that the law was signed just as the White House was moving in the opposite direction, issuing guidance aimed at limiting state-level AI regulation. That collision is not resolved. It is deferred — to courts, to political cycles, and ultimately to the companies caught in the middle.
What “Frontier Model” Actually Captures — And What It Misses
The RAISE Act’s compute-threshold trigger is both its technical strength and its strategic weakness. Targeting frontier models by training compute is a defensible proxy for risk: systems trained at massive scale have commensurately massive potential for harm at scale. Alston & Bird’s review of the law confirms the threshold is designed to capture only the most advanced systems, deliberately excluding the long tail of commercial AI deployment.
The problem is that compute thresholds age badly. The model that required $100 million in compute to train in 2024 can be replicated for $4 million in 2027 as algorithmic efficiency compounds. The regulatory threshold that captured five companies at signing may capture fifty at enforcement, or, conversely, may be rendered obsolete by architectural shifts that achieve frontier capability through methods the threshold doesn't measure. Regulators writing technology law against a fixed technical benchmark are, by definition, writing yesterday's law.
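The arithmetic implied by those figures is worth spelling out. The dollar amounts are the article's own; expressing them as a cost-halving rate is simply one way to read them:

```latex
% A 25x cost reduction ($100M to $4M) over 36 months implies a halving time t:
\frac{\$100\text{M}}{\$4\text{M}} = 25 = 2^{36/t}
\quad\Longrightarrow\quad
t = \frac{36}{\log_2 25} \approx 7.8\ \text{months per cost halving}
```

Any fixed compute or cost threshold written against that curve is being overtaken at roughly the same pace.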
More consequentially for C-suite decision-makers: the law's obligations attach to developers, not deployers. A company that fine-tunes a frontier model for proprietary enterprise use sits in genuinely ambiguous territory. The law's documentation and transparency requirements were designed with foundation model labs in mind. Applied to enterprise AI teams, they demand compliance infrastructure those teams were never architected to provide.
The Competitive Geometry Is Already Shifting
Regulatory arbitrage is not a hypothetical future risk. It is an operating present. When states impose asymmetric compliance costs, capital and talent respond — not immediately, and not in dramatic relocations, but in incremental decisions about where to incorporate new AI subsidiaries, where to establish model training operations, and which regulatory jurisdiction to designate as the compliance anchor for global operations.
New York’s AI safety law does not make New York inhospitable to AI development. The talent density, the capital markets access, the proximity to financial services customers — none of that evaporates. But it does introduce a cost differential that did not previously exist, and in a capital-intensive industry with thin margins at the frontier, cost differentials compound into strategic consequences.
The more durable risk is reputational fragmentation of American AI governance itself. When the United States presents the world with a patchwork of state laws, a White House executive order pointing one direction, and no enacted federal statute, it weakens the credibility of American AI governance as a model for allied nations trying to set their own standards. The EU AI Act, whatever its technical shortcomings, offers a single answer to a single question. The American answer is currently fifty potential answers plus one unenacted federal aspiration.
What Boards Should Be Deciding Now, Not in 2026
The RAISE Act's effective date of January 1, 2027, creates an apparent buffer. That buffer is an illusion for any organization with frontier model exposure. The compliance architecture the law requires (safety documentation, pre-deployment assessments, incident response protocols, the standing capacity to interact with a new state oversight office) cannot be built in the final quarter of 2026. It requires structural decisions about legal entity design, documentation governance, and model development pipelines that need to be made now.
Specifically, boards and their general counsels should be stress-testing three questions. First: does our AI development activity trigger the RAISE Act's compute thresholds, and how do we monitor that threshold as our training operations scale? Second: does our compliance architecture assume a single federal standard that does not yet exist, leaving us exposed to state-by-state divergence we haven't priced? Third: as New York and California converge on overlapping but non-identical AI safety frameworks, are we building compliance systems flexible enough to satisfy both, or are we locking into one state's standard and hoping for federal preemption that may not come?
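On the first question, the monitoring problem is tractable enough to prototype. The sketch below is a minimal illustration, not a statement of the statute: the threshold figure, the alert margin, and every identifier in it are assumptions chosen for the example.

```python
from dataclasses import dataclass, field

# Placeholder threshold: the RAISE Act keys obligations to training compute,
# but the figure below is illustrative, not the statutory value.
ASSUMED_COMPUTE_THRESHOLD_FLOP = 1e26
ALERT_MARGIN = 0.8  # flag runs once they pass 80% of the assumed threshold


@dataclass
class TrainingRunLedger:
    """Tracks cumulative training compute per model so compliance teams
    see a threshold crossing coming, rather than discovering it after."""
    flop_by_model: dict[str, float] = field(default_factory=dict)

    def record(self, model: str, flop: float) -> None:
        # Accumulate compute across all training runs for this model.
        self.flop_by_model[model] = self.flop_by_model.get(model, 0.0) + flop

    def status(self, model: str) -> str:
        used = self.flop_by_model.get(model, 0.0)
        if used >= ASSUMED_COMPUTE_THRESHOLD_FLOP:
            return "over threshold: statutory obligations presumptively attach"
        if used >= ALERT_MARGIN * ASSUMED_COMPUTE_THRESHOLD_FLOP:
            return "approaching threshold: escalate to counsel before next run"
        return "below alert margin"


ledger = TrainingRunLedger()
ledger.record("frontier-candidate", 7.5e25)
ledger.record("frontier-candidate", 1.0e25)   # cumulative: 8.5e25 FLOP
print(ledger.status("frontier-candidate"))    # "approaching threshold: ..."
```

The design point is not the arithmetic, which is trivial, but the ownership: threshold proximity becomes a standing signal routed to counsel rather than a retrospective discovery made during enforcement.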
The companies that treat these as legal department questions rather than strategic questions will find themselves in 2027 with expensive retrofits instead of embedded infrastructure.
FetchLogic Take
Within 18 months of the RAISE Act’s effective date, at least one major frontier model developer will formally challenge New York’s law on Commerce Clause grounds — arguing that a state cannot regulate AI systems whose training, deployment, and effects are inherently interstate. That challenge will not resolve quickly. But the filing itself will freeze a wave of copycat state legislation as legislatures wait to see whether New York’s framework survives judicial scrutiny. Paradoxically, the fragmentation New York’s law accelerates may be temporarily halted by the legal battle its enforcement triggers. The window between now and that litigation outcome is the most dangerous period for compliance teams: maximum uncertainty, maximum divergence, and no federal floor to stand on.