The Regulatory Clock Is Already Running
Fifty-seven state and local AI bills were active heading into 2026. If your firm has been watching that number climb and wondering which ones create actual exposure, the White House just introduced a significant new variable.
In March 2026, the Trump administration released a formal AI policy blueprint directed at Congress. The core ask: establish a single federal AI governance standard and preempt state laws that conflict with it. According to POLITICO, the administration framed this explicitly as a competitiveness issue: a fragmented regulatory environment, with Colorado running one set of rules, Illinois another, and Texas a third, creates compliance overhead that slows AI adoption and puts American firms at a global disadvantage.
The immediate read for most architecture practices: this probably makes things simpler. One standard instead of fifty-seven. But the timing matters enormously. The gap between now and any enacted federal law is where the real exposure lives.
Colorado Is Not Waiting for Washington
Colorado's AI Act (SB 24-205) is already on the books. It applies to what the law calls "high-risk" AI systems, defined as systems that make or substantially influence consequential decisions. If your firm is using AI-assisted tools for building code analysis, structural recommendations, or anything that feeds directly into a licensed professional's stamped deliverable, the question of whether that qualifies as "consequential" is genuinely unsettled. Colorado puts the burden of disclosure and impact assessment on the deployer. In most cases, that means the firm, not the software vendor.
Illinois has its own Artificial Intelligence Video Interview Act, which has been in effect since 2020 and covers AI use in hiring decisions. Texas advanced HB 1709 in 2025, which would impose obligations on developers and deployers of high-risk AI systems similar in structure to Colorado's approach. Three different states, three different compliance frameworks, and your firm potentially operating in all of them on active projects.
The White House preemption proposal would theoretically override laws like Colorado's. It has not passed. Congress is not known for moving quickly on technology legislation. The practical window where state laws are live and federal law does not yet exist could stretch well into 2027.
What This Means for Firms Using AI Tools in Production
Tools like Spacemaker (now part of Autodesk), TestFit, and Hypar are increasingly embedded in early-phase design workflows. Firms using them for site feasibility or massing analysis are making decisions that affect project budgets, zoning strategies, and client commitments. The humans signing those deliverables are responsible for them. A federal preemption of state AI law does not change that.
Consider what "consequential decision" means in practice. A TestFit output that shows a 200-unit residential building is viable on a given site directly influences whether a developer commits capital. A Spacemaker massing study that informs a rezoning strategy shapes a project's entire financial model. Chaos Blog's overview of AI tools for architects documents how tools in this category are producing outputs that practitioners rely on, not just explore. When those outputs inform licensed work, the firm is in the zone that Colorado's law is designed to regulate.
There is also a professional liability dimension that has nothing to do with state AI statutes. If a project outcome is ever questioned, your ability to show exactly where AI contributed, where a licensed professional reviewed and overrode that input, and how the client was informed is your first line of defense. That documentation does not exist by default. You have to build the habit of creating it.
The Three Paths Firms Are Taking
Right now, firms are landing in roughly three places on this. Some are building internal AI governance around the strictest current state standard, knowing it might be preempted. Some are holding off entirely, assuming federal clarity is coming. The third group is documenting AI use in a way that satisfies both current state requirements and what federal standards are likely to require, regardless of which framework survives.
The third path is the one worth taking. Firms already maintaining clear records of where AI tools contribute to project deliverables are building something valuable on two fronts: compliance and liability defense. That is not overhead. That is practice management.
What the White House Blueprint Signals
The most useful signal in the administration's proposal is that Washington now treats AI governance as a legitimate business and legal concern. That is a real shift. The conversation in federal policy circles has moved from a civil liberties framing toward one of economic and professional accountability. That is the framing architecture, engineering, and construction firms operate in every day.
Federal standards are coming. The question is whether your firm's documentation habits will be ready when they arrive, or whether you will be building them reactively under deadline pressure after a project dispute or a state enforcement inquiry. The firms that will have the least friction in 2027 are the ones doing the unglamorous work now: writing down which tools are in use, on which project types, with what human review steps, and with what client-facing disclosure language.
That work is not complicated. It is just easy to skip when projects are moving fast and the rules are not yet final. The rules not being final is not a reason to wait. It is a reason to start with a framework flexible enough to adapt.
Three Steps Worth Taking Before Federal Rules Land
First, inventory every AI tool currently in use across your project workflow. Include tools embedded in platforms you already own, like Autodesk's Spacemaker capabilities or generative features inside Rhino and Revit plugins. The list is almost certainly longer than you think.
Second, map each tool to the decision types it influences. Is it generating options for human review, or is its output being passed forward with minimal modification? That distinction matters under every current state framework and will almost certainly matter under federal rules.
Third, draft a one-page internal policy that defines who reviews AI-generated content before it reaches a client or a stamped document, and what that review is expected to catch. You do not need outside counsel to write a first draft. You need someone in your firm who understands both the tools and the professional responsibility standards you already operate under.
The firms that build these habits now will not have to retrofit them under pressure later. That is the actual return on this work.
FAQ
Does Colorado's AI Act apply to architecture firms using AI design tools?
Colorado's AI Act (SB 24-205) applies to deployers of "high-risk" AI systems, defined as systems that make or substantially influence consequential decisions affecting individuals. For architecture firms, the critical question is whether AI tools used in building code analysis, structural input, or site feasibility studies qualify as influencing consequential decisions. The law places the burden of disclosure and impact assessment on the deployer, which in most cases means the firm rather than the software vendor. Because the definition of "consequential" is not yet fully settled through enforcement or case law, firms using AI tools that feed into licensed deliverables should treat Colorado's requirements as applicable until there is clear guidance otherwise. Firms with active projects in Colorado should document their AI use and review workflows now.
What does the White House federal AI preemption proposal mean for architecture practices?
The Trump administration released an AI policy blueprint in March 2026 asking Congress to establish a federal AI governance standard that would preempt conflicting state laws, according to POLITICO. For architecture practices, the proposal signals that a single national compliance framework may eventually replace the current patchwork of state-level AI regulations. However, the proposal has not passed, and Congress has historically moved slowly on technology legislation. The practical period where state laws like Colorado's are enforceable and federal law does not yet exist could extend into 2027 or beyond. Firms should not pause compliance planning in anticipation of federal preemption that may be years away from taking effect.
How should architecture firms document AI use to protect against professional liability?
Firms should maintain records that identify which AI tools contributed to any given project deliverable, what type of input or output each tool produced, and which licensed professional reviewed that output before it moved forward. The documentation should also capture whether the professional accepted, modified, or overrode the AI-generated content, and what disclosure the client received about AI's role in the work. This is not a compliance exercise alone: if a project outcome is ever disputed, that documentation is the firm's primary evidence that a qualified professional exercised independent judgment on the work. A one-page internal policy defining review responsibilities and documentation standards is a practical starting point. Firms do not need outside counsel to create a first draft of that policy.
Which AI design tools create the most regulatory exposure for architecture firms right now?
Tools that generate outputs directly influencing project decisions carry the most exposure under current state AI frameworks. Spacemaker (now part of Autodesk), TestFit, and Hypar are examples of tools used in early-phase massing, site feasibility, and space planning that produce outputs informing client commitments, zoning strategies, and budget assumptions. When a tool's output shapes whether a developer proceeds with a project or what a rezoning application claims is feasible, that output sits squarely in the territory that Colorado's AI Act and similar state laws target. Tools used purely for internal ideation, with no output that reaches a client or a stamped document, carry lower risk. The practical test: does the tool's output influence a decision that a client or a municipality relies on?
What should architecture firm principals do right now to prepare for AI regulation?
The three most actionable steps are: inventory every AI tool in active use across the firm's project workflow, including features embedded in platforms like Autodesk or Rhino plugins; map each tool to the type of decision it influences and assess whether that decision type would qualify as consequential under current state definitions; and draft a short internal policy that assigns review responsibility for AI-generated content before it reaches a client or a stamped document. Firms operating across multiple states should review the specific AI legislation active in each state where they have current projects rather than assuming one standard applies everywhere. Starting with documentation habits now, before rules are final, means firms will not be rebuilding their practices under pressure after a dispute or enforcement inquiry.