Texas has officially joined the AI lawmaking race. On June 22, 2025, Governor Greg Abbott signed the Texas Responsible Artificial Intelligence Governance Act (TRAIGA) into law. Taking effect on January 1, 2026, TRAIGA makes Texas the third state, after Colorado and Utah, to pass legislation regulating the development and deployment of artificial intelligence systems. But unlike its peers, Texas aims to establish itself as both a regulator and a champion of innovation.
For employers, TRAIGA represents a major pivot from earlier legislative efforts. HB 149, the bill that became TRAIGA, replaces a more burdensome proposal, HB 1709, that would have directly regulated AI use in employment decisions. With HB 1709’s failure and TRAIGA’s passage, Texas has opted for a narrower framework that emphasizes intent-based accountability, government oversight, and a hands-off approach to private-sector hiring practices.
TRAIGA in Focus: Who It Covers and What It Regulates
TRAIGA applies to developers and deployers of AI systems that operate in Texas or serve Texas residents. Developers are those who create AI systems that are offered, sold, or deployed in Texas. Deployers are entities that put those systems into service in the state. The law defines an “artificial intelligence system” broadly as any machine-based process that infers from data inputs to generate outputs that can influence physical or virtual environments. This includes recommendation algorithms, generative AI models, and biometric systems.
Key restrictions include:
- Intentional discrimination: Developers and deployers may not use AI systems with the intent to unlawfully discriminate against protected classes. TRAIGA expressly excludes disparate impact as a basis for liability. Unintentional bias or disproportionate outcomes alone do not violate the statute.
- Constitutional violations: The law prohibits AI systems developed or used with the intent to infringe upon constitutional rights under federal or Texas law.
- Social scoring ban: TRAIGA bars Texas government entities from using AI to assign social scores that lead to detrimental treatment of individuals based on observed behavior or personal characteristics.
- Biometric restrictions: Government agencies may not deploy AI to identify individuals based on biometric data scraped from public sources without consent, where doing so violates state or federal law.
- Child exploitation prohibitions: The law bans the use of AI systems intended to create, aid, or distribute deepfake or AI-generated sexually explicit content involving minors.
Unlike broader frameworks such as Colorado’s AI law, TRAIGA does not use a tiered risk system. Instead, it relies on direct prohibitions for certain behaviors and targeted duties tied to developer and deployer roles.
A Major Departure from HB 1709
For employers, TRAIGA’s passage marks a strategic shift. Its failed predecessor, HB 1709, would have imposed significant obligations on private employers using AI in hiring, including mandatory impact assessments, transparency disclosures to job applicants, and detailed vendor accountability requirements.
TRAIGA removes those burdens entirely, offering employers a far lighter compliance lift than earlier proposals. It includes no mandatory audits, bias testing, or disclosure obligations for employment uses. Its enforcement authority resides solely with the Texas Attorney General, and it does not provide a private right of action.
Innovation-Friendly Features
TRAIGA also makes clear that Texas wants to be a destination for AI innovation. The law establishes one of the first AI-specific regulatory sandboxes in the U.S., allowing developers to test products without triggering certain licensing or regulatory requirements. Participants must submit applications, report quarterly metrics, and agree to oversight by the Department of Information Resources and the new Texas Artificial Intelligence Council.
These features reflect a strategic balance. This isn’t just regulation; it’s a long-game strategy positioning Texas as a national leader in AI innovation. By offering safe harbors, flexibility, and clear guardrails, the state is sending a signal to startups and enterprise innovators alike.
What It Means for Employers
While TRAIGA may not directly regulate how private employers use AI in hiring or workplace management, that doesn’t mean employers are free from all risk.
An employer’s intent matters. TRAIGA prohibits intentional discrimination, so an employer that knowingly deploys or retains an AI tool that discriminates unlawfully, and does nothing to address it, could still face enforcement by the Attorney General.
Employers using biometric tools, such as facial recognition, should proceed with caution. They must ensure they do not source data in a way that violates consent or constitutional protections.
Finally, vendor oversight remains critical. Even though TRAIGA doesn’t require impact assessments or transparency reports, employers should still vet vendors carefully to reduce potential exposure under civil rights laws.
Minimal Burden, But Not Zero Risk
Compared to HB 1709, TRAIGA imposes minimal compliance requirements on private employers. That lighter touch means fewer mandates, but it doesn’t eliminate employer responsibility. There are no required disclosures in the employment context, and no audits or use-case inventories are mandated. However, employers should still take precautions.
Although TRAIGA places most of its obligations on government agencies and excludes commercial and employment contexts from disclosure mandates, private employers should not ignore the law. Prohibitions on intentional discrimination and biometric misuse still apply and could create exposure if employers use AI carelessly or fail to oversee vendors. TRAIGA may have spared employers from sweeping regulation, but it doesn’t eliminate the need for responsible AI governance.
Looking Ahead
Texas joins a short list of states, including Colorado and Utah, that have moved beyond narrow or sector-specific rules to enact comprehensive AI legislation. Others, like California, have taken a more targeted approach. For instance, the California Civil Rights Department recently finalized regulations governing the use of automated decision systems in employment. While meaningful, those rules are limited in scope: they apply only to hiring and only under the state’s anti-discrimination laws.
By contrast, TRAIGA includes statewide preemption, blocking cities and counties from passing conflicting local AI ordinances. This offers regulatory clarity for companies operating across multiple jurisdictions in Texas.
Texas has made its stance clear: it intends to lead, not wait. TRAIGA may be narrow in scope, but it lays a foundation for future AI governance while protecting space for innovation.
Parting Thoughts
For employers, TRAIGA is a bullet dodged. Compared to its predecessor, HB 1709, the new law provides a far lighter compliance lift. Still, it offers a glimpse into the future of AI regulation, one that balances constitutional protections, fairness, and innovation. Employers would be wise to use this breathing room to assess their AI systems, vet their vendors, and prepare for what’s next. Because one thing is certain: TRAIGA is just the beginning.