
Torrez, Serrato Propose AI2A to Curb Deceptive Synthetic Media

New Mexico AG Raúl Torrez and Rep. Linda Serrato proposed the Artificial Intelligence Accountability Act (AI2A) to curb deceptive synthetic media and protect residents from AI-generated sexual exploitation.

By James Thompson
Source: losalamosreporter.com

New Mexico Attorney General Raúl Torrez and state Representative Linda Serrato unveiled the Artificial Intelligence Accountability Act, a proposed law aimed at giving prosecutors and civil authorities new tools to counter deceptive synthetic media. Announced at a press conference on January 17, the measure responds to growing concerns about AI-generated content that can falsely depict people and cause serious harm.

Torrez framed AI2A as a targeted response to recent misuse of generative systems, citing a case involving AI-created sexual exploitation images. The bill would create statutory protections for New Mexicans against such deceptive material, with particular attention to crimes involving the manufacture of child sexual abuse material with AI tools. The proposal is intended both to expand criminal accountability and to enable civil remedies for victims, strengthening the legal toolkit available to state authorities.

The announcement comes amid heightened attention from state and federal officials to the misuse of artificial intelligence for disinformation, impersonation, and sexual exploitation. That broader concern informed the bill’s focus on harms that cross digital and jurisdictional boundaries, reflecting how generative AI strains traditional evidence and enforcement models.

For Los Alamos County residents, the proposal has immediate local resonance. In a tight-knit lab town where professional reputations and personal privacy are closely linked, the ability to limit circulation of fabricated images or videos matters for families, the workforce at Los Alamos National Laboratory, and small businesses that rely on community trust. If enacted, AI2A could give county law enforcement clearer grounds to investigate AI-enabled abuse, and provide victims pathways to civil relief without relying solely on federal prosecutions.

The bill also raises practical questions for local government and institutions. Law enforcement, campus security, and school administrators may need new training and protocols to identify synthetic media, preserve digital evidence, and interface with prosecutors empowered by the new statute. At the same time, civil liberties and free speech considerations will shape how regulators draw lines between harmful fabrication and permissible expression, especially when AI tools are widely available.

Beyond New Mexico, lawmakers in other states and countries are watching how jurisdictions adapt existing statutes to novel technologies. AI2A could influence broader policy debates over how to balance innovation, public safety, and victims’ rights in an era when images and audio can be fabricated at scale.

For now, next steps include legislative hearings and detail work on definitions, penalties, and enforcement mechanisms. Verify unusual content before sharing it, preserve digital evidence if you are targeted, and follow local updates as the bill moves through the legislative process. The proposal signals that New Mexico is aiming to marry local protections with the broader push for legal responses to AI’s harms.
