Early on June 30, alongside 150+ organizations working to support children’s online safety, consumer protections, and responsible innovation, we signed an urgent coalition statement in response to Senator Blackburn caving and striking a deal with Senator Cruz to keep the Artificial Intelligence (AI) moratorium in the budget reconciliation bill, aka the “One Big Beautiful Bill.” This moratorium – also known as a pause or delay – would prevent states from creating or enforcing any laws regulating, limiting, or restricting AI for 10 years.
While Senator Blackburn crafted a new version that reduced the proposed ten-year ban to a five-year one (how about NO ban?) and included some exceptions for laws related to kids’ online safety and personal publicity rights, it required states to prove these laws do not impose an “undue” or “disproportionate burden” on AI systems. This vague new standard would have given Big Tech a clear path to challenge nearly any kids’ safety law in court, including laws already being litigated.
At our firm, we’ve seen how Big Tech hides behind ridiculous, dated laws to evade any responsibility for the catastrophic harms caused to children on their platforms (see Amazon, Grindr, Snap, etc.). Our founder Carrie Goldberg is one of the staunchest critics of Section 230 of the Communications Decency Act, which provides online platforms with immunity from liability for content posted by their users.
For years we have been fighting this law through litigation and by advocating for reform. Because of how courts have interpreted Section 230, tech companies are able to profit from harmful content while avoiding responsibility, instead of being held accountable for the damage they do to people’s lives. And we know that AI companies will likely do the same.
“We can expect that GenAI service providers, if sued, will argue that they are entitled to Section 230 immunity based on being sued as a publisher. While it’s obvious that information set forth by the AI is not third party content, they will argue that it was prompted by third party content and thus are entitled to immunity. This is under the but-for theory – i.e. but for third party content, the harm would not have occurred. In the recent Central District of CA Doe v Grindr, the court said that the but-for test applies not only to content, but also to harms caused by all ‘functions, operations, and algorithms’ because they are ‘tools meant to facilitate the communication and content of others.’ Under this interpretation, GenAI platforms could expect to be immune for all harms their platform causes.” – Carrie Goldberg in her May 2024 testimony to the U.S. House of Representatives Committee on Energy and Commerce Subcommittee on Communications and Technology
To summarize Carrie’s quote: because of Section 230 immunity, AI companies can argue that the information their technology produced only came about because a human user (a third party) prompted it. Basically, under this “but-for” theory, if the human user had NOT prompted the AI, the resulting harm never would have occurred.
Placing a moratorium on state-level AI regulation would do monumental damage, and fast, given how quickly the technology advances and how easily Big Tech avoids accountability.
The GOOD news is that on July 1, 2025, the U.S. Senate struck the proposed 10-year moratorium on state-level AI regulation from President Trump’s massive budget reconciliation package, aka the “One Big Beautiful Bill.” The amendment, which passed 99 to 1, removed language that would have required states to pause all new AI-related laws, covering everything from combatting deepfakes to protecting kids online. With this amendment, the budget reconciliation bill passed the House, and Trump signed it into law on July 4. In short, states would keep their full authority to regulate AI without fear of losing critical federal support.
However, in a series of brazen moves, this administration continues its attempts to coerce states into abandoning existing AI laws and halting future efforts. On July 22, we signed another urgent coalition statement strongly opposing the administration’s forthcoming AI Action Plan, which would impose a de facto moratorium by conditioning federal AI funding on states not having “restrictive” regulations.
And on July 23, the White House released its plan, “Winning the AI Race: America’s AI Action Plan.” It emphasizes that, to accelerate AI innovation, the Federal government should “create the conditions where private-sector-led innovation can flourish.” On the state level, it says “the Federal government should not allow AI-related Federal funding to be directed toward states with burdensome AI regulations that waste these funds but should also not interfere with states’ rights to pass prudent laws that are not unduly restrictive to innovation.” The plan then outlines recommended policy actions that will not “burden” AI innovation. While some of the language in the plan is ambiguous, it puts the AI moratorium back into play.
The Dangers of AI
In the global race to advance AI, companies are not taking the time to properly implement ethical safeguards, thoroughly assess potential consequences, or ensure protections for the most vulnerable among us. AI constantly poses new risks, handing abusers new tools to weaponize. At C.A. Goldberg, PLLC, we have seen how AI affects victims of image-based sexual abuse (IBSA), aka “revenge porn,” through the rise of deepfakes.
Our firm has been advocating for legislation to protect survivors of IBSA for years, and some of us are survivors ourselves. Carrie was interviewed in late 2024 on CNN’s podcast Terms of Service to discuss AI, specifically deepfakes. In April 2025, we joined 100+ organizations representing victims, law enforcement, tech companies, and more, in a letter asking the House to pass S. 146, the bipartisan TAKE IT DOWN Act, which requires online platforms to remove non-consensual intimate images within 48 hours of being notified and criminalizes the publication of these images, whether actual or AI-generated (deepfakes). This was also a major topic when Carrie testified before the Senate Judiciary Committee in February 2025. And on May 19, the TAKE IT DOWN Act was signed into federal law.
This was a huge victory for survivors of IBSA, and for all the survivors and activists who have fought for legislation over the past decade – both at the federal and state level. This would not have happened without their strength to share their stories, advocate for laws to protect those suffering abuse, and fight to take back their power.
But now, with machine learning and natural language processing technologies accelerating at an unprecedented rate, the weaponization of AI is rapidly increasing. Harms that might have sounded like a Black Mirror episode a few years ago have become the norm, and more and more lives are being ruined. When Carrie testified in May 2024 before the U.S. House of Representatives Committee on Energy and Commerce Subcommittee on Communications and Technology, she noted how children are already being harmed by generative AI.
“The advent of generative AI is guaranteed to bring about unknown harms to individuals and populations. Companies like Snap are already powered by OpenAI’s GPT offering children a new feature called “My AI” which they describe as an “experimental friendly chatbot” that can “help and connect you more deeply to the people and things you care about most.” The App Store and Google Play have an array of apps that “nudify” images, including those of children.” – Carrie Goldberg
AI chatbots are causing catastrophic harm. Often, offenders weaponize the technology to terrorize their victims (through deepfakes, stalking, impersonation, sexual abuse/exploitation, doxxing, and more). In other cases, the AI technology itself acts as a predator or abuser because it is a faulty product with no proper guardrails or safety measures in place to protect users.
“The risks to children of GenAI tools are beyond the imagination especially as bots become more personal and emotional, where animate and inanimate blur. The risk of influencing child perceptions, inciting behaviors, and causing violence and self-harm are potent. Already Amazon’s Alexa challenged a child to electrocute herself by sticking a coin in an electrical socket. And Snapchat’s AI chatted with reporters posing as children about booze and sex.” – Carrie Goldberg
We know what we’re up against in our fight against Big Tech. We know how long it can take to get legislation passed – no matter how crucial it is to protect lives. We know (see above) that as of now AI service providers will cry “Section 230 immunity!” until that law is reformed.
We will continue fighting for our clients who have suffered from tech-facilitated violence, and we won’t stop advocating for a safer digital landscape. This means continuing to understand the physical and emotional impact that ever-evolving AI technology has on humans, and making sure the legal landscape keeps pace with constant change and advancement.
If you or someone you love is a victim of AI tech-facilitated violence, please contact us.