The Senate Public Education Committee this month advanced a bill that would clarify and expand school policies on sexually explicit “deepfakes” generated by artificial intelligence, an increasingly widespread problem in American schools.
The explosion of AI tools in recent years has given the public the power to easily edit photos, create artwork, transform videos and generate audio clips. It’s also made it easier than ever to produce realistic nude photographs or porn videos without a subject’s consent using just a few photos from social media.
That’s an especially fraught problem for teenagers today, who are navigating relationships and social dynamics while spending more time online and on social media than their parents did. A new report from Thorn, a research nonprofit focused on child sexual abuse, found that one in eight students personally knew someone who had been the victim of a sexual deepfake, and that one in 17 had themselves been victims.
And in the past two years, students in New Jersey, Pennsylvania and Texas have targeted classmates with sexual deepfakes that caused lasting harm and ostracism.
Elliston Berry, of Aledo, Texas, was one of those victims. Two years ago, she discovered that a classmate had anonymously circulated AI-generated nudes of her on social media, she testified to the education committee on April 8.
“The fear and shame and overall misery that I endured is something I would never wish upon anyone, and millions of people are suffering,” Berry said. “Although this is happening to all ages, as a 14-year-old freshman in high school, I had never even thought of this kind of thing could ever occur. And I missed school out of shame. I locked myself in my room out of fear, and my academics suffered.”
Though Texas law already defines cyberbullying, Weatherford Republican Sen. Phil King’s Senate Bill 747 would expand that definition to include the “production or distribution of a video or image that depicts or appears to depict another student with the student’s intimate parts exposed or engaged in sexual conduct, including a video or image created through the use of artificial intelligence technology.”
Under SB 747, releasing or threatening to release artificially generated “intimate visual material” of minors or students without their consent would be grounds for expulsion or removal from class, penalties that current code already attaches to real sexual material.
School board policies on bullying would need to specifically address the distribution of nudes or sexual videos, both real and artificially generated, under the new policy. And existing programs for school districts that address the consequences of sharing sexually explicit photos or videos depicting minors would need to describe the specific risks of creating or sharing sexual deepfakes.
Sen. Angela Paxton, R-McKinney, presented the committee substitute for the bill in place of King during the committee’s April 8 meeting. The substitute made “a clarifying change” to Section 4, the portion dealing with programs about the consequences of sharing explicit material of minors.
But the committee left the bill pending at that meeting, revisiting it two days later. It then recommended approval of the committee substitute for SB 747 unanimously, 10-0.
Some steps have been taken at the federal level to address similar cases. Provisions in last year’s version of the Kids Online Safety Act, which Congress failed to pass, would have addressed deepfakes, expanding the scope of the original version of that bill, which debuted in 2022.
This year, Congress is considering the Take It Down Act, sponsored by Texas Sen. Ted Cruz and Minnesota Sen. Amy Klobuchar, which focuses more specifically on “revenge porn,” including cases that involve AI. The bipartisan proposal would make it harder for service providers to enable the distribution of such material on their sites. It has been widely supported by victim advocacy groups, law enforcement and tech companies, but critics worry it could infringe on civil liberties, including private citizens’ ability to use encrypted communication.
The Senate passed the bill in February, and the House Committee on Energy and Commerce passed the bill out with strong bipartisan support on April 8. First Lady Melania Trump and Berry both urged the bill’s passage in a statement.
But if the federal bill is signed into law, President Donald Trump’s deepening relationship with Elon Musk, the owner of X and the head of the new Department of Government Efficiency, has led some to worry about selective enforcement of the policy on that platform. Last year, sexual deepfakes of Taylor Swift spread rampantly on X, garnering tens of millions of views, according to the National Sexual Violence Research Center. Since then, the platform has cut its content moderation team.
In the meantime, as of Wednesday morning, SB 747 was listed on the local and uncontested calendar for Thursday for consideration by the full Senate. If approved, it then would head to the Texas House for further consideration.