Richard Sill, Author at Reason Foundation

Comments to the Office of Science and Technology Policy on AI regulatory reform

A version of the following public comment letter was submitted to the White House Office of Science and Technology Policy on October 27, 2025.

On behalf of Reason Foundation, we respectfully submit these comments in response to the Office of Science and Technology Policy’s (OSTP’s) request for information on “Regulatory Reform on Artificial Intelligence.”

Reason Foundation is a national 501(c)(3) public policy research and education organization with expertise across a range of policy areas, including technology and communications policy.

Numerous activities, innovations, and deployments are currently inhibited, delayed, or constrained by federal statute, regulation, or policy. For this reason, we recommend a formal audit or review to identify areas of regulatory conflict with innovation—including the effect of state laws where federal regulation is silent. In the meantime, we offer the following specific examples in response to Question (i) for OSTP’s review:

  1. Legacy NEPA Rules and Expansion Create Major Delays in Energy Production
  2. Regulatory Barriers Limit the Expansion of Automated Track Inspection

Legacy NEPA rules and expansion create major delays in energy production

To maintain global technological superiority, the United States must focus squarely on streamlined permitting reforms that increase energy capacity and thereby facilitate the development of artificial intelligence (AI) across industries. At present, multi-year permitting delays are the status quo for energy projects. These delays not only set back the construction of new power plants but also restrict the energy grid downstream. As the United States competes with foreign adversaries for dominance in AI, energy capacity will either be a force multiplier in the country’s success or lead to failure on the global stage.

Congress passed the National Environmental Policy Act (NEPA) in 1969, directing federal agencies to evaluate the environmental impact of their decision-making prior to any major federal action. As part of this directive, agencies were required to produce an Environmental Impact Statement (EIS) when a federal action would significantly alter the environment; the EIS must include a comprehensive analysis of environmental effects, alternatives to the proposed action, and proposed mitigation measures (42 U.S.C. § 4332).

For federal actions that would impose smaller effects on the environment or where the size of the effect is uncertain, agencies must complete an Environmental Assessment (EA). An EA is a shorter-form document that aims to determine whether a proposed federal action warrants a full EIS or if the effects are small enough to render a Finding of No Significant Impact (FONSI). These mandated reviews were meant to inform both decision-makers and the public of potential significant environmental impacts and potential mitigations, but have evolved into increasingly lengthy and complex processes. Further, despite their extensive documentation, these reviews generate a substantial amount of litigation. As a result, the environmental review process that was designed to increase public transparency increasingly serves to delay and add costs to worthy projects.

For instance, the Nuclear Regulatory Commission (NRC) promulgated licensing rules that incorporate NEPA’s environmental review framework into nuclear power project approvals (10 C.F.R. Part 51). These NRC licensing processes have traditionally entailed lengthy reviews and administrative hurdles, delaying and often derailing reliable energy projects that could support AI infrastructure. Similarly, power grid interconnection regulations governed by the Federal Energy Regulatory Commission (FERC) under 16 U.S.C. § 824a et seq. impose restrictive control over how new loads, such as AI data centers, connect to the grid. Lengthy wait times and cost-allocation disputes in FERC’s interconnection queues compound delays to the reliable, scalable power delivery essential to training and running AI models.

The Supreme Court’s decision in Seven County Infrastructure Coalition v. Eagle County curtailed this expansion of agency review. Moreover, recent reforms, such as the expansion of categorical exclusions, recent executive orders on permit streamlining, and the U.S. Court of Appeals for the D.C. Circuit’s Marin Audubon Society ruling, may remove some of the chokepoints.

However, legacy NEPA implementation and statutes built upon decades of overexpansion continue to impose substantial procedural burdens on AI-related infrastructure—particularly energy.

As the need for abundant energy production grows more urgent, this regulatory barrier is particularly consequential for small modular nuclear reactors (SMRs), which have emerged as a promising source of clean, abundant energy to power the energy-intensive AI data centers at the heart of U.S. technological superiority.

Regulatory barriers limit the expansion of automated track inspection

Automated track inspection (ATI) technologies have been tested in recent years to improve railway track defect detection, and they have the potential to improve rail safety while also increasing the operational efficiency of the network. Instead of shutting down tracks so human inspectors can walk them, or deploying specialized rail vehicles to inspect track visually, railroads mount ATI sensors on trains in regular service, collecting track component data as part of normal rail operations. These robust sensor data are then fed to AI-powered models to better plan maintenance activities.

Through pilot programs established by railroads under waivers from the Federal Railroad Administration (FRA), ATI was demonstrated to detect defects more reliably than traditional inspections—and to improve maintenance forecasting and planning over time. Pilot program data submitted to FRA show that defects per 100 miles of inspected track declined from 3.08 before the use of ATI to 0.24 during the ATI pilots, a 92.2% reduction. Reportable track-caused train derailments on main track per year during that same period declined from eleven to three, a 72.7% reduction. None of those three derailments was attributable to ATI-targeted defects: two occurred while manual visual inspections were still taking place twice weekly and one while pilot testing was inactive.
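The percentage figures follow directly from the reported counts. For readers who want to reproduce them, here is a minimal sketch; the before-and-after figures come from the pilot data above, while the helper function is our own illustration:

```python
# Reproduce the percentage reductions reported in the ATI pilot data.
def pct_reduction(before: float, after: float) -> float:
    """Percentage decline from `before` to `after`."""
    return (before - after) / before * 100

# Geometry defects per 100 miles of inspected track: 3.08 -> 0.24
print(f"Defect rate reduction: {pct_reduction(3.08, 0.24):.1f}%")  # 92.2%

# Reportable track-caused derailments per year on main track: 11 -> 3
print(f"Derailment reduction: {pct_reduction(11, 3):.1f}%")  # 72.7%
```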

These results are in line with expectations for successful ATI performance, reflecting a shift in maintenance practice from a “find and fix” approach to a “predict and prevent” approach. Better and earlier detection of geometry defects allows track maintenance to be performed more preventatively. Further, the higher-quality data collected by ATI over time allow for AI-powered improvements to maintenance forecasting and strategy. As such, as ATI use is expanded and repeated over time, defect detection rates—and defect-related hazards—should decline.

Realizing the benefits of ATI requires changes to manual inspection practices. ATI cannot inspect turnouts (i.e., the point where trains switch from one track to another), turnout components (e.g., “frogs”), and other special trackwork. By focusing ATI on track geometry defects, human inspectors can be redeployed to infrastructure where they are best positioned to inspect. If legacy visual inspection requirements are not modernized, railroads will have less incentive to invest in ATI and improve their inspection practices.

Analysis of the ATI pilot program data found that visual inspectors identified far more non-geometry defects than track geometry defects. Prior to ATI testing on the pilot corridors, visual inspectors identified 10,645 non-geometry defects and 422 geometry defects. In 2021, during the ATI pilots, visual inspectors identified 14,831 non-geometry defects (a 39.3% increase) and 238 geometry defects (a 43.6% decrease). Of the non-geometry defects identified by visual inspectors, 60-80% were in turnouts and special trackwork that ATI cannot inspect.

Another important benefit of ATI is reducing visual inspectors’ exposure to on-track hazards. Substituting ATI for routine geometry defect inspection, coupled with a corresponding reduction in visual inspections, will remove inspectors from harm’s way. Data from the ATI pilot program indicate that inspector track occupancy duration declined by approximately one-quarter after visual inspections were reduced to once per week, suggesting substantial reductions in inspector safety risk if ATI is widely deployed.

The Association of American Railroads recently petitioned for an industry-wide waiver to enable significantly expanded ATI deployments. The necessity of a waiver is indicative of the inflexibility of legacy rail safety regulations, which mandate rigid manual visual inspection frequencies (49 C.F.R. § 213.233). Importantly, these long-standing inspection frequency rules are based on questionable assumptions about accumulated tonnage loads and lack the scientific rigor that ought to guide safety policy. FRA has yet to act on the pending ATI waiver petition, thereby preventing rail carriers, rail workers, shippers, and consumers from realizing the safety and efficiency benefits of ATI.

Conclusion

We greatly appreciate OSTP’s attention to regulatory barriers to the development and deployment of AI technologies. Realizing the full benefits of these various technologies and applications will require a sustained, concerted effort on the part of policymakers.

Thank you for the opportunity to provide these comments to OSTP. We look forward to further participation and stand by to assist as requested.

Georgia could create a safer online environment for kids by empowering parents

A version of the following public comment was submitted to the Georgia Senate Study Committee on the Impact of Social Media and Artificial Intelligence on Children and Platform Privacy Protection on September 16, 2025.

Thank you for the opportunity to provide Reason Foundation’s view on the impact of social media and AI on children and platform privacy protection. At a time when parents are concerned about their children’s safety, advocates for bills such as the proposed App Store Accountability Act at the federal level, and its recently enacted state-level counterparts in Utah, Texas, and Louisiana, have called for age verification at the device level to ensure children do not access harmful content. The state laws do not go into full effect until 2026.

In practice, these mandates would require users, at the point of account creation, to provide information that can reliably establish their age, such as a government-issued ID and/or biometric data (facial scans, for example). The system would then sort users into predefined age brackets (children under 13, teenagers 13-17, and adults 18 and over) to tailor content restrictions and access rights accordingly. For minors, accounts would have to be linked to verified parental accounts, with explicit parental consent required before allowing downloads, purchases, or access to certain application features.
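To make the mechanics concrete, the following is a minimal sketch of the bracket-and-consent logic these mandates describe. The bracket boundaries (under 13, 13-17, 18 and over) come from the proposals as summarized above; the data structure, function names, and consent flow are hypothetical simplifications, not any bill’s actual specification:

```python
from dataclasses import dataclass

@dataclass
class User:
    verified_age: int               # established at sign-up via ID or biometric check
    parental_consent: bool = False  # set only on accounts linked to a verified parent

def age_bracket(user: User) -> str:
    """Sort a verified user into the predefined brackets."""
    if user.verified_age < 13:
        return "child"
    if user.verified_age < 18:
        return "teen"
    return "adult"

def may_download(user: User) -> bool:
    """Adults pass automatically; minors need explicit parental consent."""
    return age_bracket(user) == "adult" or user.parental_consent
```

Even in this toy form, the privacy issue discussed below is visible: `verified_age` exists only because a government ID or biometric scan was collected and checked upstream.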

While these checks aim to reduce minors’ exposure to harmful material, this approach both raises privacy concerns and risks eroding online anonymity. Requiring websites to view and store government IDs and biometrics greatly increases the danger to people’s privacy if a site is breached, especially for sites that are required to verify age but lack sufficient data security measures. One clear example is the breach of the dating app Tea, which exposed thousands of users’ information for bad actors to potentially exploit.

Furthermore, age verification substantially undermines online anonymity, as users must provide evidence of age that could be linked to their identity, even when platforms claim to employ privacy-preserving technologies. Throughout United States history, anonymous speech has been considered First Amendment-protected, including online speech. Age verification that links government IDs and biometrics to specific users erodes the ability to participate pseudonymously or anonymously online, an ability crucial for whistleblowers, activists, and vulnerable groups engaging with sensitive issues. The persistent digital footprints required by these laws raise risks of profiling, tracking, and surveillance, especially as verification systems integrate with government digital identity schemes.

Reason Foundation urges the committee to instead consider policies that would empower parents—the primary decision-makers for their children’s online access.  

Rather than mandating invasive age verification systems that collect sensitive personal data, it would be better to promote and utilize existing parental control features at the device and platform levels, such as screen time limits, content filters, and family account management. These tools can be flexibly adapted to individual preferences without exposing minors to privacy risks or chilling anonymous speech.

Similarly, promoting age-appropriate educational programs within schools is critical to equipping youth with the skills and knowledge to navigate online environments safely and ethically. Digital citizenship curricula, such as those offered by Common Sense Education or Google’s Be Internet Awesome, guide students in understanding privacy, communication etiquette, digital footprints, and cyberbullying awareness. Such education fosters informed, responsible technology use from an early age, complementing parental controls rather than replacing them.  

A balanced approach that maximizes family autonomy, minimizes data exposure, and supports education over coercion creates a safer online environment while respecting constitutional freedoms and technical feasibility. 

Although age verification practices are meant to protect minors from harmful content and regulate online engagement, current proposals involve complex technical challenges that risk both children’s and adults’ online privacy and security. Balancing safety, parental empowerment, and constitutional rights would foster a safer and privacy-respecting digital environment for all. 

Free speech rights secure a legal victory over California’s restrictive deepfake laws

Free speech rights recently secured an important legal win against one of California’s overly broad deepfake laws. The case underscores the ongoing difficulty of state legislators trying to regulate AI-generated content without infringing on constitutionally protected speech.

The California law, Assembly Bill 2655, would have required social media platforms to remove or label “materially deceptive” AI-generated political content near elections. Elon Musk and X, formerly Twitter, sued, and a federal judge just struck down the law, ruling it was preempted by Section 230 of the Communications Decency Act, which protects online platforms from liability for user-generated content.

Last year, the state enacted two political deepfake laws in the lead-up to the presidential election. AB 2655, the subject of Musk’s lawsuit, would have mandated online platforms to act against deceptive political deepfakes. The other law, Assembly Bill 2839, would have banned the creation and distribution of any political deepfakes depicting a candidate “doing or saying something that the candidate did not do or say” 120 days before and 60 days after an election. A federal judge blocked that law for lacking protections for parody and satire, which are essential pieces of free speech.

While the court rulings protecting free speech were correct, it is important to note that some concerns about deepfakes are well-founded. AI-generated media can depict people saying or doing things they never did, leading to reputational damage, misinformation, and defamation. This includes non-consensual, sexually explicit content known as “revenge porn.”

In 2024, California updated its existing laws to explicitly outlaw the creation and distribution of AI-generated revenge porn, giving victims and law enforcement stronger tools to address harms.

Rather than trying to overly regulate this fast-evolving technology in ways that restrict protected speech rights, California’s best course lies in leveraging existing legal tools. The state’s defamation, fraud, privacy, and right of publicity laws already provide strong remedies for victims of harmful deepfakes.

For someone falsely depicted in a deepfake spread online, the process of responding can be painful and difficult, but the first steps are to document the deepfake and report it to the hosting platform. Major social media sites such as Facebook, Instagram, X, and YouTube all have reporting tools for manipulated content, and reporting such posts can temporarily block their spread. If the deepfake is defamatory, invades privacy, or causes emotional distress, individuals can pursue legal remedies under existing laws covering defamation and privacy violations. Non-public figures suing for defamation need only prove negligence in court.

Key tools for addressing deepfakes, and specifically the potential spread of misinformation, are California’s existing political advertising disclosure laws. State law requires any political ad created or distributed by a committee that contains AI-generated or substantially altered images, audio, or video to include a clear and conspicuous disclosure stating that the content has been altered using artificial intelligence. The law provides an exception for ads that use AI only in the editing process. Like other disclosures in political ads, deepfake disclosures are meant to inform voters of what they are viewing and provide greater transparency. In this way, California’s current framework is already well-equipped to address concerns over misinformation in AI-generated content.

Technology can also help. There are AI detection tools that are increasingly good at identifying fake content, and media literacy initiatives could further help the public better recognize and question manipulated media.

Rather than holding online platforms broadly liable, which risks over-removal of legitimate content and threatens free speech protections, the state’s policy should focus on empowering users and institutions to address abuses directly. Liability should be imposed on the bad actors who create and distribute illegal deepfakes, rather than on the platforms that host third-party content.

Like any new technology, deepfakes can be used for both good and ill. Some of the risks they pose are real and concerning. But rather than trying to implement broad mandates that violate the First Amendment, the state should enforce existing laws that can protect Californians without sacrificing core liberties.

A version of this column first appeared at The Orange County Register.

Supreme Court erodes online privacy and free speech in age verification ruling

The Supreme Court recently upheld as constitutional a Texas law that requires any user who attempts to visit a site where at least one-third of the material is labeled “sexually explicit” and “harmful to minors” to prove they are at least 18 years old. This ruling in Free Speech Coalition v. Paxton marks a shift in how courts approach online age-verification laws targeting sexual content. The decision is already emboldening lawmakers in other states to pursue broader restrictions under the banner of child protection, but it remains to be seen how far courts will let that logic stretch.

For three decades, laws that regulated online content, especially sexual content, were tested under the “strict scrutiny” standard. Strict scrutiny is the highest standard of judicial review, used by courts to evaluate whether laws or government actions infringe on fundamental rights. This standard requires the government to prove these laws serve a compelling interest and are narrowly tailored to achieve that interest.

In Free Speech Coalition v. Paxton, the Supreme Court’s majority concluded that intermediate scrutiny, not strict scrutiny, should be the standard here. Under intermediate scrutiny, the government need only prove that it has a substantial interest in addressing a harm and that the law is reasonably tailored to that aim. Writing for the majority, Justice Clarence Thomas analogized the requirement to showing ID for alcohol or tobacco purchases, longstanding and widely accepted practices. He recast Texas’s law as regulating “unprotected conduct” (letting minors access material deemed obscene for minors) and said the new identification (ID) requirement only incidentally burdens adults.

This decision marks a significant departure from the strong First Amendment protections for online speech established in two major cases: Reno v. ACLU (1997) and Ashcroft v. ACLU (2004).

In Reno, the Supreme Court struck down the Communications Decency Act of 1996’s anti-indecency provisions as unconstitutionally vague and overbroad, holding that online speech is entitled to the same strict scrutiny as print, and that blanket bans on “indecent” or “patently offensive” content to protect minors could not justify sweeping limits on lawful adult speech.

Similarly, in Ashcroft, the Supreme Court repeatedly blocked enforcement of the Child Online Protection Act of 1998, emphasizing that even regulations aimed at restricting “harmful to minors” material could not limit adults’ access to legal speech if less restrictive alternatives, such as user controls or filtering, were available.

Under this new precedent, states now have more leeway to regulate online content under the guise of child safety, signaling that well-crafted age verification mandates for obscene or explicit material need not meet the previously rigorous First Amendment bar set by Reno and Ashcroft, and potentially paving the way for further state-level digital content regulation.

The complications of NetChoice v. Carr (2025)

Just one day before the Supreme Court issued its decision, the U.S. District Court for the Northern District of Georgia blocked enforcement of a much broader law: Georgia Senate Bill 351, also known as the “Protecting Georgia’s Children on Social Media Act of 2024.” The law would have required social media platforms to verify the age of all account holders, obtain parental consent for minors, and ban targeted advertising to minors based on personal data. In her ruling in NetChoice v. Carr, Judge Amy Totenberg found that the law would restrict teens’ access to online forums, infringe on anonymous speech, and interfere with platforms’ rights to communicate.

“The Court does not doubt the dangers posed by young people’s overwhelming exposure to social media,” Judge Totenberg wrote. “But, in its effort to aid parents, the Act’s solution creates serious obstacles for all Georgians, including teenagers, to engage in protected speech activities and would highly likely be unconstitutional.”

Following the release of the Supreme Court ruling, Georgia state Sen. Jason Anavitarte (R-Dallas), the author of Senate Bill 351, stated that “Based on Friday’s ruling at The Supreme Court, Judge Totenberg should be left with no choice but to allow SB 351 to go into effect … in its entirety.”

Georgia’s law, however, is much broader than Texas’ law and regulates all speech on social media platforms, not just obscene content, so the Supreme Court’s Texas decision does not automatically validate Georgia’s approach. This ruling is likely to be appealed, thus requiring a higher court to eventually clarify under what exact circumstances a state can require age verification from websites. 

The District Court applied the strict scrutiny standard in this case, which was the precedent prior to the Supreme Court ruling. However, the Supreme Court’s decision the very next day established that age verification laws targeting access to obscene or sexually explicit material can be upheld under intermediate scrutiny, as long as they are narrowly focused and only incidentally burden adult speech. This means that if a law is specifically designed to prevent minors from accessing pornography and does not substantially restrict adults’ access to protected speech, it is more likely to be considered constitutional. 

However, it is also possible that states will attempt to word every proposed age verification bill with broad definitions of what is “obscene,” “sexually explicit,” or “harmful to minors.” In doing so, states will try to force large social media sites that do not host pornography to nevertheless verify the ages of all their users. This may chill adults’ ability to access websites of their choosing while remaining anonymous, a feature that is still protected by the First Amendment so long as the platform allows anonymous participation.

Privacy concerns persist

Justice Clarence Thomas’ comparison between checking an ID at a liquor store and digital age verification obscures a profound difference between offline and online age verification. Buying alcohol at a liquor store does not create a permanent record of your interests and habits. Online verification, however, may involve uploading a government ID, submitting biometric data, or verifying identity through third-party platforms.

As Reason Foundation explained in its amicus brief opposing the law, this sensitive information—detailing exactly who visits which sites, and when—can be stored indefinitely, commercially exploited, or exposed in data breaches. Without robust federal safeguards regulating the collection, storage, and use of this data, mandatory age verification not only compromises user anonymity but threatens to chill free expression online.

Furthermore, it is questionable whether Texas’ age verification law will actually shield kids from harmful content. Similar age verification efforts have proven ineffective, as tech-savvy minors are able to circumvent restrictions using virtual private networks (VPNs), borrowed credentials, or mirror websites. The laws thus introduce serious privacy and speech burdens for adults without effectively achieving their stated goal of shielding minors from harmful content.

As states now have the authority to require age verification to block minors from sexually explicit content, it will be important to watch if and when states try to impose broad age verification requirements on any website they deem to have content harmful to minors.

It is highly likely that in the coming years the Supreme Court will have to clarify exactly which age verification requirements do not violate the First Amendment. It will also be important to watch whether any of these requirements actually keep children safe online or merely satisfy a political urge to appear protective while exposing millions of users to new privacy and data-security risks.

Deepfakes, AI, and existing laws

Introduction

The rapid advancement of artificial intelligence (AI) has led to a recent rise in “deepfakes,” in which AI is used to manipulate or fabricate audio, video, or images with realistic accuracy. The highly realistic output of deepfake technology has led to fears of potential misuse, such as harming others’ reputations through deliberate misrepresentation or spreading misinformation online.

Thirty-nine states have passed laws regulating the spread of intimate or erotic deepfakes, including bans on the creation and distribution of fabricated images or videos depicting child sexual abuse material (CSAM) and revenge porn. Revenge porn, or nonconsensual pornography, consists of sexual or pornographic images of individuals distributed without their consent. These laws are expansions of existing law and reiterate that these activities remain illegal when done with this new technology.

In addition, over 30 states have also proposed laws to try to regulate political deepfakes, which would restrict certain fabricated depictions of political candidates or office holders. These proposals have different goals than restrictions on sexual deepfakes; they are meant to prevent the creation and spread of images that may deceive voters, spread misinformation, and potentially influence elections.

Most of these state laws allow for political deepfakes so long as they include clear disclosures or watermarks identifying them as synthetic media. Deepfakes of candidates lacking these disclosures created or distributed before an election are outlawed. The watermark approach reflects a growing consensus that these disclosure requirements are a key tool in combating malicious political deepfakes.

While these proposals may have good intentions, restricting political deepfakes risks limiting political speech that is protected by the First Amendment. Many state laws are intended to combat political deepfakes that attempt to deceive voters. However, regulations may end up targeting deepfakes meant as parody or satire. Not all deepfakes are created with the intent to deceive. Parody and satire often use exaggerated or fabricated imagery to critique public figures or highlight social issues, and the line between humor and deception can be highly subjective. States may attempt to ban satirical political deepfakes that are realistic enough to mislead some viewers, regardless of the creator’s actual purpose. Ambiguity about which political deepfakes a law covers can create a chilling effect, deterring artists, comedians, and political commentators from engaging in creative expression out of fear that their work could be mischaracterized as deceptive and subjected to legal action.

It is crucial that policymakers approach this issue with caution. Deepfakes can be used as a form of self-expression, parody, and satire, all of which are protected speech under the First Amendment. Regulatory responses must not inadvertently infringe on free speech. Rather than rushing to impose restrictions, policymakers should focus on an approach that recognizes the protections already provided by libel and slander laws and encourages transparency and accountability without undermining fundamental rights.

This policy brief explores the potential dangers of deepfakes while advocating for solutions that prioritize technological advancement, self-regulation, and public education over government intervention. A nuanced response will allow society to address the challenges of deepfakes while preserving the benefits of creative and expressive digital technologies.

Read the full report: Deepfakes, AI, and Existing Laws, by Richard Sill, Technology Policy Fellow.

Texas amends non-consensual sexual deepfake law to include images

At the end of its 2025 legislative session in May, Texas passed House Bill 449 (HB 449), amending Section 21.165 of the Texas Penal Code to prohibit the production and distribution of all forms of non-consensual sexually explicit deepfakes. Previously, the code banned only “deepfake videos,” leaving a loophole for deepfake images. HB 449 closed this loophole in existing law, and its narrowly tailored language strengthens protections for victims of digital sexual exploitation while carefully avoiding overreach into constitutionally protected speech.

HB 449 targets the malicious creation or distribution of sexually explicit deepfakes, which are digitally altered images or videos that superimpose a person’s likeness onto explicit content without their consent. Under the revised statute, offenders face criminal penalties if they knowingly produce or share material depicting a person with their “intimate parts exposed or engaged in sexual conduct,” mirroring the language of prior law. By maintaining this existing standard, HB 449 ensures continuity in enforcement and avoids subjective expansions of what constitutes illegal content. The bill specifically addresses the rise of AI-generated imagery, which has enabled bad actors to exploit victims with alarming ease, often causing severe emotional, reputational, and professional harm. 

Unlike broadly worded political deepfake laws that may infringe on free speech, bills that limit their scope to non-consensual sexual content expand on existing laws, such as those banning child sexual abuse material (CSAM) and revenge porn. Courts have historically granted less protection to sexually explicit material produced or distributed without consent, and over 40 states have enacted laws to combat this problem. By focusing on harm rather than on deepfake technology overall, HB 449 avoids conflating malicious deepfakes with other AI-generated content more generally.

The Texas law exemplifies how state legislatures can address emerging technologies without undermining civil liberties. By narrowly targeting harmful, non-consensual acts and preserving existing legal standards, the Texas bill provides a model for other states grappling with the ethical and legal challenges of deepfake exploitation. As AI tools become more accessible, laws like HB 449 will play a vital role in deterring abuse and protecting individuals from digital violations of privacy and dignity. Policymakers nationwide should take note: Proactive, precision-based legislation can combat technological harms without infringing on lawful expression.  

Texas House Bill 449 would prevent unauthorized sexually explicit deepfakes

A version of the following public comment was submitted to the Texas State Senate Committee on Criminal Justice on May 19, 2025.

Thank you for the opportunity to submit comments on House Bill 449 of 2025 (HB 449). My name is Richard Sill, and I serve as a policy analyst at Reason Foundation, a national 501(c)(3) public policy research and education organization with expertise across a range of policy areas, including housing, public finance, and technology.

HB 449 would amend Section 21.165 of the Texas Penal Code to include “deep fake images” in what constitutes the production or distribution of sexually explicit content. Currently, the Penal Code refers only to “deep fake videos.”

HB 449 takes a careful regulatory approach, particularly in that it does not expand the definition of what constitutes an offense: a non-consensual deepfake depicting a “person with the person’s intimate parts exposed or engaged in sexual conduct.” By keeping the same language as existing law, HB 449 remains focused on targeting only truly harmful, non-consensual acts.

We commend Rep. González and the authors of HB 449 for their thoughtful approach in addressing the growing problem of non-consensual sexually explicit deepfake images and videos. The bill sends a clear message that exploiting someone’s likeness is unacceptable and will be met with serious consequences. Overall, HB 449 is a narrowly tailored and well-crafted addition to existing law meant to combat a specific and pressing issue. It deserves praise for striking a balance between protecting privacy and dignity without overreaching into other forms of expression.  

Comments to the Federal Trade Commission on digital censorship

On behalf of Reason Foundation, we respectfully submit these comments in response to the Federal Trade Commission’s (FTC’s) request for comment in the proceeding, Technology Platform Censorship, published February 20, 2025.

Reason Foundation is a national 501(c)(3) public policy research and education organization with expertise across a range of policy areas, including technology and communications policy.

Our comments argue that government interference in online speech is a bigger concern than technology platform censorship alone, as illustrated by censorship policies during the COVID-19 pandemic and the Hunter Biden laptop controversy; this has important implications for how Section 230 of the Communications Decency Act of 1996 is interpreted and enforced.

Full comment: Comments to the Federal Trade Commission on digital censorship

Comments to the Federal Communications Commission on deregulatory priorities

A version of this public comment was submitted to the Federal Communications Commission on April 11, 2025.

On behalf of Reason Foundation, we respectfully submit these comments in response to the Federal Communications Commission’s (FCC’s) request for comment in the proceeding, In Re: Delete, Delete, Delete, published March 12, 2025.

Our comments address the following topics:

  • Universal Service Fund inefficiencies and reform;
  • The application of Title II to net neutrality and Section 230; and
  • Digital discrimination provisions of the Infrastructure Investment and Jobs Act.

Universal Service Fund inefficiencies and reform

The Universal Service Fund (USF) was created to subsidize voice services for rural, low-income, and underserved areas, funded by a tax on voice services. However, in 2011, the FCC expanded the scope of “universal service” to include broadband data and created programs such as the Connect America Fund and Mobility Fund to help expand broadband access. The Commission declared broadband data as an essential utility for communication, education, healthcare, and economic participation. The FCC redefined “voice telephony services” to include Voice over Internet Protocol (VoIP) and required providers offering broadband to meet specific criteria to qualify for USF support. But all of this was still funded only by taxing voice services, as originally authorized by Congress.

These changes introduced several inefficiencies that have made the program less effective. First, adding broadband services significantly increased the program’s costs. The USF’s annual budget now exceeds $8 billion, with much of this allocated to broadband deployment subsidies under the High-Cost Program and E-Rate for schools and libraries. The “contribution fee” rate paid by telecom companies has risen dramatically, from 3% in 1998 to 36.6% in the second quarter of 2025. This increase has placed an undue financial burden on both providers and consumers, particularly those with lower incomes. Moreover, recent proposals to expand the USF revenue base to include revenue from broadband providers would increase household broadband bills by $5.25 to $17.96 per month, creating further financial strain for lower-income consumers who can ill afford it.

At the same time, telecommunications companies that provide voice services internationally are unfairly required to pay USF contributions on revenue earned outside the United States. Under the Limited International Revenue Exemption (LIRE) formula, companies with domestic interstate revenues of 12% or less of their total revenues pay the USF tax only on their domestic interstate revenue. But any company that earns more than 12% of its total revenue from domestic interstate services must pay the current 36.6% fee on its total revenue—a huge tax cliff, jumping from 4.4% of total revenue to 36.6% of total revenue.
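The size of the cliff follows from the arithmetic of the two regimes. Here is a minimal sketch using the figures in the text; the 36.6% rate and 12% threshold come from the comment above, while the function and the $100 million example are our own illustration:

```python
# Effective USF burden under the LIRE formula as characterized above.
USF_RATE = 0.366        # Q2 2025 contribution factor
LIRE_THRESHOLD = 0.12   # domestic interstate share of total revenue

def usf_due(total_revenue: float, interstate_share: float) -> float:
    """Below the threshold, the fee applies only to domestic interstate
    revenue; above it, to total revenue."""
    if interstate_share <= LIRE_THRESHOLD:
        return USF_RATE * interstate_share * total_revenue
    return USF_RATE * total_revenue

# Just below vs. just above the threshold, on $100 million of total revenue:
print(usf_due(100e6, 0.120))  # $4,392,000  -> about 4.4% of total revenue
print(usf_due(100e6, 0.121))  # $36,600,000 -> 36.6% of total revenue
```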

Administrative cost overruns have also become a significant problem within the USF, with administrative expenses ballooning significantly over time. In 2000, administrative costs were $43 million, but by 2022, they had surged to nearly $330 million—almost as much as the entire Rural Health Care Program budget ($500 million) and more than half of the Lifeline Program budget ($610 million). The Universal Service Administrative Company (USAC), which manages the fund, has faced criticism for failing to economize on overhead costs. The FCC mandates a budget floor of $4.5 billion annually for high-cost areas, requiring collection even if actual needs are lower. This rigid structure discourages cost optimization and diverts funds from more pressing priorities. The growing number of contractors receiving substantial fees from USAC further highlights the lack of cost discipline. These unchecked administrative expenses divert resources away from the program’s intended beneficiaries.

While the E-Rate program is meant to connect schools and libraries, it is riddled with inefficiency, fraud, and wasteful spending, including cases of unused equipment and inflated costs due to non-competitive bidding. Ambiguous rules and delays in funding approvals exacerbate these issues, while “gold plating” leads to unnecessary spending on underutilized technology. Additionally, the lack of performance metrics makes it difficult to assess whether funds are achieving their intended goals.

Similarly, lower-income households spend a larger share of their income on telecommunications services than wealthier households, making the fee disproportionately burdensome for them. Higher service costs caused by USF fees can also deter consumers from subscribing to telecommunications services or push them toward non-taxable alternatives such as FaceTime or Zoom. This shift has naturally reduced the taxable base for USF contributions, creating a cycle of rising fees as the FCC struggles to maintain funding levels. Additionally, companies that absorb USF costs instead of passing them on to consumers may scale back investments in network enhancements and new technologies, potentially limiting improvements in consumer access and satisfaction over time.
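The cycle of rising fees follows from how the contribution factor is set: roughly, projected program demand divided by the projected contribution base. Below is a sketch of that dynamic, in which only the roughly $8 billion annual demand figure comes from the text; the shrinking base figures are hypothetical:

```python
# Why a shrinking taxable base forces the contribution factor upward.
def contribution_factor(program_demand: float, taxable_base: float) -> float:
    """Roughly: USF program demand divided by assessable revenue."""
    return program_demand / taxable_base

demand = 8e9  # ~annual USF budget cited above
for base in (60e9, 40e9, 22e9):  # hypothetical declining voice-revenue base
    print(f"base ${base / 1e9:.0f}B -> factor {contribution_factor(demand, base):.1%}")
# 13.3%, 20.0%, 36.4%: as users migrate to non-taxable services, the same
# demand must be collected from a smaller base, so the rate climbs.
```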

We suggest the following USF reforms:

  • Reverse the 2011 decision that expanded the scope of “universal service,” thereby reverting to the original program Congress had enacted and intended.
  • Eliminate the USF LIRE tax cliff that penalizes American companies that compete internationally by generalizing the waiver granted to Tata in 2021 to all companies subject to USF obligations. This would ensure that domestic interstate universal services are funded fairly by USF fees on companies’ domestic interstate revenues.
  • Budget discipline must be imposed by introducing a cap for USF programs while eliminating counterproductive budget floors. This would encourage efficient spending and align funding with actual needs rather than arbitrary thresholds.
  • Independent audits of USAC operations should be conducted to implement stricter cost controls and reduce administrative overhead.
  • Embrace improved technologies that now make it possible to serve high-cost areas at lower costs, challenging the need for such subsidies. Today, wireless solutions such as low Earth orbit (LEO) satellite constellations can achieve similar results at lower costs.

Some have proposed expanding the USF tax base by making search engines and social media companies contribute to the fund. Proponents claim that because these companies benefit from publicly funded broadband, they ought to contribute to the fund. However, doing so would harm consumers, business owners, and innovation. It also ignores the reality that the vast majority of broadband infrastructure is entirely privately funded. As noted previously, the high contribution fees imposed on telecom providers divert funds that could have otherwise been invested in innovative technologies to improve service. Casting a wider net would simply distort and depress investment by a larger number of companies. Similarly, imposing new fees on services such as FaceTime or Zoom, if not borne by the companies themselves, would be passed down to consumers, and low-income consumers would disproportionately bear the burden.

Full Comments: Comments to the Federal Communications Commission on deregulatory priorities

Best practices for development of a federal artificial intelligence action plan

A version of the following federal comment was submitted in response to the Networking and Information Technology Research and Development (NITRD) National Coordination Office’s (NCO) request for information on the Development of an Artificial Intelligence (AI) Action Plan.

Introduction

We applaud President Trump’s Executive Order (E.O.) 14179, Removing Barriers to American Leadership in Artificial Intelligence, signed on January 23, 2025. We shared President Trump’s concern regarding former President Biden’s E.O. 14110, and we support the decision to revoke it in E.O. 14148. President Trump’s E.O. 14179 properly focuses on innovation and global competitiveness to keep the United States at the cutting edge of this critical new technology.

Because of its dynamic market economy, the United States is the world’s leader in innovation and the deployment of new technologies. Artificial intelligence (AI) promises to be among the most important technological revolutions in recent history, and the importance of President Trump renewing the nation’s commitment to free markets and bold innovation in E.O. 14179 cannot be overstated.

AI is the type of foundational technology where new ideas build on each other, opening innovative paths that are difficult to foresee in advance and counterproductive to regulate using knowledge that will quickly become obsolete. We agree with Vice President Vance that “AI will have countless revolutionary applications in economic innovation, job creation, national security, healthcare, free expression, and beyond.” Free markets are not just the best way to realize this future; they are the only way to realize it.

To assist in the administration’s development of an AI Action Plan, we submit comments on several key policy areas important to continued AI growth: avoiding overregulation, promoting secure access to data, and free speech.

Avoiding overregulation and encouraging innovation

President Trump’s E.O. 14179 reflects a decisive shift toward a light-touch regulatory approach to AI development and deployment in the United States. This policy direction prioritizes innovation and global competitiveness while reversing regulations that could be burdensome to AI development. We see the development of the AI Action Plan as a key opportunity for the Trump administration to work alongside AI developers and deployers to create a clear and concise framework to best encourage innovation.

Among the most important steps the AI Action Plan can take is discouraging the overregulation of AI. As Vice President J.D. Vance noted in his February 11, 2025, remarks at the Artificial Intelligence Action Summit in Paris, governments frequently respond to new technologies by being “too self-conscious, too risk-averse.” When governments regulate new technology too early, they often completely cut off whole directions for development without knowing it. Innovation is a costly and uncertain process for which people and firms freely competing in the private sector is essential.

Revoking former President Biden’s E.O. 14110 was an important step in this direction. In that case, fears about outcomes like discrimination motivated premature and burdensome regulation. Importantly, the United States already has numerous laws and regulations prohibiting discriminatory conduct. We will not know if or how these laws should be modified until the technology develops further. 

The Action Plan should adopt a similar recommendation that would be applied across federal departments and use cases for AI. Federal agencies should clarify how existing laws apply to AI technology rather than introducing duplicative or overly burdensome new regulations. Many existing legal frameworks can be adapted to address emerging AI challenges, reducing the need for entirely new regulations. This approach not only minimizes compliance burdens for businesses but also ensures that regulations remain flexible enough to accommodate the rapid evolution of AI technologies.

The Action Plan should also seek to make recommendations that are responsive to the many diverse use cases and industries that AI promises to impact. Different industries use AI in unique ways, and a one-size-fits-all regulatory approach may fail to address sector-specific risks and opportunities. For instance, health care applications of AI require stringent privacy protections due to sensitive patient data, while in financial services, fairness and anti-discrimination concerns could come to the forefront in credit scoring algorithms. The Action Plan should encourage agencies to draw on technical expertise in the private sector for the knowledge and tools needed to craft effective policies tailored to their domains.

By assessing the laws and regulations that already apply to AI, determining general and industry-specific legal issues that may arise, and learning from ongoing innovation in the private sector, federal agencies can move toward consistent, clear, and flexible AI policy that will encourage rather than stifle innovation.

Promoting secure access to data for AI models

Former President Biden’s E.O. 14110 sought to scrutinize how personal data might be used in training datasets, regardless of whether that data was private or publicly available. In keeping with President Trump’s goal of promoting AI innovation, the Action Plan should encourage the removal of barriers to accessing and utilizing public data for the training of AI models. Unrestricted access to public information is crucial to the administration’s goal of maintaining American AI leadership worldwide.

Both publicly available data and private data are necessary for AI models’ continued improvement. Any standard on data collected for AI models should clearly distinguish between publicly available and private personal data. Publicly available data includes information that is accessible to the general public, often through government records, public websites, or other open sources. In contrast, private personal data is information that is not intended for public access and is typically collected directly from individuals with an expectation of confidentiality. 

Access to publicly available data directly impacts AI systems’ quality, functionality, and overall performance. It also enables substantial cost and time efficiencies for researchers, entrepreneurs, and government agencies. By eliminating the need to collect, aggregate, and store data from scratch, these stakeholders can focus their resources on problem-solving and innovation. This accelerates the development of new AI models and enables the creation of diverse applications across multiple sectors, from health care and housing to economic development and national security.

Open data fosters innovation by promoting higher-quality decision-making, increasing data-driven accountability, and supporting global advancements in AI. Government agencies can use these AI tools, leveraging open data, to enhance the efficiency, accessibility, and effectiveness of services. For instance, machine learning algorithms can process weather information to provide timely insights to farmers, or AI can simplify tax filing processes for citizens. The combination of open data and AI holds great promise for improving government efficiency, reducing fraud risks, and enhancing security in key economic sectors.

The research community benefits immensely from public datasets, as these enable the training of predictive models that create value for both public and private sectors. Government healthcare data, for example, can contribute to improving existing treatment options and even aid in the development of novel cures. By making information freely available to use and reuse, open data allows more people to contribute to the United States’ AI development and keeps the country at the forefront of the field.

While access to personal data is also important to AI development, it can raise serious privacy concerns. The AI Action Plan should encourage agencies to work together and with the private sector to keep data secure as technology continues to evolve. Maintaining data security both reduces the risk of personal information being leaked and helps create avenues for more secure AI development in the future. The National Institute of Standards and Technology (NIST), for example, provides structured guidelines in its Cybersecurity Framework (CSF) and AI Risk Management Framework (AI RMF) for managing risks across traditional information technology infrastructure and AI systems, respectively. The CSF focuses on foundational protections like encryption and access controls, while the AI RMF addresses unique AI risks such as data privacy. Together, the two frameworks help ensure comprehensive security through proactive risk management and regulatory alignment.
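
As a concrete illustration of the kind of “foundational protection” the CSF describes, consider encrypting sensitive records at rest. The following is a minimal sketch, not NIST-prescribed code; it assumes the third-party Python `cryptography` package, and the record contents are purely illustrative:

```python
# Minimal sketch of encryption at rest, one of the foundational protections
# the NIST CSF describes. Assumes `pip install cryptography`; the record
# contents below are illustrative, not drawn from any real system.
from cryptography.fernet import Fernet

# In practice the key would come from a managed key store, never from source code.
key = Fernet.generate_key()
cipher = Fernet(key)

record = b"patient_id=12345; visit_notes=..."
token = cipher.encrypt(record)           # ciphertext is safe to store
assert cipher.decrypt(token) == record   # only key holders can recover the data
```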

Free speech

President Trump’s E.O. 14179 emphasizes the need for AI systems to be “free from ideological bias or engineered social agendas.” With this in mind, the AI Action Plan should emphasize First Amendment protections for free expression online and in the development of AI systems. Doing so will encourage innovation and maintain the vitality and free flow of information essential to our democracy.  

The “right to compute” is a legislative concept that protects individuals’ ability to privately own and use computational technologies, such as AI and data centers, as a fundamental exercise of free speech and property rights. Currently, bills have been proposed in states such as Montana and New Hampshire that would protect this right to compute. This principle is inherently pro-free speech because computational tools are essential for modern communication, creativity, and information-sharing. By safeguarding access to these technologies, the right to compute ensures that individuals can fully exercise their First Amendment rights in a digital age. It also intersects with property rights, emphasizing autonomy over privately owned computational resources, which supports innovation and economic growth.

To advance the right to compute, the AI Action Plan should encourage federal protections for computational technologies as essential tools for free expression and innovation. The Action Plan should also encourage streamlined regulatory processes for infrastructure development, ensuring timely construction of critical infrastructure while maintaining necessary safeguards. Integrating digital rights into broader policy discussions will bolster public trust in emerging technologies while safeguarding individual freedoms. By taking these steps, the U.S. can maintain its leadership in AI innovation while protecting constitutional freedoms in an increasingly digital world.

Another recent development at the intersection of AI and First Amendment rights is the creation and sharing of political deepfakes. Deepfakes are AI-generated videos or sounds that convincingly depict real people or events. Using advanced generative AI techniques, they analyze and synthesize vast amounts of visual and audio data to create highly realistic replicas. There is concern that misuse of deepfakes could undermine trust in media, spread misinformation, and influence public opinion, especially during politically charged events like elections. As policymakers grapple with the implications of deepfake technology, particularly in politics, they face a delicate balance between protecting against malicious uses and safeguarding free speech rights. Political deepfakes are, at bottom, a new medium for expressing opinion, parody, or satire. Rather than rushing to implement broad regulatory frameworks that could inadvertently stifle free expression or innovation in AI technology, lawmakers should focus on leveraging existing laws, such as those addressing campaign impersonation, slander, and libel, to address these issues.

Section 230 of the Communications Decency Act has played a crucial role in safeguarding free speech online since the early days of the internet, and it should be left untouched in order to protect online expression as AI continues to develop. Section 230 provides immunity to interactive computer services, such as social media sites, for content posted by their users. This protection allows platforms to host diverse user-generated content without fear of legal repercussions, fostering a vibrant ecosystem of online expression. By shielding platforms from liability for both hosting and moderating third-party content, Section 230 enables the flourishing of social media, discussion forums, and other online spaces where users can freely exchange ideas, criticize policies, and share information. If Section 230 were weakened or repealed, many platforms might severely restrict user content or stop hosting it altogether, and the United States would risk losing one of its most prominent avenues for free expression.

Conclusion

We are only at the beginning of the AI revolution, and the many possibilities it suggests spark excitement and ingenuity, as well as understandable concerns. America’s free markets and free society enable those exciting possibilities to be realized, but many incorrectly assume that these freedoms lead to greater dangers. In reality, these same freedoms protect us against the risks of new technology, allowing us to learn more about those risks and adjust as we learn.

Former President Biden’s E.O. 14110 projected today’s knowledge onto tomorrow’s highly uncertain AI landscape, attempting to reduce or eliminate risks in advance. Had we continued on this path, we would have given up many of AI’s benefits and still been unprepared to address the real risks as they became clear. By shifting the focus from “AI safety” to “AI opportunity,” in the words of Vice President Vance, this administration makes it possible to achieve both goals.

New York’s proposed political deepfake ban suppresses speech and violates the First Amendment
https://reason.org/commentary/new-yorks-proposed-political-deepfake-ban-suppresses-speech-and-violates-the-first-amendment/
Wed, 26 Feb 2025 11:30:00 +0000

Libel and slander laws already exist and can be used by lawmakers worried about how deepfakes could harm their reputations or spread misinformation.

New York is proposing a ban on political deepfakes, joining more than 30 states that have attempted to limit them over the past year. States have been pushing for these bans to combat the spread of misinformation that could harm election integrity. While that effort is well-intentioned, the broad language of New York’s Assembly Bill A235 suppresses First Amendment-protected political speech, including parody and satire. Instead of creating new, overreaching legislation, New York legislators should look to libel and slander laws already in place to address the potential harms associated with this new technology.

Synthetic media, or “deepfakes,” are videos or sounds meant to depict real people or events but are entirely generated by generative artificial intelligence (AI). Unlike other AI systems, generative AI specializes in producing original content, including images, text, and musical compositions. These deepfakes are produced using machine-learning techniques, primarily deep learning, that analyze and synthesize visual and audio data. Deepfake technology has advanced rapidly in recent years, becoming more accessible, realistic, and easier to create. A common worry with deepfakes is that people may struggle to differentiate between real and fake. 

New York’s proposed bill, currently in committee in the state Assembly, strictly targets political deepfakes, specifically depictions of political officials. The state hopes to join the 20 other states that have passed legislation banning pre-election depictions of elected officials and candidates, including the 15 states that did so in 2024. Under Assembly Bill A235, generative AI system owners, licensees, and operators would have to “implement a reasonable method” to prevent users from creating deepfakes of New York public officials or candidates. Once a candidate or public official notifies them that they do not want to be depicted, the owners have 60 days to prohibit depictions of that official. Owners must also provide a notification method that is “easy to access, understand, complete and send and that such method provides clear updates to the sender on the status of their request in a timely manner.”
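
What a “reasonable method” would look like in practice is left to system owners, but it would likely reduce to a name-based filter on user prompts. The sketch below is purely hypothetical; the names, prompts, and matching logic are illustrative assumptions, not anything Assembly Bill A235 specifies:

```python
# Hypothetical sketch of a name-based prompt filter of the kind A235's
# "reasonable method" requirement might produce. All names and prompts
# here are invented for illustration.
BLOCKED_SUBJECTS = {"jane doe", "john roe"}  # officials who opted out

def prompt_allowed(prompt: str) -> bool:
    """Reject any generation request that mentions an opted-out official."""
    text = prompt.lower()
    return not any(name in text for name in BLOCKED_SUBJECTS)

print(prompt_allowed("satirical cartoon of Jane Doe at a podium"))  # False
print(prompt_allowed("generic politician giving a speech"))         # True
```

Notably, a filter of this kind cannot distinguish deceptive impersonation from parody or satire; it blocks every depiction of an opted-out official, which is exactly the breadth that raises First Amendment concerns.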

Most, if not all, public officials and candidates will likely prefer not to be depicted in a deepfake. This potential ban on political deepfakes therefore violates the First Amendment right to political speech, traditionally one of the most constitutionally protected forms of individual expression in the United States. The law would bar citizens from expressing their political opinions through deepfakes whenever the subject they wish to parody or satirize objects to being depicted.

First Amendment concerns were a point of pushback against state political deepfake bans in 2024. In June 2024, Republican Louisiana Gov. Jeff Landry cited the First Amendment when he vetoed House Bill 154, which would have banned political deepfakes of candidates and elected officials. In October 2024, a California judge placed a preliminary injunction on Assembly Bill 2839, which would have prohibited all AI depictions of candidates, including those used for satire. In his ruling, the judge stated California’s bill violates the First Amendment and serves “as a blunt tool that hinders humorous expression and unconstitutionally stifles the free and unfettered exchange of ideas which is so vital to American democratic debate.” 

Beyond the constitutional violations, New York would also have difficulty enforcing these provisions. The internet does not have borders, and information constantly flows around the globe, transcending physical and political boundaries. This makes it effectively impossible for a state government to ban certain media within its domain. People anywhere, including foreign actors, can purchase a virtual private network (VPN), which routes their traffic through another location so that their IP address cannot be traced. With this technology, they can easily continue to make and spread political deepfakes in a state where such deepfakes are outlawed. Before New York passes any political deepfake ban, legislators should consider the logistics of enforcing the law when someone from outside the state disseminates a deepfake to individuals within it.

If New York lawmakers are worried about deepfakes that could harm their reputations or spread misinformation, they could always look to libel or slander laws already on the books. If politicians feel someone has written (libel) or spoken (slander) false or defamatory content about them with the intention of harming their reputation, they can file a lawsuit and seek damages. Someone may well create a political deepfake to make another person look bad, but there is no need to ban the entire technology when libel and slander laws have existed for decades.

Political deepfakes can be used both as a tool for free expression and as a vehicle for misinformation. The New York legislature’s proposed solution would ban all political deepfakes, suppressing First Amendment-protected speech and leaving no room for parody or satire. Instead, New York should counter potential misinformation by being more transparent with voters. Politicians and candidates hold a unique public platform where they can provide information to voters, clarify misconceptions about themselves, and remain easily accessible to the voting population. And if someone has trouble deciphering whether a deepfake message is real, emerging services and products can help users identify “fake” generative AI content.

Prior to the 2024 presidential election, the Federal Bureau of Investigation (FBI) and the Cybersecurity and Infrastructure Security Agency (CISA) released a public statement warning about the threat of political deepfakes. They recommended that voters “critically evaluate the sources of the information they consume and to seek out reliable and verified information from trusted sources, such as state and local election officials.” By providing more direct transparency to voters, New York can combat any potential misinformation while allowing citizens to express themselves using whatever technological means they choose.  

The future of biometric data regulation must balance innovation and privacy
https://reason.org/commentary/the-future-of-biometric-data-regulation-must-balance-innovation-and-privacy/
Wed, 22 Jan 2025 15:00:00 +0000

Biometrics are part of the broader debate over data privacy, but their unique specificity makes them arguably its most important aspect.

Biometric data can verify a person’s identity based on physical or behavioral characteristics, including fingerprints, facial scans, and voice recognition. Innovations in collecting and analyzing biometric data have allowed companies such as social media platforms to offer users new features and better verify people’s identities. Biometrics can also be used in the medical and financial industries to enhance consumer authentication and reduce fraud, making medical records and financial transactions more secure. 

However, concerns about potential misuse by companies and/or the government have led to calls to limit biometric data collection. Companies wishing to collect biometrics should be transparent about their collection and intended use of such data, and governments should set clear regulatory guidelines for companies to follow while still allowing room for innovation.

An individual’s biometric data is unique to them. As such, industries that demand the strictest security measures can use biometric data to ensure that only specific people access sensitive material. Biometrics can serve as a primary authentication method or as one factor in multi-factor authentication alongside passwords. A person’s fingerprints and facial geometry are significantly harder to forge than a password, making them a much safer authentication option than a password alone.
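
A minimal sketch of this two-factor pattern appears below, with a password check as one factor and a biometric match score as the second. The 0.95 similarity threshold and the score-based interface are illustrative assumptions; real systems use specialized matching algorithms:

```python
# Minimal sketch of two-factor authentication combining a password
# ("something you know") with a biometric match score ("something you are").
# The 0.95 threshold and score interface are illustrative assumptions.
import hashlib
import hmac

def password_ok(supplied: str, stored_hash: bytes, salt: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", supplied.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, stored_hash)

def biometric_ok(match_score: float, threshold: float = 0.95) -> bool:
    # match_score would come from a fingerprint or face matcher (0.0 to 1.0).
    return match_score >= threshold

def authenticate(supplied, stored_hash, salt, match_score) -> bool:
    # Both factors must pass before access is granted.
    return password_ok(supplied, stored_hash, salt) and biometric_ok(match_score)

salt = b"demo-salt"
stored = hashlib.pbkdf2_hmac("sha256", b"hunter2", salt, 100_000)
print(authenticate("hunter2", stored, salt, match_score=0.97))  # True
```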

This technology can be used in the medical sector to better secure access to sensitive records. Requiring fingerprint scans before medical providers access records, and facial scans to verify patients’ identities, would give the medical sector more security and less fraud. Nor is this technology confined to a hospital or doctor’s office; in the telehealth sector, patients can use a facial scan to log securely into their online profiles. The same principles apply in the financial sector, where facial scans can confirm identity for in-person and online banking. Biometric authentication can make the industries where people demand the utmost security more secure still.

However, even accepting the usefulness of biometric data collection doesn’t eliminate concerns about how that data is secured, how it is used for commercial purposes, and its potential use for surveillance. Biometric data is unique to the individual, but unlike passwords or Social Security numbers, it cannot be replaced if it falls into the wrong hands. This makes it extremely important that companies intending to collect biometric data have a secure infrastructure to keep it safe. Because of these risks, there have been calls to require companies to obtain users’ consent before using their data, to restrict the sale of biometric data, and to delete biometric data after its intended use has been fulfilled.

Currently, Illinois, Texas, and Washington are the only states that regulate the collection and use of biometric data. All three laws require companies to obtain consent in some form from consumers before collecting their biometrics, prohibit selling their biometric data, and require companies to delete their biometric data shortly after its intended use. These regulations have led to costly legal action against companies collecting biometric data without consent. 

In July 2024, Meta, which owns Facebook, agreed to pay $1.4 billion to Texas over Facebook’s collection of users’ facial data for commercial purposes without consent. The settlement, the largest ever obtained by a state attorney general in a privacy case, follows Meta’s 2020 settlement of a class-action suit in Illinois for $650 million over the same issue. Meta did not admit wrongdoing in either settlement.

From 2011 to 2019, Facebook used a feature called “Tag Suggestions” to scan photos uploaded to its site. It collected millions of users’ facial recognition data and used it to make it easier for users to find new people to add as “friends.” Users were automatically enrolled in this facial recognition and collection feature unless they manually opted out in the settings tab. In 2019, after the Illinois class action was first filed, Facebook switched the feature, renamed Face Recognition, to an opt-in service. In 2021, Facebook shut down the Face Recognition system and subsequently deleted over one billion people’s facial data. In announcing this action, Meta cited the lack of a clear regulatory framework for properly acquiring and using biometric data.

Given the massive financial stakes involved in these cases, Meta was correct in stating that there need to be clear regulatory guidelines for companies to follow regarding the use of biometrics. Only three states have laws governing biometric data right now, but more states will likely adopt their own in the near future.

To ensure that people’s biometric data is safe without risking the development of future innovations in biometric security, regulators should create flexible frameworks that allow for responsible innovation. For example, states can allow users to consent electronically, either with an electronic signature or by clicking an “I Agree” box. In 2024, Illinois amended its biometrics law to allow electronic signatures to count toward obtaining consent, clearly defining how companies can obtain “informed consent” from users in a convenient and timely manner. Texas already allows electronic consent, but Washington still requires documented consent. In addition, regulators can provide explicit guidelines on what information companies must disclose to consumers and how long they can retain biometric data. With such guidelines, companies can have greater certainty that they are complying with state laws, and consumers can feel more confident that their data is safe.
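
To make the idea concrete, the sketch below shows the kind of consent record such a framework might require: who consented, through what method, for what disclosed purpose, and how long the data may be retained. The field names and the three-year retention figure are assumptions for illustration, not requirements of any state law:

```python
# Sketch of a biometric consent record under a flexible state framework.
# Field names and the three-year retention default are illustrative
# assumptions, not requirements drawn from Illinois, Texas, or Washington law.
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

@dataclass
class BiometricConsent:
    user_id: str
    purpose: str      # disclosed use, e.g. "login verification"
    method: str       # "e-signature" or "i-agree checkbox"
    granted_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    retention: timedelta = timedelta(days=3 * 365)

    def expired(self) -> bool:
        """True once the disclosed retention period has elapsed."""
        return datetime.now(timezone.utc) > self.granted_at + self.retention

consent = BiometricConsent("u-1001", "login verification", "i-agree checkbox")
print(consent.expired())  # False until the retention window passes
```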

Regulators should not, however, restrict the use of biometric data too broadly or create specific requirements for how companies must store people’s data. All three states with biometric data privacy laws require that companies store biometric data safely and securely, but none of them mandates a specific way to do so. Other states should adopt similar standards in their own laws.

Disclosing the intended use of biometrics and obtaining consent is already a transparent process that allows companies to innovate and consumers to know what their data is being used for. States should resist overly restricting how companies use biometric data so long as consent has been obtained. By overregulating to keep people’s data safe, states ironically risk stifling the development of biometric security innovations that can make people’s data safer than ever. 

Biometrics are part of the broader debate over data privacy, but their unique specificity makes them arguably its most important aspect. If collected and used properly, biometric data can make the medical and financial sectors more secure. Businesses should take it upon themselves to be transparent about how they intend to use and secure consumers’ biometric data. Governments should provide a clear, consistent framework that businesses can comply with and innovate within.

AI model openness is a question for the market, not regulators
https://reason.org/testimony/ai-model-openness-is-a-question-for-the-market-not-regulators/
Tue, 26 Mar 2024 21:00:00 +0000

The National Telecommunications and Information Administration should advise against premature rules limiting or controlling the openness of generative AI models.

A version of this comment was submitted to the National Telecommunications and Information Administration on March 26, 2024.  

Generative AI models entered widespread public awareness following the release of OpenAI’s first public version of ChatGPT in late 2022. Open-foundation AI models with publicly available weights, a particular category of generative AI model, give others greater or full access to the model’s underlying weights so that they can customize the model or build new applications on top of it.
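
To see what “publicly available weights” means in practice, consider that anyone can download an open model’s parameters and build on them. The sketch below uses the Hugging Face `transformers` library with GPT-2, a long-public open-weights model; it assumes `transformers` and PyTorch are installed:

```python
# Sketch of loading an open-weights model: the parameters themselves are
# downloaded locally, so developers can fine-tune, inspect, or build on them.
# Assumes `pip install transformers torch`.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")  # weights land on disk

inputs = tokenizer("Open models let developers", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=12)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```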

The National Telecommunications and Information Administration (NTIA) is not a policymaking body, but it will issue an advisory report to the president on potential policy in this area. In the limited public debate thus far, some have proposed preemptive measures to limit the openness of AI foundation models or their proliferation.

Our comment discusses several of the questions posed by the NTIA. We argue that allowing model developers to freely choose and innovate along dimensions of openness may be indispensable to realizing many of the technology’s benefits, and that there is no evidence that open models pose specific risks over and above those of more closed approaches.

The NTIA should not recommend a policy to restrict open-foundation AI models with publicly available weights at this time. 

Full Comment to the National Telecommunications and Information Administration
