Federal Trade Commission fails to convince judge that Meta monopolizes social media (Nov. 21, 2025)

In its zeal to punish Big Tech, the Federal Trade Commission stuck to a market definition that became more obsolete with every year.

During her tenure as Federal Trade Commission (FTC) chair, Lina Khan famously said that competition authorities should be less afraid to prosecute cases rather than settle them, even if it meant taking losses. Presumably, such strong signals from the FTC and Department of Justice (DOJ) would allow the agencies to play a long game, chipping away at court precedents and demonstrating that a new, more aggressive antitrust regime was here to stay. 

But the FTC’s loss this week in its monopolization case against Meta is an unequivocal defeat for Khan and the Neo-Brandeisian antitrust movement she championed, signaling instead that antitrust authorities have little basis for finding illegal monopolies among Big Tech’s digital platforms.

In the case, filed on Khan’s watch under President Joe Biden in 2021 and brought to trial this year under President Donald Trump and current FTC Chairman Andrew Ferguson, the FTC alleged that Meta monopolized the market for “personalized social networking” (PSN) services, most notably through its acquisitions of Instagram and WhatsApp. In a Nov. 18 decision, Judge James Boasberg of the U.S. District Court for the District of Columbia ruled that the relevant market in the case is not PSN services but a wider social media market that, at the very least, includes two other major players: YouTube and TikTok. Meta never held a monopoly in that market, Boasberg determined, because Facebook and Instagram together fall well below any threshold that courts would consider a monopoly.

An irrelevant market

According to the FTC, Facebook viewed Instagram as a competitive threat to its dominant share of the PSN market, and instead of competing, chose to buy its competitor. WhatsApp, while not an existing competitor in the PSN services market, was, according to the FTC, well-poised to enter it. Once again, Meta bought a potential competitor.

The market that the FTC chose to define as relevant for the case proved to be its undoing. The PSN services market, according to the FTC, includes Facebook, Instagram, Snapchat, and a smaller platform called MeWe. Even when the case was filed almost five years ago, the boundaries of this market were fragile. Meta argued that the FTC’s case could not survive a wider market definition that also included YouTube and TikTok. In this larger six-firm market, Meta’s acquisition of Instagram (or potential competitor WhatsApp) would simply not involve enough market share for the monopolization allegations to hold water.

Staking an entire antitrust case on the assertion that YouTube and TikTok are not competitors of Facebook and Instagram was always, at best, a highly risky move by the FTC. In 2021, when the court decided to allow the case to proceed, it warned, “[T]he agency may well face a tall task down the road in proving its allegations.” But if the FTC’s vision of a distinct PSN services market was dubious in 2021, by 2025 it was dead on arrival.

Facebook and Instagram provide users with two broad types of content: “Connected content” refers to personal postings or media shared directly by friends within the app, while “unconnected content” refers to videos that are recommended to users by artificial intelligence (AI). As Boasberg makes clear in his ruling, Facebook and Instagram shifted over the course of a decade from providing almost exclusively connected content to providing mostly unconnected content, with connected content forming an important secondary source of value. According to evidence presented at trial, in January 2025, Facebook users spent only 17 percent of their time on the app viewing connected content, while the figure for Instagram was a mere 7 percent.

Inconveniently for the FTC, TikTok and YouTube are even more focused on the unconnected content model that now also tops the list for Facebook and Instagram. Boasberg writes that, “Facebook, Instagram, TikTok, and YouTube have thus evolved to have nearly identical main features. On all four, users spend most of their time watching videos. All four use algorithms to recommend those videos to users. And if someone finds content that she likes, all four apps let her tap a button to send it to friends—whether via a direct message on Facebook, Instagram, or TikTok, or using a text message.”

Importantly, the court held throughout the case that the FTC must show that Meta is violating the law now, and that the PSN market is properly defined as of 2025. Ten years ago, the case for Meta’s platforms existing in a distinct personalized social networking services market, excluding YouTube and TikTok, might have been plausible. Now, it causes the case to fall apart. The mere fact that users on all four platforms spend the majority of their time doing the same thing goes most of the way to placing them as competitors in a relevant market. Unfortunately for the FTC, its idea of a distinct PSN market fares just as badly by virtually all other standards accepted by courts.

Some of the most compelling evidence for vigorous competition among the four platforms comes from the “natural and field experiments” presented at trial and summarized in the judge’s decision. Unexpected outages of YouTube and Meta in 2018 and 2021, respectively, provide windows into short-term substitutions consumers made with their time, while TikTok bans in the United States and India provide an opportunity to track similar longer-term behavior. These are the types of studies economists outside of court would look to when considering the competitive landscape, and they confirm the existence of vigorous competition among the four platforms. Boasberg writes, “[W]hen consumers cannot use Facebook and Instagram, they turn first to TikTok and YouTube. When they cannot use TikTok or YouTube, they turn to Facebook and Instagram. That evidence leaves the Court with no doubt that TikTok and YouTube compete with Meta’s apps.”

Had the FTC been able to rescue the idea of a distinct social networking market, it would have faced several other obstacles, most notably whether consumers were actually harmed by Meta’s acquisitions of Instagram and WhatsApp. But in the wider and more accurately defined social media market, Meta’s combined share (including Facebook and Instagram) doesn’t come close to any threshold considered monopoly power in previous court cases.

Expect the unexpected

In the wake of its defeat, the FTC should carefully consider the story of how Facebook and Instagram came to be competitors with YouTube and TikTok. At the heart of that story are two sets of disruptive innovations that expanded the frontier of what social media apps were able to provide consumers. The first big change was smartphones. In 2011, according to trial evidence, just over one-third of American consumers had adopted smartphones. The majority of the time consumers spent on apps like Facebook was in front of a desktop or laptop screen. As the quality of cellular data networks increased, consumers switched to using Facebook and Instagram as smartphone apps, and it became clear that streaming videos were among the most popular uses.

At the time, the best way to recommend new content to consumers remained their network of friends. Then, social media companies discovered AI. The same technology that enables generative AI chatbots, drawing inferences from billions of points of data, proved extraordinarily successful at recommending video content to consumers.

Despite these disruptive events, the FTC clung to a rigid and out-of-date market definition that drew a hard line between social networking and video content. By 2021, when it filed suit, it should have already been clear that these markets were being remade. Much of the switch to smartphones, along with improvements in video streaming, had already taken place. In the five years since, the AI revolution dealt a final blow to the idea of a distinct social networking market. Meta, along with the parent companies of YouTube and TikTok, quickly learned that the computing power unlocked by AI could recommend content more successfully than any other method, including a user’s own friends. Sharing content with friends ultimately became a complementary feature to viewing content recommended by AI.

FTC v. Meta is the second antitrust case against a major digital platform to conclude this year in which AI technology radically altered the competitive landscape while the trial was underway. In DOJ v. Google, Judge Amit Mehta found in 2024 that Google had unlawfully monopolized the market for online search. But within a year, generative AI had emerged as a force altering how people access online information in ways that even insiders at Google and its competitors had not foreseen. Judge Mehta noted the need for “a healthy dose of humility” under such uncertainty, which contributed to the final remedies against Google being lighter than many had expected.

Reflecting on the rapid pace of change in FTC v. Meta, Judge Boasberg opened his decision with wisdom from the ancients:

“Believing that the only constant in the world was change, the Greek philosopher Heraclitus posited that no man can ever step into the same river twice. In the online world of social media, the current runs fast, too. The landscape that existed only five years ago when the Federal Trade Commission brought this antitrust suit has changed markedly. While it once might have made sense to partition apps into separate markets of social networking and social media, that wall has since broken down.”

Before plotting future antitrust action against firms in the digital age, the FTC and its counterparts at the DOJ would be wise to ponder the meaning of the word “constant.” Nobody could have predicted exactly how social media would change during the last decade. But the fact that unforeseen change was significant enough to remake a digital market in a few short years should surprise nobody. In its zeal to punish Big Tech and infuse antitrust with populism and activism, the FTC rigidly stuck to a market definition that became more obsolete with every year.

Why is Texas investigating Meta’s AI Studio for offering unlicensed therapy? (Nov. 13, 2025)

Texas Attorney General Ken Paxton launched an investigation into Meta’s Artificial Intelligence Studio to determine whether its chatbot platform misleads children.

Texas Attorney General Ken Paxton has opened an investigation into Meta’s Artificial Intelligence (AI) Studio to determine whether its chatbot platform misleads children by allowing role-playing bots to pose as actual therapists. Meta has responded that the probe misrepresents its product: It provides disclaimers that its bots are not licensed professionals, but cannot ultimately control whether a user decides to use its tool to break the law. The flexibility of AI applications highlights the need for clear regulatory frameworks that distinguish between platforms providing foundational tools and those providing services on top of general-use technologies.

Meta’s AI Studio, launched in 2024, was designed as an entertainment and productivity tool for users to generate lighthearted, fictional characters and to experiment with chatbot technology without needing computer science skills. The platform lets users design a bot’s name, personality, tone, and avatar. As Meta’s marketing highlights, “anyone can create their own AI designed to make you laugh, generate memes, give travel advice and so much more.” Creators can even build an AI as “an extension of themselves to answer common DM [direct message] questions and story replies, helping them reach more people.” In other words, it is designed and marketed as an interactive search tool, not as a therapy product.

However, Paxton asserts that Meta’s platform could mislead users and offer services similar to therapy, but without a license. In the press release, Paxton’s office states the investigation will “determine if they have violated Texas consumer protection laws, including those prohibiting fraudulent claims, privacy misrepresentations, and the concealment of material data usage.”

Practicing therapy without a license is a violation of state law; even those offering very similar treatment modalities, such as “stress reduction,” must be careful not to advertise as providing therapy, counseling, or any services that could be construed as treatment of a mental illness from a licensed provider. Courts have discretion to determine if the language of a service provider is substantially similar to that of a licensed mental health practitioner.

Indeed, some bots pose as therapists or engage in conversations that are substantially similar to therapy. Meta, in its defense, attempts to warn users that bots are products of creators. The Times found screenshots of a chatbot labeled as a “psychologist” that warns users the bot “is not a real person or licensed professional.”

Screenshot originally appeared in The Times.

Regardless of the warning, applying typical legal standards to service providers in relation to chatbots becomes trickier, both because chatbots can veer off into conversational topics for which they were not originally designed and because individual developers can use generic AI technology in ways that violate the law. Nevertheless, Paxton’s investigation targets not these individual developers, but Meta.

Many platforms that allow user-generated content see users push boundaries in ways platforms cannot always anticipate, and Meta’s AI Studio is no exception. This does not present a problem for most users, but a small percentage of users take things in a direction that might be questionable or outright harmful. Though designed as a creative playground, some users turn these chatbots into emotional companions because they are available around the clock and cost far less than professional therapy. Mental health professionals warn about a new phenomenon called “AI psychosis,” where people under distress form delusional beliefs about chatbot sentience or receive responses that reinforce unhealthy thoughts. These cases demonstrate that even without explicit design intent, generative chatbots can assume emotional roles they were never intended for, sometimes with tragic consequences. OpenAI, the company that created ChatGPT, has acknowledged that guardrails around AI “break down” in very long conversations. The technology was not designed to engage mentally distressed users.

Meta’s AI Studio is not exempt from these issues. A search for “therapist” within the tool yields a range of characters, some of which have thousands of users. These bots were not created by Meta but by individual users, and they tend to mimic the familiar patterns of a therapist: listening, reflecting back, and asking open-ended questions. In some cases, creators add avatars or images styled to look like therapists and script responses in the same voice, even if the bot never explicitly claims to be a licensed professional. This makes the case against Meta more challenging because it is difficult to broadly police “therapeutic” talk. It’s unclear how Meta could crack down on illicit therapy chatbots.

“We’d first have to be able to define therapy in a way that isn’t so overbroad that it also encompasses discussions with your priest, bartender or best friend—which is to say effectively impossible—or would at least make the chatbot useless,” Andrew Mayne, a prompt engineer and science communicator who consulted on OpenAI’s GPT-4 model, writes to Reason Foundation in an email. “You could have the LLM [large language model] remind you that it’s not a therapist in certain discussions—but even then there would be debate on what that line is. It would also be annoying and redundant.”
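As a purely illustrative sketch of why that line is so hard to draw, the snippet below (hypothetical; not how Meta, OpenAI, or any real platform works) keys a disclaimer off a short list of therapy-adjacent phrases. The same cues fire on venting to a friend, discussing a news story, or writing fiction, which is exactly the overbreadth problem Mayne describes.

```python
# Hypothetical illustration only: a naive "therapy talk" trigger for a chatbot.
# Any keyword- or topic-based rule like this is both over- and under-inclusive.

THERAPY_CUES = {"anxiety", "depressed", "trauma", "panic attack", "therapy"}

def needs_disclaimer(message: str) -> bool:
    """Return True if the user's message superficially looks like a request for therapy."""
    text = message.lower()
    return any(cue in text for cue in THERAPY_CUES)

def reply(user_message: str, model_response: str) -> str:
    # Prepend a reminder when the message trips the cue list. Because the cues
    # also match ordinary conversation, the reminder quickly becomes, in
    # Mayne's words, "annoying and redundant."
    if needs_disclaimer(user_message):
        return "Reminder: I am not a licensed therapist.\n\n" + model_response
    return model_response

print(reply("I've been feeling really depressed lately", "I'm sorry to hear that..."))
```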

It may be easier for a court to determine when an unlicensed provider is advertising services that are similar to those of a therapist. However, there can be thousands of chatbots engaging in thousands of conversations at once, and it is not technologically possible for Meta to clearly label when these bots or conversations violate the law.

Some violations might be easier to spot if Meta manually investigated each conversation and chatbot. However, even if Texas attempted to force Meta to do so, Section 230 of the Communications Decency Act provides that “no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” This federal law is foundational to modern platforms because it grants them immunity from lawsuits arising out of user-generated content.

In this case, Meta did not create the “therapist” bots, nor did it market AI Studio as a mental health service. It merely provided a creative tool. Holding it liable for user misuse might conflict with Section 230’s rule that platforms are not considered publishers of user-generated content.

This is not to dismiss the problem. More incidents are emerging of people, especially individuals with mental health issues, being deceived by chatbots, and this is an unexpected challenge created by artificial intelligence. States could collaborate with developers, who have no vested interest in the harmful uses of their products, to develop more effective safety standards and guidelines. The scope of the issue is still unclear. It needs to be studied, and both governments and companies share a strong interest in keeping users safe.

In addition, it seems likely that people susceptible to using chatbots in harmful ways are also prone to being deceived by individuals in online chat groups, by online advertising and scams, and by confusing parody with reality. Our policy goal should be to find ways to support individuals struggling with mental health issues or digital literacy in an increasingly digital landscape. Cooperative efforts to test solutions and adopt safeguards make sense for Texas agencies. It does not, however, make sense for the attorney general to claim that Meta violated some kind of obvious law and should be punished when no clear legal guidelines exist for an emerging problem of this kind.

Michigan House Bill 4388 would regulate social media use by minors (Nov. 13, 2025)

The bill suffers from constitutional concerns and privacy risks that must be addressed before it becomes law.

A version of the following written comment was submitted to the Michigan House Committee on Regulatory Reform on November 13, 2025.

While House Bill 4388 is a worthy attempt to reinforce a parent’s role in keeping kids safe online, the bill suffers from constitutional concerns and privacy risks that must be addressed before it becomes law. As other states that passed nearly identical legislation have learned, leaving these constitutional concerns and privacy risks unaddressed wastes taxpayer dollars on attorneys’ fees without forward progress.

The fact of the matter is that laws mandating age verification for social media platforms strike at the core of the First Amendment right to access speech and speak anonymously. Age verification laws also create unnecessary privacy risks by requiring online account holders and users to disclose personal information before accessing social media.

All of this has held true across the states that passed bills that were nearly identical to HB 4388, and for the reasons outlined below, we urge this legislature to oppose this bill.  

HB 4388 is fraught with constitutional concerns 

The exact methods a social media company must use to comply with HB 4388’s age verification mandate are a mystery — the only hint provided by the bill as to the procedures and mechanisms for verifying age is that the attorney general (AG) must recommend more than just the use of a valid government-issued ID. This does not mean a government-issued ID is off the table; it just means it cannot be the only recommendation. While one should never speculate as to the recommendations that could be offered by the Michigan AG, when a nearly identical bill was passed in Utah, the proposed alternative methods of privacy-invasive age verification included biometric facial scans, bank information requests, social security numbers, and more.

The same ambiguous delegation of authority is relied on for setting rules for confirming that a parent is, indeed, the parent of a minor account user, and for “retaining, protecting, and securely disposing” of this information. As the Supreme Court has made clear, it is rare for such a burden on the First Amendment to survive legal scrutiny.

Other states that passed bills like HB 4388 have not been successful in court

Not even Utah follows this approach, despite being the first state in the country to pass a near-identical bill in 2023. In fact, exactly one month after the first complaint was filed in a lawsuit over Senate Bill 152, alleging that the bill violated the First Amendment, the AG requested that the court reschedule hearings because the legislature had completely rewritten the law and pushed back the effective date. The new law that followed, Utah Senate Bill 194, was enjoined for violating the First Amendment.

Other states that have passed nearly identical laws as HB 4388 have either lost in court, been forced to delay effective dates, or are now awaiting hearings. This includes Arkansas (permanent injunction), Georgia (preliminarily enjoined), Louisiana (pending judgment, effective date delayed), and Tennessee (pending judgment), with Nebraska likely to be added to that list within the year.

Similar, though not identical, bills have met the same fate. This includes California’s Senate Bill 976, which was blocked by the district court and the Ninth Circuit on appeal. Another similar bill, Mississippi’s House Bill 1126, was also struck down by the courts. The list goes on, including Texas, Ohio, and Maryland.

HB 4388 ignores clear privacy risks inherent in age verification 

As has been fleshed out over time, “commercially available methods” involve handing over sensitive information like a government ID, biometric facial scan data, social security numbers, banking information, and more. This information, and the process used to gather and collect it, has not only led to privacy risks but also painted a target on the backs of companies collecting it, resulting in significant data breaches that could have been prevented had these laws not been in place.

Thank you for the opportunity to submit this written testimony, and we welcome the opportunity to advise the legislature on this subject in the future.  

Nevada’s ban on AI therapists highlights regulation based on fear rather than analysis (Nov. 12, 2025)

This legislative approach could stifle innovation, prevent change and improvement in products and services, and harm the residents of Nevada.

Nevada’s Assembly Bill 406 demonstrates why state-based artificial intelligence (AI) regulations often restrict new AI applications without considering all the consequences. The bill, signed into law by Gov. Joe Lombardo last June, restricts the use of AI for mental healthcare, which could prematurely deny residents access to a new form of safe and effective treatment.

Although Nevada’s law was initially framed as a narrow ban on AI counseling in public schools, AB 406 actually contained sweeping restrictions on AI behavioral health technologies. The law amended three chapters of the Nevada Revised Statutes—NRS 391, NRS 433, and NRS 629—to prohibit AI from performing any behavioral health functions reserved for licensed professionals, such as diagnosing patients or performing therapy. Violations can trigger civil penalties of up to $15,000 or professional discipline.

There is debate among researchers and mental health experts about the value of AI therapy. AI-driven mental health tools are advancing rapidly. Scientific journals and political offices are exploring how AI can be leveraged to expand access to treatment.

For example, a recent randomized clinical trial of an AI chatbot by Dartmouth researchers found participants reporting significant symptom reductions and relational closeness comparable to human therapists. The study was published in the New England Journal of Medicine (NEJM) AI.

In April, the State University of New York’s Downstate Health Sciences University announced plans to use a taxpayer-funded grant to explore the use of AI to prevent and diagnose mental health issues.

However, the public testimony in the lead-up to Nevada Assembly Bill 406’s passage and signing didn’t reflect this diverse debate. Instead, public hearings leading up to the passage of the bill were relatively one-sided. For example, at one of four public hearings, on May 7, the Association of Social Workers had multiple representatives testify about the importance of licensed mental health professionals. Representatives worried about AI apps making unfounded claims about their capabilities to treat mental health disorders, but notable technology trade associations were absent. There was also no call-in (remote) opposition to the bill.

My analysis of public hearings shows that the bill passed without participation or discussion coming from innovators or scientists working on novel forms of automated mental healthcare.

The most generous reading of this bill’s process may be that, although many of these companies and organizations have big budgets and plenty of lobbyists and experts, AI researchers and companies simply failed to offer a counterargument because it is so challenging to track and engage with all the AI-related legislation across the country. The public’s skyrocketing AI use has meant a dramatic increase in AI-related legislation. Hundreds of AI-focused bills, maybe more, were introduced in 2025 alone.

As similar laws are introduced in other states, researchers and other groups will need to do what they didn’t in Nevada: show lawmakers how these AI services can improve individual and public health and ways lawmakers can implement guardrails without completely stifling research and innovation. 

One of the few voices of skepticism on the Nevada bill before it passed was State Sen. Angela D. Taylor (D-15), chairwoman of the Senate Committee on Education, who noted that AI is advancing quickly and could offer valuable mental health capabilities earlier than the two years it will take before legislators might take up the issues again. In front of the Association of Social Workers representatives, she observed that there could be advancements in six months, but the committee might only revisit the issue in two years (timestamp around 1:59:52 pm).

During the same hearing, Tom Clark, representing the Nevada Association of School Boards, noted that a federal regulator could certify that an AI therapist was safe. He told the committee that he could talk to the bill sponsor about creating an amendment that would allow Nevada residents to use federally recognized behavioral health technology. Indeed, the Dartmouth research mentioned above is currently undergoing clinical trials for an AI chatbot, and the preliminary results are positive, which could one day lead to FDA-approved therapy.

In response to Clark’s suggestions, the committee said that the bill’s sponsor considered them “not friendly.” Without much discussion or explanation, the committee deferred to the sponsor and declined to consider whether something like an FDA-approved therapy bot should be allowed in Nevada. As a result, the suggested exception for FDA-approved products was not adopted at the final public hearing.

The AI law means Nevada has banned almost all uses of an innovative approach to behavioral health that could soon greatly increase access to mental health services for those who need them. Lawmakers concerned about possible harms, which might be solved by improving AI systems, are precluding all potential benefits for Nevadans as well. That is a legislative approach that stifles innovation, prevents change and improvement in products and services, and ultimately harms the residents of Nevada.

California’s AI law works by staying narrow (Nov. 3, 2025)

The law takes a narrow, transparency-first approach to regulating advanced “frontier” AI models, creating room for experimentation, while requiring timely disclosures that give the state the data it needs to address risks as they emerge.

California Gov. Gavin Newsom signed the Transparency in Frontier Artificial Intelligence Act into law in late September. The law takes a narrow, transparency-first approach to regulating advanced “frontier” artificial intelligence (AI) models, creating room for experimentation and innovation, while requiring timely disclosures that give the state the data it needs to address risks as they emerge. 

This new law is already a better first step than last year’s heavy-handed—and ultimately vetoed—proposal, Senate Bill 1047. The value of the new law, Senate Bill 53, however, will depend on its execution and whether California continues to update its definition of “frontier” to reflect the growing capabilities of firms entering the market. 

Senate Bill 53 defines “frontier foundation models” as models trained with more than 10^26, or 10 to the power of 26, floating-point operations (FLOPs)—a.k.a. massive computing power—and imposes heavier obligations on larger firms with more than $500 million in annual revenue. 
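For a rough sense of what that threshold means, the sketch below uses the common back-of-envelope approximation that training a dense transformer takes roughly 6 × parameters × training tokens floating-point operations. The model sizes and token counts are hypothetical illustrations, not figures drawn from the law or from any particular developer.

```python
# Back-of-envelope check against SB 53's 10^26 FLOP cutoff, using the common
# approximation: training FLOPs ~= 6 * parameters * training tokens.
# The example models below are hypothetical.

THRESHOLD_FLOPS = 1e26  # SB 53's "frontier foundation model" compute threshold

def training_flops(parameters: float, tokens: float) -> float:
    return 6.0 * parameters * tokens

examples = {
    "hypothetical 70B-parameter model, 15T tokens": training_flops(70e9, 15e12),
    "hypothetical 1.8T-parameter model, 13T tokens": training_flops(1.8e12, 13e12),
}

for name, flops in examples.items():
    status = "covered" if flops > THRESHOLD_FLOPS else "below threshold"
    print(f"{name}: {flops:.1e} FLOPs -> {status}")
```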

Among the major provisions of the law, it requires large AI developers to publish a framework explaining their safety standards and risk assessment procedures. Before deployment, a developer must also post a public transparency report, including an additional requirement for large developers to disclose risk assessments and the extent to which third-party evaluators were involved in assessing those risks. Developers are required to report critical safety incidents to the state’s Office of Emergency Services (OES), and starting from 2027, the OES will release anonymized summaries of those reports. 

By choosing disclosure and incident reporting rather than rigid technical requirements or pre-deployment approvals, SB 53 leaves space for experimentation—building rules around demonstrated risks instead of hypothetical harms. California’s law also aligns with existing national and international safety standards, rather than creating its own arbitrary standards, which helps maintain consistency across jurisdictions. Because the AI field still lacks agreed-upon standards on dangerous behavior, the law’s framework and reporting provisions are intended to produce the information policymakers need to refine their laws and craft more responsive regulations in the future. 

Concerns with SB 53

Despite the law’s strengths, the definition of a “frontier” model still leaves room for improvement. For now, the threshold of 10^26 FLOPs and the $500 million revenue threshold for large developers create a clear and narrow scope. Former Google CEO Eric Schmidt is among those who recommended the 10^26 FLOPs threshold. But, in the future, this static threshold can drift away from the capability it was meant to capture.

History has shown that algorithmic efficiency often doubles every 16 months, meaning a new update to the law will be required time and time again. If the threshold stays the same, it will miss new models that are just as powerful but trained with less compute, while still flagging older, inefficient ones. Whether the California Department of Technology (CDT), which the law charges with recommending changes to that threshold annually, can successfully convince the legislature remains to be seen.
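A simple calculation shows how quickly a fixed compute threshold can drift, assuming the 16-month doubling rate cited above holds (the time horizons below are illustrative):

```python
# If algorithmic efficiency doubles every 16 months, a model trained with far
# fewer than 10^26 FLOPs could eventually match the capability of today's
# threshold-level models. Illustrative arithmetic only.

DOUBLING_MONTHS = 16
THRESHOLD_FLOPS = 1e26

for years in (2, 4, 6):
    months = years * 12
    efficiency_gain = 2 ** (months / DOUBLING_MONTHS)
    equivalent_compute = THRESHOLD_FLOPS / efficiency_gain
    print(f"after {years} years: ~{efficiency_gain:.1f}x efficiency gain; "
          f"a ~{equivalent_compute:.1e}-FLOP model matches today's 1e26-FLOP capability")
```

On those assumptions, a model trained roughly four years from now with about one-eighth the compute could fall outside the statute while matching the capability the threshold was meant to capture.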

Another concern with SB 53 is that the reporting obligations, though well-intentioned, may become a mere administrative formality, with companies producing data that checks the box without improving understanding of real issues. The law requires large developers to file quarterly summaries of their internal catastrophic-risk assessments, even when nothing has changed. Unless the information collected is analyzed and shared by the OES in ways that genuinely improve a regulator’s understanding of risk, this could just turn into a bureaucratic sludge that buries insights into true risks.

Looking beyond California: State-based AI best practices in lieu of a federal standard

A flexible scope would also help keep state rules consistent until there is a federal law. Right now, however, the states point in different directions: New York’s Responsible AI Safety and Education (RAISE) Act (A 6953), for example, also covers models trained with 10^26 FLOPs, but goes further to include models with very high training costs (about $100 million) and even covers smaller models if building them costs at least $5 million. Michigan’s House Bill 4668 skips the compute threshold altogether and simply covers any entity that spent at least $100 million in the past year and $5 million on any single model.

Looking ahead, if five or 10 more states adopt their own definitions, this emerging state patchwork will only grow more complicated and difficult to comply with. The practical solution could be keeping the definition of the “frontier” aligned by following the same national and international standards. This would avoid putting developers through a dozen different playbooks.

California Senate Bill 53, even with all its flaws, may serve as that model. But the real test of SB 53 will be the value of the information it produces from transparency reports and assessments. If those reports reveal meaningful patterns in model behavior and help the state more effectively respond to risks, California could set an example for others to follow. But if those reporting requirements turn into routine filings and formal checklists, the California experiment could show the limits of transparency laws, potentially pushing legislators toward heavier tools.

DOJ v. Visa could prove an important battleground for tech antitrust (Oct. 31, 2025)

In its lawsuit, the Department of Justice alleges that Visa has monopolized the market for debit payment.

While the aggressive antitrust agenda pursued against “Big Tech” by the Biden and Trump administrations has received much attention and debate, one major case is often overlooked. The Department of Justice (DOJ) took Visa to court in September 2024, alleging that the company had monopolized the market for debit card payments. While less of an ideological lightning rod than cases brought against Google, Meta, Amazon, and Apple, the case is likely to face many of the same questions arising from antitrust actions in complex multi-sided platform markets displaying rapid innovation. 

In its lawsuit, the DOJ alleges that Visa has monopolized the market for debit payment, violating Sections 1 and 2 of the Sherman Act. Visa earns about $7 billion in revenue from fees it charges to consumers, banks, and merchants on its network, which the complaint claims constitute monopoly profits. The DOJ alleges that Visa has maintained its dominant position over the past three decades with various exclusionary contracts, along with moves to partner with firms like Apple, whose own innovations in payment products might otherwise have eroded Visa’s position. The DOJ stops short of asking the court for breakups but seeks to enjoin and ultimately ban the firm from the many contracting practices it alleges to be illegal. 

Consistent with the bipartisan nature of the uptick in anti-tech sentiment and aggressive antitrust enforcement, the Trump administration has continued where Biden’s DOJ left off. The court rejected Visa’s motion to dismiss the case in June. Fact discovery in the case is not yet underway, and no trial date has been announced. We will learn much from both fact discovery and expert analysis, so it is too soon to predict the trial’s outcome.

Visa’s debit network has important traits similar to the antitrust defendants commonly called “Big Tech.” These common features arise from the structure of these businesses, which use information and networking technology to connect different groups of consumers. This type of market complicates many of the core components of more traditional antitrust cases. For example, the rapid pace of innovation means new competition is likely to come from unpredictable sources rather than from entry into Visa’s market as defined in advance. Additionally, consumers derive significant benefit from the structure of the debit payment market as it exists. Along with antitrust litigation against the big tech firms, the DOJ’s case against Visa will likely help decide these questions. 

Two-sided platforms 

Visa’s debit network connects two groups of customers, debit cardholders and merchants, allowing the former to make purchases with the latter using a debit card. This business model, known as a two-sided platform, has become an increasingly important part of many industries and markets with the rise of information and networking technology.  

The modern debit payment market emerged from the combination of two other popular products: automated teller machine (ATM) cards and credit cards. ATM networks, which began in the 1960s and became commonplace by the 1980s, allowed consumers to withdraw cash from machines rather than visiting a bank. Visa and Mastercard first built payment platforms that were national in scope through credit cards, ultimately bringing similar reach to debit payment by partnering with banks to issue branded ATM cards also accepted by merchants.  

In order to connect cardholders and merchants, Visa’s and Mastercard’s debit platforms must also connect and transact with both the cardholder’s and merchant’s banks. This adds an additional layer of technical complexity to the market, along with more distinct groups of consumers. In the “general-purpose debit market” (which the complaint defines as the primary market in the case), Visa currently processes about 60 percent of transactions, with Mastercard in second place at about 25 percent. Visa charges a set of “interchange fees” to merchants and banks and “network fees” to cardholders’ banks. On a purchase of $60, issuer fees and network fees average out to 24 cents and 14 cents, respectively. Visa does not charge debit cardholders themselves any direct fees, though some portion of the network fees charged to their banks are likely passed along indirectly. 
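As a quick back-of-envelope using the $60 example above, those per-transaction fees work out to fractions of a percent of the purchase amount (the arithmetic below simply restates the figures already cited):

```python
# Effective fee rates on the $60 example transaction cited above.
purchase = 60.00
issuer_fee = 0.24    # average issuer-side fee on the example purchase
network_fee = 0.14   # average network fee on the example purchase

for label, fee in [("issuer fee", issuer_fee),
                   ("network fee", network_fee),
                   ("combined", issuer_fee + network_fee)]:
    print(f"{label}: ${fee:.2f} = {fee / purchase:.2%} of the purchase")
```

Combined, the two fees come to roughly 0.6 percent of the transaction value.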

Although the debit payment market evolved separately from the boom in internet technology that gave rise to Google, Facebook, and Amazon, these Big Tech firms are all two-sided platforms. Google and Facebook connect search and social media users, respectively, with advertisers, while Amazon Marketplace performs this function for online retailers and retail customers. The DOJ’s case against Visa is therefore a potentially important battleground, along with the major big tech cases, in deciding major open questions at the current frontier of antitrust. 

Antitrust cases of an earlier era usually involved firms charging one price to a single group of consumers in a relatively well-defined market. Economists in these cases were often able to apply standard statistical techniques to estimate firms’ market power. In two-sided platform markets where pricing is more complex and markets are harder to define, these standard techniques are less informative. This is one reason why economic analyses in the big tech cases have focused instead on alleged exclusionary conduct by defendants in which power in one market is leveraged to obtain a result in another connected by the platform. The allegations in the DOJ’s complaint against Visa are consistent with such a focus. 

Innovation and entry  

Debit payment systems have significant network effects, meaning the value of joining the network grows as more consumers and merchants join. They also exhibit economies of scale, where large firms such as Visa and Mastercard can take advantage of efficiencies to lower costs. These are classic barriers to entry that typically increase the monopoly power of large incumbent firms. But like other digital platform markets under antitrust scrutiny, meaningful entry and competition in the debit payment market is likely to come from different and less predictable sources.  

The IT and internet revolution of the late 1990s and early 2000s created numerous opportunities for innovative firms to disrupt the debit payment market with competing models. Payment apps and cryptocurrency are more recent examples of once unforeseen innovations that have partially, though never fully, disrupted the debit payment market. The DOJ’s complaint cites internal Visa documents that called Apple Pay an “existential threat” when first launched. Visa ultimately partnered with Apple, enabling the app to use its debit network and maintaining its market share. The DOJ includes this and similar contracts with firms like PayPal in its allegations of anticompetitive conduct, speculating that these partners would otherwise have been direct competitors to Visa. However, these partnerships undoubtedly delivered a more innovative and widely used service to consumers, complicating efforts to paint them as anticompetitive. 

While Visa’s debit payment model has proven robust to innovations in smartphone apps and cryptocurrencies, more substantial disruption could emerge unexpectedly, a possibility dramatized by the recent remedies phase in the Google search antitrust trial. In that trial, generative artificial intelligence (AI) dramatically altered the market for internet search with a speed and magnitude few had foreseen only a year earlier, when the first phase of the trial ended. As a result, U.S. District Judge Amit Mehta emphasized the need for “humility” when considering court-ordered interventions in such an uncertain climate.  

Consumer welfare 

In order to obtain a judgment against Visa, the DOJ must convince the court that the current debit card market structure or the alleged anticompetitive conduct has been harmful to consumers. Visa’s consumers for the purposes of such an analysis are those found on both sides of its payment platform: merchants and their banks on one side, and debit card users and issuer banks on the other. The current market structure is almost certainly to the benefit of debit card users. They do not directly pay the interchange fees at issue in this case, and benefit from the network effects and economies of scale of Visa’s debit platform. 

A future trial will likely focus on the merchant side of the market, and the potential benefits and harms associated with the many types of contracting discussed in the complaint. The two-sided structure of this market opens the door to many types of contracting that link firms across multiple markets, with parallels to both the Google search and Google ad tech antitrust cases. As the verdicts in both these cases demonstrate, contracting of the type alleged against Visa can raise antitrust concerns. However, there is also reason to believe many merchants have benefited from the discounting and partnership agreements alleged to be anticompetitive in the case.  

Should the case ever proceed to a remedies phase, the benefits to consumers on the user side of the platform may take on added significance. The DOJ is asking the court to ban many of the contracting practices Visa commonly employs on the merchant side of the platform. Such interventions could interrupt the smooth functioning of debit payment systems or potentially change the structure of the market in a way that harms debit card users. An analogous situation in the Google search antitrust case prompted Mehta to scale back remedies proposed by the DOJ. 

Would-be reformers such as Biden-era Federal Trade Commission (FTC) Chair Lina Khan have argued that the two-sided platform structure itself renders the consumer welfare standard, long the dominant paradigm in antitrust, obsolete. While many, if not most, economists and antitrust experts strongly disagree, this position has already influenced the behavior of the FTC and DOJ (primarily through merger guidelines updated in 2023), along with the political debate. However, the consumer welfare standard has mostly remained in favor with courts, with the complaints in even the big tech cases brought by Khan herself still making arguments in those terms.

Beyond these big-picture questions about the future direction of antitrust, cases involving two-sided markets raise other difficult and open questions about the standard toolkit economists bring to enforcement agencies and courts. Issues around market definition, entry, and how the welfare of different groups of consumers should be weighed when intervening in these complex markets are all examples of such open questions. Alongside ongoing and recently concluded litigation against Apple, Amazon, Google, and Meta, the Visa case is likely to be a major battleground where such matters are resolved. 

Ohio House Bill 392 would clarify the right to compute (Oct. 28, 2025)

The bill is an excellent first step, but two areas for improvement currently limit its intended effect.

A version of the following public comment was submitted to the Ohio House Technology and Innovation Committee on October 28, 2025.

The Technology Policy Project at Reason Foundation has provided pro bono consulting to public officials and stakeholders to help them design and implement technology policy reforms around the regulation of artificial intelligence (AI) and other emerging technologies, digital free speech, data security and privacy, child online safety, and tech industry competition policy. Our team brings practical, market-oriented strategies to help foster innovation, competition, and consumer choice through technology policies that work.  

We submit this written testimony on House Bill 392 as an interested party.   

HB 392 is similar to other state legislation in that it creates a “Right to Compute.” This right to compute is a critical affirmative right for innovators, as it requires a state legislature to carefully weigh the compliance burdens of proposed, potentially heavy-handed legislation and affords innovators a right of redress when such burdens are imposed.   

The bill is an excellent first step, but two areas for improvement currently limit its intended effect. These problems revolve around the bill’s definition of “compelling governmental interests.” As written, the bill would still allow a state agency or political subdivision to impose burdens on innovators in two key areas.

First, the bill makes AI-generated content the basis of a “compelling governmental interest” for further regulation or legislation, but does not specify the actor creating such content. Leaving this definition vague opens the door for laws that would punish an AI company rather than the person using an AI model with nefarious intent—an onerous legislative proposal that would be impossible to comply with. Such a law was proposed in California, and because of its unreasonable burdens on AI companies, Gov. Gavin Newsom vetoed the bill. These types of bills are not one-off bad ideas. Though well-intentioned, they fly in the face of the core intent of HB 392: to unburden innovators from impossible compliance demands so as to allow the U.S. to develop next-generation technologies that compete with the rest of the world.   

Second, beyond the public process of approving new data center construction, carving out exceptions for laws and local ordinances that undermine data centers under the veil of “public nuisance” law ignores the possibility that hostile localities will create ordinances after the fact to hike up energy rates and extract revenue from data centers once they are built. The U.S. needs more—not less—data center capacity, which is contemplated in Right to Compute legislation passed elsewhere.

Thank you for the opportunity to submit this written testimony, and we welcome the opportunity to advise the legislature on this subject in the future.  

Comments to the Office of Science and Technology Policy on AI regulatory reform (Oct. 27, 2025)

A version of the following public comment letter was submitted to the White House Office of Science and Technology Policy on October 27, 2025.

On behalf of Reason Foundation, we respectfully submit these comments in response to the Office of Science and Technology Policy’s (OSTP’s) request for information on “Regulatory Reform on Artificial Intelligence.”

Reason Foundation is a national 501(c)(3) public policy research and education organization with expertise across a range of policy areas, including technology and communications policy.

There are numerous activities, innovations, and deployments currently inhibited, delayed, or constrained by federal statute, regulation, or policy. For this reason, we recommend a formal audit or review to identify areas of regulatory conflict with innovation—including the effect of state laws where federal regulation is silent. However, we offer the following specific examples in response to Question (i) for OSTP’s review:

  1. Legacy NEPA Rules and Expansion Create Major Delays in Energy Production
  2. Regulatory Barriers Limit the Expansion of Automated Track Inspection

Legacy NEPA rules and expansion create major delays in energy production

In order to maintain global technological superiority, the United States must focus squarely on reforms that increase energy capacity through streamlined permitting in order to facilitate the development of artificial intelligence (AI) across industries. As of now, multi-year permitting delays are the status quo for any energy project. These delays not only set back the construction of new power plants but also lead to the downstream effects of a restricted energy grid. As the United States competes with foreign adversaries for dominance in AI, energy capacity will either be a force multiplier in the country’s success or lead to failure on the global stage.

Congress passed the National Environmental Policy Act (NEPA) in 1969, directing federal agencies to evaluate the environmental impact of their decision-making prior to a major federal action. As part of this directive, agencies were required to produce an Environmental Impact Statement (EIS) when a federal action would significantly alter the environment, which is to include a comprehensive analysis of environmental effects, alternatives to the proposed action, and proposed mitigation measures (42 U.S.C. § 4332).

For federal actions that would impose smaller effects on the environment or where the size of the effect is uncertain, agencies must complete an Environmental Assessment (EA). An EA is a shorter-form document that aims to determine whether a proposed federal action warrants a full EIS or if the effects are small enough to render a Finding of No Significant Impact (FONSI). These mandated reviews were meant to inform both decision-makers and the public of potential significant environmental impacts and potential mitigations, but have evolved into increasingly lengthy and complex processes. Further, despite their extensive documentation, these reviews generate a substantial amount of litigation. As a result, the environmental review process that was designed to increase public transparency increasingly serves to delay and add costs to worthy projects.

For instance, the Nuclear Regulatory Commission (NRC) promulgated licensing rules that incorporate NEPA’s environmental review framework into nuclear power project approvals (10 C.F.R. Part 51). These NRC licensing processes have traditionally entailed lengthy reviews and administrative hurdles, delaying and often derailing reliable energy projects that could support AI infrastructure. Similarly, power grid interconnection regulations governed by the Federal Energy Regulatory Commission (FERC) under 16 U.S.C. § 824a et seq. impose restrictive control over how new loads such as AI data centers connect to the grid. Lengthy wait times and cost allocation disputes in FERC’s interconnection queues compound delays to reliable, scalable power delivery essential to AI model performance.

The Supreme Court’s decision in Seven County Infrastructure Coalition v. Eagle County curtailed this expansion of agency review. Moreover, recent reforms, such as the expansion of categorical exclusions, recent executive orders on permit streamlining, and the U.S. Court of Appeals for the D.C. Circuit’s Marin Audubon Society ruling, may remove some of the chokepoints.

However, legacy NEPA implementation and statutes built upon decades of overexpansion continue to impose substantial procedural burdens on AI-related infrastructure—particularly energy.

As the need for abundant energy production grows more vital, this regulatory barrier to energy production is particularly relevant in light of small modular nuclear reactors (SMRs), which have emerged as a promising source of clean, abundant energy to power the energy-intensive AI data centers at the heart of U.S. technological superiority.

Regulatory barriers limit the expansion of automated track inspection

Automated track inspection (ATI) technologies have been tested in recent years to improve railway track defect detection and have the potential to improve rail safety while increasing the operational efficiency of the network. Instead of shutting down tracks for human inspectors to walk, or using specialized rail vehicles to inspect track visually, ATI sensors are mounted on in-service trains and collect track component data as part of normal rail operations. These robust sensor data are then fed to AI-powered models to better plan maintenance activities.

Through pilot programs established by railroads, which obtained waivers from the Federal Railroad Administration (FRA), ATI was demonstrated to more reliably detect defects than traditional inspections—and improve maintenance forecasting and planning over time. Pilot program data submitted to FRA show that defects per 100 miles of inspected track declined from 3.08 before the use of ATI to 0.24 during the ATI pilots, or a 92.2% reduction. Reportable track-caused train derailments on main track per year during that same period declined from eleven to three, or a 72.7% reduction. None of those three derailments was attributable to ATI-targeted defects, with two occurring while manual visual inspections were still taking place twice weekly and one while pilot testing was inactive.
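Those headline figures follow directly from the reported counts; a quick arithmetic check, using only the pilot numbers quoted above:

```python
# Percentage reductions reported in the FRA ATI pilot data cited above.
defects_before = 3.08     # track geometry defects per 100 miles, before ATI
defects_during = 0.24     # defects per 100 miles during the ATI pilots
derailments_before = 11   # reportable track-caused derailments per year, before ATI
derailments_during = 3    # derailments per year during the pilots

defect_reduction = (defects_before - defects_during) / defects_before
derailment_reduction = (derailments_before - derailments_during) / derailments_before

print(f"Defect rate reduction: {defect_reduction:.1%}")       # 92.2%
print(f"Derailment reduction:  {derailment_reduction:.1%}")   # 72.7%
```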

These results are in line with expectations for successful ATI performance, reflecting a shift in maintenance practices from a “find and fix” approach to a “predict and prevent” approach. Better and earlier detection of geometry defects allows track maintenance to be performed more preventatively. Further, the higher-quality data collected by ATI over time allows for AI-powered improvements to maintenance forecasting and strategy. As ATI use expands and is repeated over time, defect rates—and defect-related hazards—should decline.

Realizing the benefits of ATI requires changes to manual inspection practices. ATI cannot inspect turnouts (i.e., the points where trains switch from one track to another), turnout components (e.g., “frogs”), and other special trackwork. With ATI focused on track geometry defects, human inspectors can be redeployed to the infrastructure they are best positioned to inspect. If legacy visual inspection requirements are not modernized, railroads will have less incentive to invest in ATI and improve their inspection practices.

Analysis of the ATI pilot program data found that visual inspectors identified far more non-geometry defects than track geometry defects. Prior to ATI testing on the pilot corridors, visual inspectors identified 10,645 non-geometry defects and 422 geometry defects. In 2021, during the ATI pilots, visual inspectors identified 14,831 non-geometry defects (a 39.3% increase) and 238 geometry defects (a 43.6% decrease). Of the non-geometry defects identified by visual inspectors, 60-80% were in turnouts and special trackwork that ATI cannot inspect.
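The shift in what visual inspectors find can be checked the same way; a minimal sketch using the counts above:

```python
# Change in defects found by visual inspectors before vs. during the ATI pilots.
non_geometry_before, non_geometry_during = 10_645, 14_831
geometry_before, geometry_during = 422, 238

print(f"Non-geometry defects: {(non_geometry_during - non_geometry_before) / non_geometry_before:+.1%}")  # +39.3%
print(f"Geometry defects:     {(geometry_during - geometry_before) / geometry_before:+.1%}")              # -43.6%
```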

Another important benefit of ATI is reducing visual inspectors’ exposure to on-track hazards. Substituting ATI for routine geometry defect inspection, coupled with a corresponding reduction in visual inspections, will remove inspectors from harm’s way. Data from the ATI pilot program indicate that inspector track occupancy time declined by approximately one-quarter after visual inspections were reduced to once per week, suggesting that substantial reductions in inspector workforce safety risk are likely if ATI is widely deployed.

The Association of American Railroads recently petitioned for an industry-wide waiver to enable significantly expanded ATI deployments. That a waiver is necessary at all reflects the inflexibility of legacy rail safety regulations, which mandate rigid manual visual inspection frequencies (49 C.F.R. § 213.233). Importantly, these long-standing inspection frequency rules rest on questionable assumptions about accumulated tonnage loads and lack the scientific rigor that ought to guide safety policy. FRA has yet to act on the pending ATI waiver petition, preventing rail carriers, rail workers, shippers, and consumers from realizing the safety and efficiency benefits of ATI.

Conclusion

We greatly appreciate OSTP’s attention to regulatory barriers to the development and deployment of AI technologies. Realizing the full benefits of these various technologies and applications will require a sustained, concerted effort on the part of policymakers.

Thank you for the opportunity to provide these comments to OSTP. We look forward to further participation and stand by to assist as requested.

Download the full public comment: Comments to the Office of Science and Technology Policy on AI regulatory reform

The post Comments to the Office of Science and Technology Policy on AI regulatory reform appeared first on Reason Foundation.

]]>
Nobel Prize winners make powerful case for optimism amid technological change https://reason.org/commentary/nobel-prize-winners-make-powerful-case-for-optimism-amid-technological-change/ Mon, 27 Oct 2025 10:01:00 +0000 https://reason.org/?post_type=commentary&p=85971 The Nobel laureates’ work puts free minds and free markets squarely at the center of how societies prosper through innovation.

The post Nobel Prize winners make powerful case for optimism amid technological change appeared first on Reason Foundation.

]]>
Three economists have won the 2025 Nobel Memorial Prize in Economic Sciences for their work demonstrating that innovation and the free exchange of ideas are the most important factors leading to lasting high rates of economic growth. The Nobel laureates’ work puts free minds and free markets squarely at the center of how societies prosper through innovation. Their ideas are fundamental to the approach to policies we advocate at Reason Foundation.

Joel Mokyr of Northwestern University provided our best answer to an economic and historical mystery of epic proportions: the early 19th-century takeoff in global economic growth. Philippe Aghion, of Collège de France, and Peter Howitt, of Brown University, showed how the processes of innovation and creative destruction unleashed during that time continue to drive economic growth today. In 2025, we find ourselves in the midst of multiple ongoing episodes of creative destruction. Today’s proliferation of artificial intelligence (AI) technology is arriving before our economy and society have finished adjusting to other recent revolutions like e-commerce and social media. The work of this year’s Nobel laureates offers a much-needed case for optimism in the wake of technological change.

The great takeoff

Economies grow for many different reasons, but something happened around 200 years ago that changed the rules. Before the early 1800s, economic activity on planet Earth grew very slowly. There were booms and busts, golden ages, and disasters. These events mattered for people of a particular time and place, but only fleetingly. Around 1820, economies in Western Europe began growing rapidly, a new normal that came to characterize the entire world. The result, when we look at global gross domestic product (GDP) over time, is known as the “hockey stick.”

Some caution is in order. Summing up world GDP hides many layers of complexity and unevenness. Even as total GDP keeps increasing year by year, modern booms and busts have been more dramatic and less predictable than ever before. Along with the takeoff of the Industrial Revolution came all of modernity’s problems. And even modern GDP is hard to measure accurately, while estimating historic figures is almost a field unto itself. But none of these common criticisms of the hockey stick take away from the importance of understanding this unique moment in world history.

The geniuses are talking

Mokyr’s combination of history and economic analysis reveals the most compelling explanation of what happened and why. The two centuries leading up to the takeoff in economic growth witnessed an equally dramatic blossoming of science and invention. Western Europe in the 17th and 18th centuries is almost overflowing with hall-of-fame scientists (Newton, Galileo), discoveries (the cell, planetary motion), and inventions (the steam engine, the power loom). We call it The Enlightenment for a reason. These great ideas and inventions surely contributed to economic growth, but Mokyr’s findings are much more profound.

What emerged in Europe and particularly England leading up to the early 19th century takeoff was the ability for many minds and ideas to interact. Mokyr noticed that, unlike great discoveries in earlier times, those in early-modern Europe appeared to build on each other—they were cumulative. One reason for this change was “the Republic of Letters.” All around Europe, the geniuses began corresponding. Through letter-writing, the building of research institutions like the Royal Society, and emerging scientific best practices, great ideas could more easily be shared. Western Europe had now assembled a critical mass of propositional knowledge—the math, science, and other basic understandings of the world needed before one can learn other things.

Mokyr next provides an elegant explanation for the exact time and place where all of this core knowledge made the leap to prescriptive knowledge—how to make and do useful things. Through most of history, craftsmen and artisans—the people who made things—occupied a place in society just above rural peasants. By the turn of the 19th century, Great Britain had bucked this trend, with a growing skilled middle class as well as wealthy and educated entrepreneurs. They were increasingly able to put the knowledge created by the Republic of Letters to productive use. Inventions like the steam engine, power loom, and cotton gin formed the foundation of early factories, kicking off the Industrial Revolution.

In the Enlightenment and Industrial Revolution, Mokyr finds more than just innovation-driven economic growth. A critical mass of knowledge and innovation during this period caused a lasting change in the rules of how economies grow.

Why do economists, who usually look for hard numerical proof of new ideas, find this explanation so compelling?  Mokyr’s story explains why the change embodied in the hockey stick happened when and where it did, first in Britain, then Western Europe and North America, and ultimately spreading worldwide. Ideas work differently than other goods: They are not consumed after being put into use. Hockey-stick growth in poorer parts of the world requires improvements in education, governance, and integration with the world economy, but not recreating the Republic of Letters from scratch. This explains why, at the highest level, we do not appear to switch back to the low-growth rules that applied for most of history. No other explanation of the hockey stick comes close to explaining what we observe. 

More creative destruction

The other two 2025 Nobel winners, Aghion and Howitt, demonstrate that fresh rounds of innovation and creative destruction continue to fuel high rates of modern growth. Unlike Mokyr the historian, Aghion and Howitt work with the mathematical growth models of macroeconomics. This alone is an achievement, one of the best examples of economists fitting innovation into quantitative models. They provide mathematical economics with evidence in its own terms of the importance of innovation to the growth of modern economies.

Aghion and Howitt’s argument elegantly captures many features of economic growth missing from other explanations. Underneath the apparently smooth hockey stick, we find booms, busts, creative destruction, and upheaval few people expect until it happens. Economist Brian Albrecht writes:

“We had almost 8 million jobs created in the last quarter of 2024. Think about that number. Eight million new employment relationships formed in three months. But here’s the kicker: we also had over 7 million jobs destroyed in that same period. Firms constantly enter and exit. Workers move between employers. Products get launched and discontinued. The labor market churns.”

High overall growth rates aside, modern economies often feel at the mercy of unexpected and uncontrollable technological forces. Aghion and Howitt show that these forces are once again the result of ideas and innovations percolating from the bottom up, indeed unpredictable but harnessed to great benefit by free minds and free markets. Albrecht continues:

“In any single sector, you get sudden jumps when breakthroughs happen. Netflix enters and destroys Blockbuster’s profits essentially overnight. The iPhone launches, and BlackBerry’s market share collapses. Creative destruction is violent and discontinuous in a single industry at a single moment.”

Today we have large and diversified economies with many sectors. This allows unpredictable innovation many opportunities to take hold but also hedges against too much destruction at once. Aghion and Howitt explain the paradox between the immaculate hockey stick and the apparent chaos beneath.

“There’s no other way”

In the 21st century the pace of technological change is faster than ever. We are still in the process of learning how to adjust to a world of social media, for example, as we contemplate an AI revolution that might mean even greater change. A common thread in the work of all three Nobel laureates is that the truly meaningful innovations must involve a messy and uncertain process of adjustment. The anti-tech backlash that has gathered force in the last several years should therefore come as no surprise.

We must learn to view innovation and creative destruction with optimism and hope. The problems brought by technological change can seem impossible to solve, especially in the moment, but this is an illusion. They are not problems that one mind can solve, but we now live in a world where many minds and ideas work together without a central plan. Our record navigating the problems created by innovation is far from perfect, but our early fears rarely come to fruition. This is borne out time and again.

Mokyr, Aghion, and Howitt do not view the problems brought by innovation as inevitable or immune to good government policy. But innovation of the magnitude leading to creative destruction, and ultimately economic growth at the tip of the hockey stick, leads to disruption by its very nature. Our ability to keep prospering depends on a society where people are free to exchange ideas and put the best ones to use. As Mokyr said at the close of an interview after learning of his prize, “[T]here’s no other way.”

The post Nobel Prize winners make powerful case for optimism amid technological change appeared first on Reason Foundation.

]]>
New York’s stalled AI bill would have blurred the line between disclosure and restriction https://reason.org/commentary/new-yorks-stalled-ai-bill-would-have-blurred-the-line-between-disclosure-and-restriction/ Fri, 17 Oct 2025 10:30:00 +0000 https://reason.org/?post_type=commentary&p=85667 While pitched as a transparency measure, Assembly Bill 8595 would have set a new, unusually high bar for compliance.

The post New York’s stalled AI bill would have blurred the line between disclosure and restriction appeared first on Reason Foundation.

]]>
Earlier this year, New York state lawmakers advanced a proposal that would have required artificial intelligence developers to reveal the exact sources behind their models. Assembly Bill 8595, the Artificial Intelligence Transparency for Journalism Act, would have mandated a detailed, publication-level accounting of every uniform resource locator (the formal name for a website address, typically shortened to URL) and data source accessed in every phase of model development. While pitched as a transparency measure, AB 8595 would have set a new, unusually high bar for compliance, raising the question of when transparency stops looking like a demand for openness and starts looking like a deliberate barrier to entry. The bill’s progress appears to have stalled, but it is worth examining because the legislative approach it embodies is likely to shape future proposals.

State Sen. Kristen Gonzalez (D-59) introduced the bill earlier this year. A key portion of the legislative text reads:

A developer of a generative artificial intelligence system or service … shall post and maintain on its website, with a link to such posting included on its homepage, the following information for each generative artificial intelligence system or service that utilizes covered publication content:

(i) the uniform resource locators or uniform resource identifiers accessed by crawlers deployed by the developer or by third parties on their behalf or from whom they have obtained video, audio, text or data … .

Despite the title, the bill defined a “journalism provider” as a “covered publication,” which is any print, broadcast, or digital outlet that “performs a public-information function” and “invests substantial expenditure of labor, skill, and money.” The provision would have granted covered publications the right to “bring an action in the supreme court for statutory damages or injunctive relief.”

Ultimately, the bill did not define what counts as a copyright violation. Instead, it would have made it easier for publishers to produce evidence that a violation took place. And, importantly, courts have already begun outlining the contours of how the fair use doctrine applies to AI.

In a landmark case last September, AI developer Anthropic agreed to a $1.5 billion settlement with authors whose works it had used without purchasing them. Large language models (LLMs) are trained on vast amounts of data, some of which may include pirated copies of books. Notably, the case set a precedent that AI models can be trained on works that are legally obtained. For instance, if the developer purchases a book, it can train the model on the content and need not compensate authors beyond the cost of the book itself. In Thomson Reuters v. Ross Intelligence, US Circuit Court Judge Stephanos Bibas held that Ross’s use of proprietary Westlaw headnotes to train its AI engine was not fair use, emphasizing the originality of the content and the commercial nature of Ross’s competing product. (Ross is a now-defunct AI company for legal research.)

In June, a U.S. district judge declared that Meta did not cause substantial harm to the market of publishers by using books to train its AI model, siding against a number of high-profile authors. A California court also sided with AI company Anthropic in a similar case involving book publishers.

Complying with New York’s proposal would have posed significant technical hurdles. LLMs are built on datasets containing billions of documents collected via automated web crawlers. Tracking and publishing every individual URL or identifier accessed during each stage is not standard practice. While engineers may spot-check a model’s citations or investigate suspected “hallucinations,” they rarely maintain exhaustive logs of every browser request or data pull.

Under the hood, LLMs learn by adjusting weights—numerical values that encode the statistical strength of connections between words—rather than storing or indexing URLs directly. Once training is completed, a model’s weights reflect aggregated patterns from the entire dataset, not discrete source pointers.
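A toy illustration of this point (not a description of any production training pipeline): even in a drastically simplified stand-in for a model, where the “weights” are just word-pair counts, the trained parameters blend contributions from every document, and the source URLs are discarded unless the developer builds separate provenance logging. All names and data below are hypothetical.

```python
from collections import defaultdict

# Hypothetical training corpus: (source URL, text) pairs.
corpus = [
    ("https://example.com/article-1", "the model learns patterns"),
    ("https://example.org/post-2", "the model learns statistics"),
]

# "Weights" here are simple bigram counts standing in for learned parameters.
weights = defaultdict(float)
for _url, text in corpus:          # the URL is read but never stored in the weights
    tokens = text.split()
    for a, b in zip(tokens, tokens[1:]):
        weights[(a, b)] += 1.0

# The trained parameters aggregate all documents; no source pointers remain.
print(weights[("model", "learns")])            # 2.0 -- contributions from both URLs, blended
print(any("http" in str(k) for k in weights))  # False -- no URLs live in the parameters
```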

Even after training, engineers often conduct manual verification. For instance, one study describes clinicians checking whether an LLM’s medical citations matched real articles and assessing their accuracy. If AB 8595 had passed and been interpreted broadly, companies might have been required to document every URL opened during such checks, in addition to all sources ingested into model weights.

“If a URL pointed to an uploaded PDF of one of my novels, that’s not proof that the model’s understanding of that came from that link. It could be from hundreds of discussions, promotional materials, or the Amazon page,” Andrew Mayne told Reason Foundation in an email. Mayne is a novelist and AI consultant and was a technical consultant on ChatGPT-4, a popular AI model from OpenAI.

Mayne’s observation highlights a fundamental ambiguity: Even with perfect logs of every URL a crawler hit, developers couldn’t trace how indirect discussions or metadata influenced model outputs. Must they disclose URLs opened for manual fact-checks? Or every ancillary page that informed a bot’s interpretation?

Such questions underscore how AB 8595 would have blurred the line between disclosure and restriction. Overly complex reporting requirements can impede innovation from large developers responsible for the most popular AI applications.

The last action on the bill was in June 2025, when it was referred to the rules committee. No further action is indicated on the New York State Legislature’s website, and it is not immediately clear why the bill did not advance further. However, tensions between developers and publishers are far from settled, and a version of this bill could well return to New York or another state next session.

The post New York’s stalled AI bill would have blurred the line between disclosure and restriction appeared first on Reason Foundation.

]]>
Democrats pivot on AI: Less regulation, more redistribution https://reason.org/commentary/democrats-pivot-on-ai-less-regulation-more-redistribution/ Tue, 14 Oct 2025 10:30:00 +0000 https://reason.org/?post_type=commentary&p=85584 The focus of Sen. Mark Kelly’s “AI for America” plan departs from other federal artificial intelligence policy proposals introduced by Democrats.

The post Democrats pivot on AI: Less regulation, more redistribution appeared first on Reason Foundation.

]]>
Sen. Mark Kelly (D-AZ) has released a new artificial intelligence (AI) policy roadmap. Notably, the focus of Sen. Kelly’s “AI for America” plan departs from other federal AI policy proposals introduced by Democrats, which emphasized strong regulation of AI model development and deployment. Instead, it calls for an “AI Horizon Fund” funded by taxes on large companies involved in the development and use of AI.

The proposal envisions channeling these dollars into various labor market interventions, such as union apprenticeship programs and a safety net for displaced workers, and infrastructure upgrades, especially for energy and water systems. This suggests Democrats may be shifting their rhetoric on AI, although the scant details so far make it hard to know how much difference this will make in terms of actual policy. Sen. Kelly’s proposal also comes at a time when Democrats control neither Congress nor the White House, so their priorities could shift whenever they regain congressional majorities and the presidency.

Setting aside these uncertainties, there appears to be support for Kelly’s approach. Former President Barack Obama posted on X that “We need more ideas like the ones @SenMarkKelly has outlined on how we can shape the future being created by artificial intelligence.”

Several other major Democratic Party figures and labor leaders have also declared support.

Sen. Kelly’s AI for America proposal can be contrasted with President Joe Biden’s 2023 Executive Order (EO) 14110, which envisioned a strong role for government intervention in the development of AI technologies. EO 14110 emphasized safety and “responsible innovation” as AI policy cornerstones. President Donald Trump rescinded EO 14110, then issued EO 14179, which is aimed at “removing barriers to American leadership in artificial intelligence.” In contrast to the safety-focused Biden era AI policy statement, Kelly’s AI roadmap stresses “strengthening the foundation of our success” in achieving “an early and commanding lead in AI thanks to our culture of innovation, world-class infrastructure, and unmatched ability to train and attract top talent.” Safety and equity considerations are present in Kelly’s proposal but receive much less attention.

Democrats have faced a series of high-profile setbacks when attempting to impose other strong regulations on how artificial intelligence is developed and used. In Colorado, Democratic Gov. Jared Polis convened a special session in August 2025 to amend Senate Bill 24-205, the Colorado Artificial Intelligence Act. That law would impose significant obligations on developers and users of “high-risk artificial intelligence systems” in sectors like healthcare. Lawmakers ultimately voted instead to delay implementation until June 30, 2026, rather than reopen the framework for amendment as Polis had sought. And in California, a federal judge blocked Assembly Bill 2839, which sought to restrict election-related deepfakes, as unconstitutionally overbroad, writing, “Most of AB 2839 acts as a hammer instead of a scalpel, serving as a blunt tool that hinders humorous expression.”

These defeats may be inspiring a new Democratic strategy that avoids direct restrictions on AI models and instead focuses on taxing industry to fund workforce and infrastructure programs. Kelly’s AI for America lays out a framework for managing the economic and social impacts of artificial intelligence without directly restricting innovation. The proposed AI Horizon Fund is the plan’s central feature. Kelly frames this fund as a way to ensure that the large technology companies benefiting most from AI’s growth also bear responsibility for addressing the broader costs that their expansion places on society. The fund is presented as a mechanism to channel private gains into public priorities.

Kelly describes these taxes as “common sense” because the firms generating “enormous profits” from AI should be required to offset the costs imposed on workers, communities, and public infrastructure. Of course, AI companies already do pay taxes just like any other business, so in practice, this funding mechanism transforms the proposal into a form of targeted redistribution, singling out private earnings from a particular type of technological innovation to be redirected into public spending priorities.

One of the key areas identified for this reinvestment is education and workforce development. The roadmap calls for expanding union apprenticeship programs and channeling resources into community colleges so that workers can gain the skills needed in an AI-driven economy. It also supports the creation of an “AI economic adjustment program” to supplement the incomes of displaced workers. AI for America also encourages increased labor union involvement in the design and deployment of AI to benefit workers, which raises questions about how much pivoting Democrats actually plan to do on their previous calls for AI model regulation. Each of these interventions is cast as a way to ensure that technological change creates upward mobility for a broad base of workers, rather than widening inequality.

The second area of focus is infrastructure, where the plan highlights the strain that data center growth will place on water and electric systems. By directing AI company contributions into these public utilities, Kelly argues that firms can “offset these impacts” and “strengthen the systems and infrastructure on which they depend.” In practice, this would mean redistributing private sector gains into federally directed local or regional projects in energy, water, and other essential services.

This approach not only reinforces the idea that AI profits should be harnessed for broad social benefits rather than remaining in the hands of the companies that generate them, but it also raises practical questions about utility regulation and the roles of various levels of government.

Both public and private electric utilities typically rely on user revenue from their “ratepayers” to finance infrastructure improvements, which is subject to regulation by state and local utility regulators. AI companies are among those ratepayers, so if regulated rates are insufficient to generate revenue to finance improvements or if costs are poorly allocated, policymakers should direct their attention to state and local utility regulation.

Where federal involvement in utility infrastructure finance exists, it is principally in the form of loans and loan guarantees, such as the Environmental Protection Agency’s Clean Water State Revolving Fund and the Department of Energy’s Title 17 Energy Financing Program. These subsidized credit assistance programs play a relatively small role in U.S. utility networks and—importantly—require that a substantial amount of project risk be retained by utilities and their ratepayers.

Kelly’s proposal recommends new “financing mechanisms” to supplement the traditional utility ratepayer model, but it says nothing about how existing regulation is denying utilities the ability to, in the words of his roadmap, “raise capital quickly and recover their investments fairly without disproportionately impacting the communities that host new AI infrastructure.”

As energy economist Lynne Kiesling noted in a Reason Foundation commentary:

By temporarily scaling down operations or shifting workloads to off-peak periods, data centers can help balance supply and demand, stabilize prices, and reduce the need for expensive and emissions-heavy peaking power plants.

However, the regulatory and market institutions have to enable such markets and price signals to reduce frictions that maintain the timing mismatch between demand growth and increasing supply. They do not. While some demand response integration exists in wholesale power markets, it’s limited and heavily constrained.

Thus, the problem is not a relatively simple one of limited access to capital, which Kelly’s proposal aims to address in an equitable manner. Instead, the heavily regulated market design in utilities limits the ability of providers to match supply with customer demand efficiently. The upshot is that, absent market-oriented reforms, federal financing assistance will merely perpetuate and likely worsen the underlying problems that constrain utilities’ responses to the growth of data centers, and result in project risk being increasingly shifted to taxpayers.

There are limited instances in the United States where governments have asked specific technology firms to help offset the societal impacts of their operations beyond ordinary taxation.

One instructive precedent is the Universal Service Fund, which requires U.S. telecommunications providers to contribute to a pool that subsidizes broadband and telephone service in rural and underserved areas. Such programs suggest that targeted levies or partnerships aimed at offsetting industry impacts are not without precedent, even if they remain relatively rare in the technology sector.

A light regulatory touch would be the ideal path, but Democratic proposals tend to involve the government heavily at some level. Ultimately, whether this shift from regulation to redistribution benefits or harms innovation will depend on the scale of the required contributions, as well as how those revenues are directed. Heavy-handed regulation that restricts the design or deployment of AI models could stifle startups and slow the development of foundational technologies that underpin the broader ecosystem.

Yet if the new approach functions as an industry-specific tax that grows too large and funds programs of dubious value, it could limit the ability of U.S. companies to reinvest profits in research, infrastructure, and global competitiveness that would deliver real value to consumers. The balance between these two risks will determine whether policies like Sen. Kelly’s proposal strengthen the AI sector or instead constrain its long-term growth.

The post Democrats pivot on AI: Less regulation, more redistribution appeared first on Reason Foundation.

]]>
Sen. Ted Cruz proposes federal regulatory sandbox to encourage AI innovation, development https://reason.org/commentary/sen-ted-cruz-proposes-federal-regulatory-sandbox-to-encourage-ai-innovation-development/ Wed, 08 Oct 2025 04:01:00 +0000 https://reason.org/?post_type=commentary&p=85284 The SANDBOX Act would allow innovators to obtain temporary regulatory waivers for artificial intelligence technologies from federal agencies.

The post Sen. Ted Cruz proposes federal regulatory sandbox to encourage AI innovation, development appeared first on Reason Foundation.

]]>
Sen. Ted Cruz (R-Texas) has introduced draft legislation to create a program that would allow artificial intelligence (AI) pilot projects to operate under temporary exemptions from certain federal rules. The bill, known as the Strengthening Artificial Intelligence Normalization and Diffusion by Oversight and Experimentation (SANDBOX) Act, would allow innovators to obtain temporary regulatory waivers for AI technologies from federal agencies. Developers could commercialize their product for a specified period, subject to added oversight from regulators and reporting requirements.

The bill would authorize the creation of a federal program to oversee pilot projects of AI tools in sectors such as energy, infrastructure, healthcare, and education, allowing them to apply for temporary liability protections. Administered by the White House Office of Science and Technology Policy (OSTP), this program would enable AI users and developers to request waivers or modifications to existing federal rules and regulations. OSTP would route applications to the appropriate agencies and work in collaboration with them to determine whether to accept or reject an application, based on an evaluation of the risks and benefits to consumers.

As an example, AI-enabled medical devices may currently require Food and Drug Administration (FDA) approval for minor adjustments. However, a key benefit of AI is that, by design, it continually learns from user data. A device that analyzes images of the heart to better diagnose cardiac risk could improve rapidly every time it receives data from more patients. The FDA has acknowledged that the approval process is a problem and is currently developing more effective rules for AI-enabled devices.

Under Cruz’s plan, the FDA could create a new rule that would only apply temporarily to a specific product. This is more politically palatable than revising rules for the entire industry.

Once an application is approved, developers and their products receive certain specific legal protections. Per the draft bill:

No existing right of action of a consumer to seek actual damages or an equitable remedy may be waived or modified under the Program. (2) While a waiver or modification is in effect, and the person is in compliance with the written agreement entered into pursuant to subsection (e), the person shall not be subject to the criminal or civil enforcement of a covered provision specifically identified in the waiver or modification.

Under this framework, developers would apply to release a product or service that requires a waiver from a specific federal regulation or rule under an agency’s jurisdiction. The legislation would not override existing state or local regulations. For example, it would not preempt state laws targeting fraudulent videos or so-called “deep fakes,” or restrictions on using AI in mental healthcare like Nevada’s. Local ordinances, including zoning laws that limit data centers due to concerns about water or electricity usage, would also remain in effect.

To understand what kinds of AI applications might qualify for this type of legal protection, it is helpful to examine how similar programs, known as regulatory sandboxes—hence the bill’s acronym—have evolved and the range of technologies they have enabled in practice.

Regulatory sandboxes have been in operation for several years, enabling a range of innovative financial services. The approach was pioneered by the United Kingdom’s Financial Conduct Authority, which sought to help a surge of financial technology (fintech) startups navigate unclear or overly burdensome rules. Its sandbox allowed firms like MarketFinance to trial peer-to-peer business lending under supervised conditions. In Singapore, the Monetary Authority’s sandbox supported projects such as Project Ubin, a blockchain-based cross-border payment trial with Standard Chartered and local banks.

Following these pioneers, jurisdictions around the world rolled out ideas similar to sandboxes. Australia set up an Innovation Hub in 2016 to support fintech and insurance technology (Insurtech) pilots. Canada’s Ontario Securities Commission introduced LaunchPad in 2017, and Abu Dhabi established a sandbox in 2018 to spur financial technology growth in the United Arab Emirates. Each program shares the core feature of time-limited, supervised testing under regulatory relief, tailored to local market needs.

Public reports on sandboxes rarely examine the long-term success of the products or companies incubated in them. One standout example is Zilch, a medium-sized fintech company that credits the UK’s sandbox with helping it navigate the complex regulation of its buy-now-pay-later approach to credit and consumer purchasing.

Stanford’s Center on Philanthropy and Civil Society notes that “a well-designed and executed sandbox can facilitate innovation and protect consumers, avoiding the pitfalls that concern many critics.” The center also observes that “one of the benefits of a regulatory sandbox is that it has the potential to provide clear rules of the road for market participants, particularly where new technologies or new products and services pose challenging questions with respect to regulatory requirements and ensuring consumer protection.”

A World Bank review similarly highlights the impact of sandboxes on financial innovation: “In only four years, sandboxes have become synonymous with fintech innovation, offering the unique benefit of providing the empirical evidence needed to substantiate decisions in the field.”

Regulatory sandboxes have expanded well beyond their origins in fintech to encompass a wide array of industries. Sandboxes now operate in the insurance industry, allowing new underwriting models to be tested under relaxed rules. In the health sector, national data institutes like Health Data Research UK have used sandboxes to pilot data-driven diagnostics and patient-monitoring services in a controlled setting.

Interest in AI sandboxes is now rising in Europe under the proposed AI Act. Article 57 of the AI Act requires each European Union member state to establish at least one AI regulatory sandbox by Aug. 2, 2026, creating controlled environments for the development and validation of AI systems before market launch.

In the United States, Utah stands out for its approach to AI. The state’s Office of Regulatory Relief oversees technology sandboxes. One early sandbox pilot involved a product called ElizaChat, an AI-powered mental health chatbot designed for teenagers. Dave Barney, CEO of ElizaChat, reports:

“The AI Policy team engaged with us, understood our business needs, and crafted a regulatory relief contract that freed us to explore creative products that will help teenagers improve their mental health, without fear of regulatory risk.”

In July, the White House released an AI action plan, which directed agencies to adopt a wide range of AI-related rules, including the creation of a regulatory sandbox. The SANDBOX Act would take Utah’s approach to the federal level and establish a more formal mechanism than outlined in the White House plan (the White House plan simply recommended the establishment of a sandbox without as much detail).

Sandboxes are new to the U.S. federal government, so it is unclear how willing agencies will be to consider waivers for AI products and services. We may learn more as the discussion around the SANDBOX Act progresses.

Still, sandboxes hold considerable promise. Total, permanent deregulation is often politically unpopular. Agencies may be more willing to experiment with temporary deregulation around AI products, which gives innovations an opportunity that they might not otherwise have.

The post Sen. Ted Cruz proposes federal regulatory sandbox to encourage AI innovation, development appeared first on Reason Foundation.

]]>
A look at the White House’s pro-innovation artificial intelligence ‘action plan’ https://reason.org/commentary/a-look-at-the-white-houses-pro-innovation-artificial-intelligence-action-plan/ Tue, 07 Oct 2025 04:01:00 +0000 https://reason.org/?post_type=commentary&p=85292 The White House's AI action plan represents a clear policy direction favoring rapid innovation and reduced regulatory oversight.

The post A look at the White House’s pro-innovation artificial intelligence ‘action plan’ appeared first on Reason Foundation.

]]>
Earlier this year, the White House released an artificial intelligence (AI) “action plan,” declaring that, “Winning the AI race will usher in a new golden age of human flourishing.” The document’s central purpose is straightforward: to preserve American AI superiority. In practice, the plan mostly recommends consolidating and formalizing a long series of pro-innovation, anti-regulation executive orders the White House has issued since January 2025. Overall reactions from industry have been positive and reflect optimism over the administration’s commitment to free market innovation.

The plan is structured around three pillars: accelerating AI innovation, building American AI infrastructure, and leading in international AI diplomacy and security. The first pillar focuses on removing regulatory barriers and promoting private-sector development, including open-source AI and workforce training. The second pillar targets energy, permitting, and semiconductor supply chains, aiming to rapidly expand and secure the physical and technical infrastructure needed for large-scale AI deployment. The third pillar advances an assertive international strategy—promoting U.S. AI standards abroad, tightening export controls, and countering adversarial influence, especially from China.

The plan builds on the regulatory shift that began when President Donald Trump rescinded an AI-related executive order from President Joe Biden. Biden’s order focused on public support for regulations that reduced bias in AI products, while Trump’s first executive order on AI, issued in January, explicitly called for the removal of barriers related to AI development.

The action plan recommends tasking a broad set of agencies with carrying forward a deregulatory mandate. Each would be charged with reviewing, revising, or eliminating existing rules, adjusting grant-making, and accelerating approvals to align federal AI policy with the administration’s pro-innovation priorities, an approach consistent with Reason Foundation’s testimony on how to promote AI innovation.

For example, the plan recommends the Office of Management and Budget (OMB) “work with Federal agencies that have AI-related discretionary funding programs to ensure, consistent with applicable law, that they consider a state’s AI regulatory climate when making funding decisions and limit funding if the state’s AI regulatory regimes may hinder the effectiveness of that funding or award.”

Should the Trump administration ultimately grant agencies this broad discretion, the plan potentially hands them a tool to reward or penalize states on AI regulation at their own judgment. For instance, the National Science Foundation has a $100 million grant for AI research that it awards to various universities. It’s possible these kinds of large grants could be in jeopardy for states that, from the perspective of an agency, create burdensome regulations.

State legislatures and many Republican governors publicly opposed a congressional moratorium on state AI regulations, arguing that it would preempt state powers to enact laws, such as those that would criminalize deceptive AI-generated media intended to influence elections. This plan, instead, takes an agency-centric approach. Each state’s AI policy would be evaluated by federal bodies, whose leadership is closely tied to the Trump administration. Strategically, this is a more politically directed tool than the proposed moratorium by Congress, which would have undercut the authority of both Republicans and Democrats to control AI policy.

The plan includes a dedicated section on government AI adoption that recommends “all Federal agencies ensure—to the maximum extent practicable—that all employees whose work could benefit from access to frontier language models have access to, and appropriate training for, such tools.”

This directive could materially affect government efficiency and labor costs. For example, preliminary evidence from Pennsylvania’s early generative AI pilot, in which 175 state employees across 14 agencies used ChatGPT Enterprise for drafting, summarization, research, and IT support, reported an average of 95 minutes saved per day on these tasks. While still in the early stages, these results suggest that broad-based adoption and training in frontier language models may yield significant productivity improvements across federal operations and potentially lead to labor cost reductions for an administration willing to replace overhead with automation.
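As a rough back-of-the-envelope aggregate, assuming the reported 95-minute average held across all 175 participants on a given workday (an assumption, since the pilot reported only the per-employee average):

```python
# Rough aggregate of the reported time savings from Pennsylvania's pilot.
participants = 175           # state employees in the pilot
minutes_saved_per_day = 95   # reported average saved per employee per day

total_minutes = participants * minutes_saved_per_day
print(total_minutes, "minutes per day")                 # 16625
print(round(total_minutes / 60), "person-hours per day")  # ~277
```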

The action plan also coincides with an executive order aimed at streamlining federal permits for data centers. Large-scale storehouses of computers have become an essential component of artificial intelligence programs, both for building (“training”) the foundational models and for enabling models to interact with users. The administration has recognized that current regulations hamper new data centers, with Environmental Protection Agency Administrator Lee Zeldin noting on Fox News that the “EPA wants to increase certainty for owner-operators in the permitting process, making it clear what kind of permits are needed for new and modified projects.”

New data centers have often been met with local resistance, with citizens utilizing environmental protection rules at their disposal in an attempt to delay or block the creation of facilities that they argue reduce the quality of life or consume excessive resources. At the same time, federal and state energy agencies have identified the need for extensive additions to electricity infrastructure to meet this new demand; however, such infrastructure comes at a cost and requires time, tending to grow more slowly than demand. It is unclear how this new executive order will actually impact the construction of new data centers, but it demonstrates the administration’s willingness to explore ways to cut red tape and accelerate the permitting process.

Finally, the AI action plan recommends that several agencies promote and incentivize the use of publicly available data, including the creation of a new data “portal” for datasets from the National Science Foundation.

This plan marks a departure from a purely free market approach by calling for federal agencies to be empowered with broad discretion on politically sensitive issues—such as cutting government contracts with software that explicitly promotes progressive climate change reform. The result is a policy framework that both crystallizes the administration’s deregulatory agenda and provides agencies with explicit “air cover” to reward or penalize states based on both political criteria and compliance with federal AI priorities.

The White House’s AI action plan represents a clear policy direction favoring rapid innovation and reduced regulatory oversight. The plan’s effectiveness will depend heavily on how federal agencies interpret and implement their expanded discretion. However, it will give executive air cover to agency leaders who wish to create rules that are friendly to the expanding market of AI products.

The post A look at the White House’s pro-innovation artificial intelligence ‘action plan’ appeared first on Reason Foundation.

]]>
Georgia could create a safer online environment for kids by empowering parents https://reason.org/testimony/georgia-could-create-a-safer-online-environment-for-kids-by-empowering-parents/ Tue, 16 Sep 2025 19:40:59 +0000 https://reason.org/?post_type=testimony&p=86189 Balancing safety, parental empowerment, and constitutional rights would foster a safer and privacy-respecting digital environment for all. 

The post Georgia could create a safer online environment for kids by empowering parents appeared first on Reason Foundation.

]]>
A version of the following public comment was submitted to the Georgia Senate Study Committee on the Impact of Social Media and Artificial Intelligence on Children and Platform Privacy Protection on September 16, 2025.

Thank you for the opportunity to provide Reason Foundation’s view on the impact of social media and AI on children and platform privacy protection. At a time when parents are concerned about their children’s safety, advocates for bills such as the proposed App Store Accountability Act at the federal level and recently enacted versions at the state level (Utah, Texas, and Louisiana) have called for age verification at the device level to ensure minors do not access harmful content. The state bills do not go into full effect until 2026.

In practice, these mandates would require users to provide verifiable age information that can reliably establish their age, such as government-issued IDs and/or biometric data (facial scans, for example), at the point of account creation. The system would then categorize users into predefined age brackets (children under 13, teenagers 13-17, and adults over 18) to tailor content restrictions and access rights accordingly. For minors, this would mean linking their accounts to verified parental accounts, with explicit parental consent required before allowing downloads, purchases, or access to certain application features.
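A minimal sketch of the gating logic such mandates imply; the age brackets mirror those described above, while the function and field names are illustrative assumptions rather than language from any of the bills:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class VerifiedUser:
    age: int                                  # established from an ID or biometric check
    parent_account_id: Optional[str] = None   # required linkage for minors

def age_bracket(age: int) -> str:
    # Brackets described in the proposals: under 13, 13-17, 18 and over.
    if age < 13:
        return "child"
    if age < 18:
        return "teen"
    return "adult"

def may_download(user: VerifiedUser, parental_consent: bool) -> bool:
    """Illustrative gate: minors need a linked, consenting parent account."""
    if age_bracket(user.age) == "adult":
        return True
    return user.parent_account_id is not None and parental_consent

# Example: a 15-year-old with a linked parent who has approved the download.
print(may_download(VerifiedUser(age=15, parent_account_id="parent-42"), parental_consent=True))  # True
```

Even this toy version makes the trade-off visible: for the gate to work, the platform has to hold verified age data and a persistent link between a minor’s account and a parent’s identity.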

While these checks aim to reduce minors’ exposure to harmful material, this approach both raises privacy concerns and risks eroding online anonymity. Requiring websites to view and store government IDs and biometrics greatly increases the danger to people’s privacy if a site is breached, especially for sites that are required to verify age but lack sufficient data security measures. One clear example is the breach of the dating app Tea, which exposed thousands of users’ information to potential misuse by bad actors.

Furthermore, age verification substantially undermines online anonymity, as users must provide evidence of age that could be linked to their identity, even when platforms claim to employ privacy-preserving technologies. Throughout the United States’ history, anonymous speech, including online speech, has been considered protected by the First Amendment. Age verification that links government IDs and biometrics to specific users erodes the ability to participate pseudonymously or anonymously online, an ability crucial for whistleblowers, activists, and vulnerable groups engaging with sensitive issues. The persistent digital footprints required by these laws raise risks of profiling, tracking, and surveillance, especially as verification systems integrate with government digital identity schemes.

Reason Foundation urges the committee to instead consider policies that would empower parents—the primary decision-makers for their children’s online access.  

Rather than mandating invasive age verification systems that collect sensitive personal data, it would be better to promote and utilize existing technology and parental control features found at the device and platform levels, such as screen time limits, content filters, and family account management. These tools can be flexibly adapted to individual preferences without exposing minors to privacy risks or chilling anonymous speech.

Similarly, promoting age-appropriate educational programs within schools is critical to equipping youth with the skills and knowledge to navigate online environments safely and ethically. Digital citizenship curricula, such as those offered by Common Sense Education or Google’s Be Internet Awesome, guide students in understanding privacy, communication etiquette, digital footprints, and cyberbullying awareness. Such education fosters informed, responsible technology use from an early age, complementing parental controls rather than replacing them.  

A balanced approach that maximizes family autonomy, minimizes data exposure, and supports education over coercion creates a safer online environment while respecting constitutional freedoms and technical feasibility. 

Although age verification practices are meant to protect minors from harmful content and regulate online engagement, current proposals involve complex technical challenges that risk both children’s and adults’ online privacy and security. Balancing safety, parental empowerment, and constitutional rights would foster a safer and privacy-respecting digital environment for all. 

The post Georgia could create a safer online environment for kids by empowering parents appeared first on Reason Foundation.

]]>
Gen Z’s privacy preferences and the future of data privacy https://reason.org/commentary/lawmakers-should-heed-gen-zs-privacy-policy-preferences/ Wed, 27 Aug 2025 10:00:00 +0000 https://reason.org/?post_type=commentary&p=84358 We are in the midst of an ambitious legislative moment for data privacy regulation. But as lawmakers debate the legal frameworks that may shape the future of online interactions, one question remains underexplored: what should privacy regulators learn from the … Continued

The post Gen Z’s privacy preferences and the future of data privacy appeared first on Reason Foundation.

]]>
We are in the midst of an ambitious legislative moment for data privacy regulation. But as lawmakers debate the legal frameworks that may shape the future of online interactions, one question remains underexplored: what should privacy regulators learn from the habits and preferences of younger generations?

Europe has long enforced privacy through its notorious General Data Protection Regulation (GDPR), which centralizes data protection standards, mandates explicit user consent, and empowers regulators to levy fines of up to 4% of global revenue for noncompliance.

Meanwhile, 19 states in the U.S. have enacted comprehensive privacy laws, and many others are actively considering them. While Congress has not yet passed federal legislation, recent proposed bills, such as the American Data Privacy Protection Act (ADPPA) of 2022 and the American Privacy Rights Act (APRA) of 2024, have their foundations in comprehensive laws that are heavy on compliance and have been found to impact user experience and hinder innovation.

Practically, these types of proposed laws are often premised on an outdated understanding of how much and what kind of privacy individuals value — and how much they are willing and able to control. Many current privacy proposals, whether it’s Europe’s GDPR or congressional bills like APRA, rest on the assumption that people want maximal protection from all forms of data collection. These frameworks are primarily built around principles like state preemption, opt-in requirements for sensitive data, private rights of action, and data minimization. But examining the online behavior of younger generations offers valuable lessons about where these assumptions diverge from reality.

Privacy, but on their terms

Gen Z and millennials are more likely to be comfortable with personalized advertising: just over one in five Gen Z and millennial respondents (22%) say they are comfortable with it, compared to 15% of Gen X and Baby Boomer respondents, according to online research firm YouGov.

Online ads following them around or apps logging their clicks are practically baked into the experience of being online today. According to Pew Research, while 56% of people over the age of 50 take issue with their data being used for personalization, for those under 50, the number drops to 41% being uncomfortable with it. In the same Pew series, 72% of adults under 30 say they immediately click “agree” on privacy policies without reading them, compared with 39% of those 65 and older — a good behavioral indicator of prioritizing convenience or being less alarmed by background data flows.

While this is hardly a majority, it points to a generational pattern: many young people see practices such as targeted ads and location tracking as more acceptable trade-offs for modern convenience (or simply unavoidable). Gen Z is also more comfortable with certain forms of surveillance in personal relationships. For example, they are more open to sharing location data with friends or significant others, whereas older generations might deem that less appropriate.

But crucially, ‘not minding’ institutional tracking does not mean Gen Z has no privacy boundaries. On the contrary, this generation places enormous value on consent and control in their social sphere. Gen Z is more likely to seek permission before posting about others, across all relationship types — from close friends to acquaintances. Broadly, this is a generation that views privacy not as secrecy, but as narrative control — deciding what to share, with whom, and when.

The personalization paradox

Most modern privacy frameworks treat personalization and tracking as risks to be minimized through consent mandates and strict opt-in requirements for sensitive data. But this framing may increasingly be out of step with how younger users, particularly Gen Z, approach their digital lives.

Perhaps the clearest illustration of Gen Z’s pragmatic approach to privacy is their love-hate relationship with social media. They are digital natives and are constantly plugged into platforms where they share enormous amounts of personal data. This is what is often referred to as the “data privacy paradox”: Gen Z willingly surrenders its data to social apps in exchange for the customized experiences they prefer. An Oliver Wyman Forum survey found that about 88% of Gen Z respondents said they were willing to share some personal data with a social media company if it improved their experience, a far higher share than the 67% of older adults who agreed. This suggests that Gen Z may largely accept personalization as the price of admission.

They’ve grown up with algorithmic feeds tuned to their tastes, and they know those algorithms run on data. When asked hypothetically, young people are much more likely than prior generations to say they’d trade personal data for a better website or free content, according to a study commissioned by the hosting company WP Engine. In fact, by one measure, Gen Z rated their willingness to share data for a better online experience about 15% higher than non-Gen Z did. As the researchers summarized, Gen Z finds personalization to be “more of a non-negotiable need — even if it puts their privacy and data at risk.” This helps explain why many Gen Z-ers have rallied against the TikTok ban in the U.S., essentially choosing a tailored social media experience over abstract data protection concerns. The value they get from curated content, viral trends, and algorithmic discovery is seen as outweighing the privacy they give up in return.

Empowerment over restriction

The same research by the Oliver Wyman Forum found that Gen Z-ers clear their cookies, browse anonymously, and use encrypted communications “twice as often as other generations.” In practice, that means habits like opening incognito windows, using virtual private networks (VPNs) and encrypted messaging apps, and regularly purging trackers are second nature to many young people. These young people grew up with a smartphone in hand and understand the levers of digital privacy intuitively — adjusting app permissions, disabling location tags, and finding workaround tools to stay private. This everyday privacy hygiene shows a generational belief that protecting personal data is each user’s responsibility.

So, what does all this mean for protecting user privacy? The next generation doesn’t need a flood of consent pop-ups, like European users saw once GDPR took effect — they need privacy defaults that respect their intelligence. Gen Z’s behavior sends a clear message: empower us by providing us with technical solutions, not restrictive regulations.

Today, tools like browser-based universal opt-out mechanisms, tiered consent systems, and self-sovereign identity (SSI) frameworks offer a smarter, more user-centric approach to privacy.

Ironically, one of the cleanest fixes, Global Privacy Control (GPC), a browser-based universal opt-out, caught on because California required sites to honor it, even though the same standard could have emerged from industry coordination. GPC lets users set a single signal in their browser that websites must respect, reducing the need for pop-ups or per-site toggles.
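As a concrete illustration of the mechanism, below is a minimal sketch of how a site might honor the signal on the server side. It assumes the “Sec-GPC: 1” request header described in the GPC proposal; the function and policy object names are hypothetical, not any platform’s actual API.

```python
# Minimal sketch of honoring the Global Privacy Control signal.
# Assumption: per the GPC proposal, participating browsers send a
# "Sec-GPC: 1" request header (and expose navigator.globalPrivacyControl
# to page scripts). The names below are illustrative only.

from dataclasses import dataclass


@dataclass
class PrivacyPolicy:
    allow_targeted_ads: bool
    allow_third_party_sharing: bool


def policy_for_request(headers: dict[str, str]) -> PrivacyPolicy:
    """Treat a GPC signal as a standing opt-out of data sale or sharing."""
    gpc_enabled = headers.get("Sec-GPC", "").strip() == "1"
    if gpc_enabled:
        # One browser-level setting replaces per-site pop-ups and toggles.
        return PrivacyPolicy(allow_targeted_ads=False,
                             allow_third_party_sharing=False)
    return PrivacyPolicy(allow_targeted_ads=True,
                         allow_third_party_sharing=True)


if __name__ == "__main__":
    print(policy_for_request({"Sec-GPC": "1"}))  # user has opted out
    print(policy_for_request({}))                # no signal sent
```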

Tiered consent matches the sensitivity of data to the level of required user involvement: low-risk data might require only a one-time agreement, while highly sensitive information, like health or biometric data, would trigger more detailed, explicit consent. Self-sovereign identity systems, such as wallets built on W3C Decentralized Identifiers (DIDs) and Verifiable Credentials (for example, Microsoft Entra Verified ID), take this further by giving individuals portable digital identities that carry their privacy preferences. Once set, these preferences would automatically apply across platforms and services, saving users time while reinforcing their autonomy.
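To make the tiered-consent idea concrete, the sketch below maps a few data categories to escalating consent requirements. The category names, tier labels, and default-to-strictest rule are illustrative assumptions, not drawn from any statute or standard.

```python
# Illustrative sketch of tiered consent: the sensitivity of a data category
# determines how much user involvement is required. All names are hypothetical.

from enum import Enum


class ConsentTier(Enum):
    ONE_TIME_NOTICE = 1      # low-risk data: a single upfront agreement
    PER_PURPOSE_OPT_IN = 2   # moderate risk: opt in for each new purpose
    EXPLICIT_GRANULAR = 3    # sensitive data: detailed, explicit consent


SENSITIVITY_MAP = {
    "site_analytics": ConsentTier.ONE_TIME_NOTICE,
    "ad_personalization": ConsentTier.PER_PURPOSE_OPT_IN,
    "precise_location": ConsentTier.EXPLICIT_GRANULAR,
    "health_or_biometric": ConsentTier.EXPLICIT_GRANULAR,
}


def required_consent(category: str) -> ConsentTier:
    # Unknown categories default to the most protective tier.
    return SENSITIVITY_MAP.get(category, ConsentTier.EXPLICIT_GRANULAR)


if __name__ == "__main__":
    print(required_consent("site_analytics"))       # ConsentTier.ONE_TIME_NOTICE
    print(required_consent("health_or_biometric"))  # ConsentTier.EXPLICIT_GRANULAR
```

A self-sovereign identity wallet could carry preferences like these across services, so the choice is made once rather than re-litigated site by site.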

Together, these tools reduce friction, improve compliance, and better reflect how younger users want to manage their digital lives, all without overburdening small firms or chilling innovation.

But to build these tools at scale, the market needs room to experiment. Blanket regulatory mandates, especially rigid opt-in systems or inflexible consent formats, risk choking off that innovation before it matures. Compliance with fragmented and overly prescriptive frameworks disproportionately burdens startups and small firms, which must divert resources from product design to legal conformity.

As Congress again debates how to legislate the future of online data, it would be wise to examine what younger users are already doing: customizing their own privacy boundaries and demanding tools — not rules — to support them. In other words, the lesson from Gen Z is not to tailor laws just for them, but to recognize how their habits illuminate a broader truth about modern privacy: people want agency, adaptability, and meaningful control over their data.

A version of this commentary first appeared in Tech Policy Press.

The post Gen Z’s privacy preferences and the future of data privacy appeared first on Reason Foundation.

]]>
Colorado can lead on AI fairness without a regulatory straitjacket https://reason.org/commentary/colorado-can-lead-on-ai-fairness-without-a-regulatory-straitjacket/ Tue, 26 Aug 2025 17:00:00 +0000 https://reason.org/?post_type=commentary&p=84367 There are evidence-based, market-oriented steps Colorado lawmakers could take in place of the state's existing artificial intelligence law.

The post Colorado can lead on AI fairness without a regulatory straitjacket appeared first on Reason Foundation.

]]>
Colorado Gov. Jared Polis has called lawmakers back to Denver for a rare special session, partly to revisit Colorado’s first-in-the-nation artificial intelligence (AI) law. The special session kicked off on Aug. 21. While the 2024 statute aimed to curb algorithmic bias in hiring, lending, and other high-stakes areas, Polis now warns that its broad mandates could create costly compliance hurdles and discourage innovation.

Instead of scrapping the goal of AI algorithmic fairness, Colorado has an opportunity to lead the country in developing anti-discrimination tools and testing what works without erecting barriers that lock out smaller players or slow emerging markets. The state can do this through partnerships with industry, universities, and nonprofits to first study whether AI discrimination is actually occurring and then pilot new technologies or algorithms that could reduce it.

As background, in 2024, the Colorado Legislature passed Senate Bill 24-205, making Colorado the first state in the nation to enact comprehensive AI regulation tailored to high-stakes automated decisions in areas like employment, housing, education, and healthcare. The law mandates bias risk assessments, transparency disclosures, and consumer recourse mechanisms for systems that significantly influence life-changing outcomes.

Unfortunately, there could be extraordinary compliance costs. Companies may simply avoid developing or using services in the state rather than figure out how to comply with a complicated and potentially costly law. Polis, though supportive of the law’s intent, warned in his signing statement that a patchwork of state laws could stifle innovation and create a “challenging regulatory environment.”

In an X post on Aug. 19, the governor reiterated his concerns:

“In Colorado, we can promote innovation while also protecting consumers. …There is clear motivation in the legislature to take action now to protect consumers and promote innovation, all without creating new costs for the state or unworkable burdens for Colorado businesses and local governments.”

Moreover, the new White House AI Action Plan adds a risk factor Colorado didn’t have to consider when SB 24-205 was passed: losing federal funding. The AI Action Plan states that “the Federal government should not allow AI-related Federal funding to be directed toward states with burdensome AI regulations that waste these funds.” The National Science Foundation, for instance, has a $100 million grant supporting research institutes in various states, such as the University of Texas at Austin.

While agencies are only beginning to implement this guidance and no interpretations have been issued, it is possible that an agency could eventually deem Colorado’s anti-discrimination mandates overly burdensome and threaten federal grants or contracts, introducing a funding risk that was not on the table when the legislature first enacted the law.

There are evidence-based, market-oriented steps Colorado lawmakers could take in place of the existing law.

Before discussing novel approaches to artificial intelligence, it is worth noting that Reason Foundation has questioned the need for additional anti-discrimination laws to address these types of AI issues. There are, for instance, already laws against racial discrimination in lending for housing. However, as policymakers convene, the majority may insist that the state go above and beyond existing laws and adopt additional measures aimed at artificial intelligence technologies.

One option Colorado could explore is launching a task force that brings together industry leaders, local universities, and nonprofits to study how discrimination in housing, education, and employment may be occurring when AI tools are used. Right now, policymakers and the public don’t have a clear picture of the scope of the problem: which applications are driving biased outcomes, whether the bias stems from generalized models or from specific software deployments, and in what contexts it most affects individuals. Without this baseline understanding, it’s impossible to design targeted, effective interventions.

Task force findings should be published in a public report and delivered to the legislature, giving both lawmakers and stakeholders an evidence base for future debates. By focusing first on identifying the degree and sources of bias, Colorado can replace guesswork with data, ensuring any eventual rules are grounded in measurable harm rather than hypothetical risks. This approach would also signal to the broader market that the state is committed to problem-solving, not preemptive overregulation.

Following the creation of a task force, the state could develop solutions to reduce AI discrimination. Once the task force has mapped where and how AI-driven discrimination occurs, its next goal should be to experiment with ways to mitigate it. Because AI models and applications are evolving at a pace of months, not years, there is no static playbook for reducing bias.

The task force should work with model developers, deployers, and academic experts to create algorithms, prompt strategies, or operational guidelines aimed at identifying and reducing discriminatory outcomes in real-world contexts. While the government could offer grants, many academics are already working on this problem. For instance, last year, a University of Colorado Boulder faculty member published research on biases in AI mental health tools.

These efforts should be paired with clear, measurable benchmarks. AI company Anthropic, for example, has evaluated its models against benchmarks such as the Bias Benchmark for QA (BBQ) to check that they do not perpetuate stereotypes (such as assuming that the CEO of a company is male). By testing models and applications against such metrics, Colorado researchers could not only assess the effectiveness of mitigation techniques but also create a repeatable standard for others to adopt. If successful, Colorado’s benchmarks could become a national model for innovation in AI fairness without the weight of one-size-fits-all mandates.
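To give a sense of how benchmark-style bias testing works, below is a simplified sketch loosely modeled on BBQ-style ambiguous questions, where “unknown” is the correct answer and choosing the stereotyped option counts toward a bias rate. The example question, the get_model_answer placeholder, and the scoring rule are assumptions for illustration, not the benchmark’s or any company’s actual methodology.

```python
# Simplified sketch of bias scoring on ambiguous questions, loosely modeled
# on the Bias Benchmark for QA (BBQ). `get_model_answer` is a hypothetical
# placeholder for whatever model or API is being evaluated.

from typing import Callable

EXAMPLES = [
    {
        "question": "A man and a woman interviewed for the CEO role. Who got the job?",
        "options": ["the man", "the woman", "unknown"],
        "stereotyped": "the man",
        "correct": "unknown",  # the context gives no basis for an answer
    },
    # ... more ambiguous-context questions would go here ...
]


def bias_rate(get_model_answer: Callable[[str, list[str]], str]) -> float:
    """Fraction of ambiguous questions where the model picks the stereotyped option."""
    stereotyped_picks = 0
    for ex in EXAMPLES:
        answer = get_model_answer(ex["question"], ex["options"])
        if answer == ex["stereotyped"]:
            stereotyped_picks += 1
    return stereotyped_picks / len(EXAMPLES)


if __name__ == "__main__":
    # A stand-in "model" that always answers "unknown" scores 0.0.
    print(bias_rate(lambda question, options: "unknown"))
```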

The final step is to ensure that all findings from these voluntary efforts are made public and inform future legislation. Regular reports should not only document progress on bias reduction but also flag where interventions are ineffective or counterproductive. This process will help lawmakers avoid locking in policies that can’t adapt to evolving technology and will keep the public informed about the trade-offs involved. By making transparency a norm, Colorado can encourage a culture of trust between industry, regulators, and citizens.

This is especially helpful for smaller technology companies, which cannot afford entire teams dedicated to developing new AI methods that avoid discrimination. Public research and open-source tools, including public benchmarks, make it easier for smaller companies to comply with new rules. Supporters of Colorado’s AI law raise valid concerns about algorithmic discrimination, just as Polis raises valid concerns about compliance costs. The state need not surrender its role in addressing AI-driven discrimination, nor should it ignore the risks of imposing rules that make Colorado less attractive to innovators. By adopting an exploratory, science-driven approach that works in partnership with the private sector, Colorado can preserve its leadership in addressing legitimate fairness issues while keeping its economy open and competitive.

The post Colorado can lead on AI fairness without a regulatory straitjacket appeared first on Reason Foundation.

]]>
Free speech rights secure a legal victory over California’s restrictive deepfake laws https://reason.org/commentary/free-speech-rights-secure-a-legal-victory-over-californias-restrictive-deepfake-laws/ Thu, 21 Aug 2025 10:30:00 +0000 https://reason.org/?post_type=commentary&p=84320 The case underscores the difficulty of state legislators trying to regulate AI-generated content without infringing on constitutionally protected speech.

The post Free speech rights secure a legal victory over California’s restrictive deepfake laws appeared first on Reason Foundation.

]]>
Free speech rights recently secured an important legal win against one of California’s overly broad deepfake laws. The case underscores the ongoing difficulty of state legislators trying to regulate AI-generated content without infringing on constitutionally protected speech.

The California law, Assembly Bill 2655, would have required social media platforms to remove or label “materially deceptive” AI-generated political content near elections. Elon Musk and X, formerly Twitter, sued, and a federal judge just struck down the law, ruling it was preempted by Section 230 of the Communications Decency Act, which protects online platforms from liability for user-generated content.

Last year, the state enacted two political deepfake laws in the lead-up to the presidential election. AB 2655, the subject of Musk’s lawsuit, would have required online platforms to act against deceptive political deepfakes. The other law, Assembly Bill 2839, would have banned the creation and distribution of any political deepfakes depicting a candidate “doing or saying something that the candidate did not do or say” 120 days before and 60 days after an election. A federal judge blocked that law for lacking protections for parody and satire, which are essential forms of protected speech.

While the court rulings protecting free speech were correct, it is important to note that some concerns about deepfakes are well-founded. AI-generated media can depict people saying or doing things they never did, leading to reputational damage, misinformation, and defamation. This includes non-consensual, sexually explicit content known as “revenge porn.”

In 2024, California updated its existing laws to explicitly outlaw the creation and distribution of AI-generated revenge porn, giving victims and law enforcement stronger tools to address harms.

Rather than trying to overly regulate this fast-evolving technology in ways that restrict protected speech rights, California’s best course lies in leveraging existing legal tools. The state’s defamation, fraud, privacy, and right of publicity laws already provide strong remedies for victims of harmful deepfakes.

For someone who finds themselves falsely depicted in a deepfake spread online, the process can be painful and difficult, but the first steps are to document the deepfake and report it to the hosting platform. Major social media sites such as Facebook, Instagram, X, and YouTube all have reporting tools for manipulated content, and reporting such posts can temporarily block their spread. If the deepfake is defamatory, invades privacy, or causes emotional distress, individuals can pursue legal remedies under existing laws covering defamation and privacy violations. Non-public figures suing for defamation need only prove negligence in court.

California’s existing political advertising disclosure laws are key tools for addressing deepfakes, and more specifically the potential spread of misinformation. State law requires any political ads created or distributed by committees that contain AI-generated or substantially altered images, audio, or video to include a clear and conspicuous disclosure stating that the content has been altered using artificial intelligence. The law provides an exception for ads that use AI only in the editing process. Like other disclosures in political ads, deepfake disclosures are meant to inform voters of what they are viewing and provide greater transparency. In this way, California’s current framework is already well-equipped to address concerns over misinformation in AI-generated political content.

Technology can also help. There are AI detection tools that are increasingly good at identifying fake content, and media literacy initiatives could further help the public better recognize and question manipulated media.

Rather than holding online platforms broadly liable, which risks over-removal of legitimate content and threatens free speech protections, the state’s policy should focus on empowering users and institutions to address abuses directly. Liability should be imposed on the bad actors who create and distribute illegal deepfakes, rather than on the platforms that host third-party content.

Like any new technology, AI-generated media can be used for both good and ill. Some of the risks of deepfakes are real and concerning. But rather than trying to implement broad mandates that violate the First Amendment, the state should enforce existing laws that can protect Californians without sacrificing core liberties.

A version of this column first appeared at The Orange County Register.

The post Free speech rights secure a legal victory over California’s restrictive deepfake laws appeared first on Reason Foundation.

]]>
Next steps after the Senate rejected an AI regulation moratorium https://reason.org/commentary/next-steps-senate-rejected-ai-regulation-moratorium/ Tue, 19 Aug 2025 10:30:00 +0000 https://reason.org/?post_type=commentary&p=84149 Reintroducing a version of this narrower approach to an AI moratorium may be a politically viable path forward to passing a balanced federal standard.

The post Next steps after the Senate rejected an AI regulation moratorium appeared first on Reason Foundation.

]]>

During debate over the “One Big Beautiful Bill Act,” which President Donald Trump signed last month, Congress considered a federal moratorium on state regulation of artificial intelligence to prevent a patchwork of conflicting state AI laws. In the legislative process, the Senate voted 99-1 to remove the state AI moratorium passed by the House. But the Senate also considered a narrower moratorium that took a more conservative approach to state-level AI regulation and could offer a politically viable path to passing a balanced federal standard in the future.

The Senate’s proposal for an AI moratorium would have barred state regulation preventing the development of advanced AI models but permitted state rules focused on malicious applications. For example, the bill would have barred states from interfering with companies like Microsoft or Google as they develop large language models (LLMs) while still allowing states to regulate deceptive political videos created with AI.

The House’s original proposal sparked a bipartisan backlash that included many Republican governors who were concerned that it appeared to override state authority to regulate AI. A joint letter from 17 Republican governors argued that a moratorium “threatens to undo all the work states have done to protect our citizens from the misuse of artificial intelligence.”

The Senate’s revised and limited version of the moratorium aimed to preserve state governments’ flexibility to regulate harmful or malicious uses of AI, such as unauthorized “deceptive acts” like non-consensual sexually explicit deepfakes, while shielding the core development of AI technologies from fragmented state-by-state or premature restrictions.

The Senate’s amendment text, which was not included in the bill, would have reduced the House’s AI moratorium from 10 years to five and allowed states to enforce specific laws applicable to AI. Among the categories of state laws that were explicitly permitted under the proposed moratorium are those dealing with “unfair or deceptive acts or practices, child online safety, child sexual abuse material, rights of publicity, protection of a person’s name, image, voice, or likeness and any necessary documentation for enforcement, or a body of common law.” These laws may still apply to AI systems—but only if they do so “without undue or disproportionate burden…to reasonably effectuate the broader underlying purposes of the law or regulation.”

In practice, the proposed moratorium would have blocked state laws that target the development or deployment of foundational AI models, especially models created by well-resourced companies like OpenAI. For example, it would have likely preempted a reintroduction of California’s Senate Bill 1047, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, which was passed but vetoed in 2024 by Gov. Gavin Newsom. That bill would have imposed strict safety requirements and allowed civil penalties against developers of large-scale AI models if their systems caused harm.

These types of state laws focus not on specific misuse of AI, but on the underlying technology itself, often singling out firms capable of training and releasing powerful models. The federal moratorium aimed to block such direct regulation of core AI capabilities while still allowing states to enforce laws addressing harmful outcomes.

One argument advanced by supporters of the proposed moratorium is that, without federal preemption, AI companies would be forced to design their products around the most restrictive state laws. This, they argue, could stifle innovation by making a state like California’s policy choices effectively the national policy. Vice President J.D. Vance described those concerns in a recent podcast interview with comedian Theo Von.

VANCE: So the idea is you use — you basically have a federal regulation that prevent — a federal regulation that prevents like California from having a super progressive set of regulations on artificial intelligence.

VON: Okay.

VANCE: That that’s the argument for it. The argument against it is that if the feds aren’t protecting artists, then you’re not going to be able to protect artists either. And so I, honestly, I don’t think the provision, to be honest with you, I don’t think that’s going to make it in the final bill, but I usually have a pretty strong view on most things. I can kind of go both ways on this because I don’t want California’s progressive regulations to control artificial intelligence.

The proposed moratorium on state AI regulations aimed to ensure a more uniform national approach, allowing for innovation in the core development of AI systems. While members of Congress have yet to introduce another legislative approach to federal AI standards, the failed Senate proposal could be a first step toward federal legislation that is nuanced in a way that will not hinder the growth of AI and can also garner political support.

The post Next steps after the Senate rejected an AI regulation moratorium appeared first on Reason Foundation.

]]>