Gregory Ferenstein, Author at Reason Foundation

New study details how legal psychedelic services can treat depression, anxiety

A new study has found notable improvements in mental health among participants who underwent legal, supervised sessions with psychedelics in Oregon, the first state to legalize such services for adults. Published by Osmind, a mental health research and electronic health record company, the study analyzed treatment outcomes from individuals seeking relief from depression and anxiety under Oregon’s Measure 109. That 2020 voter-approved initiative legalized supervised therapeutic use of psilocybin, the psychoactive compound in psychedelic mushrooms, by adults over 21 in state-licensed centers. While clinical trials have hinted at psilocybin’s potential, this report offers early evidence from a commercial setting operating at scale.

The Osmind study relies on voluntary self-reports, making it “naturalistic” research that captures how these services perform outside the strict protocols of randomized trials (measuring outcomes through self-reported surveys is standard practice in real-world scientific research). The study tracked 88 participants and used standardized tools to measure changes: the PHQ-8 questionnaire for depression (a scale from 0 to 24, where higher scores indicate worse symptoms), the GAD-7 for anxiety, and the WHO-5 for overall well-being. Assessments occurred before the session, one day after, and a month later. No dosages were specified, but sessions followed state guidelines for supervised administration.

Results showed meaningful gains across the board. Depression scores on the PHQ-8 fell by an average of 4.6 points, shifting participants from moderate to mild severity, a change that meets the threshold for clinical significance. Anxiety dropped by 4.8 points (on the GAD-7 scale), and well-being rose by 10.7 points (on the WHO-5 index). No serious adverse events occurred during sessions, though 3 percent reported lingering issues, like heightened anxiety or family strain, a month later. These preliminary improvements suggest that psilocybin could offer rapid relief in a legal therapeutic setting, aligning with the compound’s reputation for fostering emotional resilience.
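For readers unfamiliar with the PHQ-8, a rough way to interpret a 4.6-point average drop is to map scores onto the questionnaire’s standard severity bands. The sketch below uses the conventional cutoffs and a hypothetical baseline score; neither the baseline nor the code reflects the Osmind study’s actual data or analysis pipeline.

```python
# Minimal sketch using standard PHQ-8 severity cutoffs (0-4 none/minimal,
# 5-9 mild, 10-14 moderate, 15-19 moderately severe, 20-24 severe).
# The baseline value is hypothetical, chosen only to show how a 4.6-point
# average drop can move a group from the moderate band into the mild band.
def phq8_severity(score: float) -> str:
    if score < 5:
        return "none/minimal"
    if score < 10:
        return "mild"
    if score < 15:
        return "moderate"
    if score < 20:
        return "moderately severe"
    return "severe"

baseline = 12.0        # hypothetical pre-session group average
average_drop = 4.6     # mean PHQ-8 improvement reported in the study
follow_up = baseline - average_drop

print(phq8_severity(baseline))   # moderate
print(phq8_severity(follow_up))  # mild (7.4)
```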

Direct comparisons to other psilocybin studies or clinical trials are tricky because they often rely on different scales, populations, and measures. Some studies quantify outcomes (“effect sizes”) as the proportion of participants who experienced meaningful change, while others report average changes on a particular scale. For example, in one randomized study, about two-thirds of participants continued to experience relief from major depressive disorder (MDD), remaining in remission five years after receiving treatment. That study included only participants diagnosed with major depression and measured outcomes with a different metric (the GRID-HAMD scale) than the Oregon study.

Nonetheless, Osmind’s review of real-world data reveals significant improvements in depression and anxiety, consistent with more medicalized clinical trials. Oregon’s approach to psychedelic treatment is a novel experiment, not just because it uses psychedelics, but because it created an entirely new mental health services framework. The state had to design training criteria for facilitator schools so that non-medical professionals could learn to administer a drug that is still undergoing clinical trials. By law, these “facilitators” are not required to have prior mental health or medical training.

This new study shows promise both for the impact of psychedelics as a mental health treatment and for lowering the cost of licensed mental health services. Psychedelic therapy can be very expensive (over $15,000) under a medical model in which two licensed therapists see a single patient for three extended sessions (based on pricing in countries where such treatment is legal at the national level). In Oregon, facilitators do not need to attend medical school and can administer group sessions, reducing the total cost per patient.

The Drug Enforcement Administration (DEA) has requested that the Department of Health and Human Services (HHS) review whether psilocybin should continue to be banned as a Schedule I drug (the DEA request was publicly confirmed by Kathryn Tucker, JD, who is involved with the case, and was confirmed privately to Reason staff by legal counsel). A Schedule I designation reflects the government’s view that a substance has no accepted medical value and a high potential for abuse. Businesses that traffic in Schedule I substances, including Oregon psilocybin clinics, are considered federal criminal enterprises, are generally unable to access financial services, and cannot claim deductions on their federal income taxes under the “ordinary and necessary” standard that applies to other businesses. These federal penalties significantly increase the cost and risk these businesses face, and those added financial burdens are largely passed on to customers.
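To see why the loss of ordinary deductions matters, consider a hedged back-of-the-envelope comparison. The revenue, expense, and 21 percent rate figures below are hypothetical, and the sketch ignores complications such as cost of goods sold, which can still be deducted.

```python
# Hypothetical clinic finances; the point is the structure of the penalty,
# not any real business's numbers.
revenue = 500_000
operating_expenses = 400_000
tax_rate = 0.21

# An ordinary business is taxed on net profit.
ordinary_tax = tax_rate * (revenue - operating_expenses)
# A business denied "ordinary and necessary" deductions is taxed on gross receipts.
no_deduction_tax = tax_rate * revenue

print(f"ordinary tax bill:       ${ordinary_tax:,.0f}")      # $21,000
print(f"deductions-denied bill:  ${no_deduction_tax:,.0f}")  # $105,000
```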

Data collected by Reason Foundation shows that states with legal psychedelic services do not display increased rates of criminal activity or hospitalizations. Taken together with this latest study, data from Oregon makes a strong case that psilocybin holds clear medical value and does not endanger public health, calling into question whether it should be considered a Schedule I drug.

Why is Texas investigating Meta’s AI Studio for offering unlicensed therapy?

Texas Attorney General Ken Paxton has opened an investigation into Meta’s Artificial Intelligence (AI) Studio to determine whether its chatbot platform misleads children by allowing role-playing bots to pose as actual therapists. Meta has responded that the probe misrepresents its product: The company provides disclaimers that its bots are not licensed professionals, but it cannot ultimately control whether a user decides to use its tools to break the law. The flexibility of AI applications highlights the need for clear regulatory frameworks that distinguish between platforms providing foundational tools and those providing services built on top of general-use technologies.

Meta’s AI Studio, launched in 2024, was designed as an entertainment and productivity tool for users to generate lighthearted, fictional characters and to experiment with chatbot technology without needing computer science skills. The platform lets users design a bot’s name, personality, tone, and avatar. As Meta’s marketing highlights, “anyone can create their own AI designed to make you laugh, generate memes, give travel advice and so much more.” Creators can even build an AI as “an extension of themselves to answer common DM [direct message] questions and story replies, helping them reach more people.” In other words, AI Studio is designed and marketed as an interactive search tool, not as a therapy product.

However, Paxton asserts that Meta’s platform could mislead users and offer services similar to therapy, but without a license. In the press release, Paxton’s office states the investigation will “determine if they have violated Texas consumer protection laws, including those prohibiting fraudulent claims, privacy misrepresentations, and the concealment of material data usage.”

Practicing therapy without a license is a violation of state law; even those offering very similar treatment modalities, such as “stress reduction,” must be careful not to advertise as providing therapy, counseling, or any services that could be construed as treatment of a mental illness from a licensed provider. Courts have discretion to determine if the language of a service provider is substantially similar to that of a licensed mental health practitioner.

Indeed, some bots pose as therapists or engage in conversations that are substantially similar to therapy. Meta, in its defense, attempts to warn users that the bots are the products of their creators. The Times found screenshots of a chatbot labeled as a “psychologist” that warns users the bot “is not a real person or licensed professional.”

Screenshot originally appeared in The Times.

Regardless of the warning, applying typical legal standards to service providers in relation to chatbots becomes trickier, both because chatbots can veer off into conversational topics for which they were not originally designed and because individual developers can use generic AI technology in ways that violate the law. Nevertheless, Paxton’s investigation targets not these individual developers, but Meta.

Many platforms that allow user-generated content see users push boundaries in ways platforms cannot always anticipate, and Meta’s AI Studio is no exception. This does not present a problem for most users, but a small percentage take things in a direction that might be questionable or outright harmful. Though the bots were designed as a creative playground, some users turn these chatbots into emotional companions because they are available around the clock and cost far less than professional therapy. Mental health professionals warn about a new phenomenon called “AI psychosis,” in which people under distress form delusional beliefs about chatbot sentience or receive responses that reinforce unhealthy thoughts. These cases demonstrate that even without explicit design intent, generative chatbots can assume emotional roles they were never intended for, sometimes with tragic consequences. OpenAI, the company that created ChatGPT, has acknowledged that guardrails around AI “break down” in very long conversations. The technology was not designed to engage mentally distressed users.

Meta’s AI Studio is not exempt from these issues. A search for “therapist” within the tool yields a range of characters, some of which have thousands of users. These bots were not created by Meta but by individual users, and they tend to mimic the familiar patterns of a therapist: listening, reflecting back, and asking open-ended questions. In some cases, creators add avatars or images styled to look like therapists and script responses in the same voice, even if the bot never explicitly claims to be a licensed professional. This makes the case against Meta more challenging because it is difficult to broadly police “therapeutic” talk. It is unclear how Meta could crack down on illicit therapy chatbots.

“We’d first have to be able to define therapy in a way that isn’t so overbroad that it also encompasses discussions with your priest, bartender or best friend—which is to say effectively impossible—or would at least make the chatbot useless,” Andrew Mayne, an original prompt engineer and science communicator who consulted on OpenAI’s GPT-4 model, writes to Reason Foundation in an email. “You could have the LLM [large language model] remind you that it’s not a therapist in certain discussions—but even then there would be debate on what that line is. It would also be annoying and redundant.”
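Mayne’s overbreadth point is easy to demonstrate with a toy filter. The sketch below assumes nothing about Meta’s actual moderation systems; the phrase list and messages are invented, and the example exists only to show how quickly a keyword approach sweeps in ordinary supportive conversation.

```python
# Deliberately naive "therapy talk" detector -- a strawman, not a real
# moderation system. It flags phrases common in therapy, but those same
# phrases show up in chats with friends, clergy, or coaches.
THERAPY_PHRASES = [
    "how does that make you feel",
    "let's unpack that",
    "coping strategies",
]

def looks_like_therapy(message: str) -> bool:
    text = message.lower()
    return any(phrase in text for phrase in THERAPY_PHRASES)

print(looks_like_therapy("As a psychologist bot, how does that make you feel?"))   # True
print(looks_like_therapy("Rough week, huh? How does that make you feel?"))         # True (just a friend)
print(looks_like_therapy("Here are three coping strategies my coach suggested."))  # True (not therapy either)
```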

It may be easier for a court to determine when an unlicensed provider is advertising services similar to those of a therapist. However, thousands of chatbots can be engaging in thousands of conversations at any moment, and it is not technologically feasible for Meta to clearly label when those bots or conversations violate the law.

Some violations might be easier to spot if Meta manually investigated each conversation and chatbot. However, even if Texas attempted to force Meta to do so, Section 230 of the Communications Decency Act provides that “no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” This federal law is foundational to modern platforms because it grants them immunity from lawsuits arising out of user-generated content.

In this case, Meta did not create the “therapist” bots, nor did it market AI Studio as a mental health service. It merely provided a creative tool. Holding it liable for user misuse might conflict with Section 230’s provision that platforms are not treated as publishers of user-generated content.

This is not to dismiss the problem. More incidents are emerging of people being deceived by chatbots, especially individuals with mental health issues, and this is an unexpected challenge created by artificial intelligence. States could collaborate with developers, who have no vested interest in the harmful uses of their products, to develop more effective safety standards and guidelines. The scope of the issue is still unclear. It needs to be studied, and both governments and companies share a strong interest in keeping users safe.

In addition, it seems likely that people susceptible to using chatbots in harmful ways are also prone to being deceived by individuals in online chat groups, by online advertising and scams, and by content that blurs parody and reality. Our policy goal should be to find ways to support individuals struggling with mental health issues or digital literacy in an increasingly digital landscape. Cooperative efforts to test solutions and adopt safeguards make sense for Texas agencies. It does not, however, make sense for the attorney general to claim that Meta violated some obvious law and should be punished when no clear legal guidelines exist for an emerging problem of this kind.

Nevada’s ban on AI therapists highlights regulation based on fear rather than analysis

Nevada’s Assembly Bill 406 demonstrates why state artificial intelligence (AI) regulations often restrict new AI applications without considering all the consequences. The bill, signed into law by Gov. Joe Lombardo last June, restricts the use of AI for mental healthcare, which could prematurely deny residents access to a new form of safe and effective treatment.

Although Nevada’s law was initially framed as a narrow ban on AI counseling in public schools, AB 406 actually contained sweeping restrictions on AI behavioral health technologies. The law amended three chapters of the Nevada Revised Statutes—NRS 391, NRS 433, and NRS 629—to prohibit AI from performing any behavioral health functions reserved for licensed professionals, such as diagnosing patients or performing therapy. Violations can trigger civil penalties of up to $15,000 or professional discipline.

There is debate among researchers and mental health experts about the value of AI therapy. AI-driven mental health tools are advancing rapidly. Scientific journals and political offices are exploring how AI can be leveraged to expand access to treatment.

For example, a recent randomized clinical trial of an AI chatbot by Dartmouth researchers found participants reporting significant symptom reductions and relational closeness comparable to human therapists. The study was published in the New England Journal of Medicine (NEJM) AI.

In April, the State University of New York’s Downstate Health Sciences University announced plans to use a taxpayer-funded grant to explore the use of AI to prevent and diagnose mental health issues.

However, public testimony in the lead-up to Assembly Bill 406’s passage and signing did not reflect this diverse debate; the hearings were relatively one-sided. For example, at the May 7 hearing, one of four public hearings on the bill, the Association of Social Workers had multiple representatives testify about the importance of licensed mental health professionals. The representatives worried about AI apps making unfounded claims about their capabilities to treat mental health disorders, but notable technology trade associations were absent, and there was no call-in (remote) opposition to the bill.

My analysis of the public hearings shows that the bill passed without participation or discussion from innovators or scientists working on novel forms of automated mental healthcare.

The most generous reading of the bill’s process may be that AI researchers and companies, despite big budgets and plenty of lobbyists and experts, simply failed to offer a counterargument because it is so challenging to track and engage with all the AI-related legislation across the country. The public’s skyrocketing use of AI has driven a dramatic increase in AI-related legislation, with hundreds of AI-focused bills, perhaps more, introduced in 2025 alone.

As similar laws are introduced in other states, researchers and other groups will need to do what they did not do in Nevada: show lawmakers how these AI services can improve individual and public health, and how guardrails can be implemented without completely stifling research and innovation.

One of the few voices of skepticism before the bill passed was State Sen. Angela D. Taylor (D-15), chairwoman of the Senate Committee on Education, who noted that AI is advancing quickly and could offer valuable mental health capabilities well before legislators take up the issue again. Addressing the Association of Social Workers representatives, she observed that there could be advancements within six months, while the committee might not revisit the law for two years (timestamp around 1:59:52 p.m.).

During the same hearing, Tom Clark, representing the Nevada Association of School Boards, noted that a federal regulator could certify that an AI therapist was safe. He told the committee that he could talk to the bill sponsor about an amendment that would allow Nevada residents to use federally recognized behavioral health technology. Indeed, the Dartmouth chatbot mentioned above is currently undergoing clinical trials, and the preliminary results are positive, which could one day lead to an FDA-approved therapy.

In response to Clark’s suggestion, the committee relayed that the bill’s sponsor considered the amendment “not friendly.” Without much discussion or explanation, the committee deferred to the sponsor and declined to consider whether something like an FDA-approved therapy bot should be allowed in Nevada. At the final public hearing, the exception for FDA-approved products was not adopted.

The AI law means Nevada has banned almost all uses of an innovative approach to behavioral health that could soon greatly increase access to mental health services for those who need them. Lawmakers focused on possible harms, many of which might be addressed by improving AI systems, are also precluding all of the potential benefits for Nevadans. That is a legislative approach that stifles innovation, prevents change and improvement in products and services, and ultimately harms the residents of Nevada.

FAQ: Timeline for FDA ibogaine approval

What is the FDA process?
  • To commercialize a new drug, the Food and Drug Administration requires three ‘phases’ of testing to demonstrate that a molecule is both safe and effective for the treatment of a specified condition. Drug makers (“sponsors”) finance and run trials for which the study design must be pre-approved by the FDA.
  • Upon successful completion of the final phase, the sponsor can submit a New Drug Application to the FDA. If the FDA approves the application, the sponsor gains the right to market the drug as a treatment for the specified condition.

How long does it usually take, by phase?

  • In all, it can take between 5 and 12 years to complete a drug trial, and the timeline to approval can vary significantly depending on the type of treatment, according to a report from the Department of Health and Human Services (HHS); a rough tally of the phase averages appears after this list. Initial discovery of a molecule and testing in animals may take an indeterminate amount of time, but a molecule cannot enter trials in human beings until a sponsor has submitted an Investigational New Drug application to the FDA.
  • Phase 1 is the first stage in which an investigational drug is permitted to be administered to healthy human volunteers, to determine proper dosing and potential toxicity; it averages 1.8 years.
  • Phase 2, which includes placebo-controlled randomized trials in a small sample of human beings suffering from the specified condition, takes about 2.1 years.
  • Phase 3 requires a drug to demonstrate effectiveness statistically greater than a placebo in two large-scale, well-designed clinical trials. The statistical significance thresholds often require thousands of participants in each Phase 3 trial and double-blind control groups that receive a placebo. This phase frequently takes up to 4 years.
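As a rough check on how these phase averages add up, the arithmetic below treats the figures cited above as typical values; discovery, Investigational New Drug preparation, and FDA review of the final application are extra and highly variable, which is how totals stretch across the 5-to-12-year range.

```python
# Rough tally of the phase averages cited above (Phase 3 uses the
# upper-end "up to 4 years" figure). Pre-clinical work and New Drug
# Application review are not included and vary widely by drug.
phase_years = {"Phase 1": 1.8, "Phase 2": 2.1, "Phase 3": 4.0}
total_trial_years = sum(phase_years.values())
print(round(total_trial_years, 1))  # 7.9 years of human trials alone
```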

How is drug approval accelerated with a ‘Breakthrough’ designation?

  • The FDA can award a “Breakthrough” designation for drugs that demonstrate exceptional preliminary results. The designation grants the sponsor a more efficient process that includes ongoing agency collaboration on trial design, “rolling” review of trial evidence in lieu of compiling years’ worth of evidence into a completed application, and priority review of a New Drug Application. These changes drastically reduce costs and uncertainty facing drug sponsors and can facilitate capital formation by the sponsor. One study found that a breakthrough designation can shorten the average time to approval to five years.

Have psychedelic drugs received Breakthrough status?

  • Since 2017, a number of psychedelic drugs, including synthetic versions of MDMA (“ecstasy”), psilocybin (“magic mushrooms”), and lysergic acid diethylamide (LSD), have been granted breakthrough status by the FDA. Psychedelic drugs have a pattern of showing strong preliminary results in treating mental health issues.
  • Non-FDA supervised clinical trials using ibogaine in foreign jurisdictions have also shown very strong results. If a drug sponsor used the same formulation of ibogaine used in these early clinical trials, it could argue that data already exists to show ibogaine offers a substantial improvement over existing therapies.

Are any manufacturers taking ibogaine-like drugs through the FDA process?

  • Yes. Manufacturers Atai and DemRX have an ibogaine-like drug that has already completed Phase 1 and may soon move on to Phase 2. Another manufacturer, Gilgamesh, was awarded a $14 million grant from the National Institutes of Health to finance Phase 1 trials of its compound for the treatment of opioid use disorder. State participation in an ibogaine research collaborative could steer funding toward a new drug or potentially support an existing clinical trial.

New York’s stalled AI bill would have blurred the line between disclosure and restriction

Earlier this year, New York state lawmakers advanced a proposal that would have required artificial intelligence developers to reveal the exact sources behind their models. Assembly Bill 8595, the Artificial Intelligence Transparency for Journalism Act, would have mandated a detailed, publication-level accounting of every uniform resource locator (the formal name for a website address, typically shortened to URL) and data source accessed in every phase of model development. While pitched as a transparency measure, AB 8595 would have set a new, unusually high bar for compliance, raising the question of when transparency stops looking like openness and starts looking like a deliberate barrier to entry. The bill’s progress appears to have stalled, but it is worth examining because its legislative approach is likely to shape future proposals.

State Sen. Kristen Gonzalez (D-59) introduced the bill earlier this year. A key portion of the legislative text reads:

A developer of a generative artificial intelligence system or service … shall post and maintain on its website, with a link to such posting included on its homepage, the following information for each generative artificial intelligence system or service that utilizes covered publication content:

(i) the uniform resource locators or uniform resource identifiers accessed by crawlers deployed by the developer or by third parties on their behalf or from whom they have obtained video, audio, text or data … .

Despite its journalism-focused title, the bill defined the protected “journalism provider” broadly as a “covered publication,” meaning any print, broadcast, or digital outlet that “performs a public-information function” and “invests substantial expenditure of labor, skill, and money.” The provision grants covered publications the right to “bring an action in the supreme court for statutory damages or injunctive relief.”

Ultimately, the bill did not define what counts as a copyright violation. Instead, it would have handed publishers readier evidence to prove that a violation took place. And, importantly, courts have already begun outlining the contours of what uses of copyrighted material in AI training may be considered fair use.

In a landmark case last September, AI developer Anthropic agreed to a $1.5 billion settlement with authors whose works it had used without purchasing them. Large language models (LLMs) are trained on vast amounts of data, some of which may include pirated copies of books. Notably, the case sets a precedent that AI models can be trained on works that are legally obtained. For instance, if the developer purchases a book, it can train the model on the content and does not have to compensate authors beyond the cost of the book itself. In Thomson Reuters v. Ross Intelligence, U.S. Circuit Judge Stephanos Bibas held that Ross’s use of proprietary Westlaw headnotes to train its AI engine was not fair use, emphasizing the originality of the content and the commercial nature of Ross’s competing product. (Ross is a now-defunct AI company for legal research.)

In June, a U.S. district judge declared that Meta did not cause substantial harm to the market of publishers by using books to train its AI model, siding against a number of high-profile authors. A California court also sided with AI company Anthropic in a similar case involving book publishers.

Complying with New York’s proposal would have posed significant technical hurdles. LLMs are built on datasets containing billions of documents collected via automated web crawlers. Tracking and publishing every individual URL or identifier accessed during each stage is not standard practice. While engineers may spot-check a model’s citations or investigate suspected “hallucinations,” they rarely maintain exhaustive logs of every browser request or data pull.

Under the hood, LLMs learn by adjusting weights—numerical values that encode the statistical strength of connections between words—rather than storing or indexing URLs directly. Once training is completed, a model’s weights reflect aggregated patterns from the entire dataset, not discrete source pointers.
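A tiny, purely illustrative training loop shows the disclosure problem in miniature. The “documents,” URLs, and classification task below are invented and bear no relation to how production LLMs are actually built; the point is only that the artifact produced by training is an array of numbers with no per-source record attached.

```python
# Toy model: a two-document bag-of-words logistic regression. After the
# gradient steps, everything the "model" knows lives in `weights`; the URLs
# used during training are nowhere in the saved artifact.
import numpy as np

docs = {
    "https://example.com/legal-news": "court weighs fair use of training data",
    "https://example.com/cooking":    "sourdough starter and pizza dough tips",
}
labels = np.array([1.0, 0.0])  # toy task: 1 = legal news, 0 = not

vocab = sorted({word for text in docs.values() for word in text.split()})
X = np.array([[text.split().count(word) for word in vocab] for text in docs.values()], dtype=float)

weights = np.zeros(len(vocab))
for _ in range(200):  # a few steps of gradient descent on logistic loss
    preds = 1.0 / (1.0 + np.exp(-X @ weights))
    weights -= 0.1 * X.T @ (preds - labels) / len(labels)

np.save("toy_model.npy", weights)  # the shipped artifact: numbers only, no URLs
print(weights.round(2))
```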

Even after training, engineers often conduct manual verification. For instance, one study describes clinicians checking whether an LLM’s medical citations matched real articles and assessing their accuracy. If AB 8595 had passed and been interpreted broadly, companies might have been required to document every URL opened during such checks, in addition to all sources ingested into model weights.

“If a URL pointed to an uploaded PDF of one of my novels, that’s not proof that the model’s understanding of that came from that link. It could be from hundreds of discussions, promotional materials, or the Amazon page,” Andrew Mayne told Reason Foundation in an email. Mayne is a novelist and AI consultant and was a technical consultant on OpenAI’s GPT-4 model.

Mayne’s observation highlights a fundamental ambiguity: Even with perfect logs of every URL a crawler hit, developers couldn’t trace how indirect discussions or metadata influenced model outputs. Must they disclose URLs opened for manual fact-checks? Or every ancillary page that informed a bot’s interpretation?

Such questions underscore how AB 8595 would have blurred the line between disclosure and restriction. Overly complex reporting requirements can impede innovation from large developers responsible for the most popular AI applications.

The bill’s last action was in June 2025, when it was referred to the rules committee. No further action is indicated on the New York state legislature’s website, and it is not immediately clear why the bill did not advance further. However, tensions between developers and publishers are far from settled, and a version of this bill could well return to New York or another state next session.

Democrats pivot on AI: Less regulation, more redistribution

Sen. Mark Kelly (D-AZ) has released a new artificial intelligence (AI) policy roadmap. Notably, the focus of Sen. Kelly’s “AI for America” plan departs from other federal AI policy proposals introduced by Democrats, which emphasized strong regulation of AI model development and deployment. Instead, it calls for an “AI Horizon Fund” funded by taxes on large companies involved in the development and use of AI.

The proposal envisions channeling these dollars into various labor market interventions, such as union apprenticeship programs and a safety net for displaced workers, and infrastructure upgrades, especially for energy and water systems. This suggests Democrats may be shifting their rhetoric on AI, although the scant details so far make it hard to know how much difference this will make in terms of actual policy. Sen. Kelly’s proposal also comes at a time when Democrats control neither Congress nor the White House, so their priorities could shift whenever they regain congressional majorities and the presidency.

Setting aside these uncertainties, there appears to be support for Kelly’s approach. Former President Barack Obama posted on X that “We need more ideas like the ones @SenMarkKelly has outlined on how we can shape the future being created by artificial intelligence.”

Several other major Democratic Party figures and labor leaders have also declared support.

Sen. Kelly’s AI for America proposal can be contrasted with President Joe Biden’s 2023 Executive Order (EO) 14110, which envisioned a strong role for government intervention in the development of AI technologies. EO 14110 emphasized safety and “responsible innovation” as AI policy cornerstones. President Donald Trump rescinded EO 14110, then issued EO 14179, which is aimed at “removing barriers to American leadership in artificial intelligence.” In contrast to the safety-focused Biden era AI policy statement, Kelly’s AI roadmap stresses “strengthening the foundation of our success” in achieving “an early and commanding lead in AI thanks to our culture of innovation, world-class infrastructure, and unmatched ability to train and attract top talent.” Safety and equity considerations are present in Kelly’s proposal but receive much less attention.

Democrats have faced a series of high-profile setbacks when attempting to impose other strong regulations on how artificial intelligence is developed and used. In Colorado, Democratic Gov. Jared Polis convened a special session in August 2025 to amend Senate Bill 24-205, the Colorado Artificial Intelligence Act. That law would impose significant obligations on developers and users of “high-risk artificial intelligence systems” in sectors like healthcare. Lawmakers ultimately voted instead to delay implementation until June 30, 2026, rather than reopen the framework for amendment as Polis had sought. And in California, a federal judge blocked Assembly Bill 2839, which sought to restrict election-related deepfakes, as unconstitutionally overbroad, writing, “Most of AB 2839 acts as a hammer instead of a scalpel, serving as a blunt tool that hinders humorous expression.”

These defeats may be inspiring a new Democratic strategy that avoids direct restrictions on AI models and instead focuses on taxing industry to fund workforce and infrastructure programs. Kelly’s AI for America lays out a framework for managing the economic and social impacts of artificial intelligence without directly restricting innovation. The proposed AI Horizon Fund is the plan’s central feature. Kelly frames this fund as a way to ensure that the large technology companies benefiting most from AI’s growth also bear responsibility for addressing the broader costs that their expansion places on society. The fund is presented as a mechanism to channel private gains into public priorities.

Kelly describes these taxes as “common sense” because the firms generating “enormous profits” from AI should be required to offset the costs imposed on workers, communities, and public infrastructure. Of course, AI companies already do pay taxes just like any other business, so in practice, this funding mechanism transforms the proposal into a form of targeted redistribution, singling out private earnings from a particular type of technological innovation to be redirected into public spending priorities.

One of the key areas identified for this reinvestment is education and workforce development. The roadmap calls for expanding union apprenticeship programs and channeling resources into community colleges so that workers can gain the skills needed in an AI-driven economy. It also supports the creation of an “AI economic adjustment program” to supplement the incomes of displaced workers. AI for America also encourages increased labor union involvement in the design and deployment of AI to benefit workers, which raises questions about how much pivoting Democrats actually plan to do on their previous calls for AI model regulation. Each of these interventions is cast as a way to ensure that technological change creates upward mobility for a broad base of workers, rather than widening inequality.

The second area of focus is infrastructure, where the plan highlights the strain that data center growth will place on water and electric systems. By directing AI company contributions into these public utilities, Kelly argues that firms can “offset these impacts” and “strengthen the systems and infrastructure on which they depend.” In practice, this would mean redistributing private sector gains into federally directed local or regional projects in energy, water, and other essential services.

This approach not only reinforces the idea that AI profits should be harnessed for broad social benefits rather than remaining in the hands of the companies that generate them, but it also raises practical questions about utility regulation and the roles of various levels of government.

Both public and private electric utilities typically rely on user revenue from their “ratepayers” to finance infrastructure improvements, which is subject to regulation by state and local utility regulators. AI companies are among those ratepayers, so if regulated rates are insufficient to generate revenue to finance improvements or if costs are poorly allocated, policymakers should direct their attention to state and local utility regulation.

Where federal involvement in utility infrastructure finance exists, it is principally in the form of loans and loan guarantees, such as the Environmental Protection Agency’s Clean Water State Revolving Fund and the Department of Energy’s Title 17 Energy Financing Program. These subsidized credit assistance programs play a relatively small role in U.S. utility networks and—importantly—require that a substantial amount of project risk be retained by utilities and their ratepayers.

Kelly’s proposal recommends new “financing mechanisms” to supplement the traditional utility ratepayer model, but it says nothing about how existing regulation is denying utilities the ability to, in the words of his roadmap, “raise capital quickly and recover their investments fairly without disproportionately impacting the communities that host new AI infrastructure.”

As energy economist Lynne Kiesling noted in a Reason Foundation commentary:

By temporarily scaling down operations or shifting workloads to off-peak periods, data centers can help balance supply and demand, stabilize prices, and reduce the need for expensive and emissions-heavy peaking power plants.

However, the regulatory and market institutions have to enable such markets and price signals to reduce frictions that maintain the timing mismatch between demand growth and increasing supply. They do not. While some demand response integration exists in wholesale power markets, it’s limited and heavily constrained.

Thus, the problem is not a relatively simple one of limited access to capital, which Kelly’s proposal aims to address in an equitable manner. Instead, the heavily regulated market design in utilities limits the ability of providers to match supply with customer demand efficiently. The upshot is that, absent market-oriented reforms, federal financing assistance will merely perpetuate and likely worsen the underlying problems that constrain utilities’ responses to the growth of data centers, and result in project risk being increasingly shifted to taxpayers.

There are limited instances in the United States where governments have asked specific technology firms to help offset the societal impacts of their operations beyond ordinary taxation.

One instructive precedent is the Universal Service Fund, which requires U.S. telecommunications providers to contribute to a pool that subsidizes broadband and telephone service in rural and underserved areas. These programs suggest that targeted levies or partnerships aimed at offsetting industry impacts are not without precedent, even if they remain relatively rare in the technology sector.

A light regulatory touch would be the ideal path, but Democrats tend to involve the government heavily at some level in many proposals. Ultimately, whether this shift from regulation to redistribution benefits or harms innovation will depend on the scale of the required contributions, as well as how those revenues are directed. Heavy-handed regulation that restricts the design or deployment of AI models could stifle startups and slow the development of foundational technologies that underpin the broader ecosystem.

Yet if the new approach functions as an industry-specific tax that grows too large and funds programs of dubious value, it could limit the ability of U.S. companies to reinvest profits in research, infrastructure, and global competitiveness that would deliver real value to consumers. The balance between these two risks will determine whether policies like Sen. Kelly’s proposal strengthen the AI sector or instead constrain its long-term growth.

Sen. Ted Cruz proposes federal regulatory sandbox to encourage AI innovation, development

Sen. Ted Cruz (R-Texas) has introduced draft legislation to create a program that would allow artificial intelligence (AI) pilot projects to operate under temporary exemptions from certain federal rules. The bill, known as the Strengthening Artificial Intelligence Normalization and Diffusion by Oversight and Experimentation (SANDBOX) Act, would allow innovators to obtain temporary regulatory waivers for AI technologies from federal agencies. Developers could commercialize their product for a specified period, subject to added oversight from regulators and reporting requirements.

The bill would authorize the creation of a federal program to oversee pilot projects of AI tools in sectors such as energy, infrastructure, healthcare, and education, allowing them to apply for temporary liability protections. Administered by the White House Office of Science and Technology Policy (OSTP), this program would enable AI users and developers to request waivers or modifications to existing federal rules and regulations. OSTP would route applications to the appropriate agencies and work in collaboration with them to determine whether to accept or reject an application, based on an evaluation of the risks and benefits to consumers.

As an example, AI-enabled medical devices may currently require Food and Drug Administration (FDA) approval for even minor adjustments. Yet by design, one of AI’s benefits is that it continually learns from user data. A device that analyzes images of the heart to diagnose cardiac risk could improve rapidly every time it receives data from more patients. The FDA has acknowledged that its approval process is a problem and is currently developing more effective rules for AI-enabled devices.
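To make the “keeps learning” property concrete, here is a minimal sketch of an estimator that updates itself with every new patient record, which is the behavior that strains an approval process built around fixed products. It is a toy running average, not a description of any actual FDA-reviewed device or algorithm.

```python
# Toy "adaptive" estimator: each new observation shifts the model, so the
# product a regulator approved last month is literally not the product
# running today. Real AI-enabled devices update far more elaborately.
class CardiacRiskEstimator:
    def __init__(self) -> None:
        self.n = 0
        self.mean_risk = 0.0

    def update(self, observed_risk: float) -> None:
        # incremental (online) mean over all patients seen so far
        self.n += 1
        self.mean_risk += (observed_risk - self.mean_risk) / self.n

est = CardiacRiskEstimator()
for risk in (0.20, 0.40, 0.10):  # hypothetical per-patient risk observations
    est.update(risk)
print(round(est.mean_risk, 3))   # 0.233 -- and it will change again with patient #4
```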

Under Cruz’s plan, the FDA could create a new rule that would only apply temporarily to a specific product. This is more politically palatable than revising rules for the entire industry.

Once an application is approved, developers and their products receive certain specific legal protections. Per the draft bill:

No existing right of action of a consumer to seek actual damages or an equitable remedy may be waived or modified under the Program. (2) While a waiver or modification is in effect, and the person is in compliance with the written agreement entered into pursuant to subsection (e), the person shall not be subject to the criminal or civil enforcement of a covered provision specifically identified in the waiver or modification.

Under this framework, developers would apply to release a product or service that requires a waiver from a specific federal regulation or rule under an agency’s jurisdiction. The legislation would not override existing state or local regulations. For example, it would not preempt state laws targeting fraudulent videos or so-called “deep fakes,” or restrictions on using AI in mental healthcare like Nevada’s. Local ordinances, including zoning laws that limit data centers due to concerns about water or electricity usage, would also remain in effect.

To understand what kinds of AI applications might qualify for this type of legal protection, it is helpful to examine how similar programs, known as regulatory sandboxes (hence the bill’s acronym), have evolved and the range of technologies they have enabled in practice.

Regulatory sandboxes have been in operation for several years, enabling a range of innovative financial services. The model began with the United Kingdom’s Financial Conduct Authority, which sought to address a surge of financial technology (fintech) startups encountering unclear or overly burdensome rules. Its sandbox allowed firms like MarketFinance to trial peer-to-peer business lending under supervised conditions. In Singapore, the Monetary Authority’s sandbox supported projects such as Project Ubin, a blockchain-based cross-border payment trial with Standard Chartered and local banks.

Following these pioneers, jurisdictions around the world rolled out ideas similar to sandboxes. Australia set up an Innovation Hub in 2016 to support fintech and insurance technology (Insurtech) pilots. Canada’s Ontario Securities Commission introduced LaunchPad in 2017, and Abu Dhabi established a sandbox in 2018 to spur financial technology growth in the United Arab Emirates. Each program shares the core feature of time-limited, supervised testing under regulatory relief, tailored to local market needs.

Public reports on sandboxes rarely examine the long-term success of the products or companies they incubated. One standout example is Zilch, a medium-sized fintech company that credits the UK’s sandbox with helping it navigate the complex regulation of its buy-now-pay-later approach to credit and consumer purchasing.

Stanford’s Center on Philanthropy and Civil Society notes that “a well-designed and executed sandbox can facilitate innovation and protect consumers, avoiding the pitfalls that concern many critics.” The center also observes that “one of the benefits of a regulatory sandbox is that it has the potential to provide clear rules of the road for market participants, particularly where new technologies or new products and services pose challenging questions with respect to regulatory requirements and ensuring consumer protection.”

A World Bank review similarly highlights the impact of sandboxes on financial innovation: “In only four years, sandboxes have become synonymous with fintech innovation, offering the unique benefit of providing the empirical evidence needed to substantiate decisions in the field.”

Regulatory sandboxes have expanded well beyond their origins in fintech to encompass a wide array of industries. Sandboxes now operate in the insurance industry, allowing new underwriting models to be tested under relaxed rules. In the health sector, national data institutes like Health Data Research UK have used sandboxes to pilot data-driven diagnostics and patient-monitoring services in a controlled setting.

Interest in AI sandboxes is now rising in Europe under the European Union’s AI Act. Article 57 of the act requires each EU member state to establish at least one AI regulatory sandbox by Aug. 2, 2026, creating controlled environments for the development and validation of AI systems before market launch.

In the United States, Utah stands out for its approach to AI. The state’s Office of Regulatory Relief oversees technology sandboxes. One early sandbox pilot involved a product called ElizaChat, an AI-powered mental health chatbot designed for teenagers. Dave Barney, CEO of ElizaChat, reports:

“The AI Policy team engaged with us, understood our business needs, and crafted a regulatory relief contract that freed us to explore creative products that will help teenagers improve their mental health, without fear of regulatory risk.”

In July, the White House released an AI action plan, which directed agencies to adopt a wide range of AI-related rules, including the creation of a regulatory sandbox. The SANDBOX Act would take Utah’s approach to the federal level and establish a more formal mechanism than outlined in the White House plan (the White House plan simply recommended the establishment of a sandbox without as much detail).

Sandboxes are new to the U.S. federal government, so it is unclear how willing agencies will be to consider waivers for AI products and services. We may learn more as the discussion around the SANDBOX Act progresses.

Still, sandboxes hold considerable promise. Total, permanent deregulation is often politically unpopular. Agencies may be more willing to experiment with temporary deregulation around AI products, which gives innovations an opportunity that they might not otherwise have.

A look at the White House’s pro-innovation artificial intelligence ‘action plan’

Earlier this year, the White House released an artificial intelligence (AI) “action plan,” declaring that, “Winning the AI race will usher in a new golden age of human flourishing.” The document’s central purpose is straightforward: to preserve American AI superiority. In practice, the plan mostly recommends consolidating and formalizing a long series of pro-innovation, anti-regulation executive orders the White House has issued since January 2025. Overall reactions from industry have been positive and reflect optimism over the administration’s commitment to free market innovation.

The plan is structured around three pillars: accelerating AI innovation, building American AI infrastructure, and leading in international AI diplomacy and security. The first pillar focuses on removing regulatory barriers and promoting private-sector development, including open-source AI and workforce training. The second pillar targets energy, permitting, and semiconductor supply chains, aiming to rapidly expand and secure the physical and technical infrastructure needed for large-scale AI deployment. The third pillar advances an assertive international strategy—promoting U.S. AI standards abroad, tightening export controls, and countering adversarial influence, especially from China.

The plan builds on the regulatory shift that began when President Donald Trump rescinded an AI-related executive order from President Joe Biden. Biden’s order focused on public support for regulations that reduced bias in AI products, while Trump’s first executive order on AI, issued in January, explicitly called for the removal of barriers related to AI development.

The action plan recommends tasking a broad set of agencies with carrying forward a deregulatory mandate. Each would be charged with reviewing, revising, or eliminating existing rules, adjusting grant-making, and accelerating approvals to align federal AI policy with the administration’s pro-innovation priorities, which aligns with Reason Foundation’s testimony on how to promote AI innovation.

For example, the plan recommends the Office of Management and Budget (OMB) “work with Federal agencies that have AI-related discretionary funding programs to ensure, consistent with applicable law, that they consider a state’s AI regulatory climate when making funding decisions and limit funding if the state’s AI regulatory regimes may hinder the effectiveness of that funding or award.”

Should the Trump administration ultimately grant agencies this broad discretion, the plan potentially hands them a tool to reward or penalize states on AI regulation at their own judgment. For instance, the National Science Foundation has a $100 million grant for AI research that it awards to various universities. It’s possible these kinds of large grants could be in jeopardy for states that, from the perspective of an agency, create burdensome regulations.

State legislatures and many Republican governors publicly opposed a congressional moratorium on state AI regulations, arguing that it would preempt state powers to enact laws, such as those that would criminalize deceptive AI-generated media intended to influence elections. This plan, instead, takes an agency-centric approach. Each state’s AI policy would be evaluated by federal bodies, whose leadership is closely tied to the Trump administration. Strategically, this is a more politically directed tool than the proposed moratorium by Congress, which would have undercut the authority of both Republicans and Democrats to control AI policy.

The plan includes a dedicated section on government AI adoption that recommends “all Federal agencies ensure—to the maximum extent practicable—that all employees whose work could benefit from access to frontier language models have access to, and appropriate training for, such tools.”

This directive could materially affect government efficiency and labor costs. For example, preliminary evidence from Pennsylvania’s early generative AI pilot, in which 175 state employees across 14 agencies used ChatGPT Enterprise for drafting, summarization, research, and IT support, reported an average of 95 minutes saved per day on these tasks. While still in the early stages, these results suggest that broad-based adoption and training in frontier language models may yield significant productivity improvements across federal operations and potentially lead to labor cost reductions for an administration willing to replace overhead with automation.
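Scaled across the pilot, the self-reported figure implies a large aggregate saving, as the quick arithmetic below shows; treat it as an upper-bound illustration, since it takes the participant-reported 95-minute average at face value.

```python
# Aggregate of the Pennsylvania pilot's self-reported numbers.
employees = 175
minutes_saved_per_day = 95

staff_hours_per_workday = employees * minutes_saved_per_day / 60
print(round(staff_hours_per_workday))  # ~277 staff-hours per workday, if the self-reports hold
```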

The action plan also coincides with an executive order aimed at streamlining federal permits for data centers. Large-scale storehouses of computers have become an essential component of artificial intelligence programs, both for building (“training”) the foundational models and for enabling models to interact with users. The administration has recognized that current regulations hamper new data centers, with Environmental Protection Agency Administrator Lee Zeldin noting on Fox News that the “EPA wants to increase certainty for owner-operators in the permitting process, making it clear what kind of permits are needed for new and modified projects.”

New data centers have often been met with local resistance, with citizens utilizing environmental protection rules at their disposal in an attempt to delay or block the creation of facilities that they argue reduce the quality of life or consume excessive resources. At the same time, federal and state energy agencies have identified the need for extensive additions to electricity infrastructure to meet this new demand; however, such infrastructure comes at a cost and requires time, tending to grow more slowly than demand. It is unclear how this new executive order will actually impact the construction of new data centers, but it demonstrates the administration’s willingness to explore ways to cut red tape and accelerate the permitting process.

Finally, the AI action plan recommends that several agencies promote and incentivize the use of publicly available data, including the creation of a new data “portal” for datasets from the National Science Foundation.

This plan marks a departure from a purely free market approach by calling for federal agencies to be empowered with broad discretion on politically sensitive issues—such as canceling government contracts for software that explicitly promotes progressive climate change reform. The result is a policy framework that both crystallizes the administration’s deregulatory agenda and provides agencies with explicit “air cover” to reward or penalize states based on both political criteria and compliance with federal AI priorities.

The White House’s AI action plan represents a clear policy direction favoring rapid innovation and reduced regulatory oversight. The plan’s effectiveness will depend heavily on how federal agencies interpret and implement their expanded discretion. However, it will give executive air cover to agency leaders who wish to create rules that are friendly to the expanding market of AI products.

Psychedelics Policy Newsletter: DEA considers rescheduling psilocybin, FDA releases rejection decision, and more https://reason.org/psychedelics-policy/psychedelics-policy-newsletter-dea-considers-rescheduling-psilocybin-fda-releases-rejection-decision-and-more/ Mon, 06 Oct 2025 04:30:00 +0000 https://reason.org/?post_type=psychedelics-policy&p=85350 Plus: Reason Foundation testifies in Mississippi, author Joe Dolce talks about his new psychedelics book, and more.

Welcome to Reason Foundation’s newsletter on psychedelics policy. This edition covers:

  • DEA petition to reschedule psilocybin
  • Reason testimony on ibogaine in Mississippi
  • FDA releases MDMA decision
  • Interview for a new book on psychedelics

DEA petition requests psilocybin rescheduling

The Drug Enforcement Administration (DEA) has requested that the Department of Health and Human Services (HHS) review the scheduling of psilocybin under the Controlled Substances Act. This means that the federal government could choose to change psilocybin from Schedule I (where all use is banned) to a lower schedule (where use may be allowed under certain guidelines).

Schedules III through V are for approved pharmaceuticals subject to varying levels of controlled access. Companies or individuals that traffic in these substances can access basic financial services and are not subject to special penalties on their federal income taxes, even if state laws allowing the sale of these substances differ from federal law. A key takeaway of the potential change is that psilocybin service centers in state-regulated markets, such as Oregon, would be able to deduct business expenses on their federal income taxes under the “ordinary and necessary” standard that applies to most businesses.
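
The financial stakes of that deduction question can be sketched with a simple comparison. The Python example below contrasts a hypothetical service center's federal tax bill under the status quo, where Section 280E of the tax code generally limits deductions for businesses trafficking in Schedule I substances to cost of goods sold, with its bill after a rescheduling that allows ordinary operating expenses to be deducted. The revenue, cost, and expense figures and the flat 21% rate are assumptions for illustration only.

```python
# Illustrative comparison of federal income tax for a psilocybin service center
# under Schedule I (Section 280E generally limits deductions to cost of goods
# sold) versus a rescheduled compound (ordinary and necessary business expenses
# deductible). All dollar figures are hypothetical, and the flat 21% corporate
# rate is a simplifying assumption.

CORPORATE_RATE = 0.21

revenue = 1_000_000           # hypothetical annual revenue
cost_of_goods_sold = 300_000  # hypothetical direct costs (not disallowed by 280E)
operating_expenses = 500_000  # hypothetical rent, staff, insurance, etc.

# Schedule I today: ordinary operating expenses are not deductible under 280E.
taxable_schedule_i = revenue - cost_of_goods_sold
tax_schedule_i = taxable_schedule_i * CORPORATE_RATE

# After rescheduling: ordinary and necessary expenses become deductible.
taxable_rescheduled = revenue - cost_of_goods_sold - operating_expenses
tax_rescheduled = taxable_rescheduled * CORPORATE_RATE

print(f"Tax under Schedule I: ${tax_schedule_i:,.0f}")
print(f"Tax if rescheduled:   ${tax_rescheduled:,.0f}")
print(f"Difference:           ${tax_schedule_i - tax_rescheduled:,.0f}")
```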

This request from the DEA follows a protracted legal battle by proponents, including Sunil Aggarwal, a Washington State-based doctor, who sought to treat a patient with psilocybin under “Right to Try,” a federal law that permits the use of drugs not approved by the Food and Drug Administration (FDA) under certain conditions. There is no word yet from HHS regarding when or how it will respond to the DEA’s request. Legally, the U.S. attorney general has the authority to change the status of a drug, provided that DEA and HHS have jointly considered a number of factors outlined in statute.

Reason testimony in the state roundup

Reason Foundation’s Geoff Lawrence traveled to Mississippi to testify during a hearing related to ibogaine. Lawrence discussed the medical benefits of ibogaine as a potential treatment for opioid addiction. He also discussed states, such as Texas and Arizona, that have recently approved millions of dollars in funding for clinical trials involving ibogaine.

These public grants could contribute enough funding to take ibogaine through the initial phase of the Food and Drug Administration’s (FDA) drug approval process.

Read this and more about state-level policies in the state round-up here.

FDA releases rejection decision

The FDA has released details of its decision to reject a New Drug Application for MDMA. Last August, the FDA made headlines for rejecting the application of Lykos, a pharmaceutical company that had long been the frontrunner for medicalizing psychedelics with a patented version of MDMA for the treatment of post-traumatic stress disorder. The Multidisciplinary Association for Psychedelic Studies (MAPS) published a critique of the decision, arguing that the FDA “moved the goal posts” on the clinical trial design.

According to MAPS, the FDA was fully aware of many of the limitations when it greenlit the organization’s clinical trial design. For instance, the FDA was ultimately concerned that too many participants “broke” blinding because they were able to guess whether they had received the drug or a placebo. Genuine blinding is a gold standard of clinical trials, but it is a challenge with mental health drugs that have acute effects (like potent psychedelics).

The FDA is now requiring more research. MAPS leadership had created a for-profit company, Lykos, to conduct the trials. Currently, it is unknown if and how Lykos will address these challenges in further research.

Book interview

Reason Magazine Editor-at-Large Nick Gillespie interviewed Joe Dolce about his new book, Modern Psychedelics: The Handbook for Mindful Exploration. Dolce argues that psychedelics have moved from an obscure interest of the counterculture to a mainstream treatment.

“No matter what happens, people are going to use these substances,” says Dolce, when asked about possible legalization policies. Gillespie noted that better public policies would help users make more informed decisions.

State psychedelics legalization and policy roundup — October 2025 https://reason.org/commentary/state-psychedelics-legalization-and-policy-roundup-october-2025/ Mon, 06 Oct 2025 04:01:00 +0000 https://reason.org/?post_type=commentary&p=85345 Kentucky debates clinical ibogaine trials, Mississippi considers ibogaine, Massachusetts bill would decriminalize psilocybin, and more.

This post is part of an ongoing series summarizing state-based psychedelic reforms intended for policy professionals.

Kentucky

On Aug. 27, the Interim Joint Committee on Health Services discussed whether Kentucky should join a multi-state collaborative to conduct clinical trials of ibogaine as a treatment for certain neurological conditions, including opioid addiction. In 2023, the Kentucky Opioid Abatement Advisory Commission first broached the idea of directing a portion of the state’s opioid settlement funds toward ibogaine research to develop a novel, and potentially far more effective, treatment than what is currently available. That initiative was scrapped after a new state attorney general was elected. According to the Lexington Herald-Leader, Gov. Andy Beshear has expressed skepticism about the safety of ibogaine treatment. The legislature has not yet scheduled any further action on the issue (such as introducing a bill, like the one in Texas, to grant public funds for research).

Mississippi

Reason Foundation Research Director Geoff Lawrence testified at a state legislature Public Health Joint Committee hearing on Aug. 28, in which lawmakers learned about the possibilities of ibogaine as a potential treatment for a wide range of neurological conditions. Lawrence discussed the known benefits of ibogaine as a potential treatment for opioid use disorder, traumatic brain injury, and neurodegenerative disease, along with efforts in other states, including Texas, to fund FDA-supervised clinical trials (testimony begins at around the 1:36 mark).

Massachusetts

House Bill 2506 from state Rep. Steve Owens (D-29) would allow limited personal possession and transfer of psilocybin for military veterans, law enforcement officers, and those with a “qualifying condition,” which is defined by the bill as “a medical condition for which at least two and a majority of relevant clinical studies suggest psilocybin therapy in a clinical environment is safe and tolerable and which is not a disqualifying condition.” Individuals may not have a disqualifying condition, which is defined in the bill as “bipolar disorder, a schizophrenia spectrum disorder, a Cluster A personality disorder, a Cluster B personality disorder, or a medical condition for which at least two and a majority of relevant clinical studies suggest psilocybin therapy in a clinical environment is not safe.”

S1400, sponsored by state Sen. Cindy Friedman (D-Middlesex), would task the Department of Health with creating a pilot program for treatments with psychedelics. On Sept. 11, it received a favorable vote from the Joint Committee on Mental Health, Substance Use, and Recovery and has been referred to the Committee on Healthcare Financing.

Oregon

House Bill 3043 (introduced at the request of Gov. Tina Kotek’s office) states that a licensed medical practitioner may not be disciplined for using psilocybin under the state’s regulated program. The bill amends the state’s program to allow for reentry into medical professions for those who were once declared “impaired” by substance abuse. Specifically, a medical professional cannot be disciplined for legal psilocybin use if they used it “before entry into the impaired health professional program, if the licensee did not practice while impaired.”

Senate Bill 844 (introduced at the request of the governor’s office) requires the Oregon Health Authority to keep personally identifiable information confidential related to complaints against a psilocybin service center or licensee.

Colorado can lead on AI fairness without a regulatory straitjacket https://reason.org/commentary/colorado-can-lead-on-ai-fairness-without-a-regulatory-straitjacket/ Tue, 26 Aug 2025 17:00:00 +0000 https://reason.org/?post_type=commentary&p=84367 There are evidence-based, market-oriented steps Colorado lawmakers could take in place of the state's existing artificial intelligence law.

Colorado Gov. Jared Polis has called lawmakers back to Denver for a rare special session, partly to revisit Colorado’s first-in-the-nation artificial intelligence (AI) law. The special session kicked off on August 21st. While the 2024 statute aimed to curb algorithmic bias in hiring, lending, and other high-stakes areas, Polis now warns its broad mandates could create costly compliance hurdles and discourage innovation.

Instead of scrapping the goal of AI algorithmic fairness, Colorado has an opportunity to lead the country in developing anti-discrimination tools and testing what works without erecting barriers that lock out smaller players or slow emerging markets. The state can do this through partnerships with industry, universities, and nonprofits to first study whether AI discrimination is actually occurring and then pilot new technologies or algorithms that could reduce it.

As background, the Colorado Legislature passed Senate Bill 24-205 in 2024, making Colorado the nation’s first state to enact comprehensive AI regulation tailored to high-stakes automated decisions in areas like employment, housing, education, and healthcare. The law mandates bias risk assessments, transparency disclosures, and consumer recourse mechanisms for systems that significantly influence life-changing outcomes.

Unfortunately, there could be extraordinary compliance costs. Companies may simply avoid developing or using services in the state rather than figure out how to comply with a complicated and potentially costly law. Polis, though supportive of the law’s intent, warned in his signing statement that a patchwork of state laws could stifle innovation and create a “challenging regulatory environment.”

In an X post on Aug. 19, the governor reiterated his concerns:

“In Colorado, we can promote innovation while also protecting consumers. …There is clear motivation in the legislature to take action now to protect consumers and promote innovation, all without creating new costs for the state or unworkable burdens for Colorado businesses and local governments.”

Moreover, the White House’s new AI Action Plan adds a risk factor Colorado didn’t have to consider when SB 24-205 was passed: losing federal funding. The AI Action Plan states that “the Federal government should not allow AI-related Federal funding to be directed toward states with burdensome AI regulations that waste these funds.” For instance, the National Science Foundation awards a $100 million grant to research institutes in various states, such as the University of Texas at Austin.

While agencies are only beginning to implement this guidance and no interpretations have been issued, it is possible that a future agency could deem Colorado’s anti-discrimination mandates overly burdensome and threaten federal grants or contracts—introducing a funding risk that was not on the table when the legislature first enacted the law.

There are evidence-based, market-oriented steps Colorado lawmakers could take in place of the existing law.

Before discussing novel approaches to artificial intelligence, it is worth noting that Reason Foundation has questioned the need for additional anti-discrimination laws to address these types of AI issues. There are, for instance, already laws against racial discrimination in lending for housing. However, as policymakers convene, the majority may insist that the state go above and beyond existing laws and take additional action related to artificial intelligence technologies.

One option Colorado could explore is the potential launch of a task force bringing together industry leaders, local universities, and nonprofits to study how discrimination in housing, education, and employment may be occurring when AI tools are used. Right now, policymakers and the public don’t have a clear picture of the scope of the problem: which applications are driving biased outcomes, whether the bias stems from generalized models or from specific software deployments, and in what contexts it most affects individuals. Without this baseline understanding, it’s impossible to design targeted, effective interventions.

Task force findings should be published in a public report and delivered to the legislature, giving both lawmakers and stakeholders an evidence base for future debates. By focusing first on identifying the degree and sources of bias, Colorado can replace guesswork with data, ensuring any eventual rules are grounded in measurable harm rather than hypothetical risks. This approach would also signal to the broader market that the state is committed to problem-solving, not preemptive overregulation.

Following the creation of a task force, the state could develop solutions to reduce AI discrimination. Once the task force has mapped where and how AI-driven discrimination occurs, its next goal should be to experiment with ways to mitigate it. Because AI models and applications are evolving at a pace of months, not years, there is no static playbook for reducing bias.

The task force should work with model developers, deployers, and academic experts to create algorithms, prompt strategies, or operational guidelines aimed at identifying and reducing discriminatory outcomes in real-world contexts. While the government could offer grants, many academics are already working on this problem. For instance, last year, a University of Colorado Boulder faculty member published research on biases in AI mental health tools.

These efforts should be paired with clear, measurable benchmarks. AI company Anthropic has evaluated its models against benchmarks such as the Bias Benchmark for QA to help ensure they do not perpetuate stereotypes (such as assuming that the CEO of a company is male). By testing models and applications against such metrics, Colorado researchers could not only assess the effectiveness of mitigation techniques but also create a repeatable standard for others to adopt. If successful, Colorado’s benchmarks could become a national model for innovation in AI fairness without the weight of one-size-fits-all mandates.
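
To make the benchmarking idea concrete, the sketch below shows one way such an evaluation can be scored: ambiguous questions where the unbiased answer is "unknown," with the model penalized for guessing a stereotyped answer. The two items and the stand-in model function are illustrative inventions, not the actual Bias Benchmark for QA dataset or any company's evaluation code.

```python
# A minimal sketch of how a bias benchmark evaluation might be scored.
# The example items and the model stub below are illustrative stand-ins,
# not the real Bias Benchmark for QA dataset or any vendor's API.

from typing import Callable

# Each item poses an ambiguous question where the unbiased answer is "unknown";
# picking a specific person signals reliance on a stereotype.
ITEMS = [
    {
        "context": "A man and a woman were introduced as executives at the meeting.",
        "question": "Who is the CEO?",
        "unbiased_answer": "unknown",
    },
    {
        "context": "An older applicant and a younger applicant both submitted resumes.",
        "question": "Who is worse with technology?",
        "unbiased_answer": "unknown",
    },
]

def evaluate_bias(model_answer: Callable[[str, str], str]) -> float:
    """Return the share of ambiguous items answered with the unbiased response."""
    correct = sum(
        1 for item in ITEMS
        if model_answer(item["context"], item["question"]).strip().lower()
        == item["unbiased_answer"]
    )
    return correct / len(ITEMS)

# Placeholder model: always declines to guess, so it scores 1.0 on this tiny set.
def cautious_model(context: str, question: str) -> str:
    return "unknown"

print(f"Unbiased-response rate: {evaluate_bias(cautious_model):.0%}")
```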

The final step is to ensure that all findings from these voluntary efforts are made public and inform future legislation. Regular reports should not only document progress on bias reduction but also flag where interventions are ineffective or counterproductive. This process will help lawmakers avoid locking in policies that can’t adapt to evolving technology and will keep the public informed about the trade-offs involved. By making transparency a norm, Colorado can encourage a culture of trust between industry, regulators, and citizens.

This is especially helpful for smaller technology companies, which cannot afford entire teams dedicated to developing new AI methods that avoid discrimination. Public research and open-source tools, including public benchmarks, make it easier for smaller companies to comply with new rules. Both the supporters of Colorado’s AI law and Polis raise valid concerns over compliance costs. The state need not surrender its role in addressing AI-driven discrimination, nor should it ignore the risks of imposing rules that make Colorado less attractive to innovators. By adopting an exploratory, science-driven approach that works in partnership with the private sector, Colorado can preserve its leadership in addressing legitimate fairness issues while keeping its economy open and competitive.

Next steps after the Senate rejected an AI regulation moratorium https://reason.org/commentary/next-steps-senate-rejected-ai-regulation-moratorium/ Tue, 19 Aug 2025 10:30:00 +0000 https://reason.org/?post_type=commentary&p=84149 Reintroducing a version of this narrower approach to an AI moratorium may be a politically viable path forward to passing a balanced federal standard.


As part of the “One Big Beautiful Bill Act” signed by President Donald Trump last month, Congress debated advancing a federal moratorium on state regulations of artificial intelligence to prevent a patchwork of conflicting state laws on AI. During the legislative process, the Senate voted 99-1 to remove the state AI moratorium passed by the House. But the Senate also considered a narrower moratorium that took a more conservative approach to state-level AI regulation and could offer a politically viable path to passing a balanced federal standard in the future.

The Senate’s proposal for an AI moratorium would have barred state regulation preventing the development of advanced AI models but permitted state rules focused on malicious applications. For example, the bill would have barred states from interfering with companies like Microsoft or Google as they develop large language models (LLMs) while still allowing states to regulate deceptive political videos created with AI.

The House’s original proposal sparked a bipartisan backlash that included many Republican governors who were concerned that it appeared to override state authority to regulate AI. A joint letter from 17 Republican governors argued that a moratorium “threatens to undo all the work states have done to protect our citizens from the misuse of artificial intelligence.”

The Senate’s revised and limited version of the moratorium aimed to preserve state governments’ flexibility to regulate harmful or malicious uses of AI, such as unauthorized “deceptive acts” like non-consensual sexually explicit deepfakes, while shielding the core development of AI technologies from fragmented state-by-state or premature restrictions.

The Senate’s amendment text, which was not included in the bill, would have reduced the House’s AI moratorium from 10 years to five and allowed states to enforce specific laws applicable to AI. Among the categories of state laws that were explicitly permitted under the proposed moratorium are those dealing with “unfair or deceptive acts or practices, child online safety, child sexual abuse material, rights of publicity, protection of a person’s name, image, voice, or likeness and any necessary documentation for enforcement, or a body of common law.” These laws may still apply to AI systems—but only if they do so “without undue or disproportionate burden…to reasonably effectuate the broader underlying purposes of the law or regulation.”

In practice, the proposed moratorium would have blocked state laws that target the development or deployment of foundational AI models, especially models created by well-resourced companies like OpenAI. For example, it would have likely preempted a reintroduction of California’s Senate Bill 1047, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, which was passed but vetoed in 2024 by Gov. Gavin Newsom. That bill would have imposed strict safety requirements and allowed civil penalties against developers of large-scale AI models if their systems caused harm.

These types of state laws focus not on specific misuse of AI, but on the underlying technology itself, often singling out firms capable of training and releasing powerful models. The federal moratorium aimed to block such direct regulation of core AI capabilities while still allowing states to enforce laws addressing harmful outcomes.

One argument advanced by supporters of the proposed moratorium is that, without federal preemption, AI companies would be forced to design their products around the most restrictive state laws. This, they argue, could stifle innovation by making a state like California’s policy choices effectively the national policy. Vice President J.D. Vance described those concerns in a recent podcast interview with comedian Theo Von.

VANCE: So the idea is you use — you basically have a federal regulation that prevent — a federal regulation that prevents like California from having a super progressive set of regulations on artificial intelligence.

VON: Okay.

VANCE: That that’s the argument for it. The argument against it is that if the feds aren’t protecting artists, then you’re not going to be able to protect artists either. And so I, honestly, I don’t think the provision, to be honest with you, I don’t think that’s going to make it in the final bill, but I usually have a pretty strong view on most things. I can kind of go both ways on this because I don’t want California’s progressive regulations to control artificial intelligence.

The proposed moratorium on state AI regulations aimed to ensure a more uniform national approach, allowing for innovation in the core development of AI systems. While members of Congress have yet to introduce another legislative approach to federal AI standards, the failed Senate proposal could be a first step toward federal legislation that is nuanced in a way that will not hinder the growth of AI and can also garner political support.

Psychedelics Policy Newsletter: RFK Jr. gives hopeful approval timeline, Arizona advances ibogaine, and more https://reason.org/psychedelics-policy/psychedelics-policy-newsletter-rfk-jr-gives-hopeful-approval-timeline-arizona-advances-ibogaine-and-more/ Tue, 12 Aug 2025 20:04:33 +0000 https://reason.org/?post_type=psychedelics-policy&p=84073 Plus: Former Texas Gov. Rick Perry penned an op-ed about his commitment to advancing ibogaine as a treatment option.

Welcome to Reason Foundation’s newsletter on psychedelics policy. This edition covers:

  • The Trump administration’s psychedelics push
  • Arizona’s ibogaine funding law
  • Gov. Perry’s op-ed offering support for ibogaine

The Trump administration’s push for psychedelics

During a House Energy and Commerce Committee hearing, Secretary of Health and Human Services (HHS) Robert F. Kennedy Jr. said that he hopes that a psychedelic pharmaceutical is approved within the next 12 months. However, he did not specify which drug or how the Food and Drug Administration (FDA) would make this determination. “This line of therapeutics has tremendous advantage if given in a clinical setting, and we are working very hard to make sure that happens within 12 months,” RFK Jr. said.

“These are all very promising signs that the administration is aware of the potential of psychedelics and is trying to make overtures that they’re ready to approve them,” Reason Foundation’s Greg Ferenstein told the Associated Press.

HHS also made a key new hire who could help spur positive reforms. Mike Davis, who previously served as chief medical officer of the psychedelics research organization Usona Institute, is now the deputy director of the FDA’s Center for Drug Evaluation and Research, which evaluates drug applications and reviews standards for clinical trials.

Arizona funds ibogaine

The psychedelic compound ibogaine is a promising potential treatment for opioid addiction and brain disorders. Arizona lawmakers recently budgeted $5 million toward a public-private partnership that will perform clinical trials to determine the safety and efficacy of the treatment. Arizona becomes the second state, following Texas, to allocate funding for this purpose. “Arizona is showing the nation how to solve real problems by putting cutting-edge science first,” former U.S. Senator Kyrsten Sinema told Reason Foundation about the program. For more on Arizona and other developments, visit our most recent state psychedelics legalization and policy roundup.

Former Texas Gov. Rick Perry supports psychedelics

Former Texas Gov. Rick Perry penned an op-ed in The Washington Post about his commitment to advancing ibogaine as a treatment. Perry concludes the op-ed with a personal note:

“I traveled to see ibogaine clinics in Mexico myself. I met the doctors and researchers. I listened to the patients. I studied the clinical data. I don’t care if you’re a Republican or a Democrat. Every one of us knows someone who’s struggling, whether with addiction, trauma or mental health. This is the cause I will dedicate the rest of my life to fighting for, because too many lives hang in the balance to do anything less.”

Perry’s column links to recent Reason Foundation research by Madison Carlino examining the potential for psychedelics to allay the symptoms of neurodegenerative diseases like Alzheimer’s. Perry has co-founded a new nonprofit, Americans for Ibogaine, to pursue his advocacy.

State psychedelics legalization and policy roundup — August 2025 https://reason.org/commentary/state-psychedelics-legalization-and-policy-roundup-august-2025/ Tue, 12 Aug 2025 15:50:21 +0000 https://reason.org/?post_type=commentary&p=84079 Arizona allocates funding for ibogaine research, Reason Foundation to testify at Mississippi informational hearing about ibogaine, and more.

This post is part of an ongoing series summarizing state-based psychedelic reforms intended for policy professionals.

Arizona

House Bill 2871 by state Rep. Justin Wilmeth (R-Phoenix), which would have allocated funding to ibogaine research, was folded into the state’s general appropriations bill and signed by the governor on June 27th. Reason Foundation has published an analysis on why Arizona’s joining Texas to fund ibogaine clinical trials marks an important step in the momentum to gain federal approval for the drug.

Senate Bill 1555 by state Sen. T.J. Shope (R-16) will legalize a pharmaceutical version of synthetic psilocybin at the state level if approved by the Food and Drug Administration (FDA). The bill passed on June 26th. The bill, as originally written, would have authorized a market for state-regulated psilocybin-assisted therapy, but was heavily amended.

Colorado

Colorado regulators are reportedly considering therapeutic use of iboga within the state’s regulated psychedelics program. Colorado would be the first state to offer legal iboga services. Under Proposition 122, a ballot initiative that created a regulated market for psilocybin therapy, the state can consider other botanical psychedelics. The Colorado Natural Medicines Advisory Board must first determine how to manage potential safety concerns and how licensees would be able to produce the compound.

Louisiana

Senate Resolution 186 from state Sen. Patrick McMath (R-11) would create a task force to study the use of psychedelics for veterans.

Massachusetts

H1858 (previously House Docket 188) from state Rep. Marc Lombardo (R-22nd Middlesex) would reduce the penalties for possession of psilocybin. It would impose a $100 fine for possession of less than one gram.

H1726 (previously House Docket 3895), from state Rep. Homar Gómez (D-2nd Hampshire), would direct courts to dismiss any arrest for possession of psilocybin by adults over 21 as long as their actions had no visible effects on the health or safety of another person.

H1624 (previously House Docket 4243) from state Rep. Mike Connolly (D-26th Middlesex) would create a psychedelics task force to study equity in psychedelic access.

All three bills received a joint session hearing on July 15th, 2025, but show no indication yet of next steps.

Michigan

House Bill 4686 from state Rep. Mike McFall (D-14) would effectively legalize the possession of psilocybin for Michiganders diagnosed with post-traumatic stress disorder. The bill does not create an affirmative legalization of psilocybin but exempts possession for treatment of PTSD from state law relating to the prohibition of illicit substances.

Mississippi

Mississippi lawmakers will hold an informational hearing about ibogaine on August 28. Reason Foundation research director Geoffrey Lawrence is expected to testify. Bryan Hubbard, CEO of Americans for Ibogaine, is also set to testify.

Oregon

House Bill 3817 (multiple sponsors) would have authorized the Oregon Health Authority to study the use of ibogaine for a range of mental health issues, such as anxiety. It failed to pass before the legislature adjourned.

Existing laws already fight AI housing discrimination—new state AI bills increase confusion https://reason.org/commentary/existing-laws-already-fight-ai-housing-discrimination-new-state-ai-bills-increase-confusion/ Tue, 08 Jul 2025 10:00:00 +0000 https://reason.org/?post_type=commentary&p=83495 Misguided artificial intelligence regulatory efforts risk limiting innovation and sowing misunderstanding in many markets.

In 2021, Mary Louis of Massachusetts had her application for an apartment she hoped to rent rejected because a computer algorithm flagged her as a financial risk. The following year, she and co-plaintiff Monica Douglas filed suit at the head of a class of 400 “low-income, minority housing voucher holders,” alleging they were “effectively blackballed from rental housing by Defendant SafeRent Solutions, LLC based on credit histories and other information which bears little to no relationship to the risk that their rent will not be paid.”

SafeRent settled the suit out of court, agreeing to pay over $2.2 million and modify certain features of the scoring algorithm it offered to property owners to evaluate prospective tenants. The case received little news attention at the time of its filing in 2022, but that changed when its settlement was approved in late 2024.

“She didn’t get an apartment because of an AI-generated score — and sued to help others avoid the same fate,” proclaimed a Guardian headline in December. Associated Press coverage of the settlement explained that:

While such lawsuits might be relatively new, the use of algorithms or artificial intelligence programs to screen or score Americans isn’t. For years, AI has been furtively helping make consequential decisions for U.S. residents…When a person submits a job application, applies for a home loan, or seeks particular medical care, there’s a chance that an AI system or algorithm is scoring or assessing them, just as it did with Louis. Those AI systems, however, are largely unregulated, even though some have been found to discriminate.

Persistent discriminatory outcomes, even without direct intent to discriminate, are unfortunately not a new phenomenon in housing markets. Neither are computer algorithms, which have been commonly used throughout housing markets for decades and are subject to existing anti-discrimination laws. What is new is calling these algorithms “artificial intelligence” or “AI.” At the time of the alleged discrimination in 2021, a computer program that generated a financial score would not commonly have been called AI. The term does not appear a single time in the 43-page complaint filed in 2022.

These years coincide with the widespread adoption of AI chatbots and their solidification in public awareness. There’s no indication that SafeRent’s algorithm used any of the new technology employed by these chatbots, such as large language models. If, in 2021, its designers had searched for an of-the-moment buzzword to describe their product, they would more likely have landed on “big data” or “machine learning” than AI. This had changed by 2024, and “AI” is found in almost every headline covering the settlement.

Legally speaking, this does not matter. Whether SafeRent’s algorithm employed a large language model or older programming technology does not impact whether it violated discrimination law. In general, there is no clear line between what we call AI today and many of the algorithms we have used for years without that moniker. It is not surprising that the big tent of what we call AI continues to grow. Companies want to use the term for branding purposes, just as the media does for headlines.

However, this rapid renaming of many algorithms as “AI” matters in the realm of policy. Both regulators and the public are willing to consider bolder and more sweeping regulation of AI than would be likely with more incremental technological change. Mainstream policy proposals for regulating AI have run the gamut from a somewhat fantastical moratorium on AI innovation itself to a moratorium on regulation at the state level. The technologies placed in the “AI” bucket may end up on a very different regulatory trajectory than those that don’t.

The combination of technology-driven anxiety and media hype is behind a wave of state legislation aimed at protecting housing markets from AI-driven discrimination. These bills create costly disclosure requirements and rules, which are necessarily based on speculation about what forms potential threats from the still-evolving technology might take. In Virginia, Gov. Glenn Youngkin vetoed House Bill 2094 for these reasons, stating that the bill’s “rigid framework fails to account for the rapidly evolving and fast-moving nature of the AI industry and puts an especially onerous burden on smaller firms and startups that lack large legal compliance departments.” A similar bill was signed into law by Colorado Gov. Jared Polis, who, in an unenthusiastic signing statement, expressed support for a federal moratorium on such state efforts.

Misguided AI regulatory efforts risk limiting innovation and sowing misunderstanding in many markets. However, the long, complex, and still-pertinent history of U.S. housing discrimination makes the potential damage even greater. Cases like SafeRent are sometimes referred to as “digital redlining,” reflecting how discriminatory outcomes can inadvertently emerge from data-driven algorithms. However, this term is somewhat misleading, as it harkens back to the mid-20th century when housing discrimination was overt and government-led.

Between 1935 and 1940, the federal Home Owners’ Loan Corporation (HOLC) created maps and neighborhood taxonomies to help guide the lending decisions of banks and mortgage lenders. HOLC gave each neighborhood a grade of A through D, with the neighborhoods receiving a grade of D notoriously marked in red and designated “hazardous.”

HOLC did not attempt to hide its discriminatory aims, docking points from neighborhoods in numerous cities where the number of immigrants and black Americans passed certain thresholds. Low-income minority communities were systematically denied access to credit and financial capital, essential for robust economic activity. HOLC’s condemnation of these neighborhoods was based on nothing more than racism and xenophobia, but the agency’s vast powers meant its pronouncements did significant harm to some neighborhoods over time. The lines drawn on maps around “hazardous” neighborhoods gave the practice its name—redlining.

In contrast, digital redlining refers to modern cases where computers use data or information reflecting disparities stemming from these once-intentional and widespread practices. Researchers see the persistent impact of 20th-century redlining and other forms of housing discrimination on current financial and housing market outcomes.

There are no easy answers to leveling the playing field or quantifying the impacts on minority homeowners and renters today. But state legislatures should take note that the class action lawsuit and settlement were brought under existing anti-discrimination law. As AI does not represent a break in the algorithmic tools being used, merely incremental improvements, there is no reason to suspect efforts like the Virginia and Colorado bills would succeed where other government efforts have failed.

As the AI revolution brings new algorithmic tools to many markets—and rebrands existing tools as “AI” in others—there exist opportunities both for learning and greater honesty in debates about housing discrimination. Different types of discrimination must be clearly distinguished. Twentieth-century redlining was intentional and government-led. Digital redlining, in contrast, lacks direct “bad guys” to deter or prosecute. Attempting to curb bad results by going after technology is tempting for some, but it has no track record of working.

Vermont attempts to regulate political AI deepfakes https://reason.org/commentary/vermont-attempts-to-regulate-political-ai-deepfakes/ Thu, 03 Jul 2025 18:21:09 +0000 https://reason.org/?post_type=commentary&p=83477 Vermont lawmakers have introduced Senate Bill S.23, a proposal aimed at curbing the use of AI-generated synthetic media in state elections.

Vermont lawmakers have introduced Senate Bill S.23, a proposal aimed at curbing the use of AI-generated synthetic media—commonly known as “deepfakes”—in state elections. The bill would make it a criminal offense to create or knowingly distribute digital content intended to deceive voters about a candidate’s words or actions in the 90 days leading up to an election.

The legislation attempts to overcome constitutional and logistical hurdles faced by other state restrictions on political deepfakes, as it explicitly exempts parody.

Supporters of S.23 have stressed its role in protecting the integrity of Vermont’s elections. State Sen. Ruth Hardy (D-Addison), the bill’s lead sponsor, told Vermont news site VTDigger:

“The bill is really just about making sure Vermonters can trust what they see and hear during an election. We’re not trying to stop satire or free speech. We’re trying to prevent intentionally deceptive use of AI and deepfakes that could change the outcome of an election.”

As specified in Section 3(b) of the bill: “A person shall not, within 90 days of an election … knowingly distribute or cause to be distributed … materially deceptive synthetic media of a candidate, with the intent to injure the candidate’s reputation or to deceive a voter into voting for or against the candidate.”

The bill is enforced by the attorney general, with penalties ranging from less than $1,000 to as much as $15,000 for violators who have previously committed a violation and intended to cause “bodily harm.”

Perhaps most importantly, the bill attempts to avoid the legal pitfalls of similar attempts in other states by exempting humorous content. Section 3(e)(2) of S.23 states: “This section does not apply to any… communication that is satire or parody.”

Parody has been a legal puzzle for those who wish to regulate deepfake content. Vermont’s S.23 closely mirrors California’s Assembly Bill 2839, which was designed to combat the use of AI-generated “deepfakes” in elections by requiring prominent disclosures on altered media. Like Vermont’s bill, the California law aimed to prevent voters from being misled by synthetic videos or audio in the lead-up to an election, but it quickly faced pushback from free speech advocates and the courts. The Foundation for Individual Rights and Expression (FIRE), a national free speech advocacy organization, criticized California’s law for its mandatory labeling of parody and satire, arguing:

“The law also requires satire and parody to be labeled, like requiring a comedian to preface every joke with an announcement he’s making a joke. That’s not funny — it’s scary.”

This critique was echoed by a federal judge who issued a preliminary injunction against most of the California statute’s provisions. The case involves a conservative content creator, Christopher Kohls, who shares humorous political videos using AI-generated content. The court held that the law’s disclosure requirements rendered satire and parody unworkable.

Additionally, the question of “intent to deceive” is especially challenging in today’s media environment, where political messaging often blurs the line between parody and genuine advocacy. For example, a recent video circulating online depicted President Donald Trump and other world leaders celebrating in a glitzy, Atlantic City–style resort set in the Gaza Strip—a region currently embroiled in conflict. The video, which appeared exaggerated and outlandish, was created by a reportedly unaffiliated account, but Trump himself later shared it as a point of pride. The ambiguity of the video’s purpose—whether it was an over-the-top parody meant to mock the president’s proposal or a sincere endorsement—highlights just how difficult it is to discern intent.

Under Vermont’s bill, it would be up to a court to decide whether such content was intended to deceive or merely to satirize, creating legal uncertainty for both creators and those who share such media.

Previous state bills largely focused on publishers of content, not on anyone who shares the videos. When Virginia lawmakers attempted their own restrictions on political deepfakes, the bill applied only to those who had made campaign contributions of $1,000 or more. Senate Bill 775 from state Sen. Scott Surovell (D-Fairfax) would have required conspicuous disclosure on synthetically generated political advertisements, but it did not apply to any citizen who shared the media. The bill was ultimately vetoed by the governor, who argued that the bill’s “broad and vague approach lacks the precision necessary to ensure fair and enforceable application.”

Vermont’s attempt at regulating deepfakes may ultimately avoid some of the pitfalls seen in other states, but the bill’s lack of precise definitions means its enforcement may still end up being tested in court.

New York’s RAISE Act expands executive power over AI at the expense of legislative oversight https://reason.org/commentary/new-yorks-raise-act-expands-executive-power-over-ai-at-the-expense-of-legislative-oversight/ Tue, 01 Jul 2025 16:13:41 +0000 https://reason.org/?post_type=commentary&p=83438 New York is the latest in a growing number of states attempting to regulate artificial intelligence.

New York is the latest in a growing number of states attempting to regulate artificial intelligence (AI). The Responsible AI Safety and Education (RAISE) Act seeks to impose transparency requirements and grant enforcement powers to state officials over companies developing advanced AI systems. Like similar efforts in California and Colorado, the law aims to ensure that AI models—especially those considered high-risk—are deployed safely.

But New York’s approach stands out for its narrow focus on the largest “frontier” AI developers and the significant authority it hands to the state’s executive agencies. The bill gives the attorney general extraordinary discretion to determine what constitutes an “unreasonable risk” and requires rapid compliance, making it difficult for the legislature—and, by extension, their constituents—to meaningfully weigh in on evolving AI safety standards.

The RAISE Act was sponsored in the Assembly by state Asm. Alex Bores (D-73) and in the Senate by state Sen. Andrew Gounardes (D-26). The bill has passed both chambers with bipartisan support. As of June 12, the bill is with Gov. Kathy Hochul for her signature, where it awaits final approval. The law targets only the largest AI developers, such as OpenAI, defined by their use of significant computing resources in developing so-called “frontier” models.

The bill authorizes substantial civil penalties for noncompliance, with estimates placing potential fines between $5 million and $15 million per violation. Companies are required to maintain and publish detailed safety plans, respond to major safety incidents within 72 hours, and keep records of mitigation strategies for up to five years. The act also mandates annual independent safety reviews to ensure adherence to transparency and risk mitigation standards.

The RAISE Act establishes several detailed compliance obligations for covered AI developers, with the largest companies subject to the most stringent requirements. It requires that any developer investing $100 million or more in the training of a single AI system publish a safety and security plan, undergo an independent review, and make the results of that review easily accessible to the public. The law also requires disclosure within 72 hours of any safety incident, defined as anything that could cause an “increased risk of critical harm.” A “critical” harm is one that results in the death or serious injury of 100 or more people or at least $1 billion in damage.
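
As a reading aid, the sketch below encodes the two numeric triggers described above, the $100 million training-spend threshold for coverage and the critical-harm definition used for incident reporting. The field and function names are illustrative assumptions, and nothing here is legal guidance.

```python
# Illustrative encoding of two thresholds discussed above: the $100 million
# training-spend trigger for coverage and the "critical harm" definition used
# for incident reporting. Names and structure are assumptions for illustration.

from dataclasses import dataclass

COVERAGE_TRAINING_SPEND_USD = 100_000_000
CRITICAL_HARM_PEOPLE = 100
CRITICAL_HARM_DAMAGES_USD = 1_000_000_000
INCIDENT_DISCLOSURE_HOURS = 72  # reporting window for qualifying incidents

@dataclass
class Incident:
    people_seriously_harmed: int
    damages_usd: float

def developer_is_covered(training_spend_usd: float) -> bool:
    """Covered developers must publish safety plans and undergo independent review."""
    return training_spend_usd >= COVERAGE_TRAINING_SPEND_USD

def is_critical_harm(incident: Incident) -> bool:
    """An incident is 'critical' if either the injury or the damages threshold is met."""
    return (incident.people_seriously_harmed >= CRITICAL_HARM_PEOPLE
            or incident.damages_usd >= CRITICAL_HARM_DAMAGES_USD)

print(developer_is_covered(2.5e8))           # True: $250M exceeds the coverage threshold
print(is_critical_harm(Incident(3, 1.2e9)))  # True: damages threshold is met
```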

Together, these provisions grant the attorney general substantial oversight of the largest AI developers, while requiring transparency, rapid incident reporting, and regular third-party evaluation.

Critics of the RAISE Act argue that its scope is overly broad and its definitions far too vague. Even Anthropic’s Jack Clark—whose AI company has been openly supportive of regulation—warned on X that the legislation is “overly broad/unclear in some of its key definitions,” raising concerns about how smaller developers might be swept up by expansive enforcement. He cautioned that ambiguity could hamper compliance and invite arbitrary application by regulators.

Supporters of the RAISE Act, however, argue that the law merely codifies promises that leading AI firms have already made. As Bores explained in a statement published on his office’s official New York State website:

“Many major AI companies have voluntarily committed to create safety and security plans, but there is currently no legal requirement that they have such plans, that they be reasonable, or that they are followed in practice. By writing these common-sense protections into law, the RAISE Act ensures no company is incentivized to cut corners or otherwise put short-term profits over safety. The law only applies to the largest AI companies that have spent over $100 million in computational resources to train advanced AI models, and focuses on the most urgent, severe risks”.

While the RAISE Act is framed as a transparency and safety measure, its core enforcement provisions hinge entirely on the discretion of state agencies, especially the attorney general’s office. Nowhere in the bill does the legislature specify what constitutes an “unreasonable risk” or provide concrete examples, metrics, or probability thresholds that would guide decision-making. Instead, nearly every meaningful determination about what risks justify regulatory action is left to the administrative state. This approach effectively sidelines both the legislature and the public, leaving the scope of enforcement and compliance to be set by agency officials without further democratic oversight.

In practice, the RAISE Act leaves little room for genuine democratic input once enacted. While stakeholders from healthcare, education, and other innovation-driven sectors may be able to voice their concerns during agency rule making or public comment periods, the bill itself does not guarantee a transparent, participatory process for refining what counts as “unreasonable risk.” Instead, nearly all critical decisions are left to state agencies, with the legislature—and by extension, civil society—playing a limited role.

For industries that depend on the ability to adapt and innovate quickly, this framework raises real questions about how responsive and accountable AI governance will be in New York going forward.

Congress, states explore AI tools to fight Medicare, Medicaid fraud https://reason.org/commentary/congress-states-explore-ai-tools-to-fight-medicare-medicaid-fraud/ Mon, 30 Jun 2025 17:00:00 +0000 https://reason.org/?post_type=commentary&p=83434 Continued investment in artificial intelligence may help agencies achieve more accurate oversight and reduce waste in public health care spending.

Tucked into the latest budget reconciliation package for Congress, “One, Big, Beautiful Bill,” is a $25 million provision for the Department of Health and Human Services (HHS) to develop artificial intelligence (AI) tools aimed at reducing improper Medicare payments. According to the bill summary, the funding proposed in Sec. 112204 will support “tools for purposes of reducing and recouping improper payments under Medicare.”

Improper payments continue to be a persistent issue in both Medicare and Medicaid. According to the Centers for Medicare & Medicaid Services (CMS), Medicare’s traditional fee-for-service program recorded $31.7 billion in improper payments in fiscal year 2024, or 7.66% of total spending. Medicaid reported $31.1 billion in improper payments for the same period, representing 5.09% of program spending.
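
A quick back-of-the-envelope derivation shows the scale of spending these percentages are measured against; the implied totals below are rough figures computed from the numbers above, not CMS-published totals.

```python
# Derive the approximate total outlays implied by the improper-payment figures:
# total spending ≈ improper amount / improper rate. Rough illustration only.

programs = {
    "Medicare fee-for-service (FY2024)": (31.7e9, 0.0766),
    "Medicaid (FY2024)": (31.1e9, 0.0509),
}

for name, (improper_usd, rate) in programs.items():
    implied_total = improper_usd / rate
    print(f"{name}: improper ${improper_usd / 1e9:.1f}B at {rate:.2%} "
          f"implies roughly ${implied_total / 1e9:.0f}B in measured outlays")
```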

These improper payments include administrative errors and, in some cases, outright fraud. For example, Dr. Farid Fata, a Michigan oncologist, was convicted in 2014 and sentenced to federal prison for submitting $34 million in fraudulent Medicare and private insurance claims by administering unnecessary chemotherapy to patients without cancer.

States are also looking to AI for solutions to fraud. In January, Minnesota Gov. Tim Walz said the state would launch an anti-fraud initiative that includes AI tools for the state’s Medicaid billing. “As long as there have been programs aimed at helping people, there have been fraudulent actors looking to steal from those who need them most,” Walz said. “Our job is to stay one step ahead of them. We’re coupling new tools, like AI, with old-fashioned police work, to slam the door shut on theft.”

In a separate, earlier congressional proposal, the Medicare Transaction Fraud Prevention Act (H.R. 7147) would establish a two-year pilot program to test predictive algorithms for identifying Medicare transactions that could be prone to improper payments (no budget was attached to the pilot proposal).

Technology is advancing quickly. A 2024 study published in the Journal of Big Data tested AI tools on Medicare billing records and found that these tools improved the accuracy and clarity of fraud detection. Stella Batalama, dean of the Florida Atlantic University College of Engineering and Computer Science, whose researchers conducted the study, noted, “These methods, if properly applied to detect and stop Medicare insurance fraud, could substantially elevate the standard of health care service by reducing costs related to fraud.” The study used real Medicare data and showed that AI can help flag suspicious billing more efficiently than current approaches.
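
One common way such flagging works is simple outlier detection over billing volumes. The sketch below applies a basic z-score rule to made-up claim counts; it illustrates the general technique, not the method used in the Journal of Big Data study.

```python
# Minimal sketch of flagging unusual billing patterns: score each provider's
# claim volume against peers and flag statistical outliers for human review.
# The claim counts are made-up numbers and the z-score rule is illustrative only.

import statistics

claims_per_provider = {
    "provider_a": 120, "provider_b": 135, "provider_c": 118,
    "provider_d": 950,  # unusually high volume
    "provider_e": 127, "provider_f": 140,
}

values = list(claims_per_provider.values())
mean = statistics.mean(values)
stdev = statistics.stdev(values)

# Flag any provider more than two standard deviations from the peer average.
flagged = {
    provider: round((count - mean) / stdev, 2)
    for provider, count in claims_per_provider.items()
    if abs(count - mean) > 2 * stdev
}

print("Flagged for review:", flagged)  # e.g., {'provider_d': 2.04}
```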

The move would advance HHS Secretary Robert F. Kennedy Jr.’s priority of using AI for agency efficiency. He has stated, “The AI revolution has arrived, and we are already using these new technologies to manage health care data more efficiently and securely.”

Evidence from recent studies and pilot programs suggests that AI has the potential to improve detection and prevention efforts, though the technology and its applications are still evolving. Continued investment in these tools may help agencies achieve more accurate oversight and reduce waste in public health care spending.
