California’s AI law works by staying narrow
https://reason.org/commentary/californias-ai-law-works-by-staying-narrow/
Mon, 03 Nov 2025

California Gov. Gavin Newsom signed the Transparency in Frontier Artificial Intelligence Act into law in late September. The law takes a narrow, transparency-first approach to regulating advanced “frontier” artificial intelligence (AI) models, creating room for experimentation and innovation, while requiring timely disclosures that give the state the data it needs to address risks as they emerge. 

This new law is already a better first step than last year’s heavy-handed—and ultimately vetoed—proposal, Senate Bill 1047. The value of the new law, Senate Bill 53, however, will depend on its execution and whether California continues to update its definition of “frontier” to reflect the growing capabilities of firms entering the market. 

Senate Bill 53 defines “frontier foundation models” as models trained with more than 10^26 (10 to the power of 26) floating-point operations (FLOPs), a measure of massive computing power, and imposes heavier obligations on larger firms with more than $500 million in annual revenue. 

Among its major provisions, the law requires large AI developers to publish a framework explaining their safety standards and risk assessment procedures. Before deployment, a developer must also post a public transparency report, and large developers must additionally disclose their risk assessments and the extent to which third-party evaluators were involved in assessing those risks. Developers are required to report critical safety incidents to the state’s Office of Emergency Services (OES), and starting in 2027, the OES will release anonymized summaries of those reports. 

By choosing disclosure and incident reporting rather than rigid technical requirements or pre-deployment approvals, SB 53 leaves space for experimentation—building rules around demonstrated risks instead of hypothetical harms. California’s law also aligns with existing national and international safety standards, rather than creating its own arbitrary standards, which helps maintain consistency across jurisdictions. Because the AI field still lacks agreed-upon standards on dangerous behavior, the law’s framework and reporting provisions are intended to produce the information policymakers need to refine their laws and craft more responsive regulations in the future. 

Concerns with SB 53

Despite the law’s strengths, the definition of a “frontier” model still leaves room for improvement. For now, the threshold of 10^26 FLOPs and the $500 million revenue threshold for large developers create a clear and narrow scope. Former Google CEO Eric Schmidt is among those who recommended the 10^26 FLOPs threshold. But over time, this static threshold could drift away from the capability it was meant to capture.

History has shown that algorithmic efficiency roughly doubles every 16 months, meaning the law will need to be updated again and again. If the threshold stays the same, it will miss new models that are just as powerful but trained with less compute, while still flagging older, inefficient ones. Whether the California Department of Technology (CDT), which the law charges with recommending changes to that threshold annually, can successfully persuade the legislature to act remains to be seen.
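To make that drift concrete, here is a rough back-of-the-envelope sketch in Python. It assumes, per the efficiency trend cited above, that the compute needed to reach a fixed level of capability halves roughly every 16 months; the five-year horizon and the way the threshold is applied are illustrative assumptions, not provisions of SB 53.

```python
# Rough illustration (not from SB 53): how a fixed 10^26 FLOP trigger drifts
# if algorithmic efficiency doubles roughly every 16 months, i.e. the compute
# needed to match a fixed level of capability halves every 16 months.

THRESHOLD_FLOPS = 1e26   # SB 53's statutory trigger ("more than 10^26 FLOPs")
DOUBLING_MONTHS = 16     # assumed efficiency doubling time

def compute_for_same_capability(months_elapsed: float) -> float:
    """FLOPs needed later to match the capability that 1e26 FLOPs buys today."""
    return THRESHOLD_FLOPS / (2 ** (months_elapsed / DOUBLING_MONTHS))

for years in range(6):
    needed = compute_for_same_capability(years * 12)
    still_covered = needed >= THRESHOLD_FLOPS  # would the statute still catch it?
    print(f"year {years}: ~{needed:.2e} FLOPs for equal capability; "
          f"caught by the 1e26 threshold: {still_covered}")

# After roughly four years (three doublings), an equally capable model needs
# only ~1.25e25 FLOPs -- well under the statutory line, so it escapes coverage.
```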

Another concern with SB 53 is that the reporting obligations, though well-intentioned, may become a mere administrative formality, with companies producing data that checks the box without improving understanding of real issues. The law requires large developers to file quarterly summaries of their internal catastrophic-risk assessments, even when nothing has changed. Unless the OES analyzes and shares the collected information in ways that genuinely improve regulators’ understanding of risk, the process could turn into bureaucratic sludge that buries insights into real risks.

Looking beyond California: State-based AI best practices in lieu of a federal standard

A flexible scope would also help keep state rules consistent until there is a federal law. Right now, however, the states point in different directions: New York’s Responsible AI Safety and Education (RAISE) Act (A 6953), for example, also covers models trained with 10^26 FLOPs but goes further, including models with very high training costs (about $100 million) and even smaller models if building them costs at least $5 million. Michigan’s House Bill 4668 skips the compute threshold altogether and simply covers any entity that spent at least $100 million in the past year and $5 million on any single model. 
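The following simplified sketch illustrates how the same developer can be treated differently under these definitions. The thresholds are condensed from the descriptions above, the hypothetical developer at the bottom is invented, and the real statutes contain qualifications and exemptions this toy comparison ignores.

```python
# Toy comparison of the "frontier model" definitions described above.
# Thresholds are condensed from the article; the real statutes contain
# qualifications and exemptions that this sketch ignores.

def covered_california(flops: float) -> bool:
    # SB 53: more than 10^26 FLOPs of training compute; heavier duties
    # attach separately to developers with more than $500M annual revenue.
    return flops > 1e26

def covered_new_york(flops: float, training_cost: float) -> bool:
    # RAISE Act, per the description above: 10^26-FLOP models, ~$100M
    # training costs, and even smaller models costing at least $5M --
    # condensed here to a single $5M cost floor alongside the compute test.
    return flops > 1e26 or training_cost >= 5e6

def covered_michigan(annual_ai_spend: float, single_model_cost: float) -> bool:
    # HB 4668: no compute test; spending thresholds only.
    return annual_ai_spend >= 100e6 and single_model_cost >= 5e6

# Hypothetical developer: an efficient model trained with 5e25 FLOPs for $20M
# by a firm that spends $150M a year on AI development.
print(covered_california(5e25))          # False -- under the compute line
print(covered_new_york(5e25, 20e6))      # True  -- cost exceeds $5M
print(covered_michigan(150e6, 20e6))     # True  -- both spending floors met
```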

Looking ahead, if five or 10 more states adopt their own definitions, this emerging state patchwork will only grow more complicated and costly to comply with. A practical solution would be to keep definitions of “frontier” models aligned by following the same national and international standards, sparing developers from navigating a dozen different playbooks.

California Senate Bill 53, even with all its flaws, may serve as that model. But the real test of SB 53 will be the value of the information it produces from transparency reports and assessments. If those reports reveal meaningful patterns in model behavior and help the state more effectively respond to risks, California could set an example for others to follow. But if those reporting requirements turn into routine filings and formal checklists, the California experiment could show the limits of transparency laws, potentially pushing legislators toward heavier tools.

Supreme Court erodes online privacy and free speech in age verification ruling
https://reason.org/commentary/supreme-court-erodes-online-privacy-and-free-speech-in-age-verification-ruling/
Thu, 31 Jul 2025

The Supreme Court recently upheld as constitutional a Texas law that requires any user who attempts to visit a site where at least one-third of the material is labeled “sexually explicit” and “harmful to minors” to prove they are at least 18 years old. This ruling in Free Speech Coalition v. Paxton marks a shift in how courts approach online age-verification laws targeting sexual content. The decision is already emboldening lawmakers in other states to pursue broader restrictions under the banner of child protection, but it remains to be seen how far courts will let that logic stretch.

For three decades, laws that regulated online content, especially sexual content, were tested under the “strict scrutiny” standard. Strict scrutiny is the highest standard of judicial review, used by courts to evaluate whether laws or government actions infringe on fundamental rights. This standard requires the government to prove these laws serve a compelling interest and are narrowly tailored to achieve that interest.

In Free Speech Coalition v. Paxton, the Supreme Court’s majority concluded that intermediate scrutiny, not strict scrutiny, should be the standard here. Under intermediate scrutiny, the government only needs to prove that it has a substantial interest in addressing a harm and that the law is reasonably tailored to that aim. Writing for the majority, Justice Clarence Thomas analogized the law to the longstanding, widely accepted requirement to show ID for alcohol or tobacco purchases. He recast Texas’s law as regulating “unprotected conduct” (letting minors access material deemed obscene for minors) and said the new identification (ID) requirement only incidentally burdens adults.

This decision marks a significant departure from the strong First Amendment protections for online speech established in two major cases: Reno v. ACLU (1997) and Ashcroft v. ACLU (2004).

In Reno, the Supreme Court struck down the Communications Decency Act of 1996’s anti-indecency provisions as unconstitutionally vague and overbroad, holding that online speech is entitled to the same strict scrutiny as print, and that blanket bans on “indecent” or “patently offensive” content to protect minors could not justify sweeping limits on lawful adult speech.

Similarly, in Ashcroft, the Supreme Court repeatedly blocked enforcement of the Child Online Protection Act of 1998, emphasizing that even regulations aimed at restricting “harmful to minors” material could not limit adults’ access to legal speech if less restrictive alternatives, such as user controls or filtering, were available.

Under this new precedent, states now have more leeway to regulate online content under the guise of child safety, signaling that well-crafted age verification mandates for obscene or explicit material need not meet the previously rigorous First Amendment bar set by Reno and Ashcroft, and potentially paving the way for further state-level digital content regulation.

The complications of NetChoice v. Carr (2025)

Just one day before the Supreme Court issued its decision, the U.S. District Court for the Northern District of Georgia blocked enforcement of a much broader law: Georgia Senate Bill 351, also known as the “Protecting Georgia’s Children on Social Media Act of 2024.” The law would have required social media platforms to verify the age of all account holders, obtain parental consent for minors, and ban targeted advertising to minors based on personal data. In her ruling in NetChoice v. Carr, Judge Amy Totenberg found that the law would restrict teens’ access to online forums, infringe on anonymous speech, and interfere with platforms’ rights to communicate.

“The Court does not doubt the dangers posed by young people’s overwhelming exposure to social media,” Judge Totenberg wrote. “But, in its effort to aid parents, the Act’s solution creates serious obstacles for all Georgians, including teenagers, to engage in protected speech activities and would highly likely be unconstitutional.”

Following the release of the Supreme Court ruling, Georgia state Sen. Jason Anavitarte (R-Dallas), the author of Senate Bill 351, stated that “Based on Friday’s ruling at The Supreme Court, Judge Totenberg should be left with no choice but to allow SB 351 to go into effect … in its entirety.”

Georgia’s law, however, is much broader than Texas’ law and regulates all speech on social media platforms, not just obscene content, so the Supreme Court’s Texas decision does not automatically validate Georgia’s approach. This ruling is likely to be appealed, thus requiring a higher court to eventually clarify under what exact circumstances a state can require age verification from websites. 

The District Court applied the strict scrutiny standard in this case, which was the precedent prior to the Supreme Court ruling. However, the Supreme Court’s decision the very next day established that age verification laws targeting access to obscene or sexually explicit material can be upheld under intermediate scrutiny, as long as they are narrowly focused and only incidentally burden adult speech. This means that if a law is specifically designed to prevent minors from accessing pornography and does not substantially restrict adults’ access to protected speech, it is more likely to be considered constitutional. 

However, it is also possible that states will attempt to draft every proposed age-verification bill with broad definitions of what is “obscene,” “sexually explicit,” or “harmful to minors.” In doing so, states will try to force large social media sites that are not hosting pornography to nevertheless verify the ages of all their users. This may chill adults’ ability to access websites of their choosing while remaining anonymous, a feature that is still protected by the First Amendment so long as the platform allows anonymous participation.

Privacy concerns persist

Justice Clarence Thomas’ comparison between checking an ID at a liquor store and digital age verification obscures a profound difference between offline and online age verification. Buying alcohol at a liquor store does not create a permanent record of your interests and habits. Online verification, however, may involve uploading a government ID, submitting biometric data, or verifying identity through third-party platforms.

As Reason Foundation explained in its amicus brief opposing the law, this sensitive information—detailing exactly who visits which sites, and when—can be stored indefinitely, commercially exploited, or exposed in data breaches. Without robust federal safeguards regulating the collection, storage, and use of this data, mandatory age verification not only compromises user anonymity but threatens to chill free expression online.

It is also questionable whether Texas’ age verification law will shield kids from harmful content. Similar age verification efforts have proven ineffective, as tech-savvy minors are able to circumvent restrictions using virtual private networks (VPNs), borrowed credentials, or mirror websites. The laws thus introduce serious privacy and speech burdens for adults without effectively achieving their stated goal of shielding minors from harmful content.

Now that states have the authority to require age verification to block minors from sexually explicit content, it will be important to watch if and when states try to impose broad age-verification requirements on any website they deem to have content harmful to minors.

It is highly likely that in the coming years the Supreme Court will have to clarify exactly which age-verification requirements do and do not violate the First Amendment. It will also be important to see whether any of these age verification requirements actually keep children safe online or whether they merely satisfy a political urge to appear protective while exposing millions of users to new privacy and data-security risks.

Consent requirements in comprehensive data privacy laws: Current practices and the path forward
https://reason.org/policy-brief/consent-requirements-comprehensive-data-privacy-laws-current-practices-path-forward/
Tue, 24 Jun 2025

Introduction

In an era where personal data is both a critical economic asset and a sensitive aspect of individual autonomy, privacy laws worldwide increasingly rely on user consent as the primary mechanism for governing data collection, processing, and sharing. However, despite its central role, the effectiveness of current digital consent frameworks remains highly contested.

This paper critically reviews how consent currently operates within major regulatory frameworks, particularly contrasting the European Union’s stringent, opt-in-based General Data Protection Regulation (GDPR) with the predominantly opt-out approach of the United States, exemplified by the California Consumer Privacy Act (CCPA).

Part 2 outlines key legal frameworks and definitions.

Part 3 examines how consent mechanisms shape user behavior, finding that repeated prompts often lead to disengagement rather than meaningful choice.

Part 4 analyzes the broader economic effects, showing that consent requirements tend to favor large firms, raise compliance costs for smaller players, and constrain innovation.

In the final section, we synthesize leading policy and academic proposals to outline a set of potential reforms. These reforms include risk-based consent frameworks, universal privacy management tools, and co-regulatory accountability models.

The GDPR, adopted in 2016 and in force since 2018, is built around the more stringent opt-in approach and aspires to consent that is “freely given, specific, informed, and unambiguous.”

The CCPA, adopted in 2018 and in force since 2020, applies to firms doing business in California and follows an opt-out approach, under which consent is presumed unless actively withdrawn. As the U.S. and other countries weigh national approaches to data privacy, these debates remain unresolved.

Drawing on empirical studies, this review highlights persistent shortcomings in current consent models—from interface design flaws to the disproportionate compliance burden on smaller entities. It concludes by identifying potential reforms aimed at balancing user autonomy, regulatory flexibility, and market competitiveness.

Full Policy Brief: Consent requirements in comprehensive data privacy laws

The App Store Accountability Act would undermine privacy and parental choice
https://reason.org/backgrounder/the-app-store-accountability-act-would-undermine-privacy-and-parental-choice/
Wed, 07 May 2025

The App Store Accountability Act would require major app platforms to verify the ages of all users and restrict access for those under 18 without verified parental consent. While framed as a child protection measure, the bill would force app stores to collect sensitive personal data like government IDs or biometric scans from potentially hundreds of millions of users, posing serious risks to privacy, threatening free expression, and replicating the same constitutional flaws that have plagued previous online age-verification laws.

Mandatory age verification undermines privacy and security

  • The bill would require platforms to collect sensitive personal data, like government-issued IDs or biometric scans, before users can access apps.
  • This creates honeypots for hackers and significantly increases the risk of identity theft and surveillance.
  • California’s Age-Appropriate Design Code Act (CAADCA) introduced similar requirements. A federal judge blocked it, finding CAADCA “induces companies to collect additional personal information,” increasing rather than reducing risk.
  • Apple, Google, and potentially others would be forced to collect and store biometric templates or ID scans for every user, rolling back years of privacy gains.

It threatens free speech and limits access to information

  • App stores aren’t just for entertainment—they’re how people access civic tools, education, and independent journalism. Forcing ID checks to reach that content raises clear First Amendment concerns.
  • Courts have repeatedly struck down similar mandates. In Reno v. ACLU and Ashcroft v. ACLU, the Supreme Court made clear that age-gating access to legal speech is unconstitutional.
  • The bill attempts to bypass those rulings by targeting app stores instead of social media platforms. But as the Court ruled in Rutan v. Republican Party of Illinois, what the government can’t do directly, it also can’t do indirectly.
  • As Packingham v. North Carolina affirmed, “cyberspace” is now the most important forum for speech, and app stores are its front doors. Regulating that access point threatens core free speech rights.

The government can’t replace real parental involvement, and its mandates create a false sense of safety

  • Most online age-verification regimes assume parents want rigid digital barriers—but research shows that many underage users access social media with parental knowledge or help.
  • Legal mandates create a false sense of security, shifting responsibility from families to tech firms that cannot realistically enforce behavior within homes.
  • Industry-led models like ESRB and MPAA ratings work because they empower—not override—parental discretion, offering guidance without coercion.
  • A mandated age gate won’t stop kids from using VPNs, browsers, or sideloaded apps—it will just make parents think the problem is solved.
  • That false sense of safety undermines genuine efforts to educate kids, build digital literacy, and strengthen family-level boundaries.

Industry-led tools already help parents protect their kids online

  • For example, Apple’s parental control tools include Screen Time, Ask to Buy, content filters, communication limits, and age-based app restrictions.
  • Child accounts come with default safety settings and allow parents to block downloads, limit content, and customize age settings.
  • Current practices include: No personalized ads for users under 13, no cross-app tracking, and no forced identity collection.
  • The Declared Age Range API lets developers serve age-appropriate content without collecting birthdates or IDs—a privacy-enhancing alternative to state-mandated verification.
  • App stores already keep platforms safe from scams and malware precisely because they don’t require sensitive personal data. Mandating age verification would undermine that balance and introduce new security risks.

It punishes small developers by adding compliance costs they can’t afford

  • Developers could be held liable if minors access their apps without proper age checks, exposing them to legal risk and forcing them to contract with costly third-party age-verification vendors.
  • For small app developers operating on thin margins, even minor compliance friction (like age-gating pop-ups or verification screens) can be fatal. Adding age verification and ID checks will lead to significant user drop-off: Google research found that as page load time grows from one second to three seconds, the probability of a visitor bouncing rises by 32%, and 53% of mobile visits are abandoned when pages take longer than three seconds to load. A rough illustration of what that drop-off can mean for a small developer follows this list.
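Below is that illustration: a minimal sketch with entirely hypothetical traffic, conversion, and revenue figures. Only the general point that added friction increases abandonment is drawn from the research cited above; every number in the sketch is invented.

```python
# Hypothetical illustration of verification friction for a small developer.
# All figures are invented; only the idea that extra steps and load time
# increase abandonment comes from the research cited above.

monthly_visitors = 50_000      # hypothetical app-page visitors per month
baseline_conversion = 0.05     # hypothetical share who install without an ID gate
revenue_per_install = 2.00     # hypothetical lifetime revenue per install, USD
extra_abandonment = 0.30       # assumed share of would-be installers lost to ID checks

installs_before = monthly_visitors * baseline_conversion
installs_after = installs_before * (1 - extra_abandonment)
lost_revenue = (installs_before - installs_after) * revenue_per_install

print(f"installs before: {installs_before:.0f}, after: {installs_after:.0f}")
print(f"estimated monthly revenue lost: ${lost_revenue:,.0f}")
# With these toy numbers: 2,500 -> 1,750 installs and about $1,500/month forgone.
```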

Bottom line: The App Store Accountability Act would make age restrictions online more invasive than in any other area of daily life—requiring ID checks not just for social media, but for everyday apps that families already manage responsibly. Instead of building a surveillance regime around app downloads, lawmakers should support market-driven tools that empower parents and preserve user privacy.

Full backgrounder: The App Store Accountability Act would undermine privacy and parental choice

Best practices in building a federal comprehensive data privacy and security framework
https://reason.org/testimony/best-practices-in-building-a-federal-comprehensive-data-privacy-and-security-framework/
Mon, 07 Apr 2025

Reason Foundation submitted comments in response to a request for information regarding a federal comprehensive data privacy and security framework. Comments were submitted to the Privacy Working Group within the U.S. House of Representatives Committee on Energy and Commerce on April 7, 2025.

On behalf of Reason Foundation, we respectfully submit these responses to the prompts contained in the February 21 request for information on the parameters of a federal comprehensive data privacy and security framework. Reason Foundation is a national 501(c)(3) public policy and education organization with expertise across a range of policy areas, including technology policy. Our responses below are numbered to correspond to the individual prompts.

III. Existing privacy frameworks and protections

A. Please provide any insights learned from existing comprehensive data privacy and security laws that may be relevant to the working group’s efforts, including these frameworks’ efficacy at protecting consumers and impacts on both data-driven innovation and small businesses.

Efficacy at protecting consumers

Comprehensive privacy laws such as the European Union’s General Data Protection Regulation of 2016 (GDPR) and the California Consumer Privacy Act of 2018 (CCPA) were enacted with the intent to give consumers more control over their data and set clearer expectations about how that data would be used. However, economic and social science research has not yet determined whether these laws provide meaningful additional protection for consumers. Moreover, these regulations appear to have had unintended negative effects on consumer behavior and business activity.

With respect to Europe’s GDPR, our own analysis of the Survey on Internet Trust (Ipsos) found that consumer trust did not change between 2017, before GDPR’s introduction, and 2019, after it took effect. Another group of researchers, using the same data for the interval between 2019 and 2022, found that users’ trust in the Internet had actually dropped. We have also previously warned that overbroad privacy regulations could make the Internet less user-friendly.

These concerns have been validated by the findings of a recent study funded by the European Research Council. The authors examined how GDPR affected online user behavior and found it had a negative impact on website traffic. After GDPR took effect, weekly website visits dropped by approximately 5% within three months and by about 10% after 18 months.

These traffic declines caused significant revenue losses—averaging $7 million for e-commerce websites and nearly $2.5 million for ad-supported websites after 18 months. However, the impact varied depending on website size, industry, and user location. Larger websites suffered less, suggesting that GDPR may have unintentionally favored large websites and increased market concentration by harming smaller competitors.

In an analysis of the California Consumer Privacy Act (CCPA), scholars from the University of California, Irvine, and New York University found significant correlations between the regulation and shifts in consumer behavior on commercial websites. Specifically, Californians decreased their purchases by approximately 4.3% and increased their product returns by 3.0%, resulting in an average reduction of $96 in discretionary spending per consumer within one year of the CCPA’s introduction. Browsing behavior data from commercial websites indicates that Californians spent more time online and visited more pages per website, suggesting that increased privacy restrictions may have compelled consumers to expend greater effort to locate suitable products or services.

Full Comments: Comments in Response to Data Privacy and Security Request for Information

The SEC’s crypto war has ended overnight. What happens next?
https://reason.org/commentary/the-secs-crypto-war-has-ended-overnight-what-happens-next/
Mon, 17 Mar 2025

Just over two months into President Donald Trump’s new term, we are already witnessing a shift in how federal regulators approach the cryptocurrency industry. The Securities and Exchange Commission (SEC) has dropped its high-profile lawsuit against Coinbase and is loosening its grip on other major crypto firms, including Consensys, Robinhood, and Gemini. 

For years, the SEC—led by former Chair Gary Gensler—aggressively opposed digital assets, claiming that most counted as unregistered securities. Innovators found themselves navigating a regulatory landscape riddled with ambiguity and politics. Now, federal enforcement seems to be stepping back.

Although this turn of events may look like a clear win for the industry, the deeper issue remains: If regulatory policy can swing in a new direction with each new administration, businesses and investors are left with perpetual uncertainty. America’s stance on crypto should not hinge on the whims of the White House. Instead, lawmakers must establish a clear and stable framework so businesses can plan with confidence and consumers receive consistent protection.

Under Gensler’s leadership, the SEC claimed that many cryptocurrencies qualified as “investment contracts” under the 1946 Howey test. According to this test, an asset qualifies as a security if it meets all of the following conditions:

  1. Investment of money – A person or entity invests capital in the asset.
  2. Common enterprise – The investment is tied to a shared business or project where multiple investors pool their funds.
  3. Expectation of profits – The investor anticipates financial gains from the investment.
  4. Efforts of others – The expected profits primarily rely on a third party’s managerial or entrepreneurial efforts.
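To see why the four prongs matter, here is a minimal schematic of the test as a conjunction of all four conditions described above. It is an illustration of the structure, not legal advice; real Howey analysis turns on facts and case law that a boolean check cannot capture.

```python
from dataclasses import dataclass

@dataclass
class Asset:
    """Facts about an offering, as a buyer or regulator might characterize them."""
    investment_of_money: bool     # capital is put at risk
    common_enterprise: bool       # funds are pooled in a shared venture
    expectation_of_profits: bool  # buyers anticipate financial gain
    efforts_of_others: bool       # gains depend mainly on a third party's work

def is_security_under_howey(asset: Asset) -> bool:
    # All four prongs must be satisfied; failing any one defeats the test.
    return (asset.investment_of_money
            and asset.common_enterprise
            and asset.expectation_of_profits
            and asset.efforts_of_others)

# A token bought for personal use on a decentralized network, with no pooled
# enterprise and no reliance on a promoter's efforts, fails at least one prong:
utility_token = Asset(True, False, True, False)
print(is_security_under_howey(utility_token))   # False
```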

Coinbase, like much of the crypto industry, argued that most digital assets do not meet these criteria. Coinbase contended that buying a crypto token is not necessarily an investment in a common enterprise, nor is profit always dependent on the managerial efforts of a third party. The SEC, however, had historically interpreted the test broadly, claiming that many tokens met these conditions and should, therefore, be regulated as securities. Despite ongoing attempts by crypto firms to align with often ambiguous regulations, Gensler’s SEC continued to press cases rapidly, ultimately pushing some innovations offshore. 

Worse, this aggressive approach failed to accomplish Gensler’s goals or prevent major crypto collapses. TerraUSD, Celsius, Three Arrows Capital, and FTX all imploded on Gensler’s watch, costing investors billions and demonstrating the limits of a heavy-handed yet ad hoc regime.

With the SEC stepping back from its aggressive enforcement approach and Paul Atkins nominated to lead the agency, cryptocurrency firms can feel more confident that innovation will be welcomed, at least temporarily. An innovation-friendly environment doesn’t require loopholes or favoritism; it simply needs well-defined rules that don’t lurch from one compliance philosophy to another.

This is why Congress should act. Multiple proposals are in the works, aimed at clarifying the roles of the SEC and Commodity Futures Trading Commission (CFTC) and providing a clearer regulatory framework for digital assets:

  • The Financial Innovation and Technology for the 21st Century Act: This bill specifies how cryptocurrencies can gain recognized status under SEC oversight. It also clarifies the SEC’s responsibilities in governing digital assets. The bill was approved by the House of Representatives and received in the Senate in September 2024.
  • The Digital Asset Market Structure and Investor Protection Act: This bill requires that digital assets be electronically created, maintain a secure transaction history, and be transferable through decentralized systems. Additionally, it includes measures to protect investors and promote market transparency. The bill remains in the early stages of the legislative process, undergoing review by multiple House committees.
  • The Responsible Financial Innovation Act: This bill grants the SEC jurisdiction over digital assets tied to financial interests in a business, while the CFTC oversees other digital assets. The bill also allows digital asset exchanges to register with the CFTC and permits depository institutions to issue payment stablecoins, requiring 100% reserve backing and one-to-one redemption. Additionally, it introduces tax exemptions for small digital asset transactions and includes consumer protection measures, reports, and studies to ensure transparency and oversight.
  • The Bridging Regulation and Innovation for Digital Global and Electronic (BRIDGE) Digital Assets Act: This bill establishes the Joint Advisory Committee on Digital Assets, co-led by the CFTC and SEC, to offer guidance on digital asset regulations and policies. Its goal is to align regulatory approaches between the two agencies. The committee will comprise at least 20 non-federal members, including digital asset issuers, registered entities, and industry participants. It would be required to convene at least twice a year to present its findings to both regulatory bodies.

Two high-profile stablecoin bills could establish firm guidelines for issuing digital dollars pegged to fiat currencies:

  • The Clarity for Payment Stablecoins Act: This bill seeks to establish clear reporting requirements for issuers of fiat-backed stablecoins to enhance transparency and accountability. It mandates that issuers hold all reserves in specific assets, such as U.S. government securities, fully collateralized security repurchase agreements, U.S. dollars, or other non-digital currencies. Issuers must publish a monthly report on their website detailing reserve holdings, with these reports subject to third-party audits. In May 2024, the Committee on Financial Services amended the bill, and it is now on the calendar awaiting further consideration.
  • The Lummis-Gillibrand Payment Stablecoins Act: This bill seeks to establish a regulatory framework for payment stablecoins, requiring them to be backed by one-to-one reserves to ensure stability. It explicitly prohibits unbacked, algorithmic stablecoins and includes provisions to safeguard consumers by mandating that issuers maintain sufficient reserves. Currently, the bill remains in the early stages of the legislative process.
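To make the reserve idea concrete, here is a small sketch of the one-to-one backing rule both bills describe. The eligible asset categories are condensed from the Clarity for Payment Stablecoins Act summary above; the portfolio at the bottom is hypothetical, and the sketch ignores the bills’ many other requirements (audits, redemption mechanics, issuer licensing).

```python
# Illustrative check of one-to-one reserve backing, as described in the
# stablecoin bills above. Asset categories are condensed from the Clarity
# for Payment Stablecoins Act summary; the example portfolio is hypothetical.

ELIGIBLE_RESERVE_ASSETS = {
    "us_government_securities",
    "collateralized_repurchase_agreements",
    "us_dollars",
}

def reserves_are_adequate(outstanding_coins: float, reserves: dict[str, float]) -> bool:
    """True if every reserve asset is eligible and total reserves cover coins 1:1."""
    if any(asset not in ELIGIBLE_RESERVE_ASSETS for asset in reserves):
        return False
    return sum(reserves.values()) >= outstanding_coins

# Hypothetical issuer with 100 million coins outstanding:
portfolio = {"us_dollars": 40e6, "us_government_securities": 65e6}
print(reserves_are_adequate(100e6, portfolio))   # True: $105M >= $100M, all eligible
```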

Under President Joe Biden, banks were discouraged from serving crypto customers as part of what opponents dubbed “Operation Choke Point 2.0,” a sequel to an effort by President Barack Obama’s Department of Justice that critics alleged targeted politically disfavored industries for debanking. This climate forced many legitimate crypto startups to seek banking services overseas. Federal Reserve Chair Jerome Powell and Federal Deposit Insurance Corporation (FDIC) officials now suggest that banks should be free to support crypto businesses that manage their risks responsibly, indicating that enforcement will target individual bad actors rather than an entire class of businesses. With the proper legislative framework, banks can confidently open their doors to innovative companies, spurring investment and job creation instead of pushing opportunity to friendlier jurisdictions like Switzerland, Singapore, or Hong Kong.

Although critics of a more open regulatory environment voice concerns about fraud and market manipulation, these issues call for clear, limited rules rather than vague threats against entire industries. Blockchain-based finance can significantly reduce transaction fees, enable cheaper cross-border payments, and bring financial services to communities that had previously lacked access to conventional banking. 

A transparent, consistent legal framework would provide stability and unleash the benefits of blockchain technology—reducing costs, updating outdated financial systems, and expanding economic opportunity. If the U.S. wants to remain a magnet for global talent, Congress must ensure that crypto isn’t subject to the impulses of individual regulators but governed by fair, openly debated laws that stand the test of time.

House task force seeks regulations that protect AI innovation
https://reason.org/commentary/house-task-force-seeks-regulations-that-protect-ai-innovation/
Tue, 04 Mar 2025

In December, a House Task Force on Artificial Intelligence (AI), led by Reps. Jay Obernolte (R-Calif.) and Ted Lieu (D-Calif.), unveiled a report that outlines a wide-ranging framework for balancing innovation with necessary safeguards. The comprehensive analysis dives into AI’s impact across sectors such as agriculture, healthcare, and finance, presenting a roadmap with seven guiding principles, 66 findings, and 89 recommendations. After years of cautious, often heavy-handed regulatory discussions around AI, the report suggests that federal policymakers may be pivoting toward a more forward-looking strategy that embraces AI’s transformative potential while addressing risks in a way that doesn’t stifle progress. 

The report emphasizes incrementalism and flexibility in AI development. One key message is that policymakers should be careful not to micromanage AI development. The report urges Congress to monitor AI’s real-world outcomes and focus on adjusting existing laws as needed rather than prescribing heavy-handed rules up front. This contrasts with other jurisdictions, particularly in Europe, which have leaned toward more prescriptive, top-down rules. The task force contends that safeguarding open experimentation is vital for keeping the U.S. at the forefront of tech entrepreneurship.

For example, in the chapter on open-source AI, rather than supporting widespread licensing or certification mandates, the report calls for targeted support—federal funding, “safe harbors” for AI vulnerability research, and risk management aimed at specific misuse scenarios like cyberattacks or weapons development. The report acknowledges genuine security concerns but opts for narrowly tailored safeguards that keep the door open to healthy competition and transparency.

The task force takes a similarly pragmatic stance in discussing energy usage and data centers. As advanced AI models proliferate, demand from data centers often outpaces the construction of new power plants and transmission lines, threatening price spikes and reliability problems. The task force proposes practical responses instead of restrictions: encouraging low-power computing, improving energy tracking, and ensuring that large AI users—rather than residential customers—bear the cost of expanding infrastructure. This approach preserves AI’s growth potential while mitigating risks for ordinary consumers.

This approach marks a noticeable shift from some of the Biden administration’s more heavy-handed technology policies, which have sometimes leaned toward broad executive actions and precautionary regulations that risked stifling innovation. For example, the October 2023 Executive Order on AI imposed sweeping federal oversight, requiring developers to report detailed algorithmic information to the government to ensure AI systems are safe, secure, and free from bias. The goal was to prevent cybersecurity threats, fraud, and discrimination, but the heavy-handed approach risked bogging down progress with bureaucracy.

Similarly, the White House’s “Blueprint for an AI Bill of Rights” and other proposals heavily emphasize preventing algorithmic bias and discrimination before it occurs, which is a laudable goal. However, this initiative may inadvertently lead to overregulation by focusing too much on the opaque internal workings of AI systems, often referred to as “black boxes.” These are AI models whose decision-making processes are not transparent, making it difficult to understand how they arrive at specific outcomes. This intense scrutiny could impose complex and ambiguous compliance requirements, especially on smaller AI startups that may lack the resources to navigate vague or inconsistently defined guidelines. By trying to anticipate every possible harm in advance, the administration has created an environment of regulatory uncertainty where developers struggle to understand what is required of them, potentially slowing AI innovation rather than providing clear, actionable guidelines.

In contrast, the task force report notes that AI-based discrimination, fraud, or other harms can be addressed within existing consumer protection, civil rights, and safety statutes—just as we regulate any other product or service. Rather than inventing new laws for every AI scenario, the task force urges policymakers to adapt tried-and-true legal frameworks and work with regulators at the state and federal levels to ensure that emerging problems are quickly identified and rooted out. 

In short, rather than drafting new laws for every AI issue, the task force opts for a more agile, sector-by-sector approach that builds on existing rules, updating them only when AI changes the game. This stance might disappoint those seeking a sweeping fix for every AI concern, but it acknowledges the unpredictable nature of emerging technologies. By focusing on known risks and empowering regulators to adapt quickly, the task force’s framework preserves the flexibility that has historically spurred American tech innovation. It’s a pragmatic, bottom-up strategy that views regulation not as a static set of mandates but as an evolving process designed to protect the public without smothering AI’s transformative potential.

The following table offers a summary of the task force report’s key chapters. 

Why California’s AI bill could hurt more than it helps
https://reason.org/commentary/why-californias-ai-bill-could-hurt-more-than-it-helps/
Tue, 09 Jul 2024

California’s proposed Safe and Secure Innovation for Frontier Artificial Intelligence Models Act attempts to improve safety by requiring developers to certify that their artificial intelligence (AI) models are not dangerous. In truth, the law would slow down critical AI advancements in health care, education, and other fields by discouraging innovation and reducing competition.

Over the past few years, AI has revolutionized diagnostics with algorithms that are increasingly capable of detecting diseases like cancer and heart conditions with unprecedented accuracy. AI-driven tools have streamlined the drug discovery processes, reducing the time and cost of bringing new treatments to market. In education, AI-powered platforms have further personalized learning experiences, adapting to individual students’ needs and improving engagement and outcomes.

Freedom to develop has allowed for rapid experimentation and implementation of AI technologies, leading to remarkable advancements benefiting society. However, many people are concerned about the long-term impacts AI could have.

California Senate Bill 1047, introduced by Sen. Scott Wiener (D-San Francisco), aims to prohibit worst-case harmful uses of AI, like creating or deploying weapons of mass destruction or using AI to launch cyberattacks on critical infrastructure that cause hundreds of millions of dollars in damage.

To prevent these doomsday scenarios, the bill would require developers to provide a newly created government agency with an annual certification, affirming that their AI models do not pose a danger. This certification would be provided even before the training of the AI model begins. However, it is difficult to accurately predict all potential risks of a model at such an early stage. Moreover, the responsibility for causing harm should be on the actor who committed the wrongdoing, not the developer of the model. Holding developers responsible for all possible outcomes discourages innovation and unfairly burdens those who may have no control over how their models are used. This extensive compliance is costly, especially for small startups that don’t have legal teams. Developers of AI models are instead likely to leave California for friendlier jurisdictions to conduct their training activities and other operations.

Violations of the law could lead to penalties of up to 30% of the cost of creating an AI model. For small businesses, this could mean devastating financial losses. The bill also introduces the danger of criminal liability under perjury laws if a developer, in bad faith, falsely certifies their AI model as safe. That may sound straightforward, but the law’s ambiguous framework and unclear definitions put developers at the mercy of how state regulators perceive any glitches in their AI models. In an industry where experimentation and iteration are crucial to progress, such severe penalties could chill creativity and slow down advancement.

While the bill intends to target only large and powerful AI models, it uses vague language that could also apply to smaller AI developers. The bill focuses on models that meet a high threshold of computing power typically accessible only to major corporations with significant resources. However, it also applies to models with “similar capabilities,” broad phrasing that could extend the bill’s reach to almost all future AI models.

The bill would also require all covered AI models to have a “kill switch” to shut them down to prevent imminent threats and authorize the state to force developers to delete their models if they fail to meet state safety standards, potentially erasing years of research and investment. While the shutdown requirement might make sense in dangerous situations, it is not foolproof. For instance, forcing a shutdown switch on an AI system managing the electricity grid could create a vulnerability that hackers might exploit to cause widespread power outages. Thus, while mitigating certain risks, this solution simultaneously exposes critical infrastructure to new potential cyberattacks.

While the goal of safe AI is crucial, onerous demands and the creation of government bureaucracies are not the solution. Instead, policymakers should work with AI experts to create environments conducive to its safe growth.

A version of this column first appeared in the Los Angeles Daily News.

California’s Senate Bill 1047 is a troubling development for AI governance
https://reason.org/commentary/californias-senate-bill-1047-is-a-troubling-development-for-ai-governance/
Mon, 03 Jun 2024

As state legislators across the United States move to create regulatory frameworks for artificial intelligence (AI), California is pushing a particularly aggressive bill that could subject AI developers to a wide range of civil penalties. Although it is unlikely to prevent harmful use of AI, the regulatory burdens and compliance costs introduced by the bill could discourage small companies and individual developers from pursuing groundbreaking AI projects, which are crucial for advancements in healthcare, education, and environmental protection. 

Senate Bill 1047, also known as the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, seeks to regulate the development and deployment of advanced AI models in California. The bill mandates that developers of significant AI models adhere to strict safety protocols, including the capability to shut down the model if necessary and certify compliance annually. Noncompliance can result in severe penalties, such as the deletion of the AI model and substantial fines.  

The bill, introduced by State Sen. Scott Wiener (D-San Francisco), establishes penalties that escalate from 10 percent of the cost of training an AI model for the first violation to 30 percent for every subsequent breach of the bill’s provisions. These fines can devastate startups and small companies, which often operate with limited budgets and resources.  
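A quick arithmetic sketch shows how those penalties scale. The 10 and 30 percent rates come from the bill as described above; the training budget is an invented figure for illustration only.

```python
# Hypothetical penalty arithmetic under SB 1047 as described above.
# The training cost is invented; the 10%/30% rates come from the bill.

training_cost = 20_000_000                   # hypothetical model training cost, USD

first_violation = 0.10 * training_cost       # $2,000,000
subsequent_violation = 0.30 * training_cost  # $6,000,000 each

print(f"first violation:      ${first_violation:,.0f}")
print(f"each later violation: ${subsequent_violation:,.0f}")
# For a startup whose model cost $20M to train, even a single repeat
# violation approaches a third of that entire outlay.
```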

The bill also grants state regulators the authority to mandate the deletion of AI models, erasing years of research and development, substantial financial investments, and potentially valuable technological advancements. For small businesses, the threat of model deletion could mean the end of their business and discourage developers from exploring innovative AI applications, potentially stifling creativity and leading to a more cautious development environment. 

The bill’s safety certification and compliance mechanism could also lead to criminal perjury charges if officials believe developers misled them about the AI’s safety. This may lead to authorities arbitrarily deciding whether an organization’s mistakes are honest and charging people with crimes that could lead to up to four years of jail time. The threat of criminal liability may deter developers from being bold and taking risks when building models, fearing that honest mistakes or unforeseen outcomes could result in severe personal consequences. 

The bill aims to prevent the harmful use of AI, such as creating autonomous weapons or launching cyberattacks on critical infrastructure that could result in significant damage. However, the problem with introducing such high penalties is that it is nearly impossible to predict and mitigate every potential misuse of an AI model. Typically, developers create general-purpose tools without foresight into all possible future applications. The responsibility for harmful actions should lie with the individuals who intentionally misuse the AI, not the developers who created the tool. 

Moreover, assigning such responsibility to AI developers for harmful uses of their technology overlooks factors beyond their control. For example, an AI designed for autonomous drone navigation could be maliciously repurposed by a terrorist group to deploy weaponized drones, leading to severe casualties and destruction. Similarly, a hacker might exploit an AI system developed for network optimization to find and attack vulnerabilities in critical infrastructure, causing widespread disruptions and data breaches. These scenarios show the potential for technology built by developers in good faith to be exploited by bad actors. This complexity underscores the need for a nuanced approach to liability, where the intent and actions of the user are considered, rather than placing the entire burden on the developers. 

Senate Bill 1047 is meant to apply to only extremely powerful AI models, but our analysis concludes that startups and large corporations are both subject to regulation under the bill. While the bill’s text covers models at or above the threshold of computing power that is accessible only to major corporations with significant resources, it also rather vaguely applies itself to models with similar “capabilities.” This language opens the door to covering almost all future AI models because the speed of technological advances guarantees that tomorrow’s computers will routinely deliver today’s state-of-the-art computing power more efficiently and cheaply. The uncertainty about whether a model falls under the benchmark and threshold criteria creates a legal grey area, potentially holding back innovation by making R&D investment riskier and the path for startups less lucrative. 

The bill could also criminalize the development and use of open-source AI models, which commonly involve adapting and enhancing existing models to create new applications. For example, developers build on open-source models to create advanced chatbots, virtual assistants, and translation tools; these applications can automate customer service, assist in language learning, and provide real-time translation services. While it is common for users and creators of flawed tools to bear legal responsibility for any resulting harm, the proposed law extends this liability to developers who modify open-source AI models, making them legally responsible for any harm caused by their AI systems even when the modifications are built on someone else’s original model. This could uniquely impact the open-source AI community, where a culture of shared innovation and collaboration drives progress; the threat of legal repercussions may discourage developers from participating in open-source projects, hindering the collaborative efforts crucial for advancing AI technology. 

An alternative to internal certification is a nine-step process with the Frontier Model Division, a new regulatory body established under the bill. Among these steps is a requirement to establish a mechanism to quickly shut down the model, along with all its copies and derivatives. This is technically feasible only for local models or those with tightly controlled deployments, making it a significant hurdle for developers working with distributed and open-source models. 

Another demanding step involves adhering, before training of the model begins, to all existing standards and regulations determined by the National Institute of Standards and Technology, the State of California, academia, nonprofit-sector experts, and standard-setting organizations. While this might be a reasonable measure for a product already on the market, it makes little sense for a model that has not yet been trained. The high cost and complexity of compliance could discourage smaller entities from AI innovation, further consolidating power among a few large corporations. 

While the intent behind Senate Bill 1047 to ensure the safe use of AI is commendable, its current form poses significant challenges to innovation and the open-source AI community. A more balanced approach that protects society from potential harm while fostering an environment conducive to technological advancement is essential. Policymakers must work closely with AI developers and experts to create regulations that are both effective and supportive of innovation. 

Overview of state digital privacy regulations
https://reason.org/commentary/overview-of-state-digital-privacy-regulations/
Wed, 22 May 2024
Fifteen states have enacted comprehensive data privacy laws, but variations in regulation have led to federal legislative efforts representing a more uniform approach.

In April, the U.S. House Energy and Commerce Committee introduced the American Privacy Rights Act (APRA), the latest attempt to create a national framework in response to a growing number of state-level laws regulating consumer data privacy after a 2022 bill stalled before reaching a full vote. Absent action from the U.S. Congress, many states have advanced privacy initiatives, enacting bills that attempt to tackle consumer data protection in many ways.  

In a nutshell, APRA introduces extensive privacy controls, allows consumers to decline consent for certain data practices, mandates clear privacy policies and compliance mechanisms for businesses, and offers consumers the right to take legal action for violations. The bill emphasizes data minimization, ensuring companies collect only necessary information, and it seeks to supersede varied state laws. 

State and local governments often benefit from being more attuned to the specific needs and contexts of their communities, allowing for tailored regulations. However, the realities of 21st-century data communication add potentially challenging new dimensions to tradeoffs between state and federal regulation. State lines can be an arbitrary and costly way to regulate data. Europe, in contrast, has taken a much more centralized approach through the European Union’s General Data Protection Regulation law. 

While APRA aims to address the current patchwork of state privacy legislation, it is important to analyze the various state privacy laws and consider whether a top-down federal replacement is necessary. Existing state regulations frequently share elements with APRA, but there are important distinctions to consider.  

Currently, 15 states have enacted comprehensive data privacy laws (Figure 1). The accompanying map illustrates the progress of state-level legislation. In 2023 alone, eight states added consumer privacy laws to their statutes.  

Figure 1. U.S. State Privacy Legislation 2024 

Source: The International Association of Privacy Professionals (iapp.org)

While there are similarities among the laws, such as a mandate to use only the necessary amount of data to achieve a specified purpose and an obligation for companies to inform consumers of privacy policies, each law also possesses distinct features that require significant resources and investment to maintain compliance.  

All 15 state privacy laws apply to companies that conduct business with state residents regardless of whether the businesses are headquartered within or outside the state. Exceptions to these laws typically include businesses that, for example, process data of fewer than 100,000 consumers per year and do not derive more than 50% of their revenue from selling personal data. 
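
To make these typical thresholds concrete, the short sketch below checks a hypothetical business against the 100,000-consumer and 50%-revenue cutoffs described above; the function and field names are invented for illustration, and actual statutes add further criteria and vary by state.

    # Hypothetical applicability check using thresholds common to many state
    # privacy laws. Function and field names are illustrative only; actual
    # statutes vary by state and include additional criteria.

    from dataclasses import dataclass


    @dataclass
    class Business:
        consumers_processed_per_year: int
        revenue_from_selling_personal_data: float  # dollars
        total_revenue: float                       # dollars


    def likely_covered(business: Business,
                       consumer_threshold: int = 100_000,
                       revenue_share_threshold: float = 0.50) -> bool:
        """Return True if the business would typically fall within scope."""
        processes_enough_data = (
            business.consumers_processed_per_year >= consumer_threshold
        )
        data_sales_share = (
            business.revenue_from_selling_personal_data / business.total_revenue
            if business.total_revenue else 0.0
        )
        derives_majority_from_data = data_sales_share > revenue_share_threshold
        # A business below both cutoffs is typically exempt.
        return processes_enough_data or derives_majority_from_data


    print(likely_covered(Business(40_000, 600_000, 1_000_000)))  # True: >50% of revenue from data sales
    print(likely_covered(Business(40_000, 100_000, 1_000_000)))  # False: below both cutoffs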

There are three important aspects of state privacy laws. First, they define sensitive data, which sets the scope of regulation. Next, they enumerate consumer rights, which explain what consumers can expect from organizations handling their data. Finally, they define business responsibilities, which set expectations for how organizations must manage data. 

Definitions of sensitive data 

Currently, the laws define sensitive personal data as including information such as: 

  • racial or ethnic origin;  
  • religious beliefs; 
  • mental or physical health diagnosis; 
  • sexual orientation;  
  • genetic or biometric data; and 
  • citizenship or immigration status.  

Protecting sensitive personal data is a standard practice in privacy protection and is aligned with industry best practices. Virginia and Connecticut privacy laws also define sensitive data to include data collected from a child and precise geolocation data.  

California, Colorado, Virginia, and Connecticut require consent and data protection impact assessments (DPIAs) for processing sensitive data, so that organizations identify and minimize the data protection risks of a project. Other states, like Utah, merely require notice and the ability to opt out of processing.  

While the pursuit of consent has become a common practice, it is problematic because it often lacks genuinely informed choice, is easily manipulated, can overwhelm users, and fails to ensure that individuals fully understand or can practically manage their privacy rights.  

Rights of consumers 

In the context of state consumer privacy laws, individuals are granted several rights regarding the accessibility and availability of their data. These rights include the ability to: 

  • access the personal data that an organization holds; 
  • request deletion of personal data; and 
  • obtain and reuse personal data (data portability).  

Eleven states give individuals the right to request corrections to data that organizations hold about them. State consumer privacy laws primarily rely on opt-out rights, such as the right to opt out of the sale of personal data and of targeted advertising. Four states (California, Virginia, Colorado, and Connecticut) also provide a right to opt out of profiling, which allows consumers to prevent businesses from using their data for certain algorithmic decisions, such as personalized marketing, credit scoring, or behavioral predictions.  

Many states require opt-in consent to process sensitive data and data about children, though some, such as Utah, use an opt-out model for sensitive data instead. Each law sets a timeframe for responding to a consumer rights request, ranging from 30 to 60 days.  

While these rights empower consumers to control their data, they can present problems for businesses due to the complexity and cost of implementing systems that comply with varying state laws and responding to requests for access or deletion appropriately within tight timeframes. 
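
As a rough illustration of that operational burden, the sketch below keeps a per-state table of response windows and computes the due date for an incoming request; the day counts are placeholders within the 30-to-60-day range noted above, not the actual deadlines of any statute, and extension rules are ignored.

    # Hypothetical deadline tracker for consumer rights requests. The day
    # counts are placeholders within the 30-to-60-day range; real statutes
    # differ by state and often allow extensions.

    from datetime import date, timedelta

    RESPONSE_WINDOW_DAYS = {
        "CA": 45,  # placeholder values, not statutory citations
        "VA": 45,
        "UT": 45,
        "CT": 45,
    }

    def response_due_date(state: str, received: date) -> date:
        """Compute when a verified consumer request must be answered."""
        days = RESPONSE_WINDOW_DAYS.get(state, 45)  # fall back to a common window
        return received + timedelta(days=days)

    print(response_due_date("CA", date(2024, 5, 22)))  # 2024-07-06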

Business responsibilities

State consumer privacy laws impose specific responsibilities on businesses to ensure the protection and proper handling of personal data. These responsibilities include the requirement to: 

  • publish a privacy notice;  
  • have reasonable data security practices; and  
  • collect and use only data reasonably necessary for the identified purposes (data minimization).  

Data minimization mandates that personal data not be used for new purposes without explicit consent, while data transfers require stringent processing agreements. The laws also protect consumers from penalties for exercising their privacy rights. Virginia, Colorado, Connecticut, and New Jersey require data protection assessments when processing activities involve targeted advertising, certain forms of profiling, sensitive personal data, or the sale of personal data. The California Privacy Rights Act (CPRA), the Colorado Privacy Act (CPA), and the Connecticut Data Privacy Act all target deceptive practices known as "dark patterns," which can trick consumers into decisions that are not in their best interests, such as handing over more personal data than they intend.  
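
To make the data-minimization obligation concrete, here is a minimal, hypothetical sketch in which only the fields declared as necessary for a stated purpose are retained and reuse for an undeclared purpose is blocked; the purpose registry and field names are invented for illustration and are not drawn from any statute.

    # Hypothetical data-minimization filter: only fields declared as necessary
    # for a stated purpose are retained, and reuse for an undeclared purpose is
    # blocked. All names are illustrative, not drawn from any statute.

    ALLOWED_FIELDS_BY_PURPOSE = {
        "order_fulfillment": {"name", "shipping_address", "email"},
        "fraud_prevention": {"email", "ip_address"},
    }

    def minimize(record: dict, purpose: str) -> dict:
        """Drop every field not declared as necessary for the given purpose."""
        allowed = ALLOWED_FIELDS_BY_PURPOSE.get(purpose)
        if allowed is None:
            raise ValueError(f"Undeclared purpose '{purpose}': obtain consent first.")
        return {key: value for key, value in record.items() if key in allowed}

    submitted = {
        "name": "Jane Doe",
        "shipping_address": "123 Main St",
        "email": "jane@example.com",
        "birthdate": "1990-01-01",  # not needed to fulfill an order
    }

    print(minimize(submitted, "order_fulfillment"))
    # {'name': 'Jane Doe', 'shipping_address': '123 Main St', 'email': 'jane@example.com'}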

State attorneys general are usually responsible for enforcing these regulations; California is the exception, having established the California Privacy Protection Agency. Most laws provide no private right of action, which would allow individuals or entities to sue for damages or other remedies directly rather than relying solely on government enforcement; California alone offers a limited one for violations involving a data breach. Private rights of action are crucial in privacy regulation because they empower individuals to enforce privacy laws directly, enhancing accountability and allowing judicial processes to refine how these laws apply as social and technological contexts evolve. This approach upholds the common-law traditions of privacy rights in the U.S. and helps keep privacy laws dynamic and responsive to public needs and expectations. 

The variations among state privacy laws have prompted federal legislative efforts toward a more uniform regulatory approach, such as the one proposed in APRA. Before any federal legislation is finalized, the nuances of the state laws reviewed here should be considered, and their impact on consumer experience and corporate outcomes should be carefully evaluated.  

The post Overview of state digital privacy regulations  appeared first on Reason Foundation.
