Token Talk 25: Regulate or Terminate

July 10, 2025

By: Thomas Stahura

In many ways, AI is the antithesis of government: fast and data-driven versus sluggish and bureaucratic. As AI innovation continues at breakneck speed, “slow and steady” sounds less like a virtue and more like a path to a future we’d rather only see on a screen.

I say “we,” but it seems a select few are hellbent on preventing AI from ever being regulated. The One Big Beautiful Bill, which passed last Friday, nearly prohibited states from regulating AI for a decade, until the Senate stripped out that moratorium. Meanwhile, California’s own AI bill got axed faster than the GPT-4.5 API. And across the pond, the EU’s new AI Act is so strict it’s got startups wondering whether they should pack up and leave. With the U.S. playing catch-up, states in limbo, and Europe potentially overreaching, the question looms larger than ever: What does good AI regulation actually look like?

One thing is clear: no regulation is not good regulation. Sam Altman told Congress, “We think that regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models.” Sundar Pichai said in his hearing, “AI is too important not to regulate, and too important not to regulate well.” Even Elon Musk, who described AI as “potentially more dangerous than nukes,” believes AI should be regulated. Yet, for someone who warned of a “Terminator future” and puts the odds of AI-induced human extinction at “10-20%,” his rare silence on the rollback of AI safety guidelines is indicative of a deeper motivation.

There’s a reason the loudest voices for regulation are the ones with the fattest balance sheets. Some call it “responsibility”; others call it “regulatory capture,” a strategy where big players advocate for complex regulations that create barriers for smaller firms. Either way, AI isn’t making Google, Meta, or xAI rich (yet). Ninety percent of AI revenue today comes from building data centers and infrastructure, with only a small fraction actually generated by AI products (some of which are free). The truth is that copyright lawsuits and new compliance costs could make profitability even more elusive.

To recap: the House version of the O.B.B.B. (H.R. 1) would have put a 10-year freeze on any state or local AI regulation. The logic was to avoid a patchwork of 50 different AI laws and let innovation run wild. With the moratorium gone, however, the U.S. is back to a patchwork of state laws, big tech is sweating compliance, and startups need a 50-state legal decoder ring.

California tried (again) to pass a sweeping AI accountability bill. AB 331? Dead. AB 2930? Also dead, at least for now. Both bills would have required companies to conduct annual “impact assessments” (bias audits), notify people when AI is making big decisions about their lives, and publish policies on how they manage algorithmic risk. Enterprise lobbyists argued the rules were too vague, broad, and expensive. Lawmakers worried about duplicating federal efforts (that never materialized). And the tech industry threatened to take its ball (and jobs) elsewhere. Still, California’s Civil Rights Council is working on anti-discrimination rules for AI in hiring, but the big, bold stuff is on ice. For now.

Meanwhile, the EU’s AI Act (mentioned in TT23) is now the law of the land and the world’s first comprehensive AI regulation. The Act sorts AI into four risk buckets (a toy classification sketch follows the list):

  • Unacceptable (banned: think social scoring and real-time facial recognition) 

  • High-risk (strict rules: hiring, credit, healthcare, etc.) 

  • Limited-risk (disclosure required: chatbots, deepfakes) 

  • Minimal-risk (go wild: spam filters, video games)
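
To make the tiering concrete, here’s a toy Python sketch of how an application might map onto the Act’s four buckets. The use-case mapping and the conservative default are my illustrative assumptions based on the examples above; the Act’s actual annexes define these categories in far more detail.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned outright"
    HIGH = "strict conformity requirements"
    LIMITED = "disclosure obligations"
    MINIMAL = "no new obligations"

# Illustrative mapping from use cases to tiers, based on the
# examples above; the Act's real annexes are far more detailed.
TIER_BY_USE_CASE = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "realtime_facial_recognition": RiskTier.UNACCEPTABLE,
    "hiring_screening": RiskTier.HIGH,
    "credit_scoring": RiskTier.HIGH,
    "healthcare_triage": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "deepfake_generator": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
    "video_game_npc": RiskTier.MINIMAL,
}

def classify(use_case: str) -> RiskTier:
    # Unknown uses default to HIGH as a conservative assumption
    # (my choice for the sketch, not a rule from the Act).
    return TIER_BY_USE_CASE.get(use_case, RiskTier.HIGH)

for uc in ("credit_scoring", "spam_filter", "social_scoring"):
    tier = classify(uc)
    print(f"{uc}: {tier.name} ({tier.value})")
```

Note that the tier attaches to the use, not the model: the same LLM could land in three different buckets depending on where it’s deployed.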

If you’re in the EU building a “general-purpose AI model” (LLMs), you’re on the hook for transparency, documentation, and, if you’re big enough, red-teaming and copyright checks. Open source models get some exemptions, but if your model poses “systemic risk” (>10²⁵ training FLOPs), you’re back in the compliance hot seat; a back-of-the-envelope check of that threshold follows the list below. The backlash has been immense. So far:

  • 30+ top EU startup founders and VCs (most notably Mistral AI and 20VC) signed an open letter: “We urge the Commission to propose a two-year ‘clock-stop’ on the AI Act.”

  • Compliance is expensive, rules are vague, and only the biggest players can afford to keep up. 

  • Some products (like Sora and Meta AI) are geo-blocked in the EU due to compliance headaches.
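
About that 10²⁵ FLOPs line: a rough way to check where a model lands is the common estimate that training compute ≈ 6 × parameters × training tokens. The model configurations below are hypothetical, purely to show the arithmetic.

```python
# Rough check against the Act's "systemic risk" line of 1e25
# training FLOPs, using the common estimate:
#   training compute ~= 6 * parameters * training tokens.
# The model configurations below are hypothetical examples.
SYSTEMIC_RISK_FLOPS = 1e25

def training_flops(params: float, tokens: float) -> float:
    return 6 * params * tokens

examples = [
    ("7B params, 2T tokens", 7e9, 2e12),        # ~8.4e22 FLOPs
    ("70B params, 15T tokens", 70e9, 15e12),    # ~6.3e24 FLOPs
    ("400B params, 15T tokens", 400e9, 15e12),  # ~3.6e25 FLOPs
]

for name, params, tokens in examples:
    flops = training_flops(params, tokens)
    status = "systemic risk" if flops > SYSTEMIC_RISK_FLOPS else "below threshold"
    print(f"{name}: {flops:.1e} FLOPs -> {status}")
```

By this math, today’s frontier-scale training runs sit right around the threshold, which is exactly why labs care so much about where the line is drawn.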

It’s impossible to talk about AI regulation without mentioning our current global geopolitical situation. The EU worries about losing its best minds to the U.S., while the U.S. frets about falling behind China. It’s a global standoff where every country wants to be the AI superpower, not the one that regulates itself into irrelevance.

“No rules” is a fantasy, and “one-size-fits-all” is a recipe for stifling open source and small players. Proper regulation should be balanced and based on how the model is used. So without further ado, here’s my blueprint for smart, open-source-friendly AI regulation:

1. Risk-Proportionate, Not Model-Proportionate

Focus on the harm potential of the application. A tiny model in a high-stakes context (like healthcare) needs more oversight than a massive model generating memes.

2. Transparency as a Default

Mandate “nutrition labels” for AI systems: disclose data sources, evaluation scores, and known biases. If users can’t peek under the hood, they can’t trust the output.
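
As a sketch of what such a label could look like in machine-readable form (loosely inspired by model cards; the field names and values are illustrative assumptions, not any existing standard):

```python
import json
from dataclasses import dataclass, asdict

# A minimal machine-readable "nutrition label," loosely inspired
# by model cards. Field names and values are illustrative
# assumptions, not any existing standard.
@dataclass
class NutritionLabel:
    model_name: str
    data_sources: list[str]
    eval_scores: dict[str, float]
    known_biases: list[str]
    intended_use: str

label = NutritionLabel(
    model_name="example-llm-v1",  # hypothetical model
    data_sources=["licensed news corpus", "public web crawl"],
    eval_scores={"MMLU": 0.71, "TruthfulQA": 0.48},
    known_biases=["underrepresents non-English dialects"],
    intended_use="general assistant; not for medical or legal advice",
)

print(json.dumps(asdict(label), indent=2))
```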

3. Safe Harbors for Open Source

If you publish your weights and training code, you get lighter compliance burdens. Openness should be rewarded and closedness penalized.

4. Accountability on Deployers

Shift some liability to those who use the tech. A model isn’t inherently dangerous; a bad deployment can be. Punish the reckless app, not the raw code.

5. Build on What Works

Adopt frameworks like NIST’s AI Risk Management Framework; it’s voluntary now but could be codified. And encourage crowd-sourced evaluation and red-teaming, like Hugging Face’s Open LLM Leaderboard, to catch flaws early.

In the end, smart regulation is possible; it just isn’t popular with those actually making the decisions. And as long as China panic and copyright maximalism dominate the headlines, we’re stuck with a patchwork of half-measures and overreactions. The recent Anthropic v. Authors & Music Publishers verdict is a perfect example: a federal judge ruled that training AI on copyrighted books is “fair use,” but stashing millions of pirated files is a no-go. It’s a nuanced win for open innovation, but the copyright minefield is only getting trickier from here.

For startups, this regulatory environment is a mixed bag. On one hand, building and scaling AI products got harder: compliance costs, legal ambiguity, and the constant threat of shifting rules make it harder to move fast and break things. On the other, the chaos creates a playground for nimble founders who can turn regulatory confusion into a moat, whether by offering compliance-as-a-service, building tools to automate audits, or simply outmaneuvering slower, larger incumbents paralyzed by red tape. Stay tuned!

Tags: Token Talk, AI Regulation