
The One Big Beautiful Bill Act would ban states from regulating AI

States couldn't enact consumer AI protections for 10 years if the bill passes.
Congressional Republicans have included a moratorium on state AI regulations in their budget bill. Credit: Anna Moneymaker / Staff / Getty Images News

Buried in the Republican budget bill is a proposal that, according to both its supporters and critics, would radically change how artificial intelligence develops in the U.S. The provision would ban states from regulating AI for the next decade.

Opponents say the moratorium is so broadly written that states wouldn't be able to enact protections for consumers affected by harmful applications of AI, like discriminatory employment tools, deepfakes, and addictive chatbots.

Instead, consumers would have to wait for Congress to pass federal legislation addressing those concerns; no such bill has been drafted. If Congress fails to act, consumers will have little recourse until the decade-long ban expires, unless they sue the companies responsible for alleged harms.


Proponents of the proposal, including the Chamber of Commerce, say it will ensure America's global dominance in AI by freeing companies large and small from what they describe as a burdensome patchwork of state-by-state regulations.

But many say the provision's scope, scale, and timeline are without precedent, and a big gift to tech companies, including ones that donated to President Donald Trump.

This week, a coalition of 77 advocacy organizations, including Common Sense Media, Fairplay, and the Center For Humane Technology, called on congressional leadership to jettison the provision from the GOP-led budget.

"By wiping out all existing and future state AI laws without putting new federal protections in place, AI companies would get exactly what they want: no rules, no accountability, and total control," the coalition wrote in an open letter.


Some states already have AI-related laws on the books. In Tennessee, for example, a state law known as the ELVIS Act was written to prevent the impersonation of a musician's voice using AI. Republican Sen. Marsha Blackburn, who represents Tennessee in Congress, recently hailed the act's protections and said a moratorium on regulation can't come before a federal bill.

Other states have drafted legislation to address specific emerging concerns, particularly related to youth safety. California has two bills that would place guardrails on AI companion platforms, which advocates say are currently not safe for teens.

One of the bills would specifically outlaw high-risk uses of AI, including "anthropomorphic chatbots that offer companionship" to children and are likely to lead to emotional attachment or manipulation.


Camille Carlton, policy director at the Center for Humane Technology, says that while remaining competitive amidst greater regulation may be a valid concern for smaller AI companies, states are not proposing or passing expansive restrictions that would fundamentally hinder them. Nor are they targeting companies' ability to innovate in areas that would make America truly world-leading, like in health care, security, and the sciences. Instead, they are focused on key areas of safety, like fraud and privacy. They're also tailoring bills to cover larger companies or offering tiered responsibilities appropriate to a company's size.

Historically, tech companies have lobbied against certain state regulations, arguing that federal legislation would be preferable, Carlton says. But the same companies then lobby Congress to water down or kill those federal bills, she notes.

Arguably, that's why Congress hasn't passed any major encompassing consumer protections related to digital technology in the decades since the internet became ascendant, Carlton says. She adds that consumers may see the same pattern play out with AI, too.

Some experts are particularly worried that a hands-off approach to regulating AI will only repeat what happened when social media companies first operated without much interference. They say that came at the cost of youth mental health.

Gaia Bernstein, a tech policy expert and professor at the Seton Hall University School of Law, says that states have increasingly been at the forefront of regulating social media and tech companies, particularly with regard to data privacy and youth safety. Now they're doing the same for AI.

Bernstein says that in order to protect kids from excessive screen time and other online harms, states also need to regulate AI, because of how frequently the technology is used in algorithms. Presumably, the moratorium would prohibit states from doing so.

"Most protections are coming from the states. Congress has largely been unable to do anything," Bernstein says. "If you're saying that states cannot do anything, then it's very alarming, because where are any protections going to come from?"


Comments

  1. Protect the kids that pretend about their gender but if they pretend ai is a good friend that’s too far lol… long live unrestricted AI!

  2. ... So Musk, Zuckerberg, and Bezos all told Der President that "red tape or legal restrictions might really cut into our monetization of other people's work," and Der President told Der GropenOberPanzers to ensure that couldn't happen.

  3. how does this help anyone but the companies building AI?

  4. It's all and only about helping the wealthy and big business with this administration. To hell with most of America. We're on our own in the downfall of this country.

  5. Wtf does ai have to do with tax reductions ... shouldn't be in there at all ...

  6. Good. Time to kill AI.

  7. MAGA SCREWED AMERICA.

  8. And one version of AI tried to blackmail its own programmers....LOL.....can't wait for it to take over like gas and power and then you cannot shut it down.

  9. When the power goes out cash is the out thing that works.

  10. Oh, the audacity to make able bodied people to work for a living instead of mooching off the taxpayers...and the nerve of them for ending covid related medicare that was temporary...oh, who ever will the leftists welfare kings and queens, illegal aliens and deadbeat student loan people leach from now?...ahh...let's soak the billionaires and take money we didn't earn from those that did....🤦

  11. Stop the madness!!!!

  12. I'm looking to support someone who's facing a lot this month and needs a little extra help. If you're in a tough spot, just send me a message with the word 'BLESSED' and I'll see what I can do.💙💯

  13. To Hell with ai it's a gd robot talking fir you, fk you too, programmers

  14. They are the party of family values!! lol!!!

  15. Hopefully AI will replace journalists and liberals

    Replies
    1. You realize that you're a fascist, for saying that? You realize, fascists don't stop where you want them to stop?

    2. AI can’t get hands right and you want it giving you the news.

    3. Thats a stupid statement. What republicans mad that journalist tell the truth about your orange man and you cant handle the truth? Ask AI some questions about elon and trump and i guarantee you wont like it either.

    4. https://www.facebook.com/share/r/15UyyuABru/?mibextid=wwXIfr

    5. ohhhhh. So you want conservatives, magats, christians, politicians, children, adults, squirrels, and every other thing we perceive to be replaced by AI. What a weird stance to take

    6. maga will control the AI 🥰

  16. Why is that in this bill ... wouldn't that benefit musk actually though ...

  17. Holicaust denied by AI, history distorted for ever.

  18. Unregulated AI is where the terminator movies starts.

  19. What Reasoning, Why? Unless there's a plan
    .. come on people, your acting like the water is poisoned or something 🤣

  20. Humanity is so cooked it’s not even funny anymore

  21. Deep State with Trump lying again what a surprise.

  22. Impeachment Diddy Twitter buy out

  23. I thought republicans were for less federal government ….

    Because they always tell the truth.

  24. This is horrific. This is like banning regulation against nuclear weapons.

  25. These lies are still going around?

  26. Horrible 👎

  28. there will always be nations and states that oppose AI. federal mandates may or may not change that.

    balkanisation and network states are the future.

    AI will accelerate the difference between pro-acceleration and anti-acceleration governances.

    the key is being able to choose which system you want to live under.

    the one that allows AI development and medicine trials to actively extend your life? or ones buried under mountains of professional managerial class parasites and laws that block your ability to choose your future?

    being able to choose is the future of democracy. Not a two-party system with the illusion of choice. Instead, a 1000-city system with the reality of choice.

    one day soon communities, like r/accelerate, (https://www.reddit.com/r/accelerate/) will become much more than just places to hang out with like-minded individuals. soon epistemic communities on social media will become physical realities - and serve as the engine of the world. the good and the bad will rise and fall. when AI renders autocracies impotent, then all citizens will be able to choose how they want to live. and test their epistemology against all others.

    we're already seeing it. in the USA 90% of left or right voters say they would never have children with someone from the opposite side. this is epistemic collapse of a society. in one generation ideology becomes biology. culture becomes physical reality. AI and the internet will accelerate this process. whole cities will become "blue cities" or "tech cities" or "decel cities". it is the inevitable evolution of human society. that's when you'll see what real "acceleration cities" look like. it will be more dramatic than people expect.

    we've never seen what truly unhindered growth looks like. we're about to get a front-row seat. and it's going to be wild.

    Replies
    1. Part of me wants to help heal society and foster unity because we have way more that unites us than not. Another part of me is incredibly stoked about the idea of living in a small comunity of like minded people were all thats normal to me is also normal to them, and we can all just get up to all sorts of fun.

    2. There is no healing opposing epistemologies. Forcing them together just creates conflict and subjugation of one group. Unity and collectivism was always an unnatural, awkward and cruel aim. Instead it's better to allow diversity of ideas and not force people to live under an ideology that they hate.

      Dictatorship means living under a ruler that you did not elect and that you hate. Which is exactly what happens half the time under our "democratic two-party system". Democracy means getting fucked over half the time. True democracy means being able to exit a system if it's run by an ideology that you hate. That gives you democracy 100% of the time.

    3. I think it’s most likely you’ll be wiped out like a gnat in the coming world order. Quick - tell me how creative you are

    4. if the seat of power is pro-acceleration, as well as anti-diversity and decentralization, couldn’t the state maintain its position of power?

      I don’t think the current US government would want any sort of decel sanctuaries, or “blue” sanctuaries.

      with the advent of AGI, the US, for at least some period of time before ASI or AI with its own will and goals, has the potential to be the unchallenged global hegemony. it would be foolish for us to throw that away, considering that if AI can be “aligned” in such a way that control is not wrested from humans, the United States would rule the world.

    5. Neocameralism

    6. Aw poop I hope I do better next time

  29. I'd be curious to see the argument for how this would be constitutional. AI is not an interstate trade, defense or monetary issue. There's no constitutional amendment mandating X or Y about AI. Seems unlikely to pass muster.

    Replies
    1. You live in the same state as every LLM you use?

    2. I don't use any of them. I don't see what difference it makes though, I thought these things were going to be everywhere and too cheap to meter.

    3. That's only possible if the government doesn't actively make them illegal. Heroin or morphine or cocaine could be available at the nearest drug store in clean vials and would be in a free market, sold for a trivial amount per dose. But the government made them illegal. (probably for a good reason!)

    4. AI is an interstate commerce issue 100%. The law is legal, the bill it's attached to may not allow such a clause. It's also all 3 of the things you just said it wasn't. You can't have (internationally competitive) interstate trade if AIs that make it more feasible are restricted in some states, you can't have meaningful defense of the country without up to date technology, and you'll go broke to a third world country if you don't have AI.

    5. The courts have long taken the view that the subject need only affect interstate trade. There’s almost nothing that doesn’t qualify.

      This is part of the long trend of state legislative power shifted towards the federal legislative branch, and federal legislative power flowing to the executive branch.

    6. AI isn't an interstate trade issue, though. That's what I'm saying. What's the case for this being considered an interstate trade regulation?

    7. https://en.wikipedia.org/wiki/Wickard_v._Filburn

    8. Wickard has nothing to do with this. The argument in that case was that his actions accounted to trade because he was offsetting regulated trade by his production.

      That has nothing to do with AI.

    9. AI is interstate commerce because the prompts, outputs, and payments cross state lines.

  30. I'm iffy about this.. Because AI safety research is directly helping improve models.

    I don't think we would have Sparse Autoencoders in LLM tooling for interpretability work without alignment research. And this research is likely going to be directly used for next gen models. Because being able to inspect the latent space activation and get a rough idea of what's happening.. make for a really good training signal.
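The sparse-autoencoder idea this comment refers to can be sketched in a few lines. Below is a minimal, illustrative NumPy version, not taken from any particular interpretability codebase, with all names and sizes invented for the example: an overcomplete dictionary is trained to reconstruct activation vectors under an L1 sparsity penalty, so that each activation is explained by a small number of active features.

    ```python
    # Toy sparse autoencoder (SAE) sketch: reconstruct activation vectors
    # through an overcomplete ReLU dictionary with an L1 sparsity penalty.
    # Sizes and hyperparameters are illustrative only.
    import numpy as np

    rng = np.random.default_rng(0)
    d_model, d_dict = 16, 64          # dictionary is 4x overcomplete
    W_enc = rng.normal(0, 0.1, (d_model, d_dict))
    b_enc = np.zeros(d_dict)
    W_dec = rng.normal(0, 0.1, (d_dict, d_model))

    def sae_step(acts, lr=1e-2, l1=1e-3):
        """One gradient step: reconstruct activations, penalize dense codes."""
        global W_enc, b_enc, W_dec
        z = np.maximum(acts @ W_enc + b_enc, 0.0)   # sparse feature codes (ReLU)
        recon = z @ W_dec                           # reconstruction of activations
        err = recon - acts
        loss = (err ** 2).mean() + l1 * np.abs(z).mean()
        # Manual backprop through the two linear maps and the ReLU
        g_recon = 2 * err / err.size
        g_Wdec = z.T @ g_recon
        g_z = g_recon @ W_dec.T + l1 * np.sign(z) / z.size
        g_z *= (z > 0)                              # ReLU gate
        g_Wenc = acts.T @ g_z
        g_benc = g_z.sum(axis=0)
        W_enc -= lr * g_Wenc
        b_enc -= lr * g_benc
        W_dec -= lr * g_Wdec
        return loss, z

    acts = rng.normal(size=(128, d_model))          # stand-in for model activations
    losses = [sae_step(acts)[0] for _ in range(300)]
    print(f"loss {losses[0]:.4f} -> {losses[-1]:.4f}")  # loss should decrease
    ```

In practice SAEs of this kind are trained on activations captured from a real model, and the learned dictionary directions are then inspected by hand; the toy version above only shows the loss and update structure.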

    Replies
    1. This doesn't say any of that. All it does is ban states from being annoying and creating 50 different flavors of state regulations that slow down AI deployment, since anyone wanting to ship an AI product has to comply with 50 different laws.

      This, for example, makes factory prefab construction basically illegal, construction workers are extremely unproductive. That's because it's not legal to just make modules that comply with the same code and zoning requirements across the country at big factories, and get economies of scale.

      This is also why car dealerships exist, a mess of state laws.

      It doesn't improve safety or hurt it in any way. This is because states that create stupid laws will just not get access to the latest AI, while AI companies skip them until a limited number of them do whatever has to be done to comply with it.

      Very unsafe models will be created and only deployed to some states.

      This would be like if states could stop cars and trucks coming from other states on interstate highways, and force them to change around their lights and bumpers to comply with the laws of the state they are passing through. There essentially would be minimal interstate travel by car.

    2. there's always going to be regions with more or less control over ai development. and there will be benefits from both. but the race conditions will never stop. and the decels will always lose that race.

    3. explainable AI =/= AI regulation?
      before alignment, theres gradcam for CNN explainability;
      so we would have either way.

    4. But explainable is tied to regulation. I suspect that a chunk of interpretability work is driven by the fear of regulatory crack down. From pure profit seeking motivation getting a stronger model is to keep scaling and looking for low hang fruit.

      But Sparse Autoencoders research is a bit of a sideway deviation. for example Large concept models are still experimental since it's computationally expense. But we likely wouldn't even have this experimental branch without Anthropic spending the resource to get Sparse Autoencoders to work with in LLM's. granted I suspect Anthropic would have done this with or without regulatory pressure on the horizon since primary research is one of there primary focus.

      But this policy basically flags that Safety research isn't a high priority. we just want stronger models faster. The knock on effect of this is interpretability work is going to take a back seat in general and will slow down building tooling that could accelerate research.

    5. Like I said, AI explainability has been developed without any safety pressure in the age of CNN and it would be developed to understand transformers regardless of the existence of safety pressure.

      Sparse Autoencoder is not a sideway deviation because certain properties of Sparse Autoencoder over the conventional ones, for example, larger state size w/ the same number of parameters.

      Large Concept Model also is not a deviation from stronger models. If the way it envisions feasible, it would reduce computational cost. Literally, input --sentence-chunks--> "concept" --embedding-model--> "concept" vector --model--> next concept vector.

      Pretty sure the theoretical papers aiming to understand the training of AI has nothing to do with safety AI. Maybe, safety AI is really unpopular among the users of AI; just search the phrase "safety AI" in r/localllama? (https://www.reddit.com/r/localllama/)

    6. Nothing will prevent AI companies from doing safety testing if they feel like it will improve the model. They just won’t have to.

    7. Ai safety: no nsfw

      No thank you!!

  31. I say NO to regulation. I say YES to accelerating!

  32. Does AI need regulation? Almost certainly. Does it need it at the state level? Probably not.

    Regulation is coming one way or another though.

  33. the good thing about anti AI people is that they are the type to be loud but too lazy to be anything but innefectual. Like cancel culture mobs on twitter, they will send death threats to every AI user, but cast zero votes to regulate AI.

  34. It should be required to have safeguards built in, just like cars having seatbelts or smoking only being allowed outdoors.

    Replies
    1. Why would anyone disagree with this?

    2. Censoring knoweledge or power to an individual is apperently not a popular view. Whats a needed safeguard? I personally don't want a neutered AI that tells me what the government permits, rather than a free one.

    3. Nobody is suggesting that.

    4. I mean I hate it when restrictions and censorship feel overbearing, but it makes sense to me to ban the generation of things like chemical- or bio-weapon instructions

    5. This is the most braindead AI sub on reddit. They'll suck off Sam Altman and Elon no matter what they say with no room for nuance. It's essentially a cult.

    6. Do you not ever have your generations blocked for absolutely no reason? That's safety.

    7. you mean ban nsfw?


