As Seth Klamann reports for the Denver Post, the special session of the Colorado General Assembly set to convene on Thursday has a nominal second priority after mitigating the billion-dollar hole blown in the state budget by the federal "We're All Going To Die Act" budget bill: taking another crack at regulating the rapidly proliferating use of artificial intelligence in a variety of decision-making roles, some of which have straightforward implications for ordinary Coloradans who could find themselves on the losing end of life-or-death decisions made by a computer. Those concerns led to the passage of nationally innovative reforms in 2024, signed into law by Gov. Jared Polis with some trepidation and followed by swift pressure from tech companies and business interests to delay the law's implementation.
With the legislature formally charged with revisiting the question, this would-be side issue could become the biggest cash-fueled controversy of the special session:
[W]hile Democratic lawmakers who constitute the legislative majority are largely aligned on how to fill the budget hole, there’s significantly less agreement on how to answer Polis’ call to amend the AI regulations.
Two of the new bills, each backed primarily by Democrats, are the likeliest to advance. But their aims largely run counter to one another: One bill is accused of doing too much and implementing unworkable rules. The other is criticized for doing too little to protect consumers and to regulate a burgeoning — and affluent — industry.
Given that they start in opposite chambers and are backed by different power centers in the Capitol, they’re likely to collide in the coming multiday session.
The briefest explanation we can offer of the two competing bills referenced in this story without asking ChatGPT to do it for us, and that is something we will never, ever do, is that the bill backed by the original law's sponsors, Sen. Robert Rodriguez and Rep. Brianna Titone, preserves some regulation of the use of AI, including disclosure of its use in job interviews and loan applications. Critically, this bill preserves a right for individuals to correct mistakes made by AI in this process, whereas the competing bill sponsored by fellow Democrat Rep. William Lindstedt would limit the ability of individuals to sue.
Titone and supporters of her bill have said Lindstedt’s approach is pro-tech fluff. The state’s consumer-protection laws already cover AI, they argue, and that bill would limit consumers’ ability to file lawsuits or know if they’ve been discriminated against.
But Lindstedt and supporters of his approach argue that while clarity and oversight are needed, the scale of the other bill’s regulations would significantly stifle the burgeoning AI industry’s ability to operate — or for its systems to be used — in the state. Requiring companies to list the characteristics assessed by AI and send them to each individual consumer would be “nearly impossible for many AI systems,” according to a fact sheet prepared by lobbyists supporting Lindstedt’s bill.
Other than a healthy distrust of artificial intelligence inspired by a generation’s worth of science fiction warning about the civilization-ending consequences of letting the machines take over, we don’t have a strong position one way or the other on the original state bill regulating AI, or these attempts to amend it before it takes effect. We will say that although there is an argument that federal regulation of AI makes the most sense, the current political climate in Washington is very much not conducive to it–enough that suggesting federal regulation at this point is tantamount to arguing for no regulation at all.
With that, we’ll open discussion of this issue to our readers. All we ask is that, like the content we bring you every day with authentic blood, sweat, and tears, your responses be 100% human-generated.
Though without disclosure, it’s awfully hard to know if it isn’t.