BOULDER — Taahaa Dawe graduated from the University of Colorado in May with a master’s degree in data science. Since then, he’s applied to roughly 2,000 jobs and had no luck landing a full-time position.
“I think the whole system is broken,” he said.
Stories like Dawe’s are common among recent grads, who are facing one of the roughest job markets in years.
The U.S. unemployment rate for “new entrants,” those trying to break into the workforce, hit a nine-year high of 13.4% in 2025.
Aside from recession fears and the slowing pace of entry-level job postings, artificial intelligence is widely blamed for the current grim state of job seeking. And it’s not only because some companies claim to be using the technology to replace portions of their workforce; recent grads like Dawe are worried that the software prospective bosses are using to screen candidates also plays a role.
Sometimes the filters seem so dense that applicants wonder if their resumes are even seen by human eyes.
This lack of technological transparency is part of what the Colorado legislature has been grappling with since 2024 as part of its effort to regulate so-called high-risk AI systems.
In 2024, the legislature passed Senate Bill 205, which aims to protect consumers by mandating disclosures when AI is used to make decisions on things like job, college and loan applications, and by punishing companies that use the technology to discriminate against people.
When Gov. Jared Polis signed Senate Bill 205, it was set to take effect in 2026. But at the time, he, the sponsors of the legislation and the technology industry agreed the policy needed tweaking.
Efforts to reach a compromise that could appease consumer advocates and tech companies failed earlier this year, leading the General Assembly to take up the issue again in August during a special legislative session.
The Colorado Capitol on Oct. 24. (Myra Kirk/CU News Corps)

A deal once again remained elusive, and state lawmakers opted to push the law’s start date to June 30, 2026, with the hope they could make revisions before then during the legislature’s regular session, which begins in January and runs through mid-May.
They’ll have 120 days to reach a deal.
But lawmakers, activists, industry professionals and consumers still can’t seem to agree on the best way to think about regulation in the ever-changing and often abstract world of AI. In the meantime, Colorado consumers are mostly unaware of the role that AI may be playing in their daily lives, experts say.
How algorithmic discrimination happens
Senate Bill 205 was the first state law in the nation regulating artificial intelligence in any “consequential decision” making, defined as a decision that has material, legal or similarly significant effect on a consumer’s education and employment, as well as access to loans, health care and insurance.
In its current iteration, the law requires developers of high-risk AI systems to use “reasonable care” to protect consumers from any known risks of algorithmic discrimination, including the potentially unfair assessment of people applying for things like jobs, loans, healthcare and housing based on the information available to the system.
As a part of that care, companies would have to provide disclosures of AI use, conduct impact assessments and maintain risk management procedures.
The law also holds companies liable for potential discrimination, with fines up to $20,000 per violation, and designates the attorney general to enforce infractions.
So how exactly can algorithms allegedly be discriminatory?
Automated decision-making systems are trained to produce outcomes based on input data. That data may contain biases that reflect historical societal inequities, which critics say lead the model to produce unfair outcomes.
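The mechanism can be sketched in a few lines of code. The following toy model, built on entirely hypothetical data and a made-up feature (a gap in an applicant’s resume, which historically correlated with caregivers and older workers), shows how a screener trained only on past hiring decisions reproduces whatever pattern those decisions contained:

```python
# Minimal sketch of how a model trained on biased historical hiring
# decisions reproduces that bias. All data and features are hypothetical.
# Each past applicant: (years_experience, has_resume_gap, was_hired).
# In this invented history, anyone with a resume gap was rejected,
# regardless of experience.
history = [
    (5, 0, 1), (7, 0, 1), (3, 0, 1),
    (6, 1, 0), (8, 1, 0), (4, 1, 0),
]

def train(rows):
    # "Model": the historical hire rate for each value of the gap feature.
    counts = {}
    for _exp, gap, hired in rows:
        n, h = counts.get(gap, (0, 0))
        counts[gap] = (n + 1, h + hired)
    return {gap: h / n for gap, (n, h) in counts.items()}

model = train(history)

def screen(has_gap):
    # Advance only candidates whose group was historically hired
    # more than half the time.
    return model.get(has_gap, 0.0) > 0.5

# A highly experienced applicant with a resume gap is auto-rejected:
# the model learned the past pattern, not fitness for the job.
print(screen(0))  # True  -- no gap: advanced
print(screen(1))  # False -- gap: rejected
```

Nothing in the code mentions age or caregiving status, yet the screener penalizes the proxy feature all the same, which is why critics say such systems can discriminate without anyone intending it.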
“This is not hypothetical, these are real things. They’re real harms, they’ve been demonstrated,” said Robin Burke, a professor of information science at the University of Colorado who studies fairness in AI recommender systems. “There are probably examples where harms are continuing to go on, and we just don’t know about it because that’s one of the challenges of these systems: people interact with them all the time, because there’s very little disclosure.”
Professor Robin Burke lectures for an information science class at the University of Colorado Boulder on Oct. 23. (Myra Kirk/CU News Corps)

Burke stressed that there are clear regulations against discrimination toward the federally protected classes in the U.S., but said many experts don’t currently know whether existing systems are causing this sort of harm, because no auditing is taking place.
“Learning from the past doesn’t help you change anything, right?” he said. “So it just lets you repeat what’s happened before. And if what’s happened before is discriminatory, then it just repeats that.”
Alleged examples of this kind of discrimination are at the center of Mobley v. Workday, Inc., a class-action lawsuit originally filed in California in 2023.
The lawsuit alleges that Workday, an online human resources system, discriminated against jobseekers through its automated resume screening tool on the basis of race, age and disability status. Among the claims is that the tool was automatically rejecting candidates that it thought were over 40 years old.
In a motion to dismiss the case, Workday argued the plaintiffs had no proof its software was discriminating against applicants, and instead were basing their claims on a series of assumptions.
Professor Robin Burke gives course instruction on modeling racial segregation with code in an AI experimentation class on Oct. 23. (Myra Kirk/CU News Corps)

“Simply put, plaintiff’s complaint accuses a software provider of providing an unidentified product to unidentified customers who allegedly used it in connection with unidentified jobs in unspecified ways,” Workday wrote in its motion. “Those facts do not amount to a plausible claim for employment discrimination.”
The judge overseeing the case sided with parts of Workday’s argument, but advanced the lawsuit, saying the software is “participating in the decision-making process by recommending some candidates to move forward and rejecting others.”
Anaya Robinson, public policy director of the American Civil Liberties Union of Colorado, has been closely involved with regulatory efforts and has many concerns about systems like Workday’s being used in Colorado, particularly in government contexts. He’s concerned AI could be used to make determinations about whether people in prison should get parole and whether they qualify for social safety net programs for health care and food.
“Not only do we have just a fundamental moral and ethical issue there, but we also have a due process issue,” he said. “We get into this constitutional realm of: If these systems cannot tell the user what information was used to make the determination, can the government constitutionally use these tools at all?”
For Burke, implementing guardrails seems like common sense.
“Why would you put a discriminatory or dangerous product out into the world? But I think tech companies have gotten used to a world in which nobody asked that question,” Burke said.
The policy fight at the Colorado Capitol
State Rep. Brianna Titone, D-Arvada, was one of the lead sponsors of Senate Bill 205, and has seen the issue of regulating artificial intelligence devolve into controversy.
“Decisions are being made for you by a machine without any person involved,” she said. “Getting a job, getting into college — these are all things that are being left to these algorithms and these systems to determine. And there’s bias in these systems.”
When the bill was first being debated, Titone originally felt a willingness from interested groups to work on the policy and its legal definitions, a key source of industry pushback. But during the special session, she says large tech companies didn’t want to compromise about any proposed liability.
“That’s when everything broke down,” she said. “The only option that we really had was to kick the can down the road again.”
Colorado Rep. Brianna Titone, D-Arvada, poses for a portrait at the campus of University of Colorado Boulder on Oct. 24. (Myra Kirk/CU News Corps)

Titone believes that the bill has the potential to prevent unknown harm to consumers, and rein in the technology before it becomes an unmanageable part of daily life.
“This is a turning point for everything,” she said.
Many tech industry professionals are amenable to the idea of creating some sort of disclosures for AI. But they’re worried about how the bill defines developers and deployers of AI, as well as the requirement to disclose their systems’ methodologies.
“The bill was plainly drafted without much technological input. Most of the definitions are just wrong. And they are very broadly drawn,” said Mike Boucher, the founder of Dakota Scientific Software and Dakota Legal Software, two AI-run tools created in Colorado.
In Senate Bill 205, a developer is defined as someone doing business in the state of Colorado who “develops or intentionally and substantially modifies an artificial intelligence system.” A deployer is defined as someone doing business who deploys said system.
For this reason and others, Robinson doesn’t believe the bill will be protective enough.
“Pieces of the bill basically say that if the developer or the deployer are using the duty of care that they are required to use, if they are made aware that discrimination occurred or is occurring, and doing what they can to fix it, then enforcement basically can’t happen,” he said.
Robinson also is concerned about what the law would mean for deployers, who often use AI systems with little to no understanding of how they work. They would be solely liable for any bias under current antidiscrimination law.
(Illustration courtesy: Unsplash)

“A lot of small businesses, school districts, things like that, that act as deployers wouldn’t necessarily be able to afford to do these impact assessments, nor do they have the technical capabilities on staff to do them in-house,” he said.
Industry professionals have similar complaints.
Boucher said the bill’s trade secret exemption will let companies withhold the inner workings and methodology of their platforms, since that information is critical to maintaining a competitive advantage.
“The things that people are hoping to get out of this, they will not,” he said.
Definitions are not the sole source of industry complaint, however.
Boucher says the level of transparency the bill requires is not possible, as most developers don’t even truly understand all the choices their systems make.
“Even if I somehow magically figure out how to comply with this bill today, tomorrow I might fall out of compliance and not know it,” he said.
Brittany Morris Saunders, president and CEO of the Colorado Technology Association, a trade group representing tech companies, believes that the bill would put an unfair strain on startups and stifle innovation.
“Those startup companies that need to have a lot of risk, they don’t have a lot of money, they don’t have compliance teams to do this type of work as they’re developing systems,” she said.
Saunders, who has been a key negotiator with lawmakers, described getting calls from businesses developing or using AI, including some of the biggest tech companies in the country, threatening to leave the state if the bill wasn’t altered.
Boucher believes that for smaller AI companies, the bill would need to offer incentives for compliance, such as tax breaks or other legal benefits.
The city of Denver has warned it could struggle with the cost of complying with the law.
With the city facing a $200 million budget shortfall for 2026 and recently laying off nearly 200 employees, Suma Nallapati, Denver’s first chief AI and information officer, says she just doesn’t have the bodies to meet the measure’s high compliance demands.
“With the budget constraints and reductions in our force, I’m just barely able to keep the lights on. I simply don’t have time before February 2026 to get all of this in order as a CIO,” she said.
Denver uses the human resources software Workday for hiring and other personnel operations, which is one of the ways it could be affected by Senate Bill 205.
Nallapati has concerns about preparing the city to comply with the terms.
“I would say the definitions are not very clear,” she said. “It’s pretty ambiguous. So that’s one of the biggest implications that I see is how do you comply with something that’s that ambiguous?”
The future for Colorado consumers
Colorado Gov. Jared Polis has been a proponent of AI regulation, but he prefers that it be done on the federal level. In the absence of congressional action, he’s acknowledged a need for states like Colorado to step in.
But Senate Bill 205 is not his preferred approach.
After lawmakers and industry failed to reach a deal during the August special session, Polis created an AI policy working group to try to break the impasse. It includes the Colorado Technology Association, tech companies and local consumer groups. The group has been meeting regularly in the lead-up to the regular legislative session in January, but so far doesn’t appear to have a solution.
Gov. Jared Polis speaks at a news conference on Wednesday, Dec. 10, 2025, at the National Western Stock Show complex about a jobs-training initiative in Colorado. (Jesse Paul, The Colorado Sun)

Complicating things further is the executive order signed by President Donald Trump on Dec. 11 banning states from regulating AI and granting the U.S. attorney general the authority to sue states to overturn laws that do not support the “United States’ global A.I. dominance.” Trump has also directed his administration to withhold funding from states that keep their AI laws in place.
The order specifically cited Colorado, stating that the law “banning ‘algorithmic discrimination’ may even force AI models to produce false results in order to avoid a ‘differential treatment or impact’ on protected groups.”
Colorado Attorney General Phil Weiser said the state will challenge the order in court.
Heading into the legislature’s 2026 session, Titone wants to see the bill evolve to put more liability on big tech companies.
“We’re trying to fight for other people who are actually going to be more hurt in this, because (large companies) have unlimited resources and you can do whatever you need to do,” she said.
Robinson agrees.
“We have these billion-, trillion-dollar companies creating these systems that discriminate very regularly and they hold zero liability for that discrimination,” Robinson said.
Stakeholders in the tech world, meanwhile, want to see more industry input as part of the deliberation process.
“We need a third-party facilitator to sort of mediate this conversation between labor and consumer groups and tech and business,” Saunders said.
According to Saunders, the governor’s AI policy working group is looking to move past Senate Bill 205 and start fresh to create a policy preventing algorithmic discrimination that protects consumers while not hindering innovation.
As for Dawe, as his job search continues, he said figuring out how to tailor his experience to unknown computer hiring programs has been difficult, even for someone with an advanced degree in the field of AI.
Taahaa Dawe poses for a portrait on the campus of University of Colorado Boulder on Oct. 3. Dawe graduated from the University of Colorado in May with a master’s degree in data science. Since then, he’s applied to roughly 2,000 jobs and had no luck landing a full-time position. (Myra Kirk/CU News Corps)

In his home, on his computer screen, Dawe has a spreadsheet that tracks all the jobs he has applied to. Red rows fill his monitor, indicating no response or a rejection. Despite that, he persists, spending long stretches on LinkedIn each day. He doesn’t know why or how he’s getting the results he is, but he wants to.
“I think with AI, everything is a black box. So what goes in and what comes up?” Dawe said. “We really don’t know.”
This story was originally published by The Colorado Sun.