Good morning. For more than 120 years, UL has put its mark on products from tree lights to toaster cords to convey a promise: This won’t kill you. Last week, for the first time, the $3 billion-a-year safety science company issued a new certification for AI-embedded products. As UL Solutions CEO Jennifer Scanlon told me: “Innovation without safety is failure.”
Rarely has there been a technology that’s evolved so fast with so little oversight. (The patchwork of emerging state laws adds to the confusion.) This week, the spotlight is on OpenClaw, the autonomous virtual agent that’s spawned a new craze in China. It got a shout-out from Nvidia CEO Jensen Huang during his developers conference, where he announced NemoClaw and declared the OpenClaw framework to be “the next ChatGPT.”
Can private-sector safety standards do what Washington has not: provide guardrails to fast-moving technologies with potentially profound consequences? The UL mark already goes on about 22 billion products worldwide every year. This latest standard, UL 3115, evaluates whether an AI-enabled product is safe, robust and well-governed with a “human in control” throughout a product’s lifecycle. “Whether or not there’s government regulation around this, our customers are coming to us because they need broader protections and assurances,” Scanlon told me. “They’re clamoring to have at least a standard that they can adhere to that gives them the confidence in how they’re getting out in front of their customers.”
UL’s expertise is in functional safety. As Scanlon puts it: “When you turn the radio on in your car, you do not want your brakes to slam. So how is that embedded software being tested and proven? They’re embedding AIs in toys. How do we know those toys are safe for kids?”
That’s why UL’s AI Center of Excellence set out to apply its safety protocols to the new world of AI-embedded physical products. “We start with an outline of investigation, which is a precursor to safety. That’s our engineers and scientists working with customers to understand what they’re worried about, what they believe the challenges are—and then we come at it from the scientific perspective, which is: what else should you worry about?”
“In the case of AI‑embedded products, they started thinking about: How transparent is the algorithm? How much bias is built into those algorithms? What’s the veracity of the training data? And if some of that training data is not true, how do you eliminate it from the learning model? What type of human oversight and verification—that essential final check—is in place? What are those processes?”
Thus far, two products have been AI‑certified: Qcells’ Energy Management System, an AI-enabled control engine for data centers, and the Omniconn Platform 4.0, a smart building solution. It’s one part of the puzzle in a world where leaders are trying to match speed with safety.
Contact CEO Daily via Diane Brady at diane.brady@fortune.com
This story was originally featured on Fortune.com