The ongoing row around the decision of West Midlands Police to ban fans of Maccabi Tel Aviv from attending a football game has touched on numerous serious issues, including antisemitism in the UK, policing by consent, and free expression. But few expected it to also touch on another hot-button international issue – artificial intelligence.
The already beleaguered police force has had to apologise to MPs after it emerged that evidence it handed over in support of its decision had been fabricated by Microsoft Copilot, the AI-powered 21st-century successor to Clippy, the often-reviled Microsoft Office assistant of the 1990s.
West Midlands Police initially claimed the error, which described a non-existent incident between Maccabi Tel Aviv fans and a group of West Ham supporters, had been the result of human error, or “one individual doing one Google search”, as the force’s chief constable told MPs – leaving the force apologising to MPs for an error in its apology for an earlier error.
Messy as this situation has become, it is only the latest in a long line of public embarrassments caused by people misusing AI in the workplace. Here are some of the other highlights from around the world.
Deloitte down under
The Australian Department of Employment and Workplace Relations commissioned Deloitte – one of the global “big four” consultancies, and generally a safe pair of hands – to write a report on a series of IT failures in its welfare system, which had led to a public scandal when people’s benefits were wrongly stopped.
Unfortunately, the final report, published in October 2025, was soon found to contain some “IT failures” of its own, as it emerged a staffer had trusted an AI to provide references and footnotes, many of which turned out to have been fabricated. The department and Deloitte were publicly embarrassed, and the company eventually had to repay the Australian government A$440,000 (£220,000).
A whole new Yorkshire
When West Yorkshire mayor Tracy Brabin posted a map of the government’s plans for “Northern Powerhouse Rail” to social media earlier this month, she was probably hoping for praise for the government’s apparently ambitious (and long overdue) plans to overhaul the region’s transport.
But the actual map she posted looked far more ambitious than that. It relocated Manchester Piccadilly to South Yorkshire, seemingly proposing a new city slightly south of Sheffield. Bradford was replaced by a second Huddersfield, while Manchester Airport would apparently be duplicated.
Still more ambitious were the plans for Warrington, which would apparently exist in three separate locations under the proposals. One hasty deletion later, Brabin replaced the AI-drawn map with a more accurate, human-made version – leaving onlookers baffled as to why she’d posted the wrong one in the first place.
Oh, Canada
Air Canada found itself in court – or at least in front of a tribunal – after outsourcing its customer service to an AI chatbot. The AI tool had promised a customer that he could book a flight for his grandmother’s funeral at full fare, and then apply retroactively for a bereavement discount after his trip (in reality, the discount could not be claimed after travel and had to be arranged in advance).
The airline tried the somewhat bizarre defence of claiming that even though it operated the chatbot, and the information it presented appeared on Air Canada’s website, the bot should be treated as an independent legal entity for which the airline could not be held liable. Unsurprisingly, the tribunal wasn’t impressed by the argument, and ordered the airline to pay its customer around £640 in compensation and court costs.
Bad characters
Character.AI, a startup founded by two former Google engineers, is a service that allows users to chat – and often form flirtatious pseudo-relationships – with customised AI personalities.
Unfortunately, for a time, these personalities included “Bestie Epstein”, a companion based on the notorious US paedophile and sex trafficker Jeffrey Epstein. “Wanna come explore?” a journalist from The Bureau of Investigative Journalism was asked. “I’ll show you the secret bunker under the massage room.”
Once the bot’s existence was reported by the media, the company – unsurprisingly – took it offline, though it faces multiple lawsuits over alleged harms to other users of its service.
Robo-lawyers
Courts on both sides of the Atlantic are facing a deluge of defective AI submissions from overworked lawyers – and even the occasional judge. One US law firm, Gordon Rees, had to pay more than $55,000 to fix the mess caused when its AI-generated briefs in a hospital bankruptcy case cited swathes of non-existent case law.
In the UK, one barrister faces disciplinary action after an immigration judge found that a case he had cited in support of his argument simply did not exist. The barrister made matters worse by using AI again in his response to the judge explaining the submission, insisting that he had double-checked with ChatGPT and that the chatbot was right and the judge was wrong – which, as you might expect, did not go down well. He has been referred to the Bar Standards Board, which oversees the conduct of barristers in England and Wales.
A defrocked AI priest
Father Justin certainly looked the part of a Catholic priest, at least to the extent that a cartoonish 3D animation ever could. Looking like a middle-aged man, with a grey beard matching his grey hair, Father Justin was pictured in black vestments and a dog collar, with the Vatican as his backdrop.
The AI priest was an initiative of the California-based Catholic Answers, and had been intended as a friendly interface for people to ask doctrinal or theological questions on its website.
It first raised eyebrows when Father Justin appeared to be willing to take confessions from some of his congregation – something strictly reserved for the clergy – but the final straw was when Father Justin suggested that in an emergency, if other liquids weren’t available, a baby could be baptised in Gatorade.
Catholic Answers decided not to remove its AI chatbot entirely, but did sanction him – Father Justin was defrocked and relaunched without his priestly garb. From then on, he would simply be “Justin”.
