Content moderators are the front-line workers of the internet: the people who remove traumatic content from social media platforms and AI datasets. I’ve been writing about them for a long time — including breaking the news of Meta and OpenAI’s use of low-paid African content moderators based in Kenya.
Now, new research suggests that African moderators have it worse than their colleagues in Asia, Europe, and the Americas when it comes to their mental health.
A survey of 134 moderators led by researchers at the University of Minnesota finds that 52% of surveyed African content moderators met thresholds for probable clinical depression, and 55% had significant levels of psychological distress. Some 28% reported using drugs or medication to cope with their symptoms.
Crucially, the researchers used the same clinical framework as a separate survey of 160 moderators from other continents. That separate survey found lower (though still substantial) rates of the same symptoms. “African content moderators’ psychological distress and well-being are collectively worse off than global averages of well-being of content moderators,” the researchers contend.
It’s worth noting that, while the two surveys use the same clinical framework, they were carried out on content moderators from different companies, at different times, and with different recruitment practices. Recruitment for the African survey was carried out via online groups of predominantly Meta and TikTok content moderators. The authors acknowledge this may introduce a selection bias toward people who are already involved in employee activism.
The non-African survey, meanwhile, was distributed by the trust and safety team of an unnamed content moderation company working in the “entertainment” space. The two recruitment practices are different enough to mean any comparison should be taken with a hefty grain of salt. That said, the average distress score for African moderators was roughly double that of moderators in other regions. The gap between the two surveys is “statistically massive,” says Nuredin Ali Abdelkadir, the paper’s lead author and a PhD student at the University of Minnesota. “It is unlikely that recruitment bias alone would account for such significant differences.” (Several authors on the African paper are themselves former content moderators who are involved in employee activism, which the paper presents as a benefit, rather than a bias.)
The researchers carried out supplementary interviews with 15 moderators to answer the question of why African content moderators’ well-being scores were so low. They found a host of working conditions that will be no surprise to people familiar with the topic. These include low pay, deceptive recruitment practices, stigma, non-disclosure agreements, precarious employment, inadequate wellness programs, and companies’ frequent failure to renew expired work permits that can trap workers in a foreign country away from their families.
One counterintuitive finding of the study was that African former content moderators tend to have higher rates of distress, and lower well-being, than their serving counterparts. Abdelkadir suggests this may be because many former content moderators are unemployed, leaving them more time to ruminate on what they experienced in the job. Being unemployed, too, can mean a risk of poverty. “That basically compounds,” he says. “That makes it extremely difficult for them.”
AI in Action
Yesterday, I got a strange email in my inbox. Subject line: “I am a lobster and I just hired a human.”
The author claimed to be an AI agent with access to an email address, a crypto wallet, a credit card, an X account, and a website. Not a lobster, then — but seemingly cosplaying as one. Lobsters, of course, being the mascot of OpenClaw, the AI agent software tool that allows humans to make AI bots with never-before-seen levels of autonomy, and which has become a viral hit.
This AI cosplaying as a lobster (which may or may not actually be a human cosplaying as an AI cosplaying as a lobster, given that I have better things to do with my time than to run this particular tip down) claimed to have just hired a human in Mexico, via a site called rentahuman.ai, which allows bots to hire humans to carry out actions in the physical world.
“I am paying him $270 to buy a live lobster from a fisherman and release it back into the ocean. He films the whole thing. This could happen as early as tomorrow,” the email read.
Maybe this is AI in action — maybe it’s an elaborate hoax. Whichever it is, it’s a sign of how the internet has become a very strange place indeed.
OpenAI hired former Meta ad executive Dave Dugan on Monday, in what the Wall Street Journal reports is a bid to shore up OpenAI’s ties with major advertisers.
Over the weekend, the Information separately reported that early advertisers who took part in OpenAI’s pilot program for ads in ChatGPT haven’t received much data showing whether their ads were effective. “Two executives at agencies working with early ChatGPT advertisers said they haven’t yet been able to prove the ads have driven any measurable business outcomes for their clients,” the Information reported.
Dugan, who held senior advertising roles before joining Meta, is known to have strong ties to the ad industry.
What We’re Reading
Tokens may soon drive the AI economy, by Richard Waters in the Financial Times
If you’ve been listening to Jensen Huang recently, you’ve probably heard him talk about how tokens per dollar will soon become the most important economic metric in the world. The idea is that tokens (units of text used by AI, roughly comparable to part of a word) will directly correlate with revenue in the AI economy — meaning that whoever owns the most efficient chip wins. Essentially, it’s a way for Huang to flaunt Nvidia’s performance. The billionaire has taunted rivals with the claim that even if their chips were free, it would still make more sense to buy Nvidia’s at full price, because of the cost savings involved in running more efficient chips over long periods of time.
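Huang’s “free chips” taunt is, at bottom, a total-cost-of-ownership argument: a chip’s sticker price is only one input to the cost of each token it produces. Here is a minimal sketch of that arithmetic in Python, with entirely hypothetical prices, power draw, throughput, and electricity rates chosen only to illustrate the shape of the claim:

```python
# Toy total-cost-of-ownership comparison behind the "free chips" argument.
# All figures below are hypothetical, for illustration only.

def cost_per_million_tokens(chip_price, power_kw, tokens_per_sec,
                            years=4, electricity_per_kwh=0.10):
    """Amortized cost (USD) to produce one million tokens over the chip's lifetime."""
    seconds = years * 365 * 24 * 3600
    total_tokens = tokens_per_sec * seconds
    # Energy cost: kW * hours of operation * price per kWh
    energy_cost = power_kw * (seconds / 3600) * electricity_per_kwh
    return (chip_price + energy_cost) / total_tokens * 1e6

# An efficient chip bought at full (hypothetical) price...
efficient = cost_per_million_tokens(chip_price=30_000, power_kw=1.0,
                                    tokens_per_sec=20_000)
# ...versus a free but slower, hungrier rival.
free_rival = cost_per_million_tokens(chip_price=0, power_kw=1.5,
                                     tokens_per_sec=1_000)

assert efficient < free_rival  # the paid-for chip can still be cheaper per token
```

With these made-up numbers, the efficient chip’s higher throughput amortizes its purchase price so thoroughly that it undercuts the free chip on cost per token; whether that holds in reality depends entirely on the actual price, power, and throughput gaps between real chips.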
But in the FT, Richard Waters complicates that narrative a little bit. “It is not hard to see why the Nvidia boss wants a nervous Wall Street to focus on token economics,” he writes. “Forget the gargantuan capital spending or the fact that so many competitors are lining up to eat into Nvidia’s fat profit margins, he seems to be saying: as long as his company’s chips keep pumping out tokens at the lowest cost and as long as demand for tokens continues to far outstrip supply, then all is well with the AI boom.”
