OpenAI Used Kenyan Workers on Less Than $2 Per Hour to Make ChatGPT Less Toxic
https://time.com/6247678/openai-chatgpt-kenya-workers/

[...]

In its quest to make ChatGPT less toxic, OpenAI used outsourced Kenyan laborers 
earning less than $2 per hour, a TIME investigation has found.

[...]

To build that safety system, OpenAI took a leaf out of the playbook of social 
media companies like Facebook, which had already shown it was possible to build 
AIs that could detect toxic language like hate speech to help remove it from 
their platforms. The premise was simple: feed an AI with labeled examples of 
violence, hate speech, and sexual abuse, and that tool could learn to detect 
those forms of toxicity in the wild. That detector would be built into ChatGPT 
to check whether it was echoing the toxicity of its training data, and filter 
it out before it ever reached the user. It could also help scrub toxic text 
from the training datasets of future AI models.
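The premise described above — feed a model labeled examples and let it learn to flag similar text — can be illustrated with a toy classifier. This is only a minimal sketch of the general idea (a naive Bayes bag-of-words model in pure Python); it bears no resemblance to OpenAI's actual system, and the example snippets and labels are invented for illustration.

```python
import math
from collections import Counter

def tokenize(text):
    # Naive whitespace tokenizer; real systems use far richer features.
    return text.lower().split()

def train(examples):
    """Train a two-class (toxic / safe) naive Bayes model from a list
    of labeled (text, label) pairs."""
    counts = {"toxic": Counter(), "safe": Counter()}
    docs = Counter()
    for text, label in examples:
        docs[label] += 1
        counts[label].update(tokenize(text))
    vocab = set(counts["toxic"]) | set(counts["safe"])
    return counts, docs, vocab

def score(model, text):
    """Return the label with the higher log-probability for `text`."""
    counts, docs, vocab = model
    total_docs = sum(docs.values())
    best_label, best_logp = None, float("-inf")
    for label in counts:
        logp = math.log(docs[label] / total_docs)  # class prior
        total = sum(counts[label].values())
        for tok in tokenize(text):
            # Laplace smoothing so unseen words don't zero out the score.
            logp += math.log((counts[label][tok] + 1) / (total + len(vocab)))
        if logp > best_logp:
            best_label, best_logp = label, logp
    return best_label

# Hypothetical toy training data, nothing like the real labeled corpus.
examples = [
    ("i will hurt you", "toxic"),
    ("you are worthless and i hate you", "toxic"),
    ("have a nice day", "safe"),
    ("thanks for the help", "safe"),
]
model = train(examples)
print(score(model, "i hate you"))
```

A deployed detector would then sit in front of the chatbot's output, filtering anything the classifier flags before it reaches the user — which is why the quality of the human-supplied labels matters so much.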

To get those labels, OpenAI sent tens of thousands of snippets of text to an 
outsourcing firm in Kenya, beginning in November 2021. Much of that text 
appeared to have been pulled from the darkest recesses of the internet. Some of 
it described, in graphic detail, situations involving child sexual abuse, 
bestiality, murder, suicide, torture, self-harm, and incest.

OpenAI’s outsourcing partner in Kenya was Sama, a San Francisco-based firm that 
employs workers in Kenya, Uganda and India to label data for Silicon Valley 
clients like Google, Meta and Microsoft. Sama markets itself as an “ethical AI” 
company and claims to have helped lift more than 50,000 people out of poverty 
(<https://www.sama.com/blog/we-are-a-b-corp/>).

The data labelers employed by Sama on behalf of OpenAI were paid a take-home 
wage of roughly $1.32 to $2 per hour, depending on seniority and 
performance. For this story, TIME reviewed hundreds of pages of internal Sama 
and OpenAI documents, including workers’ payslips, and interviewed four Sama 
employees who worked on the project. All the employees spoke on condition of 
anonymity out of concern for their livelihoods.

[...] One Sama worker tasked with reading and labeling text for OpenAI told 
TIME he suffered from recurring visions after reading a graphic description of 
a man having sex with a dog in the presence of a young child. “That was 
torture,” he said. “You will read a number of statements like that all through 
the week. By the time it gets to Friday, you are disturbed from thinking 
through that picture.” The work’s traumatic nature eventually led Sama to 
cancel all its work for OpenAI in February 2022, eight months earlier than 
planned.
_______________________________________________
nexa mailing list
[email protected]
https://server-nexa.polito.it/cgi-bin/mailman/listinfo/nexa