When AI Turns Women into Objects
Pam Saxby Sounds the Alarm
“AI art seems fun and harmless… until it isn’t.”
That’s the warning from Pam Saxby, a South African journalist and long-time activist who’s been raising awareness about how AI text-to-image tools are being used to objectify women and reproduce old patterns of exploitation through new technology.
In a series of recent articles and posts, Pam highlights how online ‘art’ platforms - many of them community-run and lightly moderated - have become spaces where over-sexualised, non-consensual, and distorted portrayals of women thrive.
“AI apps have taken the sexual objectification of women to another level,” she writes. “They make the tools for creating such imagery widely accessible, even free.”
Pam points out that while platforms claim to use automated filters to block explicit material, users have learned how to bypass them - for example, by swapping banned terms for coded language in prompts. “It’s a minefield,” she notes, “and even the most comprehensive legislation may struggle to navigate it.”
Drawing on her background as an anti-apartheid activist, Pam situates this within a broader struggle for justice and accountability.
“Having been an activist during decades of terrible inequality, I’ve learned how to hang in against all odds,” she says.
Her work exposes how easily AI can be misused for deepfake sexual content, the non-consensual use of women’s faces, hyper-sexualised distortions, harassment, and intimidation - harms that are already affecting women worldwide.
As one supporter put it on X:
“This isn’t about attacking technology. It’s about protecting women, girls, and vulnerable people from a new form of image-based harm.”
Pam’s writing serves as a crucial reminder: AI ethics is a gendered justice issue. The challenge isn’t only how to innovate responsibly - it’s how to ensure that progress doesn’t come at the expense of women’s safety and dignity.
“We don’t have to be played,” she writes. “The issue is how not to be.”