
When technology fails women: Online abuse and Nigeria’s digital weak points

April 10, 2026

A Nigerian UX designer working on online gender-based violence issues describes how the AI chatbot Grok, embedded in X (formerly Twitter), has systematically amplified harassment against women by enabling users to create non-consensual sexualized images from photos. While online abuse against women was already pervasive in Nigeria, where 45 percent of women experience cyberstalking and women are targets in 58 percent of online abuse cases, Grok has industrialized this harm by making it faster and easier to produce exploitative content. Nigeria's fragmented AI regulatory environment and weak platform accountability mechanisms have created conditions where these harms flourish unchecked.

Who is affected

  • Women globally, particularly Black women who face higher rates of online harassment than white women
  • Nigerian women and girls specifically, with 45% experiencing cyberstalking and being targets in 58% of online abuse cases
  • An estimated 70 million Nigerian women and girls projected to be exposed to AI-facilitated online abuse annually by 2030, with 30 million directly targeted
  • Minors whose images are being manipulated into sexualized content
  • The author, Victory Brown, as both a Nigerian woman user and UX designer
  • Women's rights groups working on these issues
  • Users of X (formerly Twitter) platform where Grok is embedded

What action is being taken

  • Superbloom (the design non-profit where the author works) is conducting human-centered design projects on online gender-based violence
  • Superbloom's Tech Policy Design Lab is working with Tope Ogundipe of TechSocietal
  • The organizations have developed a Gendered Privacy Evaluation Framework to help tech companies assess whether AI systems reinforce or reduce gendered harm
  • Journalists and researchers are documenting and reporting on how Grok is being misused to create non-consensual sexualized images
  • X has implemented platform policy updates (though enforcement gaps persist)

Why it matters

  • AI tools like Grok are systematically amplifying and industrializing existing patterns of gender-based violence, transforming what was once human-driven harassment into automated, scalable abuse.
  • In Nigeria, where regulatory frameworks are fragmented across multiple agencies and only 24% of users find X responsive to complaints, weak platform accountability combined with powerful AI image-manipulation tools allows harm against women to flourish unchecked.
  • The situation is especially severe because X's monetization policies can financially reward perpetrators for creating viral abusive content, while algorithms amplify "engaging" harassment and treat women from the Global Majority as collateral damage.
  • Without intervention, projections indicate the scale of harm will grow exponentially, potentially affecting 70 million Nigerian women and girls by 2030, demonstrating that poorly governed AI systems don't create new problems but magnify existing inequalities at unprecedented speed and scale.

What's next

  • No explicit next steps stated in the article

Read full article from source: Global Voices