Instagram’s parent company Meta is investigating a surge of AI-generated profiles that appear to sexualise and fetishise disabled people, following a BBC investigation that uncovered dozens of such accounts operating on the platform.
The report found networks of profiles built around synthetic images of women depicted with Down’s syndrome, vitiligo, limb differences, visible scars, or as wheelchair users. Many of the accounts presented themselves as disability influencers but were in fact powered by AI-generated imagery designed to attract followers and funnel traffic to adult platforms.
According to the BBC’s findings, several Instagram accounts were composed entirely of AI-generated personas. The images typically portrayed women with visible disabilities in lingerie or suggestive poses, often paired with motivational captions to create the appearance of authenticity.
In one notable case, an account posing as conjoined twins reportedly amassed around 400,000 followers within months of launching in December 2025. Many profiles linked out to subscription-based fan platforms or adult content sites, indicating a clear monetisation strategy.
Investigators concluded that the people shown in the posts did not exist. Instead, the visuals were produced using generative AI systems trained on large image datasets.
Meta confirmed to the BBC that it is reviewing the flagged accounts and will remove content that violates its policies. A company spokesperson said enforcement would focus on rules related to sexual exploitation and content that targets individuals based on protected characteristics, including disability.
The review also unfolds in the shadow of the UK’s Online Safety Act, which requires platforms to actively address harmful and illegal material. UK regulator Ofcom has previously warned that companies must demonstrate effective systems to protect users, particularly minors and vulnerable groups.
Meta has not yet disclosed how many accounts are under review or whether it plans broader changes to its automated detection systems.
Advocacy organisations reacted strongly to the investigation. Alison Kerry of the UK disability equality charity Scope described the accounts as “discrimination disguised as content,” arguing that they objectify disabled bodies for profit.
Critics say the issue goes beyond misleading profiles. There are concerns that AI models are being trained on real images of disabled people without consent, and that those likenesses are then recombined into fetish content. Comment sections on some accounts reportedly contained harassment and sexually explicit remarks, compounding the harm.
The Equality and Human Rights Commission called the findings deeply disturbing and said the case highlights the need for stronger oversight of AI generated media.
The BBC report builds on earlier investigations that have tracked similar networks across social platforms. Previous reporting by outlets such as 404 Media and CBS News documented clusters of AI influencers falsely presented as having Down’s syndrome, often used to drive traffic to paid adult services or dubious fundraisers.
Some operators openly market courses teaching others how to create and monetise synthetic influencers for niche audiences, a practice sometimes referred to in online communities as AI-driven adult persona farming.
What appears to be changing now is the scope. The latest wave expands beyond a single condition into a broader category of AI-generated, disability-themed personas, suggesting the tactic is evolving and scaling.

Artist and disability advocate Danielle Gaeta, who studies generative image systems, told the BBC that some tools still produce sexualised images of disabled people even when prompts are neutral or only loosely specified.
According to her testing, certain safeguards can be bypassed with indirect wording. In other cases, the models appeared predisposed to generate sexualised portrayals once disability markers appeared in prompts.
Experts say this reflects a deeper training data problem. Generative models learn from large volumes of internet imagery, where disabled people have historically been portrayed either as inspirational figures or fetish objects. Without careful curation, those patterns can be amplified at scale.
Under the Online Safety Act, platforms such as Instagram are expected to address harmful content proactively rather than rely solely on reactive takedowns, and Ofcom has indicated it is monitoring how AI-generated media may introduce new risks.
Advocacy groups are now urging Meta to go beyond removing individual accounts. They are calling for improved automated detection of synthetic fetish profiles and tighter controls on monetisation pathways, particularly bio links that redirect users to adult or pseudo-charity sites.
Instagram’s investigation highlights a fast-emerging challenge for social platforms: AI-generated personas that blur the line between representation and exploitation.
While Meta says it is reviewing the flagged accounts, the broader issue is unlikely to disappear quickly. As generative tools become more accessible and realistic, the capacity to mass-produce synthetic identities is growing faster than platform safeguards.
For disability advocates, the concern is clear. What once required stolen photos or manual manipulation can now be created at scale with AI, raising new questions about consent, dignity, and how platforms police synthetic content in the years ahead.