A new study from the Oxford Internet Institute (OII) has found a sharp rise in the number of AI tools used to generate deepfake images of identifiable people, primarily targeting women. Will Hawkins, who led the study, titled “Deepfakes on Demand”, suggests that there is an “urgent need” for improved safeguards to address the creation and distribution of these AI models.
The research was carried out via a metadata analysis of thousands of publicly available text-to-image models hosted on two popular platforms, Civitai and Hugging Face.
According to the findings, deepfake generators have been downloaded more than 15 million times since late 2022, driving a rapid increase in AI-generated non-consensual intimate imagery (NCII). Central to the findings was the ease with which these images can now be created: many deepfake model variants can be built from as few as 20 images of a target individual, within 15 minutes of processing time on a standard consumer-grade computer.
Especially worrying was the finding that these models overwhelmingly target women, who account for 96% of the deepfakes analysed. The targets range from internationally recognised celebrities to influencers with relatively small followings.
But the results, the researchers acknowledge, may be only the “tip of the iceberg”. The study did not take into account models hosted outside reputable platforms, spaces which may play host to “more egregious deep fake content”.
The sharing of sexually explicit deepfake images was made a criminal offence in England and Wales under an amendment to the Online Safety Act in April 2023. The UK Government also hopes to make creating such images an offence as part of its Crime and Policing Bill, which is currently at Committee Stage.
On the potential technical safeguards that could be introduced, Will Hawkins told Cherwell: “Firstly, platforms could decide to monitor and remove model variants which indicate that AI-generated NCII content is intended to be created. For example, models which are tagged with both ‘Celebrity’ and ‘Porn’ labels could be removed.
“To go further, platforms could choose to remove any model variant which targets the generation of an identifiable individual without their explicit consent. Secondly, technical safeguards could include research investment into the impacts of pre-training filtering for pornographic content, or investment in provenance techniques such as watermarking content generated by AI models.”