How prepared are we for deepfakes? Researchers call for shift in AI to protect women – CBC.ca


In the picture, a blond woman in a bikini stands on the beach. A line then flashes across the screen, exposing her nude figure.

“Use undress AI to deepnude girl for free!” reads the description on the site.

Although it says consent is required, it only takes a few clicks to upload an image and see the person in it undressed.

Since last summer, the number of sites with publicly available AI image tools has multiplied, drawing millions of views, and AI-doctored photos of underage girls have already been shared by high school students in London, Ont., and Winnipeg. No charges have been laid in either case.

But abuse of the technology has been prosecuted in Quebec. Last year, a man from Sherbrooke in the Eastern Townships was sentenced to three years in prison for creating at least seven deepfake videos depicting child pornography.

Quebec, like the rest of the country, may not be prepared to deal with this ascendant AI technology, according to intellectual property lawyer Gaspard Petit. 

And as Ottawa plays catch-up in regulating harmful content on the internet, researchers are calling for greater diversity and transparency to stop women from being targeted by the technology without their consent. 

Petit says he has been taking a closer look at the development of AI technology as it continues to evolve.

“I think there’s a general consensus that in Quebec, we’re not quite prepared — in Canada as a whole,” he said.

According to Petit, protections in the Quebec charter and existing laws already protect people's privacy and reputation.

He says nude deepfake cases can fall into a legal grey zone where it’s not always clear if it’s possible to criminally prosecute a person who produces or distributes them — something he says Canadian legislators are debating how to improve. 

One problem, Petit says, is that the onus falls on the victim to prove they have been harmed and to identify who is responsible — and then, if they have the means, to sue.

But he says the bigger issue is preventing the creation or distribution of the images in the first place.

Fixing the gender disparity 

Dongyan Lin, a researcher at Montreal-based artificial intelligence institute MILA, studies the link between neuroscience and AI. She says these deepfakes are a “great example of not having women in the decision-making process.”

As a result, she says there are blind spots at these companies in thinking about how the technology would be used “once it’s massively commercialized.”

Affecting Machines, a project developed at Concordia University, tries to bridge the gender gap in AI and STEM by promoting the work of women in the field.

Lindsay Rogers, knowledge mobilization advisor at Concordia’s Applied AI Institute, is one of the people involved in the project.

“Gender diversity is really fundamental for having AI systems that are representative of the populations that use them,” she said.

“It’s not just about the numbers like AI [labs] hiring more women or non-binary folks in a room, it’s about creating a culture and an atmosphere where they can succeed and do well and become valued members of the team,” she said. Rogers puts the share of women working in tech at around a quarter, a figure that has barely crept up over the past two decades.

Ethics training, other solutions

Along with stricter regulations and public hearings on AI use, Lin says mandatory ethics training would help AI developers gain a broader understanding of how the technology could be used by the public.

Banning sites that use deepfake technology is also an option, but experts like Sasha Luccioni, a Montreal-based research scientist at AI company Hugging Face, point to tools that allow users to skirt bans in the countries where the sites are based.

Other technical solutions, such as making images unusable by AI models, are also on the table, but none of them address the problem at its core, says Luccioni.

The root of the problem, she says, is how people decide to use the available technology — including using it to objectify women’s bodies. 

For that problem, she says the solution is educating the public and raising awareness. 
