AS a professional cosplayer, Elyana Sparks (stage name) is used to posting photos of herself dressed up as various fictional characters for her 100,000-strong following across her social media platforms.
But she never expected things to take a sinister turn when a friend told her in January that they had found smutty images of her circulating online.
Shocked, she looked it up and found an account selling albums of such photos. The thing is, she knew the naked photos were fake.
“I have never sent any nude images of myself to anyone,” says the 21-year-old.
That’s why Elyana believes the perpetrator had used an artificial intelligence-assisted editing application to “strip” her online photos and create these deepfake nudes of her.
She has since lodged a report with the Malaysian Communications and Multimedia Commission (MCMC) and the police regarding this incident.
But what happened to Elyana is not an isolated incident.
Data suggests that the creation and circulation of AI-assisted deepfake nudes or revenge porn, also known as image-based sexual abuse, is proliferating at an alarming speed with the growing accessibility of deepfake-enabling tools and platforms.
According to Deputy Communications Minister Teo Nie Ching, MCMC removed 1,225 postings of AI-generated explicit content last year as of Dec 1.
This is a dramatic spike from just 186 such cases in 2022, and it seems to herald a disturbing new epidemic: the plague of deepfake nudes.
‘A chair can be used for bad too’
Universiti Sains Malaysia criminologist and psychologist Assoc Prof Dr Geshina Ayu Mat Saat says deepfakes involve “digitally altering or creating realistic videos or images, which are used maliciously to create non-consensual explicit content and harm the reputation of innocent people”.
While image-based sexual abuse has been happening for decades, deepfakes have added another layer of complexity to tackling this problem.
Since these fake explicit images are usually created with AI-assisted apps, many argue that banning the apps could spell the end of the deepfake smut epidemic. Experts, however, say it is not that simple.
After all, the apps are merely tools and how they are used depends on the person using them, MCMC commissioner Derek Fernandez tells Sunday Star.
“You cannot have a blanket ban on the applications because they are just tools which can be used for good as well.
“For example, a chair can be used for a good purpose. You can sit on it, or I can take the chair to bash you over the head with,” he says.
Computer science expert Emeritus Prof Datuk Tengku Mohd Tengku Sembok also believes that deepfake technology itself is neutral and thus a balanced approach should be taken in handling it.
“It has legitimate uses in film, education, and accessibility aids for disabled users such as in using voice generation,” he says.
Besides, banning the apps would not be a lasting solution either, says Women’s Aid Organisation advocacy officer Tamyra Selvarajan, who notes that the mitigating effect of a ban would be limited.
“At the end of the day, there would be alternate ways for the perpetrators to continue creating deepfakes,” she says.
However, Geshina strongly believes that it would still be prudent to ban the AI “nudification” apps, with the condition that more needs to be done on top of that.
“(Banning the apps) is the tip of the iceberg. The more taboo a thing is, the more some people would be interested and put effort and creativity into unravelling the taboo.
“It is better to criminalise outright specific behaviours.”
More accountability for service providers
Currently, Malaysian laws do cover image-based sexual abuse situations under the Penal Code and the Communications and Multimedia Act 1998, says lawyer Sarah Yong, who is also the Bar Council technology, cyber and privacy law committee chairman.
“Our laws do not regulate the technology per se, we regulate the harm,” she explains.
When it comes to specific laws designed to combat deepfake-related abuses, Geshina says they are still under development in Malaysia.
Malaysia is not alone in this journey, because this is an issue that knows no borders. In recent years, countries around the world such as Canada, South Korea and the United Kingdom have been coming up with legislation targeting deepfake technology.
In early 2023, China also enacted provisions that require providers of “deep synthesis technologies” to take steps to prevent the use of their services for illegal or harmful purposes, protect user privacy by requiring consent, authenticate user identity, and label synthetic content, among others.
Malaysia is also trying to make service providers more accountable for abuses that take place on their platforms with the Class Licence for Application Service Providers (CASP), says Fernandez.
Under the framework, which came into effect on Jan 1, MCMC mandates that all social media and messaging services with at least eight million registered users in Malaysia must apply for a licence.
Returning to his chair analogy, Fernandez says the people who own the place where the chair is – in this case, the service providers and platforms – should kick out whoever uses the chair to hurt people.
“Platforms must proactively enforce their code of conduct to ensure abuses are not tolerated within their community. People who create these apps which are abused must have a robust way of knowing who is using their app so that action can be taken if necessary.
“That is why the government is trying to license these platforms so that we can have this as a requirement,” he explains.
Aside from legally requiring platforms to detect and remove harmful deepfake content swiftly, Tengku Mohd says there should be transparency mandates that obligate companies to label AI-generated content to prevent misuse.
“Platforms failing to enforce these measures should face fines or legal liability.”
At the same time, he says that the regulations and legislation framework addressing this matter should explicitly define applications of deepfake in sexual abuse, defamation, fraud, and identity theft as criminal offences.
“The provisions should also cover the consent of individuals and strict penalties for the non-consensual creation, distribution or possession of explicit deepfake content,” he adds.
Whichever shape the regulatory and legislative framework takes, Geshina stresses that “it is important that justice is swift and appropriate, while continuous victim support is given”.