Five years of Pofma: How has the law been used to combat fake news?


The Singapore Democratic Party (SDP) and its leader Chee Soon Juan have also had brushes with Pofma.

While they complied, they publicly made clear that they disagreed, and in several cases lodged appeals and took the matter to court.

In one case that concluded in 2021, the Court of Appeal allowed the SDP’s appeal in part. It was a landmark decision, as it was the court’s first opportunity to consider the interpretation and application of Pofma since the law’s enactment.

The apex court upheld the constitutionality of Pofma, and also resolved conflicting High Court conclusions as to which party bore the burden of proof – the statement maker or the minister on whose authority the Pofma direction was issued.

Ultimately, the court concluded that it lay with the statement maker.

Another opposition politician who received a Pofma directive was Mr Leong Mun Wai of the Progress Singapore Party.

The Non-Constituency MP removed his post and in February 2024 stepped down as the party’s secretary-general to “take responsibility” for his Pofma order.

When asked about these different responses to Pofma, Prof Chong said how parties respond would depend on the information involved and the preferences of the individuals and entities concerned.

“The larger point is that if Pofma is not seen to be evenly and impartially applied, with sufficient evidence for citizens to make independent assessments of each instance of Pofma application, cynicism surrounding the law could arise, reducing its effectiveness over time,” he said.

Prof Tan said a person who is serially Pofma-ed will have a credibility deficit and suffer reputational harm.

However, one challenge is that some may assume a statement is probably not false if it has not been Pofma-ed.

The authorities will have to be judicious in applying Pofma, relying on other means to deal with falsehoods and reserving Pofma for the more egregious cases, said Prof Tan.

Concerns going forward

While Pofma directives have been issued to identifiable individuals and entities that have put forth misinformation and disinformation, Prof Chong said he was concerned about falsehoods that come from unknown origins.

For instance, some anonymous messages circulated during the Covid-19 pandemic alleged that the hygiene of particular races and ethnic groups contributed to the spread of the virus, propagated conspiracy theories about its origins, or listed folk remedies that could prove harmful.

“Despite the anxieties these narratives caused, it appeared that not much could be done about them – especially since there was no one to issue a directive against,” Prof Chong said.

Other growing concerns about Pofma’s effectiveness centre on the rise of artificial intelligence: generative AI tools have become easily accessible to the public and, with them, so has the potential for deepfakes to spread rapidly.

Dr Soon said that as truth and falsehood become increasingly complex and entangled, Pofma’s utility is likely to diminish.

She cited her study of media and internet use during the 2020 General Election, which found that only 39 per cent of about 2,000 people surveyed were confident that regulations like Pofma reduced the amount of false information online during the election. Another 41 per cent neither agreed nor disagreed, while the remaining 20 per cent disagreed.

“It suggests that members of the public understand the limitations of regulation, and that was before generative AI tools were made accessible to the masses,” she noted.

She pointed to the recent Indonesian election, where an AI-generated deepfake of the late president Suharto endorsing particular candidates gained traction.

“When is a deepfake permissible and when is a deepfake not permissible? When does a deepfake cross the boundary from being creative to being unethical or deceptive? These questions require tackling issues relating to provenance and ethics of information production and dissemination, beyond what existing laws like Pofma address,” said Dr Soon.

Prof Chong said AI-generated visual, video and audio deepfakes, alongside algorithms meant to exponentially increase the reach of particular posts, can hasten the spread of information, including misinformation and disinformation.

Pofma may not work quickly or precisely enough to address some of these instances.

“Going after entities and individuals or issuing takedown orders may not be enough to replace transparency and education,” he said.

In today’s highly dynamic information environment, the risks of misinformation and disinformation are only likely to become more complex as conflicts and major power competition intensify, said Prof Chong.

“There may need to be multiple sources of independent fact-checking, greater data transparency to allow for pre-bunking – to pre-emptively debunk disinformation – and more extensive media literacy and civic education to better protect Singapore. These are approaches that Finland and the Baltics have tried with notable success even as they face a hostile information environment,” he added. 

Prof Chua said correction directions are premised on the rationality of readers.

“The idea of juxtaposing a government statement against an offending article is so that readers can sensibly reject the alleged falsehood and embrace the truth. In reality, deep-seated beliefs and subjective sentiments cannot be shifted easily,” he said, adding that this is worsened by AI and deepfakes that make falsehoods more realistic and convincing.

At the end of the day, Pofma cannot be the only solution to combat falsehoods, he said.

“A holistic approach calls for the involvement of different stakeholders including civil society, fact-checkers, journalists and academic institutions. It is in everyone’s interest, not just the Government’s, to develop national resilience against falsehood.”


