In a parliamentary session, questions were posed regarding the accuracy and effectiveness of the Government’s deepfake detection technologies. These questions focused on the current accuracy rates of the tools used to identify AI-generated deepfakes, the methods for distinguishing between harmful and legitimate content such as satire or memes, and the procedures followed when videos are incorrectly identified as deepfakes.
The Government’s response highlighted the advanced and varied tools employed to detect and assess manipulated content, including deepfakes. These technologies are sourced from commercial providers, developed internally, or created through partnerships with research institutions such as the Centre for Advanced Technologies in Online Safety (CATOS).
However, the Government does not disclose specific accuracy rates for these tools. This is primarily to prevent potential misuse by malicious actors and to maintain the effectiveness of the detection methods. The continuous updates and improvements to these tools are necessary to keep pace with rapidly evolving technology.
Regarding the differentiation between harmful deepfakes and legitimate content, such as satire or memes, the Government explained that action is governed by the Protection from Online Falsehoods and Manipulation Act (POFMA). Under POFMA, action can be taken against content deemed false and detrimental to the public interest.
However, satire or parody on its own does not meet the criteria for POFMA action unless it contains falsehoods that could cause significant harm to the public interest. For individuals who believe POFMA directives have been wrongly applied to them, including in cases of deepfake content, there is a legal avenue to appeal these decisions in court.
Globally, there is increasing recognition of the need to address the risks associated with AI, particularly the misuse of deepfakes. Many countries have implemented safeguards, especially around election times, to protect the electoral process from manipulation.
The Government of Singapore is closely studying these international practices and evaluating whether additional measures are required. It is committed to enhancing its strategies to manage deepfake content and mitigate related risks. Updates on any new safeguards or developments will be provided as the review progresses.
Singapore is committed to empowering its citizens to identify and manage false information and deepfakes through comprehensive educational programmes and advanced detection technologies. By equipping the public with critical digital skills, the nation aims to enhance media literacy and safeguard against online misinformation.
OpenGov Asia reported that Senior Minister of State Janil Puthucheary acknowledged the urgent need to maintain information integrity amidst the rise of misinformation and disinformation globally. He highlighted the pervasive influence of digital platforms and advancements like Generative AI, which have exacerbated the spread of false content.
He outlined Singapore’s comprehensive strategy to combat misinformation, which includes POFMA, CATOS, and the S.U.R.E. programme. The approach combines legislation, technology, public education, and partnerships to ensure accurate information and maintain societal trust and resilience.
The National Library Board (NLB) launched its largest S.U.R.E. community outreach programme, “Be S.U.R.E. Together: Gen AI and Deepfakes Edition”, on June 15, 2024. NLB’s CEO emphasised the importance of equipping Singaporeans with digital skills to navigate the evolving technology landscape.
The initiative aimed to educate over 430,000 people about information literacy, with a focus on Generative AI and online threats such as deepfakes and scams. This year’s expansion included the S.U.R.E. Learning Community, developed in collaboration with CheckMate, to support learning and fact-checking efforts. Additionally, the programme offered online resources in multiple languages to ensure broader accessibility.
This comprehensive approach reflects the Government’s dedication to effectively managing the challenges posed by manipulated content while balancing the protection of legitimate expression. By staying informed about global practices and potentially implementing additional safeguards, the Government aims to ensure the integrity of information and address emerging technological threats.