YouTube's AI Nightmare: Deepfakes Smear Singapore PM as 70% Fear AI Will Sway Elections
NovumWorld Editorial Team

Deepfake videos of Singapore’s Prime Minister Lee Hsien Loong peddling crypto scams illustrate synthetic media’s accelerating assault on geopolitics and public trust.
- Singapore may fine social media services up to $1 million for failing to remove deepfake content, but can financial penalties keep pace with rapidly evolving AI tech?
- An Elon University survey revealed that 70% of Americans fear AI-generated misinformation will sway the 2024 elections, exposing vulnerabilities in the democratic process.
- The proliferation of convincing deepfakes requires both individual vigilance and platform accountability to combat the “erosion of trust” warned about by experts.
Singapore’s $1 Million Headache: Can Fines Stop the Deepfake Deluge?
Singapore faces an uphill battle against the rising tide of AI-generated disinformation, particularly deepfakes featuring Prime Minister Lee Hsien Loong. These synthetic videos, often promoting cryptocurrency schemes, have sparked serious concern within the government, leading to proposed legislation aimed at curbing their spread. The Elections (Integrity of Online Advertising) (Amendment) Bill seeks to criminalize manipulated content that falsely depicts political candidates, but the effectiveness of this measure in the face of increasingly sophisticated deepfake technology remains to be seen.
The government is considering several methods to regulate deepfakes, including labeling schemes and temporary bans, according to Josephine Teo, Singapore’s Minister for Digital Development and Information. Singapore’s regulatory strategy includes the potential for significant financial penalties. The country may fine social media services up to $1 million for failing to comply with orders to remove offending deepfake content. Individuals involved in creating or disseminating deepfakes could face fines up to $1,000 and/or a year in jail.
However, the core issue lies in the difficulty of detection and enforcement in a rapidly evolving technological landscape. As Hany Farid from UC Berkeley noted in 2020, “the capacity to generate deepfakes is proceeding much faster than the ability to detect them.” Can legislation and fines truly deter malicious actors when the tools to create and distribute deepfakes are increasingly accessible and sophisticated, running on consumer-grade PCs or rented cloud GPUs? This raises a critical question: is Singapore’s million-dollar headache merely a symptom of a much larger, systemic problem that requires a more comprehensive and proactive approach?
The TikTok Defense: Why Tech Platforms Are Failing to Stem the Synthetic Tide
Tech platforms, despite their vast resources, are struggling to contain the proliferation of deepfakes. In 2023, experts estimated approximately 500,000 global cases of deepfake video and audio circulating on social media. This sheer volume presents a significant challenge for content moderation teams. Platforms are trying to fight AI with AI, but generation tools continue to outpace detection.
YouTube, TikTok and X (formerly Twitter) have each pledged to remove deepfakes that violate their terms of service, but their reactive approach often proves insufficient. The algorithms designed to detect fake content are constantly playing catch-up as malicious actors refine their techniques. This cat-and-mouse game leaves a window of opportunity for deepfakes to spread rapidly, reaching millions of users before they can be flagged and removed.
This reactive posture is not merely a technical limitation, but also reflects a deeper conflict of interest. Engagement drives revenue, and sensational or controversial content, including deepfakes, often generates high levels of user interaction. Critics argue that platforms are incentivized to prioritize growth over safety, turning a blind eye to the damage caused by synthetic media.
Hany Farid’s Warning: The “Erosion of Trust” the Industry Ignores
The immediate harm caused by individual deepfakes—such as financial scams or political smear campaigns—is readily apparent. However, Hany Farid of UC Berkeley argues that the more insidious threat lies in the gradual erosion of trust.
Farid’s perspective highlights the long-term consequences of unchecked deepfake proliferation. If people can no longer trust what they see or hear online, the foundation of informed public discourse crumbles. This erosion of trust has far-reaching implications, undermining faith in institutions, fueling social division, and making it harder to discern truth from falsehood. The pervasive presence of deepfakes creates an environment of uncertainty and suspicion, where conspiracy theories flourish, and legitimate information is dismissed as “fake news.”
This is a multi-trillion-dollar question for companies like Meta, Google, Microsoft and OpenAI. How does one build a generative AI business on the promise of information fidelity when the very concept of visual truth is under threat? This may be the biggest open secret in Silicon Valley.
The TAKE IT DOWN Act’s 48-Hour Deadline: A Race Against Virality
The TAKE IT DOWN Act, introduced in 2025, represents an attempt to impose stricter accountability on platforms regarding the removal of non-consensual intimate imagery (NCII), including AI-generated deepfakes. The legislation mandates that platforms remove such content within 48 hours of notification, targeting the “revenge porn” and intimate-image-abuse markets that AI tools have accelerated.
While the TAKE IT DOWN Act represents a step in the right direction, it faces several practical limitations. The 48-hour deadline is ambitious, particularly for smaller platforms with limited resources. The process of verifying the authenticity of a deepfake and determining whether it constitutes NCII can be time-consuming and complex. Platforms risk facing legal challenges if they incorrectly flag legitimate content as deepfakes, or if they fail to remove offending content quickly enough.
Moreover, the Act does not address the underlying problem of deepfake creation. As long as the tools to generate synthetic media remain readily available, the cycle of creation and removal will continue, placing a constant burden on platforms and law enforcement agencies. This calls into question the scalability and long-term effectiveness of a reactive approach that focuses solely on content removal.
The Post-Truth Era: How Deepfakes Will Reshape Elections and International Relations
Deepfakes pose a significant threat to the integrity of elections and international relations. A survey from Elon University indicated that 70% of Americans believe the 2024 election will be impacted by AI-generated false information. This widespread concern is justified, given the potential for deepfakes to be used to spread disinformation, manipulate public opinion, and undermine trust in democratic processes.
Deepfakes can be deployed to create false narratives about political candidates, misrepresent their positions on key issues, and even fabricate evidence of wrongdoing. These synthetic videos can go viral on social media, reaching millions of voters before they can be debunked. The resulting confusion and distrust can erode faith in the electoral system and discourage participation. As of April 2024, 11 US states have adopted laws covering AI in political ads.
The impact of deepfakes extends beyond domestic politics. Intelligence agencies are exploring integrating synthetic media into information warfare strategies. Deepfakes can be used to sow discord between nations, incite conflict, and undermine international alliances. The ability to create convincing but false narratives poses a serious threat to global stability and security.
The Bottom Line
The deepfake crisis is only accelerating as rendering costs plummet and API access expands. In Singapore, PM Lee Hsien Loong has been targeted. In the US, voters worry about AI influence. According to Dymples Leong of the S. Rajaratnam School of International Studies, our ability to discern real from fake has never been more challenged as deepfakes go mainstream in the online information space.
Individuals must exercise critical thinking: cross-check claims against secondary reporting and learn to spot common “tells” of AI video, such as unnatural blinking, inconsistent lighting, and unrealistic reflections.
Prepare for an era where seeing isn’t believing.