Therian Identity Faces $78 Billion Crisis: AI Deepfakes Threaten Reality Itself
NovumWorld Editorial Team

AI deepfakes threaten to completely erode public trust, especially for marginalized groups. The existential question of therian identity faces a monumental challenge.
- Deepfakes are projected to cost the global economy $78 billion due to misinformation, creating an existential crisis for the therian community as AI blurs the lines between authentic identity and fabrication.
- DeepMedia estimates 8 million deepfakes will be shared on social media by 2025, exacerbating the therian community’s struggles with authenticity and acceptance.
- Therians and those who support them must advocate for media literacy and authentication tools to protect against the misuse of AI to misrepresent and ridicule their identities.
The Therian Identity Crisis: $78 Billion of Doubt
The proliferation of AI-generated deepfakes poses a unique threat to the therian community, a subculture of people who identify as non-human animals. The threat extends beyond misinformation, striking at the core of their identity and authenticity in an increasingly digital world. Fake news already costs the global economy an estimated $78 billion, according to a study by the University of Baltimore and cybersecurity firm CHEQ.
Deepfakes, sophisticated AI-generated media that can convincingly mimic real people, threaten to create a “crisis of knowing,” eroding the foundations of shared understanding, according to Dr. Nadia Naffi. DeepMedia estimates that 8 million deepfakes will be shared on social media by 2025, a surge that will compound the therian community’s existing challenges with authenticity and acceptance, already acute amid current levels of misinformation.
Eroding Reality: Why Corporate DEI Statements Ring Hollow
Corporate DEI (Diversity, Equity, and Inclusion) statements often ring hollow in the face of a technological tsunami that threatens vulnerable communities. Deepfakes further complicate the reality for those already facing stigma: the therian community now confronts the prospect of its identities being fabricated and misrepresented at scale. As Dr. Nadia Naffi argues, deepfakes erode the very mechanisms by which societies construct shared understanding.
The lack of scientific validation for therianthropy, as highlighted by transgender influencer Camila D. Aurora, increases the risk of ridicule and misrepresentation. That risk makes it crucial for members and allies of the therian community to advocate for media literacy and authentication tools that can defend against AI-driven misrepresentation of their identities.
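To make “authentication tools” concrete: the simplest building block behind content authentication is a cryptographic hash, which provenance schemes (such as the C2PA content-credentials standard) layer signatures and metadata on top of. The sketch below is purely illustrative, using only Python’s standard-library hashlib; the byte strings stand in for real media files.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """SHA-256 digest of a media file's bytes; any single-bit edit changes it."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical stand-ins for an original video and a tampered copy.
original = b"frame-by-frame video bytes..."
tampered = b"frame-by-frame video bytes..!"

# Identical content always yields the same fingerprint; any alteration breaks it.
assert fingerprint(original) == fingerprint(original)
assert fingerprint(original) != fingerprint(tampered)
print(fingerprint(original)[:16], "...")  # first 16 hex chars of the 64-char digest
```

A bare hash only proves a file is unchanged since it was fingerprinted; real provenance tools also need a trusted record of who created the fingerprint and when.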
The Contrarian Crack: The Media’s Deepfake Blind Spot
Mainstream media often focus on the political ramifications of deepfakes, while the exploitation of vulnerable communities like therians is largely ignored. This blind spot leaves such groups exposed to targeted harassment and misrepresentation, and inadequately protected from the misuse of AI. Camila D. Aurora notes that the lack of scientific validation for therianthropy further fuels the potential for ridicule.
The therian community, often misunderstood, faces unique challenges from AI-driven deepfakes. There are also fears, reported by the Eurovision News Spotlight | Fact-Checking & OSINT Network, that far-right influencers are exploiting interest in the community to ridicule marginalized identities and bolster anti-gender rhetoric with AI-generated videos.
The $25 Million Deception: Real-World Limitations of AI Defenses
Despite advances in deepfake detection, real-world incidents reveal the limits of current AI defenses. Detection methods such as Convolutional Neural Networks (CNNs) and hybrid models offer a promising line of defense, but they are not foolproof and still struggle to stay robust against novel deepfake variations. A finance employee in Hong Kong lost $25 million after being deceived in a deepfake video conference.
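The shape of a CNN-based detector can be sketched in a few lines. The toy forward pass below (NumPy only) runs a frame through convolution, ReLU, global average pooling, and a sigmoid to produce a fake-probability; the kernels and weights are random and untrained, so it illustrates the architecture, not real detection performance.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(image, kernel):
    """Valid 2-D convolution of a single-channel image with one kernel."""
    h, w = image.shape
    kh, kw = kernel.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def detector_forward(image, kernels, weights, bias):
    """Toy CNN forward pass: conv -> ReLU -> global average pool -> sigmoid."""
    feats = []
    for k in kernels:
        fmap = np.maximum(conv2d(image, k), 0.0)  # ReLU nonlinearity
        feats.append(fmap.mean())                 # global average pooling
    logit = np.dot(weights, feats) + bias
    return 1.0 / (1.0 + np.exp(-logit))           # P(frame is fake)

# Untrained toy parameters: four random 3x3 kernels and a linear head.
kernels = rng.standard_normal((4, 3, 3))
weights = rng.standard_normal(4)
bias = 0.0

frame = rng.standard_normal((32, 32))  # stand-in for one video frame
p_fake = detector_forward(frame, kernels, weights, bias)
print(f"P(fake) = {p_fake:.3f}")
```

A production detector would stack many learned convolutional layers and train them on labeled real/fake frames; the fragility described above comes from that training data not covering new generation techniques.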
The rise in sophisticated scams, facilitated by increasingly convincing AI, makes fraud more accessible. Perry Carpenter, Chief Human Risk Management Strategist at KnowBe4, advocates AI threat-awareness training for employees, because the evolving nature of deepfakes constantly outpaces existing detection systems. The deepfakes themselves are typically produced with Generative Adversarial Networks (GANs), which pit two neural networks, a generator and a discriminator, against each other so that each forgery is effectively trained to evade detection. The real issue, ultimately, is the human element.
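The two-network dynamic can be shown with a deliberately tiny example. Below is a one-dimensional toy GAN in NumPy: the “generator” is a two-parameter affine map and the “discriminator” is logistic regression, alternating gradient steps on the standard adversarial losses. Real deepfake GANs use deep convolutional networks; only the adversarial loop is faithful here.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generator g(z) = a*z + b tries to mimic "real" data drawn from N(3, 1).
a, b = 1.0, 0.0
# Discriminator d(x) = sigmoid(w*x + c) tries to tell real from generated.
w, c = 0.1, 0.0
lr = 0.05

for step in range(2000):
    x_real = rng.normal(3.0, 1.0)   # an authentic sample
    z = rng.standard_normal()
    x_fake = a * z + b              # the generator's forgery

    # Discriminator step: push d(real) toward 1 and d(fake) toward 0.
    d_real, d_fake = sigmoid(w * x_real + c), sigmoid(w * x_fake + c)
    w -= lr * (-(1 - d_real) * x_real + d_fake * x_fake)
    c -= lr * (-(1 - d_real) + d_fake)

    # Generator step (non-saturating loss): push d(fake) toward 1.
    d_fake = sigmoid(w * x_fake + c)
    grad_out = -(1 - d_fake) * w    # dL_G / d x_fake
    a -= lr * grad_out * z
    b -= lr * grad_out

print(f"generator mean after training: {b:.2f} (real-data mean: 3.0)")
```

Each side improves only by beating the other, which is exactly why detectors trained on yesterday’s forgeries lag behind tomorrow’s generators.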
The “So What?”: A Future of AI-Fueled Stigma
The proliferation of deepfakes doesn’t just threaten individual identities; it also weaponizes stigma. The intersection of deepfake technology and social stigma creates an alarming future for marginalized communities like therians. The global increase in deepfake-related misinformation, with spikes in countries holding major elections, illustrates the widespread potential for misuse. And with SEC scrutiny increasing, misrepresentation of AI capabilities could lead to enforcement actions, according to the New York State Bar Association.
The threat of AI-generated harassment and exploitation, which can cause irreversible damage to victims before the content is debunked, cannot be overstated. A deepfake robocall imitating President Biden reached voters in New Hampshire, urging them not to vote. The “Take It Down Act,” signed by President Trump, introduces criminal penalties and requires technology platforms to remove non-consensual intimate deepfake content.
The Bottom Line
I stand with the therian community against the misuse of AI to target and ridicule marginalized groups. We must also advocate for and support media literacy initiatives within the community.
The reality check: AI won’t just amplify misinformation; it will create entirely fabricated realities that are difficult to distinguish from truth, and marginalized communities will suffer the most.
VALIDATED SOURCES AVAILABLE
- SSRN - AI Techniques for Deepfake Detection
- IJNRD - AI DEEP FAKE DETECTION RESEARCH PAPER
- MDPI - A Comprehensive Review of DeepFake Detection Using Advanced Machine Learning and Fusion Methods
- CPI OpenFox - Deepfakes and Their Impact on Society
- Frontiers - An AI-driven conceptual framework for detecting fake news and deepfake content: a systematic review
- ResearchGate - A Comprehensive Evaluation of Deepfake Detection Methods: Approaches, Challenges and Future Prospects
- Stimson Center - AI in the Age of Fake (Imagined) Content
- Blackbaud - The Emerging Threat of Deepfake Technology in Social Impact Organizations
- GoWell - Therians: Identity, Psychology, and Digital Culture
- Reddit - My “unpopular” therian content opinion
- UNESCO - Deepfakes and the crisis of knowing
- The Beckage Firm - The Landscape of Deepfake AI Legislation
- SEC.gov - AI, Deepfakes, and the Future of Financial Deception
- Wiley Rein - Trump Signs Law Expanding Tech Platform Requirements and FTC Enforcement on Intimate AI Deepfakes and Images
- Reddit - A short history of the different “waves” of the online therian community
- Eurovision News Spotlight | Fact-Checking & OSINT Network - Breaking down the culture war against “Therians”: AI-driven social media frenzy or politically charged disinformation campaign?
- New York State Bar Association - Regulating AI Deception in Financial Markets: How the SEC Can Combat AI-Washing Through Aggressive Enforcement