How can AI help reliably identify falsified and manipulated digital content such as deepfakes, synthetic voices, and multimodal misinformation? This question is at the center of the Korean National Police Academy’s fake news project, a joint initiative involving the State Criminal Police Office, University of Göttingen, Bergische Universität Wuppertal, Soongsil University, Sungkyunkwan University, Yonsei University, and Hancom.
AI-generated media has significantly expanded the scope of digital crime. Highly realistic deepfake videos, cloned voices, and automated disinformation campaigns are now frequently used in fraud, identity theft, impersonation, and digital extortion. As a result, police work is increasingly shifting into the digital domain. While traditional street crimes such as pickpocketing are declining, citizens are more often confronted with sophisticated scams in which they are contacted by seemingly trustworthy individuals, authorities, or superiors and urged to transfer large sums of money, only to discover later that these communications were entirely AI-generated.
At the GippLab, the project is being carried out by incoming graduate researchers Adam Lehavi and Kia-Jüng Yang, under the supervision of Dr. Terry Lima Ruas and Dr. Jan Philip Wahle. Their work focuses on developing a state-of-the-art multimodal dataset and generation pipeline for AI-manipulated content, designed to keep pace with the rapidly evolving capabilities of generative models.
The fake news project addresses a growing societal and scientific challenge: existing deepfake detection systems are increasingly strained by the realism, diversity, and scale of AI-generated manipulations. Across a structured, multi-phase research program, the team will analyze gaps in current datasets, collect and preprocess authentic source material, generate single-modal and synchronized multimodal deepfakes, and systematically annotate and standardize the resulting data. The outcome will be a robust benchmark dataset and an automated pipeline built to keep pace with a rapidly evolving threat landscape.
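To make these phases concrete, the sketch below shows one way such a generation-and-annotation pipeline could be organized in Python. It is a minimal illustration under assumed structure, not the project's actual code: the `Sample` record, the `collect`, `generate_fakes`, and `annotate` functions, and the generator identifier are all hypothetical names introduced here for clarity.

```python
from dataclasses import dataclass, field
from enum import Enum


class Modality(Enum):
    VIDEO = "video"
    AUDIO = "audio"
    TEXT = "text"


@dataclass
class Sample:
    """One dataset entry: a source clip plus its manipulation metadata."""
    source_path: str
    modalities: list[Modality]
    is_manipulated: bool = False
    generator: str | None = None          # model that produced the fake, if any
    annotations: dict[str, str] = field(default_factory=dict)


def collect(sources: list[str]) -> list[Sample]:
    """Phase 1: wrap authentic source material as unmanipulated samples."""
    return [Sample(path, [Modality.VIDEO, Modality.AUDIO]) for path in sources]


def generate_fakes(samples: list[Sample], generator_name: str) -> list[Sample]:
    """Phase 2: derive synchronized multimodal fakes from authentic samples.

    A real pipeline would invoke generative models (e.g. face swapping or
    voice cloning) here; this stub only records the provenance metadata.
    """
    return [
        Sample(
            source_path=s.source_path + ".fake",
            modalities=s.modalities,
            is_manipulated=True,
            generator=generator_name,
        )
        for s in samples
    ]


def annotate(samples: list[Sample]) -> list[Sample]:
    """Phase 3: attach standardized labels so detectors can be benchmarked."""
    for s in samples:
        s.annotations["label"] = "fake" if s.is_manipulated else "real"
        s.annotations["generator"] = s.generator or "none"
    return samples


if __name__ == "__main__":
    real = collect(["clip_001.mp4", "clip_002.mp4"])
    fake = generate_fakes(real, generator_name="hypothetical-faceswap-v1")
    dataset = annotate(real + fake)
    for sample in dataset:
        print(sample.source_path, sample.annotations)
```

Keeping provenance (which generator produced each fake, and from which authentic source) in every record is what allows the resulting benchmark to be re-standardized as new generative models are added.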
By combining expertise in natural language processing, computer vision, and generative AI, the project aims to strengthen the scientific foundations of deepfake detection research. Beyond its academic impact, the project contributes to broader efforts against misinformation, cybercrime, and digital fraud, supporting reliable authenticity-verification systems for media, research, and society at large.


