Institute for Digital Innovation

Real or Fake? Why AI Needs a Human Touch to Spot Deepfakes 

Deepfakes have evolved from a niche internet novelty into a pervasive threat to global cybersecurity. Because this synthetic media can so convincingly mimic real images, speech, and text, it gives bad actors an unprecedented means of weaponizing misinformation.

The tech industry’s immediate response has been to fight fire with fire, using Artificial Intelligence and Machine Learning tools to detect these fakes. However, AI and ML have significant limitations when operating on their own: they can be outpaced by new deepfake generation techniques, and they often fail to account for the end-user.

Former IDIA Fellow Erika P. De Los Santos argues that the most effective defense mechanism is not purely digital—it is collaborative.

Her research focuses on designing platforms for Human-AI collaboration. Rather than relying solely on automated detection, Erika’s framework synthesizes the rapid processing power of machine learning with the nuanced strengths of human cognition.

A critical component of this research is deeply understanding the human element: our cognitive biases, our vulnerabilities to deception, and how we build trust in digital systems. By incorporating these psychological factors into the design of cybersecurity tools, Erika aims to optimize how people and algorithms work together to spot deepfakes.
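One common way to realize this kind of human-AI collaboration is a triage pattern: the model handles clear-cut cases at machine speed, and ambiguous cases are escalated to a human reviewer. The sketch below illustrates that idea in Python; the names (`triage`, `CONFIDENCE_BAND`) and the specific thresholds are illustrative assumptions, not details of Erika's framework.

```python
# Hypothetical sketch of a human-in-the-loop triage step, assuming a
# deepfake detector that outputs a fake-probability score in [0, 1].
# The threshold band below is illustrative, not from the research.

from dataclasses import dataclass

@dataclass
class Verdict:
    label: str    # "real", "fake", or "needs_human_review"
    score: float  # model's fake-probability estimate

# Outside this band the model decides on its own; inside it, the
# ambiguous case is routed to a human analyst instead of auto-labeled.
CONFIDENCE_BAND = (0.25, 0.75)

def triage(score: float) -> Verdict:
    low, high = CONFIDENCE_BAND
    if score < low:
        return Verdict("real", score)
    if score > high:
        return Verdict("fake", score)
    return Verdict("needs_human_review", score)

# A confident score is labeled automatically; a borderline one is escalated.
print(triage(0.9).label)  # fake
print(triage(0.5).label)  # needs_human_review
```

The design choice here is that the system's uncertainty, rather than replacing human judgment, is used to decide when human judgment is needed most.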

Ultimately, this research goes beyond software engineering. It lays the groundwork for cybersecurity training that is inclusive and generalizable across diverse groups. By ensuring that defensive technology is accessible and intuitively designed for all users, Erika is helping to build a more resilient and better-protected digital society.

Meet the Researcher: Erika P. De Los Santos is a researcher specializing in human factors and applied cognition, focusing on the intersection of human psychology, artificial intelligence, and cybersecurity.