Addressing Deepfake Technologies Through Detection and Regulation: A Systematic Survey
Abstract
Deepfake technology is growing in popularity because of the rapid advance of artificial intelligence (AI), particularly Generative Adversarial Networks (GANs), which can create realistic-looking but fake audio and video content. Although this development has revolutionary possibilities, it also raises serious ethical and security concerns, including threats to public confidence, privacy, and security, as well as the potential for manipulation and disinformation. This survey provides a comprehensive overview of the current state of deepfake detection methods and of regulatory frameworks aimed at mitigating the risks associated with synthetic media. After screening over 73 documents, we narrowed the selection to 57 by applying evaluation criteria such as abstract, title, irrelevant focus, and duplication. We analyze recent technological approaches, including convolutional neural networks (CNNs), multimodal analysis, and biological signal detection, and evaluate legislative responses to deepfakes across various jurisdictions.
Article Details

This work is licensed under a Creative Commons Attribution 4.0 International License.