SDAIA Issues Deepfakes Guidelines to Regulate Responsible AI Use

Riyadh: To address the rapid evolution of synthetic media, the Saudi Data and Artificial Intelligence Authority (SDAIA) has issued the "Deepfakes Guidelines: Mitigating Risks While Fostering Innovation." The document defines deepfakes as content manipulated via deep learning and warns that distinguishing such media from reality is becoming increasingly difficult as these tools grow more accessible.

According to Saudi Press Agency, while acknowledging positive applications in sectors such as education, healthcare, and entertainment, SDAIA warns of significant risks, including financial fraud, impersonation, and the spread of misinformation. The document highlights dangerous practices in which voice and facial impersonation are used to bypass traditional security methods, emphasizing the potential for severe social and humanitarian consequences.

To mitigate these risks, the document mandates that developers and creators adhere to strict ethical and regulatory standards. These include protecting personal data, obtaining explicit consent, and implementing technical safeguards such as digital watermarks and source verification.