
An open letter signed by numerous academics, politicians, and AI experts calls for legislation against deepfakes. The signatories express deep concern about the growing threat deepfakes pose to society and urge governments to enact measures to combat the dissemination of harmful AI-generated content.

The letter proposes the enactment of three key laws:

  • Making deepfake child pornography illegal

  • Introducing criminal penalties for individuals who knowingly create or knowingly spread harmful deepfakes

  • Requiring software developers and distributors to a) prevent their products from creating harmful deepfakes and b) accept liability if those safeguards are too easily bypassed

Notable figures who have endorsed the letter, which currently boasts 778 signatures, include businessman and politician Andrew Yang, filmmaker Chris Weitz, researcher Nina Jankowicz, academic and psychologist Steven Pinker, actor Kristi Murdock, physicist Max Tegmark, and neuroscientist Ryota Kanai.

The signatories represent various sectors, primarily in artificial intelligence, academia, entertainment, and politics. The letter emphasizes that “signers may have different motivations for supporting the statement” due to the diverse backgrounds of the individuals involved.


Concerns about AI-generated media and its risks have grown significantly over the past year. Members of SAG-AFTRA have lent their support, after the union's prolonged strike included negotiations over the use of AI in the entertainment industry. Politicians are also scrambling to address the spread of deepfakes: Homeland Security has begun recruiting AI experts, and countries such as India are grappling with AI-generated misinformation ahead of elections.

Deepfakes have also sparked debates over nonconsensual pornography, with incidents like the circulation of fake images of Taylor Swift underscoring the urgent need for legal and societal intervention.

This letter adds to the growing calls for measures to regulate and prevent the proliferation of deepfakes. The European Union is working on legislation to criminalize AI-generated images depicting child abuse, and the UK moved last year to crack down on the sharing of deepfake porn.

There is still a significant amount of work required on a global scale, both from a legal perspective and in collaboration with tech companies themselves.


Meera Navlakha
Culture Reporter

Meera is a Culture Reporter at Mashable, joining the UK team in 2021. She writes about digital culture, mental health, big tech, entertainment, and more. Her work has also been published in The New York Times, Vice, Vogue India, and others.
