
The search for balance in regulating deepfakes

By Ashish Bharadwaj, D Daniel Sokol, Chirantan Chatterjee, Simrean Bajwa
May 09, 2024 09:43 PM IST

The complexity and multifaceted nature of the deepfake issue underscore the need for a uniform regulatory and enforcement mechanism.

Deepfake videos have become a way to garner millions of views and generate revenue on social media. We have seen deepfakes of Barack Obama calling Donald Trump “a complete dips...” and of President Volodymyr Zelensky telling Ukrainians to lay down their arms. While most of these rumours have since been put to rest, such videos raise unsettling questions about where reality ends and fiction takes over.


Photo manipulation can be traced back to the 1800s, when retouching images to create an idealised picture was a regular occurrence. What has changed with the advent of technology is the ease and speed with which such manipulation can be done. The consequences of deepfake technology could be devastating: in seconds, manipulated content spreads across platforms. The diffusion of any technology in an ecosystem depends on trust, and deepfakes have created a trust deficit in a society where altering data has become easier to do and harder to spot.

Yet, at the outset, the term deepfake is problematic, as it carries a negative connotation and overlooks the innovation and positive uses of Artificial Intelligence (AI). Coined in 2017 by a Reddit user of the same name, the term refers to synthetic or altered content generated using deep learning algorithms. AI has enormous potential to reshape society for the better, and its deployment across sectors has been a game changer. In the global campaign to end malaria, David Beckham delivered an appeal in nine languages using a deepfake voice, illustrating how the technology can be harnessed to bring to life creative ideas that could not have been realised in the past.

On the flip side, many privacy and defamation issues have surfaced in the recent past. The risks identified include, but are not limited to, deepfake revenge porn, reputational damage, defamatory videos, voice cloning, news media manipulation, financial fraud, and threats to national security. A large share of deepfake adult content targets entertainment industry celebrities, and women in general are the most vulnerable to non-consensual deepfake videos, raising serious questions about their safety and privacy. The rise of deepfake pornography is driven by the availability of user-friendly tools and software, coupled with the negligible cost of swapping faces.

Another concern is ethics. Music composer AR Rahman found himself in hot water for using AI tools to resurrect the voices of the late singers Bamba Bakya and Shahul Hameed for a track. Cloning a voice not only implicates personality rights, it also poses serious questions about how far we can push technology in the name of creativity. Voice forms part of personality rights, and if such practices go unchecked, they may result in protracted lawsuits or even eventually replace human artistes.

Since existing legislation was drafted long before the emergence of AI technologies, there are gaps that need to be addressed. A blanket ban on deepfake content could stifle innovation and creativity. Another factor that requires attention is the lack of regulatory harmonisation, both across jurisdictions and across specific areas of law within a jurisdiction. Consequently, enforcement remains a challenge.

Instead of starting afresh, there needs to be a dialogue between intermediaries, industry experts, and the government to arrive at technology-based solutions. This would broadly include identifying deepfake content, labelling it and notifying the concerned party, and serving a takedown notice on the platform. Digital platforms, in turn, need to adopt a comprehensive approach to deepfake content. Meta, for instance, follows a three-pronged approach. The first prong is transparency, which helps users understand when they are interacting with AI-generated content. The second is for platforms to enforce existing community standards and self-regulation, ensuring the removal of content that does not adhere to the standards prevalent in the industry. The third is cross-industry collaboration to combat the deceptive use of AI. Ahead of the 2024 elections, Meta announced a revamp of its approach to altered content.
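To make the identify-label-notify-takedown workflow above concrete, here is a minimal, purely illustrative sketch in Python of how a platform might map a detection result to those steps. Every name in it (the Report and Action types, the decide_actions function, the 0.8 confidence threshold) is a hypothetical assumption for illustration and is not drawn from any platform's actual systems.

# Illustrative sketch only: hypothetical types and thresholds, not any platform's real API.
from dataclasses import dataclass
from enum import Enum

class Action(Enum):
    LABEL = "label"          # mark the post as AI-generated or altered
    NOTIFY = "notify"        # alert the person depicted or otherwise affected
    TAKEDOWN = "takedown"    # remove content that breaches community standards

@dataclass
class Report:
    content_id: str
    detector_score: float     # assumed confidence (0-1) that the content is synthetic
    violates_standards: bool  # e.g. non-consensual imagery or election misinformation

def decide_actions(report: Report, label_threshold: float = 0.8) -> list[Action]:
    """Map a detection report to the responses outlined in the article."""
    actions: list[Action] = []
    if report.detector_score >= label_threshold:
        actions.append(Action.LABEL)    # transparency: tell users it is AI-generated
        actions.append(Action.NOTIFY)   # inform the concerned party
    if report.violates_standards:
        actions.append(Action.TAKEDOWN) # enforce existing community standards
    return actions

if __name__ == "__main__":
    # Example: a high-confidence synthetic video that also breaches policy.
    report = Report(content_id="vid-123", detector_score=0.93, violates_standards=True)
    print([a.value for a in decide_actions(report)])  # ['label', 'notify', 'takedown']

Even in this toy form, the sketch shows why the steps need coordination: labelling depends on detection confidence, while takedown depends on policy rules, and the two can disagree.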

The complexity and multifaceted nature of the deepfake issue underscore the need for a uniform regulatory and enforcement mechanism. Governments have to strike a balance between innovation on the one hand and community welfare on the other, before the perils eventually outweigh the perks.

Ashish Bharadwaj is dean & professor, BITS Law School Mumbai; D Daniel Sokol is professor, USC Gould Law School; and Chirantan Chatterjee is professor, Sussex University. Simrean Bajwa, researcher, BITSLAW, contributed to this article. The views expressed are personal
