
Meet the artists who turned deepfake on its head

By Dhamini Ratnam
Dec 23, 2023 12:13 PM IST

In a conversation, policy expert Apar Gupta and artists Shaina Anand and Ashok Sukumaran of CAMP unpack the fear that deepfake tech engenders

As Chemould Prescott Road’s 60th anniversary exhibition, held phase-wise across three locations since September, comes to a close this month, one particular work of art caught our attention for its use of a technology that is the subject of debate around the world. The technology formed the crux of the Silicon Valley drama in October, as OpenAI, the company that unleashed ChatGPT on the world in 2022, fired and rehired Sam Altman. It also formed the crux of a consultation paper released earlier this year by the Telecom Regulatory Authority of India (TRAI), which advocated for an authority to regulate the use of artificial intelligence using a risk-based framework. In August, shortly before the G20 summit, Prime Minister Narendra Modi spoke of the need for a global framework to help artificial intelligence (AI) expand “ethically”.

DeepFacts, 2-channel video, Install view, Chemould Prescott Road. (Photo Credit: Anil Rane)

The work, created by Mumbai-based Studio CAMP, uses AI software to create a two-channel video of gallerists Kekoo and Khorshed Gandhy speaking to each other about a range of things, including, but not restricted to, corrections of facts mentioned in the biography ‘Citizen Gallery: The Gandhys of Chemould and the Birth of Modern Art in Bombay’, written by Jerry Pinto. The nearly 16-minute-long video installation uses archival footage of Kekoo and Khorshed, who died in 2012 and 2013 respectively, superimposing their faces over the bodies of actors (close friends of the family; the man who ‘plays’ Kekoo is a cousin).

Titled ‘DeepFacts’, the artwork challenges the populist notion that places artificial intelligence within a framework of risk/harm and regulation, a position taken by policy and legal experts, and instead allows us to see the creative possibilities that this technology enables. We speak to the artists of Studio CAMP, Shaina Anand and Ashok Sukumaran, as well as advocate and public policy professional Apar Gupta, to comprehend the artwork and the framework. Are there aspects of this technology that art can open our eyes to?

Gupta pointed to the title of the work, which turns the idea of a deepfake on its head by offering, quite simply, the truth. The work, Gupta said, cracks open a window in the current risk-centred discussions around AI, allowing us also to speak of the technology as a freedom-of-expression issue.

Anand said that it was inaccurate to call the work an example of a deepfake, given its method of production (the artists were invited by the Gandhys’ daughter Shireen, who runs the gallery; family archives were made accessible by the show’s curator Shaleen Wadhwana; the training material used footage of the Gandhys shot by and provided by the family) as well as the site in which it is shown (a gallery, rather than being publicly available online like the rest of their work). “Its impulse is to offer a sharp dig at misinformation and fake news, which is why the title itself becomes DeepFacts,” she said.

Read excerpts of the conversation, edited for brevity:

Q: What got you interested in making this ‘deepfake’ video?

Shaina: The invitation to participate came through our friendship; we do not have a commercial relationship with the gallerists. The prompt provided by Shireen and Shaleen was for artists to work with the archives of the gallery as an institution, and also the family’s own personal archives, like Khorshed’s letters. We were also told that there is a lot of archival video as well as 8mm film, and asked if we would like to work with that. We are artists who work with film and archival video, but we also create and run archives. As we started burrowing into the Chemould archives, we began finding factual errors, and fabulations, and somehow the work began coming together quite nicely.

Ashok: The personae of Kekoo and Khorshed, both of whom passed away some time ago, appear on screens and talk in a contemporary way about what is being written about the gallery, about their history and their image, the history of modern art in Mumbai and also, in India. These are like two people who are reflecting, sort of fact-checking their histories, hence the title, DeepFacts.

Deepfake is just a technique that has been used to present this material. It has to do with the long interest from our side in virtuality and virtual personalities and things that we could say are coming in the future as challenges for all of us, from the point of view of law as well as questions of new kinds of beings who can remember more than us and who challenge us as ordinary humans today.

DeepFacts, 2-channel video, with audience. Chemould Prescott Road. (Photo Credit: Anil Rane)

Q: That's an interesting take on it. So you two leafed through archives that were provided to you by the gallery and used them as your source material?

Shaina: The film begins with an anachronism. Both of them are holding a book released in 2022, published long after both had passed away. They're holding the book, Citizen Gallery by Jerry Pinto, which has a photograph of the two of them in their 20s on the cover. A line on the blurb of the book asks, “Can a gallery be a responsible citizen?” That is very much the setting of the video. The two pull out exhibits, take out documents from their own archives, they specifically cite things; they are not just riffing. And, as they fact-check, they then produce new facts. So it is very much being a responsible citizen.

Q: Apar, do you want to jump in and tell us what you think of this video?

Apar: It challenged my view of the possibilities of deepfake technology, because technology is not neutral; it has a certain political intent. There are possibilities of benefit in deploying deepfake technology for educational purposes: it could possibly be used to show a classroom of students the Dandi March in 4K, for example. But again, this is an instrumentalist example of why it should not be banned in a law and policy response. The national conversation sparked some weeks ago [around the use of actor Rashmika Mandanna’s face on another woman’s body] emerges from a moral panic. And it's a justified moral panic to some extent, given that India is a patriarchal society, where men view women as bodies for amusement and immediately turn any kind of morphing or visual technology towards creating sexual content that can then serve their own pleasure.

I really benefited from the insight into Shaina and Ashok's creation process. There are certain inherent characteristics which AI researchers call AI ethics. There is, to a large extent, a law and policy vacuum here. The aspect of consent, for instance: how are you actually making it, in terms of the production? Because it's going to involve some ethical choices of representation and, finally, transparency, which is owed to the viewer. Are they going to know that they are seeing something synthetic and artificial? And as much as I don't like these adjectives, what also became clear to me is that both Shaina and Ashok implemented these various facets because it was a co-creation process with the family itself. The family provided the archival material, and they went through a process of workshopping it.

Q: Are you saying that in this situation, the consent of the family is enough?

Apar: Consent is incredibly important, especially for people who are living. For people who are deceased, and for public personalities, it becomes more difficult, in terms of it being prescribed as a property right. For instance, a very famous actor could say that there is a property interest in my personhood, and that it should be used in X, Y, and Z ways by contracting with people like my family members or the administrators of my trust who hold the legal title to do so. Anil Kapoor recently approached the Delhi high court and won a case that gave him rights over his voice, style, and manner, so that people can't use them for commercial purposes.

DeepFacts, 2-channel video (Photo Credit: Anil Rane)

Q: What made you go towards this tool that has clearly generated a lot of panic and fear in terms of how it is used?

Shaina: If you see CAMP’s work, our modes of filmmaking are very much about working with people's subjectivities, which could include workers, sailors, CCTV operators, and the other kinds of people and groups we've worked with on our films. Even the actors in the two-channel film are closely connected. The person playing Khorshed is Jennifer Mirza, who, like Khorshed, is assiduous and particular and careful about things. She, too, belongs to that generation that doesn't easily let loosely-written things pass. In Kekoo's case, the actor playing him is his 91-year-old cousin. He's filmed in the original frame shop [on Princess Street] and she in Khorshed’s bedroom in their home [in Bandra]. A lot of the fact-checking of our script also happened with the family. So, it is not an alienated method. This is how we work collaboratively.

Q: Ashok, what do you think about generative AI as a tool?

Ashok: There's a lot of hype around generative AI at the moment. In the art world, we see a lot of these hypes, like the one around NFTs, and much of it could be around things that we won't necessarily be talking about in a couple of years. My guess is that some of this generative stuff is like that. But obviously AI as a broader field, as a form of analysis of large data, of digesting information, is going to stay. This is where the work also comes from. There is a large body of written knowledge, and here is a kind of agent, who is not necessarily a person in the classical sense but a personality, going through this material, who can search for facts or errors. So that's not AI; it's just the way that computers have interacted for the past 50 years. Now, in combination with a person, there is a new sort of mixed organism. Think of it as a sort of extended person, like avatars, or all of us with our phones and our memory banks: we are certainly as real as the humans of the 19th century. This is also reflected in the law, where personhood is now even granted to corporations and rivers and all kinds of entities which are not classical human beings.

We're in this moment, where a lot of these boundaries are getting mixed up. So we wanted to explore this and come up with a positive articulation rather than only truck in fear-mongering. In this particular video, this is a very specific form which is not even generated. It's the face of a person mapped onto the face of somebody who's playing them. So, it's not generative AI. In theatre, in Kathakali, and in other traditions, the face is different from your normal human personality. There's a long tradition of the use of masks.

Q: That's a very good point you raised. This is not generative AI because this video uses actual archival text, and the screenplay corrects the archival text. It's not generating anything new based on existing information, which is also where some of the current anxieties lie. In the union strikes by actors and scriptwriters of the Writers Guild of America and SAG-AFTRA, they spoke of the unchecked use of generative AI, and it’s not an unfounded fear. Apar, you talked about how law also looks at AI in a boundaried way.

Apar: The work helps us question the accepted policy discourse around generative AI, which in turn stems from an understanding that we need to view it as a risk and then deal with it through laws and policies. For example, what does it mean for loss of employment, for its social impact on elections, and for the reputational harm it can cause women and minorities? I think there is a fuller conception of free expression, which comes through this work.

The reason I say this is that, if you look at the artwork, it's remarkable that the first thing it does is deal with the preconceived bias around the primary basis of this technology: that the use of GPT gives us results that are a sort of hallucination, incorrect information.

This work opens the window of imagination that an alternative path is possible. The anxiety around generative AI, especially in the creative industry, is that it's reductive; it only leads to a probabilistic output based on text-to-image or image-to-word models scraped from all known web information. So, all art, literature, analysis, and imagination will be blinkered or limited by our past. But the way that Shaina and Ashok have constructed the artwork is that [Kekoo and Khorshed] have corrected the materials. They've said that, oh, this is a biographical error here, and it is a kind of revaluation of the material.

They fact check what's already there and have used AI, as a tool, to do so very effectively.

This advances an understanding of free expression through the artwork, and this is missing right now in our conversations around deepfakes.

Finally, I would also like to say that it is a very human thing to see something, say a historical film, and presume that it is true even if it is clearly labelled fiction. We shouldn't place the entire burden on people to fact-check what they see on video just because it's created using deepfakes.

The ethical practices that come through in the creation of this work are important and we will all benefit if, over a period of time, CAMP does more of these kinds of works and also explains the process that they have adopted in the creation of the work.

Q: Shaina, Ashok, would you like to address what Apar said? You may not necessarily use the word ethics, but how would you think of this tool, as artists who work with media in subversive ways?

Shaina: Ethics is key for us, and ethics begins where the law ends. As Ashok said, the use of the tool was creative to bring alive Kekoo’s and Khorshed’s speaking selves in an anachronistic way, and the essence of the work is the fact checking. We were also making a very sharp dig at misinformation and fake news, which is why the title itself became DeepFacts.

Ashok: All our intelligences are hybridized. We are no longer simple human beings. We are all hybridized; there are lots of streams of information flowing through us. We’re at the beginning of a historical phase where humanity and technology are going to be connected in good and bad ways, and there will be a lot of virtual pressure on humanity. So these are really ethical experimentations in how to do this properly: how not to do it as a form of exploitation, but how to do it creatively, with the right processes, using the right tools, with the right documentation, and also with interesting results. This work, too, is an ethical experimentation with the idea of the archive and what people want to say about truth. Then there is the ethics of how to use this kind of technology with consent, with the participation of the community, and with thanks to the people who have written past accounts. That’s the artistic ethics. And finally, the output should be beautiful from the inside, in the way that it is made, and not only on the outside.
