Toby Walsh – “We fear that what we create will get the better of us”
The author of Faking It: Artificial Intelligence in a Human World on the exciting possibilities of AI and on being banned by Russia for advocacy against “killer robots”
Your bio says you have “been dreaming about the artificial since a young age.” Were you dreaming of artificial intelligence, too? What were the dreams of young Toby like, and how different were they to those of Professor Toby?

Those dreams were fuelled by science-fiction works by authors like Isaac Asimov and Arthur C Clarke, who painted a picture of a future full of robots and intelligent computers. Once I started programming, I realised that I could help build that future. I sold my first computer game at the age of 15 and, shortly after, started a software business in the school holidays. Long term, my goals have always been the same: to build intelligence into machines. But what has surprised the adult me is that this isn't without its problems. From the displacement of jobs to the transformation of warfare, there are many potential downsides as well as amazing upsides to AI.
Having written on the arrival of AI, the future of AI, and a meditation on the morality of AI, what compelled you to write Faking It?
Very simple. I realised I hadn't said much about deep fakes in Machines Behaving Badly, my last book on AI and ethics. And then [OpenAI's] GPT-3 came out, and I realised we had a real problem on our hands.

Besides bad journalism, what else do you think led to the AI hype?
I suspect it reflects some deep psychological and human fears. These fears can be found in many creation myths, as well as in stories like [Mary Shelley’s] Frankenstein. We fear that what we create will get the better of us. And given how powerful AI will be, there is perhaps much we should fear.
In an interview, Nobel Laureate Wole Soyinka rejected the term AI. He said, "All knowledge is a creation of human beings, including language, so that as a Yoruba, I will not presume to translate what AI means in English, as it is not my original language. I only have 'artificial knowledge' of the English language, do you see?" Faking It also deliberates on nomenclature a lot. You have written about the "artificiality of artificial intelligence" and offer interesting historical accounts of "cybernetics" in the book. How comfortable are you with using "artificial intelligence" over other terms? Which do you prefer: artificial intelligence, cybernetics, or "surrogate intelligence", as Soyinka suggests? Also, given that AI, GenAI, and LLM are being used interchangeably, do you believe there will be more confusion in future?
I used to dislike the name "artificial intelligence" greatly. But it is a name that has stuck. And the deeper I got into writing Faking It, the more I realised that it is a good name: it should remind us that artificial intelligence is going to be very artificial, very different to human intelligence. On the other hand, I still wish we hadn't come up with the name "generative AI". This name focuses your attention on the idea that the AI can generate stuff: photorealistic pictures, or poems in the style of Shakespeare. This is all very fun. But in practice, the ability of these models to summarise and synthesise content is far more useful.
In the book, you highlight problematic experiments like Stanford University’s ML algorithm that claimed to determine a person’s sexual orientation. Would you like to comment on other such bizarre experiments?
Another problematic example is generative AI that can remove the clothes from someone in a picture. There are very few good uses for such a tool, and many, many harmful ones.
Though data-collection practices of organisations like OpenAI have been questioned, it seems that they’re immune to any criticism. What can be done to combat such thefts of intellectual property?
There are some very strong class-action suits being brought against companies like OpenAI by authors, artists and organisations like the New York Times. I am confident that the courts will decide that training on intellectual property without consent or compensation is not acceptable. And just like music streaming services had to start paying for content, so will these AI companies.
Crimes like “cyber kidnapping” and rape in the metaverse are increasingly being registered. You too have cautioned about several dangerous outcomes. What can be done to prevent such crimes?
I believe we need to apply existing laws more strongly. We have often given the tech giants a free pass. We need to hold them more accountable for the content that they host. It's not as if they don't make spectacular profits: they have margins greater than almost any other business, often returning 30% or 40% profit on turnover. Some of this needs to be invested in public safety.
You write that “any choice requires free will. But a computer doesn’t appear to have any free will.” Do you anticipate trouble that the deterministic nature of predictive intelligence invites?
The predictability of AI is one of its strengths. Humans are much less predictable. We have the possibility of replacing this with much more consistent and evidence-based decision-making.
Perhaps the world is geared towards more immersive experiences when it comes to the cohabitation of humans and AI. What words of caution do you have for future generations?
AI is, like any other human invention, just another tool that can amplify human skills.
Your advocacy against the weaponisation of robots has resulted in an indefinite ban from Russia. Do you anticipate that such censorship will become the norm in future for anyone who registers dissent against malpractices relating to AI, ML, and the like?
I was surprised but pleased to be banned indefinitely from Russia. Pleased because it meant that my advocacy was having an impact. But we do have to worry about the ability of individuals to stand up and advocate for change. The large tech companies have the income of small nation-states. This gives them a lot of power. On the other hand, Project Maven (thousands of Google employees signed a petition against the firm’s engagement with the US military to help them use AI “to improve drone strikes in the battlefield”) demonstrates that people can push back.
Could you please use ChatGPT to generate answers to select questions above and share its responses as-is?
Input: Caution for Future Generations
ChatGPT response: As we move towards more immersive experiences with AI, future generations must approach this co-habitation with caution. Ethical considerations, responsible development, and a deep understanding of the societal impact of AI are paramount. Balancing innovation with ethical considerations will be crucial for a harmonious integration of humans and AI.
Input: Censorship and Dissent Against Malpractices
ChatGPT response: The ban from Russia highlights potential consequences for dissent against AI malpractices. While I cannot predict the future, it’s plausible that criticism of AI practices may face censorship, driven by various motives, including political, economic, or ideological interests. Vigilance, global collaboration, and advocacy for ethical AI practices can mitigate such risks.
Saurabh Sharma is a Delhi-based writer and freelance journalist. They can be found on Instagram/X: @writerly_life.