Vishal Mathur picks his favourite read of 2024
Decoding some tough realities about AI and how certain AI simply does not work as it is supposed to, or as advertised by tech companies
Chatbots, image generation, music creation and photo editing are just some of the use cases being redefined by artificial intelligence (AI). Amidst this relentless hype, Arvind Narayanan, a computer science professor at Princeton University, and Sayash Kapoor, a PhD candidate in computer science at Princeton, decode in their book AI Snake Oil some tough realities that often remain hidden behind the scenes. It is clear that generative AI is moving forward rapidly. How else would you be able to generate a photo that realistically recreates the prompt "a cow in a kitchen wearing a pink sweater"? But once the fun and games are over, it isn't too difficult to notice that this technology is still immature at best, and highly unreliable when it spews incorrect facts or misinformation.

Narayanan talks about how certain AI simply does not work as it is supposed to, or as advertised by tech companies. "A major societal problem" is upon us, he argues, because humans seem incapable of, or unwilling to, distinguish AI that works from AI that doesn't, and to correct AI when it makes a mistake. The book points to another type of AI that is increasingly finding traction, called predictive AI. Unlike the technology shown in the film Minority Report, real-world applications are proving problematic. "Predictive AI makes consequential decisions about people applying for jobs, loans, or in criminal justice systems. This dubious AI predicts who will commit crimes or repay loans. These predictions are difficult, and the systems are being used unjustly," Narayanan told HT.

Is AI the solution for all that ails the world, as tech companies would like us to believe? It may not be so. AI Snake Oil takes the example of computer programming, the coding that software and app developers do. Having AI generate code in place of a human may be cost efficient for businesses, but what happens when bugs in this AI-generated code are exploited by hackers? The book also tackles the question of AI being used by social media platforms for content moderation, noting that even if the algorithmic errors that lead to incorrect flagging of posts are fixed, the question of bias remains, and that could hamper free speech. This book is an excellent assessment of an important chapter in the AI journey.