Artificial intelligence a double-edged sword? Experts discuss fake image concerns
Tech experts highlight the rising ethical issues in artificial intelligence, including biases and image manipulation. They call for responsible AI products.
While advances in artificial intelligence have eased previously difficult and time-consuming tasks, related issues warn us of larger risks. The manipulation of photos and videos, sometimes with malicious intent, is on the rise. (ALSO READ: Morphed photo of ‘smiling wrestlers’ viral? Bajrang Punia shares original with a warning)

The Hindustan Times interviewed two tech experts to get to the root cause of the problem and possible solutions. Atul Rai is the co-founder and CEO of Staqu Technologies, an AI firm focused on security and corporate big data analytics. We also spoke to VPNMentor, which found in a recent report that AI-generated pictures frequently reflect social prejudices: four popular image generators produced images that leaned into these biases when given stereotypically loaded keywords. For example, for the keyword "nurse," the tools predominantly generated images of women, while for "CEO," they mostly generated images of men. (ALSO READ: The rise of AI-driven deception: How a fabricated Pentagon explosion shook social media and stocks)
How common is bias in AI tools, and how do you think it will affect society?
VPNMentor spokesperson: The damage from bias in AI is amplified compared to other sources of bias. We accept that any individual or company may hold its own bias on a topic for any reason, but a significant majority of people perceive AI as an emotionless, perfect entity that cannot be biased. Given this perception, biased AI-generated images can perpetuate harmful stereotypes and exacerbate existing inequalities.
ALSO READ | Words to images: Blurring boundaries between real memories and AI creations
What do you think are the root causes of the bias in AI image-creator tools?
Atul: Biases are present in the data used to train the algorithms; that is the main root cause of the bias. For example, if most of the data used to train an AI image-creation tool is skewed towards a particular gender or race, the images the tool generates may also be biased.
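To make the point concrete, here is a minimal sketch, not Staqu's or any vendor's actual pipeline: it assumes a hypothetical list of caption–label pairs standing in for an image-caption training set, and simply measures how skewed the labels are for a given occupation keyword before any model is trained.

```python
from collections import Counter

# Hypothetical training captions with demographic labels (assumption,
# for illustration only).
training_samples = [
    ("a nurse checking a patient's chart", "female"),
    ("a nurse preparing medication", "female"),
    ("a nurse in a hospital ward", "male"),
    ("a CEO speaking at a conference", "male"),
    ("a CEO reviewing quarterly results", "male"),
    ("a CEO greeting employees", "female"),
]

def label_skew(samples, keyword):
    """Return the share of each label among captions containing the keyword."""
    counts = Counter(label for caption, label in samples if keyword in caption)
    total = sum(counts.values())
    return {label: n / total for label, n in counts.items()} if total else {}

for kw in ("nurse", "CEO"):
    print(kw, label_skew(training_samples, kw))

# A heavy imbalance here is the kind of skew a generator trained on this
# data tends to reproduce in its outputs.
```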
What ethical considerations should AI tool companies take into account when designing and testing their products?
Atul: AI is a reflection of society; any prejudices or biases that exist in society can also surface in AI. The AI industry must therefore be aware of ethical issues and of where its information comes from.
Rather than challenging AI alone, it is essential to challenge society and work to lessen the deeply ingrained biases within it. To address ethical concerns and ensure that AI technologies are developed responsibly and ethically, the AI industry must collaborate with society.
VPNMentor spokesperson: AI tool companies designing and testing image generation algorithms need to consider ethical factors such as ensuring diversity and representation in the training data, avoiding the perpetuation of harmful stereotypes, and being transparent about algorithm functionality. Additionally, to ensure accountability and transparency, companies should allow independent audits and scrutiny of their algorithms and decision-making processes.
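An independent audit of the kind described above can be quite simple in outline. The sketch below is an illustration, not VPNMentor's actual methodology: generate_image() and classify_gender() are hypothetical stand-ins for a text-to-image API and a perceived-gender classifier; the point is only to count outcomes per prompt and flag a skew.

```python
import random
from collections import Counter

random.seed(0)

def generate_image(prompt):
    # Placeholder for a real text-to-image call (assumption).
    return {"prompt": prompt, "id": random.randint(0, 10**6)}

def classify_gender(image):
    # Placeholder classifier; here it simply simulates a skewed generator.
    skew = {"nurse": 0.9, "CEO": 0.15}.get(image["prompt"], 0.5)
    return "female" if random.random() < skew else "male"

def audit(prompt, n=100, threshold=0.75):
    """Generate n images for a prompt, tally labels, and flag heavy skew."""
    counts = Counter(classify_gender(generate_image(prompt)) for _ in range(n))
    share = {label: c / n for label, c in counts.items()}
    flagged = max(share.values()) > threshold
    return share, flagged

for prompt in ("nurse", "CEO"):
    share, flagged = audit(prompt)
    print(prompt, share, "skewed" if flagged else "balanced")
```

Running such checks regularly, and publishing the results, is one concrete way companies can make the transparency and accountability mentioned above verifiable from outside.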
What resources are available to help identify and mitigate potential biases in AI-generated images?
VPNMentor spokesperson: Resources such as the AI Now Institute's guidelines for responsible AI use and the Algorithmic Justice League's toolkits can help users identify and mitigate potential biases. It is also essential to engage with developers and companies to encourage more ethical and transparent AI tools.
Do you support a government-led regulatory body to keep a check on AI, or is an industry-evolved framework better?
Atul: Whether a regulatory body or the industry itself oversees AI development, the important thing is to ensure the authenticity of information, along with accountability and transparency from its source.
However, the generative AI industry is still in the early stages of development, making it difficult to devise a universal strategy for its regulation. Industry leaders and technological trailblazers can collaborate with the authorities to develop a flexible strategy for limiting the misuse of AI while ensuring credibility and transparency.