As India Preps For 2024, Why Sam Altman’s Warning Is Relevant
Drawing from the testimony of OpenAI chief executive Sam Altman to the US Senate on Tuesday, India must step up its regulatory efforts to shape a safe and responsible AI ecosystem.
Echoes from the Senate: Sam Altman’s Warning
“The more general ability of these models to manipulate, to persuade, to provide sort of one-on-one interactive disinformation… given that we’re going to face an election next year and these models are getting better. I think this is a significant area of concern.” So warned Sam Altman, the chief executive of OpenAI, before a U.S. Senate subcommittee.
His words of caution should resonate loudly in the corridors of power in India, a nation of over a billion people rapidly digitising and increasingly vulnerable to the potential dangers of AI.
Altman is slated to visit India in early June. The trip comes at a crossroads, as governments in the U.S. and the European Union grapple with AI’s societal impact and how to regulate it. Altman’s visit offers a golden opportunity for India’s policymakers and tech community to initiate a dialogue, not just about AI’s role in India but about India’s potential role in shaping global AI. It is time for India to contribute to the worldwide conversation and ensure that artificial intelligence, this era’s defining technology, is harnessed efficiently and ethically. It is not enough for AI to be for the people; it needs to be ‘of’ the people and ‘by’ the people, catering to India’s diverse mosaic.
India’s 2024 Elections: A Playground for AI Manipulation?
As we approach the 2024 elections in India, the potential for AI to be weaponized presents a sobering thought. With over 600 million internet users and an increasing reliance on digital communication, the country offers a vast and vulnerable battlefield for AI-driven disinformation campaigns.
Consider the case of ChatGPT, OpenAI’s conversational system built on large language models. While it is celebrated for its ability to write human-like text and assist with tasks from drafting emails to writing code, its misuse can have serious consequences. In the wrong hands, it could automate the production of misleading news and persuasive propaganda, or even impersonate individuals online, adding to the disinformation deluge.
Take the example of deepfake technology, which allows the creation of incredibly realistic and often indistinguishable artificial images, audio, and videos. In a country like India, with its diverse languages, cultures, and political ideologies, this technology could be leveraged maliciously, manipulating public opinion and disrupting social harmony.
The Spectre of AI in Elections: Global Examples
Indeed, the weaponization of AI during elections and campaigns is not a futuristic dystopia; it is a reality we are already beginning to grapple with. An alarming precedent was set during the 2016 US presidential election, when Cambridge Analytica, a British political consulting firm, was accused of harvesting data from millions of Facebook users without consent and using it to build psychological profiles of voters. Jump ahead a few years: in 2018, a deepfake video of President Ali Bongo sparked a political crisis in Gabon, with rumours about the President’s health culminating in a failed coup. And in India’s own backyard, the 2019 general elections saw accusations of AI-driven bots flooding social media with propaganda and dominating online conversations.
Photoshop on Steroids
“When Photoshop came onto the scene a long time ago, for a while, people were quite fooled by photoshopped images and then pretty quickly developed an understanding that images might be photoshopped. This will be like that, but on steroids,” Altman told the US Senate.
The Photoshop analogy captures AI’s potential to deceive. Just as Photoshop ushered in an era in which images could no longer be accepted at face value, AI technologies are reaching a point where they can generate content so convincingly real that it blurs the line between reality and fabrication.
As Altman rightly noted, the challenge lies in the speed and scale at which AI can produce this content. Unlike a photoshopped image, which takes individual time and effort to create, AI can churn out misleading content at an unprecedented pace. It is Photoshop on steroids, indeed.
This is a clear and present danger in a country like India, where the rapid spread of misinformation can have severe societal consequences. Imagine a deepfake video of a prominent political figure spreading hate speech, or fake news articles generated en masse by AI, fueling divisive narratives just days before the election. The potential for chaos is immense.
The Urgency for AI Regulation in India
India must heed these global wake-up calls, look inward, and address its unique challenges. Policymakers need to understand that if India does not develop its own approach to governing AI and generative AI tools, the misuse of these technologies could create serious societal and cultural problems.
The Altman warning bell is sounding at a time when India’s digital landscape is experiencing unprecedented growth. However, the noise of this growth should not drown out the alarm. As the world’s largest democracy gears up for another dance with destiny in its upcoming general elections, the call for stringent AI regulation has never been more pressing.
Now, imagine such disinformation campaigns playing out in India during an election year. With over 600 million active internet users and millions more coming online every year, the potential for AI-driven disinformation to spread and influence is enormous. It is a daunting prospect for a nation where electoral decisions often teeter on the razor’s edge of public sentiment.
AI’s ability to tailor content to individual users can be especially dangerous in a country as culturally and linguistically diverse as India. AI models can generate disinformation in local languages, tailored to prey on regional fears and prejudices, polarising communities and stoking discord.
The Quagmire of AI: India’s Moment to Act
IP protection, creativity, and content licensing are all areas that could become a morass if India does not act now. Without regulations, the misuse of AI in these areas could lead to many legal, ethical, and societal issues. It’s time to stop looking towards Washington and Silicon Valley for directional policies and create a tailored, comprehensive approach that considers India’s unique socio-political dynamics.
The country has a vibrant tech ecosystem, dynamic startups, and a growing community of AI researchers and practitioners. Harnessing their knowledge and expertise will be critical in understanding the nuances of AI and developing informed regulations.
A Call to Arms
In the face of these potential threats, complacency is not an option. Policymakers, tech industry leaders, and society at large need to engage in a comprehensive dialogue about AI and its implications. Awareness needs to be raised, and safeguards must be implemented. Regulatory measures need to strike a balance between promoting innovation and preventing misuse.
Sam Altman’s alarm bells should resonate not only within the US but also across the globe. It’s an urgent call to action for nations like India, where the stakes are high and the consequences are far-reaching. The 2024 elections may seem distant, but the time to prepare our defences against the onslaught of AI is now.
If there is one thing that history has taught us, it’s that forewarned is forearmed.
(Pankaj Mishra has been a journalist for over two decades and is the co-founder of FactorDaily.)
Disclaimer: These are the personal opinions of the author.