A multilateral response needed to fend off AI threat
British physicist Stephen Hawking warned, “Unless we learn how to prepare for, and avoid, the potential risks, AI could be the worst event in the history of our civilization.”
Drawing attention to the pernicious uses of Artificial Intelligence, a technology that has evolved from resolving complex tasks in a jiffy to rapidly expanding applications in healthcare, finance, robotics, gaming, virtual assistants, autonomous vehicles, fraud detection and natural language processing, India’s External Affairs Minister S Jaishankar hit the nail on the head when he cautioned the world on Sunday that ‘AI is just as dangerous as nuclear weapons’. AI today is evolving at breakneck speed, teaching itself to play video games, reconstruct images and diagnose diseases.
To the unversed, AI is a collection of technologies that allows machines to learn, reason, act and comprehend much as humans do. It can perform complex tasks quickly and with ease, and many industries are scrambling to harness it. It does not merely mimic human intelligence; its pace and accuracy leave users baffled. Tamlyn Hunt, a researcher affiliated with the University of California, Santa Barbara, wrote in Scientific American that Artificial Intelligence algorithms will soon reach a point of rapid self-improvement that threatens our ability to control them and poses great potential risk to humanity. He quotes Geoffrey Hinton, ‘the godfather of AI,’ as saying, ‘It is hard to see how you can prevent the bad actors from using [AI] for bad things.’
In 1950, Alan Turing published “Computing Machinery and Intelligence,” introducing the Turing test and opening the doors to what would come to be known as AI. After lying relatively low for five decades, research gathered pace in the 21st century, delving deep into this unfathomable tool of super intelligence. 2022 saw major developments: Google fired Blake Lemoine after he claimed its LaMDA chatbot was sentient, that is, capable of perceiving or feeling things; DeepMind unveiled AlphaTensor; and Intel claimed its FakeCatcher real-time deepfake detector was 96 per cent accurate. OpenAI released ChatGPT on November 30 that year, bringing AI into mass use.
What is dismaying is that AI can improve itself exponentially, and it has already been seen to perpetuate societal biases inadvertently. Healthcare algorithms have been found to return less accurate results for black patients than for white patients. Amazon abandoned a recruiting algorithm after it was found to favour applicants whose resumes used words like “executed” or “captured.” Predictive policing tools often rely on historical arrest data, which can reinforce existing patterns of racial profiling and disproportionate targeting of minority communities. These are but the tip of the iceberg.
After the generative AI-based phishing attacks of 2023 and the 2018 hack of Facebook’s user data, the potential uses of AI by cybercriminals are alarming. AI has already become a key tool in information warfare, and AI-driven deepfake technology already poses a formidable challenge. For all its multifarious uses, AI is also a disruptive force: it can be used not only to manage critical infrastructure, be it defence or power, but also to cripple it.
How have we come to such a pass, worrying over what started simply as a machine-learning tool? Governments and security experts are now debating the devastation AI could wreak unless nations coordinate to put in place a code of ethics for its responsible use. Unleashed without any limitations, experts worry, it could threaten the global order. It is time nations came together to enact legislation and coordinate to prevent illegal and criminal activity carried out with the use of ICT. Bodies like the United Nations should seize the initiative, as few nations have the wherewithal to stop the march from algorithms to armaments.