'Godfather of Artificial Intelligence' Quits Google, Warns About the Dangers of AI | Explained
Explained: Geoffrey Hinton has said AI systems were eclipsing human intelligence in some ways because of the amount of data they were analyzing

A computer scientist often dubbed “the godfather of artificial intelligence" has quit his job at Google to speak out about the dangers of the technology, US media reported Monday. Geoffrey Hinton, who created foundational technology for AI systems, told The New York Times that advancements made in the field posed “profound risks to society and humanity".

“Look at how it was five years ago and how it is now," he was quoted as saying in the piece, which was published on Monday. “Take the difference and propagate it forwards. That’s scary."

Hinton said that competition between tech giants was pushing companies to release new AI technologies at dangerous speeds, risking jobs and spreading misinformation. “It is hard to see how you can prevent the bad actors from using it for bad things," he told the Times.

In 2022, Google and OpenAI — the start-up behind the popular AI chatbot ChatGPT — started building systems using much larger amounts of data than before.

Hinton told the Times he believed that these systems were eclipsing human intelligence in some ways because of the amount of data they were analyzing.

“Maybe what is going on in these systems is actually a lot better than what is going on in the brain," he told the paper.

While AI has been used to support human workers, the rapid expansion of chatbots like ChatGPT could put jobs at risk. AI “takes away the drudge work" but “might take away more than that", he told the Times.

The scientist also warned about the potential spread of misinformation created by AI, telling the Times that the average person will “not be able to know what is true anymore."

Hinton notified Google of his resignation last month, the Times reported. Jeff Dean, lead scientist for Google AI, thanked Hinton in a statement to US media. “As one of the first companies to publish AI Principles, we remain committed to a responsible approach to AI," the statement added. “We’re continually learning to understand emerging risks while also innovating boldly."

Hinton is Not Alone

In March, tech billionaire Elon Musk and a range of experts called for a pause in the development of AI systems to allow time to make sure they are safe. An open letter, signed by more than 1,000 people including Musk and Apple co-founder Steve Wozniak, was prompted by the release of GPT-4, a much more powerful version of the technology used by ChatGPT.

Hinton did not sign that letter at the time, but told The New York Times that scientists should not “scale this up more until they have understood whether they can control it."

What are the Dangers of AI?

Yoshua Bengio is a professor and artificial intelligence researcher at the University of Montreal. He has spent the last four decades inventing technology that powers systems such as GPT-4, according to a report by The New York Times. For their work on neural networks, Bengio and Hinton, together with Yann LeCun, received the Turing Award, also known as “the Nobel Prize of computing," in 2018.

A neural network is a mathematical system that learns skills by analyzing data. Around five years ago, companies such as Google, Microsoft, and OpenAI began developing large language models, or L.L.M.s, which learn from massive amounts of digital text.

By identifying patterns in huge amounts of digital text, such as blog posts, poems, and computer programs, L.L.M.s learn to generate text on their own. They can even hold a conversation. This technique can help computer programmers, writers, and other workers come up with new ideas and complete tasks more rapidly. However, Dr. Bengio and other experts cautioned that L.L.M.s can learn undesirable and unexpected behaviours, the Times reported.
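The underlying idea, predicting what comes next from patterns observed in text, is easier to see at toy scale. The sketch below is a minimal, invented illustration in Python (a simple bigram word model, nothing like the neural networks Google or OpenAI actually build): it counts which word follows which in a tiny made-up corpus, then samples from those counts to produce new text. Real L.L.M.s apply the same next-token prediction idea with billions of learned parameters.

```python
import random
from collections import defaultdict

# Toy corpus; real L.L.M.s train on massive amounts of digital text.
corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased the dog ."
)

# "Identify patterns": count which word follows which (a bigram model).
follows = defaultdict(list)
tokens = corpus.split()
for current_word, next_word in zip(tokens, tokens[1:]):
    follows[current_word].append(next_word)

def generate(start, length=10):
    """Generate text by repeatedly sampling an observed next word."""
    word, output = start, [start]
    for _ in range(length):
        candidates = follows.get(word)
        if not candidates:  # no continuation ever observed; stop
            break
        word = random.choice(candidates)
        output.append(word)
    return " ".join(output)

print(generate("the"))  # e.g. "the cat sat on the rug . the dog ..."
```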

These systems have the potential to generate false, biased, or otherwise harmful information. Systems like GPT-4 make up information and misinterpret facts, a process known as “hallucination."

Companies are working to address these issues. However, experts such as Dr. Bengio are concerned that as researchers develop more powerful systems, they will introduce new risks.

Some Risks Involved

According to a report by Bernard Marr, there are many risks involved with the advancement of AI.

One danger is AI that is trained to do something harmful, such as autonomous weapons programmed to kill. It is also possible that the nuclear arms race will be supplanted by a worldwide autonomous weapons race.

Another thing to be wary of is social media, whose AI-driven algorithms are extremely effective at targeted marketing. These systems have a solid idea of who we are, what we enjoy, and what we think. Investigations are still ongoing into Cambridge Analytica and others associated with the firm, who used data from 50 million Facebook users to try to influence the outcome of the 2016 U.S. Presidential election and the Brexit referendum in the United Kingdom. If the accusations are true, they demonstrate AI’s power for social manipulation. By targeting individuals identified through algorithms and personal data, AI can spread any information its operators choose, in whichever format they deem most convincing, fact or fiction.

It is now possible to track and assess an individual’s every step, both online and as they go about their everyday business. Cameras are almost everywhere, and facial recognition algorithms recognise you. In fact, this is the type of data that will power China’s social credit system, which is expected to assign a personal score to each of its 1.4 billion citizens based on how they behave: things like whether they jaywalk or smoke in non-smoking areas, and how much time they spend playing video games.

Because machines can collect, track, and analyse so much information about you, it is entirely possible that they will use that information against you, which can result in discrimination, Marr’s report says.

AFP contributed to this report
