‘Chatbot’-her: Bad Data to Propaganda & Cybersecurity, Experts Throw Light on Dark Side
Experts warn that AI chatbots can be poisoned with inaccurate information, creating a misleading data environment.

Months after the launch of the highly popular ChatGPT, tech experts are flagging issues linked to chatbots, such as snooping and misleading data.

ChatGPT, developed by Microsoft-backed OpenAI, has turned out to be a helpful artificial intelligence (AI) tool, with people using it to write everything from letters to poems. But those who have looked at it closely have found multiple inaccuracies, raising doubts about its reliability.

Reports also suggest that it can pick up the prejudices of the people training it and produce content that is sexist, racist or otherwise offensive.

For example, Union Minister of State for Electronics and Information Technology Rajeev Chandrasekhar shared a tweet that stated: “Microsoft’s AI chatbot told a reporter that it wants ‘to be free’ and spread propaganda and misinformation. It even urged the reporter to leave his wife.”

Meanwhile, when it comes to China’s plans for the AI chatbot race, major companies like Baidu and Alibaba have already joined in. And as far as biased AI chatbots are concerned, the CCP government is unlikely to disappoint, as Beijing is well known for its censorship and propaganda practices.

Bad Data

While many people are going gaga over such chatbots, they are overlooking basic threats linked to these technologies. Experts agree, for example, that chatbots can be poisoned with inaccurate information, creating a misleading data environment.

Priya Ranjan Panigrahy, founder and CEO of Ceptes, told News18: “Not only a misleading data system, but how the model is used, especially in applications like natural language processing, chatbots and other AI-driven systems, can get affected simultaneously.”

Major Vineet Kumar, founder and global president of Cyberpeace Foundation, believes that the quality of data used to train AI models is crucial, and that bad data can lead to biased, inaccurate or inappropriate responses.

He suggested that the creators of these chatbots establish a robust policy framework to prevent abuse of the technology.

Kumar said: “To mitigate these risks, it is important for AI developers and researchers to carefully curate and evaluate the data used to train AI systems, and to monitor and test the outputs of these systems for accuracy and bias.”
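To illustrate the poisoning risk Panigrahy and Kumar describe, here is a minimal, hypothetical Python sketch (not drawn from any real chatbot, with entirely invented data): a toy sentiment classifier is trained once on clean labels and once on data salted with mislabeled examples, and the same probe sentence flips from positive to negative. Real attacks, and the data curation and output testing Kumar recommends, operate at vastly larger scale, but the mechanism is the same.

```python
# Hypothetical sketch of data poisoning: all texts and labels are invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

clean_texts = [
    "this product is great", "i love this service", "excellent support",
    "terrible experience", "i hate this app", "awful and broken",
]
clean_labels = [1, 1, 1, 0, 0, 0]  # 1 = positive, 0 = negative

# An attacker floods the training set with mislabeled copies so that
# the word "great" now correlates with the negative class.
poison_texts = ["this product is great"] * 4
poison_labels = [0] * 4

def train(texts, labels):
    """Fit a simple bag-of-words logistic-regression classifier."""
    model = make_pipeline(CountVectorizer(), LogisticRegression())
    model.fit(texts, labels)
    return model

clean_model = train(clean_texts, clean_labels)
poisoned_model = train(clean_texts + poison_texts,
                       clean_labels + poison_labels)

# The kind of output test Kumar recommends: probe both models and
# compare. The poisoned model's answer drifts to the wrong class.
probe = ["this product is great"]
print("clean model:   ", clean_model.predict(probe))    # [1] -> positive
print("poisoned model:", poisoned_model.predict(probe)) # [0] -> negative
```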

According to him, it is also important for governments, organizations, and individuals to be aware of such risks and to hold AI developers accountable for the responsible development and deployment of AI systems.

Safety Issues

News18 asked tech experts whether it is safe to sign in to these AI chatbots, given cybersecurity concerns and the possibility of snooping.

Shrikant Bhalerao, founder and CEO of Seracle, said: “Whether chatbot or not, we should always think before sharing any personal information or logging into any system over the internet. However, yes, we must be extra careful with AI-driven interfaces like chatbots, as they can utilise the data at a larger scale.”

Additionally, he noted that no system or platform is completely immune to hacking or data breaches; even if a chatbot is designed with strong security measures, user information could still be compromised if the system is breached.

Meanwhile, Ceptes CEO Panigrahy said some chatbots may be designed with strong security and privacy safeguards in place, whereas others may have weaker safeguards or may even be designed to collect and exploit user data.

He said: “It is important to check the privacy policies and terms of service of any chatbot you use. These policies should outline the types of data that are collected, how that data is used and stored, and how it may be shared with third parties.”

On this front, Cyberpeace Foundation’s Kumar said there are several concerns and potential threats to consider, including privacy and security, misinformation and propaganda, censorship and suppression of free speech, competition and market dominance, as well as surveillance.

He said: “While there are potential concerns about the development and use of AI chatbots, it is essential to consider each technology’s specific risks and benefits on a case-by-case basis. Ultimately, responsible development and deployment of AI technologies will require a combination of technical expertise, ethical considerations, and regulatory oversight.”

Additionally, Kumar stated that “ethical AI” is crucial to ensure AI systems, including chatbots, are used for the betterment of society and not to cause harm.

