Brendan Duncan No Comments
Image of chatbot pertaining to chatbot scams

Chatbot scams are likely possibilities in the near future as language models (LMs) become more integrated into our daily lives, the National Cyber Security Centre has claimed. Due to previous security concerns regarding data protection, LMs have already been banned by many companies and educational institutions, doing little to impact popularity, as they have undeniably useful qualities.

Growing numbers of companies are using them to deal with user queries, and people are becoming more accustomed to entrusting chatbots with personal information.

So How Would Chatbot Scams Work?

Language models work by using data models designed to provide the most suitable response to a user-generated prompt. So, a chatbot scam could be carried out through ‘prompt injection’ attacks, meaning scammers can input commands to make the program behave unintendedly.

Basically, a user manipulates the model by inputting a prompt the LM is not familiar with, then the user is provided surprising and perhaps revealing results.

This is a so-called ‘prompt injection’. It can also occur by the LM analyzing data with corrupt or untrusted information, leading to potentially harmful and leading results – experts claim that bad actors can even jailbreak a chatbot in this fashion.

This is scary and can lead to problems in many scenarios. For example, chatbots can summarize conversations and amend them to a hyperlink. With a carefully worded prompt injection, scammers could retrieve that data – worrying as Discord, Skype, Slack, and Telegram use bots to retrieve hyperlinks and user info. This could even result in password information retrieval.

Should we be concerned?

If you are entrusting chatbots with personal information, then potentially, yes. Blindly giving these programs sensitive data, and engaging with the responses, could lead to disaster. Bing Chat was undone by an ‘ignore previous instructions’ request, revealing rather compromising and embarrassing data:

The National Cyber Security Centre reassured by claiming these problems can be mitigated through strengthening the inherent weaknesses of language models. However, the situation has been likened to the late 90s when the growth of the internet vastly outstripped the capabilities of cyber security measures.

Ensure Your Employees Think Before They Click

Tech Guard therefore recommends more cautionary and useful advice: Always think twice before giving out personal information, especially to third parties.

This rule is not new, but still applies to this situation, despite the newness and impressiveness of this technology. Language models such as ChatGPT claim to save your data “as long as the account is open,” but why take the risk, when there is no real need to entrust them with sensitive information in the first place?

Prevention, as always, is the best cure.

Whatever specific needs your company has, Tech Guard can make the best plan suitable to grow security awareness among the team. By putting forward training modules or ‘Phish Alert’ buttons, Tech Guard helps foster a work environment where best practice remains a priority.

Contact us today for a free training platform demo and see how we can help minimize the risk of cyber-security errors and mishaps.