7 Essential Cyber Security Tips When Using ChatGPT and AI Tools

AI tools are nothing new, but the ChatGPT craze has definitely boosted them back into the headlines - with the masses rushing to learn "How to use ChatGPT to [fill in the blank]." From writing poems to saving time at work, have we stopped to consider how to train employees on the smart use of these tools while protecting company data?

It's important that your security awareness training covers current trends to help your employees stay aware of new potential threats. Cybercriminals are constantly on the lookout for popular and trending topics to take advantage of, as is the case with the latest AI tools in the news.

Wizer Training covers these trending security awareness topics and more

Work use cases go well beyond rephrasing a block of text. As SC Magazine reported, one AI researcher discovered he could get ChatGPT to act as a Linux emulator that could play games and even run programs, and smaller startups running on a shoestring budget have turned to it for writing code and other tasks.

With employees inputting sensitive data - whether personally identifiable information or proprietary code - it's important to be sure those turning to these tools to boost their efficiency and workflow understand when and how to use them safely.

Read on for a quick guide to help start the conversation with your employees on how to securely use AI tools like ChatGPT and others. Be sure to download the cyber security training tips for employees as a PDF, too!

1 - Beware of Fake AI Apps and Browser Extensions

Any trendy topic opens an opportunity for cybercriminals to creatively take advantage of the unsuspecting, so be sure your security awareness training covers lookalike apps and extensions. Just because an app or browser extension touts a well-known name does not mean it is an official app from that company. Lookalike apps and extensions give a false sense of security, and the over-eager user who does not take time to verify an app's authenticity opens up their device (and the company) to malware and other threats.

2 - Never Enter Sensitive Info or PII While Using AI Tools

AI tools may use submitted information to train the underlying language model, so employees should be careful to remove any sensitive or personal information before submitting text to the program.
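
As a concrete starting point, a lightweight redaction pass before anything is pasted into a chat window can catch the obvious offenders. The sketch below is a minimal, hypothetical Python example; the patterns and placeholder labels are our own and are nowhere near exhaustive, so treat it as a conversation starter rather than a substitute for a real data loss prevention tool.

```python
import re

# Hypothetical helper: scrub common PII patterns before text goes into an AI tool.
# The patterns below are illustrative only and far from exhaustive.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b(?:\+?1[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scrub_pii(text: str) -> str:
    """Replace recognizable PII with labeled placeholders before submission."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

if __name__ == "__main__":
    draft = "Follow up with jane.doe@example.com or call 555-123-4567 about the refund."
    print(scrub_pii(draft))
    # -> Follow up with [EMAIL REDACTED] or call [PHONE REDACTED] about the refund.
```

Names, account numbers, and free-form details still need a human eye before anything is submitted - which is where the next tip comes in.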

 


Train your employees to identify and avoid phishing attacks with Wizer Boost.


3 - Remove Mentions of Company Name, People, or Customers BEFORE Inserting into AI Tools

Also, as with any online account, there is always a risk that the company hosting your account gets breached. Your chat history should be clean of any customer or company data, so never submit company names, employee names, or customer details to AI tools in the first place.
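
One way to operationalize this is a simple denylist check that flags a prompt while it still mentions the company, known customers, or named colleagues. Everything below is hypothetical - the terms, the `safe_to_submit` helper, and the idea of maintaining the list centrally are illustrative assumptions, not a prescribed tool.

```python
# Hypothetical denylist: your company name, product code names, customers, colleagues.
# In practice this list would be maintained centrally and kept up to date.
DENYLIST = {"Acme Corp", "Project Falcon", "Globex Industries", "Jane Doe"}

def safe_to_submit(prompt: str) -> tuple[bool, list[str]]:
    """Return (ok, hits): ok is False if any denylisted term appears in the prompt."""
    hits = [term for term in DENYLIST if term.lower() in prompt.lower()]
    return (not hits, hits)

if __name__ == "__main__":
    draft = "Rewrite this apology email to Globex Industries about yesterday's outage."
    ok, hits = safe_to_submit(draft)
    if not ok:
        print(f"Generalize or remove these terms before submitting: {hits}")
```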

4 - Verify With External Sources BEFORE Using AI-Generated Results

AI tools are pretty smart - and ChatGPT is definitely impressive in its ability to mimic human writing and produce solid results. However, as with any information these days, it's critical to verify any results, especially stated stats and facts, before using them to represent your company.

5 - Be Aware of Potential Biases in AI Results When Representing the Company

While ChatGPT and other AI tools are based on enormous amounts of data, data can be misleading and involve bias - intentional or otherwise - from the creators of that data. As stated above, it's important to take a moment and consider if the results returned by an AI tool could potentially be affected by a bias of some sort.

6 - For Developers: THOROUGHLY Review Any AI-Generated Code BEFORE Using It

ChatGPT has already proven it's not fully reliable when writing code, as evidenced by Stack Overflow's temporary ban on ChatGPT-generated answers after the code was often found to be too buggy. When developers use AI tools to assist in their workflow, any code an AI tool produces needs to be carefully scrutinized before it is implemented.
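
To make the review step concrete, here is a generic illustration (not an actual ChatGPT output) of the kind of flaw a reviewer should be hunting for: SQL built by string formatting, which AI assistants will cheerfully produce, next to the parameterized version a review should insist on. The table and column names are invented for the example.

```python
import sqlite3

# Illustrative only: the kind of bug a careful code review should catch.
# Building SQL with string formatting is a classic injection flaw (OWASP Top 10: Injection).
def find_user_unsafe(conn: sqlite3.Connection, username: str):
    query = f"SELECT id, email FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchall()  # input like "' OR '1'='1" dumps every row

# Reviewed version: a parameterized query, so user input is never treated as SQL.
def find_user_safe(conn: sqlite3.Connection, username: str):
    return conn.execute(
        "SELECT id, email FROM users WHERE username = ?", (username,)
    ).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER, username TEXT, email TEXT)")
    conn.execute("INSERT INTO users VALUES (1, 'alice', 'alice@example.com')")
    print(find_user_unsafe(conn, "' OR '1'='1"))  # [(1, 'alice@example.com')]
    print(find_user_safe(conn, "' OR '1'='1"))    # []
```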

Following the OWASP Top 10 should remain part of any process for ensuring secure code, whether or not it was written with the help of an AI tool. For quick training on the basics of the OWASP Top 10, check out the Wizer series for developers.

7 - Treat AI Tools Like A Knowledgeable But Overconfident Friend

You know the one - the friend who confidently states opinions on everything from moon landings to the best way to cook a burger. True, they may be knowledgeable, but no one knows everything about, well, everything, so a little caution goes a long way before taking anyone's word unverified. The same goes for nifty AI tools - just don't assume they're right 100% of the time.

 

Download these 7 Essential Cyber Security Tips For ChatGPT and Other AI Tools
