Staying safe online while using AI tools

Have you ever wondered, ‘Is ChatGPT safe?’ You’re not alone. According to the Real Digital Risk Report 2023, just under 3 in 5 (59%) Australians have heard about, tried or used the latest artificial intelligence (AI) language tools, like ChatGPT. Among those using AI technologies, over 2 in 5 (41%) use these tools regularly, and more than half (55%) trust them for their accuracy.

So, why do Australians turn to an AI system like ChatGPT? Fun and entertainment (38%), work or study tasks (34%) and writing and editing tasks (33%) are the top three reasons prompting AI usage.

If you want to boost your artificial intelligence cyber security and reduce ChatGPT privacy risks, keep reading to discover five practical tips for staying safe online from potential cyber threats.

The most common risks of using AI 

AI has been around for years and powers several features you likely already use daily (such as filtering spam emails out of your inbox or recommending TV shows on streaming services).

However, generative AI (or GenAI for short) is a breakthrough technology that is changing the way we interact with content online. GenAI uses machine learning algorithms that “learn” from existing data, getting smarter and more accurate over time.

ChatGPT and Google’s Gemini are examples of GenAI tools that use artificial intelligence to create new content, such as images, videos and written text, based on text prompts.

But according to the Real Digital Risk Report 2023, 7 in 10 Australians (69%) are concerned about what happens to the information they share with AI language platforms, such as confidential work or personal details.

So, what are the biggest risks and threats presented by AI language models?

  • Reinforcing existing social biases: Unfortunately, unfair treatment and discrimination can occur due to a range of factors, such as a person’s race, age, sex or ability. These societal biases can carry over into the data and models used to train AI tools, which means the outputs generated can amplify existing prejudices, such as discriminating against minority groups or making gendered assumptions.
  • Gaining access to sensitive data: The data fed into AI systems helps these models learn and improve over time. However, many users input sensitive personal or financial information (without realising the consequences), leaving this data vulnerable to being used by big tech platforms without an individual’s informed consent.
  • Potential security vulnerabilities: AI platforms don’t always store personally identifiable information securely, and they often make it difficult for individuals to understand how their personal data is stored and used, leaving it vulnerable in the event of a data breach.
  • Helping scammers create fraudulent material: As cyber threats become increasingly sophisticated, cybercriminals are turning to tools like ChatGPT to produce hyper-realistic emails and messages designed to deceive users and gain access to sensitive information. With AI tools, scammers can replicate a company’s email style in seconds from a simple text prompt, making it harder for individuals to spot the signs of a scam email.

Real Insurance Tip: If you’re ever unsure about the authenticity of a message, avoid clicking any links, never respond to suspicious communications, and report the issue to your relevant service provider or security team ASAP. 

5 tips for staying safe while using AI tools 

GenAI tools, like ChatGPT, come with many potential data security risks, from sharing data with third-party providers to using inputs to train and improve AI models. 

However, there are practical steps you can take to lower your risk, increase your data protection and boost your artificial intelligence cyber security.

1. Avoid sharing sensitive information with AI tools 

While it can be tempting to use ChatGPT to create household budgets, review your business’s financial performance or rewrite your resume, there are key ChatGPT privacy risks to consider.

While OpenAI (the company that operates ChatGPT) de-identifies the information you put into its AI tools (that is, it removes details that could reveal your identity), there’s no guarantee that your information will be kept safe from cybercriminals. AI tools are evolving so rapidly that regulators are struggling to keep up with the latest advancements, making it difficult to effectively enforce privacy protections for users, particularly on a global scale.

To reduce your risk of identity theft and scams, avoid sharing your personal information (such as your full name, address, phone number or banking details) with AI tools. While it might be tempting to ask ChatGPT to rewrite your resume ahead of a job search, it’s important never to share this level of personal detail with AI tools, particularly when there is so much uncertainty about how these platforms store and use data.

Plus, review the privacy policies of AI tools to understand how your data will be handled and what level of control you have over any personal information you share. 
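
If you do need to paste text into an AI tool, one simple safeguard is to strip out obvious personal details first. As a rough illustration only, here’s a minimal Python sketch that redacts email addresses and Australian-style phone numbers before text is shared; the patterns and the redact helper are simplified assumptions for this example, not a complete privacy solution:

```python
import re

# Simplified, illustrative patterns only -- real PII detection is much harder.
PATTERNS = {
    "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "[PHONE]": re.compile(r"(?<!\d)(?:\+61|0)[2-478](?:[ -]?\d){8}\b"),
}

def redact(text: str) -> str:
    """Replace email addresses and phone numbers with placeholder tags."""
    for placeholder, pattern in PATTERNS.items():
        text = pattern.sub(placeholder, text)
    return text

prompt = "Hi, I'm updating my resume. Contact me at jane@example.com or 0412 345 678."
print(redact(prompt))
# -> "Hi, I'm updating my resume. Contact me at [EMAIL] or [PHONE]."
```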

2. Look for the warning signs of a scam 

Cyber-attacks and data breaches are becoming more common and harder to spot. 

However, by learning the red flags of a phishing scam (in which a scammer pretends to be a legitimate company), you can reduce your risk of getting caught out. 

While each scam is unique, be extra careful of messages or emails from unknown numbers or addresses that use an urgent tone, contain misspelled words or random characters, make demands with financial consequences, or promise something that seems too good to be true.
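
To make those red flags more concrete, here’s a toy Python sketch that counts common warning signs in a message. The keyword lists and checks are illustrative assumptions only, not a real spam filter:

```python
# Toy red-flag counter -- illustrative only, not a substitute for a real spam filter.
URGENCY_WORDS = {"urgent", "immediately", "suspended", "final notice", "act now"}
FINANCIAL_WORDS = {"payment", "fine", "overdue", "refund", "penalty"}

def red_flags(message: str) -> list[str]:
    """Return a list of common phishing warning signs found in a message."""
    text = message.lower()
    flags = []
    if any(word in text for word in URGENCY_WORDS):
        flags.append("urgent tone")
    if any(word in text for word in FINANCIAL_WORDS):
        flags.append("financial demand")
    if "http://" in text:  # unencrypted links are a common phishing tell
        flags.append("insecure link")
    return flags

msg = "URGENT: your account is suspended. Pay the overdue fine at http://example.com"
print(red_flags(msg))  # -> ['urgent tone', 'financial demand', 'insecure link']
```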

Keep Reading: Learn how to shop online with confidence while minimising your digital risks

3. Create anonymous accounts on AI platforms 

While you will need to provide at least one form of contact information to sign up for ChatGPT (such as a phone number or email address), leaving optional personal details blank or using placeholder information can help keep your data secure when experimenting with AI tools.

4. Switch off chat saving functions

It’s a smart move to change your data control settings when using ChatGPT to protect any information you share.

By default, your conversations on ChatGPT will be shared with the OpenAI team to improve the AI model for everyone. 

However, you can disable this model training setting and opt in to 'Temporary Chats', which the company deletes within 30 days and won’t use for model training.

5. Keep your passwords safe and secure

GenAI tools are experimental, and data privacy risks exist for users who leverage these platforms. 

Along with adjusting your settings on ChatGPT, it's worth following digital security best practices, particularly around how you create and store passwords. These include:

  • Creating a strong, unique password for each account and never reusing passwords (see the sketch after this list)
  • Changing your passwords regularly (ideally every three months)
  • Using secure password storage systems that require multi-factor authentication (MFA)
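
As a quick illustration of the first point, here’s a minimal Python sketch that uses the standard library’s secrets module (designed for security-sensitive randomness) to generate a strong, random password; the 16-character length and character set are just example choices:

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Generate a random password from letters, digits and punctuation.

    Uses the cryptographically secure `secrets` module rather than `random`.
    """
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

# Generate a unique password per account -- never reuse the same one.
print(generate_password())
```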

Keep Reading: Share these handy safety tips with your family to boost your security online.

Stay scam savvy

It’s important to stay alert to the risks associated with the evolving world of technology, as it plays a key role in almost every facet of your life. And while technology brings many benefits, falling victim to cyber threats can have serious consequences. As part of staying vigilant, read the full Real Digital Risk Report 2023 now to learn what Australians like you are concerned about, and what they are doing to keep themselves and their families safe online.