
HERE'S WHAT INFORMATION YOU SHOULDN'T SHARE WITH ARTIFICIAL INTELLIGENCE

AI chatbots have become extremely popular and useful tools for obtaining information, advice, and assistance on various topics. While they can be very helpful, we must be cautious when sharing our data with them. Why? Because once you hand information over, you no longer control where it is stored, who sees it, or how it is used.


To understand the privacy risks associated with AI chatbots, it's important to know how they work. Chatbots collect and store transcripts of conversations with users, including every question, query, and message you send and every response the chatbot returns. The companies behind these AI assistants analyze and process this conversation data to train and improve their large language models.

Think of a chatbot as a student taking notes during class. It records everything you say, and the AI company then reviews these "notes" to help the chatbot learn, much as a student would review their own notes. This means that your conversation data, which may include personal information, opinions, and sensitive details you disclose, is collected, stored, and studied by AI companies, even if only temporarily.

When you share personal or sensitive information with an AI chatbot, you lose control over where that data goes and how it is used. AI chatbots store data on servers, and servers can be hacked. They hold a wealth of information that cybercriminals can exploit in various ways: they can steal the data and sell it to other criminals, or use it to break into your accounts and gain unauthorized access to your devices.

All data you provide can be exposed, hacked, or abused, leading to identity theft, financial fraud, or the public exposure of intimate information you would prefer to keep to yourself.

Privacy protection means being selective about the details you reveal to AI chatbots. So, what shouldn't you tell a chatbot?

Be extremely cautious with these types of data:

Personally identifiable information: Avoid sharing personal data such as your full name, home address, phone number, date of birth, social security number, or other government identification numbers. Any of this information can be used for identity theft, financial fraud, or other misuse of your personal data.

Usernames and passwords: Never share passwords, PINs, identity verification codes, or other login credentials with AI chatbots. Even hinting at your credentials can help hackers gain access to your accounts.

Financial information: Never share information about bank accounts, credit card numbers, or income details with AI chatbots. You can ask them for general financial advice, general questions about budget planning, or even about tax rules, but keep your sensitive financial details private; exposing them can easily lead to the compromise of your accounts and assets.

Thoughts best kept to yourself: While AI chatbots can serve as empathetic conversational partners, avoid disclosing deeply personal thoughts, experiences, or opinions that you wouldn't feel comfortable sharing publicly. Anything from political or religious views to relationship issues or emotional struggles could be revealed if conversation logs are hacked or mishandled.

Confidential business information: When it comes to trade secrets, insider information, or any kind of confidential workplace data, don't discuss it with public AI chatbots. Avoid using them to summarize meeting notes or automate repetitive work tasks, as you risk inadvertently exposing sensitive data or violating your employer's confidentiality agreements and intellectual property protections.

Recall the case in which Samsung employees used ChatGPT for coding and accidentally uploaded sensitive source code to the generative AI platform. The incident exposed confidential Samsung information and prompted the company to ban the use of AI chatbots at work.

Technology companies such as Apple and Google have gone as far as restricting their employees' use of AI chatbots for work.

Your original creative work: Never share your original ideas with chatbots unless you want to share them with everyone.

Health-related information: Research by the health technology company Tebra found that 1 in 4 Americans are more likely to talk to an AI chatbot than to seek treatment, and that more than 5% of Americans have turned to ChatGPT for a diagnosis and followed its advice. Keeping your health data confidential protects you from privacy breaches and the misuse of sensitive medical information. So never disclose your health status, diagnoses, or details of your treatment and therapy to AI chatbots; instead, talk to qualified healthcare professionals in a safe and private environment.

How to use chatbots safely

1. Be cautious about the information you provide.
2. Read the privacy policy and look for the chatbot's privacy settings.
3. Where available, use the option to disable the use of your data for training language models.

In short, using anonymous or private mode, deleting your conversation history, and adjusting privacy settings are the main ways to limit how much data AI chatbots collect about you. Most major AI providers offer these options.