ChatGPT exploded onto the scene in late 2022, taking the entire internet by storm. ChatGPT, short for Chat Generative Pre-trained Transformer, is the most technically advanced AI chatbot the world has ever seen. Its ability to take questions in natural language across a vast array of topics and respond with human-like answers captured the imagination of millions worldwide who started using this new tool. Every industry, from writing to content creation to big tech, is looking at this new tool and how it can be used. ChatGPT was responsible for bringing AI into the mainstream and making it a topic of daily conversation.
At the same time, however, the rush to adopt this revolutionary technology is introducing privacy risks. The very nature of AI requires data to learn and refine itself. This article examines whether ChatGPT poses a privacy risk to corporations and users.
ChatGPT and the data problem
ChatGPT is more advanced than other AI chatbots due to the massive datasets used to train it, estimated at over 300 billion words. Its model was trained on internet articles, social media platforms, blog posts, etc., raising questions about what consent was given and what misinformation also found its way into the model. Meanwhile, the General Data Protection Regulation (GDPR) and other privacy laws impose strict requirements on how data may be gathered and collected.
As per OpenAI's own statement:
“A large amount of data on the internet relates to people, so our training information does incidentally include personal information. We don’t actively seek out personal information to train our models.
We use training information only to help our models learn about language and how to understand and respond to it. We do not and will not use any personal information in training information to build profiles about people, to contact them, to advertise to them, to try to sell them anything or to sell the information itself.
Our models may learn from personal information to understand how things like names and addresses fit within language and sentences or to learn about famous people and public figures. This makes our models better at providing relevant responses.”
This raises serious concerns in countries and jurisdictions that impose strict requirements on what data can be collected. Regulators have already taken action: Italy temporarily banned the use of ChatGPT, and the European Data Protection Board, the privacy body that enforces the GDPR, has set up a task force to examine potential privacy guardrails for the chatbot. We can expect further regulations to follow as the rapid adoption of AI models grows.
The chat problem
ChatGPT and other AI models utilize chat conversations to further train and refine their responses over time, effectively "learning" from the information they receive. This raises the risk of users accidentally submitting sensitive information to the tool, which could then resurface in conversations with other users, leading to a privacy nightmare.
Thankfully, OpenAI has recognized this risk and allows users to turn off chat history, giving them control over which chats are used to train its model. With chat history turned off, new conversations are retained for only 30 days before being permanently deleted, and are not used for training.
Privacy tips when using ChatGPT
As privacy professionals navigate this new reality, it is essential to educate users on the practices to follow when sharing information with ChatGPT.
Some good practices are:
- Control what you share with ChatGPT: Users need to understand that any information shared with ChatGPT could end up on the internet and be shared with others. Only share information that is not sensitive.
- Be aware of OpenAI’s privacy policies and how they use information that is shared with them.
- Do not use official or personal emails to sign up for ChatGPT. Use secure email services that are not linked to your other accounts.
- Create a policy around this tool that details the Dos and Don’ts of what can be shared with it to hold users accountable for their actions.
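To make the first tip concrete, organizations can put a simple pre-submission filter in front of the chatbot that catches obvious personal data before a prompt ever leaves the user's machine. The sketch below is purely illustrative: the `redact` helper and its regex patterns are assumptions for this example, not part of any official OpenAI tooling, and a few regexes are nowhere near a complete PII-detection solution.

```python
import re

# Illustrative patterns for obvious PII: emails, phone numbers, and
# credit-card-like digit runs. Real PII detection needs far more than this.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(prompt: str) -> str:
    """Replace matches of each PII pattern with a [LABEL] placeholder."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt

print(redact("Contact me at jane.doe@example.com or +1 555-123-4567."))
# → Contact me at [EMAIL] or [PHONE].
```

A filter like this could be wired into an internal chat proxy so that the Dos and Don'ts in the policy above are enforced automatically rather than relying on each user's judgment.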
ChatGPT is here to stay, and its adoption across industries will only increase over time. Companies and governments need to recognize this new reality and create rules and regulations that balance the need for privacy with productivity. Excessive rules and regulations will only stifle innovation and progress. ChatGPT is a potent tool that can be used for various purposes. However, users must educate themselves on the types of privacy risks present to get the best of both worlds going forward.
What is ChatGPT?
ChatGPT is an advanced chatbot developed by OpenAI based on the GPT-4 architecture. It’s designed to generate human-like responses, making it useful for various applications, such as content creation, customer service, and more.
How does ChatGPT gather data?
ChatGPT was trained on over 300 billion words from the internet, including articles, blog posts, social media sites, and books. Additionally, it collects data directly from users, such as account information, communication with OpenAI, and technical information about devices and browsing activity.
Is ChatGPT a privacy risk?
There are concerns regarding ChatGPT’s data collection practices, particularly regarding the use of user-generated data for training purposes and sharing information with various entities. These concerns have led to regulatory actions, such as Italy’s ban on ChatGPT in March 2023.
How can I protect my privacy while using ChatGPT?
To protect your privacy when using ChatGPT, consider registering with a private email, limiting the personal information you share in conversations, and exercising your right to be forgotten (or deletion) under GDPR or CCPA, if applicable.
Are there regulations in place for ChatGPT and other AI technologies?
Various governments are discussing and exploring regulatory measures, such as the European Data Protection Board and the National Telecommunications and Information Administration in the United States. The ongoing debate aims to strike a balance between innovation and privacy protection.
Can I delete my conversations with ChatGPT?
OpenAI states in its FAQ section that it “reviews” conversations users have with ChatGPT, which may be used for training purposes. Unfortunately, the prompts you submit cannot be deleted, according to OpenAI.
What types of information does ChatGPT collect from users?
ChatGPT collects Personally Identifiable Information (PII), such as account information, communication information, and social media information. It also gathers Technical Information (TI) about your device, operating system, browser, IP address, location, and browsing behavior.