ChatGPT - Privacy nightmare or helpful tool?

ChatGPT and privacy: What happens to your personal data?

ChatGPT and privacy do not go well together: the problem is that the AI is fuelled by our personal data.

Everyone is talking about what the new AI can do, and lately everyone is starting to look at the ChatGPT privacy implications. The ban on ChatGPT in Italy and the looming privacy issues surrounding the AI chatbot in Germany have sparked a Europe-wide debate between those who are enthusiastic about the tool and those who warn against its development and what ChatGPT means for people's privacy.


ChatGPT has become the fastest growing consumer application ever launched. People are flocking to this tool as they believe it will help them with their everyday life - or even with work.

Within months of its release, hundreds of millions of users had signed up - and almost none of them have read the ChatGPT privacy policy.

But is it worth signing away your right to privacy to use the AI?

Students use ChatGPT to let the AI write essays for them, and some news outlets like BuzzFeed announced they would use ChatGPT to write their articles - which made their shares soar.

Google did not take long to announce its own AI tool, Bard (now Gemini), and China's Baidu launched its AI 'Ernie Bot' in March.

Big Tech is so excited about the new AI technology that it wants us to overlook the risk: the technology is fuelled by our personal data.

How does ChatGPT get hold of your personal data?

The data ChatGPT uses to train its AI system is scraped from the internet and uploaded in bulk. Your personal data - whether you commented on Reddit or Hacker News or whether you wrote a blog post yourself - can be used by AI companies like OpenAI to train their chatbots.
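To make the scraping step concrete, here is a minimal sketch - our illustration, not OpenAI's actual pipeline - of the basic operation behind bulk web scraping: extracting the visible text from an HTML page so it can be added to a training corpus. The page content is a made-up example.

```python
from html.parser import HTMLParser

# Minimal sketch of a text scraper: collect visible text from an HTML page,
# skipping <script> and <style> blocks. This is the basic building block of
# bulk scraping for AI training data (illustrative only).
class TextScraper(HTMLParser):
    def __init__(self):
        super().__init__()
        self.chunks = []
        self._skip = False

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip = True

    def handle_endtag(self, tag):
        if tag in ("script", "style"):
            self._skip = False

    def handle_data(self, data):
        if not self._skip and data.strip():
            self.chunks.append(data.strip())

# A public blog comment like this one can end up in a training corpus -
# personal details get scraped along with everything else.
page = "<html><body><p>My address is 12 Example Street.</p></body></html>"
scraper = TextScraper()
scraper.feed(page)
print(scraper.chunks)
```

The point of the sketch: nothing in this step distinguishes personal information from any other text - whatever is publicly reachable gets collected.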

However, if you use ChatGPT yourself, the system can log even more data about you. It can use the sentences and questions you enter to keep training its system. But on top of that, ChatGPT keeps track of the following:

  • Log data: OpenAI logs your IP address, the browser you use as well as date and time of your usage. Of course, ChatGPT also collects and stores your chats.

  • Usage data: Usage data also includes the collection of your location, your time zone, your software version, your type of device and operating system, and more.

  • Cookies: Like all Big Tech companies, OpenAI uses cookies for analytics and to store data on your browsing activity. Cookies let OpenAI track you, even when you browse the web.

  • User content: Any data you upload or enter into ChatGPT - literally every word you type - is stored by OpenAI.

  • Account information: The information you enter when you register a ChatGPT account, like your name, contact information, and payment information, gets stored.

All in all, one can say that ChatGPT is not meant for privacy. AI is still a rather new technology. Since its take-off, the debate has started on whether we should regulate the use of our data by artificial intelligence companies - and how. These are still early debates, but it is expected that AI will get heavily regulated in the future.

ChatGPT privacy debate in Europe

Europe, known for its strict privacy protection law, the GDPR, is the first to tackle the privacy issues that come along with AI. Italy became the first country in the European Union to impose a temporary ban, citing a breach of the EU’s General Data Protection Regulation (GDPR). But the chatbot could return to Italy by the end of April: “On Wednesday the Italian data protection authority The Garante gave OpenAI — the U.S. company behind ChatGPT — until April 30 to comply with specific privacy requirements, paving the way for lifting the chatbot’s temporary ban in the country,” writes Politico.

Italy’s data protection authority wants to lift the block on OpenAI’s ChatGPT under certain conditions: “We are ready to allow a return of ChatGPT on April 30, if OpenAI is ready to take adequate measures,” authority head Pasquale Stanzione said in an interview with the Corriere della Sera newspaper.

According to Stanzione’s assessment, the Microsoft-backed company is willing to implement the data protection authority’s requirements.

Artificial intelligence has made waves in politics and business since the software’s release. The program can, for example, answer queries or generate complete texts based on a few keywords. Its basis is large volumes of pre-fed text scraped from the internet.

However, concerns about incorrect answers and data protection keep growing as more people use the generative AI. According to the Italian authorities, OpenAI allegedly processed personal data without informing users and individuals. In addition, there was no legal basis for the extensive data collection used to train its AI models. Because of these suspected violations, Italy’s authorities initiated investigations against OpenAI at the end of March. The AI software has since been blocked for Italian users.

Trained with your data

OpenAI, the company behind ChatGPT, trained the AI with billions of words scraped from the internet - with books, articles, papers, essays. All kinds of information available on the internet was fed to the AI, without consent from the authors.

If you have ever posted anything online - whether a blog post or just a comment under an article - it is very likely that ChatGPT is using this information for its training.

Publishers go empty-handed

The compensation of publishers of online content is also problematic. Blog articles or books can be quoted via ChatGPT, but the actual authors do not receive any compensation. Depending on the popularity of the text, the industry is likely to miss out on huge revenues - while OpenAI will probably profit immensely.

What is wrong with AI data?

There are several reasons why using information from the internet for training an AI that then produces new texts is problematic.

  1. The data might be wrong: Not all data published on the internet is correct. In fact, the internet is full of propaganda, lobbying and opinion pieces.

  2. The data might be taken from copyrighted sources, which could lead to legal conflicts.

  3. The data might be private: A lot of data on the internet contains personal information about people, sometimes even their addresses or other private information that might reveal their identity. This information was scraped without the consent of the people affected and is thus a clear violation of their privacy.

In addition, people have no way to find out whether ChatGPT uses their information, nor to have it removed from the database. Both are legal requirements under the EU’s General Data Protection Regulation (GDPR), so it is questionable whether ChatGPT complies with this EU privacy regulation. The ‘right to be forgotten’, guaranteed by the GDPR, is particularly important when the information used by ChatGPT is wrong - which, unfortunately, is quite often the case.

Outlook in the EU

At this point, it can’t be said whether and to what extent ChatGPT - and other AI tools like Bard or Ernie Bot - will be regulated in the EU. What can be said for sure is this: politicians, and particularly data protection watchdogs in Europe and the USA, are looking closely at AI tools, their potential, but also their potential harm.

“We are still years away from the AI law being applicable, and it is crucial that consumers are protected in the meantime,” Frederico Oliveira da Silva, senior legal officer at the European Consumers’ Organisation (BEUC), told EURACTIV.

According to Oliveira, the EU and national authorities should immediately launch an investigation into the risks of ChatGPT and similar large-scale language tools, and “stronger public scrutiny” is needed.

After Italy blocked access to the AI last month, the Spanish Data Protection Authority (AEPD) recently submitted a petition to the European Data Protection Board (EDPB), pointing out the need to assess the privacy implications of ChatGPT.

In France, MEP Eric Bothorel recently filed a complaint with the French data protection authority CNIL, which had already begun investigating several complaints. In Germany, the Data Protection Conference is currently assessing whether the chatbot violates the General Data Protection Regulation.

Belgium, Austria, Croatia, Finland and Poland have no plans to ban the software so far.

A regulation of AI tools is looming on the horizon, but in what way exactly is yet to be determined.

ChatGPT privacy policy

Finally, if you still want to use the AI, make sure to first read the ChatGPT privacy policy.

Be aware that OpenAI gathers a lot of information about you. According to the company’s privacy policy, it collects users’ IP address, browser type and settings, and data on users’ interactions with the site – including the type of content users engage with, features they use and actions they take. It even collects data about browsing habits and which websites you visit. What is more, OpenAI states that it might share the collected information with other third parties. Given that Microsoft is heavily invested in OpenAI, one should be aware that this Big Tech company might get full access to all of your data that you share with ChatGPT.

Conclusion

While ChatGPT can revolutionize the way we work, we must consider the costs. OpenAI is a for-profit company, and we should be careful with the data that we share with them.

The privacy risks are alarming while the benefits of ChatGPT - at least to date - are still very limited.

It’s not just that some information ChatGPT reproduces is wrong; the AI also produces texts that sometimes read like boring summaries of hundreds of blog posts on the issue.

If you are not convinced, read this example produced by ChatGPT:


Write an essay on why privacy matters and why we need end-to-end encryption in our digital age.

We’re sure that you could write such an essay yourself, so now you can check who performs better: you or the AI.

Here is the essay on privacy by ChatGPT:

Privacy Matters: The Importance of End-to-End Encryption in the Digital Age

The digital age has brought with it countless benefits, from instant communication to unparalleled access to information. However, as we increasingly rely on technology to connect, communicate, and conduct business, we are also leaving behind vast amounts of personal information that can be collected, stored, and shared by companies, governments, and other organizations.

This is why privacy matters, as it allows individuals to maintain autonomy and control over their personal information and behavior.

The Importance of Privacy

In today’s world, personal information is valuable, and it can be used for a variety of purposes, such as targeted advertising, political manipulation, and even surveillance. When personal information is collected, stored, and shared without our knowledge or consent, we lose control over our own lives. Our actions, thoughts, and beliefs can be monitored and analyzed by people or organizations who may use that information to manipulate or exploit us. Additionally, the loss of privacy can lead to discrimination, especially for vulnerable groups such as minorities, immigrants, and those with different political views.

End-to-End Encryption

This is where end-to-end encryption comes in, it is a technique that allows for secure communication by encrypting the data on the sender’s device, so that it can only be decrypted on the recipient’s device. This helps to ensure that only the sender and the recipient can read the message, and it can’t be intercepted or accessed by any third party, including the company providing the service.

Protecting our Political and Civil Rights

End-to-end encryption also plays a critical role in protecting our political and civil rights. In a society where people feel free to express their views and opinions without fear of retribution, a robust marketplace of ideas can flourish. Conversely, in a society where privacy is not respected, individuals may feel they are under constant surveillance and may self-censor themselves to avoid retaliation. This can have a chilling effect on freedom of speech and other rights, and it can make it difficult for individuals to fully participate in the democratic process.

Trust and Security in Human Interactions

Privacy is also important for the security and trust in human interactions. When we can trust that our personal information is being kept private, we are more likely to share our thoughts, feelings, and experiences with others. This leads to deeper, more meaningful relationships and a stronger sense of community. On the other hand, if we feel that our personal information is constantly being exposed, we may become more guarded and less likely to form connections with others.

Conclusion

In conclusion, privacy is a fundamental right that allows us to maintain autonomy and control over our personal information and behavior. It is essential for protecting our political and civil rights, promoting trust and security in human interactions, and preserving human dignity. End-to-end encryption is a powerful tool that can help to protect our privacy in the digital age by ensuring that our communications are secure and private.

As we increasingly rely on technology to connect, communicate, and conduct business, it is important that we take steps to protect our privacy, and that includes using services that offer end-to-end encryption.