LinkedIn is using your data to train its generative AI. Opt out now!

By default your data is used to train LinkedIn’s artificial intelligence. And no, you didn’t opt in! Read here how you can stop this.

LinkedIn uses user data to train AI by default

Like most LinkedIn users, you may not be aware that the job-hunting site automatically opted you in to having your data used to train its AI models. Follow these steps to stop LinkedIn from scraping your data, whether on your iPhone, Android, or in a web browser.


LinkedIn is the latest big tech company to quietly start exploiting its user data to train AI models. Rather than giving users the chance and time to decide whether to explicitly opt in to this data processing, LinkedIn decided to opt everyone in by default. This raises concerns for users’ data privacy.

6 steps to stop LinkedIn from using your data to train AI

  1. Log in to your LinkedIn account
  2. Click on your profile picture
  3. Click to open settings
  4. Select Data and Privacy
  5. Click Data for generative AI improvement
  6. Next to Use my data for training content creation AI models, click to turn the toggle off

Quick link to opt-out: Click this link to stop LinkedIn using future data for AI.

LinkedIn is using your data to train AI by default

LinkedIn user data is now used to train its generative AI models. Users around the world, excluding those in the European Economic Area (which includes the EU), Switzerland, and the United Kingdom, must manually go to their account settings and turn off the option Data for generative AI improvement.

Once you have opted out, your data won’t be used for AI purposes in the future, but anything that LinkedIn has already scraped from you cannot be removed.

This opt-out toggle is not a new setting, but it comes as a shock to LinkedIn users who were not aware that their data is being used for AI training, or that LinkedIn chose to opt them in without informing them of this practice.

404 Media reported on September 18, 2024 that LinkedIn didn’t even update its privacy policy before it started scraping user data. When 404 Media reached out to LinkedIn about this, the company said it would update its terms of service “shortly”. This response is questionable at best. On the same day, LinkedIn quietly released a blog post written by its general counsel Blake Lawit announcing changes to its privacy policy and terms of service, which will take effect on November 20, 2024.

The changes to LinkedIn’s User Agreement and Privacy Policy will explain how the company uses user data for AI. Retroactively changing policies after being caught gathering customer data for previously undisclosed purposes must not become a common business practice.

One would expect the Microsoft-owned platform to announce the change to its privacy policy before scraping user data, but in this case it did the opposite. This is concerning, as it is not the first time Microsoft has found itself in a privacy debacle.

A closer look at LinkedIn’s AI FAQs

In the LinkedIn and generative AI FAQ, LinkedIn explains some details of how user data is used for generative AI purposes.

“As with most features on LinkedIn, when you engage with our platform we collect and use (or process) data about your use of the platform, including personal data. This could include your use of the generative AI (AI models used to create content) or other AI features, your posts and articles, how frequently you use LinkedIn, your language preference, and any feedback you may have provided to our teams.”

We know that social media companies play fast and loose when it comes to the internal use of their data, and pretending to be surprised will not fix this problem. We need to have a wider societal discussion which aims to critique these predatory practices.

More worryingly, the FAQ also says that the artificial intelligence models LinkedIn uses to power its generative AI features “may be trained by LinkedIn or another provider.” It goes on to explain that some models are provided by Microsoft’s Azure OpenAI service. While LinkedIn does not specify who “another provider” is, it’s worth remembering that LinkedIn is owned by Microsoft, which has a major investment in OpenAI.

LinkedIn claims that where it trains AI models, it does “seek to minimize personal data in the data sets used to train the models, including by using privacy-enhancing technologies to redact or remove personal data from the training dataset.”

Reading the FAQ further, however, LinkedIn notes that users who provide personal data as an input to any of its AI-powered features might see their “personal data being provided as an output”, which undercuts the claim that it seeks to minimize the use of personal data.

How to opt out of LinkedIn’s use of your data to power its AI projects on all your devices

How to opt out of LinkedIn AI on iPhone

  1. Open the LinkedIn app on your phone and log in to your account
  2. Click on your profile picture
  3. Click to open settings
  4. Select Data and Privacy
  5. Click Data for generative AI improvement
  6. Next to Use my data for training content creation AI models, click to turn the toggle off.

Follow these steps to opt out of LinkedIn using your data to train AI on an iPhone.

How to opt out of LinkedIn AI on Android

  1. Open the LinkedIn app on your Android device and log in to your account
  2. Click on your profile picture
  3. Click to open settings
  4. Select Data and Privacy
  5. Click Data for generative AI improvement
  6. Next to Use my data for training content creation AI models, click to turn the toggle off.

How to opt out of LinkedIn AI in a web browser

  1. Log in to your LinkedIn account from the web browser of your choice
  2. Click on your profile picture
  3. Click to open settings
  4. Select Data and Privacy
  5. Click Data for generative AI improvement
  6. Next to Use my data for training content creation AI models, click to turn the toggle off.

Follow these steps to opt out of LinkedIn using your data to train AI from a web browser.
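If you want to double-check the result of these steps, the following is a minimal sketch that drives a regular browser with Playwright and reads the current state of the toggle. It is not an official LinkedIn API: the settings URL, the profile directory, and the role-based selector are assumptions about LinkedIn’s current page and may need adjusting.

```python
# A minimal sketch (pip install playwright && playwright install chromium):
# open LinkedIn's settings page in a real browser and read whether the
# "Data for generative AI improvement" toggle is on or off.
from playwright.sync_api import sync_playwright

# Assumed settings path; verify it against the opt-out link in this article.
SETTINGS_URL = "https://www.linkedin.com/mypreferences/d/settings/data-for-ai-improvement"

with sync_playwright() as p:
    # A persistent profile directory keeps the session: log in by hand on the
    # first run, and later runs reuse the saved cookies (no credentials in code).
    context = p.chromium.launch_persistent_context("./linkedin-profile", headless=False)
    page = context.new_page()
    page.goto(SETTINGS_URL)

    # Assumption: the opt-out control is exposed as a switch with aria-checked.
    toggle = page.get_by_role("switch").first
    state = toggle.get_attribute("aria-checked")
    print("Data for generative AI improvement is", "ON" if state == "true" else "OFF")

    context.close()
```

The sketch only reads the setting. Flipping it programmatically (for example with toggle.click()) is possible, but since LinkedIn changes its interface regularly, doing the one-time opt-out by hand as described above is the more reliable route.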

In addition to opting out of future data being used for AI purposes, you can also fill out this LinkedIn Data Processing Objection Form to object to, or request a restriction on, the processing of your personal data.

Past data is still used for AI

Users in regions where LinkedIn automatically collects their data for AI-related projects can opt out of having their personal data and the content they create on the platform used for this purpose. But as LinkedIn writes in its AI FAQ, opting out only means that “going forward” your data and content will not be used. Any data the tech company has already managed to scoop up unfortunately cannot be retroactively removed.

Data in the EU, UK, and Switzerland is excluded

LinkedIn users in the EU, UK, or Switzerland will be relieved to know that the big tech company is not using data from users in these regions to train its AI models. This is most likely due to these regions’ stricter data privacy rules, such as the EU’s GDPR.

Your data can still be used for other purposes

LinkedIn doesn’t only use your data for AI purposes. Like every other big tech company, the online platform for professionals is free because it monetizes the collection and sale of your data. This is worrying because many users share more personal information on LinkedIn than on other social media platforms.

It’s also important to remember that LinkedIn is owned by Microsoft. This means that the data it collects from you could be shared with and connected to other services owned by the tech giant, such as Skype or Outlook, as well as with third-party partners.

A quick summary of data that LinkedIn collects:

  • Personal details: name, email address and/or phone number
  • Education and work history
  • Skills
  • Data when you visit sites that use LinkedIn plugins
  • Each visit to and use of its services
  • Cookies
  • Messages on the platform
  • Device and location
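If you want to see what this adds up to in practice, you can request a copy of your data from the same Data and Privacy section of the settings (look for the option to get a copy of your data). The export typically arrives as a ZIP archive; the sketch below simply lists what it contains. The file name is hypothetical and the contents vary by account.

```python
# A minimal sketch for taking stock of a LinkedIn data export.
import zipfile
from collections import Counter
from pathlib import Path

# Hypothetical file name: use the path of the archive you actually downloaded.
EXPORT_PATH = Path("LinkedInDataExport.zip")

with zipfile.ZipFile(EXPORT_PATH) as archive:
    names = archive.namelist()
    print(f"{len(names)} files in the export:")
    for name in sorted(names):
        print(" -", name)

    # Rough overview by file type, e.g. how many CSV files the export holds.
    by_suffix = Counter(Path(name).suffix.lower() or "(no extension)" for name in names)
    for suffix, count in by_suffix.most_common():
        print(f"{suffix}: {count} file(s)")
```

Skimming the file names alone usually makes the list above feel a lot more concrete than any privacy policy does.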

In general, when it comes to using any Microsoft-owned service, it’s always worth being cautious about what personal information you share online, as this data is shared across many different platforms.

With that being said, another important reason to always be cautious about what you share online is that big tech companies, as we see in the above case with LinkedIn, regularly change their privacy policies and terms of use. Often these changes do not benefit the user and happen without the user even knowing.

Screenshot of LinkedIn’s privacy policy. LinkedIn’s privacy policy says it will always notify the user when it makes any changes, but in the recent case of LinkedIn AI, it seems a notification is not enough for users to really know how their data is handled.

As LinkedIn says in its privacy policy, the policy is dynamic and always changing. If LinkedIn changes what personal data it collects, or how it collects and shares your data, it will notify the user. What counts as a notification sufficient to inform users, however, is questionable.

Beyond the sketchy privacy practices of tech companies, which can seemingly change shape at will, it is also important to minimize your digital footprint and avoid oversharing as a way of preemptively mitigating the possible damage done in the event of a major data breach. Data that you are not feeding into these cloud servers (and a cloud server is just someone else’s computer) cannot be compromised should the service experience a security incident.

Privacy vs AI

LinkedIn is the latest big tech company to jump on the AI bandwagon by using its users’ data. But this is nothing new, and it will continue to happen far more often than we’d like.

For example, earlier this year Meta attempted to use EU users’ Facebook and Instagram data to train its generative AI. Luckily, this was met with vocal backlash and put on hold. Unfortunately for Facebook users in the US, where there is no data privacy law like the EU’s GDPR, Meta has much freer rein over user data and can use it how it wants. For Facebook users in the US, their data is being used for AI training and they have no way to opt out. This not only raises massive data privacy concerns but also highlights users’ lack of choice and freedom when it comes to their data and how it is used.

Meta is not the only tech behemoth to mistreat user data; Google does too. A Gmail user recently shared a conversation they had with Google’s Gemini AI chatbot. What was shocking about the conversation was that the data the chatbot used came from the Gmail user’s emails, even though the user had never given access or opted in to allow the large language model into their mailbox. What we learn from these two examples is that when it comes to big tech, profit and technical advancement, like this rush to board the AI train (often without your consent), take precedence over operating ethically.

Final thoughts

LinkedIn’s recent changes to how it handles user information may come as a surprise to some people using the platform, but sadly this is nothing out of the ordinary when it comes to how tech companies handle data. This is yet another tech company joining the AI trend and using user information to train its models.

If you visit Tuta’s blog regularly, you will know that big tech companies operate on a business model of profiting off your personal information. Their careless approach to privacy follows a tried-and-true method: collect as much data as possible and sell it to advertisers, who in turn target you with ads. Rinse, wash, repeat. A perfect example is how Meta and Google can offer their extensive products for free. But now, with the race for every company to utilize AI, your data is no longer used just to profit from advertising.

This craze to develop AI using your data is a trend that is not going anywhere. What we can learn is that popular online companies, like Meta, Google, and Microsoft to name a few, will always put profits first and operating ethically second.

With that being said, if you choose to keep using these platforms, it’s advisable to stay aware and up to date on any changes made to their terms of use and privacy policies. This is a hassle for those of us who don’t speak legalese, but it is necessary, especially if you are concerned about your data and privacy.

And if you truly are worried about your privacy, we’d recommend ditching services like Gmail or Facebook for good.

With growing concern over how companies deal with data and user privacy, there has been an increase in the development of amazing privacy-focused alternatives. Some people are going a step further by choosing to de-Google and stop using big tech services entirely.

Luckily, in 2024 this is possible! For example, you can replace Gmail with Tuta Mail, or replace Meta’s WhatsApp with a better messenger like Signal.

Choose Tuta for no AI and no ads

Want to make sure that your emails are NOT used to train AI? Just switch to Tuta – as an end-to-end encrypted email provider, we will never join the AI train!

Tuta is a privacy-focused email and calendar service based in Germany. Operating under strict data privacy laws, Tuta is open source, GDPR-compliant, and fully end-to-end encrypted.

Unlike popular email providers, Tuta encrypts as much user data as possible, now with the world’s first post-quantum encryption protocol! With no links to Google, Tuta has no ads, does not track its users, and is designed with true privacy in mind.

Take back your privacy, and ensure your emails are not used for AI training. Get free end-to-end encrypted email with Tuta Mail!

Illustration of a phone with Tuta logo on its screen, next to the phone is an enlarged shield with a check mark in it symbolizing the high level of security due to Tuta's encryption.