As it continues to add generative AI elements for everything from post creation to job applications, LinkedIn has now updated both its User Agreement and Privacy Policy to reflect how it uses your data to power its AI models.
Spoiler alert: LinkedIn’s using everything that you post publicly in the app to feed its AI tools.
As explained by LinkedIn:
“In our Privacy Policy, we have added language to clarify how we use the information you share with us to develop the products and services of LinkedIn and its affiliates, including by training AI models used for content generation (“generative AI”) and through security and safety measures.”
The specific section of the policy now reads:
“We may use your personal data to improve, develop, and provide products and Services, develop and train artificial intelligence (AI) models, develop, provide, and personalize our Services, and gain insights with the help of AI, automated systems, and inferences, so that our Services can be more relevant and useful to you and others.”
In its User Agreement, meanwhile, LinkedIn explains that by using the app, you’re effectively agreeing to the terms of its Privacy Policy, which includes this and other usage clauses.
Worthy of note here is that LinkedIn doesn’t specifically exclude DMs from this agreement. So, theoretically, LinkedIn could also use the information that you share in your messages for AI training and ad targeting, which could be a concern for some users. Meta, by comparison, has specifically and repeatedly noted that it does not use people’s private messages to train its AI models, nor does it use information from the accounts of people under the age of 18.
LinkedIn has offered no such assurance in its legal documentation, which, again, is worth noting.
LinkedIn has also added an AI training opt-out, so you can switch this off entirely if you don’t want LinkedIn harvesting your info.
But as with virtually every other privacy setting, most people won’t switch it off, which means that LinkedIn will effectively default the majority of its users into this new agreement, except those in regions where AI training permissions are still being debated.
That includes the EU, where data from European LinkedIn members is excluded from AI training entirely at this stage, and Switzerland, which is examining the parameters of such agreements.
As noted, Meta is also clarifying its regional approach to AI training permissions, and was recently granted approval to use UK user data for this purpose, while X has also recently added an AI training opt-out to meet regional requirements.
But essentially, if you haven’t explicitly told a social platform that you don’t want your personal info used for AI training, it probably is being used, meaning that your updates are likely being fed into a large language model somewhere.
Is that a big problem?
Well, probably not, as the information is aggregated and filtered to within an inch of its life, and is likely unrecognizable in any form. Still, sharing personal info with LLMs could lead to problematic generation in these apps, which, depending on what you’re sharing online, could be a concern.
And either way, users should have a choice, which LinkedIn has now added, and other platforms are incorporating, even if it comes after the fact, as most have already used your historic info without any specific approval.
And that’s probably the bigger consideration. You might want to opt out now, but most of us have been on social media for over a decade, and all of that info has likely already been ingested into the various AI models.
So does it really matter if you opt out now?
It all comes down to your own views on the process, and what you’re sharing, but more apps are now adding options to switch off data sharing if you choose.
Which is a good thing, though, again, it may be a little too late in the broader context.