Microsoft-owned LinkedIn is facing legal scrutiny after a lawsuit filed in a California federal court alleged that the platform disclosed private messages of its Premium customers to third parties for training generative AI models. The lawsuit claims LinkedIn breached its privacy commitments by quietly updating its policies to enable data usage for AI training without clear user consent.
Specifically, the plaintiffs argue that the platform’s changes to its privacy settings and policy language last year allowed data from private InMail messages to be shared with AI partners.
This lawsuit, brought on behalf of millions of LinkedIn Premium users, raises significant concerns about privacy and data protection. It alleges that LinkedIn violated the federal Stored Communications Act and California’s unfair competition law, and it seeks damages and legal accountability.
The case comes amid growing scrutiny of tech companies’ use of personal data in training AI technologies.
Why It Matters: As AI technologies advance, the handling of sensitive personal data has become a critical concern. This lawsuit highlights the tension between innovation and user privacy, raising questions about transparency, consent, and compliance with privacy laws. For LinkedIn, which positions itself as a trusted professional network, the allegations risk damaging its reputation and user trust, particularly among paying subscribers.
- Policy Changes Without Adequate Notice: LinkedIn allegedly introduced a privacy setting in August 2024 allowing users to opt out of data sharing for AI training. However, a September update to its policy language suggested that data had already been shared, leaving users unable to reverse that prior use.
- Focus on Premium Subscribers: The lawsuit specifically targets Premium subscribers, who pay for enhanced services and enter into a LinkedIn Subscription Agreement (LSA). The LSA purportedly guaranteed higher privacy standards, which plaintiffs argue were violated.
- InMail Messages at the Center: Plaintiffs allege that private InMail messages, often containing sensitive employment and business information, were included in the data used to train AI models. However, evidence supporting this claim remains unclear, with the lawsuit focusing on LinkedIn’s lack of transparency.
- Geographical Exemptions Raise Questions: LinkedIn’s privacy policy exempts users in Canada, the EU, and China from these data-sharing practices, reflecting stricter privacy regulations in those jurisdictions. The disparity underscores how exposed U.S. users are by comparison.
- LinkedIn’s Denial of Allegations: LinkedIn has categorically denied the claims, labeling them as baseless. The company has not provided a detailed public response clarifying whether private messages were used in AI training, leaving the issue unresolved.