In today’s digital world, trust often comes with a blue checkmark. LinkedIn’s identity verification feature promises exactly that: a quick way to prove you are real in a sea of fake recruiters, AI-generated profiles, and bots. Three minutes, a passport scan, and a selfie later, you get that dopamine hit of legitimacy. But the reality behind the badge is far more complex, and for African users navigating privacy and data governance, it is worth examining.
Not LinkedIn, But a Third Party
When you hit “verify” on LinkedIn, your passport and selfie aren’t going to LinkedIn itself. Instead, you are redirected to a San Francisco-based company called Persona Identities, Inc., which handles the verification on LinkedIn’s behalf. Persona is effectively invisible: you interact with it during the process, but it is not a household name.
Persona collects far more than you might expect for a simple badge:
- Full name, passport scans (both sides), selfie
- Facial geometry extracted from images (biometric data)
- NFC chip data from your passport
- National ID number, birthdate, nationality, sex
- Email, phone, postal address, device info, geolocation
- Behavioral metrics like hesitation and copy-paste detection
They also cross-reference your information with their “global network of trusted third-party sources,” including government databases, utility records, mobile providers, and credit agencies. In short, they run a background check, not for employment, but for your LinkedIn badge.
Your Face Becomes AI Training Data
Persona’s privacy policy confirms a surprising fact: your passport images and selfies may be used to train its AI models. Under the guise of “legitimate interests,” Persona improves its verification service by feeding your data into machine learning systems. For African users, or anyone outside the U.S., this raises critical questions. Your government-issued ID and biometric data can be processed in ways you did not explicitly consent to.
A Chain of 17 Subprocessors
Once Persona has your data, it does not stay with Persona. Your biometric and identity data touches 17 different companies, 16 of which are in the United States. Among them are AI companies such as Anthropic, OpenAI, and Groqcloud, cloud providers like AWS and Google Cloud, and database and analytics firms like MongoDB, Snowflake, and Tableau.
Even if some data is stored in Europe, the legal reality is that U.S. law still applies. The CLOUD Act allows U.S. authorities to request data from any U.S.-based company, even if servers are abroad. For European or African users, this means your biometric data could be accessed without your knowledge.
The Biometric Risk
Facial geometry, the numerical mapping of your features, is permanent. Unlike passwords, it cannot be changed if compromised. While Persona states they destroy biometric data after verification or within six months, there is a legal loophole. If required by U.S. law, they may retain it indefinitely.
Liability is Minimal
Persona’s terms cap liability for any data breach at US$50, and all disputes must go through arbitration in the U.S., even for European or African users. If your identity, privacy, or biometrics are compromised, you have very little recourse.
Lessons
For professionals navigating digital platforms, this story offers critical insights:
- Read the fine print: Verification processes may involve more data sharing than you expect.
- Understand third-party risks: Your data may leave your region, even if servers are nominally local.
- Question “legitimate interests”: Companies may use your data to train AI without your explicit consent.
- Consider the trade-off: A cosmetic badge may not be worth permanent biometric exposure.
Before clicking “verify,” know what you are trading. That blue checkmark may signal legitimacy, but the price is your identity, your biometric data, and your trust.

