Anthropic Demands Real-Name ID Verification for Claude Users
Anthropic has introduced a 'Know Your Customer' (KYC) identity verification requirement for some users of its Claude AI platform, asking them to submit government-issued IDs and real-time selfies before accessing certain features. The move marks a significant shift in how AI platforms approach user identity, and it carries real consequences for anyone who values online anonymity or relies on tools like VPNs to access services across borders.
What Anthropic's KYC Policy Actually Requires
The new policy asks affected Claude users to provide two things: a government-issued photo ID and a live selfie captured in real time. This type of verification is already common in financial services and some age-restricted platforms, but it is relatively new territory for AI chatbot services.
The requirement applies to users trying to access specific features on the platform. Anthropic has not publicly detailed exactly which features trigger the verification step, but the pattern is consistent with how other platforms have gradually expanded identity checks over time, starting with higher-risk or higher-access tiers before broadening the requirement.
For users in regions where Claude is not officially supported, this verification process creates an additional barrier that is difficult or impossible to clear, whether or not they use a VPN.
VPN Users and Geographic Workarounds Are Directly Affected
The KYC requirement has an outsized impact on two groups in particular: people who use VPNs to access Claude from unsupported regions, and people who use VPNs specifically to preserve their anonymity while interacting with AI tools.
VPNs can mask a user's IP address and make it appear as though they are connecting from a different country, which some users rely on to access services unavailable in their location. But identity verification cuts through this workaround entirely. A VPN changes where you appear to be connecting from; it does not change who you are or what documents you can produce.
According to reporting on the policy, Anthropic can ban the accounts of users caught using circumvention tools. This creates a direct conflict for users in restrictive regions who have historically used VPNs both to access services and to protect their personal information from surveillance.
A Broader Trend Toward Identity-Linked AI Access
Anthropic is not operating in isolation here. Across the technology sector, there is a clear and accelerating movement toward tying access to verified real-world identities. Social media platforms, financial apps, and now AI services are increasingly treating anonymous access as a risk to be managed rather than a norm to be preserved.
For AI platforms specifically, there are understandable reasons behind this shift. Concerns about misuse, regulatory pressure, and liability for AI-generated content are all pushing companies toward greater accountability mechanisms. Knowing who is using a platform makes it easier to enforce terms of service and respond to legal requests.
However, these same mechanisms also mean that user behavior on the platform becomes permanently linked to a verified identity. Every conversation, every query, and every piece of generated content is attributable to a real person with a real government document on file. For many users, that is a significant privacy consideration that goes well beyond simple account security.
What This Means For You
If you use Claude or are considering using it, there are a few practical things to keep in mind.
First, the KYC requirement does not currently apply to all users or all features. If you are using Claude in a supported region for standard access, you may not encounter this requirement immediately. But the precedent has been set, and it is reasonable to expect the verification requirement to expand over time.
Second, if you have been using a VPN to access Claude from a region where it is not officially available, you should be aware that continued use could result in an account ban, particularly if you are flagged during a verification step.
Third, this is a good moment to think more broadly about what you share with AI platforms and under what conditions. The terms under which you access a service shape what data is collected, how it is stored, and how it might be disclosed in the future.
Key takeaways:
- Anthropic now requires government ID and a live selfie for some Claude users to access certain features
- VPN users and those in unsupported regions face account bans if flagged for using circumvention tools
- This policy links AI usage to verified real-world identity, raising long-term privacy considerations
- The trend toward KYC requirements in AI platforms is likely to continue and expand
- Review the terms of service for any AI platform you use and understand what identity and usage data is being collected
The shift toward verified identity in AI services reflects a broader tension between platform accountability and user privacy. As more services adopt similar policies, users who care about maintaining control over their personal data will need to make more deliberate choices about which platforms they use, under what conditions, and what information they are willing to hand over in exchange for access.