點新聞-dotdotnews
Through dots, we connect.

Deepline | The end of 'anonymous AI'? Anthropic enforces real-name verification, hitting VPN users

Deepline
2026.04.17 15:10

Anthropic has begun aggressively enforcing real-name verification without any warning.

Starting today (April 17), Anthropic is requiring some Claude users to submit government-issued IDs and real-time selfies to continue accessing certain capabilities.

According to Anthropic's official statement, this mechanism currently applies only to "a small number of use cases," with the aim of "preventing abuse, enforcing usage policies, and complying with legal obligations."

The public reaction has been explosive. On platforms like X and Reddit, Chinese users have responded most fiercely. Since China is not on Claude's list of supported regions, a large number of Chinese users who rely on VPNs, relay nodes, or overseas accounts are suddenly facing verification failures or even account bans.

One highly upvoted post said, "Claude truly harbors deep animosity toward China, to the point of unprecedentedly rolling out real-name verification…"

Another popular comment dripped with sarcasm: "On the issue of real-name authentication, China and the US are finally moving toward each other."

European and American users, meanwhile, have focused on privacy risks. Many have openly stated that they cannot accept such a surrender of privacy and are threatening to switch to competitors like OpenAI immediately.

One user bluntly said, "This is the turning point where an AI tool platform becomes an AI surveillance platform."

KYC (Know Your Customer) makes tracking and banning easier than ever before. A fierce debate has erupted, and many questions have quickly surfaced.

Given that Anthropic knew it would provoke strong backlash from users, why did it still choose to push ahead so forcefully? In this upheaval, who truly stands to benefit, and who will be the first to pay the price?

On the surface, Anthropic's justifications are legitimate: abuse prevention, policy enforcement, and legal compliance.

But upon closer examination, the biggest winner in this move is Anthropic itself.

First: Risk control benefits.

Claude's strength in reasoning, coding, and tool use makes it a prime target for underground black-market operators. Practices like API key reselling, shared account splitting, and reviving banned accounts with new identities have cost Anthropic dearly over the past few years.

Real-name verification is a powerful remedy. Once a real identity is bound, shared accounts become traceable, reviving a banned account requires a new physical identity, and the cost of bulk abuse skyrockets from changing an email address to changing a passport. Account farms and resale channels can be precisely suppressed, and Anthropic is no longer easily exploited by swarms of crawlers, scripts, and sock puppets.

Second: Commercial benefits.

In the past, ordinary subscription users, independent developers, enterprise customers, and government and security-sector users were often mixed, making fine-grained management difficult.

After real-name verification, Anthropic can implement differentiated pricing and services:

- Offering the standard version of Claude to ordinary users;

- Unlocking higher API quotas for verified developers;

- Providing customized private deployment for enterprise customers;

- Offering "whitelist access + high unit price" terms for government and high-security industries.

Real-name verification redefines Claude from a public product into a tiered commodity, giving Anthropic absolute control over pricing.
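As an illustration only, the tiering described above amounts to mapping each verified identity class to a distinct service level. The sketch below is hypothetical: the tier names, quotas, and `Entitlements` structure are invented for this example and do not reflect Anthropic's actual system.

```python
# Hypothetical sketch of identity-tiered entitlements.
# All tier names and numbers are illustrative, not Anthropic's real policy.
from dataclasses import dataclass


@dataclass(frozen=True)
class Entitlements:
    daily_requests: int       # API call quota per day
    max_context_tokens: int   # largest allowed context window
    sensitive_tools: bool     # access to high-risk capabilities


# Each verified identity tier maps to a different service level.
TIERS = {
    "standard":   Entitlements(1_000,   100_000, False),
    "developer":  Entitlements(50_000,  200_000, False),
    "enterprise": Entitlements(500_000, 200_000, True),
    "government": Entitlements(500_000, 200_000, True),
}


def entitlements_for(tier: str) -> Entitlements:
    """Look up the entitlements granted to a verified identity tier."""
    try:
        return TIERS[tier]
    except KeyError:
        raise ValueError(f"unknown tier: {tier!r}")
```

The design point the article is making is visible in the lookup itself: once identity is verified, pricing and capability decisions collapse into a table the platform fully controls.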

Third: Compliance benefits.

In recent years, Anthropic has strongly advocated for transparency and responsible scaling while actively pursuing government contracts and high-security industry business.

On this path, knowing who is using the platform, tracing responsibility when problems arise, and being able to delegate authority in a differentiated manner have become hard requirements. Real-name verification provides a reliable identity anchor, especially in a complex geopolitical environment.

If stricter export controls or sanctions policies emerge in the future, Anthropic will be able to precisely restrict access to capabilities for specific regions or users, avoiding compliance dilemmas.

With the rollout of real-name verification, however, three types of users are being left behind first:

First, users in "unsupported regions" are the most directly impacted. With the introduction of real-name verification, those who previously relied on VPNs, relay nodes, or overseas accounts now face a higher risk of verification failure, functional restrictions, or even account bans.

Second, proxy IP and gray-market players are in trouble. Users who rely on VPNs, virtual phone numbers, or abnormal usage patterns—regardless of their true intent—are flagged by algorithms as high-risk and are precisely the targets Anthropic aims to eliminate with real-name verification.

Third, privacy-conscious individual users are left uneasy. Researchers, developers, and heavy individual users, particularly those with strong privacy awareness, will feel significant inconvenience. Their greatest concern is that their personal conversations will be directly linked to their real identities.

In contrast, large enterprises and government customers will feel reassured by this move. They have always cared more about auditing, permissions, and who invoked which capabilities. Combined with corporate identity systems, real-name verification better aligns with their compliance requirements when procuring models.

As soon as the news broke, a large number of users publicly expressed dissatisfaction on social media, with cries of "switch to OpenAI" echoing across the platform. Competitors are almost ready to welcome this gift. But why would a rational commercial company actively push its users toward rivals?

Unless it isn't worried at all.

Anthropic's confidence comes from Claude's current product competitiveness.

One user put a finger on the crux: "Only products in short supply require KYC; no one imposes real-name verification on something nobody uses." In areas such as reasoning, long-context processing, and tool use, Claude still holds a leading edge.

Many enterprises, research institutions, and professional users are highly dependent on the model itself; they care more about capability ceilings than access authentication. For these high-value users, cooperating with real-name verification may be just an extra procedural step, not an unacceptable hurdle.

Anthropic is clearly betting that the short-term attrition will mainly consist of low-engagement or marginal users, not core paying users and enterprise customers.

But that's not all.

The most important outcome of KYC is that Anthropic has, for the first time, gained a lever to govern users by identity tier. Without KYC, the platform cannot accurately distinguish between ordinary subscribers, independent developers, enterprise customers, and potential high-risk users. Everyone is mixed in the same pool, using the same quotas and the same model capabilities.

After real-name verification, this ambiguous state is broken. Based on identity data, combined with existing account, device, region, payment, behavior, and capability access records, Anthropic now owns the ability to group users and apply differentiated governance.

Anthropic can achieve fine-grained management, much like a bank or telecom operator, precisely controlling who can access which capabilities, quotas, and sensitive functions.

This means that in the future, the Pro version of Claude may be the entry-level, above which there will be an enterprise-verified edition, a government whitelist edition, and a defense/security edition. Each tier will require stricter identity verification, corresponding to higher unit prices and deeper lock-in. This is not an endpoint but the starting point of a divide-and-conquer commercial empire.

Anthropic clearly believes that the value of obtaining this key far outweighs short-term fluctuations in user numbers.

When Anthropic wields the double-edged sword of real-name verification, a larger signal is emerging: the era of real-name systems for frontier models may have quietly begun.

In the past, real-name systems were exclusive to the social media, finance, and gaming industries. The AI industry has long been built on the ideal of openness—early versions of ChatGPT didn't even require email verification, and Claude itself was once a model of instant access.

Now, the wind is changing. AI capabilities are crossing a dangerous threshold. When a model can write code, analyze financial reports, simulate cyberattacks, and generate convincing fake political content, anonymous use shifts from a convenience to a risk.

Anthropic's real-name verification is the first resounding response. It shows that in an era of strong regulation, an AI platform's competitiveness no longer depends solely on model performance but also on the completeness of its compliance infrastructure.

Will real-name verification for AI become a trend? When AI becomes sufficiently powerful, how much privacy are we willing to trade for safety, compliance, and sustainable development? The answer will likely be written jointly by many more AI companies and users.

(Source: World Model Workshop)

Related News:

Deepline | OpenAI and Anthropic: Battle of models, money, and messaging

Deepline | WeChat cracks down on AI-generated content: Why the panic?

Tags: Anthropic · real-name verification · Claude · AI platform · privacy concerns · risk control · regional access restrictions
