In March 2026, a wave of "lobster farming" swept across the Chinese internet. The "lobster" is not the crayfish people can feast on, but rather an open-source AI agent named OpenClaw.
Compared to previous AI agents that could only chat and generate content, OpenClaw's operation method represents a paradigm shift. It can directly take over a computer, autonomously executing corresponding tasks according to instructions, and operating the mouse and keyboard just like a real person to help you send emails, organize files, and even write code. Although most people have only a superficial understanding of it, it's clear no one wants to miss out on this trend.
Naturally, this craze quickly swept into the medical field as well. However, while this AI has brought efficiency improvements, just one instance of failure could lead to disastrous consequences.
The most infamous failure caused by OpenClaw undoubtedly comes from Meta. Summer Yue, director of alignment at Meta Superintelligence Labs, tried, as usual, to get OpenClaw to organize her cluttered inbox. To ensure safety, she specifically set a safety instruction: The AI must confirm with her before taking any action.
However, OpenClaw erroneously ignored this instruction. Not only did it fail to request confirmation, but it also began frantically deleting emails.
When the expert came to her senses, she immediately typed a stop command, but in her panic, she didn't enter the correct one.
"I had to RUN to my Mac mini like I was defusing a bomb," Yue wrote on X with a sense of despair.
When even an AI safety expert can encounter such a problem, the immense risks posed by OpenClaw are evident. The good news is that cybersecurity management agencies have been issuing consecutive risk announcements.
After the "lobster farming" trend gained momentum, on March 10, China's National Computer Network Emergency Response Technical Team/Coordination Center issued another risk alert regarding the secure use of OpenClaw.
According to statistics from the China National Vulnerability Database (CNNVD), from January 2026 to March 9, 2026, a total of 82 vulnerabilities were collected for OpenClaw. Among these, there were 12 critical risk vulnerabilities, 21 high-risk vulnerabilities, 47 medium-risk vulnerabilities, and 2 low-risk vulnerabilities, encompassing various types, including access control errors, code issues, and path traversal.
It is understood that renowned top-tier hospitals have issued notices strictly prohibiting the connection of OpenClaw to their hospital intranets, business-specific networks, and various medical information systems. Even if research teams require its use for study and testing, they must comply with security regulations and implement adequate safety precautions.
The swift response from regulatory bodies undoubtedly provides the public with some reassurance. However, this does not eliminate security issues caused by OpenClaw. After all, in the vast majority of cases, the main cause of security problems is not the system, but human error. No matter how strict the security rules or how perfect the system, if a person's security awareness lapses even slightly, then incidents like the one caused by OpenClaw in another time and space could very well happen in the real world as well.
The fundamental reason OpenClaw has gained such popularity is its ability to intelligently handle various tasks, which endows it with characteristics of high privilege, high automation, and high connectivity. However, these very characteristics that give OpenClaw its advantages run counter to security principles.
Take the most critical aspect, privilege, as an example. Traditional network security follows the principle of least privilege, granting only the minimum permissions necessary to complete a task. But to accomplish complex automated tasks, OpenClaw typically needs to be granted high system privileges. Once it is maliciously exploited or if the AI misjudges a situation, it could directly delete core data or tamper with system configurations, causing damage far exceeding that of ordinary programs.
Furthermore, in sensitive fields like finance and healthcare, critical operations such as data deletion or permission modification usually require secondary confirmation or manual approval. OpenClaw, however, can automatically plan and execute a series of operations based on natural language instructions without human intervention, showcasing its high level of automation. But if the instructions are induced, it might execute dangerous operations unnoticed, and tracing responsibility afterward becomes difficult.
According to an AI security expert from DBAPPSecurity, there are currently four core hidden dangers regarding the application of OpenClaw in medical environments.
The first is data privacy leakage. OpenClaw needs to call upon large models or search for information online, posing a risk of data exposure.
The second is the risk arising from AI hallucinations. "Previous lessons have proven that OpenClaw might hallucinate or misjudge operational commands, leading to erroneous actions; in scenarios involving vague instructions or incomplete information, it also tends to fill in the missing information on its own and then execute directly, leading to extremely high operational risks. More seriously, if it gives wrong advice due to hallucinations during diagnosis and treatment, it could be life-threatening," the expert noted.
The third is the risk of system privilege escalation. The expert believes that once OpenClaw, which carries high risks, interfaces with a hospital's core systems, it could very likely become a springboard for attackers, making prevention extremely difficult.
Finally, the expert believes the security risks of the model itself should not be underestimated, "OpenClaw is fundamentally built upon large models and may be susceptible to risks like prompt injection and data poisoning. Once a problem occurs, it's hard to trace, so it needs to be handled with caution."
Of course, through various security configurations and restrictions, the security risks of OpenClaw can be significantly reduced. However, by that point, its functionality would also be vastly different from its original form.
A seasoned programmer noted that if strict security configurations are applied to OpenClaw, such as complete localization, strict permissions, and read-only analysis, then, compared to existing solutions like custom automation scripts, it might not hold absolute advantages in terms of security, cost, and maintainability.
The security expert added that at the current stage, security and intelligence are indeed at odds with each other, but OpenClaw still has its merits.
"If you want it to be more intelligent, then privilege control needs to be more open, and security thus decreases; if you want it to be more secure, then you need more restrictions, and its intelligence level drops. If some hardening and customized development are done for OpenClaw, even if its intelligence level decreases somewhat, it might still be usable in medical environments. At the very least, it's still based on a large model, allowing users to interact through natural language, which could lower the barrier to using some professional software and assist with tasks like document editing and statistical analysis."
"AI agents, especially the popularity of end-side agents like OpenClaw, will be a watershed moment for security. Traditional security solutions are essentially based on static rules, or a kind of boundary defense, which are becoming less effective in an AI environment. In the future, we may need to strengthen behavior analysis by performing contextual analysis of system instructions, meaning analyzing behavioral intent. This will be the development direction for security solutions in the AI era," he added.
The expert also said that developing such security solutions capable of meeting the demands of the AI era still presents considerable difficulty.
"We need to consider all risk scenarios and handle massive amounts of data; we need to consider performance issues to avoid excessive system overhead; and we must also consider the security of the security products themselves. Currently, we are also conducting some preliminary work on AI protection, aiming to launch mature AI security solutions as soon as possible."
(Source: vcbeat, WeChat Public Platform)
Related News:
Deepline | It starts with AlphaGo: How move 37 on board predicts future of AI
Deepline | Is AI drama production already here? Unpacking viral and misleading 'Huo Qubing' story
Comment