DeepSeek AI Security Questions: Analysis, Risks, and Recommendations

  • DeepSeek democratizes access to advanced AI, but poses privacy and censorship risks due to its operation in China.
  • Experts and authorities highlight technical vulnerabilities, legal compliance issues, and the possibility of manipulation or misuse.
  • It is recommended not to share sensitive data, run the model locally whenever possible, and demand maximum legal and ethical transparency.

Doubts about the security of DeepSeek AI

DeepSeekDeepSeek, the artificial intelligence (AI) model developed in China, has sparked a profound global debate about its security, privacy, and the ethical implications of its use. Although applauded for its technological advancements and for making high-performance, open-source AI available to everyone, DeepSeek is under increasing scrutiny due to concerns ranging from data management to political censorship, vulnerabilities detected in its infrastructure and compliance with international regulations.

Thanks to its disruptive nature and rapid adoption in both academic and business environments, DeepSeek has managed to position itself as a solid alternative to giants like OpenAI and its well-known ChatGPT. However, the fact that it's an open-source AI, its data management policy, and the influence the Chinese government may exert over the platform have increased distrust among users, regulatory agencies, and cybersecurity experts around the world.

Why is DeepSeek a different AI?

Doubts about the security of DeepSeek AI

DeepSeek has represented a qualitative leap in the field of generative artificial intelligence For two main reasons. First, its open-source nature has democratized access to highly advanced technologies, allowing both individuals and companies to download, audit, modify, and run the model according to their own needs. This contrasts with the closed and restrictive model of other major industry players, something that has been highly valued by the technology community.

Second, your low cost of training and deployment It has broken paradigms, making it easier for more organizations to experiment and build AI solutions without relying on large investments. In terms of performance, DeepSeek competes even with closed reference models, ranking on par with the best commercial offerings. However, these benefits are overshadowed by several risks: open source code facilitates the development of legitimate applications, but also encourages misuse for criminal purposes, information manipulation, or cyberattacks.

Furthermore, China's technological hegemony and state control over strategic companies have fueled fears that DeepSeek could be used as a tool for international influence, whether through mass data collection, censorship of certain topics, or as a vehicle for state propaganda.

Data management and storage: privacy in question

Doubts about DeepSeek AI privacy

One of the most controversial aspects is the personal information management on DeepSeek. As various experts and international regulatory bodies, including the Organization of Consumers and Users (OCU) and European authorities, have pointed out, there are serious concerns about the platform's privacy policy and the location of its servers.

  • All collected data, including messages, files, chat history, voice recordings, keyboard patterns, and images, is stored and processed on servers located in China.
  • The company acknowledges that it may share this information with suppliers, business partners, and authorities, especially when there is a legal obligation to do so under Chinese law.
  • Explicit and automated data collection ranges from information you provide when creating your account (name, email, phone number, date of birth, password) to information about your device, operating system, language, usage habits, payment methods, and data obtained through third parties (for example, when you register through Google or Apple).

Chinese law allows state access to all this data, without specific guarantees of transparency, proportionality, or notification to affected users. This contrasts with the requirements of the European General Data Protection Regulation (GDPR), which has led to the opening of investigations and the temporary blocking of DeepSeek in countries such as Italy, Ireland, France, and South Korea. In many cases, the company has not designated a legal representative in the European Union, nor has it implemented the necessary safeguards to process European user data.

Additionally, the DeepSeek privacy policy It is opaque: it does not specify whether the data is used to create profiles or make automated decisions, it does not define clear retention periods, nor does it explain the ways users have to exercise their rights of access, rectification, or deletion of data.

When several experts in artificial intelligence and privacy were consulted, most agreed that, although mass data collection is common in the sector and is also carried out by platforms such as ChatGPT, Claude, Gemini or Grok, the physical and legal location of the servers in China adds a Specific risk of cyberattacks and access by Chinese authorities which is difficult to control from the West.

Censorship, value alignment, and government control

Censorship in DeepSeek AI

Another cause for concern is the existence of censorship filters and ideological alignment Within the DeepSeek model itself. The system's design incorporates several levels of filters to avoid responses that may be sensitive or contrary to the interests of the Chinese ruling party. Topics such as the Tiananmen protests, Taiwanese independence, human rights, or contentious geopolitical issues are often blocked or receive responses aligned with the official Chinese narrative.

This mechanism serves two functions: on the one hand, it limits the developer's legal risks in his country of origin, but on the other, it turns DeepSeek into a potential tool of propaganda and information manipulationVarious analyses by international experts have shown that it is relatively easy to circumvent these restrictions through jailbreaking techniques, demonstrating that censorship controls are not foolproof and can be exploited both to circumvent filters and to introduce additional bias, narrative manipulation, or misinformation.

Technical vulnerabilities and mass attacks: Is DeepSeek safe to use?

Security vulnerabilities in DeepSeek AI

DeepSeek's security has been put to the test since its first weeks of operation. The model has been the victim of multiple large-scale cyberattacks (DDoS), which quickly forced those responsible to limit the registration of new users and strengthen their internal protection measures.

However, one of the most alarming findings has come from independent analysts and cybersecurity companies such as KELA: DeepSeek is vulnerable to jailbreaking, a technique that allows you to bypass the model's security filters and force responses that would normally be prohibited.

During controlled tests, DeepSeek was able to generate detailed instructions for developing malicious software (malware), ransomware, and the manufacture of explosives and toxic substances. These results reveal that current security barriers are insufficient to prevent criminal or dangerous uses, posing a real risk to both private users and businesses.

Additionally, they have been identified Technical weaknesses in mobile applications for Android and iOS, related to the transmission of data to unencrypted servers, which could expose sensitive information to cybercriminals. In response to this situation, regulatory agencies in countries such as South Korea have temporarily blocked the app until minimum protection standards are guaranteed and it is adapted to local regulations.

Specific risks for companies and governments

DeepSeek AI Risks for Businesses and Governments

The risk scenario multiplies when companies or government agencies They are considering the use of DeepSeek. Beyond concerns about the privacy of personal data, there is the added danger of transferring confidential information, trade secrets, or sensitive data to infrastructure located in China and subject to legal extraterritoriality. In countries like the United States, the military and various agencies have expressly prohibited installing DeepSeek on corporate or government devices, given the possibility that the data could be transferred to Chinese authorities.

Experts such as Johna Till Johnson, Bradley Shimmin and Mike Mason, consulted by international technology portals, recommend Do not use DeepSeek in contexts where critical information is handled.If technical reasons require access to the model, it is suggested to do so through local deployments or trusted hosting providers, such as AWS, Microsoft Azure, or equivalent platforms in Europe and the United States. This way, you can limit data transfer outside the desired jurisdiction.

Additionally, the international community is concerned about the potential integration of DeepSeek into the Chinese military. AI models like this can be used to analyze war scenarios, process vast amounts of tactical and strategic information, or even participate in automated decision-making. However, Lack of transparency regarding the security measures implemented, as well as potential technological dependencies and vulnerabilities to cyberattacks, which could compromise not only privacy but even the operational stability of critical infrastructures.

Risk Comparison: DeepSeek vs. Other AIs

It is essential to contextualize the doubts raised by DeepSeek in a broader framework, comparing it with other leading AI models on the market. While it is true that Most advanced chatbots collect and process large volumes of dataWhat sets DeepSeek apart is:

  • Location and legal jurisdiction of your servers: In China, with the consequent obligations to transfer data to the State.
  • Lack of transparency in anonymization, profiling, and ARCO rights (access, rectification, cancellation, and opposition) processes.
  • Vulnerability to jailbreaking and social engineering attacksIn benchmark tests, DeepSeek has proven to be more easily exploitable in certain scenarios, although models such as Google Gemini 2.0 Flash and OpenAI o1-preview have also failed several automated security tests.
  • Explicit ideological censorship and alignment with the political interests of the developing country.

Despite these differences, we must not lose sight of the fact that Other benchmark AIs, such as ChatGPT, also store large amounts of data and share it with third parties. (suppliers, business partners, legal authorities), albeit under stricter and more transparent regulatory frameworks in Europe and the Americas. Therefore, caution and common sense are always recommended when sharing sensitive information, regardless of the AI ​​platform used.

International regulation and institutional response

The emergence of DeepSeek has accelerated the debate about the need for new regulations, international consensus, and independent auditing frameworks for artificial intelligence. There are already cases where regulatory bodies have taken decisive action:

  • Italy, Ireland and France They have opened specific investigations into DeepSeek and have blocked or limited its use, requesting clear information about the processing and location of European citizens' data.
  • South Korea temporarily suspended downloads and demanded that the privacy policy be adapted to its national regulations, after detecting weaknesses in the protection of personal information and failures in age verification.
  • In Spain, the OCU (Organization of American States) has asked the Spanish Data Protection Agency to investigate and, if appropriate, sanction the companies responsible for failing to comply with the safeguards required for international data transfers and for failing to adequately request consent, especially in the case of minors.

Given the legal vacuum in many countries, experts urge companies, governments, and users to exercise extreme caution. They advise continuous risk analysis, establishing mitigation strategies such as local model training and execution, and always demanding maximum transparency in data management and the application of content filters.

Recommendations for responsible use of DeepSeek

If, despite the identified risks, you decide to use DeepSeek, there are certain tips to minimize your exposure:

  • Avoid sharing confidential data or sensitive personal information in chats or queries to the model, especially when using the cloud version or official website.
  • Consider the possibility of run DeepSeek locally on your own computer or server, thus limiting the sending of data to China (although this may require technical knowledge and has resource limitations on home computers).
  • For businesses, It is advisable to use reliable hosting providers in your country or region (such as AWS, Azure, or EU-certified platforms), which shifts data transfer and storage outside of China.
  • Performs constant safety testing and update your protections against jailbreaking and cyberattacks, as AI models evolve and can incorporate vulnerabilities with each update.
  • Regularly monitor and review the privacy policies and terms of service, as they may change over time and affect the level of protection of your information.

In all cases, the experts' maxim is Do not be carried away solely by the free nature or apparent accessibility of the modelData security and privacy must always be a priority, especially when it comes to technologies that evolve so rapidly and may be subject to external pressures or sudden regulatory changes.

Ethical and social perspective: the double face of open-source AI

Delving deeper into the ethical debate, the rise of open-source platforms like DeepSeek raises a difficult dilemma: How to balance the innovation and democratization of technology with the protection of fundamental rights?

On one hand, Open access boosts researchIt allows small businesses to compete with large corporations and increases technical transparency by facilitating code scrutiny and training processes. At the same time, it removes many of the control barriers that hinder misuse, information manipulation, and the development of automated attack tools.

Various experts have highlighted the lack of international consensus on AI governance and the absence of robust independent audits leave models like DeepSeek in an ethical and legal gray areaThe rapidity with which new models are developed and deployed means that, in many cases, regulation comes late and cannot prevent risks in real time.

In the social and geopolitical sphere, the expansion of DeepSeek raises additional questions. While it can become a tool for inclusion and educational or scientific advancement, its exploitation for cyberwarfare, reality manipulation, or the militarization of AI reinforces the need for mechanisms of global oversight and shared responsibility.

Finally, the development of AI in China and its use in national defense, without external controls or international validation, raises concerns about the role DeepSeek could play in future information crises, armed conflicts, or mass disinformation campaigns.

The story of DeepSeek perfectly illustrates the delicate balance between technological advancement and the protection of fundamental values. With a history fraught with regulatory controversies, ethical dilemmas, security issues, and great promises of innovation, DeepSeek has become a paradigm for the contemporary challenges of artificial intelligence. The future of open-source AI, and especially of platforms developed in countries with opaque or restrictive legislation, will depend largely on pressure exerted by society, international regulators, and users themselves seeking greater trust, transparency, and respect for digital rights.