Into the unknown

Amid the shocked reactions this week to the release of the Chinese artificial intelligence model DeepSeek, the risk we should be most concerned about is the potential for the model to be misused to disrupt critical infrastructure and services.
I wrote in 2023 about the many forms of Chinese AI-enabled technology we use that pump data back to China, where it is sorted by Chinese algorithms before it is sent back here.
These include things such as digital railway networks, electric vehicles, solar inverters, giant cranes for unloading containers, border screening equipment, and industrial control technology in power stations, water and sewerage works. Like DeepSeek, the vendors of these products are subject to direction from China’s security services.
This clear risk has been buried by the avalanche of commentary about the other implications—not least the panicked stock market reaction in which Nvidia’s share price plunged 17 percent and the Nasdaq fell 3 percent. With so much money chasing AI, investors are as twitchy as meerkats.
Don’t cry for Nvidia—cheaper AI models promise to broaden the market for its chips, and this is reflected in its recovering share price. Besides, Nvidia helped create its temporary setback by selling powerful H800 chips to Chinese companies—including DeepSeek—for a year before the Biden administration tightened up its chip export controls.
There may even be an upside when a company produces results comparable to those of leading US models, purportedly at a fraction of the price and using dumber chips. US big tech will be spurred to figure out how to do generative AI more cheaply. That’s good for business and good for the planet.
From a national security perspective, how worried should we be about an AI model whose chatbot gives such lame answers on issues sensitive to the Chinese government?
Of course it’s undesirable for yet another wildly popular Chinese app to be shaping how we think. It’s also a worry that the company will make all our data available to Chinese security services on request. DeepSeek’s own privacy policy says as much: ‘We may access, preserve, and share the information described in “What Information We Collect” with law enforcement agencies [and] public authorities … if we have good faith belief that it is necessary to comply with applicable law, legal process or government requests.’
The policy also explains that the company stores ‘the information we collect in secure servers located in the People’s Republic of China’.
But the bigger question is this: what would happen if DeepSeek’s model lowered the costs and increased the competitiveness of Chinese AI-enabled products and services embedded in our critical infrastructure? If these offerings were even cheaper and better, they might become even more pervasive in our digital ecosystem, and therefore even more risky.
Here’s another case. What if DeepSeek became the default choice for Australian and other non-Chinese companies seeking to improve their products and services with customised, low-cost, leading-edge AI? As the Wall Street Journal notes: ‘DeepSeek’s model is open-source, meaning that other developers can inspect and fiddle with its code and build their own applications with it. This could help give more small businesses access to AI tools at a fraction of the cost of closed-source models like OpenAI and Anthropic.’
Useful applications might include customised chatbots and product recommendations, streamlined inventory management or predictive analytics and fraud detection.
Could DeepSeek embedded in tech made by non-Chinese companies be a vector for espionage and sabotage—an arm of China’s DeepState, as it were? Could DeepSeek be directed to alter embedded code or simply turn off access to its open-source model to disable these products and services?
Perhaps we can take some comfort here. One of the advantages of so-called ‘open source’ models is that users can host them in their own controlled environments to better protect their customers’ data. That would mitigate the espionage risk. Using isolated environments would also mitigate the sabotage risk to some degree. However, if DeepSeek AI were embedded in sensitive and critical systems, for example essential components of an electricity station or grid, we might want additional mitigations, given the much higher stakes.
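To make the self-hosting point concrete, here is a minimal Python sketch, assuming the Hugging Face transformers and accelerate libraries are installed and an open-weight checkpoint (the name deepseek-ai/deepseek-llm-7b-chat is one illustrative choice, as is the prompt) has already been downloaded to the local cache. The offline flags block network access, so prompts and customer data cannot leave the controlled environment at inference time.

    # Run an open-weight DeepSeek model entirely offline in a controlled environment.
    import os
    os.environ["HF_HUB_OFFLINE"] = "1"         # no calls to the Hugging Face Hub
    os.environ["TRANSFORMERS_OFFLINE"] = "1"   # fail rather than fetch over the network

    from transformers import AutoModelForCausalLM, AutoTokenizer

    MODEL = "deepseek-ai/deepseek-llm-7b-chat"  # illustrative checkpoint, cached locally

    tokenizer = AutoTokenizer.from_pretrained(MODEL)
    model = AutoModelForCausalLM.from_pretrained(MODEL, device_map="auto")

    # Format the conversation the way the model expects, then generate locally.
    prompt = tokenizer.apply_chat_template(
        [{"role": "user", "content": "Summarise this maintenance log: ..."}],
        add_generation_prompt=True,
        return_tensors="pt",
    ).to(model.device)

    reply = model.generate(prompt, max_new_tokens=200)
    print(tokenizer.decode(reply[0][prompt.shape[-1]:], skip_special_tokens=True))

Hosting the weights this way addresses data leakage, but it does not by itself address the integrity of the model or of updates to it, which is why the higher-stakes settings above may warrant additional mitigations.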
The key point is that governments need to take a close look at the potential risks of DeepSeek employed in sensitive areas in two contexts: by Chinese companies, given their legal obligations to co-operate with China’s security agencies, and by non-Chinese companies that might use applications derived from the DeepSeek model. In Australia, that sounds like a job for the security review process recently established under our framework to ‘consider foreign ownership, control or influence risks associated with technology vendors’.
It’s early days. US big tech is not going to rest on its oars. DeepSeek may not be as cheap as it claims, nor as original. Indeed, OpenAI is investigating whether DeepSeek leaned on the company’s tools to train its own model. But when it comes to protecting our digital ecosystems from emerging technologies with the game-changing potential of DeepSeek, it’s never too early to start planning.
This article was published by The Strategist.

Simeon Gilding is a senior fellow at ASPI and has previously held senior positions across Australia’s national security community, including at the Australian Signals Directorate where he was deputy director-general responsible for signals intelligence and offensive cyber operations.