CargoClear

Why Using AI Makes Supply Chains More Vulnerable to Cyberattacks

As more companies turn to AI to streamline the food supply chain, new security risks are emerging, especially as systems scale and connect across more partners, warehouses, and delivery networks. We spoke with James White, CTO and Founding Engineer at CalypsoAI, about the growing threat landscape, how AI can be used to secure itself, and […]

As more companies turn to AI to streamline the food supply chain, new security risks are emerging, especially as systems scale and connect across more partners, warehouses, and delivery networks. We spoke with James White, CTO and Founding Engineer at CalypsoAI, about the growing threat landscape, how AI can be used to secure itself, and what food and grocery companies need to know before deploying AI tools in critical operations.

What’s Related

Supply Chain 24/7: Why is the food supply chain becoming more vulnerable to cyberattacks as more companies adopt AI?

James White: AI is being adopted at multiple points on the food supply chain, and each of these adoption points can lead to accidental or unintentional mishaps. Understanding the technical details of how these mishaps occur is where bad actors often begin their cyberattacks. For example, if vision AI is being used to automatically read labels on packing slips to auto-route food to a specific location, attacking the label manufacturer or printshop – where the security posture may be lower – can have devastating consequences in food being shipped to the wrong location; by the time it reached its correct stop, it has been spoiled or is no longer needed. This example shows how the food supply chain is particularly vulnerable, as it has many interdependent components, and these can be targeted to cause damage in the idealistic implementation of AI.

James White is CTO at CalypsoAI

SC247: What are the biggest cybersecurity risks that come with scaling AI across food distribution networks like GrubMarket?

JW: One risk in scaling AI across supply chains is the likelihood that there are different levels of security across the participants and platforms that make up the chain. A warehouse-based wholesaler may rely on older infrastructure that wasn’t built with modern cybersecurity in mind. An attempt to introduce AI for efficiency may be counterproductive if it opens up new access points for threats. Similarly, there could be variations in security standards between internal data collection and third-party delivery services that use that data. When AI is layered on top to streamline decision-making or automate logistics, this may unintentionally expose weaknesses in outdated code or loosely controlled access points. These weak links are especially dangerous because AI systems often require real-time data sharing across multiple vendors, partners, and geographic regions, further amplifying the potential for a breach through a less secure touchpoint.

SC247: How does AI make it easier for the food supply chain to be attacked?

JW: Any use case where AI is being utilized already has an existing set of security concerns. These do not change with the introduction of AI but instead manifest in different ways and are likely to be extended. Bad actors are constantly looking for low-hanging fruit opportunities to get the best bang for their buck, and AI, as a nascent space, represents a clear opportunity to take advantage of a combination of low threat understanding and brand new tooling. They will understand the vulnerabilities in major models, tools that host them, and how to manipulate inputs and indeed improve chances of achieving desired outputs. 

SC247: Can you explain what it means to “use AI to police AI”? 

JW: “Using AI to police AI” refers to using artificial intelligence systems to monitor and defend other AI systems and agents in real time; for instance, AI may flag suspicious or irregular behavior that falls outside normal patterns more quickly and accurately than humans. AI’s unique ability to learn and adapt can be harnessed to quickly detect and mitigate threats. In modern supply chain environments, where AI can be applied to manage everything from staff scheduling to shipping, employing AI-powered defensive layers ensures that threats are flagged and addressed before they escalate into operational chaos. This approach streamlines security while simultaneously allowing for continued innovation. 

SC247: Are there specific points in the food supply chain that are especially at risk?

JW: Any point that does not have a human in the loop or has appropriate AI security applied can silently fail at scale. Generative AI is famously poor at interpreting numbers and applying math, so, for example, if AI were used to monitor freezer temperatures, it is quite conceivable that it might incorrectly send an alert based on a miscalculation due to outside temperature. For last-mile tracking, if a human with difficult-to-read handwriting manually altered an address, the goods may be incorrectly placed on the wrong last-mile delivery truck, etc.

SC247: What are the biggest misconceptions companies have about securing AI systems?

“Many organizations underestimate how complex AI systems are and believe that traditional security tools are enough to protect them from exploitation. But traditional security tools don’t account for the specific vulnerabilities introduced by AI models.”

JW: Many organizations underestimate how complex AI systems are and believe that traditional security tools are enough to protect them from exploitation. But traditional security tools don’t account for the specific vulnerabilities introduced by AI models. We are all aware of high-profile cases where AI models give inaccurate or inappropriate responses, or exhibit ‘hallucinations’. AI applications and agents use these models as their ‘brain’ to power decision-making, so it is essential that security starts at the model stage and continues through each subsequent level. AI is constantly evolving, with new models emerging daily. Therefore, this process must be efficient and continuous to protect and preserve your hard-won security posture. 

SC247: Can you share a scenario that highlights how a lack of AI security could disrupt food supply operations?

JW: An AI agent is made up of three things: a purpose, a brain, and tools. The purpose is the job you’ve given it; the brain is an AI model or models; and the tools could be digital or physical. There are two places where security is required: at the ‘thought’ stage, where the agent considers what steps to take, and at the ‘action’ stage, where the agent may take an incorrect action. 

Imagine a trucking company or other supply chain organization that uses AI to optimize routing and delivery schedules but doesn’t have appropriate security in place. If it falls victim to a cyberattack, bad actors could shut down or stall the system, causing delays, food spoilage, and financial losses. In a domino effect, retailers could then face empty shelves, resulting in revenue loss and a damaged brand reputation. This kind of attack can be achieved without taking the entire system offline; rather,r it simply hinders AI’s ability to make clear and correct decision-making. However, if protection is in place to police AI with AI, the agentic defense system will have picked up the threat at the ‘thought’ stage, before any action is taken. 

SC247: What do you recommend for food supply companies just beginning their AI journey—what should they do first to stay secure? 

JW: Firstly, ensure the chosen use case is appropriate to be solved by AI, as not every use case is a good fit. 

Secondly, review the existing required controls for that use case and understand how each of them can be mapped to an AI solution. If they cannot be mapped, don’t worry, as more advanced AI security platforms may be required that specifically protect AI use cases. 

Thirdly, AI model selection is extremely important – explore in detail which model(s) will be used for the specific use case and whether they are fit for purpose in terms of quality and, just as importantly, security. 

When the right combination of use case and model has been selected, implement the required controls. Continuously test during the software development lifecycle (SDLC) using an appropriate AI red-teaming solution. 

Once the AI system is live in production, continuously evaluate it against zero-day attacks and add extra controls as necessary.

source