Supply chains have always been global, interdependent, and inherently complex. What has changed is the extent to which artificial intelligence now sits at the center of that system. AI increasingly shapes supplier qualification, demand forecasting, logistics optimization, quality assurance, and compliance monitoring. At the same time, those AI systems are themselves assembled from sprawling networks of hardware components, open-source libraries, third-party models, datasets, and APIs—each introducing new points of risk.
For supply chain leaders, this creates a paradox. AI is becoming essential to managing volatility, disruption, and scale, yet it also expands the supply chain’s attack surface through opaque dependencies and embedded software risk. Resolving this tension requires a shift in thinking: supply-chain security must be treated as an end-to-end discipline, integrating hardware and software assurance from the physical edge to core enterprise systems.
The new “source of truth” challenge
One of the most persistent challenges in supply-chain management is establishing a reliable source of truth. Organizations must know where parts come from, how they were handled, and whether they meet regulatory and contractual requirements. This challenge is no longer limited to finished goods or tier-one suppliers. Modern supply chains now involve thousands of component providers, raw material sources, and subcontractors operating across multiple regulatory jurisdictions.
AI can help correlate and analyze this data at scale, but only if the underlying inputs are trustworthy. If data provenance is unclear, telemetry is incomplete, or components cannot be reliably identified, then AI simply accelerates the spread of uncertainty. In this sense, AI magnifies both the strengths and weaknesses in existing supply-chain practices.
That is why leading organizations are extending supply-chain security deeper into the technology stack. Rather than relying solely on documentation, audits, or after-the-fact reconciliation, they are embedding verification and attestation mechanisms directly into the systems that generate and move data.
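One way to embed verification directly into the systems that generate and move data is a tamper-evident chain of custody, in which each recorded handling event cryptographically commits to the one before it. The sketch below is a minimal illustration of that idea, not a description of any specific product; the field names and helper functions are hypothetical.

```python
import hashlib
import json
import time

def append_custody_event(chain: list, actor: str, action: str,
                         payload_hash: str) -> list:
    """Append a custody event that commits to the previous entry's hash,
    so any retroactive edit breaks every later link in the chain."""
    prev = chain[-1]["entry_hash"] if chain else "0" * 64
    event = {"actor": actor, "action": action,
             "payload_hash": payload_hash, "ts": time.time(), "prev": prev}
    # Hash the event body (before the entry_hash field is added).
    event["entry_hash"] = hashlib.sha256(
        json.dumps(event, sort_keys=True).encode()).hexdigest()
    chain.append(event)
    return chain

def verify_chain(chain: list) -> bool:
    """Recompute every link; an altered entry invalidates the chain."""
    prev = "0" * 64
    for e in chain:
        body = {k: v for k, v in e.items() if k != "entry_hash"}
        if e["prev"] != prev:
            return False
        recomputed = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if recomputed != e["entry_hash"]:
            return False
        prev = e["entry_hash"]
    return True
```

In practice such records would be signed by hardware-backed keys rather than merely hashed, but even this simple structure shows why after-the-fact reconciliation is weaker than verification built into the data flow itself.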
Hardware as the foundation of trust
Security discussions often focus on applications, networks, and cloud services. But many of the most damaging attacks exploit weaknesses at lower layers of the stack: firmware, device identity, and endpoint integrity. From a supply-chain perspective, this matters because every system ultimately relies on physical components executing instructions at the hardware level.
A hardware-rooted security approach enables capabilities such as unique device identification, tamper detection, and cryptographic attestation. These features make it possible to verify that a component or system is genuine, unaltered, and operating as expected throughout its lifecycle. When integrated into procurement, manufacturing, and logistics workflows, they provide a persistent chain of custody that software alone cannot guarantee.
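The verification step at the heart of cryptographic attestation can be sketched as follows. This is a deliberately simplified illustration: real hardware attestation (for example, TPM quotes) uses asymmetric keys and signed measurement logs, whereas here an HMAC over a device's reported firmware measurement stands in for the signed quote, and the key names and "golden" value are hypothetical.

```python
import hashlib
import hmac

# Hypothetical approved firmware measurement, recorded at provisioning time.
GOLDEN_FIRMWARE_HASH = hashlib.sha256(b"firmware-v1.2.3").hexdigest()

def attest_device(device_id: str, reported_hash: str, signature: bytes,
                  provisioning_key: bytes) -> bool:
    """Verify that the device's firmware measurement is authentic and
    matches the approved baseline.

    First check that the report really came from the provisioned device
    (the HMAC stands in for a hardware-signed quote), then check the
    measurement against the known-good value."""
    expected_sig = hmac.new(provisioning_key,
                            f"{device_id}:{reported_hash}".encode(),
                            hashlib.sha256).digest()
    if not hmac.compare_digest(expected_sig, signature):
        return False  # report forged or altered in transit
    return hmac.compare_digest(reported_hash, GOLDEN_FIRMWARE_HASH)
```

Run at enrollment and periodically thereafter, a check like this is what turns "this device claims to be genuine" into "this device has proven it is genuine and unmodified."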
All this is happening against the backdrop of constant evolution in AI models and their dependencies as models are retrained, weights are adjusted, libraries are updated, and regulatory requirements shift. A comprehensive security strategy, therefore, requires continuous validation, and hardware-level telemetry plays a crucial role by generating real-time signals about system behavior that software alone may not detect. This includes monitoring model behavior, tracking dependency changes, and validating that systems are operating within expected parameters.
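Dependency-change tracking, one of the continuous-validation steps mentioned above, can be as simple as hashing every artifact an AI system depends on (model weights, libraries, datasets) and diffing against a pinned baseline. The sketch below illustrates that pattern under those assumptions; the function names are illustrative, not from any particular tool.

```python
import hashlib
from pathlib import Path

def snapshot_dependencies(paths) -> dict:
    """Hash each dependency artifact (weights, libraries, datasets)
    to produce a baseline manifest of path -> SHA-256."""
    return {str(p): hashlib.sha256(Path(p).read_bytes()).hexdigest()
            for p in paths}

def detect_drift(baseline: dict, current: dict) -> dict:
    """Compare a fresh snapshot against the pinned baseline and report
    artifacts that changed, appeared, or disappeared."""
    changed = sorted(p for p in baseline.keys() & current.keys()
                     if baseline[p] != current[p])
    return {"changed": changed,
            "added": sorted(current.keys() - baseline.keys()),
            "removed": sorted(baseline.keys() - current.keys())}
```

Any non-empty drift report is a trigger for revalidation: an updated library or silently swapped model file should be re-verified before the system is trusted again.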
Extending zero trust to AI and the edge
Zero-trust principles, such as least-privilege access, continuous authentication, and verification of every interaction, are already well established in cybersecurity. Applying them to AI workflows, however, requires additional rigor. AI systems often span multiple environments, from edge devices and operational technology to data centers and cloud platforms. Each transition introduces risk.
Extending zero-trust concepts across this continuum means ensuring that every data flow, model invocation, and system interaction is verified regardless of location. Many organizations focus security investments on centralized infrastructure, yet a significant share of threats originates at endpoints and edge devices. These systems collect data, run local analytics, and often serve as gateways into core environments. As these risks become better understood, zero-trust expectations are increasingly being pushed downstream: suppliers, component providers, and service partners are being asked to meet the same verification standards, so that finished goods carry verifiable trust at every stage.
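Verifying every interaction regardless of location can be reduced to a policy check applied on each request: is the caller's identity proven, is its hardware attestation recent, and is the requested operation within its least-privilege scope? The sketch below illustrates that gate; the field names, freshness window, and allowed scopes are hypothetical, and in practice identity verification would come from something like mutual TLS rather than a boolean flag.

```python
from dataclasses import dataclass
import time

@dataclass
class Request:
    device_id: str
    attested_at: float       # time of last successful hardware attestation
    identity_verified: bool  # e.g. result of an mTLS client-cert check
    scope: str               # operation being requested

MAX_ATTESTATION_AGE_S = 300                        # hypothetical freshness window
ALLOWED_SCOPES = {"sensor-upload", "model-invoke"}  # least-privilege allowlist

def authorize(req: Request, now=None) -> bool:
    """Zero-trust gate applied to every interaction: verify identity,
    attestation freshness, and scope before allowing the request."""
    now = time.time() if now is None else now
    if not req.identity_verified:
        return False
    if now - req.attested_at > MAX_ATTESTATION_AGE_S:
        return False  # stale attestation: re-verify before trusting
    return req.scope in ALLOWED_SCOPES
```

The key design point is that the check runs on every call, whether the request originates in a data center or on a factory-floor sensor; there is no implicitly trusted network location.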
As AI capabilities move closer to the edge for performance, latency, and privacy reasons, securing these endpoints becomes even more critical. A unified approach that applies consistent security principles from edge devices to core systems reduces blind spots and limits lateral movement by attackers. Here again, hardware and software must work together. Hardware-anchored identity and attestation provide a reliable basis for trust decisions, while software enforces policies and monitors behavior at scale.
Securing AI in supply chain environments requires coordination across hardware design, software architecture, procurement policy, and operational governance. Organizations that fail to grasp this complexity will struggle both to realize value and to maintain robust security. Those that embed trust into the foundation of their systems, including trust in the AI infrastructure they use, will be better positioned to confidently leverage AI-driven insights to manage risk, adapt to regulatory change, and sustain resilience in an increasingly interconnected world.
Dave Richard is Technical Director of the Defense and National Security Group at Intel, and Gretchen Stewart is a Principal Engineer and AI Solutions Architect at Intel.
