Why AI is an Overlooked Cybersecurity Risk in Supply Chains

To fully reach the potential of AI, security needs to be at the forefront of implementation.

Interos ai Ted Krantz Headshot
Adam121 Adobe Stock 315095274
adam121 AdobeStock_315095274

Embedding AI technologies throughout supply chain operations has the potential to reshape risk management. In fact, new data found that enterprises using unified AI platforms in their supply chain operations achieve significantly better outcomes than those using isolated solutions. As AI proves its value in supply chain management, organizations that fail to adopt it risk falling behind, especially as competitive pressure accelerates the AI arms race.

AI offers organizations numerous competitive advantages in the management of supply chain risk. It creates visibility into supplier risk, ensures regulatory compliance, contributes to due diligence processes and drives operational resilience. Despite AI being the foundation of a digital revolution across the globe, the failure to leverage AI securely has potential to bring more threats than opportunities. JPMorgan’s CISO warned of the amplification of risk introduced by applications of AI and the need for more security controls.

To fully reach the potential of AI, security needs to be at the forefront of implementation. Not only within an enterprises’ own organization, but AI security must also be integrated into third-party security strategies to minimize the risks of AI adoption throughout intertwined supply chains.

The dual-use of AI surfaces risks

On the one hand, AI is used to enhance cyberattacks. On the other, businesses are using AI to streamline processes, boost efficiencies and make more informed, data-based decisions. Both applications surface new risk for enterprises beyond concerns about generative AI model hallucinations and misinformation.

Today, hackers are utilizing AI to enhance the sophistication of cyberattacks. Researchers are increasingly discovering malicious AI models on open-source platforms, but there are additional threats stemming from data input and outputs.

The input of sensitive data into public AI models could bring about privacy and IP risks as this data is made accessible to the model developer, or even hackers who have found a way to access it. On the other hand, the ability to spread data-stealing prompt injection worms through large language models creates concerns over data output risks.

These new third-party risks stemming from AI models require strict security assessments and considerations. With the global web of interconnected supply chains, this makes it more important for enterprises to understand the third-party risk that AI adoption introduces to their supply chain.

Top 3 AI risk vectors

The current narrative around AI fails to account for the data quality and the related risks as part of the security framework. However, the reality is that AI models that are not carefully managed or securely integrated can introduce significant risks from misinformation to system failures.

The top risk vectors for enterprises to be aware of throughout their supply chains are:

·        Input: The risk of data poisoning at the input level is especially common in large language models, with models dependent on the quality of signals. When training data poisoning occurs, it shapes the algorithms of the model.

·        Model: Algorithm corruption or poisoning in models can also enable attackers to potentially inject malicious code through openly available models.

·        Output: Prompt injection attacks are a growing concern where finely targeted prompts can manipulate AI model output.

From input to output, minimizing AI risk at every stage

So, how can enterprises ensure that AI is not opening their organization up to risk at any of these stages?

At the input level, cleaning data is an essential component to prevent data poisoning. This is even more critical for AI models that leverage public data like supply chain risk management applications. This can range from simple applications, for example, flagging a zip code that doesn’t contain five numerals, to requiring manual intervention and data scrubbing to account for new regulatory changes.

Similar to the input stage, human collaboration is required to prevent AI algorithms and models from becoming corrupt. When it comes to real-time mapping supply chain risk scores, there is a calibration of private signals, company signals, and external risk signals – as well as putting weight against each of these parameters. As scores are calculated, risk management firms must collaborate with enterprises to ensure that the score determined is accurate and meaningful to the customer.

Finally, as prompt injection attacks grow more common, bad actors can gain entry points to company data by calculating prompts to manipulate model output. In every stage, enterprises need to be aware of the AI risks that open the door to introduce flaws, vulnerabilities or other manipulations to their supply chain.

Managing risk across supply chain ecosystems

It’s not just physical and digital supply chains that need to be secured today. AI supply chains must also be secured, from input to algorithms to outputs. Today’s CISOs need to pay closer attention to AI risks as the technology is adopted at breakneck pace across organizations.

From a federal standpoint, there’s urgency for government agencies to spearhead AI governance frameworks with clear guardrails to prevent harmful effects of AI. Nonetheless, it’s also critical for enterprises to have visibility into the AI tools being adopted within their organization. On top of that, having a grasp on how partners and suppliers, both direct and indirect, are using AI in their operations can help enterprises be aware of potential vulnerabilities within their supply chain.

Driving supply chain resilience by securing AI

Cyber incidents in today’s digital age are no longer isolated incidents; they affect a number of companies downstream of breached organizations. To unlock AI’s full potential in supply chains, security must be prioritized from the start. With each organization adopting AI differently, a strong security framework is essential to managing unique risks and to scale safely.

 

 

Page 1 of 72
Next Page