AI benefits/risks

Three ways to start owning AI security

Share
Pen Tests and Bug Bounties

Security faced significant challenges in the past year. On the threat side, exploits and numerous zero-day attacks sent security teams scrambling.

On the regulation side, the U.S. SEC instituted stringent incident reporting requirements, while in the EU, government bodies proposed new frameworks like the Artificial Intelligence Act in response to rising AI threats.

Compounding this issue we see the rise of novel AI threats, which have further complicated the security industry.

Organizations are now under pressure not only to defend against these advanced threats, but also to harness the power of AI for their own security programs.

Today, AI has been woven into the fabric of entire organizational systems—and its presence has only increased—blurring the lines of responsibility and ownership within a company.

It’s reminiscent of the early days of cloud and SaaS. As organizations transitioned to this new way of computing, new security threats emerged, leaving many unsure of who should bear responsibility.

This uncertainty prompted calls for a more unified and collaborative approach to security, recognizing that protecting against evolving threats required collective effort.

It’s no different with AI. However, the speed at which AI moves and grows has accelerated more than anything we experienced with cloud-based technologies, which will require security leaders to think and act faster to get ahead of any threats.

A brave new world

There’s so much still AI unknown about AI. An industry peer recently described AI as an "alien-like technology," and I have to say that I agree. AI models are something completely new and different and will require education and new thinking.

While many security practitioners believe the foundational nature of AI offers an overall positive, at the same time, it's incredibly challenging to understand. There are three risks we face today:

  • Training on protected information: AI systems rely on data, often pulled from public sources. However, when private information gets used to train models, we need to take a very different approach to securing data. We must isolate the model so that data put in does not get served up to unauthorized individuals. This includes both the data we feed the model and data the model collects as part of its normal functionality. Teams need to ensure that only authorized individuals can access the data served back.
  • Trusting AI to do too much, too early: While AI creates efficiencies in many everyday activities, it can also make errors. Over-reliance on AI without proper controls can lead to significant issues. For example, Air Canada’s chatbot misinformed a customer about a bereavement policy, resulting in legal consequences for the company. This incident highlights the importance of ensuring that AI systems, especially those interacting with customers, are thoroughly checked and controlled to prevent misinformation.
  • Use of AI by malicious adversaries: New technology often attracts malicious adversaries who think of ways to use it for harmful purposes. Bad actors might use AI to create deep fakes or misinformation campaigns. They could also leverage AI to develop sophisticated malware or highly effective phishing campaigns targeting companies.

AI security will take a village

Moving forward, it will take a much stronger effort to build bridges between departments given the complexity and ubiquity of AI. Organizations will need different owners for the various risks associated with AI, and it's crucial to remember that some risks extend beyond the security scope, affecting many other parts of the business.

The hierarchy and responsibilities will vary across organizations and undoubtedly evolve, but here are a few thoughts on how I see it developing in the short-term:

Starting from the top, the CISO will be responsible for data loss, ensuring data integrity, and guarding against data manipulation. The CISO will also need to work very closely with the chief legal officer (CLO) to make sure models don’t break regulatory requirements such as GDPR or HIPAA.

The CLO will also own anything surrounding copyright issues and manage any and all liabilities that arise from the use of its technology externally. For example, imagine a case of a company facing legal scrutiny for its AI-powered image recognition software misclassifying images—that would fall squarely under the CLOs purview.

The chief product officer (CPO) or chief technology officer (CTO) will manage prompt injection threats and oversee the outcomes of malicious content. They will need to ensure that AI systems are not only secure, but also function correctly in the presence of potentially harmful inputs.

Three ways to take action

First and foremost, we need to start figuring out how to adopt AI technology and do it the right way. The security teams that push back on this will only get left behind, so we need to partner with business teams to adopt AI thoughtfully.

Security teams should start by taking these three steps:

  • Understand vendor usage of AI: Identify the vendors that are leveraging AI in their software, and ask specific questions to understand the application of AI to the company’s data. Determine if vendors are training models on the data the company provides and what that means in terms of protecting company data further. The Cloud Security Alliance (CSA) offers excellent resources, such as the "AI Safe Initiative" group, which includes valuable research and education and AI safety and security.
  •  Demand transparency and control: Ensure transparency in how AI gets used in the products the company uses. For example, at our company, we are very transparent about our use of AI and even let customers turn it off if they are not comfortable using the technology. These are the choices we should demand from our products. Find out which vendors are moving to a model where they are training the AI on the company’s sensitive data. While it’s risky, and security teams need to decide on their own level of comfort.
  • Follow evolving community frameworks: There are many frameworks being developed, but two that I recommend taking a look at now are the NIST AI RMF and ISO 42001. Other available frameworks include the OWASP AI Security and Privacy Guide and AI MITRE will help in staying up to date on the latest.

AI security will require a collaborative approach and clear ownership of responsibilities within organizations. By understanding how vendors use AI, insisting on transparency, and keeping up with community resources, security teams can better protect their organizations from any emerging threats.

Jadee Hanson, chief information security officer, Vanta

Get daily email updates

SC Media's daily must-read of the most current and pressing daily news

By clicking the Subscribe button below, you agree to SC Media Terms of Use and Privacy Policy.