Tackling Trust, Risk and Security in AI Models

By Lori Perri | 4-minute read | September 5, 2023

Big Picture

6 reasons you need to build AI TRiSM into AI models

Generative AI has sparked extensive interest in artificial intelligence pilots, but organizations often don’t consider the risks until AI models or applications are already in production or use. A comprehensive AI trust, risk and security management (TRiSM) program helps you integrate much-needed governance upfront and proactively ensure that AI systems are compliant, fair and reliable, and that they protect data privacy.

If you have any doubt that AI TRiSM is needed, consider these six drivers of risk, many of which stem from users simply not understanding what is really happening inside AI models.

1. Most people can’t explain to the managers, users and consumers of AI models what AI is and what it does.

  • Don’t just explain AI terms; be able to articulate:

    • Details or reasons that clarify, for a specific audience, how the model functions

    • The model’s strengths and weaknesses

    • Its likely behavior

    • Any potential biases

  • Where that information is available to you, make visible the datasets used to train the model and the methods used to select that data. This can help surface potential sources of bias; one lightweight probe is sketched below.
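
To make that concrete, here is a minimal sketch of one way to probe which inputs a model relies on, using scikit-learn’s permutation importance. The dataset, model choice and feature names are all invented for illustration; this is not a prescribed AI TRiSM tool.

```python
# Minimal sketch: permutation importance as one explainability probe.
# Shuffling a feature and measuring the drop in accuracy shows how much
# the model depends on it -- a starting point for a bias review.
# All data and model choices here are hypothetical.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=6, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature_{i}: mean importance = {score:.3f}")
```

A large score on a sensitive attribute, or on a proxy for one, is a cue to investigate the training datasets and selection methods described above.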

2. Anyone can access ChatGPT and other generative AI tools.

  • GenAI can potentially transform how enterprises compete and do work, but it also injects new risks that can’t be addressed with conventional controls.

  • In particular, risks associated with hosted, cloud-based generative AI applications are significant and rapidly evolving.

3. Third-party AI tools pose data confidentiality risks.

  • As your organization integrates AI models and tools from third-party providers, you also absorb the risks that come with the large datasets used to train those AI models.

  • Your users could be accessing confidential data within others’ AI models, potentially creating regulatory, commercial and reputational consequences for your organization (one related confidentiality control is sketched below).
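
One common control in this area, sketched below purely as a hypothetical illustration rather than a complete data loss prevention solution, is scrubbing obvious sensitive values from prompts before they reach a hosted GenAI service, so your own confidential data does not end up inside others’ models. The regex patterns are simplistic placeholders.

```python
# Minimal sketch: redact obvious sensitive values before a prompt is sent
# to a third-party GenAI service. Patterns are illustrative placeholders,
# not production-grade PII detection.
import re

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(prompt: str) -> str:
    """Replace matched sensitive values with typed placeholders."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

print(redact("Contact jane.doe@example.com, SSN 123-45-6789."))
# -> Contact [EMAIL REDACTED], SSN [SSN REDACTED].
```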

By 2026, AI models from organizations that operationalize AI transparency, trust and security will achieve a 50% improvement in adoption, business goals and user acceptance.

Source: Gartner

4. AI models and apps require constant monitoring.

  • Specialized risk management processes must be integrated into AI model operations (ModelOps) to keep AI compliant, fair and ethical.

  • There aren’t many off-the-shelf tools, so you likely need to develop custom solutions for your AI pipeline.

  • Controls must be applied continuously, for example throughout model and application development, testing and deployment, and ongoing operations; one simple production drift check is sketched below.
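
For illustration only, the sketch below shows one such continuous control you might build into a custom pipeline: a two-sample Kolmogorov-Smirnov test that flags when live input data drifts away from the training baseline. The synthetic data, the 0.01 alert threshold and the print-based alerting are all hypothetical choices.

```python
# Minimal sketch: flag input drift by comparing a live window of one
# feature against its training baseline with a two-sample KS test.
# Synthetic data; the threshold and alerting are policy choices.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
training_baseline = rng.normal(loc=0.0, scale=1.0, size=5000)
live_window = rng.normal(loc=0.4, scale=1.0, size=1000)  # shifted on purpose

statistic, p_value = ks_2samp(training_baseline, live_window)
if p_value < 0.01:
    print(f"Drift alert: KS={statistic:.3f}, p={p_value:.2e}; review model")
else:
    print("No significant drift in this window")
```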

5. Detecting and stopping adversarial attacks on AI requires new methods.

  • Malicious attacks against AI, whether homegrown or embedded in third-party models, lead to various types of organizational harm and loss: financial and reputational damage, and loss of intellectual property, personal information or proprietary data.

  • Add specialized controls and practices, beyond those used for other types of apps, for testing, validating and improving the robustness of AI workflows; a basic robustness smoke test is sketched below.
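
As a rough illustration of what such testing can look like, the sketch below perturbs inputs with small random noise and measures how often the model’s prediction flips. A real adversarial evaluation would use crafted attacks (for example, gradient-based methods against neural networks); this noise probe on invented data is only a smoke test.

```python
# Minimal sketch: measure prediction stability under small random
# perturbations as a crude robustness smoke test. Real adversarial
# testing uses targeted attacks, not random noise.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=400, n_features=10, random_state=1)
model = LogisticRegression(max_iter=1000).fit(X, y)

rng = np.random.default_rng(1)
baseline = model.predict(X)
perturbed = model.predict(X + rng.normal(scale=0.3, size=X.shape))

flip_rate = np.mean(baseline != perturbed)
print(f"Prediction flip rate under noise: {flip_rate:.1%}")
```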

6. Regulations will soon define compliance controls.

  • The EU AI Act and parallel regulatory frameworks in North America, China and India are already establishing rules to manage the risks of AI applications.

  • Be prepared to comply beyond what existing regulations, such as those for privacy protection, already require.

The story behind the research

From the desk of Avivah Litan, Gartner Distinguished VP Analyst

“Organizations that do not consistently manage AI risks are exponentially more inclined to experience adverse outcomes, such as project failures and breaches. Inaccurate, unethical or unintended AI outcomes, process errors and interference from malicious actors can result in security failures, financial and reputational loss or liability, and social harm. AI misperformance can also lead to suboptimal business decisions.”

3 things to tell your peers

1. AI TRiSM capabilities are needed to ensure the reliability, trustworthiness, security and privacy of AI models.

2. They drive better outcomes related to AI adoption, achieving business goals and ensuring user acceptance.

3. Consider AI TRiSM a set of solutions to more effectively build protections into AI delivery and establish AI governance.


Avivah Litan is a Distinguished VP Analyst in Gartner Research. Ms. Litan is currently a member of the ITL AI team that covers AI and blockchain. She is the lead Gartner analyst who started, and continues, coverage of AI trust, risk and security, which has been in strong demand since the launch of ChatGPT. Ms. Litan also helps cover the use of AI, including generative AI, in cybersecurity. She specializes in all aspects of blockchain innovation, ranging from public to private chains, use cases, security and emerging technologies. Ms. Litan has a strong background in many aspects of cybersecurity and fraud, including the integration of AI with these domains. This background is useful in her current coverage of securing and protecting AI models, applications and data, as well as blockchain applications. Her background also supports her research into integrating advanced technologies such as blockchain, IoT and AI to solve specific use cases, such as detecting fake content or goods. Before joining Gartner, Ms. Litan worked as a Director of Financial Systems at the World Bank. She also worked as a journalist and columnist for The Washington Times. She earned her Master of Science at M.I.T. and completed a general manager’s course at Harvard Business School.
