AI trustworthiness: Is the biggest AI challenge cultivating trust?

Elham Tabassi, Chief of Staff, Information Technology Laboratory at the National Institute of Standards and Technology (NIST) in the U.S., says that cultivating AI trustworthiness may well be the biggest challenge to date

Dramatic advances in generative artificial intelligence (AI) over the past year – and their rapid availability in products and services – have catapulted AI into the public’s imagination with images and predictions that are both enormously promising and deeply concerning.

Along with their potential to bring about extraordinary good, AI technologies can also bring cascading negative consequences and harms to individuals, society, and the planet if proper safeguards are not in place.

Building AI trustworthiness

At the heart of many discussions about AI and its implications is a single word: trustworthiness. That is where the National Institute of Standards and Technology (NIST) is making major contributions, always working closely with both the private and public sectors.

As a federal laboratory focused on driving U.S. innovation and supporting economic security, NIST has a broad research portfolio and a long-standing reputation for cultivating trust in technology. AI designers, developers, users, and evaluators are turning to NIST to deliver much-needed measurements, standards, tools, and expertise at a time when the measurement science behind AI is still very nascent.

Recent releases of powerful large language models demonstrate that AI technologies are advancing much faster than the standards, benchmarks, policy, governance, or accountability mechanisms necessary to keep them trustworthy.

Even defining what AI trustworthiness looks like has been a challenge. Working closely with the broad community of AI actors, NIST arrived at generally agreed-upon characteristics that serve as the building blocks of AI trustworthiness. We did this as part of our work to develop the NIST Artificial Intelligence Risk Management Framework (AI RMF 1.0).

Technical standards that accelerate innovation

The Framework identifies these key characteristics of trustworthy AI systems: valid and reliable, safe, secure and resilient, privacy-enhanced, explainable and interpretable, accountable and transparent, and fair with harmful bias managed.

That’s an important start. But there is an urgent need for clear specifications (standards) of what to measure for trustworthy AI, and for standardized methodologies and metrics that define how to measure it.

Technical standards can set the right safeguards that accelerate the pace of innovation in a safe and trusted manner. They can do so by defining common vocabularies, establishing the socio-technical characteristics of trustworthy AI, and developing metrics and testbeds to validate, verify, and evaluate AI systems. Without these tools, there is no reliable way to assess the consequences of AI technologies.
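To make the idea of a standardized metric concrete, the short Python sketch below computes one widely discussed fairness measure, the demographic parity difference, over a model’s predictions. It is purely illustrative: it is not a NIST-specified measurement, and the group labels and toy predictions are assumptions made only for the example.

# Illustrative only: demographic parity difference as one example of a
# quantitative fairness metric (not a NIST-specified measurement).
from collections import defaultdict

def demographic_parity_difference(predictions, groups):
    """Return (gap, per-group rates), where gap is the difference between
    the highest and lowest positive-prediction rates across groups.
    A gap of 0.0 means all groups receive positive outcomes at the same rate."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred)
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Toy data: binary predictions for applicants from two hypothetical groups.
preds  = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

gap, rates = demographic_parity_difference(preds, groups)
print(f"Positive-prediction rates by group: {rates}")
print(f"Demographic parity difference: {gap:.2f}")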

Advance trustworthy approaches to AI

NIST has been working with the AI community in the U.S. and abroad to provide scalable, research-based methods to manage AI risks and advance trustworthy approaches to AI that serve all people in responsible, equitable, and beneficial ways.

The AI RMF is a case in point. Called for by congressional statute and released in January 2023, the NIST Framework provides a voluntary, practical resource to organizations designing, developing, deploying, or using AI systems to help manage the many risks of AI and promote trustworthy and responsible development and use of AI systems.

A companion AI RMF Playbook offers approaches to help organizations put the Framework in place. Already, organizations are using the Framework, and work has begun on guidance, including ‘profiles,’ which illustrate how organizations or entire sectors can use the AI RMF. We also launched the NIST Trustworthy and Responsible AI Resource Center, a one-stop shop for foundational content, technical documents, and AI toolkits. It also provides a common forum for AI actors to engage and collaborate in developing and deploying trustworthy and responsible AI technologies and standards.

NIST launched the Generative AI Public Working Group in June 2023, with a short-term goal of developing a profile describing how the AI RMF may be used to support generative AI technologies.

Cultivating trust in AI: looking ahead

There is much more to be done to cultivate trust in AI. AI systems are socio-technical in nature, meaning they are a product of the complex human, organizational, and technical factors involved in their design, development, and use. Many trustworthy AI characteristics – such as bias, fairness, interpretability, and privacy – are directly connected to societal dynamics and human behavior.

AI risks and benefits can emerge from the interplay of technical aspects combined with socio-technical factors related to how a system is used, its interactions with other AI systems, who operates it, and the social context into which it is deployed. To that end, measurements beyond computational and system accuracy and functionality are needed to evaluate or assess the risk and impact of AI systems.

Effective AI risk management requires senior-level organizational commitment, and it may take cultural change for that commitment to translate into AI trustworthiness.

Strengthen AI measurement science

But there is no substitute for the work that must be done to advance and strengthen measurement science for AI. NIST will continue to work with the community to conduct research and develop technically sound standards, interoperable evaluations and benchmarks, and usable practice guides.

We invite you to learn more through the NIST Trustworthy and Responsible AI Resource Center and to contact us at ai-inquiries@nist.gov.
