Catalogue of Tools & Metrics for Trustworthy AI

These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.

Type

Transparency & explainability

Clear all

Origin

Scope

SUBMIT A TOOL

If you have a tool that you think should be featured in the Catalogue of AI Tools & Metrics, we would love to hear from you!

SUBMIT
Objective Transparency & explainability

TechnicalProceduralUnited StatesUploaded on Dec 6, 2024
Vectice is a regulatory MLOps platform for AI/ML developers and validators that streamlines documentation, governance, and collaborative reviewing of AI/ML models. Designed to enhance audit readiness and ensure regulatory compliance, Vectice automates model documentation, from development to validation. With features like automated lineage tracking and documentation co-pilot, Vectice empowers AI/ML developers and validators to work in their favorite environment while focusing on impactful work, accelerating productivity, and reducing risk.

ProceduralUploaded on Nov 7, 2024
Trustworthy AI Procurement CardTM is a non-exhaustive list of information which can accompany acquisition decisions. The Card is similar to the DataSheets or Model Cards, in the sense that the objective is to promote transparency and better due diligence during AI procurement process.

EducationalUnited StatesUploaded on Nov 6, 2024<1 day
The deck of 50 Trustworthy AI Cards™ correspond to 50 most relevant concepts under the 5 categories of: Data, AI, Generative AI, Governance, and Society. The Cards are used to create awareness and literacy on opportunities and risks about AI, and how to govern these technologies.

ProceduralSingaporeUploaded on Oct 2, 2024
Resaro offers independent, third-party assurance of mission-critical AI systems. It promotes responsible, safe and robust AI adoption for enterprises, through technical advisory and evaluation of AI systems against emerging regulatory requirements.

ProceduralUnited KingdomUploaded on Oct 2, 2024
Warden AI provides independent, tech-led AI bias auditing, designed for both HR Tech platforms and enterprises deploying AI solutions in HR. As the adoption of AI in recruitment and HR processes grows, concerns around fairness have intensified. With the advent of regulations such as NYC Local Law 144 and the EU AI Act, organisations are under increasing pressure to demonstrate compliance and fairness.

EducationalUnited StatesUploaded on Nov 5, 2024
Community jury is concept where multiple stakeholders impacted by a same technology are given the possibility to learn about a project, discuss with one another and provide feedback.

TechnicalFranceUploaded on Aug 2, 2024
Evaluate input-output safeguards for LLM systems such as jailbreak and hallucination detectors, to understand how good they are and on which type of inputs they fail.

TechnicalUploaded on Aug 2, 2024
Responsible AI (RAI) Repairing Assistant

ProceduralNew ZealandUploaded on Jul 11, 2024
The Algorithm Charter for Aotearoa New Zealand is a set of voluntary commitments developed by Stats NZ in 2020 to increase public confidence and visibility around the use of algorithms within Aotearoa New Zealand’s public sector. In 2023, Stats NZ commissioned Simply Privacy to develop the Algorithm Impact Assessment Toolkit (AIA Toolkit) to help government agencies meet the Charter commitments. The AIA Toolkit is designed to facilitate informed decision-making about the benefits and risks of government use of algorithms.

ProceduralUploaded on Jul 2, 2024
BSI Flex 1890 defines terms, abbreviations, and acronyms for the connected and automated vehicles (CAVs) sector, focused on those relating to vehicles and associated technologies.

ProceduralUploaded on Jul 2, 2024
This DIN DKE SPEC defines guidelines for the labelling of training data for QA systems and specifies the characteristics of labels.

ProceduralUploaded on Jul 2, 2024
The purpose of the present document is to provide information on different types of AI mechanisms that can be used for cognitive networking and decision making in modern system design, including natural language processing.

ProceduralUploaded on Jul 2, 2024
This Recommendation presents an overview of the framework for a language learning system based on speech and natural language processing (NLP) technology.

ProceduralUploaded on Jul 2, 2024
This document establishes an Artificial Intelligence (AI) and Machine Learning (ML) framework for describing a generic AI system using ML technology.

ProceduralUploaded on Jul 2, 2024
This DIN SPEC (PAS) defines requirements for the development of deep learning image recognition systems. This document gives the conditions, under which image recognition problems can be processed with the help of a Deep-Learning-System.

ProceduralUploaded on Jul 2, 2024
This guide is published to create clarity for individuals involved with Software-Based Intelligent Process Automation products so that industry participants may rely on a product manufacturer's functionality claims and understand the underlying technological methods used to produce those functions.

ProceduralUploaded on Jul 2, 2024
The AI development interface, AI model interoperable representation, coding format, and model encapsulated format for efficient AI model inference, storage, distribution, and management are discussed in this standard.

ProceduralUploaded on Jul 2, 2024
This standard specifies an architecture and technical requirements for face recognition systems.

ProceduralUploaded on Jul 2, 2024
This standard provides a framework to help developers of autonomous systems both review and, if needed, design features into those systems to make them more transparent. The framework sets out requirements for those features, the transparency they bring to a system, and how they would be demonstrated in order to determine conformance with this standard.

ProceduralUploaded on Jul 2, 2024
This document surveys topics related to trustworthiness in AI systems.

catalogue Logos

Disclaimer: The tools and metrics featured herein are solely those of the originating authors and are not vetted or endorsed by the OECD or its member countries. The Organisation cannot be held responsible for possible issues resulting from the posting of links to third parties' tools and metrics on this catalogue. More on the methodology can be found at https://oecd.ai/catalogue/faq.