Catalogue of Tools & Metrics for Trustworthy AI

These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.

Objective: Human wellbeing

Educational · United Kingdom · Uploaded on Dec 9, 2024
Newton’s Tree’s Federated AI Monitoring Service (FAMOS) is a dashboard for real-time monitoring of healthcare AI products. It is designed to let users monitor the quality of the data that goes into the AI, changes to the AI’s outputs, and developments in how healthcare staff use the product.

Related lifecycle stage(s): Operate & monitor, Deploy

Procedural · United Kingdom · Uploaded on Oct 2, 2024
Warden AI provides independent, tech-led AI bias auditing, designed for both HR Tech platforms and enterprises deploying AI solutions in HR. As the adoption of AI in recruitment and HR processes grows, concerns around fairness have intensified. With the advent of regulations such as NYC Local Law 144 and the EU AI Act, organisations are under increasing pressure to demonstrate compliance and fairness.

Procedural · Uploaded on Jul 2, 2024
This document lists examples of and defines categories of use cases for machine learning in medicine for clinical practice.

Procedural · Uploaded on Jul 2, 2024
This standard measures the impact of artificial intelligence or autonomous and intelligent systems (A/IS) on humans.

Procedural · Uploaded on Jul 2, 2024
This document provides a high-level overview of AI ethical and societal concerns.

Procedural · Uploaded on Jul 1, 2024
This standard identifies the core requirements and baseline for AI solutions in health care to be deemed trustworthy.

Educational · United Kingdom · Australia · Uploaded on Apr 30, 2024
A toolkit for technology-makers interested in improving their technologies by applying wellbeing psychology to design.

Related lifecycle stage(s): Plan & design

Technical · Procedural · United States · Japan · Uploaded on Apr 19, 2024
Diagnoses bias in large language models (LLMs) from various points of view, allowing users to choose the most appropriate LLM.

Related lifecycle stage(s): Plan & design

Educational · Uploaded on Apr 2, 2024 · <1 hour
Approaches to disability-centered data, models, and systems oversight.

Procedural · Brazil · Uploaded on Mar 14, 2024
Ethical Problem Solving (EPS) is a framework to promote the development of safe and ethical artificial intelligence. EPS is divided into an evaluation stage (performed via Algorithmic Impact Assessment tools) and a recommendation stage (the WHY-SHOULD-HOW method).

Technical · Educational · Procedural · United States · Uploaded on Jan 17, 2024
This document provides risk-management practices and controls for identifying, analyzing, and mitigating the risks of large language models and other general-purpose AI systems (GPAIS) and foundation models. It facilitates conformity with, or use of, leading AI risk-management standards, adapting and building on the generic voluntary guidance in the NIST AI Risk Management Framework and ISO/IEC 23894, with a focus on the unique issues faced by developers of GPAIS.

Uploaded on Dec 14, 2023
Our work enables developers and policymakers to anticipate, measure, and address discrimination as language model capabilities and applications continue to expand.


Procedural · Uploaded on Oct 26, 2023
The Playbook provides suggested actions for achieving the outcomes laid out in the AI Risk Management Framework (AI RMF) Core (Tables 1–4 in AI RMF 1.0). Suggestions are aligned to each sub-category within the four AI RMF functions (Govern, Map, Measure, Manage).

Procedural · Uploaded on Oct 26, 2023
The AIRC supports all AI actors in the development and deployment of trustworthy and responsible AI technologies. It supports and operationalizes the NIST AI Risk Management Framework (AI RMF 1.0) and the accompanying Playbook, and will grow with enhancements to enable an interactive, role-based experience providing access to a wide range of relevant AI resources.

Procedural · Uploaded on Oct 26, 2023
The goal of the AI RMF is to offer organizations designing, developing, deploying, or using AI systems a resource to help manage the many risks of AI and to promote trustworthy and responsible development and use of AI systems.

Procedural · United Kingdom · Uploaded on Oct 6, 2023
This report proposes a model for Equality Impact Assessment of AI tools. It builds on research finding that existing fairness and bias auditing solutions are inadequate for ensuring compliance with UK equalities legislation.

Educational · United Kingdom · Uploaded on Oct 6, 2023
A toolkit for employers and workers seeking to understand the challenges and opportunities of using algorithmic systems that make or inform decisions about workers.

Procedural · Uploaded on Oct 6, 2023
Guidance aimed at encouraging employee-led design and development of algorithmic systems in the workplace.

Educational · United Kingdom · Uploaded on Oct 6, 2023
Provides the regulatory framework for incorporating rights, freedoms, and obligations relevant to work and people's experience of it, including technology-specific guidance.

Disclaimer: The tools and metrics featured herein are solely those of the originating authors and are not vetted or endorsed by the OECD or its member countries. The Organisation cannot be held responsible for possible issues resulting from the posting of links to third parties' tools and metrics on this catalogue. More on the methodology can be found at https://oecd.ai/catalogue/faq.