
AI for everyone? A roadmap to substantive equality in AI ecosystems


In the whirlwind of artificial intelligence’s development and deployment, one fact has made itself clear: AI systems tend to mirror the world we live in, magnifying existing societal inequalities and deepening the historical marginalisation of certain groups. 

You may have heard the aphorism that AI is only as good as the data on which it’s trained — and of course, that data is usually drawn from real life, from the flawed but ever-progressing world we inhabit. The same is true for AI’s design, development, and governance, where gaps and harms of historic proportions persist. If not approached with a deliberate focus on equality, every phase of AI systems’ life cycle could exacerbate existing disparities within and between countries. 

Although many would recognise that everyone should benefit from AI development, action toward greater equality in AI ecosystems is one of the areas of AI governance that receives the least investment. There is a glaring need to establish a strong global framework to enable policymakers to achieve gender equality and diversity in AI. 


Insights and recommendations for transformative AI policy

To address the root causes of AI inequalities, we need to reverse the historical exclusion of individuals and communities. To this end, the Global Partnership on AI (GPAI) Responsible AI Working Group, supported by Mila – Quebec Artificial Intelligence Institute and CEIMIA, has produced a report, “Towards Substantive Equality in Artificial Intelligence: Transformative AI Policy for Gender Equality and Diversity,” and an accompanying policy guide to help policymakers implement its recommendations.

Based on extensive consultations with more than 200 participants from over 50 countries and a diverse array of communities, identities, and fields of expertise, the report and policy guide call for substantive equality in AI. Together, they provide policy insights, examples of promising practices, and actionable recommendations, as well as a step-by-step roadmap for implementing these recommendations. All of these are based on a human-rights-based framework focusing on gender equality and diversity. 

Let’s look at some of the report and policy guide’s key recommendations. 

Levelling the playing field: Inclusive design and democratic innovation

When thinking about the inclusive development of AI, we need a paradigm shift that moves beyond simply “adding” marginalised groups to AI discussions. Instead, we should decentre AI itself and centre communities through inclusive design and democratic innovation practices, directly addressing the systemic disadvantages that prevent women and other marginalised groups from participating in the development of the technologies they need. 

Policymakers can contribute to meaningful inclusion in tangible ways, such as investing in capacity building for institutions, allowing the processing of special data categories, or funding transformative technology research and design.

One illustration of inclusive design in action is the Feminist AI Research Network, an initiative of nearly 100 feminist AI academics, activists and practitioners from different fields championing multidisciplinary knowledge sharing and feminist innovation worldwide. It’s both a community-driven approach and a technologically advanced, innovative one. The network provides new data, algorithms, models, policies and systems that can be used to correct real-life harm and barriers to women’s and other marginalised groups’ rights, representation and equality. 

From a policy perspective, it is crucial to fund and support such initiatives and, by extension, the development and implementation of inclusive AI systems and processes. We must create the conditions for traditionally excluded people and communities to meaningfully participate as central actors in AI. Marginalised communities’ active, intentional involvement in every phase of AI development, deployment, and governance is essential to ensuring that AI benefits all.

Who gets a say? Meaningful participation in AI governance

Simply having access to AI doesn’t necessarily mean that women and other marginalised groups get to participate in shaping its development and governance. A human rights-centred approach to AI demands that people have a real say in developing and deploying technologies that can deeply impact their lives. Decisions about AI impact everyone, so those decisions must reflect the values and priorities of all communities, particularly those historically excluded. This requires active public engagement, capacity-building for marginalised groups, and legal protections for public participation rights and collective data rights.

Historically, systemic barriers have limited marginalised communities’ career growth, skill development, and community awareness of AI and other emerging technologies. These barriers result from economic inequalities, limited access to training opportunities, inadequate digital infrastructure, and cultural barriers. Thankfully, many initiatives are working to remove these barriers to democratic innovation and meaningful participation in AI governance.

One such initiative, the Indigenous Pathfinders in AI program, led by Mila – Québec AI Institute in partnership with Indspire, is a career pathway initiative that creates the conditions for Indigenous talent to shape the future of AI. Centred around Indigenous worldviews and values in AI development, the programme enables Indigenous communities to drive AI development in ways that benefit them. 

The lessons learned from this initiative are important for understanding how Indigenous communities and cultures worldwide can uniquely contribute to developing AI technologies that reflect their worldviews. Policymakers can work towards more inclusive AI governance by holding awareness and consultation sessions with marginalised groups to understand their unique priorities and needs and develop policies to address them. Equipped with these perspectives, they can then fund and support educational, professional, and financial initiatives that allow marginalised communities to meaningfully participate and lead within AI ecosystems and develop their own AI technologies.

Building trust: Transparency, accountability, and access to justice

In the current AI landscape, there is a critical need for transparency, accountability and access to justice in AI-related processes and decision-making. Ensuring robust frameworks are in place to prevent harm and discrimination from AI is crucial. Yet, to establish trust in AI, and to harness its benefits, there must also be mechanisms in place for when such frameworks fail. If communities are excluded, harmed or discriminated against by AI systems — inadvertently or intentionally — they must have legal recourse and means of taking action to redress harm. 

Transparency and accountability contribute to substantive equality by allowing us to audit the systems in place. If we can publicly scrutinise AI systems and processes, detect biases, and hold private and public providers accountable for harmful impacts, we can have far better visibility on the issues at hand. That, in turn, will allow us to advocate for equality and correct structural exclusion in an informed, cohesive way. The report and policy guide outline several recommendations to improve transparency and accountability. These include guaranteeing the right to information, enhancing algorithmic transparency, conducting human rights impact assessments, and establishing public procurement guidelines across the AI lifecycle. 

A practical example we can learn from is the Global Index on Responsible AI. This initiative analyses countries’ AI policies and national commitments to promoting inclusivity, accountability, and ethics in AI. The comprehensive, collaborative tool and its benchmarks then provide representative data to policymakers, researchers and journalists, allowing them to track and measure countries’ progress in defending human rights in AI. In turn, policymakers everywhere can use these benchmarks to design regulations that mandate transparency in AI operations and outline accountability mechanisms to ensure a responsible, inclusive use of AI. Once the data are turned into insights, it’s time to move to action.

A roadmap towards substantive equality in AI ecosystems

To ensure that AI benefits everyone, AI systems must be designed inclusively, governed democratically, and equipped with methods of accountability and justice to redress potential harms. With an impressive array of initiatives championing these causes, it is clear that governments are responsible for developing transformative AI policies to ensure that the AI ecosystem does not reproduce inequalities within and across countries. So now it’s time to move to what may be the toughest part of all — taking action. 

Many actors may rightfully feel unsure how to proceed with this nascent and rapidly evolving technology. This is where a step-by-step roadmap can come in handy, like the one provided in the Policy Guide for Implementing Transformative AI Policy Recommendations.

Figure 1: A step-by-step roadmap for the implementation of transformative AI policy

Putting these policy insights into practice and integrating them into global regulations and legislation will require much coordination. Institutions around the world will need to invest in these areas to enable just AI. That could take a number of forms: investing in their own capacity as well as in capacity-building for others; investing in people, in training, in creating institutions; or outlining the rights and responsibilities institutions have to enable access to justice. Public regulatory sandboxes are also useful tools for safe innovation, allowing iterative testing that balances technological experimentation with the need for clear regulatory boundaries.

Granted, there is no one-size-fits-all solution for more responsible, equitable and inclusive AI ecosystems. The recommendations in the GPAI report and policy guide are meant to be adapted and applied across diverse geographical, social, cultural, historical, economic, legal, and political contexts. The actions stakeholders can realistically take will vary enormously depending on legal and regulatory frameworks, technical capacities, resource availability and stakeholder cooperation, and other community-specific obstacles to implementation.

We must support local initiatives while promoting international collaboration to realise this vision. Engaging with diverse global stakeholders is necessary to develop inclusive policies that can adapt to the rapid pace of AI advancements. Therefore, global actors must seek out and amplify the voices of the global majority and ensure that they form an integral part of AI governance. By prioritising equity and inclusion, we can harness AI’s power to create a more just and equitable future for all.



Disclaimer: The opinions expressed and arguments employed herein are solely those of the authors and do not necessarily reflect the official views of the OECD or its member countries. The Organisation cannot be held responsible for possible violations of copyright resulting from the posting of any written material on this website/blog.