WEF paper proposes principles to prevent discriminatory outcomes in machine learning

The World Economic Forum's (WEF) Global Future Council on Human Rights recently issued a white paper providing a framework for developers to prevent discrimination in the development and application of machine learning (ML). The paper is based on research and interviews with industry experts, academics, human rights professionals and others working at the intersection of machine learning and human rights.

The paper proposes a framework for developers and businesses looking to use machine learning, built around four guiding principles: active inclusion, fairness, right to understanding, and access to redress.

Artificial intelligence systems based on machine learning are already being used to make decisions with significant, life-altering impact on people, such as hiring job applicants, granting loans and releasing prisoners on parole. Machine learning systems can help eliminate human bias in decision-making, but they can also end up reinforcing and perpetuating systemic bias and discrimination.

Concerns around opacity, data and algorithm design

Previous algorithmic decision-making systems relied on rules-based, “if/then” reasoning. But ML systems create more complex models in which it is difficult to understand why and how decisions were made.

ML systems are opaque, due both to their complexity and to the proprietary nature of their algorithms. Moreover, ML systems today are developed almost entirely by small, homogeneous teams, most often of men. The massive datasets required to train these systems are often proprietary and require large-scale resources to collect or purchase. This effectively excludes many companies, public bodies and civil society organisations from the machine learning market. Though open data is increasingly available, companies that own massive proprietary datasets continue to enjoy a clear advantage.

Training data may exclude classes of individuals who do not generate much data, such as those living in rural areas of low-income countries or those who have opted out of sharing their data. The report presents an example of how this might lead to discrimination: if a hiring application's training data suggests that people with influential or active social networks are "good" employees, it might filter out people from lower-income backgrounds, those who attended less prestigious schools, or those who are more cautious about posting on social media.

Similarly, loan applicants from rural backgrounds, with less digital infrastructure, could be unfairly excluded by algorithms trained on data points captured from more urban populations.

Data may also be biased or error-ridden. For instance, training on historical data might lead an ML system to judge women to be worse hires than men because women have historically been promoted less often, when the real reason for that gap is that workplaces have historically been biased.
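To make this concrete, the kind of historical label bias described above can be surfaced before any model is trained. The short Python sketch below is illustrative rather than anything from the paper: the field names and records are hypothetical, and the check simply compares positive-label rates (here, past promotions standing in for a "good hire" signal) across groups, where a large gap suggests the labels already encode past bias.

    from collections import defaultdict

    def label_rates_by_group(records, group_key, label_key):
        """Return the share of positive labels for each value of group_key."""
        totals = defaultdict(int)
        positives = defaultdict(int)
        for row in records:
            group = row[group_key]
            totals[group] += 1
            positives[group] += 1 if row[label_key] else 0
        return {group: positives[group] / totals[group] for group in totals}

    if __name__ == "__main__":
        # Hypothetical HR history: past promotions used as the "good hire" label.
        history = [
            {"gender": "female", "promoted": False},
            {"gender": "female", "promoted": False},
            {"gender": "female", "promoted": True},
            {"gender": "male", "promoted": True},
            {"gender": "male", "promoted": True},
            {"gender": "male", "promoted": False},
        ]
        # A large gap between groups signals that the labels reflect past bias,
        # not true differences in job performance.
        print(label_rates_by_group(history, "gender", "promoted"))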

Even if the data is good, the paper identifies five ways in which design or deployment of ML algorithms could encode discrimination: choosing the wrong model (or the wrong data); building a model with inadvertently discriminatory features; absence of human oversight and involvement; unpredictable and inscrutable systems; or unchecked and intentional discrimination.
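One of these failure modes, a model built with inadvertently discriminatory features, often arises through proxies: fields that are not protected attributes themselves but correlate strongly with one, such as a postcode that tracks ethnicity. The sketch below shows one simple way such proxies might be flagged; it is an illustration only, and the feature names, data and correlation threshold are assumptions rather than anything specified in the paper.

    import math

    def pearson(xs, ys):
        """Plain Pearson correlation between two equal-length numeric sequences."""
        n = len(xs)
        mean_x, mean_y = sum(xs) / n, sum(ys) / n
        cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
        sx = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
        sy = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
        return cov / (sx * sy) if sx and sy else 0.0

    def flag_proxy_features(features, protected, threshold=0.6):
        """Return names of features whose correlation with the protected attribute exceeds the threshold."""
        return [name for name, values in features.items()
                if abs(pearson(values, protected)) >= threshold]

    if __name__ == "__main__":
        protected = [0, 0, 0, 1, 1, 1]  # hypothetical protected-group membership
        features = {
            "postcode_bucket":  [0, 0, 1, 1, 1, 1],  # closely tracks the group: likely proxy
            "years_experience": [2, 7, 4, 5, 3, 6],  # only weakly related
        }
        print(flag_proxy_features(features, protected))  # ['postcode_bucket']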

The authors cite examples of systems that disproportionately identify people of colour as being at "higher risk" of committing a crime or re-offending, or that systematically exclude people with mental disabilities from being hired. The risks are higher in low- and middle-income countries, where existing inequalities run deeper, availability of training data is limited, and government regulation and oversight are weaker.

Four proposed principles for businesses

The paper notes that governments and international organisations have a role to play, but regulation tends to lag technological development. Even in the absence of regulation, the paper says, businesses need to integrate principles of non-discrimination and empathy into their human rights due diligence.

Under 'Active Inclusion', the paper recommends that the development and design of ML applications actively seek out a diversity of input, especially the norms and values of the specific populations affected by the output of AI systems.

The second principle proposed is ‘Fairness’. People involved in conceptualising, developing, and implementing ML systems should consider which definition of fairness best applies to their context and application, and prioritize it in the architecture of the machine learning system and its evaluation metrics.  
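The paper leaves the choice of fairness definition to those building the system. Purely as an illustration, two definitions widely used in the fairness literature, demographic parity (similar selection rates across groups) and equal opportunity (similar true positive rates across groups), can be computed directly from a model's predictions and tracked as evaluation metrics; the sketch below assumes hypothetical loan decisions and outcomes.

    def selection_rate(preds):
        """Share of cases given a positive decision."""
        return sum(preds) / len(preds)

    def true_positive_rate(preds, labels):
        """Share of genuinely positive cases that received a positive decision."""
        decisions = [p for p, y in zip(preds, labels) if y == 1]
        return sum(decisions) / len(decisions) if decisions else 0.0

    def fairness_gaps(preds_a, labels_a, preds_b, labels_b):
        """Return (demographic parity gap, equal opportunity gap) between groups A and B."""
        dp_gap = abs(selection_rate(preds_a) - selection_rate(preds_b))
        eo_gap = abs(true_positive_rate(preds_a, labels_a) - true_positive_rate(preds_b, labels_b))
        return dp_gap, eo_gap

    if __name__ == "__main__":
        # Hypothetical loan decisions (1 = approved) and outcomes (1 = repaid) for two groups.
        preds_a, labels_a = [1, 1, 0, 1], [1, 1, 0, 0]
        preds_b, labels_b = [1, 0, 0, 0], [1, 1, 0, 0]
        print(fairness_gaps(preds_a, labels_a, preds_b, labels_b))  # (0.5, 0.5)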

To ensure ‘Right to Understanding’, the involvement of ML systems in decision-making that affects individual rights must be disclosed. Also, the systems must be able to provide an explanation of their decision-making that is understandable to end users and reviewable by a competent human authority. If that is impossible and human rights are at stake, the paper states that leaders in the design, deployment and regulation of ML technology must question whether or not it should be used.
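As an illustration of what an understandable, reviewable explanation might look like, the sketch below assumes a simple linear scoring model (an assumption made for the example, not something the paper specifies) and reports each feature's contribution to the decision in plain language, ranked by how strongly it pushed the outcome.

    def explain_decision(weights, applicant, threshold):
        """Return a decision plus per-feature contributions, largest impact first."""
        contributions = {name: weights[name] * value for name, value in applicant.items()}
        score = sum(contributions.values())
        decision = "approved" if score >= threshold else "declined"
        ranked = sorted(contributions.items(), key=lambda item: abs(item[1]), reverse=True)
        lines = [f"Decision: {decision} (score {score:.2f}, threshold {threshold})"]
        for name, contribution in ranked:
            direction = "raised" if contribution >= 0 else "lowered"
            lines.append(f"- {name} {direction} the score by {abs(contribution):.2f}")
        return "\n".join(lines)

    if __name__ == "__main__":
        # Hypothetical loan-scoring weights and one applicant's normalised features.
        weights = {"income": 1.5, "existing_debt": -2.0, "years_at_address": 0.5}
        applicant = {"income": 0.6, "existing_debt": 0.8, "years_at_address": 0.4}
        print(explain_decision(weights, applicant, threshold=0.0))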

The paper also proposes that leaders, designers and developers of ML systems must make visible avenues for redress for those affected by disparate impacts, and establish processes for the timely redress of any discriminatory outputs.

To help companies adopt these principles, the paper recommends that they identify human rights risks linked to their business operations. Common standards for assessing the adequacy of training data and its potential bias could then be established and adopted through a multistakeholder approach.

The paper further proposes that companies work on concrete ways to enhance corporate governance, establishing new or augmenting existing mechanisms and models for ethical compliance. Additionally, companies should monitor their machine learning applications and report their findings, working with certified third-party auditing bodies. The results of audits should be made public, together with the company's response. The authors say that large multinational companies should set an example by taking the lead here.
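The paper does not lay out an audit methodology, but one simple monitoring check that a company or a third-party auditor might run on a deployed system is sketched below. It compares selection rates of live decisions across groups and flags any group whose rate falls below 80% of the best-treated group's rate, the "four-fifths" rule used in US employment practice; the decision log, group labels and threshold are all hypothetical.

    from collections import defaultdict

    def disparate_impact_report(decisions, min_ratio=0.8):
        """decisions: iterable of (group, selected) pairs. Returns per-group rates, ratios and flags."""
        totals, selected = defaultdict(int), defaultdict(int)
        for group, was_selected in decisions:
            totals[group] += 1
            selected[group] += 1 if was_selected else 0
        rates = {group: selected[group] / totals[group] for group in totals}
        best = max(rates.values()) or 1.0  # guard against division by zero if nothing was selected
        return {group: {"rate": rate, "ratio": rate / best, "flagged": rate / best < min_ratio}
                for group, rate in rates.items()}

    if __name__ == "__main__":
        # Hypothetical log of (group, decision) pairs from a deployed screening model.
        log = [("A", True), ("A", True), ("A", False),
               ("B", True), ("B", False), ("B", False)]
        print(disparate_impact_report(log))  # group B falls below the 0.8 ratio and is flagged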

The authors express hope that this report will advance internal corporate discussions of these topics as well as contribute to the larger public debate.

“We encourage companies working with machine learning to prioritize non-discrimination along with accuracy and efficiency to comply with human rights standards and uphold the social contract,” said Erica Kochi, Co-Chair of the Global Future Council for Human Rights and Co-Founder of UNICEF Innovation.

Nicholas Davis, Head of Society and Innovation, Member of the Executive Committee, World Economic Forum, said, “One of the most important challenges we face today is ensuring we design positive values into systems that use machine learning. This means deeply understanding how and where we bias systems and creating innovative ways to protect people from being discriminated against.”  

Read the paper here.
