Computers can only learn fairness from humans
Dr. Nuria Oliver is a computer scientist with over 20 years of research experience in Artificial Intelligence (AI), Human-Computer Interaction and Mobile Computing. She is Director of Research in Data Science for Vodafone Group, the first Chief Data Scientist at Data-Pop Alliance, and advises both the Spanish government and the European Commission on strategic issues related to AI.
We caught up with Nuria to ask her views on the way that AI is used to make an increasing number of everyday decisions.
I have an optimistic view of how AI can help Gigabit societies…
AI can help us make everyday decisions in a fair, transparent, ethical and accountable manner – from approving a mortgage or deciding on a medical treatment to creating a shortlist for jobs.
AI can avoid decisions driven by emotion, corruption or a conflict of interest, but it still has limitations.
In her book, Weapons of Math Destruction, Cathy O’Neil gives examples of how the algorithms that power AI can be deliberately or accidentally biased.
There are numerous efforts to develop ethical frameworks for AI decision making.
Institutes and research centres like the Digital Ethics Lab in Oxford or the AI Now Institute at NYU, and initiatives like the High-Level Expert Group on AI by the European Commission – with which I collaborate – are great examples. However, they need greater support, impact and visibility.
We need a wider debate about the ethics of different AI-based decisions…
…and developers and professionals who work on data-driven algorithms should behave according to a clear code of ethics and conduct, defined by the organisations where they work, as we do at Vodafone.
The data AI uses should be as unbiased as possible.
Many of the AI systems that we use today are trained with massive amounts of data. However, even more important than the quantity of the data is its quality. For example, if the data that we use to train the models is biased, the resulting models might be biased as well.
Data-driven decision-making systems could also discriminate in an implicit way…
…for example, they could use race as a deciding factor when reviewing loan applications – even if that variable is deliberately left out – because of the high correlation between race and other factors, such as where a person lives.
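This proxy effect can be seen in a small simulation. The sketch below uses hypothetical synthetic data in which group membership (a protected attribute) correlates strongly with postcode; a decision rule that never reads the protected attribute still produces very different approval rates across groups. The group labels, postcodes and approval rule are all invented for illustration:

```python
import random

random.seed(0)

# Hypothetical synthetic data: the protected attribute (group) correlates
# strongly with postcode, which the decision rule IS allowed to see.
applicants = []
for _ in range(10_000):
    group = random.choice(["A", "B"])
    # 90% of group A live in postcode 1; 90% of group B in postcode 2.
    postcode = 1 if (group == "A") == (random.random() < 0.9) else 2
    applicants.append((group, postcode))

def approve(postcode):
    # A "fair-looking" rule: the protected attribute was deliberately
    # left out, but postcode acts as a proxy for it.
    return postcode == 1

def approval_rate(group):
    members = [(g, p) for g, p in applicants if g == group]
    return sum(approve(p) for _, p in members) / len(members)

print(f"approval rate, group A: {approval_rate('A'):.2f}")  # close to 0.90
print(f"approval rate, group B: {approval_rate('B'):.2f}")  # close to 0.10
```

Even though the rule never touches the protected attribute, the outcome is almost as discriminatory as if it had, because the attribute is redundantly encoded in a correlated variable.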
This principle of fairness also includes the need for co-operation.
We need co-operation between experts in different disciplines, between the private and the public sector and even between countries.
We humans should always have the ability to decide our own thoughts and actions.
Autonomy is a central value in Western ethics, and it must continue to be respected when we interact with intelligent systems that can model and predict our behaviour well enough to nudge it in a particular direction.
Accountability is also vital.
People need clarity about who is responsible for the consequences of a decision made by an automated system. We need an appeals mechanism just as there is when we are unhappy with a product or service.
AI needs to work for and with people.
My positive view is that AI systems should be used to augment or complement our human intelligence.
We have a right to understand why and how AI is being used to make decisions.
AI systems should be transparent and provide that explanation in a way that is easy for the average citizen to understand.
In addition, it should be made clear when humans are interacting with artificial systems (e.g. chat bots) as opposed to with other humans.
The teams working on AI need to be more diverse.
For example, the percentage of women in technical positions at technology companies, and of female computer science students, has declined significantly since the 1980s and is below 20% in many countries. At Vodafone we are very focused on diversity in the company overall, and particularly in the Data Science and AI teams.
Governments should invest heavily in digital literacy and emotional intelligence programmes.
Everyone in society, and particularly children and teenagers, should have the necessary knowledge to be active contributors to the highly technological world of tomorrow. We will only be able to make informed decisions about new technologies if we understand them. Moreover, in order to be able to cope with the constant change brought by technological progress, we will need to devote effort to nurturing our social and emotional intelligences.
Privacy and data protection are vital.
Ownership and control of data should be in the hands of the people who generate it. AI systems must be reliable, secure and reproducible, and developers should exercise prudence to minimise any unintended negative consequences.
‘Living labs’ are an interesting instrument for experimenting with and co-creating AI-based systems so that they are human-centric.
I believe in a world where we stay in control of the technology that we use…
…and where technology is designed to improve our quality of life, both individually and collectively. I think we’ll end up co-operating with AI systems that complement us rather than replace us. It is probably the only way we have to achieve fairer societies and to tackle the complex, pressing challenges that we face as a species, such as climate change and ageing populations.