UNESCO’s global agreement on the ethics of AI can guide governments and companies alike
By Gabriela Ramos and Ritva Koukku-Ronde
Artificial intelligence (AI) is more present in our lives than ever: from predicting what we want to see as we scroll through social media to helping us understand weather patterns to manage agriculture, AI is ubiquitous. Even the rapidity with which therapeutics and vaccines were developed to tackle COVID-19 can be partially credited to AI algorithms that crunched complex data from clinical trials under way in all corners of the world, creating global collaborations that could not have been imagined even a decade ago.
But AI-related technology is not always beneficial. The data fed into AI systems often fail to represent the diversity and plurality of our societies, producing outcomes that are biased or discriminatory. For instance, while India and China together constitute approximately a third of the world's population, Google Brain estimated that they account for just 3% of the images in ImageNet, a widely used dataset. Another often cited example is facial recognition technology, used to access our mobile phones, bank accounts and apartment buildings, and increasingly employed by law enforcement authorities, which frequently struggles to identify women and darker-skinned people accurately. For three such programs released by major technology companies, the error rate was only 1 per cent for light-skinned men, but 19 per cent for dark-skinned men, and up to a staggering 35 per cent for dark-skinned women. Biases in face-recognition technologies have already led to wrongful arrests.
These challenges come as no surprise when you look at how AI is developed. Only one in ten software developers worldwide is a woman, and developers continue to come overwhelmingly from Western countries.
These issues are of particular importance to India, one of the world’s largest markets for AI-related technologies, valued at over US$7.8 billion in 2021. Indeed, the National Strategy on Artificial Intelligence (NSAI) released by Niti Aayog in 2018 highlights the massive potential of AI to solve complex social challenges faced by Indian citizens in areas such as agriculture, health, and education, in addition to the significant economic returns that AI-related technologies are already creating.
To ensure that the full potential of these technologies is realized, the right incentives for ethical AI governance need to be established in national and sub-national policy. India has made great strides in the development of responsible and ethical AI governance, from Niti Aayog’s #AIForAll campaign to the many corporate strategies that seek to place common, humanistic values at the core of AI development.
However, until recently, there was no common global strategy to take forward this important agenda. This changed last November, when 193 countries reached a groundbreaking agreement at UNESCO on how AI should be designed and used by governments and tech companies. UNESCO’s Recommendation on the Ethics of Artificial Intelligence took two years to put together and involved thousands of online consultations with people from a diverse range of social groups. It aims to fundamentally shift the balance of power between people and the businesses and governments developing AI. Indeed, if the business model behind these technologies does not change to place human interests first, inequalities will grow to a magnitude never before experienced in history; access to the raw material that is data is key.
Countries that are members of UNESCO – and this is nearly every nation in the world – have agreed to implement this Recommendation by taking action to regulate the entire AI system life cycle, from research, design and development to deployment and use.
This means they must use affirmative action to make sure that women and minority groups are fairly represented on AI design teams. Such action could take the form of quota systems that ensure these teams are diverse, or of dedicated funds from public budgets to support inclusion programmes.
The Recommendation also underscores the importance of the proper management of data, privacy and access to information. It establishes the need to keep the control over data in the hands of users, allowing them to access and delete information as needed. It also calls on Member States to ensure that appropriate safeguards for the processing of sensitive data and effective accountability schemes are devised, and to provide redress mechanisms in the event of harm, which takes enforcement to the next level.
Additionally, the Recommendation addresses the broader socio-cultural impacts of AI-related technologies, taking a strong stance that AI systems should not be used for social scoring or mass surveillance; that particular attention must be paid to the psychological and cognitive impact these systems can have on children and young people; and that Member States should invest in and promote not only digital, media and information literacy skills but also socio-emotional and AI ethics skills to strengthen critical thinking and competencies in the digital era. All of this is critical for ensuring that the accountability and transparency of AI-related technologies are promoted, underpinning a strong rule of law that adapts to new digital frontiers.
In a number of countries, the principles of the Recommendation are already being used in AI regulation and policy, demonstrating their practical viability. Finland provides an example of good practice in this regard: its 2017 AI Strategy, the first of its kind in any European country, demonstrates how governments can effectively promote ethical AI use without compromising the desire to be at the cutting edge of new technologies.
The new agreement is broad and ambitious. It is a recognition that AI-related technologies cannot continue to operate without a common rulebook. Over the coming months and years, the Recommendation will serve, first, as a compass to guide governments and companies in voluntarily developing and deploying AI technologies that conform with the commonly agreed principles it establishes – similar moves followed UNESCO’s declaration on the human genome, which set out norms for genetic research. Second, it is hoped that governments will use the Recommendation as a framework to establish and update legislation, regulatory frameworks, and policy so as to embed humanistic principles in enforceable accountability mechanisms. To accompany countries in realizing the full potential of AI, and to build the institutional capacity of countries and all relevant stakeholders, UNESCO is in the process of developing tools to help countries assess their readiness to implement the Recommendation and to identify, monitor and assess the benefits, concerns and risks of AI systems.
With this agreement, we are confident of putting AI to work where it can have the most impact on the world’s greatest challenges: hunger, environmental crises, inequalities and pandemics. We are optimistic we have built the momentum for real change.
Ms Gabriela Ramos is the Assistant Director-General for Social and Human Sciences, UNESCO and Ms Ritva Koukku-Ronde is the Ambassador of Finland to India and Bangladesh. UNESCO is a member of Team UN in India, together helping deliver on the Sustainable Development Goals.