Hello everyone,
Last week, I was privileged to participate in the Artificial Intelligence for Children in Africa workshop in South Africa. This is part of a series of workshops led by UNICEF with funding from the Government of Finland. Similar workshops were held in New York and Helsinki last year, making this the first in the Global South.
At this workshop, we had representation from the UNICEF East, South, West and Central Africa regional offices, policymakers, representatives of regulatory authorities, technology partners, researchers and students.
The objectives of this workshop were to:
- better understand the needs and challenges of governments and businesses in Africa with regard to child rights and AI policymaking and implementation.
- gather inputs on the policy guidance so that it can be most useful to governments and businesses.
- raise awareness and identify champions, including governments and companies, to drive forward the child rights agenda and pilot the guidance.
The workshop focused on three key pillars of AI for children and what these mean for policymakers and businesses. All three have relevance to Medic’s data science vision.
Protection: protecting children against intended and unintended harmful effects of AI, such as bias and discrimination.
Provision: opportunities where AI shows promise to positively impact children’s lives, such as the provision of health, education and social services. I found this pillar relevant to Medic’s predictive models pilot and took the opportunity to highlight this to participants.
Empowerment: equipping children to live in an AI world and to become responsible AI developers, ensuring positive and trustworthy use of AI.
These three pillars are guided by five principles:
- Uphold the rights of the child by ensuring AI systems are developed to respect, promote and fulfil child rights as enshrined in the Convention on the Rights of the Child.
- AI systems should prioritize children’s development and well-being.
- AI systems should empower children with the ability to own, access, securely share, understand the use of, and delete their data.
- AI systems should ensure transparency and accountability for children, for example by being transparent about how and why a particular decision was made or, in the case of a robot, why an action occurred.
- AI systems should prioritize equity and inclusion of children. This principle stood out for me, in terms of not only including children in the design process but also including children with disabilities.
In summary, there was consensus that:
- Children and their rights ought to be considered in the development and use of AI systems. I was encouraged by the emphasis on adopting a human-centered approach in developing these systems.
- Although there are data protection and privacy policies or laws at the country and organizational level, these aren’t AI- or child-specific. There was emphasis on the need for organizational guidance documents or policies for providing AI interventions @isaacholeman.
- This workshop provided concrete suggestions on what to include in the policy guidance and how children can be prioritized in AI systems. UNICEF is working on guidance for AI for children that will be launched in June this year.
We have an opportunity to include our predictive models pilot with Living Goods as a case study for the workshop report. @erika @amanda
Below are resources on AI for children that you may find useful!
UNICEF video on AI for children: UNICEF #ai4children - YouTube
AI and Child Rights Workshop Report_Helsinki_26 November 2019_FINAL.pdf (288.5 KB)