Medic and partners build tools to support the delivery of healthcare to hard-to-reach communities. Entrenching existing biases in the delivery of healthcare in such settings is a pitfall that Medic and partners must be proactive to avoid.
Indeed, much of the time that AI has made the news, it has been for disadvantaging one section of society or another. As Medic pursues the Precision Public Health agenda, and given that we cannot eliminate bias from the real world, we can at least work to keep bias out of our data and models.
AI Fairness 360 is an open-source toolkit that can be used throughout the AI application lifecycle to identify, report, and remedy bias in machine learning models. An arXiv paper on AI Fairness 360 can be found here, while an introduction to the toolkit, which provides algorithms, datasets, explainers, and fairness metrics, can be found here.
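To give a concrete sense of the fairness metrics such a toolkit reports, here is a minimal plain-Python sketch of two common group-fairness measures, statistical parity difference and disparate impact, computed over hypothetical screening data. The data, group labels, and thresholds below are illustrative assumptions, not Medic data or AIF360 code.

```python
# Hypothetical screening records as (group, outcome) pairs, where
# group 1 is the privileged group and outcome 1 is the favorable label.
records = [(1, 1)] * 8 + [(1, 0)] * 2 + [(0, 1)] * 4 + [(0, 0)] * 6

def favorable_rate(records, group):
    """Share of favorable outcomes within one group."""
    outcomes = [y for g, y in records if g == group]
    return sum(outcomes) / len(outcomes)

priv_rate = favorable_rate(records, 1)    # 8/10 = 0.8
unpriv_rate = favorable_rate(records, 0)  # 4/10 = 0.4

# Statistical parity difference: 0 means parity; negative values mean
# the unprivileged group receives the favorable outcome less often.
spd = unpriv_rate - priv_rate

# Disparate impact: ratio of rates; values below ~0.8 are a common
# rule-of-thumb red flag (the "four-fifths rule").
di = unpriv_rate / priv_rate

print(f"statistical parity difference: {spd:.2f}")  # -0.40
print(f"disparate impact: {di:.2f}")                # 0.50
```

AI Fairness 360 computes these and many other metrics directly from labeled datasets and model predictions, and pairs them with mitigation algorithms to remedy the disparities the metrics surface.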