How should we evaluate AI in global health?

This talk has the very bold title, “Can We Please Stop Talking about AI for Health?” but carries a really important message. AI/machine learning systems are complex and expensive to build and maintain, yet they are often built on the promise of improved care and efficiency. Too often, however, they are evaluated only on intermediate outcomes. These outcomes are useful initial indicators of a system’s success, but they can be taken as sufficient evidence for adopting AI technology even when the more important, longer-term outcomes (e.g., mortality, total cost of treatment) are never assessed. This raises the very important question of how we balance the ethics of ensuring the safety and efficacy of new technology against withholding potentially beneficial technology from the very people it might help. I’d love to hear thoughts from the community on this issue.