Using AI as a Tool for Reducing the Clinical Burden in Cancer Care in Sub-Saharan Africa
Some countries in sub-Saharan Africa report just one oncologist per 3,000 cancer patients, a stark contrast to the United States, where the ratio is closer to 1 per 300 in urban centers and 1 per 1,000 in rural areas. These disparities reflect the limited clinical capacity available to meet the region’s growing cancer burden. The shortage affects every step of cancer treatment, from early detection and accurate diagnosis to follow-up care.
Mathida Ngamsiripol, a Master of Public Health (MPH) student in the University of Washington Department of Epidemiology, recently completed her MPH Practicum exploring how artificial intelligence (AI) might support cancer care in regions with severe healthcare workforce shortages. For this project, Mathida investigated whether AI could help close this gap by enhancing oncologists’ ability to serve more patients more effectively. Her practicum, conducted with the health technology startup Hurone AI, focused on ethical, practical, and cultural considerations of using AI in oncology across Kenya, Nigeria, and Rwanda. Mathida, who will complete her master’s program in June 2025, hopes her research will contribute to more equitable, effective, and culturally sensitive applications of emerging technology.

To understand the real-world implications of AI in these settings, Mathida conducted 10 semi-structured interviews with oncologists and policymakers in Kenya, Nigeria, and Rwanda. Through qualitative analysis of these interviews, Mathida identified several key themes: the challenges of oncology care, current uses of AI in cancer care, cultural and clinical contexts, risk concerns around AI, challenges of adopting new technology, the payoff of AI integration, and implementation and responsibility.
A key theme in Mathida’s research was the importance of cultural and contextual sensitivity. AI tools trained on patient data from the United States or Europe may miss critical nuances in other parts of the world, leading to algorithmic bias. For instance, in the United States, symptoms like unexplained weight loss and persistent fever might prompt a cancer screening. But in Kenya or Nigeria, those same symptoms are more likely to be linked to common infectious diseases such as malaria. An AI system that doesn’t account for this local context could cost lives rather than save them.

Language and literacy barriers also present challenges. Many patients in Mathida’s study regions speak languages not represented in standard medical databases and may have limited literacy. In Nigeria, roughly 80% of the population speaks one of three major languages, while the remaining 20% speak as many as 500 other languages. Although the national language is English, Mathida pointed out, “stakeholders felt it was crucial to incorporate the three major languages into the AI tool” in order to serve more people within the region. In addition, features like audio-based care instructions and support for multiple languages could help ensure AI tools are useful and accessible to all patients.
When analyzing the interviews, Mathida was struck by the unexpected openness toward AI in cancer care among the practitioners and policymakers she spoke with. “As someone with the privilege of higher education and living in the global north, I went into my practicum thinking that there would actually be more apprehension towards the uptake of AI, considering certain legal, ethical, and operational barriers,” Mathida explains. The data from the interviews told a different story. Far from being hesitant, many of the professionals she spoke with saw real promise in AI, especially as a tool to reduce workload, improve communication with patients, and extend care to rural or underserved populations.
The oncologists she interviewed raised vital questions about data privacy, accountability, and fears of misdiagnosis. They also emphasized the need for clear communication about how AI decisions are made, a challenge with “black box” models that don’t reveal their reasoning. Mathida adds, “I think it is important that we as public health practitioners think very critically about the role of artificial intelligence, the parties involved, and the lives involved in the process. While some of the doctors expressed these concerns, the bottom line was that they wanted to ensure there is strong data security so that patient confidentiality would never be breached. This calls for establishing clear, responsible entities to address ethical concerns and ensure compliance.” As AI becomes more embedded in healthcare, Mathida’s work underscores the need for thoughtful policy and design.
Mathida’s work contributes to an emerging conversation about how artificial intelligence can be implemented responsibly in global health settings. Her research offers insight not only into the potential benefits of AI in oncology, but also into the importance of tailoring new technologies to the social, cultural, and systemic realities of the communities they aim to serve. Mathida hopes her findings will help inform future efforts to reduce health inequities and support oncologists and patients across diverse care environments.