Context as Infrastructure: An AI Implementation Case Study
Beyond the Algorithm
Part 1
The most important African innovation stories are often not only what gets built, but how those systems survive contact with reality. That is the story this series tells. And it begins, as the best stories do, somewhere in the middle of something already in motion.
Work grounded in specifics
I launched The African Innovators Series (TAIS) on March 29, 2025. My very first feature was Lucien de Voux, a Machine Learning Expert, CEO, and Co-founder of Palindrome Data, a South African company at the forefront of data science, specialising in machine learning, predictive analytics, and alternative data services. Palindrome Data was founded with a singular mission: to bridge the gap between research and implementation in healthcare through AI and data-driven solutions.
Their work is grounded in specifics. From HIV care retention and viral load suppression to maternal health and disease prevention, they operate at the intersection of healthcare and AI, embedding decision-making tools into health systems across Africa and beyond, helping healthcare workers, program managers, and policymakers deploy targeted interventions where they are needed most.
A year after that first feature, I found myself inside that story. Not as an interviewer, but as a participant, embedded in exactly the kind of African AI implementation that TAIS exists to make visible, working alongside the very team I had introduced to this audience a year before.
That full-circle moment is what opens this series.
My Role in This Project
The project I was stepping into centred on a mobile application that helps girls and women follow their menstrual cycle, understand what is happening in their bodies, and navigate that experience more fully. Embedded within it was an AI-powered chatbot already providing menstrual and reproductive health guidance to its users. The Senegal-based implementing organisation had identified where the system was falling short of what was needed. Palindrome Data had come in as the technical partner to work through those gaps, and I was being brought in alongside them.
Before anything else, it is worth addressing what might seem like an obvious question: how did I end up on this project?
My role was to interpret meaning and intent across English and French, helping technical concepts travel faithfully between the South African team and the Senegal-based implementers. But this was more than translation. It was the work of making sure technical assumptions remained legible, that implementation intent was not lost in abstraction, and that context could shape the work rather than merely absorb it.
When Lucien reached out, he said: “You’ll be the perfect fit for this project.” At the time, I had a broad sense of how my role might unfold, shaped by my own experience working with AI systems in African contexts. But I did not yet know what it would demand in practice. Lucien, on the other hand, seemed to. Having navigated similar terrain before, he understood that this project would need more than an additional pair of hands. It would need an additional way of seeing.
Palindrome Data had worked across the continent and beyond, but always in English-dominant environments. This was Palindrome’s first collaboration with a French-speaking implementer. On the surface, that might look like a straightforward language challenge. Any French-speaking translator could handle the literal conversion of words. Translation tools could do the same.
But what Lucien recognised, and what became clearer to me through the work itself, is that in AI systems, translation that stops at words is not enough. When meaning and intent are not carried through, something more consequential is lost. The sentence may arrive intact. The idea it was meant to carry does not. And in that gap, small distortions accumulate into larger ones.
So that way of seeing was never purely linguistic. It came from having lived and worked across both English- and French-speaking African contexts, where meaning does not always sit neatly inside words, and where what is understood depends as much on history, institutions, and lived experience as it does on vocabulary. It is one thing to translate what is said. It is another to recognise when something has been technically expressed but conceptually misplaced, when the words are correct but the meaning has shifted. In AI systems, those shifts are not harmless. They compound and shape how a system responds, what it prioritises, and ultimately what it gets wrong.
Context as infrastructure
What I have just described has a name, even if it rarely gets one in technical documentation.
Context is infrastructure. Not metaphorically but structurally. Infrastructure is foundational: it sits beneath the visible operations it enables, and its absence does not degrade those operations gradually; it stops them. We notice roads when they end, not when they are smooth. Water when it is absent, not when it flows. The more effective the infrastructure, the less visible it becomes. Context works the same way. When it is present and well-held, the work moves. When it is absent or misread, the system keeps running; it just stops arriving anywhere meaningful.
That is what this project made visible. Not as a theoretical proposition, but as a lived condition of the work: the moments when a technically correct output was contextually wrong, when the French was accurate but the meaning had slipped, when the system produced something the implementer could not use because the foundation beneath it had not been built for their ground.
The project could have moved forward without an extra pair of hands. Whether it could have achieved its purpose without an additional way of seeing is a different question entirely.
Those moments are not edge cases. They are the centre of what implementation actually requires and they are almost never documented.
What This Series Will Cover
For April, I am taking you behind the scenes of how this project unfolded: across languages, across institutional contexts, across the reproductive and menstrual health realities of the young girls and women the mobile application was built to serve. And into the gap at the centre of all of it, between what the technical team understood to be possible and what the implementer knew was needed on the ground; between what the chatbot could do and what its users actually required; between the assumptions baked into the system and the context it was being asked to hold.
Each piece draws from one phase of the work: a session, a decision, a moment where something had to be named before it could be resolved. The Palindrome Data team (CEO, project manager, technical lead) speak in their own words at the points where their perspective matters most. This is a narrative case study, not a technical report. It stays close to the people and the process.
It is written for two kinds of readers. For those already doing this work on the continent (implementers, technical leads, programme managers navigating similar terrain), it offers the kind of honest account that formal documentation rarely provides. For those outside it who want to understand how AI systems are actually navigated in African contexts, not in theory or in policy papers but in practice, it offers a window into work that is rarely made this visible.
The friction. The adjustments. The moments where context had to insist on its place within the system, sometimes quietly, sometimes not.
Because the systems that survive contact with reality are the ones that were built by people who understood the ground they were building on. This series is about what that understanding actually costs, and what it makes possible.
Thank you for reading!
Support this work
Your support keeps it independent and community-rooted.
Thank you for standing with this work.