Hidden Ground: An AI Implementation Case Study
Beyond the Algorithm
Part 2
Before two teams can work together, they have to find each other. Not on a calendar. Not on a call. In the deeper sense of understanding who is in the room, what they are carrying, and what the work will actually require from each of them. That finding takes time. It takes honesty. And it almost never makes it into the documentation.
There is a particular assumption that follows intra-African collaboration, that shared geography softens the edges of difference. That working with a team on the same continent means fewer adjustments, less translation, more common ground. This project began by quietly setting that assumption aside. Not dramatically. Just practically, in the way that real work tends to.
Two teams. One in South Africa, one in Senegal. Both working at the intersection of AI and health. Both committed to the same engagement. And between them: different time zones, different languages, different professional cultures, and different understandings of what the collaboration was actually for. None of these were insurmountable. But none of them were invisible either. And the work of navigating them began before the first call was ever scheduled.
This is what that looked like.
The Landscape This Work Sits Inside
Reproductive and menstrual health remains one of the most underdiscussed dimensions of health equity in most African regions. For adolescent girls in Senegal, access to accurate, accessible, culturally grounded information about their own bodies is uneven at best. The gap is not simply informational; it is relational and contextual. What is needed is not just facts but guidance that meets girls where they are: in their language, in their register, in a format that does not require them to navigate institutions that were not built with them in mind.
It was in response to this gap that WeerWi was built. A mobile application designed to help girls and women follow their menstrual cycle, understand what is happening in their bodies, and navigate that experience more fully. The name itself carries meaning. The app was not conceived as a generic health tool exported into a Senegalese context. It was built from within that context, by a team that understood the community it was serving.
At the heart of WeerWi is an AI-powered chatbot, built on Claude 3.5 Sonnet, using a Retrieval-Augmented Generation architecture to ensure grounded and accurate responses. The system had been validated through a dual-layered human review process: midwives reviewing content for clinical accuracy, and a focus group of young girls testing the system for tone and accessibility. Users could rate every response the chatbot generated, creating a direct feedback loop that allowed the team to track quality over time. The chatbot is accessible through both the mobile app and WhatsApp.
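The broad shape of such a system, retrieval grounding a model's answer plus a per-response rating hook, can be sketched in a few lines. This is a minimal illustration only: the class, the keyword-overlap retriever, and the stubbed generation step are assumptions for the example, and none of it reflects WeerWi's actual code, corpus, or the Claude 3.5 Sonnet call it uses in production.

```python
# Minimal sketch of a RAG-style chatbot with a per-response rating loop.
# All names here are illustrative; the real system uses Claude 3.5 Sonnet
# and a vetted health-content corpus, neither of which is shown.
from dataclasses import dataclass, field


@dataclass
class RagChatbot:
    documents: list[str]                      # vetted source passages
    ratings: list[int] = field(default_factory=list)

    def retrieve(self, query: str, k: int = 1) -> list[str]:
        """Rank passages by naive keyword overlap with the query."""
        q = set(query.lower().split())
        scored = sorted(self.documents,
                        key=lambda d: len(q & set(d.lower().split())),
                        reverse=True)
        return scored[:k]

    def answer(self, query: str) -> str:
        """Ground the (stubbed) model call in retrieved passages."""
        context = "\n".join(self.retrieve(query))
        # In production this context would be sent to an LLM; here we echo it.
        return f"Based on: {context}"

    def rate(self, score: int) -> None:
        """Record a user rating (e.g. 1-5) for the last response."""
        self.ratings.append(score)

    def mean_rating(self) -> float:
        return sum(self.ratings) / len(self.ratings) if self.ratings else 0.0


bot = RagChatbot(documents=[
    "A typical menstrual cycle lasts 21 to 35 days.",
    "Hand washing reduces the spread of infection.",
])
reply = bot.answer("How long is a menstrual cycle?")
bot.rate(5)
bot.rate(4)
```

The point of the sketch is the feedback loop: because every response can be rated, quality becomes a number the team can track over time rather than an impression.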
This is not a prototype. By the time the collaboration with Palindrome Data began, the system had been live for nearly a year and was serving a significant and active user base. The team behind it had built something real, and they knew it.
Who Palindrome Data Is in This Context
Palindrome Data entered this engagement under the Grand Challenges Canada (GCC) AI mentorship program, a structured initiative designed to support African health innovators in strengthening their AI capabilities. Palindrome’s mandate was not to build something for the implementing organization, Dakar Institute of Technology (DIT). It was to work alongside them to bring technical depth, an outside perspective, and a methodology built around the team’s own system and priorities.
The approach Palindrome brings to this kind of work is specific. It is not training. It is not consultation in the traditional sense. It is closer to what Palindrome’s project manager described as a “with-you model rather than a for-you model”: co-working sessions structured around the implementer’s real system, real challenges, and real decisions, with Palindrome providing structure and guidance while the implementing team remains in control of the direction and the outputs.
In previous engagements under the same program, Palindrome used this model with organizations that were moving toward AI integration, teams for whom the question of how to embed AI into their work was still open. The DIT team was different. They had already answered that question. What they needed was not an introduction to AI but a technically sophisticated partner who could help them understand why their already-functioning AI system was falling short and what to do about it.
That distinction mattered more than anyone fully appreciated before the first call.
Getting Into the Room
Scheduling the first call was itself a small lesson in what intra-African collaboration actually requires. The assumption, rarely examined, is that teams working within the same continent share the same operational context. That proximity means alignment. But Africa does not operate on a single time zone. GMT, SAST, CAT: the differences are real, and they matter when you are coordinating across Dakar and Cape Town at pace.
The first scheduled call was missed. The DIT team had indicated 12 PM CAT as their preferred time. What they meant was 12 PM GMT. The distinction is not a simple mistake; it is the kind of misalignment that happens when teams are moving quickly and the assumption of shared context goes unexamined. On the day of the meeting, Palindrome waited, reached out, and when it became clear the call would not happen that day, responded with patience rather than frustration. The DIT team reached out the same day, explained what had happened, and expressed their readiness to reschedule. It was handled with care on both sides.
Just as we don’t share the same languages, currencies, or laws across the continent, we do not share the same time zones. The adjustments required to work across Africa are not so different from those required to work across any other set of distinct contexts. The continent is not a unit.
The call was rescheduled. This time with explicit time zone confirmation. And when it finally happened, the full cast was present: the DIT team, Palindrome’s technical lead and project manager, a representative from GCC, and myself.
The First Call
The atmosphere in the room was one of carefulness. That is the word that stayed with me. Not tension, not warmth, not formality, but carefulness. Both teams were paying attention to each other in the way that people do when they are not yet sure of the ground they are standing on. No one wanted to step on anyone’s toes. Everyone wanted to be clear enough to achieve the goal of the call.
The DIT team presented WeerWi: its architecture, its validation process, its user feedback mechanisms, its performance metrics. What emerged was a picture of a system that was already technically sophisticated. Built on a frontier model. Grounded in a RAG architecture. Validated by medical professionals and real users. Operational for nearly a year.
The pain points they brought to the call were specific, and they had not waited for the meeting to articulate them. Before the call was ever scheduled, the implementing team had already shared their technical specifications and a clear statement of where the system was falling short: “only two thirds of user queries were being resolved by the chatbot”, a figure they considered insufficient. They had diagnosed the likely causes:
the chatbot sometimes misunderstood the exact meaning of a question and returned an inappropriate response;
in other cases it simply could not answer, redirecting the user to call a health adviser, which the team had identified as a frequent barrier to continued engagement.
Their expectations for what the engagement would help them achieve were equally specific, and equally pre-formed:
a 95% adequate response rate,
response times under eight seconds,
a non-response rate below 15%,
and a 25% increase in monthly interactions, to a minimum of 12,500 per month.
These were not vague aspirations. They were defined, measurable targets that the team had already set for themselves and were bringing to the table as the benchmark against which progress would be measured.
This matters because of what it meant for how the call unfolded. The implementing team arrived having already done the work of diagnosis and target-setting. They were not looking to be helped to understand their problem. They were looking for a partner who could help them solve it. When the call proceeded in the exploratory, discovery-oriented register that Palindrome’s model naturally produces, and when Palindrome clarified that their role was advisory rather than development, meaning they would guide and support rather than build solutions, the gap between what the implementing team had expected and what was being offered became harder to ignore.
Palindrome’s technical lead had come to the call prepared with two areas of exploration:
an inductive error analysis of chatbot transcripts, to identify and group the specific ways the system was failing;
and an assessment of the RAG pipeline, to examine how retrieval was being done and where it might be improved.
Both were directly responsive to the DIT team’s pain points and the fit was genuine. The call ended with an agreed next step: the DIT team would share logs of past chatbot conversations so the technical lead could begin the error analysis and bring concrete findings to the next session.
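A first pass over those logs might look something like the triage below. This is an illustrative sketch only: the bucket names, the redirect phrase, and the rating threshold are assumptions for the example, not the categories the technical lead actually used, and an inductive error analysis would refine its groupings from the transcripts themselves rather than fix them in advance.

```python
# Illustrative sketch of the kind of first-pass transcript triage an
# inductive error analysis might start from. The categories, the redirect
# phrase, and the rating threshold are assumed for the example.

REDIRECT_PHRASE = "please call a health adviser"


def triage(log: list[dict]) -> dict[str, list[dict]]:
    """Bucket each exchange by a coarse failure signal: non-responses
    (redirects to a human adviser), low-rated answers, and resolved ones."""
    buckets = {"non_response": [], "low_rated": [], "resolved": []}
    for entry in log:
        if REDIRECT_PHRASE in entry["response"].lower():
            buckets["non_response"].append(entry)
        elif entry.get("rating", 5) <= 2:      # user flagged the answer as poor
            buckets["low_rated"].append(entry)
        else:
            buckets["resolved"].append(entry)
    return buckets


def resolution_rate(buckets: dict[str, list[dict]]) -> float:
    """Share of exchanges the chatbot actually resolved."""
    total = sum(len(v) for v in buckets.values())
    return len(buckets["resolved"]) / total if total else 0.0


log = [
    {"response": "Cramps are common in the first days.", "rating": 5},
    {"response": "Please call a health adviser.", "rating": 1},
    {"response": "A cycle can vary month to month.", "rating": 2},
]
buckets = triage(log)
```

The value of a coarse cut like this is that it turns a vague sense that “a third of queries fail” into named, countable failure modes that the RAG pipeline review can then be targeted at.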
On the surface, it had gone well.
What Was Not Said Out Loud
Something else was present in that room that did not make it into the agreed next steps. I noticed it during the call. It was a carefulness in how the DIT team engaged that felt like more than politeness. Something that read, to me, as a quiet questioning of whether this engagement was really worth their time. I sat with that observation through the call and said nothing. I questioned my place: Was it mine to name? I was there to bridge language and context, not to redirect the room. So I held it.
The DIT team had arrived at this engagement having already integrated AI into their solution at a sophisticated level. They were not just familiar with LLMs, they were running one, in production, serving real users, with medical validation and performance monitoring in place. Palindrome’s previous engagements under this program had been with organizations that were earlier in that journey, teams for whom AI integration was still ahead of them. The model Palindrome had built and refined (patient, exploratory, discovery-oriented, advisory rather than developmental) was calibrated for a different starting point than the one the DIT team was standing on.
The result was a mismatch that neither team named during the call. The DIT team had arrived with specific targets they had already set, a system they had already built, and an expectation that the engagement would help them reach those targets. Palindrome had arrived ready to discover what was needed, offering to guide rather than to build. Two different pictures of the same engagement. The same room.
There is a question I have sat with since: was this a failure of the pre-call written communications? The correspondence before the call had happened entirely in English, before my involvement began. The DIT team had communicated in English too, fluently and clearly, in writing. But I am aware that my own position makes this difficult to assess honestly. I understand both languages and both professional cultures well enough that I cannot fully inhabit the experience of reading across that divide with translation tools and a different cultural framework for how professional communication works. What read as clear to me may not have landed the same way for a French-speaking team receiving technically framed English correspondence through a translation layer. I cannot know. And I think it is important to say that I cannot know.
What I can say is that something was lost somewhere between what was sent and what was understood. Not in the words. In the picture of the engagement that each party had formed before they arrived on the call.
It is one thing to translate what is said. It is another to recognize when something has been technically expressed but conceptually misplaced, when the words are correct but the meaning has shifted.
It was the DIT team who named it first. Shortly after the call they expressed in writing that the value Palindrome could add (given their existing level of technical maturity) was not yet clear to them. It was a valid observation, stated with directness and professionalism. I am genuinely grateful for that courage. It is not easy to name, in a funded engagement with an external technical partner, that you are not yet sure what you are getting out of it. They named it anyway and that naming is what made everything that followed possible.
The project manager responded first. Her reply was thoughtful and substantively right. It laid out clearly how Palindrome’s model worked in practice, what a “with-you rather than for-you” engagement looked like, and how the scope would be calibrated to the DIT team’s actual level of maturity rather than assumed from an early-stage starting point. It was the right response, and the DIT team received it with gratitude.
The technical lead noted internally that the project felt sensitive at this stage, that as per their email after the first call, the DIT team did not sound fully convinced given how advanced their work already was, and that the tone of whatever came next would need to be handled with particular care. His willingness to name that honestly, rather than assume the PM’s response had resolved it, kept the door open.
The Most Defining Moment of My Role
This is where I finally understood what I was actually there to do.
My hesitation during the call had not been about uncertainty over what I was sensing. I had read the room accurately. What I had been uncertain about was my place, and I was not the only one. None of us had yet fully mapped what my role in this project could and should be. I had been brought in to bridge language and context in live sessions. But the gap that was opening between these two teams was not happening in a live session. It had been happening from the start of the written communications between both teams, in the register and language of the messages being exchanged, in the space between what Palindrome was sending and what the DIT team was receiving.
The project manager’s response had been everything it could be within the tools she had. But it had been written in English, by an Anglophone team, to a Francophone team that was already uncertain and reading carefully for signals. In Francophone professional contexts, alignment and trust are built more relationally and indirectly. A direct, structured clarification, however well-intentioned, can land as prescriptive rather than collaborative, confirming rather than dissolving distance. The substance was right. The vessel it was travelling in was not quite right for the terrain.
I reached out to the team. I named the communication risk I was seeing not as a criticism of what the PM had done but as an observation about what was still needed. I offered to translate the core message into French and reframe it in a register that preserved every point but made it easier to receive on the other side. I also suggested that this communication come from me, a small shift in sender that carried its own signal about how the engagement was being held and who was in the room.
What I was doing, I understood only as I did it, was claiming the full scope of my role. Not just live interpretation. Not just language conversion. But the work of making sure that meaning (relational meaning, not just technical meaning) travelled faithfully across every form of communication the project required. That is what Lucien had seen when he said I would be the right fit. It took this moment for me to see it too.
I took the technical lead’s analysis and the proposed next steps and translated them into French. But I also did something else: I put them into a properly formatted document rather than an email. That choice was not incidental, and it is also not universal. In a faster-paced Anglophone professional environment, a well-written email would have been enough. The content would have carried. But this was not that context. In many Francophone professional settings, a well-structured document carries relational weight that an email chain does not. It signals deliberation. It signals that time was taken, that the situation was looked at carefully, that the response was not a reaction but a considered position. It says, without saying it: “we took your work seriously enough to present ours with equivalent care”.
That signal (the language, the format, the register) is rarely named. It operates beneath the level of content. But it is often more consequential than the content itself, because it shapes whether the content gets received at all. This is what I mean when I say that context is infrastructure. It is not decoration. It is the foundation that determines whether anything built on top of it will stand.
The team agreed. I sent the document.
What followed (the next session, the subsequent communications, the working dynamic between both teams) moved with a fluency that had not been present before. I would not overstate what a single communication can do. But I would say this:
what made it possible was a sequence that required every person in it. The DIT team’s courage in naming what they were feeling. The project manager’s substantive and careful first response. The technical lead’s honesty in recognizing that something more was still needed. And finally, my willingness to step into the full scope of what I was there to do, anchoring what had already been built, in the language, register, and form that would allow it to land.
What This Means Beyond This Project
It would be easy to read what happened in this first call as a one-off. A specific communication gap between a specific South African team and a specific Senegalese institution, resolved by a specific intervention. Something to learn from but not necessarily to generalize.
I think that reading would be a mistake.
What this project surfaced in its scheduling, in its pre-call communications, in the unspoken tension of the first call, and in what it took to resolve that tension, is not specific to this collaboration. It is structural. And as AI implementation work multiplies across the African continent, involving teams from different countries, different linguistic traditions, and different professional cultures working together on systems that were themselves built outside all of those contexts, the conditions that made this gap possible will appear again and again. Most of the time, nobody will catch them. Most of the time, they will not be documented.
The first thing this project makes visible is that Africa is not a unit.
This is obvious when stated directly and consistently ignored in practice. The continent contains over fifty countries and three dominant colonial languages (English, French, and Portuguese), and, within each of those language traditions, professional cultures that operate with meaningfully different norms around communication, formality, trust-building, and the relationship between technical and relational work. A West African English-speaking professional context does not operate the same way as a Southern African one. A Francophone West African institution does not communicate the same way as a Lusophone one. These differences are not superficial. They shape how messages are sent, how they are received, what signals respect and what signals dismissal, and what conditions need to be in place before technical work can move freely.
It is worth naming where this linguistic landscape comes from. English, French, and Portuguese are not African languages. They are colonial inheritances, imposed over decades of dispossession and still shaping, long after independence, how institutions communicate, how professional norms were formed, and how teams from different parts of the continent relate to each other. When a South African team and a Senegalese institution navigate a language divide in 2026, they are navigating something that was not of their making. That history does not paralyze the work, but it deserves to be named rather than treated as neutral background, especially in a field that is already grappling with AI systems that carry their own invisible histories of whose world they were built to understand.
When we treat intra-African collaboration as automatically easier than collaboration across continents, when we assume that shared geography softens the edges of difference, we are not just making an intellectual error. We are creating the conditions for the kind of mismatch that many projects like this one will experience: a mismatch that was felt before it was named, that accumulated in the gap between what was sent and what was received, and that required significant invisible labor to repair.
The second thing this project makes visible is that the AI layer does not simplify this problem.
It compounds it. When you are implementing an external AI system, a system built from training data that already carries its own contextual assumptions, calibrated to a world that is not the one it is being asked to operate in, into a collaboration that itself requires navigating multiple internal African contexts, you are stacking layers of complexity that the field is nowhere near equipped to handle systematically. The system arrives with its own invisible infrastructure. The collaboration arrives with its own. And the work of making both legible, of catching what slips between them, of ensuring that what is built actually holds the reality it is supposed to serve, that work is not in any technical brief.
The third thing this project makes visible is what that work requires in terms of people.
Not just translators. Not just technical leads. Not just project managers. People who can hold multiple contexts simultaneously, who understand not just the languages but the professional cultures, the relational logics, the communication norms, and the history that shapes how institutions in different parts of the continent receive and respond to external technical input. People who know when to speak and when to hold. People who understand that their role may not be fully defined until the moment it is needed and who are willing to claim it when that moment arrives.
This is not a role that currently has a name in most project structures. It is not budgeted for. It is not listed in scope of work documents. It is treated, when it is considered at all, as a nice-to-have: a language resource, a cultural liaison, something that smooths the edges of work that would happen anyway. What this project demonstrates is that it is not peripheral. In cross-contextual AI implementation work on the African continent, it is foundational. Its absence does not make the work harder. It makes certain kinds of failure invisible until they have already compounded beyond the point of easy repair.
As collaborative AI implementation efforts across Africa grow between Anglophone and Francophone teams, between East and West, between institutions with different levels of technical maturity working on systems that serve communities whose realities those systems were not built to understand, the question of how to staff this kind of contextual intelligence into the work is not a secondary consideration. It is one of the most important questions the African AI ecosystem needs to start answering seriously, and soon.
This project is one small piece of evidence for why, and this series is the detail.
A Note on Palindrome Data
None of this series would exist without a specific set of choices that Palindrome Data made, and those choices deserve to be named directly, because they are not the norm.
From the very beginning of this engagement, the Palindrome team made a deliberate decision to let the DIT team set the tone. They did not arrive with a fixed agenda. They did not treat their technical expertise as authority. When the time zone misunderstanding meant the first call was missed, they responded with patience. When the unspoken tension of that first call surfaced in writing, they responded with honesty rather than defensiveness. When it became clear that something in the communication was not landing, they did not double down; they made room for a different approach. When a role that had no formal name in the project structure needed to be claimed, they supported it. These were not incidental decisions. They were a consistent posture, held from the first scheduled call to the last session.
And then they did something rarer still. They agreed to make the whole process visible.
In a field where most implementation work stays internal, where the lessons learned in co-working sessions are kept within teams, where the friction and the adjustments and the moments of genuine uncertainty never make it into any public record, Palindrome Data chose transparency. Not because everything went perfectly but because they understood that the value of this kind of documentation is not in showcasing success. It is in making the real work legible for the people who will do it next.
That is an institutional behavior the African AI ecosystem needs far more of. And I am grateful to have witnessed it from the inside.
A Note for funders and program designers
There is also a lesson here that extends beyond implementing organizations and technical partners, one for the funders and program designers who structure engagements like this one. The conditions created upstream determine what is possible downstream. Whether cross-contextual and cross-linguistic capacity is treated as essential or optional. Whether a model of engagement is flexible enough to meet implementing organizations at their actual level of maturity rather than an assumed one. Whether documentation and knowledge-sharing are built into a program as a deliverable rather than left to the goodwill of individual participants. These are structural questions that sit above any single project. And as AI implementation programs across the continent multiply, the organizations that fund and shape them have an opportunity and a responsibility to ask them explicitly, early, and with the same rigor they bring to technical design.
Part 3 goes inside the first working session, where the transcripts were analyzed, the failure modes named, and the real shape of the problem began to come into focus.
Thank you for reading!