Learnings and Outcomes: An AI Implementation Case Study
Beyond the Algorithm
Final part
Every collaboration ends twice. Once when the work stops. Once when the people in the room finally say, out loud, what the work actually was.
The co-working sessions had closed. The knowledge base had been rebuilt. The evaluation framework had been built and handed over. The cost analysis had been done. What remained was the retrospective, the session where everyone who had been in the room across the weeks of this engagement would have the chance to name what they had experienced, what had worked, what had not, and what they were carrying forward.
I had been in every session. I had interpreted, observed, flagged, intervened, held back, and stepped in. The retrospective was the first session where I was not primarily doing any of those things. I was listening. And what I heard was worth documenting carefully.
The retrospective brought together the full cast of the engagement. The Palindrome team. The DIT team. The GCC representatives, including a portfolio manager who had come specifically to understand the project’s learnings and outcomes for GCC’s broader programme design. And myself, in my now-familiar role of carrying meaning across the language divide.
The DIT team’s lead spoke through me throughout the session, as had been the pattern across the engagement. That detail matters for how to read what follows. When I write that the DIT team said something, I mean that they said it in French, and that I carried it into English for the room. The words are theirs. The English is mine. The responsibility of faithful translation, of ensuring that what landed matched what was meant, was mine too.
The portfolio manager’s presence added the institutional accountability layer the previous sessions had not had: someone asking what the project had produced and what it meant for how GCC would design similar programmes in the future. That question, asked with genuine curiosity rather than as a formality, gave the retrospective a different quality. It was more than a project closing. It was the project being examined for what it could teach beyond itself.
What the DIT Team Said
The most important moment in the retrospective came when the DIT team was asked directly: what value did the project deliver?
The answer arrived in three parts:
Before this engagement, the DIT team had not been paying attention to knowledge chunks in the retrieval process. Not because they were unaware that retrieval existed; they had built a RAG system, and they understood the architecture. But the granular question of what was actually being retrieved, chunk by chunk, and whether it was appropriate for the query being answered, had not been part of their regular diagnostic practice. The engagement gave them that practice. The transcript viewer made it possible, and now it was part of how they read their own system.
They now had the ability to audit their logs and pinpoint exactly where a question went wrong. This was something they had overlooked before. Not a gap they had identified and not yet filled. A gap they had not seen. The engagement did not just give them tools. It gave them a way of seeing their own system that had not existed before.
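As an illustration only: the team’s actual tooling lives in Voiceflow and their transcript viewer, and the log schema, field names, and threshold below are hypothetical. But the diagnostic habit they describe, reading retrieval logs chunk by chunk and flagging the queries where the retrieved material does not fit the question, can be sketched in a few lines:

```python
# Hypothetical sketch of chunk-level retrieval auditing.
# The log format, field names, and score threshold are illustrative,
# not the DIT team's actual schema or tooling.

def audit_retrieval(log_entries, min_score=0.5):
    """Flag queries whose retrieved chunks look weak or missing."""
    flagged = []
    for entry in log_entries:
        weak = [c for c in entry["chunks"] if c["score"] < min_score]
        if weak or not entry["chunks"]:
            flagged.append({
                "query": entry["query"],
                # Keep a short preview of each weak chunk for the human reviewer
                "weak_chunks": [c["text"][:60] for c in weak],
            })
    return flagged

logs = [
    {"query": "Quand commence le cycle ?",
     "chunks": [{"text": "Le cycle menstruel commence...", "score": 0.82}]},
    {"query": "Douleurs pendant les règles",
     "chunks": [{"text": "Historique du projet...", "score": 0.31}]},
]
print(audit_retrieval(logs))
```

The value is not in the code, which is trivial, but in the practice: once every answered question can be traced back to the exact chunks that produced it, "where did this answer go wrong?" becomes a question with an inspectable answer.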
The cost analysis had shown them not just what the system was spending, but how to think about which models to use for better cost and performance trade-offs. The ability to analyse and optimise model selection, not as a one-time exercise but as an ongoing practice, was now something the team felt equipped to do. They added something that had not been explicitly surfaced before: the engagement had helped them discover updates to the Voiceflow platform they had not been aware of, which opened new possibilities for improving the retrieval process.
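The model-selection arithmetic behind that kind of analysis is simple enough to sketch. The model names, per-token prices, quality scores, and traffic profile below are placeholders, not the actual figures from the engagement’s cost analysis:

```python
# Hypothetical sketch of cost vs. quality trade-offs across models.
# All prices (per million tokens), quality scores, and model names
# are illustrative placeholders, not real figures from the engagement.

MODELS = {
    "large":  {"input_cost": 3.00, "output_cost": 15.00, "quality": 0.95},
    "medium": {"input_cost": 0.80, "output_cost": 4.00,  "quality": 0.90},
    "small":  {"input_cost": 0.15, "output_cost": 0.60,  "quality": 0.80},
}

def monthly_cost(model, queries, in_tokens, out_tokens):
    """Estimated monthly spend for a given traffic profile (tokens per query)."""
    m = MODELS[model]
    per_query = (in_tokens * m["input_cost"] + out_tokens * m["output_cost"]) / 1_000_000
    return queries * per_query

def cheapest_meeting(quality_floor, queries, in_tokens, out_tokens):
    """Pick the cheapest model that still clears a minimum quality bar."""
    candidates = [m for m, v in MODELS.items() if v["quality"] >= quality_floor]
    return min(candidates, key=lambda m: monthly_cost(m, queries, in_tokens, out_tokens))

# 10,000 queries/month, ~2,000 input tokens (retrieved context) and
# ~300 output tokens per query, with a 0.85 minimum quality requirement
print(cheapest_meeting(0.85, 10_000, 2_000, 300))
```

The point of treating this as an ongoing practice rather than a one-time exercise is that every input changes over time: traffic grows, prices drop, new models appear, and the quality floor a community health chatbot can tolerate is itself a judgment that has to be revisited.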
The Sentence That Closed Part 2
There was a moment in the retrospective that I want to name carefully, because it is the moment this series has been building toward since Part 2.
When asked whether the engagement had met their expectations, the DIT team said that they had felt not fully understood at the beginning, but that they appreciated that the team had been able to meet them at their current stage. And that this, being met where they actually were rather than where the programme had assumed they would be, is where the value was found.
I have thought about that sentence many times since the retrospective.
Part 2 documented the tension of the discovery call: two teams arriving with different pictures of what the engagement was for, the mismatch between Palindrome’s discovery posture and the DIT team’s already-advanced technical position, the unspoken heaviness that needed to be named before the work could move properly. The DIT team had been the ones to name it, in writing, after the call. That act of naming had made everything that followed possible.
The retrospective closed that loop. Not with a formal resolution or a certificate of completion, but with the DIT team saying, plainly, that the team had been met. That after the difficulty of the beginning, the uncertainty about value, the communication across languages and cultures and professional registers, the engagement had found a way to serve them where they actually stood.
That is not a small thing. In a funded collaboration between a South African technical team and a Senegalese implementing organisation, within a programme designed in Canada, across English and French, across the complex terrain of what it means for an external partner to add value to a team that has already done serious work, the fact that the team felt met is the outcome the engagement was always trying to reach. And it almost never gets said out loud.
The Cost Credit and What It Revealed
Toward the end of the retrospective, one of the GCC representatives reminded the DIT team that they had a credit allocation for AI tokens that needed to be used before the end of March. The deadline was days away.
It wasn’t the first time the credit had been explicitly raised in a full session. The DIT team confirmed they would try to use it and named something that immediately reframed the conversation. They were planning to use it for the expansion of WeerWi into local African languages. Specifically Wolof and Serer, the languages spoken by the communities the chatbot was built to serve, languages that had not yet been part of the system’s architecture.
The expansion into Wolof and Serer was named as the most important unresolved problem the team wanted to prioritise next. Not a technical gap but a contextual one, a system built to serve young girls in Senegal that had not yet found its way into the languages those girls speak at home, with their families, with each other.
I want to sit with the credit for a moment without naming the specific amount, because the point is not the number. The point is the process. The credit had been available throughout the engagement. It had not been used. The reimbursement model (purchase first, submit receipts, receive reimbursement) is a standard administrative mechanism in many funding contexts. It assumes a set of conditions:
the ability to make a significant upfront purchase,
the cash flow to absorb that cost temporarily,
the familiarity with receipt-based reimbursement as a normal way of accessing programme resources.
Those conditions are not universal. For a university-affiliated technology institute in Senegal, operating within a consortium, with specific financial processes and constraints, the path from “credits available” to “credits used” was not as straight as the mechanism assumed. This is not a criticism of the programme. It is an observation about how programme design (like AI system design, like technical partnership design) carries contextual assumptions that are invisible until they meet the ground.
Context as infrastructure. Again. Still. In the administrative layer of a funding programme, as much as in the technical layer of a chatbot.
Wolof and Serer
The series opened with a chatbot: Mina, Eugénie, and Yaay, three personas, three registers, three ways of being in conversation with a young girl about her body and her health. Deployed on WhatsApp and serving a community in Senegal.
The series closes with the next question: what about Wolof? What about Serer?
Wolof is the most widely spoken language in Senegal, a lingua franca that crosses ethnic and regional lines, the language of the market, the street, the family home. Serer is spoken by one of the country’s major ethnic groups, carrying its own history, its own ways of naming the world. Neither is French. Both are the languages in which young girls in Senegal understand themselves.
A chatbot that can only speak to them in French (or English), the language of colonial administration, of formal education, of institutional communication, is a chatbot that can serve them, but not quite meet them yet. The information can arrive, but it arrives in a language that carries distance, that requires a certain kind of fluency, that does not feel like home.
The DIT team’s priority for the next phase is to expand into Wolof and Serer. This is not a feature request but a recognition that the work of making a system serve its community is not yet complete until the system can speak in the language the community speaks to itself.
That work has not happened yet. The engagement documented in this series was a step toward the conditions that make it possible, a better knowledge base, a more reliable retrieval system, an evaluation framework, a cost structure that can be sustained. The foundation. What gets built on it next is the DIT team’s to decide.
What Both Teams Said About the Working Method
The retrospective ended with a question about process: what had worked in terms of how the engagement was structured, and what would they change for a future phase?
The DIT team said they had appreciated the working sessions and the fluidity of the communication. If there was one thing they would change, it was time. They would have wanted more of it dedicated to analysing the issues in depth. The sessions had been productive, but the depth they wanted was sometimes constrained by what the timeline allowed.
There was a moment I want to name because it was small and human and honest in the way that small human moments often are. It happened almost in passing, with a lightness, even a touch of humour. The DIT team said they would try to improve their English. Palindrome’s project manager responded that they would improve their French. Both commitments sounded genuine and landed with laughter.
Both were quietly more revealing than the lightness of the moment suggested. The language divide that had shaped this entire engagement, that had required my role to exist, that had nearly derailed the project before it began, that had been navigated session by session through translation, cultural interpretation, and the slow building of shared understanding, was being named at the end as something both teams wanted to close. I found that moving, and then I found it complicated, because the languages both teams were committing to learn (English and French) are not African languages. They are the colonial inheritances that made the divide possible in the first place. Two African teams, one Anglophone and one Francophone, getting closer to each other by going deeper into the languages that were imposed on both of them. Leaving behind what was theirs (Wolof, isiXhosa, Serer, isiZulu…), the languages that actually belong to the communities they serve, in order to meet each other in a space that belongs to neither.
This is one of the quietest and most persistent trade-offs that teams collaborating across the African continent face, and it is named far less often than it occurs. The mechanism that should bring us closer, a shared African linguistic heritage, does not yet exist as a functional bridge in professional collaboration. What exists instead are the colonial languages, which do function, which do enable the work to happen, and which carry the weight of everything they replaced. Getting closer to each other through them is real progress, but it is also, simultaneously, a reminder of what was lost and has not yet been recovered.
I am sitting with that. I do not have a resolution for it but it felt important not to let the laughter of the moment cover the weight of what it contained.
What This Series Was
This series set out to document what AI implementation actually requires when it meets the ground. Not in theory. Not in a policy paper. In the specific sessions of a specific collaboration, across specific languages and contexts and institutional realities, with specific people making specific decisions under specific constraints.
What it found, across six parts:
That the work of getting two teams into genuine collaboration is itself a form of implementation and it begins before the first technical session, in the scheduling, the pre-call communications, the assumptions that travel unexamined until something catches them.
That the most consequential gaps in AI systems are not always the ones the metrics surface. They are the ones that only become visible when someone is paying attention to the full context, to the language, to the community, to the ground the system is actually standing on.
That ownership, real ownership, the kind where a team diagnoses its own problems, makes its own decisions, and comes back to verify whether those decisions worked, is both the goal and the measure of a well-run technical engagement.
That cost is not a peripheral concern. It is a structural one. And the systems that survive contact with reality are the ones that were built by people who understood both what the system needed to do and what it could afford to keep doing.
That context is infrastructure. Not metaphorically but structurally. It sits beneath everything else. And when it is absent or misread, the system keeps running but stops arriving anywhere meaningful.
And that the most important African innovation stories are often not only what gets built, but how those systems survive contact with reality. That was the opening line of this series. It is still true at the close.
A note on this series
This series was written from the inside, embedded in the collaboration it documented, present in every session, carrying meaning across languages and contexts as the work unfolded. It is a case study-style narrative synthesis, not a technical report. It stays close to the people and the process, tracing not just what was built but how the building actually went.
The account is as faithful as memory, notes, and session recordings allowed.
WeerWi is still running. Mina, Eugénie, and Yaay are still answering questions from young girls in Senegal about their bodies, their cycles, their health. The knowledge base is better than it was. The retrieval is more reliable. The system has tools now to keep reading itself clearly.
The next work is Wolof and Serer. And the question of what it means to build a health chatbot that can speak to girls in the languages they speak at home.
That work belongs to the DIT team. This series was the documentation of the conditions that made it possible. What they build next is their story to tell.
I hope someone documents it.
Thank you for reading!
Support this work
Your support keeps it independent and community-rooted.
Thank you for standing with this work.


