By Oscar Akaba
The first time I was introduced to systems thinking at IPMC, I learned a simple but powerful principle: every system has inputs, processes, and outputs—and more importantly, consequences. Not all consequences are visible at the design stage. Some only emerge when the system meets real life.
Today, as artificial intelligence rapidly enters the domain of goods classification and valuation in customs operations, I find myself returning to that lesson. What appears efficient on paper may carry hidden costs in practice. And increasingly, I am confronted with a troubling question: are we about to rob Peter to pay Paul?
The Promise of AI in Trade Systems
Artificial intelligence is being introduced into customs administration with the promise of efficiency, accuracy, and transparency. Guided by frameworks such as those promoted by the World Customs Organization, AI systems are expected to automate Harmonized System (HS) classification, standardize valuation processes, and reduce human discretion.
On the surface, this transformation is necessary. It promises to reduce delays at the border, curb corruption, and improve revenue mobilization for the state. In a country like Ghana, where revenue targets are central to fiscal planning, such improvements are not just desirable—they are urgent.
Yet, beneath this promise lies a deeper concern.
Where Efficiency Meets Reality
Trade at borders like Aflao is not always neat, predictable, or easily codified. It is human, dynamic, and often informal. Goods are not always packaged according to textbook standards. Values are sometimes negotiated realities rather than fixed figures. Classification, in many cases, requires context, experience, and judgment. This is where my fear begins.
AI systems, by design, rely on structured data and predefined logic. They thrive on uniformity. But border trade—especially among small-scale and cross-border women traders—does not always conform to such uniformity. When an AI system encounters this complexity, it does not interpret; it enforces. And enforcement without context can become injustice.
The Invisible Shift of Burden
The phrase “robbing Peter to pay Paul” captures what I fear may happen. In the pursuit of higher efficiency and increased revenue, the system may inadvertently shift the burden onto those least equipped to bear it.
An AI system trained predominantly on formal trade data may misclassify or overvalue goods carried by informal traders. A mixed consignment might be treated as a high-value commercial import. A trader’s lived understanding of her goods may be overridden by an algorithm’s rigid interpretation.
The result is subtle but significant: increased duties for small traders, reduced profit margins, and gradual exclusion from formal trade systems. In this scenario, the state (Paul) gains through improved revenue performance, while the trader (Peter) pays the price.
The Disappearance of Human Mediation
Traditionally, actors such as freight forwarders and clearing agents have played an important role in navigating the complexities of customs procedures. They interpret, negotiate, and advocate—especially for those who lack technical knowledge.
With the introduction of AI, much of this mediation is being removed. Decisions become automated, standardized, and less open to dialogue. While this reduces opportunities for corruption, it also eliminates critical layers of support for vulnerable traders. What remains is a system that is efficient, but potentially indifferent.
A System Optimized, But for Whom?
From a software engineering perspective, AI systems optimize for what they are designed to measure: speed, accuracy, and consistency. However, they do not inherently account for fairness, inclusion, or socio-economic impact.
If these values are not deliberately built into the system, they are simply absent.
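The point can be made concrete with a minimal sketch. Everything below is a hypothetical illustration, not any real customs system: the function name, the reference-price logic, and the 15 per cent threshold are all assumptions. Notice that the rule optimizes only for deviation from a reference database; there is no term anywhere in its logic for the trader's context or circumstances, so fairness is not degraded by the system, it is simply absent from it.

```python
# Hypothetical sketch of an automated valuation check.
# Names, values, and the threshold are illustrative assumptions only.

def assess_consignment(declared_value: float,
                       reference_value: float,
                       threshold: float = 0.15) -> str:
    """Flag a consignment when its declared value deviates from a
    reference-price database by more than the threshold."""
    deviation = abs(declared_value - reference_value) / reference_value
    return "flagged" if deviation > threshold else "cleared"

# A formal importer whose invoice tracks the reference database clears.
print(assess_consignment(declared_value=980, reference_value=1000))

# An informal trader's genuinely negotiated price looks, to this rule,
# like undervaluation, because the rule has no concept of negotiation.
print(assess_consignment(declared_value=700, reference_value=1000))
```

A rule like this is fast, consistent, and auditable, which is exactly what it was designed to measure. The trader's reality falls outside its inputs, and so outside its decisions.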
This raises an important policy question:
Are we designing AI systems to facilitate trade, or merely to maximize revenue? The distinction matters.
The Need for a Balanced Approach
My fear is not an argument against technology. Rather, it is a call for thoughtful implementation. AI should not replace human judgment entirely; it should complement it. Systems must be trained on diverse, localized data that reflects the realities of places like Aflao.
Safeguards must be introduced to prevent disproportionate impacts on small-scale traders. Equally important is communication. Traders must understand how these systems work, how decisions are made, and how they can seek redress when errors occur. Digital platforms such as WhatsApp can play a critical role in bridging this knowledge gap.
Conclusion: A Personal Reflection
As someone trained in systems design and deeply engaged with cross-border trade realities, I cannot ignore the tension I see emerging. AI has the power to transform trade systems for the better—but only if it is guided by principles that go beyond efficiency.
If we are not careful, we may build systems that work perfectly in theory but fail those who depend on them most in practice. And in doing so, we may realize too late that in trying to fix one part of the system, we have quietly shifted the burden onto another. That is my fear.
The writer is a Trade Consultant and Public Relations Practitioner.
