How Prosocial AI Can Redefine South-South Cooperation in a Hybrid Era

As AI reshapes geopolitics, nations in the Global South have a rare chance to design systems built for people, planet and potential.

December 23, 2025
Rather than accepting a pre-fabricated future, actors in the Global South have the agency and imperative to forge a fourth path. (Danish Siddiqui/REUTERS)

The current geopolitical landscape is being reshaped by artificial intelligence (AI), forcing nations into what appears to be a three-way contest. The dominant models of AI governance are well established: the market-driven, “business-over-all” approach of the United States; the rights-based, “regulation-over-all” framework of the European Union, whose rules shape regulations and business standards far beyond its borders (the global “Brussels Effect”) as companies and countries adopt them to maintain access to its large single market; and the “state dominance” model of China, which leverages state-controlled data and exports digital infrastructure.

For the nations of the Global South, this landscape has often presented a false choice: become a “tech taker” reliant on foreign platforms, a market for data extraction or a geopolitical pawn in a great-power competition.

However, the transition to a hybrid era of co-existence between human and machine intelligence offers a distinct opportunity. Rather than accepting a pre-fabricated future, actors in the Global South have the agency and imperative to forge a fourth path. This alternative is not a diluted version of the three options above but a prosocial and affirmative vision: one that is pro-people (valuing human capital and agency), pro-planet (demanding environmental equity) and pro-potential (oriented toward a sustainable and flourishing future). The mechanism to build this path is prosocial AI: AI systems that are tailored, trained, tested and targeted to bring out the best in and for people and planet.

The Hybrid Tipping Zone

We — everyone who learned to read, write and interact before November 2022 — are part of the last analogue generation: the last cohort to grow up without the possibility of 24/7 cognitive offloading. Our generation stands at a juncture. We are defining the algorithmic architecture for all generations to come, and we are doing so within a hybrid tipping zone — a period of intense instability across four interconnected levels:

  • Micro (the individual): This is leading to “agency decay.” As we outsource our cognitive work to AI, we risk the atrophy of our own critical thinking, judgment and decision-making capabilities.
  • Meso (the organization): We are experiencing mass “AI-mainstreaming.” AI is being integrated into all business and social workflows, often optimized for narrow metrics of productivity or engagement without regard for second-order consequences.
  • Macro (the geopolitical order): The “AI supremacy race” dominates headlines. This is a fragile, zero-sum competition for technological dominance that prioritizes speed and control over safety or societal well-being.
  • Meta (the planet): This all rests on a foundation of “planetary boundaries.” The immense computational and energy costs of training large-scale AI models are in direct conflict with urgent climate and environmental goals.

This four-level crisis makes the existing geopolitical models of AI governance insufficient. A model focused purely on profit (the United States) ignores agency decay and planetary boundaries; a model focused purely on regulation (the European Union) can be too slow to adapt; and a model focused purely on state control (China) actively accelerates agency decay.

A Prosocial Alternative

The fourth path is built on prosocial AI, a concept that moves beyond the often-vague discussions of “ethical AI” and “AI for good.” First, ethical AI often consists of principles such as fairness, transparency and accountability, which are essential but can be culturally specific and difficult to enforce. As the Responsible AI Institute notes, ethics can vary, whereas responsibility requires actionable mitigation. Second, “AI for good” projects are frequently top-down and focus on “accepted good” problems, such as tracking deforestation or disease. This work is important but often avoids “contested good” issues, such as challenging power structures or ensuring deep equity.

Prosocial AI, in contrast, is an actionable, proactive framework for designing and deploying AI systems with measurable, positive outcomes for humanity and the environment. As outlined by researchers at institutions such as the Wharton School and the Thomson Reuters Institute, prosocial AI is defined by the “4Ts,” referring to AI systems that are deliberately:

  • Tailored, with a logic of co-design, to meet specific local societal and ecological challenges, avoiding the imposition of one-size-fits-all models developed in the Global North.
  • Trained on diverse and representative data sets that are continuously audited for bias, ensuring equitable service across varied populations.
  • Tested through rigorous evaluation before and after deployment, not only for technical performance but also for tangible effects on human well-being, social equity and planetary health.
  • Targeted at explicit, measurable prosocial objectives, such as improved personal autonomy, community resilience, reduced inequality or the restoration of ecosystems.

Regenerative intent is woven into the DNA of these systems from design through delivery to deployment. The underpinning logic of prosocial AI is respect for cultural diversity and local value systems. In this, it echoes the core tenet of karma yoga: give your best and then let go. Intent anchored in the Golden Rule and the Platinum Rule (do to others what you want done for yourself, and avoid what you do not want to suffer) stands at the core of prosocial AI; it falls to those who design, deliver and deploy these systems to translate that logic into the algorithmic architecture they build and use.

South-South Collaboration as the Engine

The Global South is primed to lead this fourth path. Such leadership moves beyond the longstanding debate around development aid and opens space for genuine South-South reciprocity and agency at scale.

Nations across Africa, Asia and Latin America are already making AI a core economic and geopolitical priority. They possess competitive talent, growing domestic demand and, most importantly, the opportunity to champion locally relevant AI from the ground up rather than retrofitting ethics onto flawed systems.

Through new coalitions, such as the African Union’s Continental AI Strategy and collaborations via the United Nations Office for South-South Cooperation, nations can pool resources to build their own data trusts and compute infrastructure. They can develop and share culturally attuned AI models that are inherently tailored, trained, tested and targeted for their own populations, thereby breaking the cycle of digital dependency.

Pragmatic Takeaways

Navigating the hybrid tipping zone requires immediate, clear-eyed action.

To secure digital sovereignty and collective progress, the Global South must prioritize shared investment in data trusts, compute infrastructure and open-source foundational models. Building these systems collaboratively — through South-South cooperation — ensures that the algorithmic architecture powering AI remains locally governed rather than externally owned. This approach not only strengthens self-reliance but also allows for contextually relevant innovation that aligns with regional needs and values.

Governments should also make the “4Ts” of the prosocial AI framework — tailored, trained, tested and targeted — mandatory in public procurement. Any AI system purchased or deployed by public institutions should be assessed against clear social and environmental objectives. By embedding these principles into regulation and procurement, policy makers can guarantee that AI serves collective well-being rather than reinforcing dependency or inequality.

It is time to move beyond the fragile notion of “AI supremacy.” The race to dominate AI risks undermining long-term stability and cooperation. A more sustainable path lies in cultivating resilience — supporting a diverse, multipolar AI ecosystem that includes the Global South as an equal partner in innovation. This requires a shift from competition to collaboration, from dominance to shared advancement.

The Global North can also benefit from studying necessity-driven innovation emerging from the Global South. While many advanced economies struggle to retrofit ethics onto profit-driven AI systems, countries in the South are developing equitable governance frameworks from the ground up. Observing and integrating these models can help create a more balanced and adaptive technological future — one that values fairness and inclusivity over market dominance.

For the general population, cognitive capacity is one of the most valuable assets — guard it wisely. It is important to resist the slow erosion of agency that comes from overreliance on automation. Additionally, AI should be used as a partner, not a substitute. We should question its recommendations, explore diverse perspectives and continually refine our natural intelligence to form what can be termed “hybrid intelligence” — a deliberate fusion of human intuition and machine capability.

Equally important, we must hold the systems we interact with accountable. As citizens, employees and consumers, we should ask whether the AI tools around us are optimized merely for engagement or for genuine human well-being.

Individually and as a species, we have the unique opportunity, and obligation, to shape an algorithmic architecture that makes the new hybrid era a space in which every human being — regardless of their place of birth, their gender, the colour of their skin, the language they speak or the culture and socioeconomic background they inherit — has a fair chance to fulfill their inherent potential. Are we ready to take that chance and put it into practice?

The opinions expressed in this article/multimedia are those of the author(s) and do not necessarily reflect the views of CIGI or its Board of Directors.

About the Author

Cornelia C. Walther is a CIGI senior fellow, a visiting fellow at the Wharton Neuroscience Initiative/Wharton AI & Analytics Initiative, and an adjunct associate faculty member at the School of Dental Medicine at the University of Pennsylvania.