Season 2 / Episode 9

The Middle Powers Stag Hunt

We can gain more together, but unity is not a prerequisite.


Episode Description

AI governance is not a one-note issue. It spans geopolitics, science, philosophy and sociology, and its reach widens as it grows and touches more aspects of human governance and decision making. Unlike the Space Race, there’s little reason for international actors to limit collaboration, given the intangible nature of this progress and its broad reach into so many aspects of our lives across the globe. What stags are we missing by limiting ourselves to hunting rabbits?

Paul and Vass are joined by Boris Babic and Brian Wong of the University of Hong Kong to discuss the intersection of governance, philosophy, physics and more to tackle today’s beast of AI. Touching on both responsibilities and freedoms of governments and private actors, they open a window to a world in which AI advancements are shared and grown, not kept under lock and key to wither without sunlight.

Credits:


Policy Prompt is produced by Vass Bednar and Paul Samson. Our supervising producer is Tim Lewis, with technical production by Henry Daemen and Luke McKee. Show notes are prepared by Rebecca MacIntyre, Libza Manna and Isabel Neufeld, who also handles social media engagement. Brand design and episode artwork are by Abhilasha Dewan and Sami Chouhdary, with creative direction from Som Tsoi. Original music by Joshua Snethlage. Sound mix and mastering by François Goudreault. Be sure to follow us on social media.

Listen to new episodes of Policy Prompt on all major podcast platforms. Questions, comments or suggestions? Reach out to CIGI’s Policy Prompt team at info@policyprompt.io

Featuring

Brian Wong

Boris Babic


Boris Babic (guest)

When we think about races and the history of geopolitics, something like the space race comes to mind. And what's unique about that situation is that it has clear win conditions, clear when the race ends, what I'm going to get when it ends, what the value is. And when we look at competition in AI, it's really not like this. It's not clear that you gain something by not cooperating.

Brian Wong (guest)

I may live in a country that is nominally and procedurally democratic, and yet I don't feel like I'm any more capable of influencing the usage of AI and the implementation of AI and the safeguards, or lack thereof, in AI's development, than my counterpart in some non-democratic state. That I think is the first step to taking action.

Paul Samson (host)

Hey, Vass, how's it going?

Vass Bednar (host)

It's going well. How are you, Paul?

Paul Samson (host)

Good, good, good, good. Is it just me or does it seem like technology, especially AI developments are moving faster by the week? How do you do it? Any tips on how to keep up? How do you keep up?

Vass Bednar (host)

I do not have tips to keep up. I have a fantasy that I have many AI agents assisting me every day. And I have more and more productivity and it's just out of control. But I think just following along through podcasts, blogs, learning from what other people are doing, is really the way to go. Sometimes I feel more like an observer than somebody who's participating in AI progress. How about you?

Paul Samson (host)

Yeah, I agree with you. I love podcasts, whether they're just audio or video. And I used to feel like I was totally up on it, listening to a couple a week. I had a few, but now I've actually found that tougher in the last couple months. And AI is flooding the zone, or at least my zone. Maybe it's intentional. Maybe AI is trying to flood our zones, but I'm struggling a little bit with how much is happening.

Vass Bednar (host)

Maybe it's your algorithms and the agentic internet. But today, we have a pretty deep conversation lined up on the geopolitics of AI. It goes way beyond the issues of tech competition between states. And we really are going to be digging into how AI is shaping the international order.

Paul Samson (host)

Yeah. It's deep for sure. And it's not just theoretical. We're going to be talking about strategies, implications. It's all very real in how it's driving things. But exploring how artificial intelligence could shape issues like responsibility, fairness, and even cooperation at the global level, I think is a super interesting topic.

Vass Bednar (host)

And we'll get into what kind of international order AI can produce or reproduce. Could a more equal order emerge that's both more just and more strategically stable? These are some of those big questions coming up in our conversation.

Paul Samson (host)

Our guests today are two professors from the University of Hong Kong, Brian Wong and Boris Babic.

Vass Bednar (host)

Brian and Boris, welcome to Policy Prompt.

Paul Samson (host)

It's great to see you again, Brian. We met in Hong Kong and that was a great conversation. Boris, meeting you for the first time, welcome. And we wanted to start off today by... because we're going to be talking about geopolitics and AI, and we now hear geopolitics in the news every two minutes. That's almost like the top headline in the world right now. What is geopolitics to you? What are we talking about here? And then we'll talk about AI. But to start with, what is geopolitics?

Brian Wong (guest)

Yeah, absolutely. Well, first of all, thank you both for having Boris and myself on the show. And it's a real pleasure to be here. Now, on the front of geopolitics, perhaps I can take the lead and offer a tentative definition. It's really hard to pinpoint what exactly it is. You can say it's a study, it's a discipline, it's an analytical framework or a cluster of analytical frameworks, even. But by and large, we would take geopolitics, in the context of the book but also the work we do, I suppose, as the intersection between geological and geographical factors on one hand, and of course political forces and manifestations of power on the other, both internationally but also within countries, so to speak. And of course, the correlations and links between the domestic and the international are most germane in that regard. So, on the front of the geographical and geological, we can broadly conceptualize this landscape as comprising, first, territorial disputes over borders and positionalities. That's one cluster.

Then secondarily, the nature of the power: is it a land power? Is it a sea power, a maritime power? Is it neither, so to speak? Is it a landlocked power, a doubly landlocked power? Or, as Laos would like to call themselves, a land bridge power, so to speak. And thirdly, of course, the distribution of resources, such as energy, but also increasingly critical raw materials, including but not limited to rare earths and other metals that are highly relevant in a sustainable, renewable transition. Lithium, cobalt, nickel, you name it. And fourthly, on the geo front, of course, there's the climate as well, which determines the usability of land, the extent to which it's arable, and the produce and material goods that can be generated, really, from the territories available to a certain country. So, that's really what I call the geographical basket.

And then very briefly on the political front, when we think of geopolitics, it's tempting to see countries as homogenous entities. Oh, China versus US, Russia versus EU, which by the way is a supranational entity with 27 members and maybe more to come. But that's misleading, because as Boris and I argue in the book, many countries really cannot be reduced to singularities. They comprise multitudes, including different interest groups, different lobbying groups, different population segments and demographics, individuals with different religious faiths, denominations, socioeconomic stratifications. And of course, in some cases, you have a deeply federalized system. And even in the case of Canada and Australia, federalism is indeed a fairly salient phenomenon, so to speak.

So, we refuse to believe in and subscribe to the narrative that countries are wholes; instead we've got to take them as they come, as smorgasbords of conflicting, competing and occasionally coalescing interests as well. So, Boris, perhaps you can also jump in here on your work with geopolitics. And maybe that's something you can fill in on as well.

Boris Babic (guest)

Yeah. And let me just say as well, thank you both for hosting us. And great to meet you for the first time in my case. I think what's interesting to me about geopolitics in particular, it's like what characterizes geopolitical questions, instead of just political questions, full stop, is that they involve scarce resources whose distribution is unequal along geographic lines. Either due to a certain kind of like a comparative advantage added by human labor or natural resources. But that creates particularly vexing ethical and philosophical questions about what's a fair distribution, not just within national boundaries, but across national boundaries. And that I think or we think is particularly relevant when it comes to AI development in the next couple of decades.

Paul Samson (host)

Yeah. It's super interesting, because in a way there's a classic geographical frame here of the list of population size and military and resources and things. But you're suggesting that the political entity, the political geography is evolving potentially quite a bit on its own, but perhaps also precipitated by some of these new technologies and globalization and things. So, I think this is a very interesting dynamic of the new geopolitics.

Vass Bednar (host)

Absolutely. Boris, I'll stick with you as we maybe kick off on this next question. But maybe you can link us forward in terms of why and how AI development and systems are changing or challenging geopolitics. Is it AI itself changing the balance of power or is it more around the production of the associated resources?

Boris Babic (guest)

Both. The scaling of AI has really brought the geopolitical questions to the fore. So, 10 years ago, but actually even just five years ago, one could use the most sophisticated prediction systems or classification systems, machine learning systems, basically on your local machine. Or if you were doing something very complex, with one NVIDIA H100 at your institution. And so, that doesn't naturally raise these kinds of questions of, well, what will happen along geopolitical fault lines? But when we start to build out data centers that are just of unbelievable sizes, it's not, I need one H100 or I need 1,000 of them or I need 100,000. It's like, we're going to build a nuclear power plant, and next to the nuclear power plant we're going to build a whole infrastructure, and we're going to use that to power the AI models.

But now we can work with nations, for example, friendly nations near our borders, to share some of this energy or some of this data computing power. And that all of a sudden creates a host of political questions. So, I mean, one thing that we try to do is push the conversation towards geopolitics. Traditional questions in AI ethics, you could think about them as more traditional domestic ethical questions. What happens when you make a particular prediction and it turns out to be unfair or biased, or your system is not sufficiently private? These are all questions that one could address within a standard normative lens. But given where we're going in the future, we try to shift the conversation towards a geopolitical frame.

Paul Samson (host)

I always like to ask a little bit more about your backgrounds, to let the listeners in on what your origin stories are here. And you've come at this from a philosophical angle, from a data science angle, from a geopolitics angle. What drove you to this AI space, as in, this is the issue to work on now? What drove you to where you are right now in your own journeys?

Boris Babic (guest)

I went to law school and was practicing intellectual property law, at a time when I wouldn't have thought of IP law as having anything to do with machine learning or AI or anything like that. And in graduate school, I studied philosophy and statistics. And I think even during graduate school, I wasn't quite sure how law, philosophy and statistics would all combine. I think I thought that they were in some sense disparate interests. Although there has always been some literature in the law on how statistical evidence should be used in trials, how we should think about it to frame guilt or innocence judgments, how to compute damages and things like this. But what became really interesting was during my postdoc at Caltech: all of a sudden there were people, technical people in computer science and in statistics, though almost no philosophers, increasingly thinking about decisions made by machine learning systems from an ethical perspective and also combining the ethical perspective with a mathematical perspective.

So, if we think that these decisions are unfair, what makes them unfair? That requires a philosophical analysis. But if we want to make them fair, then how would we formulate fairness into an optimization problem and then constrain an algorithm so that it does what we think? And often, we get unintended effects. It's like, oh, you constrain it to capture this one definition, but actually you look at the decisions, oh, they're bad along another dimension. So, all of a sudden there's this whole new research paradigm. And it was effectively thinking about ethics, like what's fair, what's reasonable, what's sensible, what's moral. But also, like law, how would some statutes apply? If you think there's discrimination, how would that apply? What would be intentional discrimination?

And an algorithm is not bigoted or something or prejudiced, but if it's not, then how would you evaluate this? So, I think it was around that period, like 2019-ish, that people started to think about these questions in a way that brought together law, philosophy and statistics. And that's what I've been working on since.
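
To give a flavor of what formulating fairness as a constrained optimization problem can look like, here is a minimal sketch in Python. It is our own illustration, not code from the guests' work: the synthetic data, the demographic-parity penalty, the weight lam and the crude random search are all assumptions, chosen only to show how a fairness constraint can be folded into a loss and how tightening it produces the kind of trade-off along another dimension that Boris describes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic toy data (an assumption for illustration): features X, labels y,
# and a binary group attribute g along which we measure the fairness gap.
n = 200
X = rng.normal(size=(n, 3))
g = rng.integers(0, 2, size=n)
y = (X[:, 0] + 0.5 * g + rng.normal(scale=0.5, size=n) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(w, lam):
    """Standard log loss plus a demographic-parity penalty on the score gap."""
    p = sigmoid(X @ w)
    log_loss = -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))
    parity_gap = abs(p[g == 0].mean() - p[g == 1].mean())
    return log_loss + lam * parity_gap

def fit(lam, iters=2000):
    """Crude random search, just to show the effect of the constraint weight."""
    best_w, best_loss = None, np.inf
    for _ in range(iters):
        w = rng.normal(size=3)
        current = loss(w, lam)
        if current < best_loss:
            best_w, best_loss = w, current
    return best_w

for lam in (0.0, 5.0):
    w = fit(lam)
    p = sigmoid(X @ w)
    gap = abs(p[g == 0].mean() - p[g == 1].mean())
    acc = np.mean((p > 0.5) == y)
    print(f"lam={lam}: parity gap={gap:.3f}, accuracy={acc:.3f}")
# Raising lam typically shrinks the gap but can hurt accuracy or other criteria:
# the unintended effects along "another dimension" mentioned above.
```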

Brian Wong (guest)

And on my end, I was trained as a political and moral philosopher, having spent eight years at Oxford, reading philosophy, politics, and economics as an undergrad, then doing my MPhil and DPhil in political theory. And by and large, I suppose my interface and intersection with AI really came in almost like a pincer. The first prong of the pincer had a lot to do with theorization about non-ideal contexts. So, my MPhil thesis and research were centered around colonialism and the moral debt we accrued from historical injustices, which of course is incredibly salient now in light of the usage of emerging countries' labor, cheap labor, for data labeling. But also the harvesting of citizens' information through surveillance capitalism and surveillance authoritarianism to prop up AI systems and large language models, so to speak. And the second prong of my research as a theorist revolved around authoritarian regimes. So, I did my DPhil on citizens' responsibilities in authoritarian contexts.

AI is both an enabler of, but also a potential challenger to, authoritarianism. Which by the way, in my paradigm, I wouldn't necessarily say is something that we should treat as monolithically and unequivocally bad, actually. So, that itself is quite controversial. But approaching these questions of what we are to do under imperfect conditions of clear injustices or constraints upon individual agency, to tackle normative puzzles such as access to justice, such as the ability to understand our rights and entitlements as citizens, as individual people. That's what got me interested in artificial intelligence, really, the manifestations of its effects. Not just on power and how power's allocated, but also, if you deep dive into it, what does it mean for work and employment? What does it mean for democratization and emancipation of the grassroots in societies where AI is now basically either controlled or wielded by very powerful large corporations or by unassailable and also impenetrable state entities?

So, that's really one prong and one side of the pincer, so to speak, through which I approach this question. The other prong of the equation revolved more around geopolitics. So, I will be fully transparent here. I work with multinationals. I work with hedge funds and banks around the world, more specifically, perhaps, with a concentration in Asia, helping them understand geopolitics and wrap their heads around Chinese foreign policy: what's going on in ASEAN, China's relationship with the EU and India, and also the US as well. And the reason why I'm fascinated by these questions, of course, is I was trained by and large in Chinese politics, in the study of China. Not just through the lenses of elite politics or political economy, which I'm glad is receiving a revival. And frankly, when I was studying in China as a student, there wasn't much of that openly discussed in the syllabi, even though I do think it's highly germane, the works of Yuen Yuen Ang, Dani Rodrik, amazing people and writers in their own right. But also through the lenses of ordinary citizens on the ground.

Now, what are they thinking about AI? What is \[foreign language 00:15:29\], which is this raising your own lobster? What does that mean in the context of China's economic transition as it aims to pursue so-called new quality productive forces? Which is a term that the CPC likes to trot out to really encapsulate its attempts to bolster China's total factor productivity. And so, looking at these issues through those lenses got me thinking about questions of agency on the part of small and medium states. So, where lies the agency for Malaysia, where of course semiconductor manufacturing is now taking off and the renewables transition has seen revitalized interest, as well as for other countries with chokeholds, or choke points rather, with regard to rare earths?

Again, they're going to play an increasingly important role in a modern-day, contemporary, neo or nascent military industrial complex, and also in the integration of supply chains along those lines. So, in thinking about small and middle state agency, and pondering and reflecting upon China's political economy, and of course China's relationship with the USA, which is such a complex quagmire unto itself. All of these are essentially the starting points that got me interested in the missing elephant in the room that I had hitherto not been exposed to: artificial intelligence. And so, that completes the second prong of the pincer, so to speak.

And that's also how I got into artificial intelligence intellectually, which was an interest that was thankfully buoyed and also amplified by my being very fortunately surrounded by fantastic colleagues, like Boris, at the University of Hong Kong. Where in our department, we have a lot of theorists and philosophers who are working on AI governance, ethics, and also regulation, as well as, of course, other very important philosophical puzzles as well.

Vass Bednar (host)

Maybe we could talk about some of those puzzles. For instance, if there was some kind of AI accident that had a global impact. I was reading about how chatbots are increasingly resisting their instructions. There's some evidence of that. Who would be responsible? How would we think of the governance response? Super open to the example staying theoretical, but I think it'd be fun to talk about that and think about what governance responses could look and feel like.

Brian Wong (guest)

Yeah, definitely. And in all honesty, when it comes to responsibility for injustices or harms that involve AI, I really think there's a dearth of systematic theorization that covers all aspects of the equation. It's a bit like, if you would excuse the slightly loaded imagery here, the blind men feeling an elephant. One of them feels for the trunk, another feels for the legs, a third feels for the tail, and a fourth feels for, I don't know, the tusks. And they say, "Oh, that's the elephant." This is where I fear a lot of the contemporary theorization of AI is stuck, in that you've got really two main schools of thought that Boris and I address in the book, and also look at separately in our own works.

One framework is the liability model, which posits that responsibility for harms caused using AI can be identified, or should be traced, to identifiable agents, collective or singular or individual, and we just hold them morally responsible. Because they intentionally perpetrate injustices, because they are apparently causally and morally relevantly implicated in a causal sequence that led up to the production of these harms. So, some concrete examples: an engineer intentionally training a chatbot to produce really racist content because he is himself a racist. Or someone in a military command who knows fairly well the dangers and the perils of having untested LLMs used in targeting potential combatants, and yet insists we have to use it now because now's the time, and wants to dodge responsibility and essentially benefit from the impunity that comes with, of course, the nascent nature of the technology's emergence.

In these cases, it's fairly straightforward for us to say there is moral responsibility. We assign the responsibility, we may prosecute that criminally, legally, or via alternative means. But what this model doesn't deal with, as a fundamental paradigmatic flaw, is cases where responsibility is more diffuse. Where there's a dearth of intentionality, where there's an insufficient level of awareness and understanding concerning what the outputs of these models are. And we haven't even talked about non-generative instances of models, which frankly are where AI tends to be more ubiquitously used, from facial recognition through to processing and calculations within infrastructural energy grid decisions and micro decisions, so to speak. And in these cases, to pursue the liability model in its simplicity would compel us to fall into the trap of basically enabling those in moral crumple zones to be scapegoated. The scapegoating of individuals who are most causally proximate to the harm in question.

So, talking here about, say, the frontline member of the design team who's involved in training and in applying the inputs and trying to develop a model. Alternatively, the low-level bureaucrat who signs off on the usage of an LLM in a certain context of government communications. Or even indeed an informed teacher who shares an LLM, or shares an AI subscription service, with a student, thinking this is how they can get the students to learn. But the truth of the matter is that these individuals very often have neither the agency nor the understanding of what they're doing. And so, the liability model cannot necessarily provide a reasonable solution. And to impose it unduly could lead to our going overboard, really, and holding too many individuals accountable on spurious and morally dubious grounds.

On the flip side, and that is where we go on to a second and rather different strategy that we might see as fruitful, which is the social connection model as advocated and championed by the wonderful, wonderful late political theorist Iris Marion Young. And Young basically says, "Well, look, maybe the correct way of thinking about responsibility is not to look at blame, not to think backwards and be backward looking. We should instead focus on the future. Let's forget about blame. Forget about moral liability. Forget about the individualized nature of liability. Let's focus on collective political responsibility. And all individuals who are connected to injustices should therefore be held responsible in a blameless manner, in a manner that allows for them to take up these responsibilities, to do good, to serve, without being pinpointed, scapegoated, or held morally liable, for blame is not productive."

Blame is quite damaging sometimes, because it's invidious. And of course, Young has her advocates, and there is immense value in what she contributed before her untimely passing. And I'm a massive fan of Young's political theory as well. But both Boris and I feel that if we take this social connection model to the extreme, then what it does is overlook the agency of many individuals, many powerful, many deeply, deeply maliciously intentioned individuals who see the accruing and amassing of power through their AI-enabled military industrial complex as a means of basically accruing immense wealth, immense control and influence over governmental policies.

And yet they can run away with it, because they can say, look, it's not my fault. It's the engineer, it's the other folks in the system. I have no actual involvement in relation to the injustices. Or even if I do, then I shouldn't be blamed for it because I'm just connected to the injustice in question. So, what's needed here, as we advocate in the book, is a hybridization and a synthesis of the two, but also the acknowledgement that what's really often required, what's behooved here as a matter of necessity, is structural reform. Structural reforms that bolster transparency. Structural reforms that encourage accountability in the form of responsibility uptake, that align profit-making incentives with responsibility-uptake incentives and motivations.

And finally, and ultimately, perhaps the acknowledgement that you cannot govern and manage AI unilaterally as one country, as one government. You've got to do it multilaterally, plurilaterally, and across national and also regional boundaries as well. So, that's a moral and a political question. But on the front of legal regulation and also developments on that front, I think I would defer to the expert on that matter, who's Boris, and perhaps Boris can also share some of his thoughts on the question at large as well.

Boris Babic (guest)

Yeah. Let me add a little bit, though I am not capable of producing nearly as many words as Brian. But I think an example of what you had in mind could actually be something like OpenClaw, an agent that can cause quite a lot of harm, and in a way that transcends borders. And what I think is nice about using this as an illustration is that in some ways it's great, this agent. I mean, it's capable, it's powerful, it's open source. You can set it up yourself. You give it all the permissions that you want to give it, and you have to give it a lot of permissions, and then it can execute a lot of tasks on its own. And so, all of these are things that make it seem very ideal. Everyone wants transparency and everyone wants to consent to certain kinds of permissions.

But once you do all this, wow, it can cause a lot of harm. If you take something like the EU AI Act, which has been years in development, a swollen legislative program, and you ask to what extent it can mitigate the damages that one might see from, let's say, OpenClaw-type agents causing unanticipated harm? Generally, not a lot. The agent is actually very nice in terms of its transparency. It's very nice in terms of the fact that you consent to have all this on your machine and to do it. And so many of the concerns that were motivating them, like big evil black-box corporations with super expensive models doing things unbeknownst to you, actually that's not really the case here. And so, much of this legislative program is inapplicable to what I think will be the most interesting harm, unanticipated harm.

For that reason, I mean, I pick on the EU AI Act, but it's not just the EU AI Act. I think that would be true of domestic legislation in most states. It's very hard to anticipate where harm will be. And it's very hard to address it, to address the technical problems, given the way that legislation comes together through various political compromises. One thing that we do focus on, though, is that you need some kind of multilateral institutional oversight. And this is what Brian ended with, because none of these domestic pieces of legislation are going to work. Litigation tends to be after the fact. So, figuring out responsibility after the fact is not going to be that effective. One place where we don't have nearly enough institution building right now is when it comes to multilateral institutions, together with some sort of mediation, arbitration, dispute resolution that goes with those institutions. That I think would be particularly important.

Paul Samson (host)

Wow, there's so much to unpack there. I think those were great comments from both of you. It does strike me that what you're suggesting is that the liability regime is evolving. It's going to have to be a hybrid regime ultimately, as you said. I still think that there's a little bit of a black box problem with AI algorithms, where if you're trying to pinpoint wrongdoing, whether it was an engineer or somebody else, is it going to be possible to do that? Like you could say, sorry, I made a mistake. Is that the same thing as intentionally doing something? Is there going to be equal liability? And then there's just the ability to unpack something and say, okay, well, we identified the smoking gun.

So, the legal regime is facing unprecedented stress from this, it seems. And then the moral side is in the spotlight. So, fascinating area. And I think transparency, I just want to underline the word transparency, which I think is critical here, that a lot of what is going to drive us to a better place probably hinges on more transparency than there currently is in the system.

Vass Bednar (host)

Policy Prompt is produced by the Center for International Governance Innovation. CIGI is a nonpartisan think tank based in Waterloo, Canada, with an international network of fellows, experts, and contributors. CIGI tackles the governance challenges and opportunities of data and digital technologies, including AI and their impact on the economy, security, democracy, and ultimately our societies. Learn more at cigionline.org.

Paul Samson (host)

So, one thing that comes out in your book and in the media when you look anywhere, and it's kind of like the idea that the AI race is a race, but also that it's really between the two superpowers. It's between the US and China. And on many measures, there's such a gap between those two countries and everyone else. You suggested it's not a two-player conversation, both from a game theoretical perspective, from a dynamics perspective. So, you seem quite optimistic about other countries being able to leverage some space here, not necessarily just the EU and the other biggest players, the next level down, but potentially even some smaller players. Can you unpack that a little more for us? Why is it not really just dominated by the two big powers here? And how are those dynamics evolving?

Boris Babic (guest)

Let me address the game theoretic point and then I will pass it to Brian to discuss more the role of middle powers and other states beyond the US and China. On the game theoretic point, our argument is, it's sort of like a step-by-step argument, which is to say that the typical framing is that interstate AI dynamics, particularly between the US and China, are characterized as a race. And I think when we think about races and the history of geopolitics, something like the space race between the US and the USSR comes to mind, or maybe nuclear armaments races. That looks like a race. And what's unique about that situation is that it has clear win conditions and it also has typically a unique equilibrium. It's clear when the race ends, what I'm going to get when it ends, what the value is.

If you're going to win and you're going to spend all these resources and I'm going to lose, then I shouldn't basically finish second and that will be my rational action. And when we look at competition in AI, it's really not like this. It's not clear that you gain something by not cooperating. So, one thing is on the technical side, the algorithms, the underlying math, the underlying statistics, this has been all very well-understood for quite some time. So, it's not like there are secrets, like nuclear secrets that we can keep from each other and gain a competitive edge. They're all very well known. There are different resource advantages, but it's not obvious that it's the best for you to keep these for yourself. I want to plug a really recent example here that just happened a couple days ago, which is that the biggest computer science conference, NeurIPS, made statements that they would not accept papers that are published by any institution that is on some US restricted list.

And that's a lot of public institutions, many in China, who seemingly, this hasn't been resolved. It's not clear whether they'll stick with this because there was a lot of backlash, but seemingly would not be able to publish in NeurIPS. That's a huge, huge development in computer science. Now one thing is, I mean, it seemed like a politicized statement or that it came from some kind of political pressure, because no conference ever, no journal ever thought of saying such a thing, that if an institution is on some sanctions list, that an employee there cannot submit a research article for publication and peer review. What's interesting about it is you could criticize it on moral grounds and you could criticize it on political grounds. It's basically wrong in every way, but it's totally not clear to me how that would be in the US self-interest to do something like that.

It's like saying, "Hey, we have this platform for disseminating the leading knowledge." And it's like an American platform because it's an American entity. And it's like, "Well, all of you other people, we don't want your leading knowledge." That sounds like a terrible idea. I mean, if you were in the US, really you would want to maintain such a platform. And I think this kind of thing comes up in this race narrative all the time. There's a sense of like, "Oh, we have to do things ourselves." And you're like, "But that's not even in your interest to do things yourself." If you get through, it's not obvious that there's a competitive edge. To bring this back to game theory, what we then point out is that when you think about what are the rational actions in the space of possibilities, the game doesn't really look like a race. And it doesn't really look like a typical prisoner's dilemma situation, where none of us want to cooperate.

The model that we like to draw on is a very simple model in economics. It's called a stag hunt game. And in a stag hunt game, it's just a simple model where we can choose between hunting a large animal together. That would be the stag. And hunting a small animal alone, a rabbit. So, the assumption is that I can catch a rabbit alone, but I need your help to catch the bigger animal. If we both collaborate, that's the best because we want the bigger animal. That's better than two rabbits. If neither of us collaborate, then we just have a rabbit. But the thing is, both of those are equilibrium conditions. So, both of us non-cooperating, that's rational and that's Nash equilibrium. And both of us cooperating, that's also rational and that's Nash equilibrium. And right now, we're stuck in this worse Nash equilibrium, basically the rabbit hunting one.

And the trick is, the challenge is like, how do you encourage states to move to the better one? But that's not something that math can do. That's like a political question. That's a policy question. That's something for all of us here to do.
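
As a concrete companion to Boris's stag hunt framing, here is a minimal sketch in Python. The payoff numbers are our own illustrative assumptions, not figures from the book; they are chosen only to reproduce the structure he describes, where both mutual cooperation (stag, stag) and mutual defection (rabbit, rabbit) come out as Nash equilibria, with the cooperative one strictly better.

```python
# Illustrative stag hunt payoffs (first entry = player A, second = player B).
# 4 = a share of the stag, 3 = a rabbit caught alone, 0 = hunting the stag without help.
# These numbers are assumptions, picked only to match the structure described above.
PAYOFFS = {
    ("stag", "stag"): (4, 4),
    ("stag", "rabbit"): (0, 3),
    ("rabbit", "stag"): (3, 0),
    ("rabbit", "rabbit"): (3, 3),
}
ACTIONS = ("stag", "rabbit")

def is_nash(a, b):
    """A profile is a Nash equilibrium if neither player gains by deviating alone."""
    ua, ub = PAYOFFS[(a, b)]
    a_cannot_improve = all(PAYOFFS[(alt, b)][0] <= ua for alt in ACTIONS)
    b_cannot_improve = all(PAYOFFS[(a, alt)][1] <= ub for alt in ACTIONS)
    return a_cannot_improve and b_cannot_improve

for a in ACTIONS:
    for b in ACTIONS:
        if is_nash(a, b):
            print(f"Nash equilibrium: ({a}, {b}) with payoffs {PAYOFFS[(a, b)]}")
# Prints both (stag, stag) and (rabbit, rabbit): two equilibria, one strictly better.
```

For comparison, the same check run on a prisoner's dilemma payoff table would return only mutual defection, which is part of why the choice of model matters for how one reads AI competition.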

Vass Bednar (host)

I feel like the four of us could totally get a stag together.

Boris Babic (guest)

Would that be a stag do party though? Would that be a stag do?

Vass Bednar (host)

Exactly. Why not? Why not? I wanted to probe a little bit more on that people power level. Through your work, you've suggested that regime type matters less than power distance between the elite and disempowered people. What can be done, if anything, to mitigate or reverse this centralization of AI power?

Brian Wong (guest)

Well, absolutely. And perhaps I can address this question and also briefly circle back to what was said just then, what we were discussing concerning the agency of middle powers, Vass. Look, in all honesty, I have got to say, one of the most impressive geopolitical moments I've seen over the past five years or so was not so much the speech that was delivered by Mark Carney, your prime minister. But in fact, it was the reaction to Mark Carney's speech that I found so spectacular. Of course, there was a lot of attention, a lot of hurrah and almost a sense of jubilation focusing around what he was saying. But I don't think what was remarkable was what was said, but the fact that he said it at such a critical juncture, with such a manner of trenchant confidence, in the face of provocation and bellicosity, of course, from Washington.

And to add onto that, what I found most powerful about his speech was the way he framed a lot of these commonly known concepts about acknowledging multi-alignment, about not taking sides, about standing up to hegemonic powers, regardless of whether they are democracies or autocracies, authoritarian regimes or popularly elected and supported republics. At the end of the day, it's about naming the reality. And that's a point I want to, again, highlight really as a core takeaway. Name the reality. Name the fact that even though you are nominally in a country where you can vote for representatives, who then vote for the president in an electoral college, you cannot possibly vote for the companies who are generating and, of course, sucking up the largest amount of computing and computational resources when it comes to the AI race within, of course, the domestic ecosystem.

That you cannot determine who regulates these individuals, because they're either influenced or shaped heavily, or captured even, by powerful lobbying groups affiliated with an emerging military industrial complex. You cannot say no to large companies eavesdropping on your conversations and working in cahoots with government agencies to spy on your email conversations or your WhatsApp messages. Of course, you can say no by opting out, but the exit costs are too high in those cases. None of these manifestations of malaise, folks, takes place in the context of an authoritarian state or a dictatorship. It takes place in one of the oldest democracies on Planet Earth today.

And that I think goes to show that ultimately the regime type fixation that we are so bent on, the thought that only authoritarian states would be unaccountable, is increasingly destabilized by the fact that you can have authoritarian corporations. You have authoritarian interest groups. You can have authoritarian sub-national entities and movements who repudiate democratic norms, who reject election results, who seek to overturn the democratic mandate of the people within a democracy. And that is also why, for us to push back, I don't have a... I don't have a clear antidote, but I think it all starts with naming the reality.

And just being more upfront about the fact that yes, I may live in a country that is nominally and procedurally democratic, and yet I don't feel like I'm any more capable of influencing the usage of AI and the implementation of AI and the safeguards, or lack thereof, in AI's development than my counterpart in some non-democratic state, halfway around the planet or 12 time zones away, so to speak. That I think is the first step to taking action. Then comes, of course, praxis. And to me, as someone who studies praxis in authoritarian contexts, I would say we've got to get creative. Get creative, be creative, do something that's creative. And that comes from identifying, of course, weaknesses and fissures within powerful coalitions, and building contingent coalitions of the willing to reshape both formal regulations and also de facto implementation.

The sort of in-reality implementation of said regulation. So, I'll take a very concrete example here just to ground a lot of this. When we speak, of course, of AI's most pernicious effects and implications or impacts, there's a tendency to associate it with war. But by and large, what's perhaps omitted here is the importance of AI as well on the front of regulating and also policing infrastructural usage, energy usage, so to speak. And thus there's a tendency to see and to think of AI as just the outputs, while ignoring and setting aside, of course, the hidden and the abstract inputs, so to speak.

And this is where I want to give my European friends in Brussels a lot of credit. They have actually sought to develop what they call a strategic roadmap for digitalization and AI in the energy sector. So this, of course, came on the back of heaps and heaps of comments, of statements, of lobbying efforts by small and medium enterprises, by small and middle states within the EU, and also by academics who tell Brussels flat out, you cannot just look at AI outputs and think, "Okay, you're going to win on the frontier." A, that's not the modus operandi of Brussels. And B, that's not how you win. You've got to ask yourself, where's the energy going to come from?

How can you maintain a sustainability and renewables transition, and also tackle climate change, even amidst all of the immense and hyperintense pressures to develop and speed ahead when it comes to AI? And that is where we are seeing attempts on the part of Brussels to incorporate, to internalize and bring into the fold of the conversation these thoughts and inputs on how to strike the right balance, and indeed potentially use AI to be more efficient in innovating around renewable energy. And innovating around manufacturing methods, to both draw upon, of course, Chinese electric vehicle manufacturing methods, but also bring that in, endogenize that and domesticate that in the context of the EU.

I'm actually cautiously bullish about the revitalization of manufacturing in the EU in these various sectors, as a result of precisely Brussels waking up to the fact that no longer could it count upon the good graces and magnanimity of Beijing or Washington, in order to get its energy security or indeed its economic security and technological security in order. It's time for them to wake up. And I'm glad they are waking up, so to speak. So, that's what I would say in answering your question about agency and also paths of change. But just one very quick note on where small and middle states, in my view, and in our view indeed, still have a very, very significant room or space for creative maneuvering if they look in the right direction.

And here I want to talk about Kenya. Kenya was amongst perhaps the first African countries to have rolled out and unveiled a comprehensive national artificial intelligence strategy, which came out, I believe, in March last year, March 2025. And they made it very clear they're not going to try and out-compete the US or China on frontier-scale or open-source large language models. They're not working on another Llama, another Mistral, another DeepSeek or Alibaba's Qwen, and of course all the big players that you see in Silicon Valley right now. Instead, their approach is oriented around fine-tuning and adapting global base models, say Llama 3 and Gemini, to bring these models to the African context, incorporate them into the African context through drawing upon African data.

Kenyan data, of course, chiefly being one of the primary subsets of African data in that question. And when it comes to the linguistic side of things, Swahili is heavily featured in the LLM development within Kenya. And there're also attempts to use AI for, I believe, malnutrition forecasting and amelioration, precision farming and improving agricultural efficacy, and even in improving primary healthcare access and precision as well. These are very specialized models. These are models that are, of course, derivative. They're built off the stack to some extent of other countries. And because Kenya knows how to stay out of the limelight when it comes to geopolitically sensitive and contentious dimensions of the equation.

And by focusing on these niche but important applications that have applicability and transferability outside and beyond merely the borders of Kenya, that is how companies and labs in Kenya are building LLM-powered solutions for the digital landscape. Despite the dearth of a lot of the resources and also natural wherewithal that you can see in, say, countries in the Gulf, with the immense energy reserves and availability there, but also of course in Canada. Canada, in my humble opinion, as I said in a recent high-level seminar between policymakers in India and Canada, can play a really important role in shaping the AI conversation with, of course, its ever-increasing energy production capacity.

And also, its drive towards not just energy autonomy, but also energy leverage in the global geopolitical landscape as well. So, if Kenya could do it, and if Canada and the Gulf and the EU are relevant in the conversation, why exactly should we accept the narrative that this global race, if there is one, which I agree with Boris there isn't, is a two-player game? It's not a two-player game. It is a multiplayer game. Some might call it a three-body problem. I would call it a many-body problem, to paraphrase Liu Cixin, but also, more recently, Vuk Jeremić in his speech concerning the three-body problem of geopolitics today.

Paul Samson (host)

People in middle powers around the world are clapping at your comments there, about the fact that they have some power and they can do something here. What does it actually look like? So, you mentioned what Nairobi's doing. That's very interesting. I think there are a lot of middle powers out there. There's Indonesia, there are countries like this that probably have not asserted middle power, but could do so. Now, Canada and others are planning to do things, but could middle powers come together for some more integrated collective conversation around safety, transparency, and these kinds of issues?

Shouldn't they be doing that? Is that the extension of what you're saying? Why aren't those middle powers leveraging their power collectively here and then creating some best practices that would have to be paid attention to by the superpowers that are doing their own thing right now? Maybe just a quick comment on that, and then we're going to move to a final question that Vass has. So, to both of you, what should middle powers be doing specifically here to advance this direction?

Boris Babic (guest)

What they can do is focus on what their comparative advantage is. So, there are going to be states that are richer in resources or states that are richer in infrastructure or states that are richer in labor, capital developments and so forth. I mean, what I think they can do is leverage those. And I think middle powers around us, such as Malaysia or even Vietnam, have done some of this, in developing quite a lot of emergent tech industries and in doing some things that might be harder for larger states. And in terms of can they kind of focus on things like AI safety mutually? Yeah. But I guess they face a coordination problem, to some extent. One thing that we can highlight and that we try to do throughout the book, which I don't know that it solves the coordination problem, but it's a helpful point to remember that they have much more aligned interests than they may realize.

Brian Wong (guest)

Just to add on to what Boris said. The key is to recognize that unity is not a necessary prerequisite, for alignment can be contingent, can be fluid, and is in fact often conditional. And alignment could be a better end objective to strive towards for these middle powers than unity. It's a bit like ASEAN, the Association of Southeast Asian Nations. And of course, for ages, for decades, folks in ASEAN, very smart people, have been quibbling and debating over ASEAN centrality, ASEAN unity. How can we get ASEAN to agree? How can we get ASEAN countries to come together and do something together? Only, I guess more recently, to use the words of Ambassador Bilahari Kausikan, who is of course a very prominent diplomat from Singapore, maybe what we need to look at here is ASEAN usefulness. So, ASEAN is only central if it's useful to its members.

And symmetrically, if we look at some of the more successful constellations and arrangements in shaping geopolitical discourses and narratives over the past 10 years: the Quad, pretty prominent, of course; RCEP; BRICS even, which is a topic on which I co-authored a book last year.

Paul Samson (host)

CTPPP.

Brian Wong (guest)

CPTPP. Yep, that's right. And also, even to some extent, the Belt and Road Initiative. There's no real unity across many of these organizations and blocs. But what you have here is a handful of issues on which these member states and member powers find room for alignment. And so, they produce these temporary memorandums, documents and statements that affirm their positions, but they also put their money where their mouths are. That's where the key lies. And you don't need to have a unified bloc in order to see to the accomplishment of these objectives. So, neither Boris nor I would believe in or advocate getting everyone around the world, almost 200 countries, to sign a document, a joint declaration over AI. That might be nice from a PR point of view, but that's not how we're going to get real AI safety and agreements on not using AI for, say, lethal purposes that are wholly automated.

That's not where we're going to get these agreements to come through. So by and large, perhaps in settling for the less ambitious and more realistic goal of alignment over unity, that is the frame that we can use in order to help middle powers navigate these deeply turbulent and also disorienting times as we speak today.

Vass Bednar (host)

My final question is going to feel like a tiny bit of a swerve, but we're also wondering where and how AI and writing can fit into all of this thinking. Generative AI is driving a breakdown in trust, whether it's writing or images. It's just becoming harder and harder for everyday people to trust the things that we see and even maybe the people around you. It kind of puts everyone on edge and on the defensive. What can we do about that in the immediate term?

Boris Babic (guest)

That's a good question. I'm just worried I don't have a particularly insightful answer. I share the concern. And I think it's almost like it's a little bit fascinating to me to see how quickly that has become a concern across disciplines, which I thought were so protective of writing as their unique expression and skill. Many philosophers are happy to incorporate AI as much as they can into their workflow. Of course, our students sure are. As are lawyers. And I mean, I remember when I was practicing law, you really think about this as a craft. So, I'm a little bit surprised to see how quickly people would be willing to use AI to substitute that craft, but I don't know what we can do about it.

Vass Bednar (host)

And that's okay. We don't have to have all the solutions.

Paul Samson (host)

I don't think we can solve that.

Vass Bednar (host)

I think it's okay to be stuck together too.

Paul Samson (host)

I've got one idea. I've got one idea on it, but I'll come in after Brian.

Brian Wong (guest)

We cannot hold back the river. The river is already here. We cannot build a dam, for the river will tear it down. But what we can do is to find the right boat and sail with the river, not against the river. And yet at the same time, beyond merely turning our boats not against the current but in alignment with the current, we must also remember that we own, and we must control, the very boat that we should steer in the right direction. That to me is what I would say on the front of AI and writing. But it also applies to other aspects of life as well, such as relationships. But I'm not going to go into detail there because that's peripheral or on a tangent.

At the end of the day, when it comes to writing, I'm a firm believer that writing is powerful for two reasons. One, it delivers and it serves a function of communication. But two, also because it communicates something unique about the speaker, or rather in this case, the writer of the text in question. On the former, AI is no doubt an amplifier, and there's no holding it back. I personally do use AI quite regularly in composing dry and boring email responses to dry and boring emails in my inbox. Nothing against the writers, who I'm sure are neither dry nor boring, but it's just that the emails are very dry and boring.

And yet on the other hand, on the second front, on uniqueness, one of my greatest fears is not so much just losing my life, but losing the sense of the self. To lose a good sense of who I am, to become merely another, kind of like the agents you see in The Matrix. I fear that all the time. And that's because I've realized perhaps one source of value, if not the primary source of value, for a human life is not so much the biological, but the unique identitarian. And that is, we are different, we are separate, we are unique, we are detachable from others. And so, in order to preserve this uniqueness, which is as much a source of moral affirmation as it is of psychological comfort, in my humble opinion, we've got to shift from writing to talking and from talking to in-person interaction.

So, my humble prediction is that in the next three to five years' time, we're going to see a return in favor, or a resurgence in popularity, of in-person examinations. We're also going to see a greater emphasis placed by employers upon your ability to build bonds and relationships with others, because sure, AI can mimic speech. AI can actually supersede speech. AI can supersede human speech, that is, and also human communication, even deepfakes of audio and video, no problem. What AI cannot replace, at least as of now, as of the next 10 years, and dare I say in the next 20 years, is the face-to-face interaction that builds up the human.

And so, from my perspective, the antidote lies less with looking at what we can do in training individuals. Of course, education matters, training matters and all of that. AI literacy matters. But the real key, the crux of this, is how can we encourage individuals, when the temptation is so overwhelming for them to do, how can they be instead of merely doing? That I think is the antidote to the malaise concerning individuality and identity in the era of AI.

Paul Samson (host)

This is a bit like the age of Socrates, when they were worried about writing emerging and undermining the human contact and things. I feel like we're going back to some of those moments. It's been fantastic to have both of you here today and your comments were massively interesting across so many topics. Thank you for your time today.

Boris Babic (guest)

Thank you very much.

Brian Wong (guest)

Yeah, thank you.

Vass Bednar (host)

I mean, I could talk to Boris and Brian for a lot longer, but they had to head out for their stag hunt, so we had to cap it there. I know you and I have so many questions that we also wanted to ask them.

Paul Samson (host)

We need to be part of the stag hunt though. They're waiting.

Vass Bednar (host)

Yeah. Do you guys really need me for that? No, I'm kidding. I really appreciated hearing their bridging of philosophy and ethics with their observations on geopolitics and what's possible here, what's probable here. What stood out for you, Paul?

Paul Samson (host)

I want to follow up on your last question, which was about, well, what do we do with this crisis in human writing and expression. And it impacts so many things, not just academia and research and things, but people's lives. The email that I send you, is it me? Is it kind of a sloppy AI email? I think what's going to be really important is that there is some kind of certification or verification of human agency, of human in control for platforms that are saying they are authoritative voices and they are authentic voices. I think for think tanks, it's going to be important. For media organizations, I think it's going to be critical and it'll have to be real.

There'll have to be a lot of transparency around how AI will be used. It's absolutely going to be used. It's being used now more than we even know. And it will become even more mainstream. But we'll have to be very clear about what is actually human stamped, as opposed to something that's just kind of an email response to an AI-generated email in the first place. That's like a never-ending ping pong match that can happen between bots, and they can have that over there, but we've got to have human conversations somewhere.

Vass Bednar (host)

Yeah. And we have to bring back an appreciation for that, even though it's more demanding and more time intensive. So, I appreciated hearing that too. And that's why I enjoy the time I get to spend with you thinking and doing this work, instead of sending my AI agent to the podcast.

Paul Samson (host)

Likewise. And then we meet in person for stag hunts on the weekends and things like that once in a while.

Vass Bednar (host)

Totally.

Paul Samson (host)

It's hard to coordinate it all.

Vass Bednar (host)

When the weather's good.

Paul Samson (host)

When the weather's good. On frozen ground, the stag hunts are less effective. Yeah.

Vass Bednar (host)

Yeah. I'm so glad you met these scholars and brought them to Policy Prompt, because I would not have come across their work otherwise.

Paul Samson (host)

Yeah, it's great. We want a global conversation here and there's a bit of a bubble sometimes around certain conversations. And so, it's great to bring people like that in that are sitting in such a different spot and have very different priors.

Vass Bednar (host)

Talk to you later.

Policy Prompt is produced by me, Vass Bednar, and CIGI's Paul Samson. Our supervising producer is Tim Lewis with technical production by Henry Daemen and Luke McKee. Show notes are prepared by Lynn Schellenberg, social media engagement by Isabel Neufeld, brand design and episode artwork by Abhilasha Dewan and Sami Chouhdary, with creative direction from Som Tsoi. The original theme music is by Josh Snethlage.

Please subscribe and rate Policy Prompt wherever you listen to podcasts, and stay tuned for future episodes.