Such a Coordination Regime could also exist in either a unilateral scenario, where one team consisting of representatives from multiple states develops AI together, or a multilateral scenario, where multiple teams simultaneously develop AI on their own while agreeing to set standards and regulations (and potentially distributive arrangements) in advance. By failing to agree to a Coordination Regime at all [D,D], we can expect the chance of developing a harmful AI to be highest, as both actors are sparing in applying safety precautions to development. [47] George W. Downs, David M. Rocke, and Randolph M. Siverson, "Arms Races and Cooperation," World Politics 38, no. 1 (1985): 118–146.
Actor A's preference order: CC > DC > DD > CD; Actor B's preference order: CC > CD > DD > DC. Nonetheless, many would call this game a stag hunt. Moreover, the usefulness of this model requires accurately gauging or forecasting variables that are hard to work with. It would be much better for each hunter to give up the total autonomy and minimal risk of hunting alone, which brings only the small reward of the hare, in exchange for cooperating on the stag. [9] That is, the extent to which competitors prioritize speed of development over safety (Bostrom 2014: 767).
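Orderings like these can be checked mechanically. The following is a minimal sketch (the function name and classification conditions are my own, not the author's): it takes preference ranks for the four outcomes, named by (own move, opponent's move), and reports which canonical game they describe.

```python
# Classify one actor's ordinal preferences over a 2x2 game.
# Arguments are preference ranks (4 = most preferred) for the four
# outcomes CC, DC, DD, CD, named by (own move, opponent's move).

def classify(cc, dc, dd, cd):
    # Prisoner's Dilemma: defection strictly dominates, yet mutual
    # cooperation still beats mutual defection.
    if dc > cc > dd > cd:
        return "Prisoner's Dilemma"
    # Stag Hunt: mutual cooperation is the best reply to cooperation,
    # defection is the best reply to defection, so CC and DD are both
    # pure-strategy equilibria.
    if cc > dc and dd > cd:
        return "Stag Hunt"
    return "other"

# Actor A's ordering from the text: CC > DC > DD > CD
print(classify(cc=4, dc=3, dd=2, cd=1))  # Stag Hunt
```

Swapping the top two ranks (DC > CC > DD > CD) yields a Prisoner's Dilemma, which is the comparison the surrounding discussion turns on.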
Therefore, if it is likely that both actors perceive themselves to be in a state of Prisoner's Dilemma when deciding whether to agree to an AI Coordination Regime, strategic resources should be especially allocated to addressing this vulnerability.
Still, predicting these values and forecasting probabilities based on the information we do have is valuable and should not be ignored solely because it is not perfect information. Here, both actors demonstrate high uncertainty about whether they will develop a beneficial or harmful AI alone (both actors see the likelihood as a 50/50 split), but they perceive the potential benefits of AI to be slightly greater than the potential harms. Table 5. If the regime allows for multilateral development, for example, the actors might agree that whoever reaches AI first receives 60% of the benefit, while the other actor receives 40% of the benefit. This same dynamic could hold true in the development of an AI Coordination Regime, where actors can decide whether to abide by the Coordination Regime or find a way to cheat. Depending on which model is present, we can get a better sense of the likelihood of cooperation or defection, which can in turn inform research and policy agendas. Using this intuition, the remainder of this paper looks at strategy and policy considerations relevant to some game models in the context of the AI Coordination Problem. [3] While (Hare, Hare) remains a Nash equilibrium, it is no longer risk dominant. Additionally, this model accounts for an AI Coordination Regime that might result in variable distribution of benefits for each actor. [5] They can, for example, work together to improve good corporate governance. It is the goal of this paper to shed some light on these questions, particularly how the structure of preferences that results from states' understandings of the benefits and harms of AI development leads to varying prospects for coordination.
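The expected-value reasoning above can be illustrated numerically. The 50/50 odds and the 60/40 split come from the text; the benefit and harm magnitudes (10 and -8) are assumed only so that perceived benefits slightly outweigh perceived harms.

```python
# Hypothetical illustration: each actor sees a 50/50 chance that a
# completed AI is beneficial or harmful, with benefits perceived as
# slightly greater than harms.
p_beneficial = 0.5
benefit, harm = 10.0, -8.0  # assumed magnitudes, for illustration only

# Expected value of developing AI
ev = p_beneficial * benefit + (1 - p_beneficial) * harm
print(ev)  # 1.0 -> development looks positive in expectation

# Under the regime, whoever reaches AI first takes 60% of the benefit,
# the other actor 40%.
first_share, second_share = 0.6 * ev, 0.4 * ev
print(first_share, second_share)  # 0.6 0.4
```

Even this toy calculation shows why the distributional term matters: the 60/40 split determines how much of the (small) positive expectation each actor can count on, and hence how tempting defection looks.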
We see this in the media, as prominent news sources highlight with greater frequency new developments and social impacts of AI, with some experts heralding it as "the new electricity."[10] In the business realm, investments in AI companies are soaring. [10] "AI expert Andrew Ng says AI is the new electricity | Disrupt SF 2017," TechCrunch Disrupt SF 2017, TechCrunch, September 20, 2017, https://www.youtube.com/watch?v=uSCka8vXaJc. On the other hand, Glaser[46] argues that rational actors under certain conditions might opt for cooperative policies.
It sends a message to the country's fractious elites that the rewards for cooperation remain far richer than those that would come from going it alone. "This is the third technology revolution." "Artificial intelligence is the future, not only for Russia, but for all humankind." [56] Downs et al., "Arms Races and Cooperation." [57] This is additionally explored in Jervis, "Cooperation Under the Security Dilemma." Despite the damage it could cause, the impulse to go it alone has never been far off, given the profound uncertainties that define the politics of any war-torn country. However, anyone who hunts rabbit can do so successfully by themselves, but with a smaller meal. In this model, each actor's incentives are not fully aligned to support mutual cooperation and thus should present worry for individuals hoping to reduce the possibility of developing a harmful AI. It truly takes a village, to whom this paper is dedicated. Jean-Jacques Rousseau described a situation in which two individuals go out on a hunt. Uneven distribution of AI's benefits could exacerbate inequality, resulting in higher concentrations of wealth within and among nations. The stag may not pass every day, but the hunters are reasonably certain that it will come.
In this section, I survey the relevant background of AI development and coordination by summarizing the literature on the expected benefits and harms from developing AI and on which actors are relevant in an international safety context. Since this requires that the fish have no way to escape, it requires the cooperation of many orcas. In the context of the AI Coordination Problem, a Stag Hunt is the most desirable outcome, as mutual cooperation results in the lowest risk of racing dynamics and the associated risk of developing a harmful AI. Those in favor of withdrawal are skeptical that a few thousand U.S. troops can make a decisive difference when 100,000 U.S. soldiers proved incapable of curbing the insurgency. Even doing good can bring bad consequences. "The first technology revolution caused World War I." You note that the temptation to cheat creates tension between the two trading nations, but you could phrase this much more strongly: theoretically, both players should cheat. We have recently seen an increase in media acknowledgement of the benefits of artificial intelligence (AI), as well as of the negative social implications that can arise from its development.
For example, one prisoner may seemingly betray the other, but without losing the other's trust. In order to mitigate or prevent the deleterious effects of arms races, international relations scholars have also studied the dynamics that surround arms control agreements and the conditions under which actors might coordinate with one another.[11] Table 4.
Deadlock occurs when each actor's greatest preference would be to defect while their opponent cooperates. These strategies are not meant to be exhaustive by any means, but they hopefully show how the outlined theory might provide practical use and motivate further research and analysis.
Additionally, both actors perceive the potential returns to developing AI to be greater than the potential harms. Payoff matrix for simulated Stag Hunt.
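A payoff matrix of this kind can be checked for pure-strategy Nash equilibria directly. The sketch below uses standard, assumed Stag Hunt payoffs rather than the paper's own table values: an outcome is an equilibrium when neither player can improve by unilaterally switching moves.

```python
# Pure-strategy Nash equilibrium check for a 2x2 game.
# payoffs[(row move, col move)] = (row player's payoff, col player's payoff).
# These numbers are illustrative Stag Hunt values, not the paper's.
payoffs = {
    ("Stag", "Stag"): (4, 4),
    ("Stag", "Hare"): (0, 3),
    ("Hare", "Stag"): (3, 0),
    ("Hare", "Hare"): (2, 2),
}
moves = ("Stag", "Hare")

def pure_nash(payoffs):
    eqs = []
    for r in moves:
        for c in moves:
            # Neither player gains by deviating unilaterally
            row_best = all(payoffs[(r, c)][0] >= payoffs[(r2, c)][0] for r2 in moves)
            col_best = all(payoffs[(r, c)][1] >= payoffs[(r, c2)][1] for c2 in moves)
            if row_best and col_best:
                eqs.append((r, c))
    return eqs

print(pure_nash(payoffs))  # [('Stag', 'Stag'), ('Hare', 'Hare')]
```

The two equilibria it finds, mutual stag hunting and mutual hare hunting, are exactly the coordination problem the essay describes: cooperation is better for everyone, but defection is also self-reinforcing.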
Although the development of AI at present has not yet led to a clear and convincing military arms race (although this has been suggested to be the case[43]), the elements of the arms race literature described above suggest that AI's broad and wide-encompassing capacity can lead actors to see AI development as a threatening technological shock worth responding to with reinforcements or augmentations in one's own security, perhaps through bolstering one's own AI development program.
This is what I will refer to as the AI Coordination Problem. Specifically, it is especially important to understand where the preferences of vital actors overlap and how game theory considerations might affect these preferences. Within the arms race literature, scholars have distinguished between types of arms races depending on the nature of arming. Put another way, the development of AI under international racing dynamics could be compared to two countries racing to finish a nuclear bomb if the actual development of the bomb (and not just its use) could result in unintended, catastrophic consequences. And, seeing how successful the stag hunters are, most hare hunters will convert to stag hunters.
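The conversion dynamic just described can be sketched as a crude imitation process: hunters drift toward whichever strategy currently pays more. The payoffs and the update rule below are my own illustrative assumptions, not a model from the essay.

```python
# Crude discrete imitation dynamic for the Stag Hunt (assumed numbers).
STAG_BOTH, STAG_ALONE = 4.0, 0.0  # stag pays off only with cooperation
HARE = 2.0                        # hare pays off regardless of others

def step(p_stag):
    """One imitation step; p_stag is the share of stag hunters."""
    stag_payoff = p_stag * STAG_BOTH + (1 - p_stag) * STAG_ALONE
    if stag_payoff > HARE:
        return min(1.0, p_stag + 0.1)  # hare hunters convert to stag
    if stag_payoff < HARE:
        return max(0.0, p_stag - 0.1)  # stag hunters give up
    return p_stag

p = 0.7  # start above the tipping point (0.5 with these payoffs)
for _ in range(10):
    p = step(p)
print(p)  # 1.0 -> everyone ends up hunting stag
```

Starting below the tipping point (say p = 0.3) drives the population the other way, to all-hare: the same imitation logic that makes cooperation self-reinforcing also makes defection self-reinforcing.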
A day passes. These remain real temptations for a political elite that has survived decades of war by making deals based on short time horizons and low expectations for peace. The Stag Hunt represents an example of compensation structure in theory. An individual can get a hare by himself, but a hare is worth less than a stag. Katja Grace et al., "When Will AI Exceed Human Performance? Evidence from AI Experts" (2017: 11–21), retrieved from http://arxiv.org/abs/1705.08807. [4] Nick Bostrom, Superintelligence: Paths, Dangers, Strategies (Oxford University Press, 2014). War is anarchic, and intervening actors can sometimes help to mitigate the chaos.
Members of the Afghan political elite have long found themselves facing a similar trade-off. The story is briefly told by Rousseau in A Discourse on Inequality: "If it was a matter of hunting a deer, everyone well realized that he must remain faithful to his post; but if a hare happened to pass within reach of one of them, we cannot doubt that he would have gone off in pursuit of it." Nations are able to communicate with each other freely, something that is forbidden in the traditional PD game. The payoff matrix is displayed as Table 12. As such, Chicken scenarios are unlikely to greatly affect AI coordination strategies but are still important to consider as a possibility nonetheless.
Also, trade negotiations might be better thought of as an iterated game: the game is played repeatedly, and the nations interact with each other more than once over time. There is a substantial relationship between the stag hunt and the prisoner's dilemma. This table contains a sample ordinal representation of a payoff matrix for a Stag Hunt game. For example, Jervis highlights the distinguishability of offensive-defensive postures as a factor in stability. Author James Cambias describes a solution to the game as the basis for an extraterrestrial civilization in his 2014 science fiction book A Darkling Sea. One significant limitation of this theory is that it assumes that the AI Coordination Problem will involve two key actors. One hunter can catch a hare alone with less effort and less time, but it is worth far less than a stag and has much less meat. The real peril of a hasty withdrawal of U.S. troops from Afghanistan, though, can best be understood in political, not military, terms. In a case with a random group of people, most would choose not to trust strangers with their success. Finally, in a historical survey of international negotiations, Garcia and Herz[48] propose that international actors might take preventative, multilateral action in scenarios under the commonly perceived global dimension of future potential harm (for example, the ban on laser weapons or the dedication of Antarctica and outer space solely to peaceful purposes). They suggest that new weapons (or systems) that derive from radical technological breakthroughs can render a first strike more attractive, whereas basic arms buildups provide deterrence against a first strike.
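Risk dominance, mentioned earlier in connection with the (Hare, Hare) equilibrium, can also be computed from an ordinal-style matrix. For a symmetric 2x2 coordination game, one common test is which strategy does better against an opponent who randomizes 50/50; the numbers below are assumed for illustration.

```python
# Risk dominance check for a symmetric 2x2 coordination game.
# payoff[(own move, opponent's move)] for one player; assumed values.
payoff = {
    ("Stag", "Stag"): 4, ("Stag", "Hare"): 0,
    ("Hare", "Stag"): 3, ("Hare", "Hare"): 2,
}

def expected_vs_coin_flip(own):
    """Expected payoff of a move against a 50/50 opponent."""
    return 0.5 * payoff[(own, "Stag")] + 0.5 * payoff[(own, "Hare")]

stag_ev = expected_vs_coin_flip("Stag")
hare_ev = expected_vs_coin_flip("Hare")
print(stag_ev, hare_ev)  # 2.0 2.5 -> (Hare, Hare) is risk dominant here
```

With these particular numbers hare hunting is the safer bet, which captures the worry in the text: even when mutual cooperation is payoff dominant, risk considerations can still pull actors toward the defection equilibrium.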
Indeed, this gives an indication of how important the Stag Hunt is to International Relations more generally. A great example of Chicken in IR is the Cuban Missile Crisis. Additionally, the feedback, discussion, resource recommendations, and inspiring work of friends, colleagues, and mentors in several time zones, especially Amy Fan, Carrick Flynn, Will Hunt, Jade Leung, Matthijs Maas, Peter McIntyre, Professor Nuno Monteiro, Gabe Rissman, Thomas Weng, Baobao Zhang, and Remco Zwetsloot, were vital to this paper and are profoundly appreciated. I thank my advisor, Professor Allan Dafoe, for his time, support, and introduction to this paper's subject matter in his Global Politics of AI seminar. [6] See infra at Section 2.2, Relevant Actors. For example, it is unlikely that even the actors themselves will be able to effectively quantify their perception of capacity, riskiness, magnitude of risk, or magnitude of benefits. Using the payoff matrix in Table 6, we can simulate scenarios for AI coordination by assigning numerical values to the payoff variables. Before getting to the theory, I will briefly examine the literature on military technology/arms racing and cooperation. As we discussed in class, the catch is that the players involved must all work together in order to successfully hunt the stag and reap the rewards; once one person leaves the hunt for a hare, the stag hunt fails and those involved in it wind up with nothing. To begin exploring this, I now look to the literature on arms control and coordination. [8] Elsa Kania, "Beyond CFIUS: The Strategic Challenge of China's Rise in Artificial Intelligence," Lawfare, June 20, 2017, https://www.lawfareblog.com/beyond-cfius-strategic-challenge-chinas-rise-artificial-intelligence (highlighting legislation considered that would limit Chinese investments in U.S.
artificial intelligence companies and other emerging technologies considered crucial to U.S. national security interests). [26] Stephen Hawking, Stuart Russell, Max Tegmark, and Frank Wilczek, "Transcendence looks at the implications of artificial intelligence - but are we taking AI seriously enough?," The Independent, May 1, 2014, https://www.independent.co.uk/news/science/stephen-hawking-transcendence-looks-at-the-implications-of-artificial-intelligence-but-are-we-taking-9313474.html. Another proposed principle of rationality ("maximin") suggests that I ought to consider the worst payoff I could obtain under any course of action, and choose the action that maximizes that worst payoff.
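The maximin rule just described is easy to make concrete. A minimal sketch for the row player, again with assumed Stag Hunt numbers: for each action, find the worst payoff it can yield, then pick the action whose worst case is best.

```python
# Maximin choice for the row player of an assumed Stag Hunt matrix.
# Each list holds the payoffs against the opponent's Stag and Hare moves.
row_payoffs = {
    "Stag": [4, 0],  # great if the other cooperates, nothing otherwise
    "Hare": [3, 2],  # modest either way
}

def maximin(payoffs):
    """Return the action whose worst-case payoff is largest."""
    return max(payoffs, key=lambda action: min(payoffs[action]))

print(maximin(row_payoffs))  # Hare: its worst case (2) beats Stag's (0)
```

Under maximin, a cautious hunter picks hare, which is one way of formalizing why risk-averse actors may rationally defect from the cooperative equilibrium.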
In the same vein, Sorenson[39] argues that unexpected technological breakthroughs in weaponry raise instability in arms races. This is taken to be an important analogy for social cooperation.
This essay first appeared in the Acheson Prize 2018 Issue of the Yale Review of International Studies. I refer to this as the AI Coordination Problem. A classic game-theoretic allegory best demonstrates the various incentives at stake for the United States and Afghan political elites at this moment. In addition to the example suggested by Rousseau, David Hume provides a series of examples that are stag hunts. Finally, I discuss the relevant policy and strategic implications this theory has for achieving international AI coordination, and assess the strengths and limitations of the theory in practice.
The game is a prototype of the social contract. The following subsection further examines these relationships and simulates scenarios in which each coordination model would be most likely. But who can we expect to open the Box?
While each actor's greatest preference is to defect while their opponent cooperates, the prospect of both actors defecting is less desirable than both actors cooperating. Therefore, an agreement to play (C,C) conveys no information about what the players will do, and cannot be considered self-enforcing.