The problem of an artificial superintelligence at war

![]()
Bernard Baruch, June 14, 1946, presenting to the United Nations Atomic Energy Commission: "The way is plain, peaceful, generous, just - a way which, if followed, the world will forever applaud. We shall nobly save, or meanly lose, the last, best hope of earth. We, even we here, hold the power and have the responsibility. ... The world will not forget that we say this."

This paper is now up (with the annex mentioned in the paper) as a preprint at SocArXiv.

Optimising Peace through a Universal Global Peace Treaty to Constrain Risk of War from a Militarised Artificial Superintelligence

An artificial superintelligence (ASI) emerging in a world where war is still normalised may constitute a catastrophic existential risk, either because the ASI might be employed by a single nation-state on purpose to wage war for global domination, or because the ASI wages war on its own behalf to establish global domination; these risks are not mutually incompatible, in that the first can transition to the second.

We presently live in a world where few states actually declare war on each other, or even wage war on each other. This is because the 1945 United Nations Charter's Article 2 states that UN member states should "refrain in their international relations from the threat or use of force against the territorial integrity or political independence of any state", while allowing for "military measures by UN Security Council resolutions" and the "exercise of self-defense". In this theoretical ideal, wars are not declared; instead, 'international armed conflicts' occur. However, costly interstate conflicts, both 'hot' and 'cold', still exist, for instance the Kashmir Conflict and the Korean War. Furthermore, a 'New Cold War' between AI superpowers (the United States and China) looms. An ASI-directed or ASI-enabled future interstate war could trigger 'total war', including nuclear war, and is therefore 'high risk'.

One risk reduction strategy would be optimising peace through a Universal Global Peace Treaty (UGPT), which could contribute towards ending existing wars and preventing future wars, through conforming instrumentalism. While this strategy cannot cope with non-state actors, it could influence state actors, including those developing ASIs, or an ASI with agency. A critical juncture to optimise peace via the UGPT is emerging: leveraging the UGPT off a 'burning plasma' fusion reaction breakthrough, expected circa 2025 to 2035, as was attempted, unfortunately unsuccessfully, in 1946 with fission, for atomic war.

Keywords: AI arms race, artificial superintelligence, existential risk, nonkilling, peace

"Optimising Peace Through a Universal Global Peace Treaty to Constrain Risk of War from a Militarised Artificial Superintelligence." SocArXiv.