Michael W. Meier
- Published in print:
- 2019
- Published Online:
- December 2018
- ISBN:
- 9780190915360
- eISBN:
- 9780190915391
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780190915360.003.0010
- Subject:
- Law, Public International Law
Over the past decade, there has been a proliferation of remotely piloted aircraft or “drones” being used on the battlefield. Advances in technology are going to continue to drive changes in how future conflicts will be waged. Technological innovation, however, is not without its detractors as there are various groups calling for a moratorium or ban on the development and use of autonomous weapons systems. Some groups have called for a prohibition on the development, production, and use of fully autonomous weapons through an international legally binding instrument, while others view advances in the use of technology on the battlefield as a natural progression that will continue to make weapons systems more discriminate. The unanswered question is, which point of view will be the right one? This chapter approaches this question by addressing the meaning of “autonomy” and “autonomous weapons systems.” In addition, this chapter looks at the U.S. Department of Defense’s vision for the potential employment of autonomous systems, the legal principle applicable to these systems, and the weapons review process.
Masahiro Kurosaki
- Published in print:
- 2020
- Published Online:
- November 2020
- ISBN:
- 9780197537374
- eISBN:
- 9780197537404
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780197537374.003.0014
- Subject:
- Law, Public International Law
One of the implications of fully autonomous weapons systems (AWS) acting as independent decision makers in the targeting process is that a human-centered paradigm should never be taken for granted. Indeed, they could allow a law of armed conflict (LOAC) debate immune from that paradigm, all the more so because the underlying “principle of human dignity” has failed to offer convincing reasons for its propriety in international legal discourse. Furthermore, the history of LOAC tells us that the existing human-centered approach to the proportionality test—the commander-centric approach—is, albeit strongly supported and developed by states and international criminal jurisprudence, particularly since the end of the Second World War, nothing more than a product of its time. So long as fully AWS exhibit the potential to contribute more than human soldiers to the LOAC goal of protecting the victims of armed conflict, one could thus seek an alternative computer-centered approach to the law of targeting—a subset of LOAC—tailored to the defining characteristics of fully AWS in a manner that maximizes their potential and makes the law more responsive to the needs of ever-changing battlespaces. With this in mind, this chapter aims to relativize the absoluteness of the existing human-centered approach to the proportionality test—which is not to deny the role of humans in the overall regulation of fully AWS—and then, away from that approach, to propose an alternative one dedicated to fully AWS for their better regulation in response to the demands of changing times.
Deane-Peter Baker
- Published in print:
- 2021
- Published Online:
- April 2021
- ISBN:
- 9780197546048
- eISBN:
- 9780197546079
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780197546048.003.0003
- Subject:
- Law, Human Rights and Immigration
The prospect of robotic warriors striding the battlefield has, somewhat unsurprisingly, been shaped by perceptions drawn from science fiction. While illustrative, such comparisons are largely unhelpful for those considering potential ethical implications of autonomous weapons systems. In this chapter, I offer two alternative sources for ethical comparison. Drawing from military history and current practice for guidance, this chapter highlights the parallels that make mercenaries—the ‘dogs of war’—and military working dogs—the actual dogs of war—useful lenses through which to consider Lethal Autonomous Weapons Systems—the robot dogs of war. Through these comparisons, I demonstrate that some of the most commonly raised ethical objections to autonomous weapon systems are overstated, misguided, or otherwise dependent on outside circumstance.
Natalia Jevglevskaja and Rain Liivoja
- Published in print:
- 2021
- Published Online:
- April 2021
- ISBN:
- 9780197546048
- eISBN:
- 9780197546079
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780197546048.003.0008
- Subject:
- Law, Human Rights and Immigration
Disagreements about the humanitarian risk-benefit balance of weapons technology are not new. The history of arms control negotiations offers many examples of weaponry that was regarded as ‘inhumane’ by some, while hailed by others as a means to reduce injury or suffering in conflict. The debate about autonomous weapons systems reflects this dynamic, yet also stands out in some respects, notably the largely hypothetical nature of the concerns raised about these systems, as well as ostensible disparities in States’ approaches to conceptualizing autonomy. This chapter considers how misconceptions surrounding autonomous weapons technology impede the progress of the deliberations of the Group of Governmental Experts on Lethal Autonomous Weapons Systems. An obvious tendency to focus on the perceived risks posed by these systems, much more so than on the potential operational and humanitarian advantages they offer, is likely to jeopardize the prospect of finding a meaningful resolution to the debate.
Duncan MacIntosh
- Published in print:
- 2021
- Published Online:
- April 2021
- ISBN:
- 9780197546048
- eISBN:
- 9780197546079
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780197546048.003.0002
- Subject:
- Law, Human Rights and Immigration
Setting aside the military advantages offered by Autonomous Weapons Systems for a moment, international debate continues to feature the argument that the use of lethal force by “killer robots” inherently violates human dignity. The purpose of this chapter is to refute this assumption of inherent immorality and demonstrate situations in which deploying autonomous systems would be strategically, morally, and rationally appropriate. The second part of this chapter objects to the argument that the use of robots in warfare is somehow inherently offensive to human dignity. Overall, this chapter will demonstrate that, contrary to arguments made by some within civil society, moral employment of force is possible, even without proximate human decision-making. As discussions continue to swirl around autonomous weapons systems, it is important not to lose sight of the fact that fire-and-forget weapons are not morally exceptional or inherently evil. If an engagement complied with the established ethical framework, it is not inherently morally invalidated by the absence of a human at the point of violence. As this chapter argues, the decision to employ lethal force becomes problematic when a more thorough consideration would have demanded restraint. Assuming a legitimate target, therefore, the distance between human agency in the target authorization process and the delivery of force is a matter of degree. A morally justifiable decision to engage a target with rifle fire would not be ethically invalidated simply because the lethal force was delivered by a commander-authorized robotic carrier.
Austin Wyatt and Jai Galliott
- Published in print:
- 2021
- Published Online:
- April 2021
- ISBN:
- 9780197546048
- eISBN:
- 9780197546079
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780197546048.003.0017
- Subject:
- Law, Human Rights and Immigration
While the Convention on Certain Conventional Weapons (CCW)-sponsored process has steadily slowed, and occasionally stalled, over the past five years, the pace of technological development in both the civilian and military spheres has accelerated. In response, this chapter suggests the development of a normative framework that would establish common procedures and de-escalation channels between states within a given regional security cooperative prior to the demonstration point of truly autonomous weapon systems. Modeling itself on the Guidelines for Air Military Encounters and the Guidelines for Maritime Interaction, which were recently adopted by the Association of Southeast Asian Nations, the goal of this approach is to limit the destabilizing and escalatory potential of autonomous systems, which are expected to lower barriers to conflict and encourage brinkmanship while being difficult to definitively attribute. Overall, this chapter focuses on evaluating potential alternative avenues to the CCW-sponsored process by which ethical, moral, and legal concerns raised by the emergence of autonomous weapon systems could be addressed.
Tim McFarland and Jai Galliott
- Published in print:
- 2021
- Published Online:
- April 2021
- ISBN:
- 9780197546048
- eISBN:
- 9780197546079
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780197546048.003.0004
- Subject:
- Law, Human Rights and Immigration
The physical and temporal removal of the human from the decision to use lethal force underpins many of the arguments against the development of autonomous weapons systems. In response to these concerns, Meaningful Human Control has risen to prominence as a framing concept in the ongoing international debate. This chapter demonstrates how, in addition to the lack of a universally accepted precise definition, reliance on Meaningful Human Control is conceptually flawed. Overall, this chapter analyzes, problematizes, and explores the nebulous concept of Meaningful Human Control, and in doing so demonstrates that it relies on the mistaken premise that the development of autonomous capabilities in weapons systems constitutes a lack of human control that somehow presents an insurmountable challenge to existing International Humanitarian Law.
Susanne Burri
- Published in print:
- 2018
- Published Online:
- November 2017
- ISBN:
- 9780190495657
- eISBN:
- 9780190495671
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780190495657.003.0009
- Subject:
- Philosophy, Moral Philosophy, Political Philosophy
An autonomous weapon system (AWS) is a weapons system that, “once activated, can select and engage targets without further intervention by a human operator” (US Department of Defense directive 3000.09, November 21, 2012). Militaries around the world are investing substantial amounts of money and effort into the development of AWS. But the technology has its vocal opponents, too. This chapter argues against the idea that a targeting decision made by an AWS is always morally flawed simply because it is a targeting decision made by an AWS. It scrutinizes four arguments in favor of this idea and argues that none of them is convincing. It also presents an argument in favor of developing autonomous weapons technology further. The aim of this chapter is to dispel one worry about AWS, to keep this worry from drawing attention away from the genuinely important issues that AWS give rise to.
Barry S. Levy
- Published in print:
- 2022
- Published Online:
- May 2022
- ISBN:
- 9780197558645
- eISBN:
- 9780197558676
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780197558645.003.0007
- Subject:
- Public Health and Epidemiology, Public Health
This chapter describes conventional weapons and treaties to control these weapons. These weapons include small arms and light weapons, heavy conventional weapons, bombs and other explosives, and incendiaries. The chapter covers the international arms trade, including its magnitude, leading exporting and importing countries, the Arms Trade Treaty, and illicit transfers. The chapter describes antipersonnel landmines and unexploded ordnance, including the magnitude of the problem, its health consequences, and the Mine Ban Treaty. It also covers cluster munitions, including a description and use of cluster munitions, injuries and deaths caused by them, and the Convention on Cluster Munitions. The chapter also describes emerging issues, including autonomous weapons systems, cyber warfare, and the militarization of outer space.
Amichai Cohen and David Zlotogorski
- Published in print:
- 2021
- Published Online:
- June 2021
- ISBN:
- 9780197556726
- eISBN:
- 9780197556757
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780197556726.003.0014
- Subject:
- Law, Public International Law
The final chapter of the book presents three developments in modern warfare that might affect the way the principle of proportionality will be applied in the future. The first is the development of “image-fare”—the use of the way that the armed conflicts and their effects are perceived through the lenses of the media and social networks; second, cyber warfare, and its influence over the interpretations of proportionality; and third, the development of autonomous weapon systems. The chapter suggests that all these areas might change the way we perceive the principle of proportionality, and that further research should be directed at exploring these changes.
Jason Scholz and Jai Galliott
- Published in print:
- 2021
- Published Online:
- April 2021
- ISBN:
- 9780197546048
- eISBN:
- 9780197546079
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780197546048.003.0005
- Subject:
- Law, Human Rights and Immigration
For the use of force to be lawful and morally just, future autonomous systems must not commit humanitarian errors or acts of fratricide. To achieve this, we distinguish a novel preventative form of minimally-just autonomy using artificial intelligence (MinAI) to avert attacks on protected symbols, protected sites, and signals of surrender. MinAI compares favorably with respect to maximally-just forms proposed to date. We examine how fears of speculative artificial general intelligence have distracted resources from making current weapons more compliant with international humanitarian law, particularly Additional Protocol I of the Geneva Conventions and its Article 36. Critics of our approach may argue that machine learning can be fooled, that combatants can commit perfidy to protect themselves, and so on. We confront this issue, including recent research on the subversion of AI, and conclude that the moral imperative for MinAI in weapons remains undiminished.
Steven J. Barela and Avery Plaw
- Published in print:
- 2021
- Published Online:
- April 2021
- ISBN:
- 9780197546048
- eISBN:
- 9780197546079
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780197546048.003.0006
- Subject:
- Law, Human Rights and Immigration
The possibility of allowing a machine agency over killing human beings is a justifiably concerning development, particularly when we consider the challenge of accountability in the case of illegal or unethical employment of lethal force. We have already seen how key information can be hidden or contested by deploying authorities, in the case of lethal drone strikes, for example. Therefore, this chapter argues that any effective response to autonomous weapons systems (AWS) must be underpinned by a comprehensive transparency regime that is fed by robust and reliable reporting mechanisms. This chapter offers a three-part argument in favor of a robust transparency regime. First, there is a preexisting transparency gap in the deployment of core weapon systems that would be automated (such as currently remote-operated UCAVs). Second, while the Pentagon has made initial plans for addressing moral, ethical, and legal issues raised against AWS, there remains a need for effective transparency measures. Third, transparency is vital to ensure that AWS are only used with traceable lines of accountability and within established parameters. Overall, this chapter argues that there is an overwhelming interest and duty for actors to ensure robust, comprehensive transparency and accountability mechanisms. The more aggressively AWS are used, the more rigorous these mechanisms should be.
Donovan Phillips
- Published in print:
- 2021
- Published Online:
- April 2021
- ISBN:
- 9780197546048
- eISBN:
- 9780197546079
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780197546048.003.0011
- Subject:
- Law, Human Rights and Immigration
This chapter considers how the adoption of autonomous weapons systems (AWS) may affect jus ad bellum principles of warfare. In particular, it focuses on the use of AWS in non-international armed conflicts (NIACs). Given the proliferation of NIACs, the development and use of AWS will most likely be attuned to this specific theater of war. As warfare waged by modernized liberal democracies (those currently most likely to develop and employ AWS) increasingly moves toward a model of individualized warfare, how, if at all, will the principles by which we measure the justness of the commencement of hostilities be affected by the introduction of AWS, and how will such hostilities measure up against current legal agreements governing more traditional engagements? This chapter claims that these considerations give us reason to question the moral and legal necessity of ad bellum proper authority.
Nicholas G. Evans
- Published in print:
- 2021
- Published Online:
- April 2021
- ISBN:
- 9780197546048
- eISBN:
- 9780197546079
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780197546048.003.0014
- Subject:
- Law, Human Rights and Immigration
While the majority of neuroscience research promises novel therapies for treating dementia, post-traumatic stress disorder, and other conditions, a lesser-known branch of neuroscientific research informs the construction of artificial intelligence inspired by human neurophysiology. For those concerned with the normative implications of autonomous weapons systems (AWS), however, a tension arises between the primary attraction of AWS, their theoretical capacity to make better decisions in armed conflict, and the relatively low-hanging fruit of modeling machine intelligence on the very thing that causes humans to make (relatively) bad decisions: the human brain. This chapter examines human cognition as a model for machine intelligence and some of the implications for AWS development. It first outlines recent developments in neuroscience as drivers of advances in artificial intelligence. The chapter then expands on a key distinction for the ethics of AWS: poor normative decisions that are a function of poor judgments given a certain set of inputs, and poor normative decisions that are a function of poor sets of inputs. It argues that given that there are cases in the second category in which we judge humans to have acted wrongly, we should likewise judge AWS platforms. Further, while an AWS may in principle outperform humans in the former, it is an open question of design whether it can outperform humans in the latter. Finally, the chapter discusses what this means for the design and control of, and ultimately liability for, AWS behavior, and considers sources of inspiration for the alternate design of AWS platforms.
S. Matthew Liao (ed.)
- Published in print:
- 2020
- Published Online:
- October 2020
- ISBN:
- 9780190905033
- eISBN:
- 9780190905071
- Item type:
- book
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780190905033.001.0001
- Subject:
- Philosophy, Moral Philosophy
Featuring seventeen original essays on the ethics of artificial intelligence (AI) by today’s most prominent AI scientists and academic philosophers, this volume represents state-of-the-art thinking in this fast-growing field. It highlights central themes in AI and morality such as how to build ethics into AI, how to address mass unemployment caused by automation, how to avoid designing AI systems that perpetuate existing biases, and how to determine whether an AI is conscious. As AI technologies progress, questions about the ethics of AI, in both the near future and the long term, become more pressing than ever. Should a self-driving car prioritize the lives of the passengers over those of pedestrians? Should we as a society develop autonomous weapon systems capable of identifying and attacking a target without human intervention? What happens when AIs become smarter and more capable than us? Could they have greater than human-level moral status? Can we prevent superintelligent AIs from harming us or causing our extinction? At a critical time in this fast-moving debate, thirty leading academics and researchers at the forefront of AI technology development have come together to explore these existential questions.