It is unfathomable to me how journalists risk everything for a story. These people are doing what no one else is willing to do: risking their lives by entering a war zone to tell the story of what is truly happening in the war. To me, these are the true heroes of war who are often forgotten, those who do everything possible to tell the story of a place where eyes do not reach. Only a few times do we hear of those who were captured, but I can only imagine how staggering that number must be in its entirety. Honestly, there is not much to be said other than that these people are heroes.
Easily my favorite subject: media war. It is impressive to me how easily manipulation of the media can change the masses' perspective on a subject. It is truly scary how easily people can be controlled by the media, even when most people don't trust said media. A perfect example of this was the presidential election, with fake news, Russian interference, and overall mass hysteria. It seems like the media is completely unstoppable now, past a point of no return in a never-ending loop of disinformation and lies. There is nowhere we can truly place our trust.
I find it a bit funny (in a very loose sense of the word), this idea of banning weapons for not being humane. Do not get me wrong, I am 100% for it, but at the same time it is ridiculous that in a war zone, where the whole objective is to kill your enemy to ensure dominance (because let's face it, that is what war is all about), we find ourselves putting parameters on the right way or the wrong way to kill the enemy. This is all my own opinion, as most of these rules go way over my head. But overall I can say this is the only logical thing to do, as a battlefield doesn't need to become more gruesome than it already is, slowly escalating to complete world destruction (as it nearly did during the Cuban missile crisis).
Referring to lethal autonomous weapons not just as killer robots but as out-of-the-norm sci-fi things detaches us from the real-life reality of these forms of robo-warfare, because it references a fictional reality that we don't see every day, to the extent that it is portrayed in sci-fi movies and media, and so the conversation goes on blindly. It isn't just that there is a paranoia, though the article mentions that as dominating the conversation; it is that the deadly costs of robots in war are seen as false images that misrepresent realities that exist in places bombarded by drone strikes and military violence that thrives on advanced technologies. It is amazing that even with proof of the presence of drones and killer robots, people are still unlikely to acknowledge their real-life effects, and instead reference RoboCop or I, Robot as the likely outcomes of a robot apocalypse, creating two extreme ends of a spectrum that are not realistic because they ignore the present effects of autonomous robots (human-out-of-the-loop weapons).
It is obvious that killer robots are in conflict with humanitarian law. But though this is bluntly obvious, the different categories of killer robots and human involvement are interesting, because they all have some sort of disengaged human as a factor, yet they are still being contested. That is not to say that human-in-the-loop weapons are not better than human-out-of-the-loop weapons, but what about humans making all the critical decisions and continuing to control the weapons used in war? Granted, some robots, as discussed in class and in the reading, are helpful and benefit the effectiveness of war, but in many respects most robots are not being used for that purpose; it seems more are being used as weapons than for direct life-saving purposes.
Besides the ethical question, it is apparent that militaries and nations will use defensive systems that protect against non-human threats, and that seems like a more realistic use of robots in the field. These robots are being used to “protect” people but are costing more lives than they are protecting.
I think Logan brings up another excellent point in his post when he discusses the “catch-all” excuse provided by Gubrud, that “it appears that the burden of ensuring compliance with rules of engagement and laws of war falls on commanders and operators when the robots themselves are incapable of ensuring this. But in practice, it seems likely that unintended atrocities committed by autonomous weapons will be blamed on technical failure.”
As Logan notes, even when these fully autonomous weapons are programmed to “leave a digital paper trail,” it seems unlikely that any information made public about the contents of these “digital paper trails” could be considered trustworthy, especially if it is funneled through and reported by a military or government establishment. Technical error or miscalculation is already frequently claimed by the military or government as the reason certain strikes or operations were either unsuccessful or responsible for great civilian casualties. The execution of those strikes by weapons systems that are either totally or somewhat autonomous only allows military commanders and government officials to distance themselves from responsibility.
And even if there were technical errors that made the autonomous weapons systems act in a way the military did not intend, doesn't that only reinforce the idea that “by eliminating human involvement in the decision to use lethal force in armed conflict, fully autonomous weapons would undermine other, non-legal protections for civilians” (hrw.org)? The issue of accountability, as mentioned by Human Rights Watch, is murky at best, and it is unclear who would be held responsible. While HRW dismisses the likelihood of commanders, programmers, or manufacturers being held accountable, it seems to me that if commanders implement an autonomous weapons system that they knew could have “technical failures,” then they should be held accountable for any damage that results from such failure.
It’s interesting reading these pieces in light of the attention paid to AI in pop culture as of late. This is nothing new, of course; AI has always been a fascination for SF writers of all sorts. The only reason I bring it up is that the more advanced levels of AI posited in movies and TV, in particular shows from Jonathan Nolan (Westworld, Person of Interest), raise questions about the point at which AI systems might have the ability to reason (insofar as we can program them to imitate human reasoning in a given situation). Once this threshold has been crossed in some manner (again, not positing HAL here, just a system that can operate truly autonomously and make decisions, as discussed in some of the articles), then we are dealing with a different set of questions, maybe. One in particular is when and how a degree of humanity might be granted to an AI, thus placing it under the supervision of HRL or other international conventions. This, of course, would be a very difficult situation to parse. Consider not just an autonomous system, pre-programmed in the terms you and others have specified, but an actual AI system that is ostensibly programming itself, adjusting, making decisions, and so forth: how would we account for this, and how could we possibly stop the military and others from utilizing such advanced technologies on the battlefield? It feels a bit hopeless to stop weapons like this (or operators in possession of such weapons); for the time being it’s incredibly clear to me that the bans you specified in your essay are needed, because they cut the idea off for the foreseeable future, and thus will do a great amount of good in preventing further drone casualties.
Perhaps my question is still a bit SF at the moment, but this would seem like a way for the military to eventually get around such a ban; that is, once a computer system can appear human in every typical sense, they may be able to argue that it is no longer an “autonomous technology” but a sort of proto-species or something (a la Westworld’s conceptual basis). Person of Interest raises a similar question, as the AI in that show is essentially an autonomous supercomputer version of PRISM, which already exists. The AI in that show then sends human operators out to kill people based on the enormous amounts of data it has been able to collect. This situation does still keep the “human-in-the-loop” that we all desire, but that system is also capable of doing just about anything it wants, as its intelligence is well beyond that of a human, and its only limitation is not having limbs or the ability to move around.
Turning quickly to the current battlefield, I think your reasoning is right on point concerning the current levels of technology involved in autonomous systems. As I have said in class before, rather crudely for lack of a better way of putting it, it seems that in war we should be doing our own “dirty work.” The question of these killings taking place in the first place, putting aside the how, is itself an important question, and personally, with the help of these readings, I would say it pretty directly violates the peacetime bans on killing of that kind. We’ve talked in circles about whether we are in fact “at war” with “terror”; I suppose if that were made the case, then I could see a certain human-led series of missions continuing, allowing us to prevent attacks. But we seem to have really dug a hole here, and drones were our shovel for the most part. It’s really a shameful and terrifying part of our history and will likely be seen that way in the near future. Of course, once things have advanced to a certain degree, it will likely be seen as barbaric but also quaint. I hope to not be around at that point.
To allow conflict to just become conflict of a robotic nature creates a war where judgment is no longer used, where everything is connected to a control panel that does not make decisions but simply takes actions. Robotic forces take morality out of the question, yet within warfare we can already see that morality has been limited. The humanitarian argument is important, but it can become overlooked, because a war fought with machines redefines warfare in so many ways. If militaries have the ability to use robots, it brings up many core questions about the operation of these machines and what fail-safes are connected to them. Any device can be compromised and can become vulnerable, so if the military were all robotic, where does that leave these devices? They could be controlled and possibly altered by an outside force, which already places their security in question. Also, terming these machines “killer robots” has the ability to strike fear into everyone who hears it. Robots that have the ability to kill a man without question pose many questions about how these machines work, and if they work too well, who is at risk. They place everyone at risk because they cannot separate and determine who is who, like humans can. I am not saying I believe the whole Terminator view, that AI will one day be the fall of mankind and we are all going to be plugged into the Matrix. It comes down to this humanity: just and unjust conflict is a small part of knowing we are human and knowing what we have done is wrong. It is a core human response and emotion that makes us accountable. If this accountability is lost, where does that leave us? It creates situations where something that is not normal becomes normal. If we think about it, drones were not normal, but now we see and use them everywhere.
Once we allow killer robots to be viewed as something we need to use to preserve human life, we have to ask: at what cost to other human life? I feel like this is RoboCop, and even though RoboCop is great at his job, he still comes to terms with his humanity. We need to examine the usage of robotics, and it is interesting how many sci-fi stories have robots with directives against harming humans (I, Robot, Westworld), yet the robots still find a way around them, because at the end of the day the design is flawed, since it was designed by man.