
Stop Killer Robots

Writing by Anna Olszewska. Artwork from https://www.stopkillerrobots.org/.


Lethal Autonomous Weapons (LAWs) are still characterised as a niche topic in political debates. And let’s be real, we seem to have enough problems on our minds: the climate crisis, the UK government falling to pieces, the war in Ukraine – to name just a few. On top of that, we have personal issues like rent and gas prices, plus no luck swiping on Tinder (which leaves us sobbing and rewatching Fleabag every Sunday evening). But while I still have your attention, I want to tell you that if we don’t do anything about LAWs now, we may become even more miserable soon.


The Campaign to Stop Killer Robots has recently published a report investigating the role of UK universities in the development of autonomous weapons systems. The University of Edinburgh was found to be one of the institutions that both lacks transparency and has no safeguarding policies in place to prevent its research projects from contributing to the development of LAWs.


The report lists a few research projects undertaken at the University of Edinburgh, but highlights one in particular, marked in red: “Signal Processing in the Information Age”, affiliated with the School of Engineering. Red means that the project has the potential “to increase the degree of automation in weapons systems” and that it is “funded through an agency of the Ministry of Defence or by a contractor with a track record of developing military autonomous technology”. A couple of others, such as the “UKRI Trustworthy Autonomous Systems Node in Governance and Regulation”, affiliated with the School of Informatics, and the “EPSRC Centre for Doctoral Training in Robotics and Autonomous Systems”, affiliated with the Edinburgh Centre for Robotics, are marked in amber, meaning the projects have dual-use potential. Without any safeguarding policies, they could therefore still contribute to autonomous weapons systems.

LAWs are artificial intelligence (AI) controlled robots that operate without meaningful human control. The decision of whether or not to kill someone would therefore be made by a machine with no moral system or human perception. No one can promise us that the machine would correctly distinguish between a civilian and a combatant. Given such dehumanisation of force and human detachment from military conflicts, people will become even more powerless in the face of war. Some may say that artificial intelligence is unbiased, but AI is programmed by humans. If humans are biased, why wouldn’t AI be? Facial recognition algorithms often fail to recognise people of colour and women, which suggests that autonomous weapons systems would reinforce patriarchy and white supremacy. What’s more, there is no international law in place that would hold killer robots accountable for their mistakes; how do you punish a robot whose actions didn’t directly involve a human?


While there is still time to act, we need to pressure institutions to introduce clear ethics policies and to inform staff and students about the possible dangers of their research projects. Semi-autonomous machine gun robots are already used by the Israel Defence Forces in Gaza. And as terrifying as it sounds, such research will continue to be undertaken as long as there are no international laws or regulations in place to monitor it.


So, what can we do?

  • Pressure the university to sign the Future of Life pledge (which it refused to sign in 2021)

  • Tell your friends about this issue and watch Immoral Code, the new short documentary film by Stop Killer Robots

  • Support a student campaign to Stop Killer Robots. You can join the Amnesty International society’s working group on this campaign (contact)

