We’ve all watched, or at least know the premise of, the Terminator movie franchise. It feeds on a fear we have over the creation of free-thinking killer robots. I’m currently watching the series Battlestar Galactica, which deals with a similar theme. There are a seemingly infinite number of films and series dealing with this topic to varying degrees of success and infamy. I’ve just watched this video produced by the independent media organisation Double Down News. To suggest its intention is to scare you would be unfair, but be prepared to be scared.
Worried yet?
What it does do, though, beyond that is raise an important point about the programming of artificially intelligent robots. It is important to start from the premise that we will not be able to create fully self-aware and conscious robots. We do not understand the brain well enough to fully understand consciousness, so creating it within a computer programme is not actually possible. How the programme is written, then, is what we need to understand. As the video suggests, the computer would be programmed to kill all humans, or to convince all humans not to reproduce and ultimately see them die out as a species. Even I could probably write the code for “kill all humans”, although teaching the machine to recognise what a human is would be the tough part. Could a computer write that programme? Not unless it had been taught the basics in the first place.
It’s the human input we need to be concerned with. The computer may be able to learn, but in and of itself it will only learn what, and how, it has been programmed to learn. If we want to detach ourselves from that responsibility, it is easy to suggest this is a little like people. The video raises an interesting point, too, that I had never considered before. If people can easily create malware to infect your computer, what about the artificially intelligent version of that, or a robot infected with a virus? We shouldn’t be naïve enough to believe our governments have only our best interests at heart when they produce anything, but we should also fear the malware version of an AI robot. What if it simply malfunctions?
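To make that point concrete, here is a toy sketch of my own (it is not from the video, and the features and labels are entirely made up): a tiny classifier whose idea of “human” comes solely from the examples a person feeds it. Whatever the person teaches, the machine repeats.

```python
# A minimal, hypothetical sketch: the model only "knows" what its
# human-supplied labels tell it.
from sklearn.tree import DecisionTreeClassifier

# Made-up training data: each row is [height_cm, walks_on_two_legs]
# and every label ("human" / "not human") comes from a person.
features = [[170, 1], [160, 1], [30, 0], [45, 0]]
labels = ["human", "human", "not human", "not human"]

model = DecisionTreeClassifier().fit(features, labels)

# It will happily call anything tall and bipedal a "human",
# because that is all its human teacher ever showed it.
print(model.predict([[180, 1], [20, 0]]))
```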
And of course there are the nanobots that will land on your head and kill you. Suppressing an unruly populace will be pretty straightforward if you have that kind of technology. As someone who has always romantically believed in the possibility of a good revolution one day, that is rather concerning. What, then, if someone hacks these miniature killing machines and the virus infecting them leads them to kill everyone? Or, more positively, to kill the current people in power trying to kill the partisans. Maybe there is hope after all. No matter what one side does, it simply forces the other to step up and be creative. Food for thought.
