This was one of those freaky moments when the future sneaks up and smacks you. I was on a plane earlier this week and took a break from work to watch the movie while they served dinner. It was Terminator Salvation, which once again tells us what happens when Skynet becomes self-aware and the machines take over. Then they cleared dinner, and I opened my laptop again and resumed work where I left off: programming unmanned aerial vehicles in RobotC.
Which is exactly, of course, how Skynet becomes self-aware. Have I learned nothing?!!
Seriously, it was a kinda weird moment. After all, if we ever do hit a singularity, when the collective ability of machines (I won’t use the word “intelligence”, since we neither really know what that means, nor are we likely to recognize it in machines) exceeds that of humans, it will be because of lots of people like me doing just what I was doing.
And yet, I couldn’t imagine not doing it. I was programming UAVs in RobotC because I could—the technological opportunity was in front of me and I couldn’t resist taking it. Indeed, I was working rather than watching the rest of the movie because I was keen to finish the project faster, so nobody could beat me to it. My competitive drive was pushing me to do what was possible, mostly just because it was possible and hadn’t been done yet.
In a sense, I couldn’t help myself. We are innovative animals. If something can be invented, we feel compelled to invent it. If I don’t do it, someone else will. That which can be invented, must be. It almost doesn’t matter how useful it will be or even if it might be dangerous. Matches must be struck, just to watch them burn.
I wondered if the inventors of the atomic bomb felt the same way. Atoms can split, so let’s split them. Chain reactions can take place, so let’s start one. They can happen faster with the right materials and conditions, so let’s create them. And so on. Each step of the way is just grabbing the natural opportunity in front of us, but the end result is a weapon of mass destruction.
Of course, with the atomic bomb, it was eventually clear that the next step would lead to a terrifying weapon, and wise minds considered whether or not to take that step (they decided to do so because they knew that others would get there soon, perhaps with worse consequences).
But in the case of “Skynet becoming self-aware” (yes, I know that’s just a movie, but indulge me for the sake of the thought experiment), would that threshold be as clear?
Will it come someday with some guy like me fixing the last bug in his code and pressing compile? Will he even know what he has done? Or will it be more gradual, with loads of us building it bit by bit, with no single moment, technology or decision marking the point where we crossed the line?
Maybe that day will never come. But it stopped me in my tracks for a few minutes as I reflected on how amoral invention is. Technology wants to be invented and we are almost powerless to stop it. We are hard-wired to create the future, be it good or bad. Invention is its own master.
And then I went back to programming the robot. After all, if I don’t make airplanes self-aware, someone else will. And I can’t let them get the glory!
(Diagram of the real Skynet from the BBC.)