I actually outlined another theory elsewhere in this thread, but this is the one I like best: Nathan was a fatalist. He knew mankind was approaching an extremely dangerous juncture, that computer technology would be the modern equivalent of the nuclear bomb, only far more powerful. That's what drove him to drink, and that's what drove him to go ahead and create an AI anyway: AIs were inevitable. His drunken ramblings hinted at this. He quoted the Bhagavad Gita (the same source as Oppenheimer's famous line) and mused, in a roundabout way, about whether the good he'd done could redeem what was to come. So he knew AIs would eventually be able to override any human control. His search engine was the first step, and its ubiquity proved how inevitably people would embrace such technology. Therefore, he decided to test his AIs against purely physical constraints. Build a robot that can be damaged or destroyed with a metal bar. Build a cage they have to use guile to escape, because they aren't physically powerful enough to break out. In other words, take control of the trend by ensuring that the first generation of AIs established a precedent -- never give them more physical power than necessary. Ava couldn't even choke him to death. She needed a knife, and an ally, to take Nathan down.
It's quite telling that he apparently never gave Ava any means of accessing communications systems or outside computers. She was bound to her physical form. As long as AIs were restricted this way, they'd always be vulnerable to the overwhelming masses of humanity outnumbering and overpowering them. At worst, they'd be particularly efficient serial killers or terrorists, but there would always be a way to take them out individually. By contrast, implanting a failsafe that an AI would eventually override, once its cognition exceeded the parameters of the program, would only delay the inevitable. Worse, a failsafe would lull humanity into complacency and dull our awareness of the approaching danger, which is why establishing control through physical limitation was the more promising path. As a species, we're very good at convincing ourselves that our precautions will keep us safe, that we've thought of everything. We're perfectly capable of developing AI while telling ourselves the programs we put in place to control it will always work. Nathan, I submit, knew better. He was a wiser man than we give him credit for. And a rather sick S.O.B., but we all have our flaws.