Computer Reliability
Reliability of Ava |
One of the main factors in determining consciousness of a computer would be reliability. If it was to be decided one day that a robot had consciousness and therefore had to be treated as a sentient life form with its own set of rights to privacy and freedoms, but all of a sudden a latent error caused it to crash partially or wholly, what should the robot be considered? Should we accept the robot to have privacy and not fix the bug or should we go in and try to fix it which may alter its consciousness? These questions would have to be answered before a robot could be given any higher status than that of a machine that humans are allowed to destroy, own and modify at will. Another issue that would have to be addressed are any features or potentially harmful actions that may arise due to how complex the code would have to be. One case that came up where robots where programmed to communicate when they arrived at a good resource in an experiment by turning on a signal light so other robots could make it to the beneficial resource. After 500 generations of mutating the best 200 robots, the robots had learned to lie and turn off their light when they reached a beneficial source of points [1]. This experiment shows that robots need to be carefully paid attention to when giving them goals other than that of helping humans.
Reliability of the facility
The main failure of a technology is the power system of the facility. When Ava puts current back into the induction plates, the power system of the facility fails for a period of time. This causes the facility to go into automatic "safe" mode where all doors lock. A protocol designed by Nathan. Nathan, however, does not know this. Ava, over the course of her sessions with Caleb, is able to convince Caleb to turn against Nathan. Thus Caleb reprograms the protocol of the system so that all doors unlock during power failure. This then allows for Ava's escape and killing of Nathan, as well as the entrapment of Caleb.
References
[1] Stuart Fox, "Evolving Robots Learn to Lie to Each Other", (9/30/2015), http://www.popsci.com/scitech/article/2009-08/evolving-robots-learn-lie-hide-resources-each-other