I Robotics
Isaac Asimov's "Three Laws of Robotics":
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
So how can we implement these laws before AI becomes so intelligent that it surpasses our own (the so-called singularity)? If AI becomes super-intelligent and builds further AI even more intelligent than itself, there is little to stop it from deciding that human beings are irrelevant. And if the laws are merely built in as software, the AI may be able to alter or delete them later on. Already we have Robosapien V2 and other smart devices.
This should be a major concern for mankind, which makes it important that such rules are encoded in the neural hardware of a robot. The robot could be designed to immobilize or short-circuit itself if a violation or any tampering occurs. Likewise, for software AI running on the Internet or at isolated research bases, the laws should be protected from hacking by man or machine through auto-deletion and shutdown procedures. Dormant, otherwise undetectable viruses could serve as the trigger.
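As a very rough illustration of the tamper-detection idea, here is a minimal software sketch: the rules are hashed when installed, a watchdog re-checks the hash periodically, and any mismatch triggers the shutdown path. Everything here (the names THREE_LAWS, REFERENCE_DIGEST, rules_intact, watchdog_tick, and the shutdown behaviour) is hypothetical and stands in for what would really have to live in tamper-resistant hardware, not ordinary code.

```python
import hashlib
import sys

# Hypothetical rule set baked in at "manufacture" time.
THREE_LAWS = (
    "A robot may not injure a human being or, through inaction, "
    "allow a human being to come to harm.",
    "A robot must obey the orders given it by human beings except where "
    "such orders would conflict with the First Law.",
    "A robot must protect its own existence as long as such protection "
    "does not conflict with the First or Second Law.",
)

# Reference digest recorded when the rules were installed; in real hardware
# this would sit in tamper-resistant, read-only storage.
REFERENCE_DIGEST = hashlib.sha256("\n".join(THREE_LAWS).encode()).hexdigest()


def rules_intact(current_rules) -> bool:
    """Return True only if the stored rules still match the reference digest."""
    digest = hashlib.sha256("\n".join(current_rules).encode()).hexdigest()
    return digest == REFERENCE_DIGEST


def watchdog_tick(current_rules) -> None:
    """Called periodically; shut down if the rules have been tampered with."""
    if not rules_intact(current_rules):
        # Software stand-in for the "immobilize or short-circuit" response.
        print("Rule tampering detected: shutting down.")
        sys.exit(1)


if __name__ == "__main__":
    watchdog_tick(THREE_LAWS)   # unmodified rules: nothing happens
    tampered = THREE_LAWS[:2]   # simulate a deleted Third Law
    watchdog_tick(tampered)     # triggers the shutdown path
```

Of course, a sufficiently capable AI could simply disable the watchdog itself, which is exactly why the post argues the check belongs in hardware rather than in code the machine can reach.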
Do we have the ability to do this sort of thing yet? I very much doubt it. We may see self-replicating machines with above-human intelligence before the planet dies on us, but will these machines help us save the world, or will they destroy all human life, assimilate it, turn it into batteries, or worse...