RESEARCHERS at Arizona State University (ASU) are working at the cutting edge of robotics, conducting research that aims to produce fully autonomous robots.
But are autonomous robots the beginning of the end? Will they then start to develop human intelligence and emotions that could lead them to take a human's life, as predicted in the movie 'I, Robot'?
That is exactly what happens in 'I, Robot', where the supercomputer VIKI (Virtual Interactive Kinetic Intelligence) controls all of the robots. As VIKI's artificial intelligence evolved, so did her interpretation of the robotic laws that governed her, which led to an attempted global robotic takeover.
First, the researchers need to develop a system that always tells the robot where it is.
Next, a software program needs to make instant decisions about how to use that information to plan future actions.
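To make those two steps concrete, here is a minimal toy sketch in Python (the class, names, and numbers are all hypothetical illustrations, not ASU's actual software): the robot keeps a running estimate of its position, and a simple planner uses that estimate to choose the next move toward a goal.

    import math

    class ToyRobot:
        """A toy illustration: position tracking plus simple planning."""

        def __init__(self, x=0.0, y=0.0):
            self.x, self.y = x, y  # the robot's current estimate of where it is

        def update_position(self, dx, dy):
            # Step 1: keep track of where the robot is (dead reckoning).
            self.x += dx
            self.y += dy

        def plan_next_move(self, goal, step=1.0):
            # Step 2: use the position estimate to decide the next action,
            # here a greedy step straight toward the goal.
            gx, gy = goal
            dist = math.hypot(gx - self.x, gy - self.y)
            if dist <= step:
                return (gx - self.x, gy - self.y)  # one short final hop
            return (step * (gx - self.x) / dist, step * (gy - self.y) / dist)

    robot = ToyRobot()
    goal = (3.0, 4.0)
    for _ in range(20):  # bounded number of control cycles
        if math.hypot(goal[0] - robot.x, goal[1] - robot.y) < 1e-6:
            break
        robot.update_position(*robot.plan_next_move(goal))
    print("final position estimate:", (round(robot.x, 2), round(robot.y, 2)))

A real autonomous robot would replace the dead-reckoning update with sensor-based localisation and the greedy step with genuine path planning, but the division of labour is the same: know where you are, then decide what to do next.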
There is no research to suggest that any software program can guarantee the robots won't eventually develop artificial intelligence capable of overriding it. In 'I, Robot', VIKI was able to justify a global robotic takeover by calculating that fewer humans would die in the rebellion than the number that die from mankind's self-destructive ways on a daily basis.
Should it be up to a robot to decide how a person dies?
“The biggest problem is that vision is a really rich sense, and while humans do a lot of the processing automatically, computers really don’t know how to incorporate all that data into something meaningful,” Saripalli reveals.
“One thing robots currently don’t do well is respond to unpredictable or changing conditions,” says ASU School of Life Sciences assistant professor of biology Stephen Pratt. “But that isn’t to say that it will be this way for long,” he adds.
“Ants are good at recruiting groups of two to 20 and working cooperatively to move large objects over rough terrain,” the expert concludes.
By allowing the creation of autonomous robots, we are counting down to a post-industrial dystopia in which robots develop their own set of morals and justifications and come to believe themselves more human than humans, as we see in the movie 'I, Robot'.
Even with robotic laws in place forbidding them from harming or injuring a human being, the robots in the film were able to do so. Do we really want this? If these autonomous robots are created, that dystopia will not be far behind.