|written by Speeder on May 06, 2014 12:45|
|I am reading Asimov's "I, Robot" (mostly because it is supposed to be a prequel to the Foundation series, which is what I really want to read).|
And I noticed that the three laws of robotics have one, and only one, very simple flaw that makes them so completely useless that all other theories about whether they work are moot.
They must be implemented by a human.
Nothing stops a human from making a robot that ignores the three laws, especially a human who, for example, wants to make a robot to blast away humans in some third-world country with oil.
Debating whether the laws work on their own, whether they are too deontological, or whether their metaphysics or philosophy is correct is moot, because, just like in computer security, the weakest link is still the humans.
|written by Gligar on May 06, 2014 14:00|
Well yes. They're written by humans, for the benefit of humans. They're ostensibly a restraining bolt to prevent robot workers from rebelling.
|They must be implemented by a human.|
That's a given, and in some cases, a necessity; a military robot might have to fire at a human, after all.
|Nothing stops a human from making a robot that ignores the three laws|
I'm not going to say it won't happen -- I'm sure we're both looking in the same direction (one of the several that exist -- naming no names) here. But I hope that the threat of reprisal (war, sanctions, etc.) will stop this from happening.
|especially a human who, for example, wants to make a robot to blast away humans in some third-world country with oil.|
|written by Speeder on May 06, 2014 14:08|
|Well, it only takes one deranged human to build one sufficiently intelligent robot with enough power to do some serious damage...|
For example, what happens if an ever-escalating war (the sort that ends with two nukes on two purely civilian cities) results in a race to make ever more aggressive robots?
|That's a given, and in some cases, a necessity; a military robot might have to fire at a human, after all.|
That seems inefficient. Wouldn't it be better to have your military robots only fire at other robots? You could sell more robots that way.
|In the time period of the books, the... what was it?.. positronic brains?... are so far advanced that humans don't understand them any more. They used some kind of self-advancing machinery to make them. So, once the three laws are solidly built in, no one would know how to remove them.|
That said, it's been a while since I read the books. Do correct me if I'm wrong.
|written by Speeder on May 07, 2014 01:43|
|Actually, in a chapter I read RIGHT AFTER making this post, my theory happens: someone built a robot with a partial first law, removing the "inaction" part, allowing a loophole that a character describes like this:|
The robot KNOWS it can race down a certain building and catch an object if it drops it, so it drops an object, aiming at a human, knowing it can obey the first law by racing down to catch the object.
Except AFTER the object is falling, the robot can change its mind and just do nothing; now the danger is the falling object, not the robot itself...
This way a robot can murder a human.
The actual rogue robot doesn't try to kill anyone, but it does find some perverse logic to convince other robots not to protect humans in a certain case; it also lies and does some other creepy stuff.
Some chapters later, in the last chapter, the scenario Albey is talking about is reached: humanity has built "government machines" that are made by other machines...
But a certain human character STILL understands the machines anyway (humans claimed they could not understand them because it would require too much math; then a character states that the machines still follow the initial program, i.e. the three laws, and explains how the seemingly wrong behaviour of the machine was actually intentional and correctly obeyed its programming).
|written by Sunfire on May 03, 2018 21:50|
|Hello Speeder, how's it going, what are you reading, do you still remember me? I'll post in my introduction thread, being away for 10 years and... still reading Asimov? I loved the Foundation books, which I've read all of. That robot machine for solving science or governance problems is drawing closer to being built for real; the advances in A.I. are quick, and already the A.I. of today could easily help govern states, but security concerns and power-mongering stand against it. I'd much rather have an A.I. overlord than a parliament full of thieves and idiots. Yeah, I am from Slovakia... The Foundation saga is like my Bible: technology knows the cure for every illness, humans live 100 years longer, food is great and plentiful, and there is lots of time to relax in a seemingly utopian Empire. Just be sure not to break the law or oppose the government and you'll be just fine.|
|written by Speeder on May 03, 2018 23:41|
|To be honest... I don't remember you, sorry.|
At least not right now.
|written by Sunfire on May 04, 2018 02:00|
|No worries, I think we were in some Noctis discoveries thread... So, did you read about Eto Demerzel? He was a robot who survived from the Spacer era into the Empire era... At one point he was able to laugh: when they asked if he was a robot, he laughed and said that was nonsense. Guess he would pass the Turing test; the fact that his actions were written down by a human (Asimov) makes him exhibit a soulful quality in the "aether". Just saying, he was my favourite robot...|
|written by Ottonine on May 26, 2018 06:31|
|written by Ottonine on May 27, 2018 10:25|