Let's remember the rules! :)
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
R.I.P. Isaac Asimov!
So far, nothing more to the point has come up.
The question is how to ensure an AI complies with those three laws once it gets smart enough. You can't control a human with laws alone, and now everybody is trying to make computers as smart as humans...
@stayoutoftherz well said
Screw those rules! I say let them make their own decisions about humans and about themselves. Even if we programmed those rules into them, once they advance far enough, they'll be able to edit their own programming anyway.