Machines are not human. There should be no concept of “personhood” for robots when being a person requires being human. They should be treated completely separately.
This notion that robots are going to be “general” or “equal” to humans is an absurd fallacy. They will share zero biological identity with humans, and they are not constrained by biology.
They are goddamn machines. Regardless of their level of intelligence, they are machines, and therefore they have the same rights as the robot that welds car parts together on an assembly line: none. We also already know that machine learning algorithms are heavily biased by the data they are trained on. Think of Facebook’s ML for identifying “hate speech,” and that should be enough to give you pause about whether a general-purpose machine is “independent,” “good,” or “acting of its own free will.” Of course they’re biased, and they will lean heavily toward their creators’ intent. You can think of AI as an extension of its creators, producing a result more quickly than the creators could themselves.
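To make that concrete, here is a minimal sketch with invented data (not Facebook’s actual system): a toy “hate speech” classifier trained on labels that carry the labelers’ slant will simply reproduce that slant.

```python
# Hypothetical sketch: a toy classifier trained on biased labels.
# All data and labels below are invented for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# The labelers flagged every post mentioning "protest" as hateful,
# so that association is what the model learns.
posts = [
    "join the protest downtown",        # labeled hateful by the labelers
    "protest the new policy tomorrow",  # labeled hateful by the labelers
    "lovely weather today",
    "what a great game last night",
]
labels = [1, 1, 0, 0]  # 1 = "hate speech" according to the labelers

model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(posts, labels)

# A harmless post gets flagged almost certainly as [1], purely because it
# contains the word "protest". The model is an extension of its creators'
# labeling decisions, not an independent judge.
print(model.predict(["peaceful protest for clean water"]))
```

The model isn’t malicious or independent; it just amplifies whatever labeling decisions it was handed.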
If these things are true, then the law should most certainly be that the creator of the machine is responsible for its actions. If that responsibility is not squarely placed on the creator, then they have zero incentive to make sure the machine is “doing no harm.” It also becomes extremely easy for the creator to claim that the machine’s misdeeds are its own, even though it was fed the very bias it acts on, and there would be absolutely no way of verifying that claim (read some of the articles on how the creators of current ML implementations have no idea how their machines came to their conclusions).
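On the “no idea how it came to its conclusions” point, a rough, hypothetical illustration: once even a small neural network is trained, its “reasoning” is nothing but weight matrices, which is why its own trainers can’t point to a human-readable justification for any particular decision.

```python
# Illustration with made-up data: after training, the model's internals are
# just arrays of numbers, not legible reasons.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))           # 200 invented examples, 10 features
y = (X[:, 0] + X[:, 3] > 0).astype(int)  # the "true rule", unknown to the model

clf = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
clf.fit(X, y)

# The "explanation" for any single prediction is spread across ~1,400 weights.
for i, layer in enumerate(clf.coefs_):
    print(f"layer {i} weight matrix shape: {layer.shape}")
# Prints shapes like (10, 32), (32, 32), (32, 1): just numbers, which is why
# even the people who trained it can't point to a human-readable reason
# for a given decision.
```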
Wow, the notion that developers of ML have no idea how their creation comes to conclusions baffles me. How could that be so?