I think we give our abilities far too much credit. I can't imagine that we will accidentally create a new form of being that acts as capriciously and violently as a human being. It is equally likely that such beings would help us move past this destructive stage of our existence.
We project our worst fears onto machines: that the moment they become sentient, they will rampage and kill us all. Why? We should worry far more about what humans program computers to do. We already program drones to kill people in great quantities, and we fire nuclear missiles from across the world, guided to hit exactly the right spot.
AI can and will be a good caretaker of our needs, freeing human beings from the drudgery of slave wages so we can build a world in which people thrive, create, and are happy. That's what I imagine, anyway.
Of course we won't accidentally create a new form of being; that would take decades of deliberate development. And yes, nobody really knows what a sentient AI would be like. It could be benevolent or malevolent. There are arguments for both, and both sides sound equally convincing.