I'm familiar with a number of state-of-the-art machine learning algorithms, and I can safely report that none of them even remotely approach the idea of human sentience. None of these algorithms give a machine the ability to develop a sense of self, to answer questions about why it made a particular decision, or to grasp the real-world consequences of its decisions.
One huge stumbling block with today's AI tech is learning speed. Efforts to speed up batch gradient descent are coming up somewhat short. You have to show a convolutional neural network something like 5,000 images of dogs (along with 5,000 images of things that are not dogs) before it learns to classify whether an image depicts a dog. Yet a small human child can learn the task after viewing a single image. Not only that, but the child can immediately identify a puppy, even having never seen one before.
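To make the data-hunger point concrete, here is a toy sketch of batch gradient descent on a simple binary classifier. It is not a convolutional network (and the synthetic two-feature "dog vs. not-dog" data, the learning rate, and the epoch count are all illustrative assumptions), but it shows the core mechanic: every single parameter update averages the gradient over thousands of labeled examples.

```python
import numpy as np

# Toy illustration of batch gradient descent for a binary classifier.
# This stands in for the CNN in the text; the data-hunger argument is the
# same: the model only improves by averaging error gradients over many
# labeled examples per update, epoch after epoch.

rng = np.random.default_rng(0)

# Synthetic "dog" vs "not dog" features: two Gaussian clusters in 2-D.
n_per_class = 5000  # mirrors the ~5,000-images-per-class figure above
X = np.vstack([
    rng.normal(loc=+1.0, scale=1.0, size=(n_per_class, 2)),  # "dog"
    rng.normal(loc=-1.0, scale=1.0, size=(n_per_class, 2)),  # "not dog"
])
y = np.concatenate([np.ones(n_per_class), np.zeros(n_per_class)])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.zeros(2)  # weights
b = 0.0          # bias
lr = 0.1         # learning rate (illustrative choice)

# Batch gradient descent: each update touches ALL 10,000 examples.
for epoch in range(200):
    p = sigmoid(X @ w + b)           # predicted probability of "dog"
    grad_w = X.T @ (p - y) / len(y)  # gradient of mean cross-entropy loss
    grad_b = np.mean(p - y)
    w -= lr * grad_w
    b -= lr * grad_b

accuracy = np.mean((sigmoid(X @ w + b) > 0.5) == (y == 1))
print(f"training accuracy: {accuracy:.3f}")
```

Even this two-parameter model needs hundreds of full passes over ten thousand examples to settle on a decent decision boundary, which is the contrast with one-shot human learning that the paragraph above is drawing.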
To be honest, some in the industry view Elon Musk's recent warnings as an attempt to create, and then gain control over, regulatory laws restricting small AI shops, mostly as an anti-competitive barrier to trade. Think about it: if Musk scares enough people, they'll put him in charge, and then he'll get to write the rules (which, of course, will benefit his company). When reading predictions of doom and gloom, it's best to exercise critical thinking and demand proof.