These are good first thoughts. If you're interested in more, the AI explosion was covered well in this post by @ai-guy. That post in turn draws heavily on Nick Bostrom's influential book Superintelligence, which is a thorough and excellent overview of the potential dangers.
And I can't help but mention that I will have an article responding to Bostrom's arguments in Robot Ethics 2.0 (the sequel to this book), due out from Oxford University Press next year.
Finally, it's worth noting that groups like Google DeepMind, Eliezer Yudkowsky's Machine Intelligence Research Institute, Bostrom's Future of Humanity Institute, and Elon Musk's OpenAI are working together on this problem and making real progress. Perhaps the best recent news is a proposed framework for a "kill switch" for reinforcement-learning AI, aimed at letting a human safely interrupt an agent without the agent learning to resist interruption.