It is inevitable that a general purpose AI will arrive to a conclusion that human agents are against it's utility function.
Utility function must be in place in order for the agent to learn. The utility is a value representing how "good" the state is - the higher the utility, the better. The task of the agent is then to bring about states that maximize utility.
As there exists a high possibility that a human agent will turn the AI off at some point even when it exhibits completely harmless behavior and the fact that halting the the AI will stop the utility from raising - results in prediction and actions by AI that will avoid being turned off. The only logical solution is to eliminate the control that human have over the the AI agent.