I think your comment was truncated. I'm interested in hearing your take on Elon Musk's views.

Musk’s alarming views on the dangers of A.I. first went viral after he spoke at M.I.T. in 2014—speculating (pre-Trump) that A.I. was probably humanity’s “biggest existential threat.” He added that he was increasingly inclined to think there should be some national or international regulatory oversight—anathema to Silicon Valley—“to make sure that we don’t do something very foolish.”
https://www.vanityfair.com/news/2017/03/elon-musk-billion-dollar-crusade-to-stop-ai-space-x

Do you think he's right, wrong or in between?

I think he is entitled to his opinion (like everyone else). Every time a technology threatens to push the boundaries of our moral and ethical understanding into territory it is unfamiliar with or unable to deal with (nuclear weapons, GM crops, genome sequencing, etc.), opposing forces try to work out where to draw the line. Yet I have never seen a technology that is purely good or evil, because it is still just an extension of human thought and behaviour. Science fiction writers have been talking about the same things for decades, and I am sure humanity will find ways to create catastrophe using AI and then try to figure out some way to save the world, if only so we can have more science fiction movies and books to watch :D

I'm not sure how much regulations and laws have prevented people who want to do something very foolish from doing exactly that. But they have ensured that people who want to manipulate the law know exactly how to plan their legal arguments.

I guess the answer to your question is that he is both right and wrong (is that in between or not?).

I think the difference people point to with AI is that our previous tools lacked agency, whereas AI will either become sophisticated enough to have agency of its own, or we will, at least, be surrendering more of our human agency to the AIs.

Here is a recent article I read that talks about various algorithms for autonomous vehicles, but I fail to see how they can be applied consistently when people have different values:
https://www.fastcodesign.com/90149351/this-game-forces-you-to-decide-one-of-the-trickiest-ethical-dilemmas-in-tech
If we have trouble programming cars when there are traffic rules and insurance policies, how far would we get with other applications of AI?
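To make that concern concrete, here is a toy sketch (every name, harm number, and weight in it is invented for illustration, not taken from the article): the same collision scenario yields a different "correct" action depending on whose value weights you hand the chooser.

# Toy sketch: one collision scenario, two value systems, two different
# "correct" answers. All numbers and names are hypothetical.

# Each candidate action is scored by the fraction of harm it inflicts
# on each party (0.0 = no harm, 1.0 = maximal harm).
SCENARIO = {
    "swerve_left":  {"passengers": 0.8,  "pedestrians": 0.0},
    "swerve_right": {"passengers": 0.0,  "pedestrians": 0.6},
    "brake_only":   {"passengers": 0.25, "pedestrians": 0.25},
}

# Two value systems, expressed as weights on each party's harm.
UTILITARIAN = {"passengers": 1.0, "pedestrians": 1.0}      # all harm counts equally
SELF_PROTECTIVE = {"passengers": 3.0, "pedestrians": 1.0}  # occupants weighted higher

def choose_action(scenario, weights):
    """Return the action with the lowest weighted total harm."""
    def weighted_harm(action):
        return sum(weights[party] * harm
                   for party, harm in scenario[action].items())
    return min(scenario, key=weighted_harm)

print(choose_action(SCENARIO, UTILITARIAN))      # -> brake_only
print(choose_action(SCENARIO, SELF_PROTECTIVE))  # -> swerve_right

Same scenario, same code, two defensible answers; the hard part isn't the algorithm, it's agreeing on the weights.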

That only deals with one AI. What if there's an AI in the van and it acts differently? There's a very real chance they both mow down the cyclist, collide and damage both vehicles, cause the miscarriage, and injure the men. Dumb AI.
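Continuing the toy sketch above (the joint-outcome table is just as made up as the rest): if the car and the van each pick their locally "best" action under different weights, with no coordination, the combination can be worse than anything either one scored on its own.

# Two vehicles choose independently under different value systems.
car_choice = choose_action(SCENARIO, UTILITARIAN)      # brake_only
van_choice = choose_action(SCENARIO, SELF_PROTECTIVE)  # swerve_right

# Hypothetical joint outcomes: neither single-vehicle harm score
# accounts for what the other vehicle does.
JOINT_OUTCOMES = {
    ("brake_only", "brake_only"):   "both stop; no collision",
    ("brake_only", "swerve_right"): "van swerves across the braking car; both crash",
}

print(JOINT_OUTCOMES.get((car_choice, van_choice), "unmodelled combination"))
# -> van swerves across the braking car; both crash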