Here's a recent article I read that discusses various decision-making algorithms for autonomous vehicles, but I fail to see how they could be applied consistently when people hold different values:
https://www.fastcodesign.com/90149351/this-game-forces-you-to-decide-one-of-the-trickiest-ethical-dilemmas-in-tech
If we have trouble programming cars even when traffic rules and insurance policies constrain the problem, how far will we get with other applications of AI?

That only deals with one AI. What if there's an AI in the van as well, and it acts differently? There's a very real chance they both mow down the cyclist and collide with each other, damaging both vehicles, causing the miscarriage, and injuring the men. Dumb AI.
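Here's a toy sketch of that failure mode, just to make the point concrete. Everything in it is hypothetical (the action names, the harm estimates, the weights); it only shows that two vehicles scoring the exact same scene under different value systems can each pick a different "optimal" maneuver, with nothing coordinating the joint result:

```python
# Toy model: two vehicles evaluate the same emergency scene, but each
# weighs the possible harms differently. All numbers are made up.

ACTIONS = ["swerve_left", "swerve_right", "brake_hard"]

# Estimated harms per action: (cyclists hit, pedestrians hit, vehicle damage)
OUTCOMES = {
    "swerve_left":  (1, 0, 0),  # hits the cyclist
    "swerve_right": (0, 2, 0),  # hits two pedestrians
    "brake_hard":   (0, 0, 1),  # rear-end collision, property damage only
}

def choose(weights):
    """Pick the action that minimizes this agent's weighted harm score."""
    def cost(action):
        cyclists, pedestrians, damage = OUTCOMES[action]
        return (weights["cyclist"] * cyclists
                + weights["pedestrian"] * pedestrians
                + weights["damage"] * damage)
    return min(ACTIONS, key=cost)

# The car's maker prioritized human life; the van's prioritized
# avoiding property damage. Both are internally consistent policies.
car = choose({"cyclist": 5, "pedestrian": 10, "damage": 1})
van = choose({"cyclist": 1, "pedestrian": 1, "damage": 10})

print(f"car: {car}, van: {van}")  # car: brake_hard, van: swerve_left
```

Each agent is behaving "correctly" by its own lights, yet the van swerves into the cyclist while the car brakes into its path, and the combined outcome is worse than either intended. That's the scenario I'm describing above.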

I guess that's my point: if we can't get the simpler problems right, what are the chances we'll get the more complex ones right?