I came across a headline on Medium this morning that grabbed my attention. The subheading softened it a bit, but I was intrigued nonetheless. The article, titled “A Self-Driving Car Might Decide You Should Die”, used a series of well-known ethics questions to reach the author’s ultimate point. That point, at least as I understood it, was this: if you build an algorithm that removes human error by calculating the optimal outcome of a life-threatening situation, how do you assign value to human life so the algorithm can choose who lives and who dies? It is in the programming of that algorithm that our values are truly reflected, and that is, in fact, very scary.