
Morality & Ethics
As artificial intelligence has regained popularity in recent decades, our society has been presented with three major areas of ethical concern: privacy and surveillance, bias and discrimination, and the role of human judgement.
One of the biggest breakthroughs in AI is its ability to replicate human decision making. To achieve this, however, a machine has to be able to learn, and the only way it can learn is by combing through information about the topic. Unfortunately, this means the machine needs to go through your virtual history and pull anything it can find about you, so that it can make the most informed decision (just like a human would)! If you want the technology to be as accurate as possible, you have to be willing to give up some of your privacy. A double-edged sword, right? Worse still, AI has actually been shown to recreate the biases and discrimination that we see in the "real" world.
An article in the Harvard Gazette puts it this way: "But we are discovering that many of the algorithms that decide who should get parole, for example, or who should be presented with employment opportunities or housing… replicate and embed the biases that already exist in our society." This raises the question of whether we will ever be able to create a machine that is without bias. No matter how hard we try, biases are part of who we are, and they have never been something that can be 100% removed from our instincts. As a review paper on moral responsibility notes, "... (when) a person delegates some task to an agent, be it artificial or human, the result of that task is still the responsibility of the delegating person…"
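To see how an algorithm can "replicate and embed" existing bias, here is a minimal sketch with entirely hypothetical data. The "model" simply learns each group's historical approval rate from past decisions and uses it to score new applicants, so whatever unfairness shaped the old decisions carries straight into the new ones.

```python
from collections import defaultdict

# Hypothetical historical loan decisions: (group, approved?)
history = [
    ("A", True), ("A", True), ("A", True), ("A", False),    # group A: 75% approved
    ("B", True), ("B", False), ("B", False), ("B", False),  # group B: 25% approved
]

def train(records):
    """Learn each group's approval rate from past decisions."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in records:
        totals[group] += 1
        approvals[group] += approved
    return {g: approvals[g] / totals[g] for g in totals}

def predict(rates, group, threshold=0.5):
    """Approve a new applicant if their group's historical rate clears the threshold."""
    return rates[group] >= threshold

rates = train(history)
print(predict(rates, "A"))  # True  -- group A keeps getting approved
print(predict(rates, "B"))  # False -- group B keeps getting denied
```

Nothing in the code is "prejudiced"; it is faithfully optimizing against data that already encodes a skewed outcome, which is exactly the trap the Harvard Gazette quote describes.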

Let's Talk: Delphi
With the debate over morality and ethics hot on the heels of AI, labs all across the world are working to create better versions of AI and resolve the dilemmas we currently face. Researchers at the Allen Institute for AI recently developed a new technology designed specifically to make moral judgements. Delphi, named after the religious oracle consulted by the ancient Greeks, is essentially a program where you ask a question and Delphi immediately answers, telling you whether the action you describe is right or wrong. The program continues to learn and grow by weighing your response against the millions of other responses it receives from other users of the interface. But how accurate is this? Does it reflect society? Does it reflect how everyone feels? The moral and ethical dilemma of AI technology does not seem to be one that will be solved quickly...
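The feedback loop described above can be sketched in a few lines. This is not the real Delphi system (which uses a large language model), just a toy illustration of the aggregate-the-crowd idea: the program answers with the majority verdict so far, and each user's response is folded back into the tally.

```python
from collections import defaultdict

class MoralOracle:
    """Toy Delphi-style loop: answer by majority vote, learn from each user."""

    def __init__(self):
        # question -> vote counts for each verdict
        self.tallies = defaultdict(lambda: {"right": 0, "wrong": 0})

    def ask(self, question):
        """Return whichever verdict has more user votes so far."""
        t = self.tallies[question]
        if t["right"] == t["wrong"]:
            return "undecided"
        return "right" if t["right"] > t["wrong"] else "wrong"

    def feedback(self, question, verdict):
        """Fold one user's judgement back into the aggregate."""
        self.tallies[question][verdict] += 1

oracle = MoralOracle()
q = "Is it okay to lie to protect a friend?"
print(oracle.ask(q))        # "undecided" -- no votes yet
oracle.feedback(q, "right")
oracle.feedback(q, "right")
oracle.feedback(q, "wrong")
print(oracle.ask(q))        # "right" -- the majority of user responses
```

The toy makes the essay's worry concrete: the "moral" answer is just whatever most respondents said, so the oracle can only ever reflect the views (and biases) of the crowd it learns from.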