
Robots are Not Your Friend – Ethics and AI

AI is taking over more and more tasks previously done by humans. For the most part this is a good thing: AI algorithms are very good at handling the tedious, repetitive tasks that leave human workers bored, frustrated, and prone to mistakes. However, there is growing evidence that, when it comes to robotics ethics and AI, the technology is not quite the solution we expected it to be.

The theory is that a computer should provide an unbiased “opinion,” but a growing body of evidence shows that AI algorithms actually share the biases of the people who build them. As recently as October 2019, the Washington Post reported that a leading algorithm used to help healthcare workers determine which patients need extra care was dramatically favoring white patients over Black patients. As more and more decisions are made by AIs without human input, biases in the code will become more obvious and more serious.

The Nature of the Problem

To understand the problem, we need to talk a bit about how current artificial intelligence works. First, it’s important to note that we’re not talking about sentient AI. Although some chatbots can pass the Turing test, we have yet to develop a sentient AI that starts asking its own questions, and we possibly never will.

What we are talking about with AI is primarily machine learning. Modern AIs are capable of learning from experience. So, for example, if you feed a hiring AI information about what successful candidates put on their resumes, it can “learn” to sift through a stack of resumes and pull out only the ones worth human attention. Airlines may feed an algorithm data about maintenance records and problems, and the algorithm will, ideally, spit out the perfect maintenance schedule to avoid accidents and flight cancellations.
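To make that concrete, here is a minimal sketch of the “learning from experience” idea. It assumes the scikit-learn library, and the resume snippets, labels, and model choice are all made up for illustration; a real screening system would train on thousands of historical hiring decisions.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Historical "experience": resume text paired with the past hiring outcome.
resumes = [
    "python machine learning five years experience",
    "led sales team exceeded targets",
    "java backend developer distributed systems",
    "retail cashier customer service",
]
hired = [1, 0, 1, 0]  # 1 = advanced to interview, 0 = rejected

# Convert text to word weights, then fit a classifier to the past outcomes.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(resumes, hired)

# New resumes are scored by whatever patterns the model picked up above,
# including any biases hidden in the historical decisions.
print(model.predict(["machine learning engineer python"]))
```

The model never receives explicit rules; it infers them from whatever the historical data happens to contain, which is exactly where the trouble described below begins.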

Unbiased Decisions?

So, what is the problem? Let’s take the example of the resume-skimming AI. The first resume-skimming tools were simple keyword filters, and candidates could get past them simply by wording their resumes to include the right keywords. Essentially, the AI became a test of how well you could follow instructions. When the AI is a learning algorithm, however, it can start to pick up on the company’s biases. In 2018, Amazon discovered to their embarrassment that the computer models they used to screen resumes were sexist. Because the company had hired mostly men for certain tech jobs in the past, and received mostly resumes from men, the AI inadvertently learned that “men are better at tech jobs.” Oops. Thankfully, Amazon spotted the problem while the tool was still in development, and it was never used to screen real resumes.
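Here is a toy illustration of how that can happen. This is not Amazon’s actual system; scikit-learn and the four-resume dataset are assumptions made for the sake of the sketch. Gender is never an explicit input, but the word “women’s” only appears on resumes that were historically rejected, so the model learns it as a negative signal:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Skewed history: every resume mentioning a women's club was rejected.
resumes = [
    "software engineer chess club",           # hired
    "software engineer robotics club",        # hired
    "software engineer women's chess club",   # rejected
    "software engineer women's coding club",  # rejected
]
hired = [1, 1, 0, 0]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(resumes, hired)

# Two otherwise-similar resumes get different scores purely because of
# the proxy word the model absorbed from the biased history.
print(model.predict_proba(["software engineer women's robotics club"]))
print(model.predict_proba(["software engineer robotics club"]))
```

Note that simply deleting the offending word is not a real fix, since the model can latch onto correlated proxies such as club names or colleges; the bias has to be addressed in the training data and the design process itself.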

These aren’t small issues for robotics ethics and AI. These systems are making decisions about healthcare and employment, and may in the future be used to screen applicants for housing. Oh, and they’re already being used to find criminals, with issues that should be obvious.

What Kind of Ethics Code is Needed?

There’s general agreement that there is a problem: sexist hiring AIs, facial recognition systems that can’t tell Black people apart or, worse, mistake them for gorillas. The examples go on and on.

The disagreement, of course, is over what the code should say and how it should be implemented. Here are some of the obstacles:

  1. Some people don’t think we can actually implement a proper ethics code, because ethics is subjective and AI is hard science. The counter to that is that algorithms demonstrably do show subjective biases.
  2. Ethics varies across cultures, meaning that it may not be possible to come up with a code everyone can agree on.
  3. It is very hard to fix bias in code that is already active.

However, there are some solid approaches to a code of ethics already in play. IBM, for example, has released a code of ethics that requires, among other things, that designers know company policy and understand the values of users. It is a good example of robotics ethics and AI done thoughtfully.

When does Ethics Become a Concern when Designing an AI?

There is a growing consensus that whatever ethical rules we apply to AI algorithms have to be, much like Asimov’s famous Three Laws of Robotics, baked into the code. (In “The Caves of Steel,” a roboticist tests a highly advanced robot to determine whether it is equipped with the First Law, which prevents a robot from knowingly harming a human, by having it perform a series of tasks that seem completely unrelated. The First Law, part of the robot’s code of ethics, is so basic a property of the code that its absence would cause other obvious problems.)

In other words, ethics is, or should be, a concern before the designer even sits down to start working on the code. The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems is putting together a body of work to help designers do exactly that. If you are hiring a designer, the onus is on you to make sure that your designer follows ethical principles, which starts by sitting down with them before they begin and outlining what you consider important. Are you trying to increase diversity in hiring? Provide personalized customer service? These goals will affect the kind of ethics that needs to be coded into your AI. National, state, and local regulations are also starting to come into play as more jurisdictions attempt to regulate AIs to prevent these kinds of biases from showing up.

What About the Future?

Any ethical code you might apply has to take into account current regulations, but it also needs to be future-proofed. Perhaps in the future we’ll have AIs so sophisticated that we can simply tell them what the ethical rules are, but we are a long way from that point.

Future-proofing, in reality, means coming up with ethical principles that are available to designers and to customers, so they know what kind of principles your product is based on. These principles can then be applied to whatever technology is developed, and you also need to do your best to anticipate future concerns around bias, privacy, and security. As the internet and the real, physical world become more and more integrated, AIs will make even more decisions about our lives.

Conclusion

We need to take steps now to make sure that AIs are making better decisions, whether that means rules about balancing the number of male and female faces shown to an algorithm, or writing “Laws of Robotics” into the base code. The sketch below shows what the first idea might look like in practice.
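As a minimal sketch (the dataset fields and the 800/200 split are made up for illustration), one such rule could be enforced by rebalancing the training data so the algorithm sees equal numbers of faces from each group:

```python
import random

# Hypothetical face dataset, heavily skewed toward one group.
faces = [{"id": i, "gender": "male"} for i in range(800)] + \
        [{"id": i, "gender": "female"} for i in range(800, 1000)]

# Group the examples by gender.
by_gender = {"male": [], "female": []}
for face in faces:
    by_gender[face["gender"]].append(face)

# Downsample every group to the size of the smallest one, so each is
# equally represented in training.
n = min(len(group) for group in by_gender.values())
balanced = [face for group in by_gender.values()
            for face in random.sample(group, n)]
random.shuffle(balanced)
print(len(balanced), "training faces,", n, "per gender")
```

If you need help defining ethical criteria when developing AI tools for your company, contact STRATECTA today.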

Georg Tichy

Georg Tichy is a management consultant in Europe, focusing on top-management consultancy, project management, corporate reporting, and funding support. Dr. Georg Tichy is also a trainer, university lecturer, and advisor on current economic issues. Contact me or Book a Meeting