When talking about artificial intelligence, the first questions that come to mind are: What is it? What can it do? And should we be worried?
The common threats and risks associated with increasing automation and artificial intelligence (AI) in systems include:
- Loss of jobs – when will machines take over routine tasks?
- Liability issues – who is responsible when AI does the job?
- Lack of transparency – what does it actually do and why?
- Loss of humanity and social relationships – when we talk with intelligent agents, who are we actually talking to: ourselves, or just an echo?
- Loss of privacy and personal autonomy – when our data is shared and situational decisions are pre-automated, do we have a say anymore?
- And need we even mention the opportunities for manipulation, surveillance and illegal behaviour… though in fact these are risks common to any interaction.
Should we be worried? Perhaps, but there are several ways of designing AI systems in an ethical way. Here are some points inspired by the World Economic Forum guidelines on governance for a holistic approach to implementing ethics in AI. Those guidelines were also a reference point for a Propellerhat research survey on the philosophy of AI.
How to implement ethics in AI?
When implementing ethics by technology design we can choose different approaches. When we teach a machine “bottom-up” through our own interactive behaviour (the data), we also teach it the “bad” behaviour. How can you tell what is good or bad? That is the question philosophers have been thinking about for thousands of years. What are the rules of morality? We as humans think we can judge what is right and what is wrong, but when we just look at “the data” with no historical or contextual understanding, I would say it is not that easy to make moral judgements. We tend to make those based on our own culture, context and, yes, the philosophy we follow in our daily lives. Any behaviour that has happened earlier can be learned and repeated. It is neither right nor wrong for the AI, which just mimics the data and the behaviour as-it-happened.
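To make this concrete, here is a rough Python sketch (with a completely made-up interaction log) of what purely bottom-up imitation looks like: the agent just repeats whatever response humans gave most often, whether that behaviour was good or bad.

```python
from collections import Counter

# Hypothetical log of human behaviour the machine learns from.
# The situations and responses are invented for illustration.
interaction_log = [
    ("greeting", "polite_reply"),
    ("greeting", "polite_reply"),
    ("complaint", "dismissive_reply"),   # a "bad" behaviour present in the data
    ("complaint", "dismissive_reply"),
    ("complaint", "helpful_reply"),
]

def imitate(situation: str) -> str:
    """Pick whatever response humans most often gave in this situation.

    The agent has no notion of right or wrong; it simply repeats the most
    frequent behaviour as-it-happened, including the undesirable parts.
    """
    responses = [resp for sit, resp in interaction_log if sit == situation]
    return Counter(responses).most_common(1)[0][0]

print(imitate("complaint"))  # -> "dismissive_reply": the learned "bad" habit
```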
A similar question would be to ask: what is the morality of mathematics? Is 1+1=2 a good equation? I guess it depends on what is behind the numbers.
Machines can be programmed to make an ethical decision that takes into consideration the pre-conditions and causality of the situation: what actually happens when the decision is made and the action is taken. For example, when dealing with a medical decision, the wellbeing of the patient should be one of the first things to consider before taking any action. This approach could be called casuistic, with causality as the main driver: given the conditions and circumstances, the decision can be different.
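As a toy illustration only – not a real clinical system – a casuistic decision rule could look something like the Python sketch below. The situation fields and outcomes are invented for the example; the point is that the same question gets a different answer under different circumstances.

```python
# Minimal sketch of a "casuistic" decision rule: the answer depends on the
# pre-conditions and the likely consequences in this particular situation.

def recommend_treatment(patient_consents: bool,
                        expected_benefit: float,
                        expected_harm: float,
                        is_emergency: bool) -> str:
    # Pre-condition: the patient's wellbeing and consent come first.
    if not patient_consents and not is_emergency:
        return "do_not_treat"
    # Weigh the causal consequences of acting in this specific case.
    if expected_benefit > expected_harm:
        return "treat"
    return "seek_second_opinion"

# Same treatment question, different circumstances, different decision.
print(recommend_treatment(True, expected_benefit=0.8, expected_harm=0.2, is_emergency=False))   # treat
print(recommend_treatment(False, expected_benefit=0.8, expected_harm=0.2, is_emergency=False))  # do_not_treat
```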
The casuistic approach makes sense as opposed to a more dogmatic one, where in any situation there would be a pre-determined and anticipated “right answer”. This way a specific ethical school of thought could be programmed into the AI solution. For example, we could use Asimov’s Three Laws of Robotics, Kantian ethics, or just plain and simple utilitarianism.
Or would it be more fun to deal with a relativist or surrealist machine? Perhaps, if you like a joke from your self-driving car. Where will it take you next? And is it somewhere you’d like to be?
Three Laws of Robotics, by Isaac Asimov
First Law – A robot may not injure a human being or, through inaction, allow a human being to come to harm.
Second Law – A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
Third Law – A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
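As a toy illustration of the dogmatic approach, here is one crude way to hard-code the Three Laws as a priority-ordered filter in Python. The boolean flags are hypothetical stand-ins for what would in reality require very rich models of harm, obedience and risk.

```python
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool = False
    inaction_lets_human_come_to_harm: bool = False
    ordered_by_human: bool = False
    endangers_robot: bool = False

def permitted(action: Action) -> bool:
    # First Law (highest priority): never injure a human or, through inaction,
    # allow a human being to come to harm.
    if action.harms_human or action.inaction_lets_human_come_to_harm:
        return False
    # Second Law: obey human orders once the First Law is satisfied, even when
    # that overrides self-preservation.
    if action.ordered_by_human:
        return True
    # Third Law: otherwise, the robot should not endanger its own existence.
    return not action.endangers_robot

print(permitted(Action(ordered_by_human=True)))                    # True
print(permitted(Action(ordered_by_human=True, harms_human=True)))  # False: First Law veto
print(permitted(Action(endangers_robot=True)))                     # False: Third Law veto
```

The essential design choice is the ordering: each lower law only gets a say once the higher ones are satisfied.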
Implementing AI ethics according to a school of thought means that the machine would slavishly execute the philosophical and logical processes of that specific school. As simple as that. Or is it? Do we always want to follow the same rules? In any situation? What about the context and culture? What about other countries and traditions? What about things that did not exist when that school of thought was created?
In a recent study, Curry, Mullins and Whitehouse (2019) took data on 60 societies from across 600 sources to answer the question “What is morality?” They identified the most plausible universal rules of morality and cooperation:
- Help your family
- Help your group
- Return favors
- Be brave
- Respect others
- Divide resources fairly
- Respect property rights
As simple as that. But how could we make sure AI follows the rules?
On a higher level, we could implement a monitoring system that controls compliance with a “guardian AI” type of artificial intelligence approach. This meta-level set of rules would be monitored, and the autonomous decisions of sub-systems could be overruled (and the sub-systems re-programmed) if those decisions collide with the meta-level monitoring system’s codex, or should we say guidelines.
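A rough Python sketch of such a guardian could look like the one below: a meta-level monitor checks every decision proposed by a sub-system against a codex (here loosely based on the cooperation rules quoted above) and overrules anything that collides with it. All names, fields and rules are hypothetical illustrations.

```python
from typing import Callable

# A proposed decision is just a dict of the action plus self-reported properties.
ProposedDecision = dict

# Codex: each rule is a predicate a proposed decision must satisfy.
# Missing properties default to compliant, purely for the sake of the sketch.
CODEX: dict[str, Callable[[ProposedDecision], bool]] = {
    "return_favors": lambda d: d.get("returns_favors", True),
    "respect_others": lambda d: d.get("respects_others", True),
    "divide_resources_fairly": lambda d: d.get("divides_resources_fairly", True),
    "respect_property": lambda d: d.get("respects_property", True),
}

def guardian_review(decision: ProposedDecision) -> ProposedDecision:
    """Pass the decision through unchanged if it complies, otherwise overrule it."""
    violated = [rule for rule, check in CODEX.items() if not check(decision)]
    if violated:
        # The sub-system's autonomous decision collides with the codex:
        # overrule it and flag it for human review / re-programming.
        return {"action": "escalate_to_human", "overruled": True, "violations": violated}
    return decision

print(guardian_review({"action": "approve_claim"}))
print(guardian_review({"action": "deny_claim", "divides_resources_fairly": False}))
```

The interesting question, of course, is what happens after the overruling: who reviews the escalation, and who re-programs the sub-system.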
In any case, one must remember that any AI is programmed and used by us humans. The non-technical means are important. When people are committed to ethical and moral guidelines, it will be easier to have artificial intelligence, a.k.a. “computer programs”, follow those guidelines. Just consider taking a rule and following it in all situations, no exceptions allowed. Now that does not sound human at all. So, perhaps making AI more human means adding a bit of a random effect? Bugs make AI a bit more human, right?
What if, just what if, we considered our own actions against moral and ethical guidelines, with or without working with artificial intelligence?