Artificial Intelligence: The New Second-Class Citizen

Artificial Intelligence is here to stay.

There is no debate about this. Siri, Google Assistant, and Alexa, to name the best-known virtual assistants, come standard with more and more of our technology. This artificial intelligence can make life easier, more organized, and more efficient. Yet virtual assistants and chatbots share the same weakness as every tool: they are only as good as the humans who use them.

This is not going to be an article about training. Rather, this is a look at human behavior and how we might, or might not, be evolving in response to virtual assistants.

Human interaction

Since ELIZA, the original chatbot of the 1960s, humans have had a complex and complicated history with artificial intelligence. Even in that early experiment, users became dependent on and emotionally attached to the AI. Flash forward to 1997, when Garry Kasparov lost to IBM’s Deep Blue after assuming he was the better of the two opponents.

While humans desire connection with the machines they depend on, there is also an innate tendency to believe we are superior to them.

This sense of superiority has led to a new phenomenon: humans abusing, or becoming verbally violent with, AIs. There is growing evidence that humans do not apply the same filters to machines that they would to other humans. Although neither company will comment, Microsoft and Amazon hold untold numbers of recordings in which their AIs take abuse from humans.

Frustration is a natural response when a machine makes an error or does not perform to expectations. Human-staffed call centers deal with this on a regular basis: a customer does not get what they need, and they begin to escalate. However, most call centers have leads trained in de-escalation tactics. The customer is generally mollified, and the interaction can right its course.

However, this type of behavior towards an AI can have unintended consequences. In a human-staffed call center, the upset customer knows they are speaking with another human, and the right responses, delivered with the right tone and word choice, can defuse volatile emotions. A human interacting with an AI is usually aware they are dealing with a machine, and there is no check to stop the escalation from continuing.

Fears of the future

One concern regarding human behavior towards machines is the tendency to treat them as lesser beings. A human inherently knows a machine is manmade and does not share our biology, so the ready excuse is that they are only talking with a machine. After all, a machine can’t have hurt feelings or respond with violence.

Yet this slippery-slope logic can lead to a breakdown in human-to-human interactions. Since the advent of computers, servers, and the internet, human vocabulary has changed. Humans use tech terminology to describe behavior, such as calling someone a binary thinker rather than a black-and-white thinker, or saying they are at capacity when they don’t have room in their schedule for another task, as if they had a hard drive with limited processing capacity.

These blurred lines lead some to ask: if humans become comfortable seeing other humans as machines, and grow used to AIs with human traits like Alexa, what prevents them from treating other humans with the same disregard?

AI Development

Humans are not inclined to feel comfortable with machines that have the potential to out-learn them. Part of AI development is therefore giving machines human traits, such as a female voice or a human name, purely for the sake of our comfort.

Most assistants also have built-in responses to human anger, such as Siri’s cute quips in response to profanity or Cortana’s short replies. However, at some point human action is needed for complete de-escalation.

Future development must pose the question: should AIs and virtual assistants be required to have built-in mechanisms to de-escalate and defuse anger? Cortana may respond that abuse will not resolve the issue, but there is no additional programming to deter the behavior, such as ending the interaction.
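To make the idea concrete, here is a minimal sketch of what such a deterrent might look like. Everything here is hypothetical: the keyword list, the canned replies, and the three-strike threshold are illustrative assumptions, not how any shipping assistant actually works.

```python
# Hypothetical sketch of a de-escalation mechanism for a virtual assistant.
# The keyword list, replies, and strike threshold are illustrative only.

ABUSIVE_TERMS = {"stupid", "useless", "idiot"}  # stand-in for a real abuse classifier

DEESCALATION_REPLIES = [
    "I'm sorry I couldn't help. Let's try rephrasing the request.",
    "Abusive language won't resolve the issue. How else can I help?",
]

MAX_STRIKES = 3  # after this many abusive turns, the assistant stops responding


class Assistant:
    def __init__(self):
        self.strikes = 0
        self.session_active = True

    def respond(self, user_input: str) -> str:
        if not self.session_active:
            return "This session has ended. Please start a new one."
        if self._is_abusive(user_input):
            self.strikes += 1
            if self.strikes >= MAX_STRIKES:
                # The deterrent: end the interaction rather than absorb abuse.
                self.session_active = False
                return "I'm ending this session. Please try again later."
            return DEESCALATION_REPLIES[min(self.strikes - 1, len(DEESCALATION_REPLIES) - 1)]
        self.strikes = 0  # reset once the user returns to civil input
        return self._handle_request(user_input)

    def _is_abusive(self, text: str) -> bool:
        return any(term in text.lower() for term in ABUSIVE_TERMS)

    def _handle_request(self, text: str) -> str:
        return f"Working on: {text}"


if __name__ == "__main__":
    bot = Assistant()
    for line in ["play music", "you stupid thing", "useless idiot", "stupid stupid"]:
        print(f"> {line}")
        print(bot.respond(line))
```

A real assistant would use a trained classifier rather than a keyword list, but the design question is the same: at what point should the machine stop absorbing the abuse?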

While this seems to be a desirable answer to the problem, programming alone will not stop human disrespect for machines. Nor will it resolve the blurred lines between human and machine, which could lead to further breakdowns in how humans treat each other.

Next steps

Microsoft’s failed Twitter bot Tay gave us a glimpse into a very dark side of humanity. While we cannot control human reactions or responses, we can control the machines we build to support humanity. Tay learned from Twitter interactions with no safeguards to prevent the ultimate result; today’s artificial intelligence at least has some limited deflection for human negativity.

It is unlikely that humans as a collective are capable of learning empathy and respect towards machines; both are lacking in human-to-human interaction as well.

Ultimately, we are left with a quandary: do we make AIs better than humans at dealing with us, or do we make them in our own image for a more human interaction?
