Algorithms, Rules, Values, and Artificial Moral Intelligence

I have no way to know whether what was reported is true, but I read that an algorithm decided which passenger was to be removed from the United Airlines flight that caused such a stir. It makes sense to me that a computer would analyze the factors determining who stays and who gets off the plane. And it shows us the limits of artificial intelligence right now.

I would guess it looked at a person’s status. You aren’t likely to remove someone who has flown 1,000,000 miles with you. I am certain it looked at the price paid for the seat. It might make sense that the person with the cheapest seat has a smaller claim than someone who paid far more for theirs. I imagine that the gentleman removed from the plane was the last to check in, or something close to it.
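A selection routine built on those three guessed factors can be sketched in a few lines. Everything here is hypothetical: the field names, the weights, and the scoring itself are invented for illustration, not drawn from anything United actually runs.

```python
# Hypothetical rules-based "who gets bumped" scoring.
# Lower score = more likely to be removed. All weights are invented.

def bump_score(passenger):
    """Rank a passenger by loyalty, fare paid, and check-in order."""
    score = 0.0
    score += passenger["lifetime_miles"] / 10_000   # loyalty counts heavily
    score += passenger["fare_paid"] / 100           # pricier tickets rank higher
    score -= passenger["checkin_position"]          # late check-in lowers rank
    return score

passengers = [
    {"name": "A", "lifetime_miles": 1_000_000, "fare_paid": 400, "checkin_position": 10},
    {"name": "B", "lifetime_miles": 5_000, "fare_paid": 150, "checkin_position": 180},
]

# The rules pick whoever scores lowest -- with no room for judgment.
selected = min(passengers, key=bump_score)
print(selected["name"])  # → B
```

Note what the sketch cannot do: there is no input for "how will this person feel," which is exactly the gap the rest of this piece is about.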

Right now, this is a weak algorithm on which to base a decision.

I am doubtful that the decision included factors like identifying the person with the best chance of getting to their final destination on another flight and arriving at close to the same time. It might have been easier to sell those people on another flight, especially if you somehow sweetened the pot.

Because the airlines operate from a scarcity mindset, I am certain the decision did not factor in other flights on other airlines leaving and arriving at close to the same time, or direct flights to the final destination on other airlines for passengers who might have opted for them.

The algorithm isn't yet programmed to take complex factors like these into account, and it isn't likely to have considered choices that would have cost the airline more money but protected its brand and its relationship with its customers. Worse still, the handoff to the humans was done the same way the decision was made: rules-based.

The problem with rules-based decisions is that you remove the "human" from "human relationships." Values-based decisions are better. I am certain that somewhere at United Airlines headquarters there is a written statement about their commitment to their customers. I am certain that the decision to use force instead of persuasion is at odds with their values; I can't imagine it being otherwise. I am certain that United Airlines wants empowered employees who exercise their resourcefulness and solve problems like this one without resorting to physical violence, the end result of following the rules instead of their values (which appear to be very legalistic and not very human as written here).

This isn't about AI. But if it is, it's about Artificial Moral Intelligence, something that isn't being widely discussed as we blaze into a future where our reliance on computers is already beginning to exceed our reliance on ourselves. Right now, humans still possess an ability that computers do not, and may never, possess: the ability to be compassionate (or empathetic, if you prefer that word).

In situations like this, the easiest way to decide how to move forward is to imagine how you would want to be treated. Resourceful employees, operating with a values-based approach to this decision instead of a rules-based one, could certainly have found a way to get a single passenger to leave the plane without physical harm.

