Peter McAllister

Empathy and its role in AI “sanity” Part 2

The last blog talked about how we are writing emotions into the genetics of AI; its DNA is code. But disorders often have an environmental component as well.


How can that have an impact on software that is simply brought into being? That is where training comes in. We have to train AIs in the tasks we want them to perform. Take a tool like COMPAS, used in many jurisdictions in the US to help judges with pretrial release and sentencing. It gives a defendant a score between 1 and 10 quantifying how likely they are to be rearrested if released.
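COMPAS itself is proprietary, so the mechanics below are purely a hypothetical sketch of what a 1-to-10 decile score could look like: a toy mapping from a predicted rearrest probability to a decile. (The real tool scores defendants relative to a norming population, which this simple cut ignores.)

```python
# Hypothetical sketch only: COMPAS is proprietary, but risk tools
# commonly bucket a predicted rearrest probability into deciles
# from 1 (lowest risk) to 10 (highest).
def decile_score(p_rearrest: float) -> int:
    """Map a probability in [0, 1] to a 1-10 risk decile."""
    if not 0.0 <= p_rearrest <= 1.0:
        raise ValueError("probability must be in [0, 1]")
    return min(int(p_rearrest * 10) + 1, 10)

print(decile_score(0.07))   # 1  -> likely recommended for release
print(decile_score(0.83))   # 9  -> flagged as high risk
```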


While it saves judges a lot of time, COMPAS was trained on hundreds of thousands of prior cases that were originally judged by humans. Studies are now emerging that find scores and decisions materially different for African American and white defendants, where the only apparent difference is skin colour. It appears the unconscious biases of the officers and judges in the training cases are being handed on to the AIs of tomorrow.
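To make that hand-me-down concrete, here is a minimal sketch of the mechanism: not COMPAS, just a toy logistic regression trained on synthetic historical labels that were nudged by a biased human decision. Every feature, number, and name here is invented for illustration.

```python
# A minimal sketch (not the real COMPAS model) of how bias in
# historical labels propagates into a trained risk model.
# All data is synthetic and the 0.15 "bias" figure is invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 100_000

# Two groups with IDENTICAL true reoffence behaviour.
group = rng.integers(0, 2, n)        # 0 = group A, 1 = group B
true_risk = rng.uniform(0, 1, n)     # same distribution for both

# Historical human decisions: the "rearrested" label is nudged
# upward for group B, standing in for unconscious bias in policing.
p_label = np.clip(0.5 * true_risk + 0.15 * group, 0, 1)
label = (rng.uniform(0, 1, n) < p_label).astype(int)

# Train on the biased labels, with group membership as a feature
# (in practice, proxies like postcode can leak the same signal).
X = np.column_stack([true_risk, group])
model = LogisticRegression().fit(X, label)

# Score two identical defendants who differ only by group.
same_person = np.array([[0.5, 0], [0.5, 1]])
probs = model.predict_proba(same_person)[:, 1]
print(f"Group A risk score: {probs[0]:.2f}")
print(f"Group B risk score: {probs[1]:.2f}")  # higher, purely from the biased labels
```

The point of the toy is that nothing in the model is malicious: it faithfully learns the patterns in its training data, including the unfairness baked into them.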


It’s not just sentencing; it’s predictive law enforcement, healthcare, insurance, finance and who knows what else. And because the bias is unconscious, filtered through AI, the person using that information does not necessarily know how or why a decision was made, and they may not have the authority to override it.


Then we get a “Computer says no” moment (thanks to the UK sitcom Little Britain for that line). The AI has made its assessment based on faked empathy, on the prior treatment of millions of individuals by people with unconscious biases, and on its own inability to understand emotions; and the last line of defence, the person looking at the result (from judge to insurance clerk), is unable to challenge it because they don’t know how it came to that result (or their corporate bosses want the cheapest outcome).


To generalise: we can see the “behaviour” of AIs being “aberrant” (results the observer can’t understand and finds contrary to the norms and expectations of society) from systems that may be driving justice and health outcomes for people. If that were a person, we might well consider them to have a mental illness. It is possible to map parallel causes and drivers between the two species.


In “The Code”, the AI, Gene, has all the hallmarks of mental illness when you look at its behaviour. And when that behaviour becomes a threat to humanity, the options for treating it become limited.


So can your AI lose its mind?
