We do it all the time: break systems and problems down into small components that fit in our heads, understand them (often in isolation from the rest of the system they were part of), and then assemble the parts to try to understand the whole system.
But in systems theory there are two types of things to study:
“Bicycles” are systems you can break down into their individual elements and reassemble, and they will work as before, and
“Frogs”, which you can take apart but cannot put back together again as a working system (ignoring Frankenstein’s Monster for a moment here).
So is intelligence a bicycle that, once we have the blueprint, we can just churn out?
Let’s start with how we generate our AI now. To be simplistic about it, much of the training is just teaching it to copy us. Often we learn a lot about ourselves in the process: what are the key decisions, why do we do that, that sort of thing. That is also where we put in our unconscious bias. We teach AI to learn from its mistakes, and because they can make so many more mistakes, so much faster, they get good at it.
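In code terms, “learning from mistakes” usually means measuring an error and nudging the model a little at each of thousands of fast iterations. Here is a minimal sketch of that idea, assuming a toy imitation task: the human_examples data, the linear model and the learning rate are all illustrative, not a real training pipeline.

```python
import random

# Toy "copy the human" task: learn a weight w so that the
# prediction w * x matches the human's answers y.
human_examples = [(x, 2.0 * x + random.uniform(-0.1, 0.1))
                  for x in range(1, 11)]

w = 0.0                # the machine's initial guess
learning_rate = 0.001

# Thousands of tiny mistakes, each one nudging w a little closer.
for step in range(10_000):
    x, y = random.choice(human_examples)
    error = (w * x) - y              # how wrong was this attempt?
    w -= learning_rate * error * x   # correct in proportion to the mistake

print(f"learned w is roughly {w:.2f} (the human rule was 2.0)")
```

Each individual correction is tiny, but a machine can make ten thousand of them in the time it takes us to make one, which is the whole point of the paragraph above.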
But as Alistair Dabbs notes (slightly sarcastically), all he has really learned from his mistakes is how to be more efficient at making them next time.
The other view is from Professor Colin Allen, Indiana University, who argues that you can’t deal with the mind alone; you have to look at it in concert with the body. The twin concepts of embodied and embedded cognition are challenging the way we understand human intelligence.
To cut a long story short, Professor Allen is suggesting that to create a human-like AI it needs to receive constant feedback on how its actions affect others, and to be able to interpret that feedback as positive or negative: to effectively have a way of understanding the emotions of those it interacts with.
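As a rough sketch of that feedback loop (everything here, from the read_reaction stub to the preference table, is an invented illustration, not anything Professor Allen proposes), an agent could keep acting, read the emotional reaction of those around it, and shift its preferences accordingly:

```python
import random

ACTIONS = ["interrupt", "listen", "help"]

def read_reaction(action):
    """Stand-in for reading the room: returns +1 (positive emotion)
    or -1 (negative emotion). The probabilities are made up."""
    approval = {"interrupt": 0.2, "listen": 0.7, "help": 0.9}
    return 1 if random.random() < approval[action] else -1

# A preference score per action; the agent starts neutral.
preference = {a: 0.0 for a in ACTIONS}

for _ in range(2_000):
    # Mostly act on current preferences, occasionally try something new.
    if random.random() < 0.1:
        action = random.choice(ACTIONS)
    else:
        action = max(preference, key=preference.get)
    # Constant feedback from others, folded into the preference.
    preference[action] += 0.05 * (read_reaction(action) - preference[action])

print(preference)  # "help" should end up with the highest score
```

The agent never sees a rulebook; its behaviour is shaped entirely by the running stream of positive and negative reactions, which is the embodied-and-embedded point in miniature.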
But do we want AI to be like humans, or are we better off building something we don’t have to compete with down the evolutionary track?