AI - part 5 - Reasoning
In the final chapter of his five-part blog series on artificial intelligence, Richard Develyn, CloudTrade CTO, compares and contrasts machine reasoning and human intuition. He concludes his examination of AI by explaining how CloudTrade’s new product, the CloudTrade Intelligent Rules Assistant (CIRA), automates document interpretation with the help of human reasoning.
When we talk about human intelligence, what we tend to mean is what we might call “conscious reasoning”. I think that we’re all aware that there is other thinking going on inside our brains which falls outside of that definition, but we get a bit woolly about whether that might be subconscious reasoning, intuition, or even instinct.
In fact, as human beings we tend to be a bit “sniffy” about the work our brain is doing in the background. No one has ever suggested, for example, that we should try to measure IQ by looking at subconscious reasoning or intuition. This is perverse, because our work with computing and artificial intelligence suggests that our conscious reasoning ability is relatively poor and easily outmatched by computers, whereas our intuitive ability is stellar, far outstripping anything that even the most advanced neural networks can hope to achieve today.
The miracle of human reasoning is the fact that we can do it at all, rather than its raw power. It seems to me to have come about from a need to communicate thought to each other, rather than as a requisite step in order to “figure things out”. It might seem to us that the only way that we can act with logic is to phrase our reasoning into internal dialogues, but there are plenty of examples that we can think of where we don’t appear to have to bother.
Think about driving a car. A lot of logical thinking has to happen in order for us to get from A to B without running people over, but most of it seems to happen on “auto-pilot”. We don’t constantly create little narratives in our heads along the lines of “I need to slow down now because I can see an elderly person up ahead, but there’s also a car on my right and I think that traffic light is about to go red”. Obviously, we do think this, somehow, and you couldn’t call it instinct, as there’s clearly logic at play. Perhaps it’s what we might term subconscious reasoning.
The dictionary definition of intuition gathers together instinct and subconscious reasoning. Perhaps the best way to describe it, however, is as that mysterious, super-powered, mental problem-solving ability that we all have, which we only recognise exists when it takes the time to explain itself by manifesting “reasoned” explanations inside our heads.
In his seminal book, “Gödel, Escher, Bach: an Eternal Golden Braid”, Douglas R Hofstadter describes how cognition, which is what I’m calling conscious reasoning here, might have emerged from a neurological process by developing the ability to refer to itself objectively. It’s a fascinating book which requires some serious commitment from its readers to fully appreciate and enjoy. I’m afraid to say that after three attempts I still haven’t made it past page 200 or so, but it has a permanent place on my bedside table.
I don’t recall whether DRH postulates that the requirement for conscious reasoning might have come about through a need to communicate, or whether that is my own personal theory. However, if reasoning can be seen not as something separate from other intelligent processes, but rather as an interface into them, then it becomes easier to understand why an intelligent process without a “reasoning” interface, such as an artificial neural network, is going to hit some serious limitations when it comes to working collaboratively with us. In particular, an artificial neural network lacks the ability either to explain itself to us or to take instructions from us. At present, such networks can’t even do this with each other.
There’s no need, however, to throw the AI baby out with the bathwater. An artificial neural network might not be able to articulate how it reaches its conclusions, but if it points us in the right direction then our human brains can take over, investigate and provide all the necessary reasoning-based explanations that we need.
Once that intelligence has been translated into reasoning, perhaps “reduced” in the process but nevertheless turned into something that we can all work with, we can automate it using traditional computer processes such as those we have come to love and live with over the last few decades. Whether you consider the automation of reasoning “intelligent” is a moot point: it does the job at ultra-fast speeds, and it is generally considered the second, and perhaps more traditional, branch of artificial intelligence.
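To make the idea concrete, here is a minimal sketch of what “automated reasoning” looks like once it has been reduced to traditional, deterministic rules. All names here are hypothetical illustrations, not CloudTrade’s actual system: each rule is just a condition paired with an action, applied at machine speed with no learning involved.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    """A single piece of 'reasoning': a named condition plus an action."""
    name: str
    condition: Callable[[dict], bool]
    action: Callable[[dict], dict]

def apply_rules(facts: dict, rules: list) -> dict:
    """Apply every rule whose condition matches, in order, returning new facts."""
    for rule in rules:
        if rule.condition(facts):
            facts = rule.action(facts)
    return facts

# Two illustrative rules over a hypothetical invoice record.
rules = [
    Rule("flag-overdue",
         condition=lambda f: f.get("days_late", 0) > 30,
         action=lambda f: {**f, "status": "overdue"}),
    Rule("apply-discount",
         condition=lambda f: f.get("total", 0) > 1000,
         action=lambda f: {**f, "discount": 0.05}),
]

result = apply_rules({"days_late": 45, "total": 1200}, rules)
# Both conditions match, so both actions fire.
```

The point of such a system is not intelligence but predictability: given the same facts, it always produces the same conclusions, and it can explain exactly which rules fired.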
If we don’t care how an AI neural network reaches its conclusions, and if we have no desire to give it any form of instruction, merely to let it learn and do, then all we have to concentrate on is making it as efficient as possible, either internally or, as in the case of facial recognition, by using our own intelligence to simplify the problem that it’s trying to solve.
Most of the time, however, we will not want to delegate the processes which govern our lives to a black box buzzing in the corner giving out inexplicable and uncontrollable decisions. This is where collaborative AI is going, though the term “collaborative” is used rather loosely. AI points the way, then we take over.
This is also the direction that we’re taking at CloudTrade with CIRA, where our aim is to improve the way we automate the interpretation of human-readable documents. The CIRA black box will make its suggestions; we will then verify them, or choose between the options it presents, using our own human reasoning; and the results of those decisions will be automated using our current, established, predictable, rules-based system.
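The suggest-then-verify workflow described above can be sketched as follows. This is a hedged illustration only: every function and field name is hypothetical and does not reflect CloudTrade’s actual API. An AI component proposes candidate interpretations of a document, a human accepts or rejects each one, and only the accepted suggestions go on to feed the deterministic rules-based system.

```python
def suggest_interpretations(document_text: str) -> list:
    """Stand-in for the AI 'black box': propose field interpretations
    with a confidence score. A real system would use a trained model."""
    suggestions = []
    for line in document_text.splitlines():
        if "Invoice No" in line:
            suggestions.append({"field": "invoice_number",
                                "value": line.split(":")[-1].strip(),
                                "confidence": 0.9})
        elif "Total" in line:
            suggestions.append({"field": "total",
                                "value": line.split(":")[-1].strip(),
                                "confidence": 0.7})
    return suggestions

def human_review(suggestions: list, accept_threshold: float = 0.8) -> list:
    """Stand-in for human verification. Here we auto-accept only
    high-confidence suggestions; in practice a person makes the call,
    and the accepted results become predictable, reusable rules."""
    return [s for s in suggestions if s["confidence"] >= accept_threshold]

doc = "Invoice No: INV-1042\nTotal: 250.00"
accepted = human_review(suggest_interpretations(doc))
# Only the high-confidence invoice-number suggestion survives review.
```

The division of labour is the key design choice: the opaque component is confined to making suggestions, while everything that actually runs in production remains explicable and rules-based.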