The Neural Network Oracle

Can we really pass responsibility for important parts of our lives onto Artificial Intelligence neural networks? Can we really treat these things like human advisors, or should we rather consider them like the oracles of old, giving their strange pronouncements and predictions which, however true, cannot be queried or explained?

In this blog post, Richard Develyn, CTO, highlights some of the key differences between human and neural network reasoning and suggests that whilst neural networks may well have their place in society, we shouldn’t get too carried away with them and start imagining that they’re going to become more important than “fire”.

I remember a night many years ago when a group of friends of mine decided to play a trick on me.

“You had a very vivid dream last night,” they said to me. “And though you might not remember it yourself, it was so strong that it beamed into all of our heads. Ask us some yes/no questions and we’ll slowly describe to you what it was.”

Being up for anything, I proceeded, noting how they all answered “yes” or “no” simultaneously without any apparent consultation between them. I didn’t believe the whole business about the telepathic dream, of course; I merely assumed that they had agreed on some sort of scenario between them before the game began, and I was keen to discover what it was.

They hadn’t agreed on any such thing. The game works by everyone answering “yes” if the last word of the question you ask contains the letter “e”, and “no” otherwise. The scenario that unfolds is driven entirely by what you choose to ask, with potentially embarrassing/revealing consequences as a result!
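In code, the whole trick is almost embarrassingly small. Here’s a minimal sketch (the function name and the punctuation handling are my own assumptions; the rule itself is the one my friends used):

```python
import string

def dream_oracle(question: str) -> str:
    """Answer as the 'dream' game does: "yes" if the last word of the
    question contains the letter 'e', "no" otherwise."""
    last = question.split()[-1].strip(string.punctuation).lower()
    return "yes" if "e" in last else "no"

# A few sample questions and the answers the "dreamers" would give.
for q in ["Was I flying?", "Was anyone chasing me?", "Did it end well?"]:
    print(q, "->", dream_oracle(q))
```

Ask enough questions and a coherent, if accidental, dream emerges from nothing more than that.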

Had I had half an ounce of common sense I might have started hunting for the trick. It’s not a difficult trick, after all, and I think any human being, given a reasonable number of questions and answers, could figure it out, provided they had alerted themselves to the fact that a trick was being played in the first place.

A neural network presented with a series of questions and their corresponding yes/no answers would, in due time, start to predict the right answer before it was given, with increasing (though never 100%) accuracy. What it would never do, however, is deduce the simple rule driving the answers, i.e. answer “yes” if the last word contains an “e”, in the same way that a person could. The reasons for this are two-fold.
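To make that contrast concrete, here’s a toy experiment of my own construction (the questions, features, and training set are all invented for illustration): a bare-bones logistic regression, about the simplest possible “neural network”, trained on question/answer pairs from the game. It learns to answer accurately, but what it learns is a vector of numbers, not a statement of the rule:

```python
import string
import numpy as np

LETTERS = string.ascii_lowercase

def last_word(question: str) -> str:
    return question.split()[-1].strip(string.punctuation).lower()

def rule(question: str) -> float:
    # The friends' hidden rule: "yes" iff the last word contains an 'e'.
    return 1.0 if "e" in last_word(question) else 0.0

def features(question: str) -> np.ndarray:
    # One feature per letter: is it present in the question's last word?
    word = last_word(question)
    return np.array([1.0 if c in word else 0.0 for c in LETTERS])

# An invented training set of question/answer pairs.
questions = [
    "Was I flying", "Was anyone chasing me", "Was it dark",
    "Did it happen at home", "Was I scared", "Did I fall",
    "Was there water", "Did I wake up", "Was it a nightmare",
    "Was my family there", "Did I win", "Was it raining",
]
X = np.stack([features(q) for q in questions])
y = np.array([rule(q) for q in questions])

# A single sigmoid unit trained by plain gradient descent.
w, b = np.zeros(len(LETTERS)), 0.0
for _ in range(5000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted P(answer == "yes")
    w -= 0.5 * X.T @ (p - y) / len(y)
    b -= 0.5 * (p - y).mean()

p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
print("training accuracy:", ((p > 0.5) == (y > 0.5)).mean())
print("weight on 'e' feature:", round(float(w[LETTERS.index('e')]), 2))
```

The model ends up accurate, and the rule survives only as a large number attached to one input, legible here solely because I hand-picked one-letter features. A real network working from raw text would hold the same knowledge smeared across millions of weights, and in neither case does the model ever produce the sentence “answer yes if the last word contains an ‘e’”.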

First, a neural network lacks the capacity to translate the millions of numbers that it holds within its knowledge base into a logical statement. Neural networks don’t “understand” logic, mathematics, or anything like it. I don’t suppose we altogether know how we understand these things ourselves, but somehow or other we can operate and communicate at these levels of reasoning, rather than by exchanging photographs of the chemical composition of our brains at the point where they’re expressing an idea. The way we can move from organic neural mush to a logical statement that we know to be true is an amazing part of what we are, and not something that a neural network can replicate.

The second reason is our ability to think outside the box. It’s something we do all the time; in fact, we have more trouble focusing on a problem than we do thinking around it. Presented with a situation, our minds tend to dance about all over it, which is natural given that we don’t live in some mathematically clean reality but rather in a messy one where everything seems to be connected to everything else. Part of the reason we might deduce the rules of the “dream” game is that we know the people running it won’t be doing anything too complicated, so we look for an easy answer.

These advantages become disadvantages when the problem to be solved requires both focus and an ability to go beyond what can be described by simple logic; in other words, when the answer is so convoluted that you must simplify what you’re looking at as much as possible and then not constrain yourself to finding it by discovering its underlying logic. These cases, where you accept that you cannot solve the problem to any sort of mathematical or logical satisfaction, are where neural networks excel.

Much like the oracles of old, neural networks can provide guidance on these sorts of intractable conundrums, although, again like those oracles, they can only do so using strange pronouncements which we have no way of verifying or understanding. When conundrums are intractable, however, we have no other option.

I think there’s a place in the world for neural network oracles, but there are quite a lot of places where they don’t belong. Their opacity makes them awkward: you cannot navigate back from their conclusions to their suppositions, and you don’t know what facts they’re working from, whether those facts are the right ones, or whether they’ve been tampered with. If you don’t care about such things, and if they seem to be doing a good job at whatever it is you’ve asked them to do, then you might as well trust their conclusions, at least as far as you’re prepared to.

Certain decisions, however, cannot be made on trust, and for those you need something that can explain itself logically, having had the means to reach a logical conclusion in the first place. Here, neural networks won’t help, though some other forms of AI might, such as the logical deductive engines which formed the basis of AI in the past. Neither kind of system, however, will ever replace humans, in my very humble opinion.

I doubt that neural networks will come to be seen as more important than fire (!), but they’ll certainly have a role to play in situations where we’re either not bothered about what the underlying logic might be, or where we’ve accepted that such logic might be beyond us. If a neural network is helping to predict the early onset of Alzheimer’s disease, why should we care how it’s doing it? Hedge fund managers use neural networks to play the stock market, which makes perfect sense: it’s a spread bet, and there is no discernible logic behind stocks and shares. With other decisions, however, you need to understand the reasoning behind the advice: major purchases, career moves, medical treatment, and so on. In these cases you can’t just listen to an oracle, even a neural network one.