Is it AI?

Reading Time: 7 minutes

Is there a difference between systems based on AI and solutions using neural networks? And, if so, which of these approaches lies at the root of the CloudTrade data-extraction service? In this blog post, the second in a trilogy explaining the CloudTrade data-capture solution, Richard Develyn, CloudTrade CTO, answers these questions, colourfully depicting the role of neural networks from the time of the dinosaurs onwards.

In the middle of the Second World War, an American behaviourist and Harvard professor of psychology called B.F. Skinner “trained” pigeons to direct specially modified missiles towards their targets by pecking on the little windows at the front of the missile that the poor creatures were allowed to look out of. It was called Project Pigeon, and though it never got past the beta-test stage it showed considerable promise and even received $25,000 of US government funding.

I say that the pigeons were “trained”, in quotes, because they never actually sat down in a classroom with exercise books to have the whole business of how to recognise a bomb’s target patiently explained to them. The pigeons were trained in the same way that neural networks are trained, i.e.: “peck on the window where the target is in front of you and you get some nice pigeon treats; if you don’t, or you get it wrong, then no pigeon treats for you”. Eventually, the pigeons figured it out, somehow, in much the same way that neural networks figure out the answers to their problems, somehow.

Determining how to guide a missile to its target might not seem too hard a problem – for a pigeon. One might consider it comparable to the problem of trying to find an invoice date in an invoice that’s never been seen before – for a neural network.

In June 2018, the largest neural network, built on supercomputers, had about 16 million neurons. That’s about the size of the brain of a frog. I’m sure there are bigger ones around now (neural networks, not frogs) but back then it was the biggest one in the whole wide world, and anything that we might be able to get hold of ourselves today is likely to be at least an order of magnitude smaller than that. Of course, you cannot draw a straight comparison between organic and synthetic neural networks, but for the sake of argument let’s assume that any sensible neural network we might be able to play with today is going to be of about the size of the brain of a frog.

The brain of a pigeon is actually about twenty times the size of the brain of a frog.

Back in the 1940s, of course, B.F. Skinner had a decided advantage when training pigeons. Pigeon brains are not totally untrained to begin with.

Charles Darwin had a particular fascination with pigeons when he was researching “On the Origin of Species”. He bred them himself and used his breeding experiments as an analogy to describe what nature was getting up to in the wild. All of today’s pigeons descend from Rock Pigeons, which can be found all over the world and which humans have been training since the dawn of civilisation. The Ancient Greeks used pigeons to send the results of the Olympic Games from town to town as early as the eighth century BC. Genghis Khan created the first pigeon-based internet across his whole empire all the way back in the early 13th century.

If you really want to know how long a pigeon brain has been “learning”, however, you probably need to trace pigeons back to their roots, which turn out to lie with the now-infamous Velociraptor (a Cretaceous animal, whatever Steven Spielberg’s movies may suggest) and its fellow theropod dinosaurs. Assuming that our humble pigeon’s ancestors had a change of heart somewhere along the way and re-set their neural networks to focus more on pigeony things than on the ripping-up of other animals, the pigeon brain has had something like 150 million years to perfect itself. That is quite a long time to train a neural network, and it goes some way to explaining why pigeons are something like three times faster than human beings at processing visual information (even though human brains are around three hundred times bigger – you’ll be pleased to hear).

If all of this seems to fly in the face of the much-lauded accomplishments of neural networks today, then take a bit of time to scratch below the surface of the headlines. Facial recognition works well when people don’t look too much like each other, don’t try to disguise themselves with paper masks, and don’t do annoying things like wearing t-shirts with other people’s faces on them or any of that interesting make-up you can find on the internet designed to throw the algorithms off. Language translation works well when there is no need to understand what the sentences being translated actually mean; the step change in difficulty between text-substitution-style translation and what one might call “cognitive” translation is immense.

The rise in the profile of neural networks over the last few years has come about not so much because of a rise in computing power but because of a rise in the amount of training material that our internet-based society has enabled. This rise in profile has fuelled a rise in interest, money, research and, I’m afraid, wishful thinking about what neural networks might be able to do for us in the future. We all have this very sci-fi vision of computers that no longer need to be programmed but can magically program themselves if they are shown enough correct answers. Neural networks, however, are not self-programming computers; they are vast, convoluted, and actually pretty incomprehensible pattern-matching engines. Their beauty is that they can be “taught” in the same way that we train animals, i.e. not by instructions, but by showing them lots of examples of what is right and what is wrong. Their disadvantages are that (a) they need a lot of training and (b), again just like animals, they sometimes just go wrong anyway without bothering to provide us with an explanation.
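
To make that contrast concrete, here is a toy sketch in plain Python (nothing to do with our own software): an explicit rule that a programmer writes down, next to a single artificial neuron that is only ever shown examples and corrected when it gets them wrong, pigeon-style.

```python
def programmed_rule(x1, x2):
    # Explicit instruction: fire only when both inputs are present (logical AND).
    return int(x1 and x2)

def train_neuron(examples, epochs=20, lr=1.0):
    # Perceptron-style training: nudge the weights whenever the neuron gets it wrong.
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for x1, x2, target in examples:
            output = int(w1 * x1 + w2 * x2 + b > 0)
            error = target - output
            w1 += lr * error * x1
            w2 += lr * error * x2
            b += lr * error
    return w1, w2, b  # three numbers, not an explanation

examples = [(0, 0, 0), (0, 1, 0), (1, 0, 0), (1, 1, 1)]
w1, w2, b = train_neuron(examples)
print([programmed_rule(x1, x2) for x1, x2, _ in examples])         # [0, 0, 0, 1]
print([int(w1 * x1 + w2 * x2 + b > 0) for x1, x2, _ in examples])  # [0, 0, 0, 1]
```

The trained neuron ends up with a handful of weights that happen to reproduce the rule, but nothing in those numbers explains itself, which is precisely the problem we come to next.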

Ultimately, that is the reason why B.F. Skinner’s experiments with trained pigeons in missiles would never have got off the ground (if you’ll excuse the pun). No one was going to be particularly happy about having weapons of mass destruction guided to their targets by a bunch of pigeons pecking away at a screen.

This problem of “explainability”, as it is known in the AI world, is called the AI Black Box problem, though it should really be termed the Neural Network Black Box problem, as AI and neural networks are not the same thing. “Explainability” is only half the story; the real requirement is “instructability”. Unfortunately, we can no more instruct neural networks to behave in particular ways than we can instruct our pet dog not to pee on the sofa. We can train them and hope they eventually get the message, but we can’t program them in the same way that we can program computers.

That’s not to say that neural networks don’t have a role in solving the problems faced by IT today, including, to some degree, those of identifying data in documents as we do at CloudTrade. If your requirements are simple, your training data set is large, you don’t care if it gets things wrong and you’re not going to ask it to explain itself, then a neural network might well be the thing for you. In terms of where we see the future of our core technology, however, which is all about increasing sophistication rather than decreasing it, neural networks are not so wondrously exciting, though they will still have a role to play as advisors in our latest development, Project Grandalf. Neural networks are great for suggesting to humans where to look for things or where to get ideas. As advisors, you don’t mind if they get it wrong, since all help is useful, but there’s no way you’re going to hand over the reins to them.
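
By way of illustration only (a hypothetical sketch, not how Project Grandalf is actually built), the advisor arrangement looks something like this: a statistical model is free to suggest candidate values, but a deterministic rule that we can read, explain and correct decides whether any suggestion is accepted.

```python
from datetime import datetime

def model_suggestions(page_text):
    # Stand-in for a trained model's output: candidate snippets, some of them wrong.
    return ["31/02/2021", "14/06/2021", "Invoice No. 1234"]

def accept_invoice_date(candidate):
    # The rule we can explain and instruct: accept only a real dd/mm/yyyy date.
    try:
        datetime.strptime(candidate, "%d/%m/%Y")
        return True
    except ValueError:
        return False

suggestions = model_suggestions("...invoice text...")
accepted = [s for s in suggestions if accept_invoice_date(s)]
print(accepted)  # ['14/06/2021'] -- the bad suggestions cost us nothing
```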

AI has never, however, been just about neural networks. If you were to ask me the question “Is what you’re doing here at CloudTrade AI?” then I would say, “Yes, but not the neural-network flavour of AI”.

John McCarthy, an American computer scientist who was one of the founders of the discipline of artificial intelligence, felt that machines did not need to simulate human thought at all, but should rather try to implement the essence of abstract reasoning and problem-solving, regardless of whether people used the same algorithms. Pamela McCorduck, in the 2004 edition of her book “Machines Who Think”, summarised it as follows, writing that there are “two major branches of artificial intelligence: one aimed at producing intelligent behaviour regardless of how it was accomplished, and the other aimed at modelling intelligent processes found in nature, particularly human ones”.

If you think about the traditional human problem that AI was set as a challenge – winning a game of chess against a grandmaster – the answer in the end didn’t come from building the neural-network equivalent of a human brain (which is well outside our capabilities anyway) but from programming a normal computer, using normal software, to implement very understandable algorithms discovered by people who studied how the game was played. That’s how most AI actually happens.
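
For a flavour of what “very understandable algorithms” means here, the heart of a classic chess program is minimax search over possible moves, scored by a hand-written evaluation function. The sketch below runs minimax over a toy game tree rather than real chess positions, but it is the same idea, and every step of it can be explained.

```python
def minimax(node, maximising):
    # Leaves carry a score from a human-designed evaluation function;
    # inner nodes are lists of the positions reachable in one move.
    if isinstance(node, int):
        return node
    scores = [minimax(child, not maximising) for child in node]
    return max(scores) if maximising else min(scores)

# A toy game tree: each inner list is a choice point, the numbers are evaluations.
game_tree = [[3, 12], [2, 4], [14, 1]]
print(minimax(game_tree, maximising=True))  # 3: the best outcome against a sound opponent
```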

CloudTrade’s technology works in exactly the same way. We use a variant of natural language processing to program our core engine, “Gramatica”, to understand documents in the way that we understand them as humans, i.e. in the cognitive sense, rather than trying to simulate the way that the neurons in our brains fire across to each other. Much like the chess-playing programs we have today, Gramatica is the obvious, practical way to solve this problem. It does carry with it the irksome task of having to program the thing to deal with each different type of document individually, but the dream of a neural-network-style engine which programs itself to magically produce an answer is a long way from reality. Dreams are seductive, of course, but businesses are built on what is sensibly achievable today.
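
To give a feel for what “programming the thing” looks like, here is a deliberately simplified, hypothetical rule in plain Python (it is not Gramatica’s actual rule language): it captures how a human reads an invoice, namely that the invoice date is the date sitting on the same line as the words “Invoice date”.

```python
import re

DATE_PATTERN = r"\b(\d{1,2}[/-]\d{1,2}[/-]\d{2,4})\b"

def extract_invoice_date(lines):
    # The rule mirrors how a person reads the page: find the line that talks
    # about the invoice date, then pick out the date written on that line.
    for line in lines:
        if "invoice date" in line.lower():
            match = re.search(DATE_PATTERN, line)
            if match:
                return match.group(1)
    return None  # fail explicitly, so a person can be asked rather than guessing

document = [
    "ACME Ltd                    Invoice No: 1234",
    "Invoice Date: 14/06/2021",
    "Due Date: 14/07/2021",
]
print(extract_invoice_date(document))  # 14/06/2021
```

A rule like this can be read, tested, explained to a customer and corrected the moment a new document layout comes along, which is exactly the property a black-box model cannot offer.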

CloudTrade uses natural language processing to solve the problem of understanding and extracting meaningful data from documents. Natural language processing is a well-recognised branch of AI. CloudTrade doesn’t, at the moment, use neural networks in its solution, but we intend to introduce them in the next generation of our software, “Project Grandalf”. AI is indeed at the core of what CloudTrade does, and it is interesting and exciting for us. We like to think that we have helped to push the technology forwards with our initial patent, which has enabled us to build a successful business with hundreds of happy customers. Who knows, perhaps with Grandalf, and more neural-network advances, other patents will follow.

If you’d like to reach out to me to find out more information or to discuss anything mentioned in this blog, please connect with me on LinkedIn.

HI, the new AI – What the Terminator got right.

Reading Time: 2 minutes

Ever since Schwarzenegger told a desk sergeant that he’d be back, and then was back five minutes later crashing through the wall of the police station, the world has been in love with the idea of AI.

And why not? The prospect of machines taking away the mundane tasks of the day-to-day, freeing civilisation to live a decadent and carefree life, is a dream to aspire to – right? And if some of those machines turn out to be human-hating cyborgs then surely that’s a price worth paying…

Generations have been working away to create that first version of Skynet (the fictional superintelligence system). But rather than looking at teaching the system how to co-ordinate a nuclear strike (hopefully we have learnt something from the film War Games), companies have instead focussed on the more mundane but ultimately monetisable day-to-day tasks that occupy the humble office worker.

While not something that generally lends itself to a big budget movie, it’s obviously a subject that people are hoping will create a big budget company.

One area on which companies have focussed is the world of data extraction from documents. Around the world, millions of documents are having their information extracted and placed into a target system. Sometimes new technology is used to perform the task, but often (more often than you would think) data extraction is carried out by people just keying in the data.

A prime target for termination, you might say? But this is where we begin to see more parallels with a movie script than you would expect, as marketing teams in these companies polish the reality of AI into something Oscar-worthy.

A quick Google search for AI data extraction will fill your screen with companies using buzzwords like they are going out of fashion: Neural Networks, Deep Learning, Powered by AI, Machine Learning, Big Data, Document Understanding Platform, Set and Forget, Pre-trained AI models… The list goes on and on.

But the reality is that AI hasn’t quite lived up to the dream that Hollywood has sold us. Despite the claims out there, AI is still in its early stages, and nobody has created a truly work-killing system. Either AI is only involved in a small part of the end-to-end process, or it needs a large amount of human interaction to train it and review its output, simply moving the human costs of processing from one area of a business to another.

So while we wait for AI to catch up with its own hype, and that might be 2030 or even 2060, we need to look for solutions that harness current technology to solve today’s problems. And perhaps we need to harness HI (Human Intelligence) to do this while the machines catch up with us.

Computers can already do amazing things when given the proper guidance. That is how we approach problems here at CloudTrade: we use our patented data extraction and interpretation software, coupled with our in-house expertise, to quickly and efficiently teach our systems without the trial and error that AI needs.

So maybe Terminator did get something right. You need something part-man and part-machine to deal with difficult problems…