Is Robotic Process Automation (RPA) science fiction or science fact? Does it have a part to play in the CloudTrade story? In this blog post, the third in his trilogy explaining the CloudTrade data-capture solution, Richard Develyn, CloudTrade CTO and sci-fi film enthusiast, lays bare the limitations of RPA, exposing the very real challenges associated with coverage, complexity and scope.
In Fritz Lang’s 1927 film, Metropolis, a worker stands in front of a man-sized clock moving the clock’s hands to point to different light bulbs arranged outside the edge of the clock’s rim. Every few seconds, two of the light bulbs light up, and it is clearly the worker’s job to move the hands to point to those particular light bulbs rather than to any of the others.
This is what I call a Robotic Process.
As the film progresses, however, it becomes clear that the worker’s task involves more than just looking out for lit light bulbs and moving the clock’s hands accordingly. Arranged on the same panel as the clock are several gauges and dials which are somehow affected by what the man is doing, and which appear to have safety thresholds triggered if something starts to go wrong. Judging by the man’s reaction as he becomes tired and the needles start to climb alarmingly towards their red zones, it is also his job to keep these dials under control, and there appear to be some very large levers to the side which we assume he has to use “in case of emergency”.
Fritz Lang makes an important point about automation in these scenes, a point which has pervaded our view of science-fiction dystopian futures ever since. Making humans perform robotic processes may be dehumanising but replacing humans with robots runs the risk of a catastrophe occurring if those supposedly robotic processes start to behave in unexpected ways.
This problem is not helped by RPA, because RPA “bots” are a particularly dumb sort of robot. RPA bots only achieve their advantages over other forms of IT, such as conventional programming, because they learn what to do by watching their human counterparts and then replicating that supposedly dumb human behaviour.
Stick an RPA bot in front of the Metropolis clock-worker, therefore, and that bot is going to encounter three pretty fundamental problems.
The first is that, even in normal operation, the number of possible states of the clock is surprisingly large. There are 46 lights around the clock (43 labelled 1-43, plus three others labelled I, II and III), and since each of the two hands can point at any of them, there are 46 × 46 = 2,116 possible combinations, some of which might occur very rarely. An RPA bot cannot be programmed (strictly speaking, but see later) with the instruction “move the clock’s hands to the lit light bulbs”; all it can do is watch the human clock-worker and copy, dumbly, what the human worker does. The bot’s learned knowledge base, therefore, will consist of a series of instructions along the lines of: “when bulbs 12 and 37 are lit, and no others, move the hands of the clock to positions 12 and 37”. If a particular combination does not occur while the bot is being trained, then the bot is not going to know what to do should that combination subsequently happen in live operation.
I suppose you could classify this problem as insufficiency in training, but the issue is deeper than that. Watching an operator work on a machine for some fixed period of time does not constitute a very sensible learning plan. What if half the bulbs only come on in winter and you happened to have trained your bot in the summer? This problem should really be termed one of Coverage, i.e. knowing what the span of possibilities is, against which you can measure sufficiency. If you have no handle on Coverage, then you have no idea how well you’ve managed to train your bot.
The second problem happens when there’s some sort of underlying complexity to the robotic process which the bot cannot perceive, such as, perhaps, if the order in which the operator moves the hands depends on one of the gauges to the side which the operator is keeping an eye on. The bot can see what the operator sees and note what the operator does as a result, but it has no insight into how the operator is thinking.
This is the problem of Complexity, a recognised and insidious one in the world of RPA. Operations that appear simple are later discovered to be more complex than first thought. The situation becomes worse if it turns out that the operators themselves have some sort of internal state which the bot needs to discover, such as that carried in the instruction: “if the dial moves into the red zone three times in an hour then reverse what you’re doing with the hands”. If the operators are exercising judgement or skill in what they’re doing then the problem gets worse again, as in, for example: “ensure this gauge lies between those two safety lines; if it drops below the lower one then move the hands faster, if it rises above the higher one then move the hands slower”.
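The internal-state problem can be sketched in a few lines. Everything here is hypothetical (the threshold, the readings, the rule itself), but it shows why a bot that maps what it sees directly to an action can never reproduce an operator whose behaviour depends on memory:

```python
RED_ZONE = 90  # assumed threshold, for illustration only

def bot_action(dial_reading):
    # The bot sees only the current reading: same input, same output.
    return "reverse" if dial_reading >= RED_ZONE else "forward"

class Operator:
    """The operator carries internal state the bot cannot observe."""

    def __init__(self):
        self.red_zone_count = 0

    def action(self, dial_reading):
        if dial_reading >= RED_ZONE:
            self.red_zone_count += 1
        # "If the dial moves into the red zone three times... then reverse."
        return "reverse" if self.red_zone_count >= 3 else "forward"

op = Operator()
for reading in [95, 40, 95, 40, 95, 40]:
    print(reading, bot_action(reading), op.action(reading))
```

On the first two excursions into the red zone the operator carries on while the stateless bot reverses, and after the third the operator keeps reversing while the bot reverts to normal: the two diverge, and no amount of extra observation of inputs and outputs alone will tell the bot why.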
Of course, you can increase the complexity of the bot itself to pick up on these things and try to learn them, but it doesn’t take long before you start wondering whether you shouldn’t just have written a program in the first place.
The third problem occurs if, say, one of the bulbs blows, or the hands start moving by themselves unexpectedly. The human operator will likely do something sensible like change the light bulb or let go of the hands, even if they haven’t previously been trained how to do so. Systems created to be operated by humans are built on the basis that humans, not bots, are going to be sitting in front of them. Sometimes these systems will have all of their behaviours documented and sometimes not, but there will always be the assumption that a reasonable human being, with an organic brain in his or her head, will be able to react sensibly to whatever might happen. Bots aren’t people, I’m afraid, and it’s no good pretending. Bots can’t think “outside the bot”, which is why computer interfaces are built in quite a different way to the interfaces that are built for humans.
And then there’s the issue of change control. If Metropolis Central were to decide that they were going to enhance their clock-machines so that every now and then a little message would pop up saying “stand away” followed by the clock’s hands sizzling with electricity, then there’ll be no doubt in the minds of their engineers that the clock-operators will figure out, pretty quickly, what they need to do. The bots, however, will repeatedly get frazzled.
This final problem is the problem of Scope. A poor dumb RPA bot isn’t going to know what to do when anything happens which is outside of the inputs and outputs that it was told about, even once it gets past the problem of Coverage. It can’t read the message “stand away” if it’s never encountered it before, which will certainly be the case if the message couldn’t happen when the bot was being trained because Metropolis Central subsequently changed the behaviour without troubling to inform the RPA bot vendors.
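The Scope problem reduces to something like the sketch below (the observations and responses are illustrative, not any real RPA product’s behaviour): the bot’s entire knowledge is a lookup from trained inputs to actions, so anything added after training can only produce a failure, never an improvised response.

```python
# The bot's learned knowledge base: observations seen in training,
# mapped to the actions the operator took (illustrative entries).
trained_responses = {
    ("12", "37"): "move hands to 12 and 37",
    ("3", "I"): "move hands to 3 and I",
}

def bot(observation):
    action = trained_responses.get(observation)
    if action is None:
        # Outside the bot's scope: it cannot improvise, only fail.
        return "ERROR: unrecognised input"
    return action

print(bot(("12", "37")))  # within training: handled
print(bot("STAND AWAY"))  # behaviour added after training: the bot is frazzled
```

The human operator, faced with the same unfamiliar message, simply reads it and steps back; the bot has no branch for that.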
The only way to solve any of these problems is to program the bots up using some sort of scripting or programming language (and a proper change-controlled spec from Metropolis Central). Once you start doing this, however, you can no longer call them RPA bots, as you no longer have that RPA-bot simplicity which was the reason you started using them in the first place.
Programming requires analysis, design, coding, testing, management, project planning, maintenance, bolshy programmers, buggy software, slipped deadlines, and all of those other nasty little problems which you sought to escape by deploying your RPA bots instead. RPA bots that are scripted are just programs pretending to be something that they’re not.
If the problem you wish to solve requires programming, then just bite the bullet and write the program, or get someone else to write it. Use programmable RPA bots if you like, but accept all the pain that programming brings along with it. If you think you can dispense with all of this with RPA, make doubly or trebly sure that you really are trying to automate a robotic process before you embark on the journey.
True RPA bots, i.e. those without programming, can only provide a solution to the simplest of possible problems. Unfortunately, the laws of wishful thinking seem to suggest that if you think you’ve got a cheap and simple problem you’ll convince yourself that all you need is a cheap and simple solution, and only realise you’ve made a costly mistake once you’re some way down the line deploying your RPA bots in the field.
CloudTrade doesn’t do RPA. Our problems are not robotic, so we program. But if you do decide to use RPA to manage the simple and robotic tasks in your company, make sure you supply your bots with perfect data from CloudTrade. Give them reliable data so at least they can complete their simple tasks.
Want to know more about how CloudTrade’s technology can provide your bots with perfect data?
Download our guide outlining the 10 Top Tips to improve RPA Bots performance.