
RPA Technology

Robots don’t make mistakes – but data does!

Reading Time: 3 minutes
RPA bots don’t make mistakes if the instructions are correct.

There has been a huge amount written about the benefits of Robotic Process Automation (RPA) and probably as many column inches dedicated to the challenges and pitfalls. In this article and our upcoming webinar, we explore the role that data plays in all RPA projects and the impact that bad data has on the robots and the desired business outcome.

Whatever industry you work in, or whatever your interests may be, you will almost certainly have come across a story about how “data” is changing the face of our world, particularly “big data”. You may have heard the term in connection with a study helping to cure a disease, boost a company’s revenue, improve customer service, make a building more efficient, or in relation to those targeted ads we keep seeing.

But we don’t mean THAT “data”!

Whichever term is used, data is simply another word for information. In computing and business, though, data refers to information that is machine-readable as opposed to human-readable.

In business, we receive masses of data in human-readable form, such as contracts, invoices, orders and HR records. These documents need to be converted to a machine-readable form so that technology, like RPA, can be used to automate the process end-to-end.

The first challenge is to have the creator of the document produce it in a digital format that is also human-readable, so that further downstream the document can be read, the data extracted and the result passed to a robotic process for automation. Data extraction can be achieved with 100% accuracy if the document is produced in a digital format that contains a text layer.
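To make this concrete, here is a minimal sketch of pulling the text layer out of a digitally produced PDF, using the open-source pdfminer.six library in Python. The file name is hypothetical, and this is purely illustrative rather than a description of CloudTrade’s own engine.

```python
# Minimal sketch: extract the embedded text layer from a digital PDF.
# Requires the open-source pdfminer.six package (pip install pdfminer.six).
from pdfminer.high_level import extract_text

# "invoice.pdf" is a hypothetical example file. Any PDF produced digitally
# with a text layer can be read this way, with no OCR involved.
text = extract_text("invoice.pdf")

# The text is now machine-readable and can be parsed and handed on to a
# downstream robotic process.
print(text)
```

Because the characters are read directly from the file rather than guessed from an image, the extraction step itself does not introduce errors.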

Images causing havoc

But where the sender chooses to create an image file, you must rely on Optical Character Recognition (OCR) to convert the text to a machine-readable format. The problem with OCR is that the receiver has no control over the image quality or how the data is presented, so accuracy can never be guaranteed, and it is these data errors that cause havoc with the RPA process.
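By contrast, here is a minimal sketch of OCR on an image file, using the open-source pytesseract wrapper around the Tesseract engine. Again the file name is hypothetical; the point is that the quality of the output depends entirely on the quality of the incoming image.

```python
# Minimal sketch: OCR an image file with Tesseract via pytesseract.
# Requires Pillow and pytesseract, plus a local Tesseract installation.
from PIL import Image
import pytesseract

# "scanned_invoice.png" is a hypothetical example file. Because the receiver
# has no control over resolution, skew or noise, the accuracy of the text
# returned here can never be guaranteed.
image = Image.open("scanned_invoice.png")
text = pytesseract.image_to_string(image)

print(text)
```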

Ensuring the best data for your robots

To make sure your bots do not go awry, the first challenge is getting the sender to create a digital document. To do this, we need to remove any barriers, ensure there is no cost or resource requirement and ideally no process change for the sender. The second challenge is to remove paper or image files that require OCR.

Bad data, big problems

Let’s consider the consequences of bad data for a minute. Misreading a measurement or value could mean an engine part is manufactured to the wrong size, or an order is processed with the wrong amount: a -10 becomes 100, and so on. Data without context adds a second layer of complexity, as ‘ea’ could be read as ‘each box’ rather than ‘each unit’. There is a clear and obvious need not only to read data accurately, but also to understand the context of each data element.

Now consider these challenges at scale and the impact of such errors on ‘big data’. As more of the world’s business processes become digital and move online, the need to process data accurately at scale has never been more important.

RPA for business process automation

In the world of shared services, we have looked to deploy RPA in areas such as invoice and order processing to increase automation and drive efficiencies. Through the implementation of innovative technologies such as RPA, human tasks are rapidly moving from the mundane and repetitive to quality control and cognitive value creation. The theory is great, but the reality is that unless the right technology and business process are deployed to convert human-readable documents to a machine-readable format, the data fed to the RPA bots will always contain errors. You can read more about RPA integration and CloudTrade here.

Technology for data perfection

There is a solution: read digital documents and convert the data they contain into a format a machine can process, giving bots the right tools for the job.

We’re running a webinar focusing on this integration for RPA; sign-up is available here. The session will address how this proven approach works for RPA, provide a live demonstration of delivering 100% accurate data, and show how to automate business processes in a way that eliminates human intervention.

Microservices

Our journey from Monolith to Microservices

Reading Time: 5 minutes

Richard Hooper, Head of Systems, explains how CloudTrade upgraded its software environment to cope with increased demand and some of the problems solved along the way.

Just over a year ago at CloudTrade, we made the jump and decided that containers (using Kubernetes) were the answer to all our application issues. In this article I will examine why we chose to jump on the container bandwagon, which could be termed the ‘latest tech craze’, as well as how we solved some of the issues along the way. But first, a little about me.

About me

I’m Richard Hooper, Head of Systems and a Microsoft MVP in Azure. I started with CloudTrade back in March 2018 as a Systems Architect. As CloudTrade grew, so did my responsibilities, and I now manage a team that looks after the internal servers as well as the desktop estate, the Azure estate and the whole production estate.

My passion lies in all things technology, especially Microsoft Azure. In my spare time I blog about Azure at https://pixelrobots.co.uk and can be found hosting the North East Azure User Group.

Was a container system the right thing to do?

It’s a question I ask myself often. With the rate of change in the cloud world you have to keep questioning and evaluating, as a new technology seems to come out almost monthly. Every time I ask, I come to the same conclusion: yes. And as we have become more familiar with microservices and what we need from our application, I know we made the right choice.

Why microservices?

The application that powers CloudTrade’s unique data acquisition technology, Gramatica, started life as something of a desktop application. It needed the user to be logged in and wrote a lot of files onto the server or desktop. One saving grace is that the application was originally built as a series of steps, with each step handing over to the next through files. When I found this out, it was a relief, as it would make the move to microservices easier.

Why change then, I hear you ask! Well, for a start, managing the server and the application became difficult, especially if you wanted any kind of automated patching, and I certainly did not want to keep patching servers out of hours. But the main driving force for the move was scalability – the dream for a software business.

With the way the application was created, and all the file access, scaling was a right pain! You either had to run more copies of the application per user, if there were enough free resources on the server, or spin up a new server and migrate the user and application to it. Sometimes we would also hit disk issues with capacity and IOPS.

With the move to Kubernetes, an open-source container-orchestration system, and more specifically Azure Kubernetes Service (AKS), this headache has gone away. Our AKS cluster utilises Virtual Machine Scale Sets (VMSS), which allow the cluster to auto-scale its nodes automatically when resources become constrained. Another great feature of Kubernetes is the way it can automatically scale your deployments (a deployment is a collection of pods; a pod is a wrapper for containers in Kubernetes). How awesome is that?! But all this awesomeness still came with issues, issues that we had to get over to make this journey a true success.
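To make the deployment and pod terminology a little more concrete, here is a minimal sketch (not our production configuration) of asking Kubernetes to auto-scale a deployment with a Horizontal Pod Autoscaler, written with the official kubernetes Python client. The deployment name, namespace and thresholds are all hypothetical, and this uses the basic CPU-based autoscaling that Kubernetes supports out of the box.

```python
# Minimal sketch: create a Horizontal Pod Autoscaler so Kubernetes scales a
# deployment's pods automatically. Requires the kubernetes Python client.
from kubernetes import client, config

config.load_kube_config()  # assumes a kubeconfig pointing at the cluster

hpa = client.V1HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="worker-hpa"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="worker"
        ),
        min_replicas=2,
        max_replicas=10,
        target_cpu_utilization_percentage=70,  # add pods when they run hot
    ),
)

client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
    namespace="default", body=hpa
)
```

Kubernetes then adds or removes pods within those bounds, while the VMSS looks after adding or removing the underlying nodes. So much for the awesomeness; back to those issues.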

Oh no, not issues!

Yes, with any journey you are always going to have hurdles along the way, and this one is no different. One of our main issues is that part of our new microservices application needs to run in Windows containers. This was the problem we tried to fix first – some may say that was a mistake, as Kubernetes did not support Windows containers at the time, but Docker did!

To get round this issue, we are currently running the microservice on Windows Server 2019 in a VMSS using a custom hardened image. We run six containers per node: one for configuration and five for actual processing.

Scaling became a bit of an issue as we moved more onto this new microservice. As we are now using RabbitMQ instead of the file system, we came up with a brilliant solution: an Azure Logic App queries the RabbitMQ cluster, which runs inside our AKS cluster, every 15 minutes. It checks the queue size and how many containers are consuming the queue, and then scales the VMSS nodes up or down. Unfortunately, we had to settle on a 15-minute interval for the check, as the nodes can take a while to come up.
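The decision logic behind that check is simple enough to sketch in a few lines of Python. This is an illustrative approximation of what the Logic App does, not the Logic App itself; the URL, credentials, queue name and per-node figure are all hypothetical.

```python
# Minimal sketch of the scaling decision: read the queue depth from the
# RabbitMQ management API, then pick a node count for the Windows VMSS.
import math
import requests

# Hypothetical management API endpoint for a queue on the default vhost.
RABBITMQ_API = "http://rabbitmq.example.internal:15672/api/queues/%2F/work-items"
CONTAINERS_PER_NODE = 5  # five processing containers per Windows node

def desired_node_count() -> int:
    queue = requests.get(RABBITMQ_API, auth=("guest", "guest")).json()
    messages = queue["messages"]    # items waiting to be processed
    consumers = queue["consumers"]  # containers currently consuming the queue
    # Rough rule of thumb: enough nodes to cover the backlog the current
    # consumers are not already handling, but never fewer than one node.
    uncovered = max(0, messages - consumers)
    return max(1, math.ceil(uncovered / CONTAINERS_PER_NODE))

def scale_vmss(node_count: int) -> None:
    # Placeholder: in our setup the Logic App calls the Azure API to set the
    # VMSS capacity; an Azure SDK or REST call would go here.
    print(f"Scaling VMSS to {node_count} nodes")

scale_vmss(desired_node_count())
```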

We are currently rewriting this application to run on Linux, so my tip is: if you can get away without running Windows containers, do it!

As we use RabbitMQ to drive the scaling of the microservices that run inside the AKS cluster, we were unable to utilise the basic container autoscaling that comes with Kubernetes. After some research we came across Keda, an open-source project from Microsoft and Red Hat. Keda extends the basic container autoscaling and allows us to scale based on RabbitMQ queue size, and more quickly than the Logic App approach described above. We were quite lucky that Keda went GA just in time for us to release the second phase of containers.

What’s next?

We are continuing our journey, with the next phases already being worked on; we hope to get the release into production by the second half of this year. Once each step has been finished, we will be left with what we are calling a skeleton of our old application, still running on the servers. Some time will need to be spent removing these remnants to complete our journey, as we envisage there will be no need for any servers apart from the AKS nodes.

We will also continue with another journey: utilising tools like GitHub Actions and Azure DevOps to automatically build and release each microservice to our test and then production AKS clusters. This will enable us to fully embrace the ‘DevOps mentality’, not only improving internal processes but also improving the application.

Feel free to reach out if you would like to discuss any of the above – thanks for reading!

CloudTrade specialises in converting documents (with 100% accuracy) so machines can read them.

Learn more about CloudTrade and our technology here.

It may not be rocket science, but it can be complex

Reading Time: 3 minutes

Reading documents may not be rocket science, but computers struggle to do what humans find simple. Is technology finally able to read documents in the same way as humans?

CloudTrade are in the business of extracting and interpreting information from documents which have been written to be understood by people, not computers.

This is probably one of the most frustrating problems in the history of IT.

Reading stuff out of documents feels easy to us, as people. Anything to do with people communicating with other people feels easy, and we tend to think that since computers are cleverer than we are (in many ways), if a person finds a task easy, then a computer should find it no trouble at all.

The problem is that we tend to forget just how clever people are. Even if you struggle with long division, that brain of yours, which controls everything from getting out of bed in the morning to washing, driving to work, eating lunch, watching TV and so on, leaves the most powerful computers in the world floundering at the starting post like electronic tortoises.

Communicating with other people, in speech or in writing, falls into that category of stuff that your brain is very good at but computers struggle to do. People get a lot of practice at it. No computer in the world could have read what you’ve read so far and have any idea of what I’m talking about, but you’ve understood me completely (well I hope so!).

CloudTrade aren’t in the world of building robots, of course, not even robotic tortoises. Neither are we trying to write a full natural language processor which could understand everything that a human being might want to say to it; that sort of achievement is still firmly within the realms of science fiction. However, what we have built at CloudTrade is a natural language processing engine which can understand those documents we have programmed it to understand. This is much more sophisticated than the approaches which are otherwise prevalent in the marketplace.

For example, just hoping that a particular bit of information on a document (for instance, a VAT number) will always be found in the same place on a page isn’t going to work. Neither will the idea that you might be able to go hunting for some unique piece of text and then look a predetermined distance in a given direction to find what you’re after. These sorts of techniques work occasionally, but most of the time pages jiggle around, and the chances of finding something which is not only guaranteed to be unique but also always in the same location relative to what you’re looking for are tiny.
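To see why, here is a minimal sketch of that fragile ‘anchor and offset’ technique (the approach being criticised, not the one CloudTrade uses). The word structure and the offsets are hypothetical.

```python
# Minimal sketch of the fragile "anchor and offset" approach: find a unique
# label, then look a fixed distance away on the same line for the value.
# Words are assumed to arrive as {"text": str, "x": float, "y": float}.
def find_vat_number(words: list[dict]) -> str | None:
    for anchor in words:
        if anchor["text"] == "VAT":
            for candidate in words:
                # Assume the value always sits within 150 units to the right
                # on the same line - exactly the assumption that breaks as
                # soon as the page layout "jiggles around".
                if (candidate is not anchor
                        and abs(candidate["y"] - anchor["y"]) < 2
                        and 0 < candidate["x"] - anchor["x"] < 150):
                    return candidate["text"]
    return None
```

The moment a supplier moves the label, splits it across lines or adds a second column, a rule like this silently returns the wrong value or nothing at all.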

We frequently get people coming to us after they’ve tried these sorts of solutions and then given up in frustration, and I sympathise with this scenario. Often, they thought that the problem they had was an easy one, so they bought into an easy solution, more often than not wrapped up with some sort of neural network element, which then proves unhelpful. They’ve then discovered that this easy solution didn’t work and that they had to spend all of their time filling in for its mistakes, or being told that they had “yet another” special case which would require costly scripting or programming.

CloudTrade are simply not like this.

Ok, I know anyone can make that sort of claim, but I like to think that we put our money where our mouth is by offering our solution as a full service, rather than as a software licence where you may be left to find out for yourself whether the solution works effectively or not. We configure it to fit your requirements and when it’s up and running we correct its mistakes and maintain it for as long as you stay with us. Furthermore, we’ll charge you the same price for every document we handle, no matter how awkward or complicated it may be.

We’re the only company prepared to do this because we know, ultimately, that we’ve built the right solution. It may not be rocket science, but it’s actually pretty clever, and it turns out that you need to be pretty clever if you want to solve this problem.

CloudTrade specialises in converting documents (with 100% accuracy) so machines can read them.

Learn more about CloudTrade and our technology here.

Why Intelligent Data Capture is a Stepping Stone Towards Machine Learning

Reading Time: 2 minutes

As you’ve likely noticed, an increasing number of small and medium-sized businesses are assessing how they can adopt machine learning, artificial intelligence and RPA.

The Future is Here

It’s becoming widely accepted that the various forms of AI, like machine learning, will continue to grow in importance in the coming years. A recent survey conducted by Brother UK and The Telegraph found that almost half of IT decision makers (45%) believe that AI will drive the biggest innovations of the next three years.

But many organisations are unsure how best to integrate the technology into their existing workflows. This uncertainty can lead to paralysis, as managers choose to put off adoption, believing that it’s too complicated for them to implement today. Many also believe the tech is too expensive and won’t deliver returns at this relatively early stage in its development.

The spark that lights the fire

It’s important to remember that Rome wasn’t built in a day. As we discussed on the blog a couple of weeks ago, the best time to start any digital transformation is now, even if starting means starting small. This is especially true when it comes to AI and machine learning.

One necessary ingredient for any machine learning project is clean data. But many businesses don’t have the quantity of data that the algorithms require to produce reliable predictions. If you don’t have massive silos of clean data, then you need to consider intelligent data capture.

Act now, or get left behind…

Last week on the blog we spoke about how extracting line level data gives companies greater insights into their bottom lines. If your business is going to leverage AI at some point in the future, then you’re not going to be able to do it without data. If you’re only capturing small amounts of basic information, you will only be able to make basic assumptions.

In short, we firmly believe that intelligent data capture is a crucial stepping stone towards the implementation of machine learning. Implementing intelligent data capture technology today will give your business a long term advantage over your competitors. And companies must act with a sense of urgency, recognising how information and data that’s flowing through systems on a daily basis is currently being lost. For example, once an invoice has been processed, without intelligent data capture, that data is lost forever.

Prepare for tomorrow

Intelligent data capture is not only a solution that can help you and your business today. Perhaps more importantly, it’s also an investment in the future of your business, creating opportunities that can be realised further down the line when machine learning and AI become truly ubiquitous and affordable.

As machine learning capabilities improve, it will no longer be a question of whether a company can afford its implementation. It will be whether a company can afford to be without it.

CloudTrade can help you and your company begin to successfully implement technology that can be the first stepping stone towards machine learning. If you want to find out more, get in touch.