Containerisation



Like any software-based technology service, CloudTrade continually looks to improve its technical platform, increase scalability and support technical stability as the company continues to grow in customer numbers and document volumes. CloudTrade CTO Richard Develyn outlines in this blog post how we are currently rolling out a new technical architecture based around containerisation.

Containerisation, and cloud computing in general, is the way of the future: both the technology and its adoption are advancing apace. Given our position in the market as a high-growth service provider, our move to containerisation was essential, and we are now well on the way to completing the implementation of containers across the whole of our analytical data-capture service and the whole of our customer base.

For the last two years, CloudTrade has been re-deploying its core analytical data-extraction service onto this highly scalable, cutting-edge infrastructure technology.

As of February 2021, most of the processing that delivers our core service has been teased apart and deployed as individual containers on Microsoft Azure. This milestone marks the end of the second stage of our implementation strategy and, consequently, the beginning of the migration of our existing customer base onto it.

During the first stage we implemented containerisation for our core rules-processing engines and migrated our customer base accordingly; that stage was all about moving the most processor-intensive parts of our service across. The second stage has ensured that all of our time-critical processes are across as well. The third stage will mop up the rest, at which point everything will be running in containers.

The purpose of containerisation is to take a service or application and split it into bite-sized pieces, each of which can be individually deployed and scaled up or down on demand. These pieces, known as containers, are managed using a container eco-system such as Docker. Docker was one of the earliest open-source implementations of such a system and has since gained widespread use in industry; it is also the system that we use at CloudTrade.
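To give a flavour of how one of these bite-sized pieces is defined, a container image is typically described in a Dockerfile. The sketch below packages a hypothetical Python worker process; the `worker.py` name and the base image are illustrative assumptions, not CloudTrade's actual build:

```dockerfile
# Minimal illustrative Dockerfile for a single containerised worker.
# Start from a small official Python base image.
FROM python:3.11-slim

# Copy the worker's dependency list and code into the image.
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY worker.py .

# The container runs exactly one small piece of the overall service.
CMD ["python", "worker.py"]
```

Building this image (for example with `docker build -t worker .`) produces a self-contained unit that an orchestrator can start, stop, and replicate on demand.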

This container eco-system also includes a container orchestration tool. The orchestrator monitors performance metrics for the individual containers and for the queues of messages that run between them, and decides, via configurable rules, whether more containers should be spun up or some should be shut down. This key component allows a containerised system not only to make best use of its resources but also to perform optimally in the face of bursts in processing demand, such as we regularly encounter at CloudTrade when document senders do nothing at all for days and then send us a whole pile of documents expecting them to be processed at once.

The most popular container orchestration tool is Kubernetes. It descends from a project developed internally at Google and is now one of the most successful tools of its kind; it is also the orchestration tool that we use at CloudTrade.
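As an illustration of the configurable scaling rules described above, Kubernetes can express them as a HorizontalPodAutoscaler. The manifest below is a generic sketch: the deployment name, replica bounds, and CPU threshold are illustrative values, not CloudTrade's production settings:

```yaml
# Illustrative HorizontalPodAutoscaler: keep average CPU around 70%,
# scaling a hypothetical document-processing deployment between
# 2 and 20 replicas as demand bursts and subsides.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: doc-processor-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: doc-processor
  minReplicas: 2
  maxReplicas: 20
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

When load spikes, the autoscaler adds replicas up to the configured maximum; when the burst subsides, it scales back down so resources are not wasted.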

A containerised application or service has the additional benefit of being largely infrastructure agnostic. Containers that are running on Microsoft Azure today could be deployed on AWS or on any other Linux-based system tomorrow. This allows a service provider not only to make best use of the resources available from their cloud platform of choice but also to consider moving part or all of their service to another platform if that would result in better performance or lower costs. Currently we deploy on Microsoft Azure, but we are keeping an eye on other cloud providers.
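One concrete way this portability shows up is that the same container image and Kubernetes manifest can be applied unchanged to a managed cluster on any provider. The sketch below is generic; the deployment name, image reference, and resource figures are illustrative assumptions:

```yaml
# Illustrative Deployment: this manifest can be applied with kubectl
# to a managed cluster on Azure (AKS), AWS (EKS), or elsewhere,
# because it references only the container image and Kubernetes
# resources, not anything specific to the underlying cloud.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: doc-processor
spec:
  replicas: 2
  selector:
    matchLabels:
      app: doc-processor
  template:
    metadata:
      labels:
        app: doc-processor
    spec:
      containers:
        - name: doc-processor
          image: registry.example.com/doc-processor:1.0
          resources:
            requests:
              cpu: "250m"
              memory: "256Mi"
```

Only the surrounding cluster changes between providers; the application's definition stays the same.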