Run CUBA on AWS ECS - Part 1

Going from running Docker on the command line to a production scenario can be quite challenging, since there is much more to cover and many more ways to get it right. One solid option for combining Docker and the cloud is AWS.

In the next three articles we’ll go through the different possibilities AWS has to offer, especially regarding Containers as a Service. We will deploy the cuba-ordermanagement CUBA app on an ECS cluster and use different features of the AWS cloud to leverage its full potential for CUBA.

The three parts of the article will cover the following content of the overall topic:

  1. introduction to the AWS services, creating the Docker image and pushing it to ECR
  2. creating a simple ECS cluster and running the CUBA app on it
  3. using different AWS features to extend the ECS cluster towards high availability and to cluster the different CUBA layers independently

A brief history on cloud land

AWS is Amazon’s cloud offering. It is a very popular cloud provider and probably one of the oldest ones as well. Not long ago I read a phrase which sets the scene for the cloud market dominance of AWS quite well:

Nobody ever got fired for choosing AWS

This refers to the well-known marketing phrase “No one ever got fired for buying IBM”. AWS started as an Infrastructure as a Service (IaaS) platform. With EC2 it offers compute capabilities (basically virtual machines). For storage there are a few more options: S3 covers file storage, DynamoDB non-relational data and RDS relational data storage. After that a lot of other services popped up, not only in the IaaS space, but in the PaaS/SaaS space as well. Things like Elastic Beanstalk or AWS Lambda filled the gap between the low-level infrastructure services and alternatives like Heroku.

At the end of 2014 AWS announced ECS, a service that sits on top of Docker in order to orchestrate and manage containers.

ECS is an offering in a highly competitive market. Docker Swarm, Apache Mesos and Kubernetes are just a few tools to mention. Although ECS is not open source like the other examples, it is mainly built on top of Docker and therefore a good fit for deploying our open source CUBA application cuba-ordermanagement. At a later stage we’ll probably take a look at Kubernetes as well, because it has a fairly large user base. Additionally, a lot of products, like OpenShift from Red Hat or Rancher, use it as a basis for their solutions.

To get back to the topic of this article, let’s have a look at the different services that we’ll use throughout the deployment with ECS and AWS.

AWS building blocks relevant for ECS

EC2

EC2 is the basis of a lot of AWS services. It is their compute solution: basically, you can create virtual machines. These VMs can be created with different capabilities like the amount of RAM, CPU power, network bandwidth and so on. An EC2 instance can range from a VM with 512 MB of RAM to a VM with 2 TB of RAM and 128 vCPUs. Here is an overview of instance types.

ELB

Load balancing on AWS can be done through the “Elastic Load Balancer” service. It routes traffic to EC2 instances and can additionally terminate SSL in case the network traffic is HTTP based. The main selling point of ELB is probably the fact that it is built with high availability in mind, just like a few other AWS services. A load balancer can often be built with the essential building blocks of the AWS environment: EC2 instances. The problem is that creating a really highly available solution with something like HAProxy is fairly complicated, especially if the solution should work across two data centers or the like. This is where ELB shines. This article describes the situation around AWS ELB and other approaches very well.

RDS

RDS is another example of a high-level service that sits on top of EC2. It is about storing data in relational databases. Sounds simple, but if you have ever tried to set up an Oracle installation in cluster mode across two data centers with automatic backup and failover, you’ll probably know how much time it costs. RDS allows you to do this within a few clicks. It supports the main database vendors and even takes care of licensing if you don’t want to deal with it explicitly.

VPC

With the Virtual Private Cloud (VPC), AWS lets you control which pieces of your infrastructure are accessible from the internet. You can create subnets within your VPC, connect or disconnect it from the internet, define security rules about what has access to what within the VPC and so on. Even a dedicated VPN tunnel can be created so that no internet connection is needed at all.
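As a rough illustration of these building blocks, here is what creating a VPC with a subnet and a simple security rule could look like with the AWS CLI (all IDs, names and CIDR ranges below are just example values, not part of the actual deployment):

$ aws ec2 create-vpc --cidr-block 10.0.0.0/16
$ aws ec2 create-subnet --vpc-id vpc-12345678 --cidr-block 10.0.1.0/24
$ aws ec2 create-security-group --group-name cuba-web --description "CUBA web layer" --vpc-id vpc-12345678
$ aws ec2 authorize-security-group-ingress --group-id sg-12345678 --protocol tcp --port 8080 --cidr 10.0.0.0/16

The VPC and security group IDs are returned by the respective create calls and would be filled in accordingly.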

ECR

ECR stands for Elastic Container Registry. It is the fully managed solution for storing Docker images (the binaries). Basically it is the same thing as Docker Hub, but with tight integration into the full-fledged security mechanisms of AWS. Besides that, it speaks the same protocol as the Docker registry does, so Docker CLI commands like docker pull tomcat:8-jre will work seamlessly with ECR.
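Just to illustrate that compatibility, pushing a locally built image to ECR could look like this with the plain Docker CLI (the account ID in the registry URL is a placeholder):

$ $(aws ecr get-login)
$ docker tag cuba-ordermanagement-app:latest 123456789101.dkr.ecr.us-east-1.amazonaws.com/cuba-ordermanagement-app:latest
$ docker push 123456789101.dkr.ecr.us-east-1.amazonaws.com/cuba-ordermanagement-app:latest

We will automate exactly these commands with a small script later in this article.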

ECS

ECS is the last building block we will use in these articles. As described above, it is the AWS solution for container orchestration. We’ll go into much more detail about this topic in the second article, because we need to configure the different pieces within ECS. For now it can be seen as the part that automatically deploys Docker containers to existing EC2 instances and takes care of the health of the containers.

Overview on the deployment process

To give you a general idea of what scenarios will be covered in the following articles, here’s an overview diagram:

(Diagram: running the CUBA application on AWS with ECS - overview)

The diagram shows the different building blocks of the AWS infrastructure that are relevant for the ECS deployment. I’ll describe the workflow on a very broad basis here and we’ll go into much more detail on the different steps afterwards.

First, either the developer or your favorite CI system kicks off the deployment process. To do this, the Docker image gets built locally and pushed to the central Docker repository. Next, the Elastic Container Service (ECS) gets notified to redeploy the newly created Docker image.

Since ECS is just the orchestration layer and is not responsible for actually running the Docker containers, pre-configured EC2 instances are contacted in order to redeploy.

To ensure fault tolerance on the EC2 instance level as well as on the Docker container level, multiple EC2 instances serve multiple instances of the Docker image. Thus there is a need for a load balancing mechanism that shields the Docker containers from direct internet access, terminates SSL and balances requests between the instances.

For database access, the CUBA application uses a Postgres RDS instance that is clustered across availability zones. An availability zone, or AZ as it is called in AWS, is basically a data center. The biggest unit on AWS is a region (like eu-west-1: EU Ireland). Within a region there are multiple AZs that are fully isolated but have a high-bandwidth connection between them (to allow e.g. synchronous database clustering). More information can be found in the AWS AZ docs.

Now that every part of the diagram has been briefly covered, we will have a look at the different steps and how to implement them in order to deploy our CUBA application to ECS in the next articles. In this first article we will build the application and deploy the binaries to ECR. This is the prerequisite for creating a simple ECS cluster with cuba-ordermanagement (second part) and, after that, for looking at different options to move the deployment towards more production-ready scenarios (third part).

This first part covers building the image, configuring the registry and pushing the image to it.

Build cuba-ordermanagement Docker image

The first step towards this is to actually get the Docker image to ECR. Based on the Docker image I created in the initial Docker CUBA blog post, here’s a slightly changed Dockerfile. In fact, there is a separate Dockerfile for each application layer so we can build and deploy them independently, but they are quite similar. Below is the Dockerfile for the middleware layer:

### Dockerfile

FROM tomcat:8-jre8

# ...

ADD container-files/war/app-core.war /usr/local/tomcat/webapps/ROOT.war

# ...

ADD container-files/start.sh /start.sh
RUN chmod +x /start.sh

# ...

CMD ["/start.sh"]

This is just an excerpt of the Dockerfile; in the repository you’ll find the whole listing. In comparison to the original Dockerfile, I created a startup file start.sh that ensures certain environment variables are available and copies their values into the Tomcat installation so that the CUBA application is configured correctly. This is done by checking for the existence of the environment variables:

: ${CUBA_DB_HOST?"CUBA_DB_HOST required (docker run -e 'CUBA_DB_HOST=myHost')"}
echo "db.host=${CUBA_DB_HOST}" >> /usr/local/tomcat/conf/catalina.properties

In case the Docker container is not started with something like docker run -e CUBA_DB_HOST=dbServer, the container will complain about the absence of the variable and stop. If the variable is set, the value gets copied into catalina.properties so that the CUBA application is configured through the environment variables.

This is done because, in contrast to Java with its system properties, in the Docker world the smallest building block for configuring a container is the operating system environment variable. Everything that sits on top of Docker, like docker-compose or ECS, is aware of this configuration option. The little trick from above does the translation between these two worlds.
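To make this concrete, starting the middleware container by hand could look like this (the host name is just an example, and further variables like the database name follow the same pattern):

$ docker run -e 'CUBA_DB_HOST=dbServer' -p 8080:8080 cuba-ordermanagement-app-core:latest

If the variable is missing, start.sh aborts with the error message shown above instead of starting Tomcat with an incomplete configuration.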

After the image is described correctly, it can be built locally and then pushed to the central AWS Docker registry (ECR). To build the Docker images I created the shell script docker-build.sh, which lets us build both images. It mainly does the following steps (example for app.war):

./gradlew buildWar
cp build/distributions/war/app.war deployment/docker-image/app/container-files/war/
docker build -t cuba-ordermanagement-app:latest deployment/docker-image/app/

The script takes three parameters to build the image (a minimal sketch of the script follows the list):

  1. the name of the application (cuba-ordermanagement)
  2. the component name (app / app-core / app-portal)
  3. the docker tag / version of the component (latest / 0.1 / 0.2…)
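Such a script could roughly look like this (a simplified sketch, not the exact listing from the repository):

#!/bin/bash
# usage: docker-build.sh <application> <component> <tag>
APP_NAME=$1     # e.g. cuba-ordermanagement
COMPONENT=$2    # e.g. app, app-core or app-portal
TAG=$3          # e.g. latest, 0.1, 0.2 ...

# build the war file and put it where the Dockerfile expects it
./gradlew buildWar
cp build/distributions/war/${COMPONENT}.war deployment/docker-image/${COMPONENT}/container-files/war/

# build the Docker image for this component
docker build -t ${APP_NAME}-${COMPONENT}:${TAG} deployment/docker-image/${COMPONENT}/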

The resulting calls from the root of the project for the cuba-ordermanagement application look like this:

$ deployment/docker-build.sh cuba-ordermanagement app-core latest
$ deployment/docker-build.sh cuba-ordermanagement app latest

After this, the Docker images should be created and can be listed via docker images:

$ docker images
REPOSITORY                      TAG                 IMAGE ID            CREATED             SIZE
cuba-ordermanagement-app        latest              e5e0498669dc        2 minutes ago        388.7 MB
cuba-ordermanagement-app-core   latest              2446085b946a        2 minutes ago        380.8 MB

Deploy docker image to ECR

The last step for this part is to transfer the Docker images to ECR. A few steps have to be taken before you can communicate with ECR. Detailed information about this can be found in the ECR getting started guide. Here are the main points:

  1. create an AWS account
  2. follow the ECS setup instructions
  3. create an ECR repository for each component (app-core and app) - see the CLI sketch below the list
  4. install the AWS CLI
  5. login to ECR via the Docker CLI: $(aws ecr get-login)
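Step 3 can also be done from the command line instead of the web console; for the two components used here that could look like this:

$ aws ecr create-repository --repository-name cuba-ordermanagement-app-core
$ aws ecr create-repository --repository-name cuba-ordermanagement-app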

At least for me it took quite some time to go through the different steps, but this is just a one-time effort in order to get going with AWS. Additionally, a lot of these steps are prerequisites for ECS as well.

I created another shell script called docker-deploy-to-ecr.sh which will do the job for us. Since ECR is quite seamlessly integrated into the normal Docker workflow, the script just uses the standard Docker commands docker tag and docker push to transfer the images to the repository. It takes the same arguments as the last script plus one additional parameter: “ECR_REGISTRY_HOST”.
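Conceptually the script boils down to something like the following (again a simplified sketch, not the exact listing from the repository):

#!/bin/bash
# usage: docker-deploy-to-ecr.sh <application> <component> <tag> <ecr-registry-host>
APP_NAME=$1
COMPONENT=$2
TAG=$3
ECR_REGISTRY_HOST=$4

# re-tag the local image with the ECR repository URL and push it
docker tag ${APP_NAME}-${COMPONENT}:${TAG} ${ECR_REGISTRY_HOST}:${TAG}
docker push ${ECR_REGISTRY_HOST}:${TAG}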

With these commands you transfer the local Docker images to the AWS ECR repositories:

$ deployment/ecs/docker-deploy-to-ecr.sh cuba-ordermanagement app-core latest 123456789101.dkr.ecr.us-east-1.amazonaws.com/cuba-ordermanagement-app-core
$ deployment/ecs/docker-deploy-to-ecr.sh cuba-ordermanagement app latest 123456789101.dkr.ecr.us-east-1.amazonaws.com/cuba-ordermanagement-app

When you have a look at your ECR repository list, both repositories should host at least one Docker image now.
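If you prefer the command line over the web console, the same check can be done with the AWS CLI, for example:

$ aws ecr list-images --repository-name cuba-ordermanagement-app-core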

With these steps in place you are ready to take on the next major topic. Building on what we have achieved, namely that our application is available to the ECS environment, we can now create an ECS cluster that will run the Docker containers.

This will be covered in the second part of this three part series about AWS ECS.

Mario David
Software developer with passion on agile, web and fast development, blogger, father, family guy
