What is Amazon Containerization - An Actionable Guide to Running Containers in AWS

Containers have become an industry standard. Deploying software is easier and more reliable if it is containerized, especially if you are deploying to the cloud.

AWS, the leading cloud provider, offers several ways for you to deploy your apps as containers.

In this article I will:

- Explain why containers are so valuable

- Examine the various ways of running containers in AWS

- Demonstrate one approach to running a containerized web application on AWS

Why Are Containers So Valuable?

Mimicking a production environment locally can be challenging. Chances are that the operating system, runtime version, and many other dependencies are different on your local machine compared to the production server. These discrepancies can make local testing difficult, and cause unexpected behavior and critical failures in production.

Enter Docker

docker whale logo

When Docker appeared in 2013, it quickly became very popular. All of a sudden, you had a platform that enabled you to package your application, along with all of its dependencies, into a sort of "mini-computer" called a container. This ensured that there would be no discrepancy between the local and production environments, as long as both ran the app in a containerized manner.

The Rise of Container Orchestrators

kubernetes logo

The container revolution opened up a number of new questions, the main one being: what is the optimal way to actually run containers in production? Industry leaders quickly realized that some sort of orchestration tool was essential. Docker released Swarm in 2014, and Apache Mesos (paired with Marathon) was also being used to schedule containers at scale. Then, in July 2015, Kubernetes 1.0 was released, which marked the beginning of its dominance in the orchestration field.

What About AWS?

AWS, meanwhile, had built its own orchestrator, Elastic Container Service, which was announced in late 2014 and became generally available in 2015. But this is certainly not the only way to run containers in AWS.

How To Run Containers On AWS

Elastic Beanstalk

aws elastic beanstalk logo

Although Beanstalk is generally used as a way to easily migrate monolithic applications from private data centers to the cloud, it also supports running Docker containers. You select Docker as your platform and it will create an environment with Docker already installed:

selecting a docker environment on ebs

Even though Beanstalk is usually not the default choice for running containers on AWS, it's good to know that this option is available for certain cases where it may make sense.
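For instance, with the Elastic Beanstalk CLI installed, spinning up a Docker environment is just a couple of commands. This is only a sketch: the application and environment names below are placeholders, and it assumes a Dockerfile sits in the current directory:

# initialize an EB application on the Docker platform, then create an environment
eb init my-docker-app --platform docker --region eu-west-1
eb create my-docker-env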

Elastic Kubernetes Service

aws elastic kubernetes service

It was not until 2018 that AWS introduced its own managed Kubernetes offering, called Elastic Kubernetes Service. Configuring Kubernetes master nodes (also known as the control plane) and workers is not an easy task, so AWS offers to do it for you. You pay $0.10 per hour for each Amazon EKS cluster that you create, plus the standard fee for any worker EC2 node that you add to the cluster.

With the dominance of Kubernetes in the orchestration area, EKS is definitely a popular choice for running containerized apps on AWS. It is commonly used for systems that run tens, or even hundreds of different micro-services.
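As an example of how much AWS takes off your plate, a complete cluster (control plane plus workers) can be provisioned with eksctl, a popular CLI tool for EKS. The cluster name, region, and node count here are just illustrative values:

# creates the control plane and a managed node group with two EC2 workers
eksctl create cluster --name demo-cluster --region eu-west-1 --nodes 2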

Elastic Container Service

aws elastic container service

As I mentioned, ECS was released around the same time as Kubernetes. Although ECS never gained the same level of popularity, it is still an amazingly useful tool. It is typically used for systems with a smaller number of micro-services that do not require the more advanced features of Kubernetes. Where Kubernetes is feature-rich, ECS is much simpler to use.

Fargate

aws fargate logo

All of the approaches mentioned so far imply running containers on EC2 instances. AWS Fargate, on the other hand, is a serverless approach: AWS runs your containers on servers that you never have to maintain yourself. Fargate is available for both EKS and ECS.

App Runner

aws app runner logo

App Runner is a fully managed, container-native service, meaning that it's serverless, just like Fargate. In fact, it runs on top of ECS Fargate. It abstracts away most of the ECS configuration, like creating clusters and services, making it even easier to get your container running in the cloud.

Lightsail Containers

Amazon lightsail logo

AWS Lightsail is a service mainly aimed at users looking for the easiest possible way into the cloud. Many configurations and settings are simply not exposed, but in exchange, deploying applications is very simple, even for users who are not particularly cloud-savvy.

In 2020, Lightsail introduced Lightsail Containers, which provides the same simple interface, but allows the user to deploy their app in a containerized environment.
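To illustrate the simplicity, creating a container service takes a single CLI call. The service name, power (instance size), and scale (number of nodes) below are example values:

aws lightsail create-container-service --service-name flask-app --power nano --scale 1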

EC2

amazon ec2 logo

Finally, nothing prevents you from launching an EC2 instance, installing Docker on it, and running containers that way. You could even configure an orchestrator manually on a cluster of EC2 instances. Although AWS offers to do the heavy lifting for you with services such as ECS, EKS, or App Runner, running containers directly on EC2 is a perfectly valid option if your situation requires it.
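For example, on an Amazon Linux 2 instance, getting Docker up and running takes only a few commands (a sketch, assuming you are connected over SSH as ec2-user):

sudo yum update -y
sudo amazon-linux-extras install docker -y
sudo service docker start
# let ec2-user run docker commands without sudo (log out and back in to apply)
sudo usermod -a -G docker ec2-user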

Deploying a Containerized App on AWS

It's time for some hands-on work. In this example, I will demonstrate how to deploy an application from your local machine to AWS ECS Fargate.

Prerequisites

To follow along you will need:

- An AWS account

- Docker installed on your local machine

- The AWS CLI installed and configured

Developing the App Locally

Every application containerized with Docker needs two things:

- The application code
- A Dockerfile

Your application code can be whatever you wish, but for the sake of example, I will make a simple Flask web app:

from flask import Flask

app = Flask(__name__)

@app.route("/")
def home():
    return "Deployment worked!"


if __name__ == "__main__":
    # Bind to all interfaces so the app is reachable from outside the container
    app.run(host="0.0.0.0", port=5000)

Next, we need a Dockerfile, which is basically a recipe for building a Docker image. We will push the finished image to the Elastic Container Registry (ECR). You can think of ECR as a GitHub for Docker images. Docker has its own registry, called Docker Hub, but since we're deploying to AWS, it makes sense to use ECR.

# Small Alpine-based Python image
FROM python:3.8-alpine
WORKDIR /app
# Flask is the app's only dependency
RUN pip install flask
COPY app.py /app
# Document the port the app listens on
EXPOSE 5000
CMD [ "python3.8", "app.py" ]

As you can see, the Dockerfile is basically a set of commands that configure the container's environment. Since this is a very basic app, the Dockerfile is pretty small.

Building the Docker image

To build an image from the Dockerfile, we need to run the following command:

docker build -t test_app .

test_app is simply the name of the image; you can call it whatever you like. If everything went right, you will be able to see that the image was indeed built by running:

docker image ls

Now that the image is built, you can run the Docker container locally to see if everything works as expected:

docker run -p 5000:5000 test_app

The -p flag maps a local port to the container port.
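With the container running, you can confirm the app responds, either at http://localhost:5000 in your browser or straight from the terminal:

curl http://localhost:5000   # should print: Deployment worked!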

Pushing the Image to ECR

Before we can push this image to ECR, we need to go to the AWS console and create an ECR repository:

Create an ECR repository called `test_app` in the AWS console
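If you prefer the terminal, the same repository can be created with the AWS CLI (adjust the region to match your own):

aws ecr create-repository --repository-name test_app --region eu-west-1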

Now we want to push the image to AWS ECR. We do this with the AWS CLI:

aws ecr get-login-password --region eu-west-1 | docker login --username AWS --password-stdin <YOUR aws_account_id>.dkr.ecr.eu-west-1.amazonaws.com

This command retrieves an ECR login password and pipes it into the docker login command.

Note that the region may be different depending on which region you have selected in AWS. Also remember to insert your own AWS account ID in the ECR URL.

In order to push to ECR, we first need to tag the image. Tagging requires the image ID, which we can get from the list of images:

docker image ls

When we find the image ID, we run:

docker tag <image_id> <YOUR aws_account_id>.dkr.ecr.eu-west-1.amazonaws.com/test_app:1

docker push <YOUR aws_account_id>.dkr.ecr.eu-west-1.amazonaws.com/test_app:1

This will initiate the uploading of the image to AWS.

Back in the AWS console, if we look at our ECR repository, we will now see that the image was successfully uploaded:

ECR image uploaded successfully
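You can verify this from the CLI as well:

aws ecr describe-images --repository-name test_app --region eu-west-1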

Creating an ECS Cluster

Now we can go to ECS in the console and create our Fargate cluster. Select Create Cluster and choose the "Networking only" option, since we do not want to manage any servers:

creating a fargate cluster

Choose a name for your cluster, and select Create VPC as well. None of these resources cost anything until you actually run a task, so don't worry about the cost just yet :)

If the cluster was created successfully, you can click on it to see that there are no services or tasks running yet:

fargate cluster view services
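For reference, an empty cluster can also be created from the CLI, although, unlike the console wizard, this will not create a VPC for you:

aws ecs create-cluster --cluster-name test-cluster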

Creating an ECS Task Definition

Now we need to create a task definition, which is, once again, essentially a recipe.

In the task definition, you describe which containers will run in your task. An ECS task is roughly the equivalent of a Kubernetes pod. It's usually good practice to have only one container per task, but you can have several, depending on your use case.

You create a task definition by clicking on

task definitions -> create new task definition -> fargate

Once you are in the task definition menu, name your task definition whatever you like. The task role does not concern us, because this simple example will not communicate with any other AWS services. In a real app, we would most likely attach some IAM policies to a role and assign it here. For the operating system family, choose Linux, and set the CPU and RAM to the minimal values (this is where you can get charged extra if you are not careful).

Next, click on the Add container option, where we will add our containerized Flask app to the task:

add container

You need the URL of the image located in ECR, so hop over to ECR and copy it.

Make sure to set the port mapping to the correct port. You can skip the rest of the configuration, as it doesn't matter for this simple example.

Finally, click Create at the bottom of the menu, and the task definition is ready.
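The same thing can be done from the CLI with aws ecs register-task-definition. This is only a sketch: the family name is arbitrary, the image URL must point at your ECR repository, and ecsTaskExecutionRole is the standard role that lets ECS pull images from ECR on your behalf:

aws ecs register-task-definition \
  --family test-app \
  --requires-compatibilities FARGATE \
  --network-mode awsvpc \
  --cpu 256 --memory 512 \
  --execution-role-arn arn:aws:iam::<YOUR aws_account_id>:role/ecsTaskExecutionRole \
  --container-definitions '[{"name":"test_app","image":"<YOUR aws_account_id>.dkr.ecr.eu-west-1.amazonaws.com/test_app:1","portMappings":[{"containerPort":5000}]}]'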

Creating and Running the ECS Task

Now we can finally run this task. In your ECS cluster, go to tasks -> run new task

For the launch type, select Fargate, and for the OS, choose Linux. You need to select at least one subnet, and since we haven't provisioned a load balancer or reverse proxy, make sure it is a public subnet so the task can be assigned a public IP and reached from the internet:

run a new fargate task

Now you need to create a security group. A security group is basically an instance-level firewall (or task-level, in the case of Fargate) that blocks traffic on all ports unless specified otherwise. This is important because if you don't allow TCP traffic on port 5000 (the port we specified in our task definition), it will be impossible to reach this web app from anywhere.

creating a new security group
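Creating an equivalent security group from the CLI takes two commands (a sketch; the group name is arbitrary and <YOUR vpc_id> is the VPC your cluster runs in). Note that 0.0.0.0/0 opens port 5000 to the entire internet, which is acceptable only for a throwaway demo like this one:

aws ec2 create-security-group --group-name flask-sg --description "Allow Flask on port 5000" --vpc-id <YOUR vpc_id>
aws ec2 authorize-security-group-ingress --group-id <sg_id from the previous output> --protocol tcp --port 5000 --cidr 0.0.0.0/0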

Finally, just make sure that the Auto-assign public IP setting is set to Enabled, and you can click Run task.

It will probably take a minute or two to start running, but once the Last status field shows RUNNING, click on the task and find its public IP:

check if task is running

Paste the IP into your browser, and remember to add the port number (for example, http://<public_ip>:5000):

fargate app running in browser

Looks like everything worked!

Cleanup

To avoid being charged for our example, simply select Delete cluster in your cluster's menu. You will see each resource being deleted:

deleting the ecs cluster

ECR storage costs only around $0.10 per GB per month, but for the sake of good hygiene, you can also go over to ECR and delete the repository.
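This is also a one-liner in the CLI (--force deletes the repository even though it still contains images):

aws ecr delete-repository --repository-name test_app --region eu-west-1 --force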

Summary

Knowing how to run containers in the cloud is a very important skill for any developer. I hope this article helped you understand some concepts regarding containers in AWS more clearly. If you liked it, make sure to follow the AWS publication on Hashnode for more great articles about AWS.

Thanks for reading!