API with NestJS #93. Deploying a NestJS app with Amazon ECS and RDS


In the last two parts of this series, we’ve dockerized our NestJS application. The next step is to learn how to deploy it. In this article, we push our Docker image to AWS and deploy it using the Elastic Container Service (ECS).

Pushing our Docker image to AWS

So far, we’ve built our Docker image locally. To be able to deploy it, we need to push it to the Elastic Container Registry (ECR). A container registry is a place where we can store our Docker images. ECR works in a similar way to Docker Hub. As soon as we push our image to the Elastic Container Registry, we can use it with other AWS services.

We need to start by opening the ECR service.

The first thing to do is to create a repository for our Docker image.
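
If we prefer the terminal over the web console, the repository can also be created with the AWS CLI. Below is a minimal sketch; the repository name nestjs-api and the region are assumptions and can be anything we like.

# creates an ECR repository named nestjs-api in the chosen region
aws ecr create-repository --repository-name nestjs-api --region us-east-1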

We now need to build our NestJS Docker image and push it to our ECR repository. When we open the newly created repository, we can see the “View push commands” button.

Clicking on it opens a popup with a helpful list of all the commands we need to run to push our image to the repository.

First, we need to have the AWS CLI installed. For instructions on how to do that, visit the official documentation.

Authenticating with AWS CLI

The first step in the above popup requires us to authenticate. To do that, we need to understand the AWS Identity and Access Management (IAM) service.

IAM is a service that allows us to manage access to our AWS resources. When we create a new AWS account, we begin with the root user that has access to all AWS services and resources in the account. Using it for everyday tasks is strongly discouraged. Instead, we can create an IAM user with restricted permissions.

Our user needs a permission that allows us to push our image into ECR.
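
One common way to grant it is to attach an ECR-related managed policy, such as AmazonEC2ContainerRegistryFullAccess, to the user. Below is a hedged sketch of doing that from the CLI with credentials that already have IAM permissions; the user name nestjs-deployer is an assumption, and the same thing can be done through the “Add permissions” button in the console.

# attaches the AmazonEC2ContainerRegistryFullAccess managed policy
# to a hypothetical IAM user named nestjs-deployer
aws iam attach-user-policy \
  --user-name nestjs-deployer \
  --policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryFullAccess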


We need to open the “Security credentials” tab and generate an access key. We will need to use it with AWS CLI.

The last step is to configure the AWS CLI in the terminal and provide the access key and the secret access key we’ve just created.
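
This is typically done with the aws configure command, which prompts for the values interactively; the region and output format below are just examples.

aws configure
# AWS Access Key ID [None]: <paste the access key ID>
# AWS Secret Access Key [None]: <paste the secret access key>
# Default region name [None]: us-east-1
# Default output format [None]: json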

Pushing the Docker image

We can now follow the push commands in our ECR repository. First, we need to authenticate.
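
The authentication command from the popup looks roughly like the one below; the region and the account ID (123456789012) are placeholders that need to match our ECR repository.

# fetches a temporary password for ECR and pipes it into docker login
aws ecr get-login-password --region us-east-1 \
  | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com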

We should receive a response saying “Login Succeeded”.

Now, we need to make sure our Docker image is built.
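
If it is not built yet, we can do that from the directory that contains our Dockerfile; the image name nestjs-api is an assumption and can be anything.

# builds the image and tags it locally as nestjs-api
docker build -t nestjs-api .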

If you want to know more about how we created our Docker image, check out the previous parts of this series.

Docker uses tags to label and categorize images. Amazon ECR suggests that we create a new tag for our image that matches the URI of our ECR repository.
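
The tagging command suggested in the push commands popup looks roughly like this; the local image name, the account ID, and the region are placeholders.

# tags the local image with the ECR repository URI
docker tag nestjs-api:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/nestjs-api:latest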

The last step is to push the Docker image to our ECR repository.
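
Again, the repository URI is a placeholder and has to match the repository we created earlier.

docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/nestjs-api:latest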

As soon as we do the above, our image is visible in ECR.

We will soon need the Image URI, which we can copy from the above interface.

Creating a PostgreSQL database

Our NestJS application uses a PostgreSQL database. One way to create it when working with the AWS infrastructure is to use the Relational Database Service (RDS). Let’s open it in the AWS user interface.

The process of creating a database is relatively straightforward. First, we must provide the database type, name, size, and credentials.
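
For reference, a similar database can be sketched from the CLI as below; the identifier, instance class, storage size, and credentials are assumptions, not required values.

# creates a small PostgreSQL instance named nestjs-db
aws rds create-db-instance \
  --db-instance-identifier nestjs-db \
  --engine postgres \
  --db-instance-class db.t3.micro \
  --allocated-storage 20 \
  --master-username postgres \
  --master-user-password <the password we want to keep>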

Make sure to keep the username and password we type here; we will need them later.

As soon as the database is created, we can open it in the user interface and look at the “Endpoint & port” section. There is the URL of the database that we will need soon.
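
We can also read the endpoint from the CLI; nestjs-db is the identifier assumed above.

# prints the Address and Port of the database
aws rds describe-db-instances \
  --db-instance-identifier nestjs-db \
  --query "DBInstances[0].Endpoint"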

Using the ECS cluster

One of the most fundamental services provided by AWS is the Elastic Compute Cloud (EC2). With it, we can launch virtual servers that can run anything. However, to use Docker with plain EC2, we would have to go through the hassle of installing and managing Docker manually on our server.

Instead, we can use the Elastic Container Service (ECS). We can use it to run a cluster of one or more EC2 instances that run Docker. We don’t need to worry about installing Docker manually when we use ECS.

It’s important to understand that ECS is not simply an alternative to EC2. They are two services that work with each other.

First, we need to open the ECS interface.

Now, we have to create a cluster and give it a name.

We can leave most of the configuration with the default values. However, we need to configure our cluster to use EC2 instances.

The EC2 instance type we choose ships with 2 GiB of RAM. If you want to see the whole list of possible EC2 instances, check out the official documentation.
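
For reference, the cluster itself can also be created from the CLI as sketched below; the name nestjs-cluster is an assumption, and note that the console wizard additionally provisions the Auto Scaling group of EC2 instances for us.

aws ecs create-cluster --cluster-name nestjs-cluster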

Creating a task definition

The Amazon ECS service can run tasks in the cluster. A good example of a task is running a Docker image. To do that, we need to create a task definition.

When configuring the task definition, we use the results of the previous steps in this article. First, we need to put in the URI of the Docker image we’ve pushed to our ECR repository.

Our NestJS application listens on port 3000. We need to expose this port to be able to interact with it.

We also need to provide all of the environment variables our Docker image needs.

You can find the Postgres host value in the “Endpoint & port” section of our database in the RDS service.

The last step we must go through when configuring our task definition is specifying the infrastructure requirements.
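
Put together, the task definition could also be registered from the CLI with a JSON file along the lines of the sketch below. The family name, the memory and CPU values, and the environment variable names are assumptions that have to match how our application reads its configuration; the image URI and the Postgres host come from the previous steps.

# a hedged sketch of a task definition; names and values are assumptions
cat > task-definition.json <<'EOF'
{
  "family": "nestjs-api",
  "requiresCompatibilities": ["EC2"],
  "containerDefinitions": [
    {
      "name": "nestjs-api",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/nestjs-api:latest",
      "essential": true,
      "memory": 512,
      "cpu": 256,
      "portMappings": [
        { "containerPort": 3000, "protocol": "tcp" }
      ],
      "environment": [
        { "name": "POSTGRES_HOST", "value": "<the RDS endpoint>" },
        { "name": "POSTGRES_PORT", "value": "5432" },
        { "name": "POSTGRES_USER", "value": "postgres" },
        { "name": "POSTGRES_PASSWORD", "value": "<the RDS password>" },
        { "name": "POSTGRES_DB", "value": "nestjs" }
      ]
    }
  ]
}
EOF

aws ecs register-task-definition --cli-input-json file://task-definition.json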

Configuring static host port mapping

By default, when we put 3000 in the port mappings in the above interface, AWS expects us to set up dynamic port mapping. However, let’s take another approach for the sake of simplicity.

We will discuss dynamic port mapping in a separate article.

To change the configuration, we need to modify the task definition we’ve created above by opening it and clicking the “Create new revision” button.

We then need to put 3000 as the host port in the port mappings section.
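
If we registered the task definition from the JSON sketched earlier, the static mapping corresponds to adding a hostPort next to the containerPort:

"portMappings": [
  { "containerPort": 3000, "hostPort": 3000, "protocol": "tcp" }
]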

Once we click the “Create” button, AWS creates a new revision of our task definition.

Running the task on the cluster

Once the task definition is ready, we can open our cluster and run a new task.

First, we need to set up the environment correctly.

We also need to choose the task definition we’ve created in the previous step.

Please notice that we are using the second revision of our task definition that uses the static host mapping.

Once we click on the “Create” button, AWS runs the task we’ve defined.
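
The equivalent call from the CLI could look roughly like the sketch below; nestjs-cluster and nestjs-api are the names assumed earlier, and the :2 suffix points at the second revision of the task definition.

aws ecs run-task \
  --cluster nestjs-cluster \
  --task-definition nestjs-api:2 \
  --launch-type EC2 \
  --count 1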

Accessing the API

We can now go to the EC2 service to see the details of the EC2 instance created by our ECS cluster.

Once we are in the EC2 user interface, we need to open the list of our instances.

Once we open the above instance, we can see the Public IPv4 DNS section. It contains the address of our API.
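
The same address can be fetched from the CLI, for example by listing the public DNS names of our running instances:

aws ec2 describe-instances \
  --filters "Name=instance-state-name,Values=running" \
  --query "Reservations[].Instances[].PublicDnsName"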

Unfortunately, we are not able to access it yet. To do that, we need to allow incoming traffic to our EC2 instance on port 3000.

Setting up the security group

A security group controls the traffic allowed to reach and leave the associated resource. Our EC2 instance already has a default security group assigned. To see it, we need to open the “Security” tab.

Clicking on the security group above opens its configuration page. We need to modify its inbound rules to open port 3000. To do that, we create two rules allowing TCP traffic on port 3000: in the first one, we set “Anywhere-IPv4” as the source, and in the second one, “Anywhere-IPv6”.
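
From the CLI, the two rules can be sketched as below; the security group ID is a placeholder, and opening the port to the whole internet mirrors the “Anywhere” sources chosen above.

# IPv4: allow TCP traffic on port 3000 from anywhere
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 3000 --cidr 0.0.0.0/0

# IPv6: the analogous rule, expressed with the --ip-permissions shorthand
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --ip-permissions 'IpProtocol=tcp,FromPort=3000,ToPort=3000,Ipv6Ranges=[{CidrIpv6=::/0}]'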

Once we do the above, we can start making HTTP requests to our API.
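
For example, assuming the API exposes a /posts endpoint as in the earlier parts of this series, a request could look like this; the hostname is the Public IPv4 DNS we found in the EC2 console.

curl http://ec2-12-34-56-78.compute-1.amazonaws.com:3000/posts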

Summary

In this article, we’ve learned the basics of deploying a NestJS application to AWS using the Elastic Container Service (ECS) and Elastic Compute Cloud (EC2). To do that, we had to learn many different concepts about the AWS ecosystem. It included setting up a new Identity and Access Management (IAM) user, pushing our Docker image to the Elastic Container Registry (ECR), and running a task in the Elastic Container Service (ECS) cluster.

There is still a lot more to learn when it comes to deploying NestJS on AWS, so stay tuned!
