Migrating from ECS to EKS
May 12, 2023
Bhavesh Pawar
Software Engineer

Was ECS the wrong choice?

Not at all! Given the ease of configuration and deployments, choosing ECS wasn’t the wrong choice at all. Additionally, from a cost perspective, since ECS doesn’t charge you for orchestration, it turns out to be much cheaper than EKS, which charges roughly $72 per cluster per month. Until you actually breach the VPC and infra limits that ECS imposes, there’s no pressing need to go the EKS route. Given that you’re reading this, I guess you have breached them, and it’s time for a makeover!

https://media.giphy.com/media/oqwsns7qHCEi4/giphy.gif

When would you consider migrating to EKS?

If you’re thinking about moving out of ECS and into EKS, think of these points like Pokemon - gotta catch 'em all. Ok, not all, but at least 3!

1. Breaking out of the AWS vendor lock-in
ECS comes with a tightly coupled vendor lock-in to AWS services, even for things like logging and observability. These prices increase significantly after you’ve hit some scale, and the costs quickly spiral out of control. Check out this article by Kshitij Gang that takes you through the ABCs of breaking out of the AWS vendor lock-in with ECS.

2. De-risking yourself by having multi-cloud support
Let’s face it, your application shouldn’t ever be down. But what can you do if AWS itself is down? Distributing workloads across multi-cloud infrastructure providers makes your product resilient to vendor downtimes and de-risks you from the perils of relying deeply on a single infrastructure vendor.

3. Configuring granular access control over resources
Kubernetes allows more granular control over resources within the cluster and the individual components that interact with it. It provides a more advanced security model, including role-based access control and encryption.

4. Leveraging state-of-the-art community plugins, extensions, and tools
The Kubernetes community is able to build creative solutions to complex problems, which we are able to leverage, as opposed to ECS, which is closed source and where we are dependent solely on the AWS ECS roadmap for changes.

5. Support for increased scale
ECS has hard limits in terms of hardware and networking components, and there's no way around them. Once you’re breaching these limits, there is no straightforward way to solve this. Additionally, if you’re using a Fargate cluster, the time to allocate compute also increases as you increase the size of the instance that you’re allocating. That’s another consideration you need to make. With EKS you can combine node groups with Fargate, so you have some provisioned hardware along with the ability to scale on demand.

https://media.giphy.com/media/l0ErASOvMVuMljb8I/giphy.gif

How to migrate from ECS to EKS

Current system architecture with ECS -

System-Architecture-ECS

System architecture once we migrate to EKS -

System-Architecture-EKS

For the migration, we will use the existing VPC, subnets, and Docker image from the ECS setup.

We will be using the ecs-eks-migration repo; go ahead and clone it to get started with the migration.

To start off, we have some prerequisites.

Apart from this, we also need to have the subnet IDs beforehand, the subnets tagged, a NAT gateway, and the image URI.

To get the image URI,

  1. Go to ECS and then click on your service
  2. Click on the “Configurations and task” tab and then click on the task that's running
  3. Click on the running task and then you will see containers in the task
  4. Click on the container and you will see the Image URI; copy it, we will need it.
Copy-Container-URI

Steps to get the image URI from ECS.
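
If you'd rather use the terminal, a rough sketch of the same lookup with the AWS CLI looks like this (the cluster and service names below are placeholders for your own):

# List the running tasks for your service (replace my-cluster / my-service)
aws ecs list-tasks --cluster my-cluster --service-name my-service
# Find the task definition used by that task
aws ecs describe-tasks --cluster my-cluster --tasks <task-arn> \
  --query 'tasks[0].taskDefinitionArn' --output text
# Read the container image URI from the task definition
aws ecs describe-task-definition --task-definition <task-definition-arn> \
  --query 'taskDefinition.containerDefinitions[0].image' --output text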

Now that you are ready, let's get started.

https://media.tenor.com/sUtWmJmEmxQAAAAd/captain-america-cap.gif
  1. Get the subnet IDs from ECS
     a. Go to your ECS
     b. Click on your service, then go to the Networking tab; there you will find the VPC and subnet IDs. If you prefer the terminal, there's a CLI sketch below.
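
As an alternative to clicking through the console, something like this AWS CLI call should print the same network configuration (cluster and service names are placeholders):

# Print the subnets attached to the ECS service's awsvpc network configuration
aws ecs describe-services --cluster my-cluster --services my-service \
  --query 'services[0].networkConfiguration.awsvpcConfiguration.subnets' --output text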

      2. Now the first thing we do is tag the public and private subnets. Why do we tag them? These tags are how EKS and the AWS load balancer controller discover which subnets they can place load balancers in.
           a. Add a kubernetes.io/cluster/<name> tag and set it to either shared or owned, where <name> is the EKS cluster name that we will be creating; in our case it will be applicationName-envName. Do this for all your private and public subnets.

Tag-Public-Private-Subnets

          b. Add kubernetes.io/role/internal-elb tag set to 1 for private subnets

Add-kubernetes.io/role/internal-elb-tag-set-1-private-subnets

      c. Add kubernetes.io/role/elb tag set to 1 for public subnets

kubernetes.io/role/elb-tag-set-1-public-subnets
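
If you want to script the tagging instead of doing it in the console, a rough AWS CLI equivalent looks like this (the subnet IDs and the applicationName-envName cluster name are placeholders):

# Tag every private and public subnet with the cluster name
aws ec2 create-tags --resources subnet-aaa111 subnet-bbb222 subnet-ccc333 subnet-ddd444 \
  --tags Key=kubernetes.io/cluster/applicationName-envName,Value=shared
# Private subnets: internal load balancers
aws ec2 create-tags --resources subnet-aaa111 subnet-bbb222 \
  --tags Key=kubernetes.io/role/internal-elb,Value=1
# Public subnets: internet-facing load balancers
aws ec2 create-tags --resources subnet-ccc333 subnet-ddd444 \
  --tags Key=kubernetes.io/role/elb,Value=1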

3. Once tagging is done, we will create a NAT gateway and associate it with the route table

    a.  Open NAT Gateway from the VPC service

    b.  Click on Create NAT gateway.

    c.   Enter the name for the NAT gateway, select a public subnet, and allocate an Elastic IP.

    d.  Create NAT gateway

Nat-Gateway

     e.  Now, we will add routing to our NAT gateway. Go to your private subnet, click on the Route table tab, and then open the route table.

routing-nat-gateway

    f.  Click on Edit routes and add a route for 0.0.0.0/0 to the NAT gateway

edit-routes

    g.  Click Save changes, then on the route table page click on the Subnet associations tab and edit the subnet associations

route-table-page

    h.  Now select your private subnets and click save

select-private-subnets-save
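
The same NAT gateway setup can be scripted; here's a sketch with the AWS CLI (all IDs below are placeholders you'd replace with your own):

# Allocate an Elastic IP for the NAT gateway
aws ec2 allocate-address --domain vpc
# Create the NAT gateway in one of your PUBLIC subnets
aws ec2 create-nat-gateway --subnet-id subnet-ccc333 --allocation-id eipalloc-0abc123
# Send all outbound traffic from the private route table through the NAT gateway
aws ec2 create-route --route-table-id rtb-0abc123 \
  --destination-cidr-block 0.0.0.0/0 --nat-gateway-id nat-0abc123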

4. All the additional work is done; now we just need to run a few scripts and boom 💥 we will be on EKS.

5. We will be using ecs-eks-migration as the base, but don’t worry, the contents of this repo will work for your workload as well. You just need to make sure that the values you pass are correct.


git clone https://github.com/bhaveshpawar07/migrate-ecs-eks.git


6. Before starting, we will need the following parameters for the command to work.

   a.  region

   Your AWS account region where you want to deploy.

   b.  applicationName

   You can give any name here; this will be used as your Helm app and EKS cluster name.

   c.  env

   Again, this will be part of your cluster name, as said earlier, and will also be used for your Helm application.

   d.  awsAccountId

   Your AWS account ID.

   e.  pvtSubnets

   These will be the private subnets where your current ECS is deployed.

   Example

   pvtSubnets=privatesubnet1,privatesubnet2

   f.  pubSubnets

   These will be the public subnets where your current ECS is deployed.

   Example

   pubSubnets=publicsubnet1,publicsubnet2

   💡 PS - Using your private and public subnets, the EKS cluster will be deployed in your existing VPC.

   g.  port

   The port on which your application is running; it will be used in the k8s app as the target port.

   h.  imageLocation

   Finally, the image URI of your existing ECS image that you want to use for the deployment.

   💡 PS - You can find the image URI from your ECS → Task → click on your task, and inside Configuration look for Containers; there you will find the “Image URI”

7. That's it! Once you have all the parameters ready, replace them in the below command and execute it.


make create-cluster region={yourRegion}\
 applicationName={yourApplicationName}\
 env={yourEnv}\
 awsAccountId={yourAwsAccountId}\
 pvtSubnet={yourPrivateSubnets}\
 pubSubnet={yourPublicSubnets}\
 port={yourPort}\
 imageLocation={yourImageArn}
 
 
command-execute

Here starts the magic. Get your 🍿 and just watch things being set up; this command can take 15 to 30 minutes.

Let's understand the magic behind it

The make create-cluster script does a lot of things, from creating a cluster to setting up a load balancer. Let's have a detailed look at the script.

Create cluster - make create-cluster

First things first, the script will create an EKS cluster for you in the existing VPC using the private and public subnets given in the command.

create-cluster
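
To give you a feel for it, creating a cluster inside an existing VPC generally boils down to something like this eksctl call (a sketch with placeholder names and region; the actual script in the repo may use different tooling or flags):

eksctl create cluster \
  --name applicationName-envName \
  --region us-east-1 \
  --vpc-private-subnets privatesubnet1,privatesubnet2 \
  --vpc-public-subnets publicsubnet1,publicsubnet2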

Once the cluster is created, it will then update the local kubectl context, so that you can use the kubectl command locally and access the cluster.

update-context
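
Updating the context is typically a single AWS CLI command (region and cluster name are placeholders), followed by a quick sanity check:

aws eks update-kubeconfig --region us-east-1 --name applicationName-envName
kubectl get nodes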

Create helm application - make helm-app

Once the cluster is ready, we will be using Helm to create and install applications on the cluster.

The make helm-app command does a lot of things. First, it will create a chart directory for the application.

create-helm-application
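
If you're curious, scaffolding a chart directory is usually a single helm command along these lines (the path is a placeholder based on how the repo lays things out):

helm create ./helm/applicationName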

Then the script will take the default values file and change it using yq to match our application's needs. It will set the correct port for the service to listen on, set the image path to the existing ECS image, and set the host and other ingress details.

You can go through the file here and read about each command being executed.
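
For a feel of what those yq edits look like, here's a hedged sketch (the exact keys and values the script sets may differ; the port and image URI below are placeholders):

yq -i '.service.port = 8080' ./helm/applicationName/values.yaml
yq -i '.image.repository = "<your-ecs-image-uri>"' ./helm/applicationName/values.yaml
yq -i '.ingress.enabled = true' ./helm/applicationName/values.yaml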

Install helm application - make helm-upgrade

Once we have our helm application ready with all the correct values, it's time to install the application.

install-helm-application

The upgrade command with the install flag will install the application.

Interestingly, if you want multiple envs for your application, you can simply duplicate the existing values.$env.yaml file, change the env name in the new file, and then pass the file to the upgrade command with one more --values flag, and it will work.

And of course you will have to change the name of the duplicated file to something like values.{newenv}.yaml

Let's say you need to create a qa env from dev -

  1. Copy the file and change its name
    cp values.dev.yaml values.qa.yaml

      2. Change the relevant values, for example


env:
  configmap:
    data:
      ENVIRONMENT_NAME: qa

     3. Now run this command to create an application for your qa env. Note that this will use the same cluster as your existing application. In order to have separation of concerns you can either use a virtual cluster or namespaces.


helm upgrade --install\
 $applicationName-$newenv\
  ./helm/$applicationName\
  --values=./helm/$applicationName/values.$env.yaml\
  --values=./helm/$applicationName/values.$newenv.yaml\
  --debug --wait --force

Once you run this, Helm will consider values.$env.yaml as the base file, values.$newenv.yaml will be used as an overriding file, and the changes will be applied on top. Also, as shown in the command, you will have to change your application (release) name in order to install it as a separate release.

Creating an IAM account - make create-iamaccount

Now, before we create the load balancer and get the DNS, which is the most exciting part (only if it succeeds 😅), we will first create an IAM service account that will allow our EKS cluster to securely access AWS resources.

To start with, we will first associate an OIDC provider with our cluster

create-iamaccount
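
With eksctl this association is a one-liner (the cluster name is a placeholder):

eksctl utils associate-iam-oidc-provider --cluster applicationName-envName --approve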

Then we will create an IAM policy that will be attached to our service account, which will allow a lot of things, including the creation of the load balancer and target group.

We will be using this policy; you can see what it allows your service to do.

create-IAM-policy
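
For reference, creating such a policy by hand usually looks like this (the policy document below is the reference one published for the AWS load balancer controller; the policy the repo ships may differ slightly):

# Download the reference IAM policy and create it in your account
curl -o iam_policy.json https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/main/docs/install/iam_policy.json
aws iam create-policy \
  --policy-name AWSLoadBalancerControllerIAMPolicy \
  --policy-document file://iam_policy.json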

Once the policy is created, finally we will create the service account and attach the policy to it.

create-service-account
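
Again, with eksctl this is roughly the following (account ID, policy name, and cluster name are placeholders; the namespace and service account name assume the usual load balancer controller setup):

eksctl create iamserviceaccount \
  --cluster applicationName-envName \
  --namespace kube-system \
  --name aws-load-balancer-controller \
  --attach-policy-arn arn:aws:iam::{yourAwsAccountId}:policy/AWSLoadBalancerControllerIAMPolicy \
  --approve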

Creating an application load balancer - make create-alb

Now the final part: we will update our Helm repos and add https://aws.github.io/eks-charts, from which we will install the AWS load balancer controller.

make-create-alb
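
Adding the repo is the standard Helm incantation:

helm repo add eks https://aws.github.io/eks-charts
helm repo update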

Then we get our VPC ID, to create the load balancer in the same VPC.

create-load-balancer
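
One way to grab the VPC ID (a sketch; the script may fetch it differently):

aws eks describe-cluster --name applicationName-envName \
  --query 'cluster.resourcesVpcConfig.vpcId' --output text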

Now we will install the load balancer controller, which will create a load balancer, and we will also attach the service account that we created earlier so that it has access to AWS services.

install-load-balancer
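
Installing the controller from the eks-charts repo typically looks like this (cluster name, region, and VPC ID are placeholders; the chart values the script passes may differ):

helm upgrade --install aws-load-balancer-controller eks/aws-load-balancer-controller \
  --namespace kube-system \
  --set clusterName=applicationName-envName \
  --set serviceAccount.create=false \
  --set serviceAccount.name=aws-load-balancer-controller \
  --set region=us-east-1 \
  --set vpcId=vpc-0abc123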

Ok ok, I know even after doing so much we haven't reached the end, but here is the last command, which will create the CRDs for the AWS load balancer controller.
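
For the curious, the controller's CRDs are usually applied straight from the eks-charts repository, something like:

kubectl apply -k "github.com/aws/eks-charts/stable/aws-load-balancer-controller//crds?ref=master"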

🥱 Phew, that was quite a lot to digest! If you’ve followed along, pat yourself on the back: first for following along, and second because, while you were reading this, your EKS cluster was created and is already up (hopefully 🤞🏻), and in just a few minutes you will have your load balancer up and running 🕺🏻

Once this command completes all you have to do is check for the newly created load balancer, grab the DNS name and you are done.

load-balancer

Alternatively, you can run kubectl get ingress in your terminal to get the load balancer address.

kubectl-get-ingress

Before you access the load balancer, make sure to check in the AWS console whether the load balancer is Active or still in a Provisioning state.

Once it is in an Active state, you can access it.

In my case, accessing the URL hits my server and logs a message saying “Api is serving”.

api-serving

Now, once your EKS cluster is up and running, you can do whatever you want with your ECS, even this

https://media.tenor.com/8ZAIg2zOtzQAAAAd/pirómano-meme.gif

💀❗Just make sure you do not delete the image, the ECR repository, or the VPC, as we are still using them.

Where to go from here?

Ok, it’s kinda crazy but we actually figured out when to migrate our workloads from ECS to EKS, and exactly how it should be done. That’s insane homie. Go now, brag to your friends, colleagues, and your boss. Ah, before you go, take this CD pipeline to automate your set-up, won’t you?

⚠️ Before you add the below jobs to your existing CD pipeline, make sure to copy the helm folder from the project/location where you ran the migration commands (i.e., the ecs-eks-migration clone location) to your actual application folder.

Also, make sure to have these secrets in your GitHub repo:

AWS_ACCESS_KEY_ID

      Your AWS access key

AWS_SECRET_ACCESS_KEY

      Your AWS secret access key

AWS_REGION

      Your AWS region

PROJECT_NAME

    Your application name; it has to be the same as the one you gave while creating the helm app. If it's not, you need to make some changes to the CD file accordingly.

    Don't add the below as a secret -

env.ENVIRONMENT_NAME

    We will set this variable based on the branch we are deploying to.


- name: Set branch name
  id: vars
  run: echo "stage=${GITHUB_REF#refs/*/}" >> "$GITHUB_OUTPUT"
- name: Set env.ENVIRONMENT_NAME #1 
  run: |
    if [[ ${{ steps.vars.outputs.stage }} == 'main' ]]; then
        echo "ENVIRONMENT_NAME=prod" >> "$GITHUB_ENV"
    else
        echo "ENVIRONMENT_NAME=${{ steps.vars.outputs.stage }}" >> "$GITHUB_ENV"
    fi
- name: Configure AWS credentials
  uses: aws-actions/configure-aws-credentials@v1
  with:
    aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
    aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
    aws-region: ${{ secrets.AWS_REGION }}
- name: Login to Amazon ECR
  id: login-ecr
  uses: aws-actions/amazon-ecr-login@v1
- name: Build, tag, and push docker image to Amazon ECR #2
  env:
    REGISTRY: ${{ steps.login-ecr.outputs.registry }}
    REPOSITORY: ${{ secrets.PROJECT_NAME }}-${{ env.ENVIRONMENT_NAME }}
    IMAGE_TAG: ${{ github.sha }}
  run: |
    docker build -t $REGISTRY/$REPOSITORY:$IMAGE_TAG .
    docker push $REGISTRY/$REPOSITORY:$IMAGE_TAG
- name: Deploy to EKS #3
  uses: peymanmortazavi/eks-helm-deploy@v2.2
  with:
    aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
    aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
    aws-region: ${{ secrets.AWS_REGION }}
    cluster-name: ${{ secrets.PROJECT_NAME }}-${{ env.ENVIRONMENT_NAME }}
    config-files: ./helm/${{ secrets.PROJECT_NAME }}/values.${{ env.ENVIRONMENT_NAME }}.yaml
    values: image.tag=${{ github.sha }},image.repository=${{ steps.login-ecr.outputs.registry }}/${{ secrets.PROJECT_NAME }}-${{ env.ENVIRONMENT_NAME }}
    name: ${{ secrets.PROJECT_NAME }}-${{ env.ENVIRONMENT_NAME }}
    chart-path: ./helm/${{ secrets.PROJECT_NAME }}

Let's understand what the jobs are -

  1. Set env.ENVIRONMENT_NAME
     In this job we set the ENVIRONMENT_NAME variable based on the current branch name; you can change the env name or branch name according to your requirements.
  2. Build, tag, and push the docker image to Amazon ECR
     We build our image and push it to ECR with the proper tags. Before pushing, we log in to Amazon ECR in the previous step (Login to Amazon ECR).
  3. Deploy to EKS
     In this final step, we use the eks-helm-deploy action to log in to EKS and deploy our chart; read more about the action here.

      And if you’re looking for help with moving your workloads from ECS → EKS or you are stuck with other ECS issues just hit us up at work@wednesday.is and sit tight while we wave our magic wand! Ok, I’ll let you in on a little secret, Aseer KT is working on an amazing terraform provider that will allow you to deploy your workloads across multiple cloud providers. Keep an eye out on our Articles Space and we’ll holler once it’s out.

Enjoyed this article? Don't miss out on more exclusive insights and real-life digital product stories at LeadReads. Read by Top C Execs.
Join here.