Deploying DokuWiki on Amazon Elastic Container Service (ECS) – Part 2 of 2

By Rob Stewart, Cloud Architect, SMS

Improving the DokuWiki Deployment

In Part 1 of this series, we documented a very basic “click-ops” deployment of an ECS Task running the Bitnami DokuWiki container. If you haven’t read that post yet, please go back and read it before continuing with this one.

In this post, we are going to address some of the deficiencies in the original deployment:

  • Improving the fault tolerance of the DokuWiki deployment
  • Moving on from a “Click-Ops” deployment via the AWS console to an Infrastructure as Code (IaC) deployment using Terraform

Improving the Fault Tolerance of our Elastic Container Service (ECS) DokuWiki Deployment

In Part 1 of this series, we performed a manual deployment of the following resources using the AWS Console:

  • An ECS Task Definition which referenced the Bitnami DokuWiki container from Docker Hub that we wanted to run on AWS.
  • An ECS Cluster which is used by AWS to logically separate sets of ECS Tasks and ECS Services.
  • An ECS Task which is a running instance of the ECS Task Definition we created.
  • A Security Group which controlled the network traffic going to the ECS Task running the DokuWiki container.

Exhibit 1: The Original Deployment of DokuWiki


After we finished deploying all these resources, we found that the deployment was not very robust. If our ECS task crashed then our application would stop working and any data we added to DokuWiki would be lost. We also noted that we had to connect to our application using a nonstandard TCP port.

This time around, we are going to enhance the deployment by introducing the following changes:

  • An ECS Service which will restart the ECS Task if it fails
  • An Elastic Filesystem (EFS) to store DokuWiki data so that it no longer resides on the running ECS Task and is thus preserved if the task should fail
  • An Application Load Balancer (ALB) to give us a consistent URL for our application and route traffic dynamically to the ECS Tasks created by the ECS Service
  • Security Groups for our ALB and EFS to control network traffic

Exhibit 2: An Enhanced Deployment of DokuWiki


It would take a long time to run through all the configuration required to create all of these services and the connections between them using the AWS console, and there is a good chance that we might miss a step along the way. There is a better way to complete this deployment.

From “Click-Ops” to Infrastructure as Code Using HashiCorp Terraform

One of the major shortcomings of the original deployment was that all the resources were created via the AWS console. When you are first learning how to use AWS, creating resources via the console can be helpful as it will enable you to gain an understanding of how AWS services work together. However, there are several shortcomings of this approach when it is time to make the transition to deploying production workloads on AWS.

  1. Unless somebody is watching you click through the console, or is very good at picking through AWS CloudTrail logs after the fact, they will not get a full understanding of the steps you followed to complete a deployment.
  2. If you want a shot at a repeatable deployment then you have to write up a long run book detailing each step of the process. Even if you include every step, and the person following your directions is very conscientious, there is a very good chance that they will miss a step during the deployment, resulting in small inconsistencies cropping up over time. Further, the document will eventually fall out of date as the AWS console changes and evolves.
  3. In most cases, you will only discover issues or security vulnerabilities introduced by a manual deployment after the deployment is already done. After a security vulnerability is discovered, you will have to revise the run book and then go through each manual deployment and attempt to make corrections, which can lead to other complications.

In sum, manual deployments are not scalable. Fortunately, we have tools like Terraform and AWS CloudFormation which enable us to define our infrastructure, the resources we will deploy in the cloud, as code. There are several benefits to defining our infrastructure as code.

  1. We can be precise in defining exactly what resources we need and how each resource will be configured.
  2. We can store the code in a Version Control System (VCS) like GitLab or GitHub.
  3. We can introduce code reviews into the process where other engineers can review our code and provide input prior to deployment.
  4. We can also employ tools to scan our code and identify any deviations from best practices and established standards and detect potential security vulnerabilities so that these issues can be addressed before we deploy infrastructure.
  5. We can repeat the deployment process exactly as the level of human involvement in each deployment is dramatically reduced.
  6. We reduce or eliminate the need to write lengthy run books describing deployments as the code is self-documenting. If you want to know what is deployed then all you need to do is review the code.
  7. When we discover an issue with a deployment, all we need to do is update the code and deploy it again. If we deploy our updates via the code, then we can be much more confident that the changes will be applied consistently.

Taken together, these benefits combine to make a compelling case for leveraging IaC tools to express our infrastructure as code.

As one of the best tools for Infrastructure as Code, HashiCorp Terraform has several unique benefits:

  1. HashiCorp Configuration Language (HCL), the language used to write Terraform code, is easy to write and easy to read thanks to its declarative nature, and broad industry adoption means there is a wealth of helpful examples available online. While this simplicity results in a gentle learning curve when first learning Terraform, the language has evolved to handle increasingly complex scenarios without forsaking clarity.
  2. Terraform’s core workflow loop of generating a plan describing what changes will be made, applying the changes in accordance with that plan after review, and then optionally rolling back all changes via a destroy operation if they are no longer needed is easy for engineers to understand and use. This simplicity enables rapid iteration when writing code to deploy infrastructure.

Exhibit 3: The Terraform Workflow

  3. Terraform tracks the infrastructure that it deploys in a state file which enables tracking of what has been deployed and makes it simple to completely remove that infrastructure when it is no longer needed.
  4. Terraform supports packaging code into reusable modules. In the HashiCorp Terraform documentation, modules are described as follows:
    • A module is a container for multiple resources that are used together. Modules can be used to create lightweight abstractions, so that you can describe your infrastructure in terms of its architecture, rather than directly in terms of physical objects.
    • The files in the root of a Terraform project are really just a module as far as Terraform is concerned. The root module can reference other modules using module blocks. You will see this in action below.
    • HashiCorp encourages developers to create and use modules by providing a searchable module registry which now contains hundreds of robust modules contributed by the community.
  5. Terraform is designed to support a diverse ecosystem of platforms and technologies via plug-ins called providers. Providers are responsible for managing the communication between Terraform and other cloud services and technologies. One benefit of this approach is that the core Terraform functionality and the functionality made available via a given provider can evolve independently. For example, when a cloud provider makes a new service available then that service can be added to an updated version of the Terraform provider, and Terraform will automatically support it.

Exhibit 4: Terraform Providers

  6. Most importantly, Terraform is fast. You can deploy and then destroy a few resources or a complex environment made up of hundreds of resources using Terraform in a matter of minutes.

Deploying DokuWiki on ECS Using Terraform

If you have never used Terraform before you will need to get your computer set up first. After you finish getting your computer set up, you will need to download the Terraform code using Git, review the code, deploy the DokuWiki resources in AWS by running Terraform commands, validate that it worked by logging into the AWS Console, and then destroy the infrastructure created by the code.

Getting Set Up

In order to follow along with the steps in this post you will first need to install Git, the Terraform command line interface (CLI), and the AWS CLI. You will need access to an AWS account with IAM Administrative permissions, and you will need to set up programmatic access to AWS with the AWS CLI.

Install Git

The Terraform code associated with this post has been uploaded to a GitLab repository. In order to download this code and follow along, you will need to install Git. Follow the installation directions on the Git website to get started. If you haven’t ever used Git before, the git-scm.com site has a lot of great documentation to get you going in the right direction, including the Pro Git book.

Install the Terraform Command Line Interface (CLI)

In order to follow along with the steps in the post, you will need to install Terraform. The following tutorial on the HashiCorp Learn site takes you step by step through the installation process.

Install the AWS CLI

In order to create infrastructure on AWS using Terraform, you will also need to install the AWS CLI. This page in the AWS documentation takes you step by step through the installation process.

Setup An AWS Account

You will also need access to an AWS account with IAM Administrative permissions. If you were following along with the first post in this series then you already created an AWS account.

NOTE: If you follow along with the steps in this post, there is some chance you may incur minimal charges. However, if you create a new account you will be under the AWS Free Tier. That said, it is always prudent to remove any resources you create in AWS right after you finish using them so that you limit the risk of unexpected charges. Guidance on how to remove the resources created by the Terraform code after we finish with them is provided below.

Set Up Programmatic Access to AWS for Terraform Using the AWS CLI

Before you can deploy infrastructure to your AWS account using Terraform you will need to generate an AWS IAM Access Key and Secret Key (a key pair) using the AWS Console. After you generate a key pair, you will need to configure the AWS CLI to use it using the aws configure command. The following page in the AWS documentation provides step by step directions on how to generate an Access Key and Secret Key for your AWS IAM user in the AWS Console and then configure the AWS CLI to use those credentials.
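
For reference, a typical aws configure session looks something like the following sketch; the key values shown here are placeholders, not real credentials:

$ aws configure
AWS Access Key ID [None]: AKIAXXXXXXXXXXXXXXXX
AWS Secret Access Key [None]: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Default region name [None]: us-east-1
Default output format [None]: json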

NOTE: When you run the aws configure command, you will be prompted to select a region. Make sure you specify one.

NOTE: When you generate an Access Key and Secret Key for an IAM user, that key pair grants the same access to your AWS account that your AWS login and password has. If the IAM account has permissions to create resources then anybody who possesses the Access Key and Secret Key can create resources. Therefore, you should never share these keys and should treat them with the same care as your login and password.

Download the ECS DokuWiki Terraform Code from GitLab

Before we can start deploying infrastructure using the Terraform CLI we need to download the code from GitLab. Enter the following command to download the project code from GitLab:

git clone https://gitlab.com/sms-pub/terraform-aws-ecs-dokuwiki-demo.git

Next change into the directory containing the code you just downloaded.

cd terraform-aws-ecs-dokuwiki-demo

Review the Terraform Code

If you take a look at the Terraform code you just downloaded you will see several files and folders.

Exhibit 5: The Terraform ECS Code.
Files in sub-directories are not represented for the sake of brevity.

AWSExhibit5 1


The following table summarizes the files and folders in the repository. The Terraform files are described in further detail below.

Name | Object Type | Contents
/terraform.tf | Terraform file | terraform block
/provider.tf | Terraform file | Provider block
/variables.tf | Terraform file | Variable blocks
/main.tf | Terraform file | Module block for the dokuwiki module
/outputs.tf | Terraform file | Output blocks for output values
/README.md | Markdown document | Overview documentation for the GitLab project. This file is displayed when you view the project on GitLab.com.
/modules | Directory | Contains the dokuwiki module that is referenced by the module block in main.tf
/modules/dokuwiki | Directory | Contains all the files that make up the dokuwiki Terraform module. Most of the Terraform files in this folder contain module blocks referencing the modules in its modules subfolder.
/modules/dokuwiki/modules | Directory | Contains all the modules used by the dokuwiki module
/modules/dokuwiki/modules/application-load-balancer | Directory | Terraform module which creates Application Load Balancers (ALBs)
/modules/dokuwiki/modules/ecs-cluster | Directory | Terraform module which creates ECS Clusters
/modules/dokuwiki/modules/ecs-service | Directory | Terraform module which creates ECS Services
/modules/dokuwiki/modules/ecs-task-definition | Directory | Terraform module which creates ECS Task Definitions
/modules/dokuwiki/modules/efs | Directory | Terraform module which creates the EFS storage
/modules/dokuwiki/modules/security-group | Directory | Terraform module which creates Security Groups

NOTE: The intent behind modules is to create code that can be used in multiple projects. Therefore, it is not generally considered a best practice to put terraform modules in subfolders within the project that is referencing those modules. Instead, it is more common to reference the project repository and tag for the module you are using in your module block. The modules tutorial on the HashiCorp Learn site goes into more detail on this.

Terraform.tf

The terraform.tf file contains a terraform block which defines settings for the project including required terraform CLI and AWS provider versions:

# Terraform block which specifies version requirements

terraform {
  # Specify required providers and versions
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 4.21.0"
    }
  }

  # Specify required version for Terraform itself
  required_version = ">= 1.2.4"
}

Provider.tf

The provider.tf file contains a provider block which defines settings for the AWS Terraform provider including the AWS region where the resources will be created and default tags that will be applied to all of the resources created using this provider:

# Terraform AWS Provider block
# Variables come from the variables.tf file

provider "aws" {
  # Set default AWS Region
  region = var.region

  # Define default tags
  default_tags {
    tags = merge(var.default_tags)
  }
}

Variables.tf

The variables.tf file contains variable blocks which set the AWS region, the name prefix that is prepended to the names of all the resources created by Terraform, and the default tags which are applied to all resources created by this project:

# AWS Region where resources will be deployed
variable "region" {
  type        = string
  description = "AWS Region where resources will be deployed"
  default     = "us-east-1"
}

# Names for all resources created by this project will have this prefix applied
variable "name_prefix" {
  type        = string
  description = "Prefix all resource names"
  default     = "dokuwiki"
}

# All resources will have these tags applied
variable "default_tags" {
  description = "A map of tags to add to all resources"
  type        = map(string)
  default = {
    tf-owned = "true"
    repo     = "https://TODOUPDATEURL"
    branch   = "main"
  }
}

Main.tf

The main.tf file contains the module block for the dokuwiki module:

# This module block creates all of the AWS resources for Dokuwiki
module "dokuwiki" {

  # Specify the path to the dokuwiki module in this project 
  source = "./modules/dokuwiki"

  # these variables come from variables.tf
  region       = var.region
  default_tags = var.default_tags
  name_prefix  = var.name_prefix
}

If you are only looking at files in the main root directory of the project, it might seem as though a lot of detail is missing, and it is. In order to simplify the code, we have intentionally placed all of the resources that are created by the deployment in the dokuwiki module which references other modules to actually create resources in AWS.

Exhibit 6: The Relationships Between the Different Modules in the DokuWiki Project


Outputs.tf

The outputs.tf file contains output blocks which surface values from modules and resources in the output of Terraform commands.

# Name of the ECS Cluster
output "ecs_cluster_name" {
  description = "The name of the ECS cluster"
  value       = module.dokuwiki.ecs_cluster_name
}

# DNS Name for the Application Load Balancer 

output "alb_dns_name" {
  description = "The DNS name of the load balancer."
  value       = module.dokuwiki.alb_dns_name
}

Both of these output blocks are pulling data from the Dokuwiki module.
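
For those values to surface at the root level, the dokuwiki module itself has to declare matching output blocks. As a rough sketch of what modules/dokuwiki/outputs.tf might contain (the inner output names here are assumptions, not the actual module code):

# Hypothetical sketch of modules/dokuwiki/outputs.tf
output "ecs_cluster_name" {
  description = "The name of the ECS cluster"
  value       = module.ecs-cluster.cluster_name # assumed output name
}

output "alb_dns_name" {
  description = "The DNS name of the load balancer."
  value       = module.alb.lb_dns_name # assumed output name
}

This chaining is how values travel from the leaf modules up to the root module, one output block per hop.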

Deploy DokuWiki Resources Using Terraform

Now that we have taken an initial look at the code it is time to start running terraform commands to turn this code into resources in AWS. We’ll go through the following steps to execute this deployment using terraform:

  1. Initialize the project and download required providers
  2. Generate and review the list of changes that terraform will perform
  3. Deploy changes to our infrastructure
  4. Test the deployment
  5. Roll back the changes made by Terraform

These steps are described in more detail in the following sections.

Step 1: Initialize the Project – terraform init

Before we can start deploying resources with Terraform, we need to instruct it to download any modules and provider plug-ins that we are using in our code.

Make sure you change into the project folder and then run the following command to initialize the project:

terraform init

When you run this command, terraform will do the following:

  1. Evaluate the code in the current folder (the root module).
  2. Download any modules referenced in the current folder that are not available locally and put them into a hidden subfolder in our project folder called .terraform.
  3. Evaluate all the code blocks in the current folder and all module folders to determine which provider plug-ins are needed.
  4. Download the provider plug-ins and put them in the .terraform folder.

Essentially, it does all the preparation work required to enable us to proceed to the next step.

Step 2: Generate and Review a List of Changes that Terraform will Perform – terraform plan

After we initialize our project using the terraform init command, the next step is to instruct Terraform to generate a list of the changes it will make to our infrastructure; this list of changes is called a plan.

When Terraform generates a plan, it will do the following:

  1. Evaluate all of the code blocks in the current folder and the code blocks in all of the modules that are referenced in the current folder
  2. Determine which resources will be created
  3. Generate a dependency map which determines the order in which those resources will be created
  4. Print out a detailed output listing exactly what terraform will do if we choose to make changes to our infrastructure

NOTE: Running the command terraform plan is a safe operation. Terraform will not make any changes to your infrastructure when you run a plan. It will only tell you what changes will be made.

Run the following command to instruct terraform to generate a plan:

terraform plan

Let’s go through the output on the plan to see what resources terraform will create.

Note: The attributes of each resource were removed from the plan for the sake of brevity.

module.dokuwiki.data.aws_vpc.vpc: Reading...
module.dokuwiki.data.aws_vpc.vpc: Read complete after 1s [id=vpc-02bc8afe3a47e8497]
module.dokuwiki.data.aws_subnets.subnets: Reading...
module.dokuwiki.data.aws_subnets.subnets: Read complete after 0s [id=us-east-1]

The first statements we find in the plan output are a collection of data outputs from data blocks in the DokuWiki module. In Terraform, data blocks represent queries sent by the Terraform provider to fetch information. If we want to learn more about these then we need to check the documentation for the HashiCorp Terraform AWS Provider in the Terraform Registry.

  • The data.aws_vpc data block triggers a call to the AWS API to fetch the properties of a VPC. In this case, the intent of the data call is to fetch the ID for the default vpc in the AWS Region. When a new AWS account is created, AWS places a VPC in the account by default so workloads can be created without first having to create a VPC.
  • The data.aws_subnets data block triggers a call to the AWS API to fetch the subnets in a VPC. In this case, the intent of the data call is to get the attributes of all the subnets in the default VPC. It is necessary to specify which subnets will be used when creating resources like application load balancers and ECS services.
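
Putting those two together, the data blocks in the dokuwiki module presumably look something like this sketch (assuming the module targets the default VPC; the actual code may differ):

# Hypothetical sketch of the data blocks in the dokuwiki module

# Fetch the default VPC for the region
data "aws_vpc" "vpc" {
  default = true
}

# Fetch the IDs of all subnets in that VPC
data "aws_subnets" "subnets" {
  filter {
    name   = "vpc-id"
    values = [data.aws_vpc.vpc.id]
  }
}
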
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
+ create

Terraform marks resources in the plan with a + symbol to indicate that they will be created by Terraform when we eventually create the infrastructure. We’ll see other symbols when we roll back the changes we make.

Terraform will perform the following actions:
  # module.dokuwiki.aws_cloudwatch_log_group.ecs-log-group will be created
  + resource "aws_cloudwatch_log_group" "ecs-log-group" {
  . . .
}

The aws_cloudwatch_log_group resource block is the first resource in the plan. AWS CloudWatch log groups aggregate logging data from AWS resources. In this case, the log group will capture logging data from our ECS Tasks.

# module.dokuwiki.aws_iam_role.ecs_task_role will be created
+ resource "aws_iam_role" "ecs_task_role" {
. . .
}
# module.dokuwiki.aws_iam_role_policy_attachment.ecs-task-role-policy-attach will be created
+ resource "aws_iam_role_policy_attachment" "ecs-task-role-policy-attach" {
. . .
}
  1. The aws_iam_role resource block creates an IAM role for our ECS Task. Whenever you deploy a resource in AWS that needs to interact with the AWS API, you have to assign an IAM role with corresponding IAM permissions to that resource.
  2. The aws_iam_role_policy_attachment resource block attaches an AWS IAM policy to the ECS Task IAM role. In this case, we are attaching minimal permissions to permit the task to interact with other AWS services. You can read more about the AmazonECSTaskExecutionRolePolicy AWS policy in the AWS Documentation.
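
As a sketch, these two blocks might look something like the following; the role name and trust policy are assumptions based on standard ECS task execution setups, not the actual module code:

# Hypothetical sketch: IAM role that ECS Tasks can assume
resource "aws_iam_role" "ecs_task_role" {
  name = "${var.name_prefix}-ecstaskrole"

  # Trust policy allowing the ECS tasks service to assume this role
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Action    = "sts:AssumeRole"
      Effect    = "Allow"
      Principal = { Service = "ecs-tasks.amazonaws.com" }
    }]
  })
}

# Attach the AWS-managed task execution policy to the role
resource "aws_iam_role_policy_attachment" "ecs-task-role-policy-attach" {
  role       = aws_iam_role.ecs_task_role.name
  policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy"
}
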
# module.dokuwiki.random_id.index will be created
+ resource "random_id" "index" {
. . .
}

The random_id resource block generates a random number which is used to pick subnets for our resources: we don’t care which specific subnets our resources are placed in, but Terraform still needs a concrete mechanism for choosing them.
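
A sketch of how that selection might be wired up (the locals here are illustrative, not the actual module code):

# Hypothetical sketch: pick a subnet pseudo-randomly
resource "random_id" "index" {
  byte_length = 2
}

locals {
  subnet_ids = data.aws_subnets.subnets.ids
  # random_id.index.dec is a decimal string; Terraform converts it to a
  # number automatically for the modulo operation
  picked_subnet_id = local.subnet_ids[random_id.index.dec % length(local.subnet_ids)]
}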

# module.dokuwiki.module.alb.aws_lb.this[0] will be created
+ resource "aws_lb" "this" {
. . .
}

# module.dokuwiki.module.alb.aws_lb_listener.frontend_http_tcp[0] will be created
+ resource "aws_lb_listener" "frontend_http_tcp" {
. . .
}

# module.dokuwiki.module.alb.aws_lb_target_group.main[0] will be created
+ resource "aws_lb_target_group" "main" {
. . .
}
  1. The aws_lb resource block creates an Application Load Balancer which will send HTTP traffic from users of our application to the ECS Tasks running DokuWiki.
  2. The aws_lb_listener resource block creates an Application Load Balancer Listener which listens for traffic coming to the load balancer and sends it to Target Groups.
  3. The aws_lb_target_group resource block creates an Application Load Balancer Target Group. When the ECS Service creates a new ECS Task, it will register it to the Target Group so that HTTP traffic coming to the Application Load Balancer can be sent to the ECS Task.
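
The resource addresses in the plan (aws_lb.this, aws_lb_listener.frontend_http_tcp, aws_lb_target_group.main) suggest that the application-load-balancer module wraps the community terraform-aws-modules/alb module. A sketch of how such a module call might look, with illustrative parameter values:

# Hypothetical sketch of the ALB module call inside the dokuwiki module
module "alb" {
  source  = "terraform-aws-modules/alb/aws"
  version = "~> 6.0" # assumed version

  name    = "${var.name_prefix}-alb"
  vpc_id  = data.aws_vpc.vpc.id
  subnets = data.aws_subnets.subnets.ids

  # Listen for HTTP on port 80 and forward to the first target group
  http_tcp_listeners = [{
    port               = 80
    protocol           = "HTTP"
    target_group_index = 0
  }]

  # Target group for the DokuWiki tasks; the ECS Service registers task IPs here
  target_groups = [{
    name_prefix      = "doku-"
    backend_protocol = "HTTP"
    backend_port     = 8080
    target_type      = "ip"
  }]
}
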
# module.dokuwiki.module.ecs-cluster.aws_ecs_cluster.this[0] will be created
+ resource "aws_ecs_cluster" "this" {
. . .
}

# module.dokuwiki.module.ecs-service.aws_ecs_service.this will be created
+ resource "aws_ecs_service" "this" {
. . .
}

# module.dokuwiki.module.ecs-task-def-dokuwiki.aws_ecs_task_definition.ecs_task_definition[0] will be created
+ resource "aws_ecs_task_definition" "ecs_task_definition" {
. . .
}
  1. The ecs-cluster resource block creates an ECS Cluster.
  2. The ecs-service resource block creates an ECS Service.
  3. The aws_ecs_task_definition resource block creates an ECS Task Definition.
    Note: For more information on the relationships between the different ECS resources please refer back to Part 1 of this series.
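
To give a sense of how the container, the cluster, and the EFS storage fit together, here is a sketch of what a Fargate task definition mounting an EFS access point can look like; the names, mount path, and module output references are assumptions rather than the actual project code:

# Hypothetical sketch of a DokuWiki task definition with an EFS volume
resource "aws_ecs_task_definition" "ecs_task_definition" {
  family                   = "${var.name_prefix}-task"
  requires_compatibilities = ["FARGATE"]
  network_mode             = "awsvpc"
  cpu                      = 256
  memory                   = 512
  execution_role_arn       = aws_iam_role.ecs_task_role.arn

  container_definitions = jsonencode([{
    name         = "dokuwiki"
    image        = "bitnami/dokuwiki"
    essential    = true
    portMappings = [{ containerPort = 8080, protocol = "tcp" }]
    # Mount the EFS-backed volume at the path where the container keeps its data
    mountPoints = [{ sourceVolume = "dokuwiki-data", containerPath = "/bitnami/dokuwiki" }]
  }])

  volume {
    name = "dokuwiki-data"
    efs_volume_configuration {
      file_system_id     = module.efs.id # assumed output name
      transit_encryption = "ENABLED"
      authorization_config {
        access_point_id = module.efs.access_point_id # assumed output name
      }
    }
  }
}
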
# module.dokuwiki.module.efs.aws_efs_access_point.default["Doku"] will be created
+ resource "aws_efs_access_point" "default" {
. . .
}

# module.dokuwiki.module.efs.aws_efs_backup_policy.policy[0] will be created
+ resource "aws_efs_backup_policy" "policy" {
. . .
}

# module.dokuwiki.module.efs.aws_efs_file_system.default[0] will be created
+ resource "aws_efs_file_system" "default" {
. . .
}

# module.dokuwiki.module.efs.aws_efs_mount_target.default[0] will be created
+ resource "aws_efs_mount_target" "default" {
. . .
}
  1. The aws_efs_access_point resource block creates an EFS Access Point which exposes a path on the EFS storage volume as the root directory of the filesystem mapped to the ECS Task.
  2. The aws_efs_backup_policy resource block creates an EFS backup policy for an EFS storage volume.
  3. The aws_efs_file_system resource block creates an EFS storage volume which our ECS Tasks use to store Dokuwiki data.
  4. The aws_efs_mount_target resource block creates an EFS Mount Target. In order to access an EFS filesystem an EFS mount target must be created in the VPC.
#  module.dokuwiki.module.sg_ecs_task.aws_security_group.this_name_prefix[0] will be created
+ resource "aws_security_group" "this_name_prefix" {
. . .
}

# module.dokuwiki.module.sg_ecs_task.aws_security_group_rule.computed_egress_rules[0] will be created
+ resource "aws_security_group_rule" "computed_egress_rules" {
. . .
}

# module.dokuwiki.module.sg_ecs_task.aws_security_group_rule.computed_ingress_with_source_security_group_id[0] will be created
+ resource "aws_security_group_rule" "computed_ingress_with_source_security_group_id" {
. . .
}

# module.dokuwiki.module.sg_efs.aws_security_group.this_name_prefix[0] will be created
+ resource "aws_security_group" "this_name_prefix" {
. . .
}

# module.dokuwiki.module.sg_efs.aws_security_group_rule.computed_egress_rules[0] will be created
+ resource "aws_security_group_rule" "computed_egress_rules" {
. . .
}

# module.dokuwiki.module.sg_efs.aws_security_group_rule.computed_ingress_with_source_security_group_id[0] will be created
+ resource "aws_security_group_rule"  "computed_ingress_with_source_security_group_id" {
. . .
}

# module.dokuwiki.module.sg_alb.module.sg.aws_security_group.this_name_prefix[0] will be created
+ resource "aws_security_group" "this_name_prefix" {
. . .
}

# module.dokuwiki.module.sg_alb.module.sg.aws_security_group_rule.egress_rules[0] will be created
+ resource "aws_security_group_rule" "egress_rules" {
. . .
}

# module.dokuwiki.module.sg_alb.module.sg.aws_security_group_rule.ingress_rules[0] will be created
+ resource "aws_security_group_rule" "ingress_rules" {
. . .
}

# module.dokuwiki.module.sg_alb.module.sg.aws_security_group_rule.ingress_rules[1] will be created
+ resource "aws_security_group_rule" "ingress_rules" {
. . .
}

# module.dokuwiki.module.sg_alb.module.sg.aws_security_group_rule.ingress_with_self[0] will be created
+ resource "aws_security_group_rule" "ingress_with_self" {
. . .
}
  1. The aws_security_group resource blocks create Security Groups in a VPC, which are essentially firewalls for the resources that are associated with the Security Group.
  2. The aws_security_group_rule resource blocks create Security Group Rules for the security groups. Security Group Rules define what incoming and outgoing network traffic is permitted to and from the resources that the security groups are associated with.

For the ECS deployment we have 3 security groups.

  1. dokuwiki.module.sg_efs.aws_security_group – The Security Group assigned to the EFS storage volume which only permits traffic coming from resources that have been assigned to the ECS Tasks Security Group.
  2. dokuwiki.module.sg_ecs_task.aws_security_group – The Security Group assigned to the ECS Tasks which only permits traffic coming from the resources that have been assigned to the ALB Security Group.
  3. dokuwiki.module.sg_alb.module.sg.aws_security_group – The Security Group assigned to the ALB which allows incoming HTTP traffic from any address.
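
The chaining works because each rule names another security group as its traffic source instead of an IP range. A simplified sketch of the EFS ingress rule, written as a plain resource rather than through the security-group module the project actually uses:

# Hypothetical sketch: allow NFS traffic to EFS only from the ECS Task security group
resource "aws_security_group_rule" "efs_ingress_from_ecs" {
  type                     = "ingress"
  from_port                = 2049 # NFS
  to_port                  = 2049
  protocol                 = "tcp"
  security_group_id        = module.sg_efs.security_group_id # assumed output name
  source_security_group_id = module.sg_ecs_task.security_group_id # assumed output name
}
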
Plan: 25 to add, 0 to change, 0 to destroy.
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────
Changes to Outputs:
+ alb_dns_name = (known after apply)
+ ecs_cluster_name = "dokuwiki-cluster"

Note: You didn’t use the -out option to save this plan, so Terraform can’t guarantee to take exactly these actions if you run terraform apply now.

When we choose to move on to the next step we will be deploying 25 distinct resources to AWS! We also see that the plan shows two changes to Outputs.

  1. The ecs_cluster_name has a value because Terraform can already determine what the value should be.
  2. The alb_dns_name shows a value of (known after apply) because this value will only be known after the Application Load Balancer (ALB) is created.

NOTE: When you run a terraform plan command, it is very important to review it carefully and confirm the plan is doing what you expect it to do.
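
Relatedly, the note at the end of the plan output is worth acting on in automated pipelines: if you save the plan to a file and then apply that file, Terraform guarantees that it executes exactly the actions you reviewed.

terraform plan -out=tfplan
terraform apply tfplan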

Step 3: Make Changes to Infrastructure – terraform apply

After we run the terraform plan command to generate the planned list of changes that Terraform will make to our infrastructure, the next step is to instruct Terraform to carry out those changes.

Run the following command to instruct terraform to initiate the process of applying the changes that we saw in the plan:

terraform apply

When you run the terraform apply command, Terraform will generate a new plan for you. Let’s go through the output on the apply command.

Note: The list of resources was removed from the sample output for the sake of brevity.

module.dokuwiki.data.aws_vpc.vpc: Reading...
module.dokuwiki.data.aws_vpc.vpc: Read complete after 1s [id=vpc-02bc8afe3a47e8497]
module.dokuwiki.data.aws_subnets.subnets: Reading...
module.dokuwiki.data.aws_subnets.subnets: Read complete after 0s [id=us-east-1]
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
+ create

Terraform will perform the following actions:
. . .

The output from the terraform apply command starts out just like the output from the terraform plan command we just ran so we don’t need to go over it again. However, if you scroll to the end of the output you will see something new.

Plan: 25 to add, 0 to change, 0 to destroy.
Changes to Outputs:
+ alb_dns_name     = (known after apply)
+ ecs_cluster_name = "dokuwiki-cluster"

Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.

Enter a value:

Terraform is asking you to confirm that you want it to carry out all the actions that are listed in the plan. Go ahead and type yes at the prompt and hit enter. When you do this, Terraform will start creating all of the resources in AWS on your behalf.

As Terraform works through the process of carrying out the planned changes to our infrastructure, it lists out what it is doing.

Note: Some of the output was omitted for the sake of brevity.

module.dokuwiki.random_id.index: Creating...
module.dokuwiki.random_id.index: Creation complete after 0s [id=7lA]
. . .
module.dokuwiki.module.ecs-service.aws_ecs_service.this: Still creating... [2m20s elapsed]
module.dokuwiki.module.ecs-service.aws_ecs_service.this: Creation complete after 2m23s [id=arn:aws:ecs:us-east-1:816649246361:service/dokuwiki-cluster/dokuwiki-service]
Apply complete! Resources: 25 added, 0 changed, 0 destroyed.

Outputs:
alb_dns_name = "dokuwiki-alb-979498190.us-east-1.elb.amazonaws.com"
ecs_cluster_name = "dokuwiki-cluster"

It should take around 3 minutes for Terraform to create all 25 resources! This is a huge time savings if you consider how long it would take to create all of these resources by clicking around in the AWS Console.

NOTE: Terraform tracks all the changes it makes to infrastructure in a state file. This will come up later when we are finished and want Terraform to roll back all the changes it has made for us.
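
If you are curious about what ended up in the state file, you can ask Terraform to list every resource it is currently tracking:

terraform state list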

Notice that the output for the alb_dns_name now has a value. Terraform can tell us what the DNS name is for the Application Load Balancer (ALB) because it has now been created. Try copying the value for the alb_dns_name from your output (which will be different from mine) and then paste it into your browser to go to the DokuWiki site Terraform created.

Exhibit 7: Accessing Dokuwiki in the browser.


NOTE: The Dokuwiki application may not load the first time you try it. When the ECS Task is created, AWS needs to pull the Bitnami Dokuwiki container from Docker Hub and then start it which may take a few minutes. If you try to access the DNS name for the ALB in your browser, but it does not load for you or you see an error, just wait a few minutes and try it again.

Step 4: Test the Deployment

If we were able to launch the Dokuwiki site using the alb_dns_name in the last step then we have tested a lot of the infrastructure. At the very least the following is working:

  1. Our Application Load Balancer (ALB) is handling the incoming network traffic from our browser via the security group
  2. The ALB is routing network traffic to the ECS Task running Dokuwiki which means we have a running ECS Cluster with an ECS Task running Dokuwiki.
  3. Unlike the last deployment in Part 1 of this series, we didn’t have to specify a port number in the URL to access the running Dokuwiki container from the browser.

However, there are two enhancements to this deployment, compared with the one we did in Part 1 of this series, that we have not yet verified:

  1. Introducing an ECS Service to run our ECS Task so that if the ECS Task stops for some reason then the ECS Service will start a new one for us.
  2. Previously, our data was stored on the Dokuwiki ECS Task; therefore it would be lost if the Task was stopped or failed. However, now our ECS Task is using EFS storage for our content which continues to be available even if the Task is lost.

Now that we have deployed the infrastructure using Terraform we can test these aspects of the deployment by adding content to Dokuwiki via the browser, stopping the running ECS Task, and then verifying that the ECS Service starts a new ECS Task and that our content is still visible when we refresh the page via the browser.

  1. Add Content to Dokuwiki

From your browser, click the pencil on the right side of the page to engage the content editor for the current page.

Exhibit 8: Edit the current page in Dokuwiki.


Next, type some text into the text box for the page and then click the Save button. You should now see the text you changed appear on the page.

  2. Stop the ECS Task

Now that we have added the content to the page in DokuWiki, we should stop the running ECS Task and then wait to see if it starts again. We could log in to the AWS Console and stop the running task. However, it would be much quicker to use the AWS Command Line Interface (CLI) instead. We need to use two AWS CLI commands to do this.

  1. aws ecs list-tasks – lists the ECS Tasks running in an ECS Cluster (see the AWS CLI documentation for details).
  2. aws ecs stop-task – stops a running ECS Task (see the AWS CLI documentation for details).

First, run the following command to get a list of running ECS Tasks on our cluster.

aws ecs list-tasks --cluster dokuwiki-cluster

If we have set up the AWS CLI correctly with an IAM Access Key and Secret Key then we should get a response like this when we run the command.

{
  "taskArns": [
    "arn:aws:ecs:us-east-1:816649246361:task/dokuwiki-cluster/ef120a2a79fe4e4e8efb70a6623d886e"
  ]
}

Note: The output you get will not match mine exactly. The identifiers will be different.

Next, we need to stop the ECS Task that we saw when we ran the aws ecs list-tasks command. We will need to run the ecs stop-task command and pass it the name of our ECS Cluster and the identifier for the task we want to stop. Run the following command, substituting the ECS Task ID you got when you ran the first command.

aws ecs stop-task --cluster dokuwiki-cluster --task ef120a2a79fe4e4e8efb70a6623d886e

If we run the command correctly, AWS will stop the task and return all of the parameters of the stopped task. Hit the q key to get back to the command prompt.
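
As a convenience, the two commands can also be combined into a single line using the --query and --output options of the AWS CLI (assuming exactly one running task):

aws ecs stop-task --cluster dokuwiki-cluster \
  --task $(aws ecs list-tasks --cluster dokuwiki-cluster --query 'taskArns[0]' --output text)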

Now that we ran a command to stop the task, run the following command again to see if our task was actually stopped.

aws ecs list-tasks --cluster dokuwiki-cluster

If we run the aws ecs list-tasks command fast enough, we may not see any tasks in the list. However, if we wait 15 seconds and run it again, we should see another task listed with a new Task ID.

{
  "taskArns": [
    "arn:aws:ecs:us-east-1:816649246361:task/dokuwiki-cluster/91e4b6e4c0084d0493756cfe0c4d7898"
  ]
}

Note that the ID at the end of the task is different this time because the ECS Service created a new ECS Task.

  3. Check the DokuWiki Site to Confirm the Content We Changed is Still Loading from EFS Storage

After confirming that a new ECS Task has started, reload the DokuWiki page in your browser to see if the content you changed previously is still there. You may find that the first time you reload the page you get an error message. This is expected because it will take a minute for the ECS Service to start a new ECS Task running the DokuWiki container. However, if you wait 30 seconds or so and reload the page you should find that the content you changed previously in DokuWiki is still there. A successful test is evidence that our content is now stored on the EFS storage volume instead of on our ECS Task.

Step 5: Roll Back the Changes Made by Terraform – terraform destroy

Now that we have deployed and validated our infrastructure, it is time to remove it. Fortunately, Terraform tracked all the changes it made to our infrastructure in a state file and can use this information to roll back all the changes it made.

Run the following command to instruct terraform to roll back or destroy all the changes made to our infrastructure:

terraform destroy

When you run the terraform destroy command, Terraform will generate a new plan for you listing the resources that will be removed. Let’s go through the output on the destroy command.

Note: The list of resources was removed from the sample output for the sake of brevity.

module.dokuwiki.random_id.index: Refreshing state... [id=m4g]
module.dokuwiki.data.aws_vpc.vpc: Reading...
module.dokuwiki.aws_iam_role.ecs_task_role: Refreshing state... [id=dokuwiki-ecstaskrole]
. . .

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
- destroy

The terraform destroy command refreshes the state of all the resources first. Most of these messages were removed from the sample output for the sake of brevity. After it finishes refreshing the state of all the resources, it tells you what it will do. This time around, the symbol changes to - destroy, indicating that any resources with the minus symbol next to them will be destroyed.

Terraform will perform the following actions:

# module.dokuwiki.aws_cloudwatch_log_group.ecs-log-group will be destroyed
- resource "aws_cloudwatch_log_group" "ecs-log-group" {
. . .
}

# module.dokuwiki.aws_iam_role.ecs_task_role will be destroyed
- resource "aws_iam_role" "ecs_task_role" {
. . .
}

. . .

Plan: 0 to add, 0 to change, 25 to destroy.

Changes to Outputs:
- alb_dns_name = "dokuwiki-alb-979498190.us-east-1.elb.amazonaws.com"
- ecs_cluster_name = "dokuwiki-cluster"

Do you really want to destroy all resources?
Terraform will destroy all your managed infrastructure, as shown above.
There is no undo. Only 'yes' will be accepted to confirm.

Enter a value:

Note: Some resources and resource attributes were removed from the sample output for the sake of brevity.

As you continue to review the output you should notice that every single resource now has a minus symbol next to it, indicating that if you approve the operation then Terraform will remove all the resources. If you scroll down to the end of the output, you’ll see that it will destroy 25 resources, exactly the same number that Terraform created when we ran the apply command.

Terraform is asking you to confirm that you want it to carry out all the actions that are listed in the plan. Go ahead and type yes at the prompt and hit enter. When you do this, Terraform will start destroying all of the resources in AWS on your behalf.

As Terraform works through the process of carrying out the planned changes to our infrastructure, it lists out what it is doing.

Note: Some of the output was omitted for the sake of brevity.

module.dokuwiki.module.efs.aws_efs_backup_policy.policy[0]: Destroying... [id=fs-02fc0e0a740ddd00e]
module.dokuwiki.module.sg_alb.module.sg.aws_security_group_rule.egress_rules[0]: Destroying... [id=sgrule-1334013127]
module.dokuwiki.module.efs.aws_efs_mount_target.default[0]: Destroying... [id=fsmt-03bb888c2575f56e4]
. . .

Destroy complete! Resources: 25 destroyed.

It should take around 3 minutes for Terraform to destroy all 25 resources! Again, this is a huge time savings if you consider how long it would take to destroy all of these resources by clicking around in the AWS Console.

After the destroy process finishes, you will find that if you reload the DokuWiki browser tab it will no longer load because the Application Load Balancer (ALB) created by AWS no longer exists.

Closing Remarks

We covered a lot of ground in this post.

  1. We started by looking at some ways to make our ECS DokuWiki deployment more robust using an ECS Service, an Application Load Balancer (ALB), and an EFS volume.
  2. We listed some of the benefits of Infrastructure as Code (IaC) when compared with “Click-Ops.”
  3. We went over some of the benefits of Terraform.
  4. We described the setup requirements for running Terraform with AWS including installing Git, the AWS CLI, and Terraform.
  5. We pulled the source code for the Terraform deployment from GitLab.
  6. We reviewed the code, ran a terraform init, ran a terraform plan, and then deployed the code using terraform apply.
  7. We tested to confirm that the Terraform deployment was successful using the AWS CLI.
  8. We then used Terraform to destroy our deployment so that we wouldn’t have to pay for resources in AWS that we were no longer using.

Thanks for reading!

Deploying Your First Container on Amazon Elastic Container Service (ECS) – Part 1 of 2

By Rob Stewart, Cloud Architect, SMS

The Case for Amazon Elastic Container Service

According to Corey Quinn, there are at least 17 Ways to Run Containers on AWS. However, if you are familiar with all the hype concerning containers in general and Kubernetes in particular, then you might conclude that there is no reason to even consider any of the 16+ other options that Amazon provides for running containers in AWS. The experts have all weighed in and Amazon Elastic Kubernetes Service (EKS) is the only option worth considering. Even though EKS is a great option, I invite you to consider Amazon Elastic Container Service (ECS) as an alternative, depending on what type of containerized workload you need to deploy and support.

AWS describes ECS as “a fully managed container orchestration service that makes it easy for you to deploy, manage, and scale containerized applications.” ECS became Generally Available (GA) in April of 2015, a good 3 years prior to the GA of EKS in June 2018. Given this timing, it is not surprising that ECS has strong Docker vibes. It might even be fair to call ECS “AWS Managed Docker Container Service.” Given this heritage, you would expect that ECS would be a good citizen of AWS, and you would be correct. ECS is tightly integrated with many other AWS services. Therefore, if you already have experience with EC2, ELB, ALBs, and CloudWatch then the additional concepts you need to grasp to start using ECS are far more incremental in nature. ECS is one more piece to snap into the larger AWS puzzle.

In contrast, EKS represents Amazon’s effort to address the needs of customers who have already adopted Kubernetes and want to run it using a managed service on AWS. Kubernetes is a fantastic container orchestrator; however, it is not by any stretch a simple one; adding the inherent complexity of AWS Services including IAM, EC2, etc. on top of the complexity of Kubernetes takes it to another level. In some cases, it is not necessary to deploy a containerized workload on Kubernetes. In these scenarios, using ECS may be a better option as it will be easier to get the workload running to start with and easier to maintain it going forward. There is value in simplicity.

Amazon Elastic Container Service Overview

Before diving into a simple deployment on ECS, it is helpful to have some understanding of the different components of the architecture at a high level.

ECS Cluster

To deploy containers on ECS, you must create an ECS Cluster. The ECS documentation describes the ECS Cluster like this:

An Amazon ECS cluster is a logical grouping of tasks or services. You can use clusters to isolate your applications. This way, they don’t use the same underlying infrastructure.

ECS Task Definition

To tell AWS what you want to run on your ECS cluster you must create an ECS Task Definition. A Task Definition is a JSON formatted document that lists the containers that you would like to deploy and how those containers will interact with other AWS services like Elastic File Systems and CloudWatch. The containers you reference in the task definition can come from a public registry like Docker Hub or Amazon’s own Elastic Container Registry.
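
For illustration, a minimal Fargate task definition for a container like the one used in this post might look something like the following sketch; the values are illustrative, not the exact document the console generates:

{
  "family": "dokuwiki-task",
  "requiresCompatibilities": ["FARGATE"],
  "networkMode": "awsvpc",
  "cpu": "256",
  "memory": "512",
  "containerDefinitions": [
    {
      "name": "dokuwiki",
      "image": "bitnami/dokuwiki",
      "essential": true,
      "portMappings": [{ "containerPort": 8080, "protocol": "tcp" }]
    }
  ]
}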

ECS Tasks

ECS Tasks are created when you ask ECS to run a Task Definition on an ECS Cluster. There are multiple ways to run ECS Tasks on ECS:

  1. EC2 instances running in your VPC can run tasks if the ECS Container Agent is installed on them. The agent enables ECS to manage the tasks running on that instance.
  2. When you don’t want to manage EC2 instances, you can use AWS Fargate. When you use Fargate, AWS will provision the compute required to run the tasks and place Elastic Network Interfaces in your VPC so that these tasks can interact with the network.

ECS Services

ECS supports running task definitions directly as ECS Tasks. However, containers are ephemeral in nature; they should be considered cattle, not pets. Therefore, you will usually create an ECS Service which is responsible for scheduling one or more tasks for you and launching new tasks automatically if one of the running tasks should fail. The following diagram shows the ECS Cluster and how tasks are deployed to the VPC on EC2 instances or as Fargate tasks:

Exhibit 1: AWS ECS Architecture Overview


Deploying Dokuwiki on ECS

Now that we have provided a high-level overview of ECS, we’ll move on to describe how to deploy the Bitnami Dokuwiki container on ECS via the AWS Console. For those who may not be familiar with it, Dokuwiki is an open source wiki engine that runs on PHP; what it lacks in flair it makes up for in simplicity. We’ll be using the Bitnami Dokuwiki container because it doesn’t require a database or other services to function.

Step 1: Create a New AWS Account or Login to an AWS Account

To follow the remaining steps described in this post you will need to either create a new AWS Account or log in to an existing AWS Account using an account with IAM permissions to services such as IAM, EC2, and ECS. Providing guidance on how to sign up for a new AWS account is outside the scope of this post; however, Amazon provides guidance on how to create an account here.

NOTE: If you follow along with the steps in this post, there is some chance you may incur minimal charges. However, if you create a new account you will be under the AWS Free Tier. That said, it is always prudent to remove any resources you create in AWS right after you finish using them so that you limit the risk of unexpected charges. Guidance on how to remove the resources is provided below.

Step 2: Navigate to the ECS Dashboard

After you login to AWS, enter ECS in the search box at the top of the page and then select Elastic Container Service to navigate to the ECS Dashboard.

Exhibit 2: Navigate to the ECS Dashboard


NOTE: The steps in this blog post describe how to navigate the current version of the AWS ECS UI in the AWS Console. However, AWS is preparing to release a new version of the ECS Experience in the near future. There is a toggle switch in the upper left-hand corner which enables you to switch between the current version of the ECS UI and the New ECS Experience. If you find that the descriptions below do not match what you are seeing then the New ECS Experience may already be enabled. If so, please use the toggle switch to turn off the New ECS Experience. We plan to update this post in the future after Amazon transitions all customers over to the new experience.

Step 3: Create a New ECS Task Definition

  1. Select the Task Definitions link on the left-hand navigation.
  2. Select the Create new Task Definition button.
  3. Select Fargate for the launch type compatibility in Step 1.
  4. Select the Next step button on the bottom right-hand corner of the page.
  5. For the Task definition name, enter the following: dokuwiki-task
  6. You can skip down to Operating System family and select Linux from the drop-down.
  7. Next, set the Task size by setting Task memory (GB) to 0.5GB and Task CPU (vCPU) to 0.25 vCPU using the drop-downs.
  8. Next, select the Add container button to define container settings.
  9. On the “Add Container” panel, specify the following:
    • Container name: dokuwiki
    • Image: bitnami/dokuwiki
  10. You do not need to configure any other settings, so scroll to the bottom and select the Add button.
  11. If you added the container definition correctly then you should see the container in the list of container definitions.

Exhibit 3: Adding a Container definition


After you finish adding the Container definition, scroll all the way down to the bottom of the page and select the Create button. If everything went okay, you should see messages telling you that AWS completed the creation of the task definition. Go ahead and select the View task definition button in the lower right-hand corner of the screen.

Step 4: Create an ECS Cluster

Before you can run an ECS Task based on the task definition you just created, you need to create a new ECS Cluster.

  1. Select Clusters on the left navigation.
  2. Select the Create Cluster button.
  3. For the cluster template, Networking only should already be highlighted as it is the default selection. This is the option we want, so go ahead and select the Next step button on the bottom right-hand corner of the page.
  4. Enter Dokuwiki-cluster for the Cluster name.
  5. There is no need to change the options for Networking or CloudWatch Container Insights, so scroll down to the bottom of the page and select the Create button.

NOTE: In some cases, you may encounter an error the first time you attempt to create a new ECS cluster telling you that AWS wasn’t able to create the cluster. If this happens, just repeat the cluster creation steps one more time. It should work just fine on the second attempt.

  6. After the cluster has been created successfully, select View Cluster. You should now see the new ECS cluster you just created.

Exhibit 4: The new ECS Cluster


Step 5: Launch a new ECS task

Now that you have created an ECS Task Definition and a new ECS Cluster, it is time to create a new ECS Task based on the Task Definition.

  1. From the same ECS Cluster overview page, select the Tasks tab near the bottom of the page and then select the Run new task button.

Exhibit 5: Run a new ECS Task


2. On the Run Task page, select the following options:
a.  Launch type: FARGATE
b.  Operating System Family: Linux
c.  Task Definition Family: dokuwiki-task
d.  Task Definition Revision: 1 (latest)
e.  Platform version: Latest
f.   Cluster: Dokuwiki-cluster

Note: You can just leave Task group blank.

g.  Cluster VPC: Select the VPC ID for the default VPC in your account. It should have an ID and IP CIDR range that looks similar to this: vpc-0b57fca38d911cb56 (172.31.0.0/16) Your VPC ID will be different. However, the IP CIDR range of 172.31.0.0/16 should match.
h.  Subnets: Select one subnet from the drop-down. It does not matter which subnet you select.
i.  Security groups: Select the Edit button to open the Configure Security Groups panel.
j.  On the Configure Security Groups panel, do the following:
i.  On Assigned security groups, Select the Create new security group radio button.
ii.  Security group name: dokuwiki-sg.
iii.  Description: Dokuwiki security group
iv.  For the Inbound rules for security group, set Type to Custom TCP, Protocol should be TCP, Port range should be 8080 and Source should be anywhere.
v.  After you finish configuring the security group, select the Save button in the lower right-hand corner to close the panel.

Exhibit 6: Configure security groups


k.  Auto assign public IP: Enabled
l.  You do not need to change any other options, select the Run Task button to run the ECS Task Definition.
m.  If you did everything correctly, you will see a message at the top of the page indicating that the task was created successfully.

Exhibit 7: ECS task created successfully

Step 6: Access the Dokuwiki Application

Now that we have created the new ECS task, we can access it via the browser using the public IP address that AWS provisioned for it.

  1. Select the task in the list:

Exhibit 8: Select the newly created ECS Task

  2. Amazon has assigned a public IP to your container. Locate the Public IP in the Network section of the page and copy the IP address.

Exhibit 9: The public IP Address of the ECS Task

  3. Open a new browser tab, type http:// in the address bar, paste the public IP you just copied, add :8080 to the end of the address, and then hit the Enter key to load the page. If you did this correctly, you should see the Dokuwiki application load in your browser.

Exhibit 10: The Dokuwiki application loads in the browser

  4. Feel free to play around with Dokuwiki. You can click the pencil icon on the right side of the page to create a page, enter some text, and then click the save button at the bottom of the page.
  5. When you’re finished playing with Dokuwiki, switch back to the AWS Console tab in your browser. You should still have the ECS Task open. Go ahead and kill the task by selecting the Stop button in the upper right-hand corner of the page. When you select the Stop button, you will be prompted to confirm that you want to delete the task; go ahead and select the red Stop button to confirm that you want the task to be removed.

Exhibit 11: Stop the Dokuwiki ECS Task

  6. After the ECS Task has stopped, you will find that if you switch back to the browser tab where you had Dokuwiki loaded and hit the Refresh button on the browser, Dokuwiki will no longer load. The public IP associated with the task was removed when you stopped the ECS Task.
  7. As we are done working with the ECS Cluster, you can go ahead and delete it. Select the Delete Cluster button in the upper right-hand corner of the page. After you select the Delete Cluster button, you will be prompted to confirm that you want to delete it. Enter delete me in the input box and then select the Delete button to confirm that you want the cluster to be deleted.
  8. The only other resource we created was the EC2 Security Group. Feel free to switch to the EC2 Service and remove the Security Group if you like. However, Amazon does not charge for security groups.

Final Notes

In this post, we provided a quick overview of AWS Elastic Container Service and then shared step by step directions for how to create an ECS Cluster and run a Dokuwiki container as an ECS Task. While this was a great start, there are several deficiencies with this approach.

  1. We performed all these actions via the AWS Console. When you are first learning about new services in AWS, it is very helpful to explore the console to get a better grasp on how services work. However, clicking through the console is not repeatable. Once you understand how a given AWS service works, it is better to provision all AWS resources using Infrastructure as Code (IaC) tools like AWS CloudFormation or HashiCorp Terraform. When you express your infrastructure as code then you can create it and destroy it with a single command as many times as you like, and all the details of the deployment are automatically documented. You can also track the IaC in a source code repository which enables version tracking, collaboration, and reviews.
  2. The deployment of the ECS Task was not fault tolerant. As soon as the ECS Task was deleted, our application no longer loaded in the browser. For a real deployment, we would want to use an ECS Service to run multiple Tasks. The Service would take responsibility for making sure new tasks are started up when a task fails.
  3. Further, any content we saved in Dokuwiki was gone forever when the ECS Task was deleted. For a real deployment, we would want to store this data in a redundant fashion so that it would not be lost when an ECS Task failed. One way to introduce redundancy for storage is to create an AWS Elastic File System and mount it to the container in the ECS Task Definition.
  4. The deployment was not secure. When we accessed Dokuwiki in the browser, we had to tell the browser to use the HTTP protocol and specify port 8080. Therefore, any data we entered into Dokuwiki would not be encrypted in transit using Transport Layer Security (TLS). One way to address this deficiency is to use an AWS Elastic Load Balancer to handle the traffic. Elastic Load Balancers can be configured with TLS certificates so that data is encrypted in transit.

We address these deficiencies in Part 2 of this series.

Lastly, while ECS may work great in some cases, EKS shines in cases where the application is more complex. One of the advantages of Kubernetes is that complex deployments can be expressed as Helm charts which can be deployed on Amazon EKS or on another cloud provider’s version of Kubernetes. If you need to deploy a container-based application on AWS from a vendor who provides a well-documented Helm chart then using EKS is likely your best path forward.

Click here for Part 2 in this series.
