This is the last in a three-part series of posts on deploying a Docker application to AWS Elastic Beanstalk with AWS CodeSuite. The Docker application in question is this blog, a WordPress application backed by a MySQL database.
In my last post I discussed the specifics of the build process for this blog. Essentially the build process involves (1) pushing the Docker images for the blog to ECR and (2) pushing the EB source bundle for the blog to S3.
In this post I’ll discuss how I set up the deployment process for the blog. The deployment process will pull the EB source bundle from S3 and deploy it to EB. EB will then use the content of the source bundle as the configuration for the deployment. Specifically it will use the docker-compose.yml file in the source bundle to pull down the Docker images for the blog.
Setting up the deployment process involved the following main steps:
Creating a deployment pipeline (via CodePipeline)
Customizing the pipeline’s service role (via IAM)
I performed these steps via the AWS Console.
Creating a deployment pipeline
To create the deployment pipeline I went to CodePipeline and created a new pipeline via Pipelines > Create new pipeline. I then completed the steps of the wizard as follows:
Step 1: Choose creation option
Category: Build custom pipeline
Step 2: Choose pipeline settings
Pipeline settings
Pipeline name: <PIPELINE_NAME>
(The pipeline’s IAM role and S3 bucket are created automatically if you don’t specify them.)
Step 3: Add source stage
Source
Source provider: Amazon S3
Bucket: <BUCKET>
S3 object key: <S3_OBJECT_KEY>
<BUCKET> refers to the S3 bucket in which the EB source bundle is stored. <S3_OBJECT_KEY> refers to the S3 object key of the source bundle.
Step 4: Add build stage
I skipped this stage.
Step 5: Add test stage
I skipped this stage.
Step 6: Add deploy stage
Deploy
Deploy provider: AWS Elastic Beanstalk
Application name: <APPLICATION_NAME>
Environment name: <ENVIRONMENT_NAME>
<APPLICATION_NAME> refers to the EB application name; <ENVIRONMENT_NAME> refers to the EB environment name.
Step 7: Review
Create pipeline
Creating the pipeline triggered an initial execution. This execution failed at the deploy stage due to insufficient permissions on the pipeline’s service role.
In the next section I’ll discuss how I customized the pipeline’s service role to meet the needs of the deploy stage.
Customizing the pipeline’s service role
To customize the pipeline’s service role to meet the needs of the deploy stage, I attached an inline policy to the role along the lines of the following:
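(What follows is a condensed sketch of the policy rather than the verbatim JSON; the exact set of actions and resource ARNs will depend on your environment.)

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "EBActionStatement",
            "Effect": "Allow",
            "Action": [
                "elasticbeanstalk:*",
                "ec2:*",
                "elasticloadbalancing:*",
                "autoscaling:*",
                "cloudwatch:*",
                "s3:*",
                "cloudformation:*"
            ],
            "Resource": "*"
        },
        {
            "Sid": "IAMActionStatement",
            "Effect": "Allow",
            "Action": "iam:PassRole",
            "Resource": "*"
        }
    ]
}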
The inline policy has two statements: EBActionStatement and IAMActionStatement. EBActionStatement enables the service role to perform actions against the services involved in the deployment, e.g., EB, EC2, S3, etc. IAMActionStatement grants iam:PassRole, which lets the service role pass a role to those actions in EBActionStatement that need one at execution time, e.g., launching an EC2 instance.
After attaching the inline policy to the service role, I re-ran the deploy stage of the pipeline, which this time succeeded.
The overall execution was also shown to be successful.
EB also showed that the code had been deployed to the environment via CodePipeline.
Conclusion
In this post I discussed how I set up the deployment pipeline for this blog using CodePipeline. Specifically I discussed how I created a two-stage pipeline that pulls an EB source bundle from S3 and deploys it to EB. I also discussed how I customized the pipeline’s IAM service role to meet the needs of the deploy stage. This concludes this series of posts on deploying a Docker application to AWS Elastic Beanstalk with AWS CodeSuite.
This is the second in a three-part series of posts on deploying a Docker application to AWS Elastic Beanstalk with AWS CodeSuite. The Docker application in question is this blog, a WordPress application backed by a MySQL database.
In my last post I discussed setting up a basic AWS CodeBuild project that integrates with GitHub. In this one I’ll discuss the specifics of the build process, which will involve the following steps:
Authenticating CodeBuild with ECR
Building the Docker images for the application
Pushing the Docker images to ECR
Creating an EB source bundle
Pushing the EB source bundle to S3
I’ll start by discussing how to authenticate CodeBuild with ECR, a necessary prerequisite for the interactions with ECR that take place later in the build process.
Authenticating CodeBuild with ECR
Given that CodeBuild needs to pull images from and push images to ECR, it must authenticate with ECR before performing either action. In my build process I implement authentication via a command in the pre_build phase of my buildspec.yml:
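The relevant snippet looks like the following (the registry URL is built from the environment variables described below):

phases:
  pre_build:
    commands:
      - aws ecr get-login-password --region $AWS_DEFAULT_REGION | docker login --username AWS --password-stdin $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com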
aws ecr get-login-password – retrieves an authentication token from ECR and pipes it to docker login as the value of the --password-stdin argument
docker login – authenticates CodeBuild with ECR
$AWS_DEFAULT_REGION and $AWS_ACCOUNT_ID are environment variables that I set in the CodeBuild project’s Environment configuration.
These environment variables will be used throughout the build process.
In order for CodeBuild to be able to retrieve the authentication token from ECR, I added the following permission to the CodeBuild project’s service-role policy:
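(Shown here as a standalone statement; ecr:GetAuthorizationToken only supports a resource of "*".)

{
    "Effect": "Allow",
    "Action": "ecr:GetAuthorizationToken",
    "Resource": "*"
}

Building the Docker images for the application

With authentication in place, the next step is to build the Docker images for the application. In my build process this is handled by a docker compose build command in the buildspec.yml, along the lines of the following (shown here, as an assumption, in the build phase):

phases:
  build:
    commands:
      - docker compose -f docker-compose.build.yml build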
Here, docker compose build is invoked with a specific Compose file (docker-compose.build.yml), specified via the -f flag. The Compose file looks roughly as follows:
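(A condensed sketch of the file; the service definitions track the description below, but the build contexts, image names, and versions are illustrative.)

services:
  mysql:
    image: blog/mysql
    build:
      context: ./mysql
      dockerfile: Dockerfile.build
      args:
        IMAGE_REPO: <BASE_IMAGE_REPO_URL>
        IMAGE_NAME: mysql
        IMAGE_VERSION: <BASE_IMAGE_VERSION>
  wordpress:
    image: blog/wordpress
    build:
      context: ./wordpress
      dockerfile: Dockerfile.build
      args:
        IMAGE_REPO: <BASE_IMAGE_REPO_URL>
        IMAGE_NAME: wordpress
        IMAGE_VERSION: <BASE_IMAGE_VERSION>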
The file builds images for my blog’s two services: mysql and wordpress. Each service is configured with an image and a build—image specifies the name of the image that will be tagged locally; build specifies the build configuration for the image. The build configuration consists of three options:
context – the path to the directory containing the content of the image
dockerfile – the path, relative to context, to the Dockerfile that defines the image
args – arguments, or variables, passed from the Compose file to the Dockerfile
The images are defined by custom Dockerfiles (Dockerfile.build). The Dockerfiles accept three arguments, which are used to construct the value of the FROM directives:
IMAGE_REPO – The URL of the ECR repository from which to pull the base image
IMAGE_NAME – The name of the base image to pull
IMAGE_VERSION – The version of the base image to pull
The content of the Dockerfiles is very straightforward and conforms to the following basic template:
ARG IMAGE_REPO
ARG IMAGE_NAME
ARG IMAGE_VERSION
FROM ${IMAGE_REPO}/${IMAGE_NAME}:${IMAGE_VERSION}
# copy relevant content into the image (via the COPY directive)
EXPOSE <PORT>
The Dockerfile in question pulls the relevant base image from ECR using the FROM directive, which consumes the args passed in from the Compose file. The relevant content is then copied into the image and the relevant port is declared via the EXPOSE directive: port 3306 in the case of the MySQL service and port 80 in the case of the WordPress service.
In order for the Dockerfiles to be able to pull the base images from ECR I created two private repositories in ECR: one for the MySQL base image and one for the WordPress base image. I also went ahead and created two additional private repositories for my blog’s own MySQL and WordPress images; these will be needed when pushing the Docker images for the blog to ECR.
In order for CodeBuild to be able to pull images from ECR, I added the following permission to the CodeBuild project’s service-role policy:
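(Again a standalone statement; these are the standard ECR pull actions.)

{
    "Effect": "Allow",
    "Action": [
        "ecr:BatchGetImage",
        "ecr:GetDownloadUrlForLayer",
        "ecr:BatchCheckLayerAvailability"
    ],
    "Resource": "*"
}

Pushing the Docker images to ECR

Once the images have been built, the next step is to tag them and push them to ECR. In my build process this is handled by commands in the buildspec.yml along the lines of the following (repository names are illustrative):

- docker tag blog/mysql:latest $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/blog/mysql:latest
- docker tag blog/wordpress:latest $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/blog/wordpress:latest
- docker push $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/blog/mysql:latest
- docker push $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/blog/wordpress:latest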
Here, the two docker tag commands tag the built images for the purpose of pushing them to ECR, while the two docker push commands publish the built images to ECR via their tags.
A successful push of an image should result in output similar to the following in the build log:
latest: digest: sha256:<sha256> size: <size>
A pushed image should also be viewable on the ECR repository’s image details page, for example the page for my blog’s MySQL image.
In order for CodeBuild to be able to push the Docker images to ECR, I added the following permission to the CodeBuild project’s service-role policy:
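(These are the standard ECR push actions, again shown as a standalone statement.)

{
    "Effect": "Allow",
    "Action": [
        "ecr:PutImage",
        "ecr:InitiateLayerUpload",
        "ecr:UploadLayerPart",
        "ecr:CompleteLayerUpload",
        "ecr:BatchCheckLayerAvailability"
    ],
    "Resource": "*"
}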
Once the Docker images have been pushed to ECR, the next step in the build process is to create an EB source bundle.
Creating an EB source bundle
An EB source bundle is a zip file that is deployed to EB. The zip file contains all the files that EB needs to launch the application in the environment, be it a Docker application or an application built on a different platform, e.g., Node.js, Java, etc. In my build process I implement creating the source bundle via a command in the post_build phase of my buildspec.yml:
...
phases:
  post_build:
    commands:
      ...
      - sh dist.sh
      ...
Here the sh dist.sh command executes a custom shell script, a slightly dumbed-down version of which follows:
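(Illustrative sketch only; the directory and file names are placeholders rather than the actual paths in my repository.)

#!/bin/sh

# Create a holding directory for the source bundle content
mkdir -p dist/.ebextensions

# Copy the .ebextensions config files used to configure the EB environment
cp .ebextensions/*.config dist/.ebextensions/

# Copy files needed by the Docker containers at runtime
cp -r <CONTAINER_FILES> dist/

# Copy the Compose file that EB will use to launch the application
cp docker-compose.yml dist/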
The script creates some holding directories and copies the content for the EB source bundle into them. The specific files include the following:
Any .ebextensions config files that are needed for configuring the EB environment
Files that are needed by the Docker containers (and for whatever reason were not copied directly to the containers)
The Compose file that is needed for launching the application
I will discuss the Compose file in more detail in my next post. For now the only thing I would point out is that it specifies pulling the Docker images for the application from ECR:
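For example (account, region, and repository names are placeholders):

services:
  mysql:
    image: <AWS_ACCOUNT_ID>.dkr.ecr.<AWS_DEFAULT_REGION>.amazonaws.com/blog/mysql:latest
    ...
  wordpress:
    image: <AWS_ACCOUNT_ID>.dkr.ecr.<AWS_DEFAULT_REGION>.amazonaws.com/blog/wordpress:latest
    ...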
With the EB source bundle in place, the final step of the build process is to push it to S3.
Pushing the EB source bundle to S3
As I mentioned in my last post, the artifacts section of the buildspec.yml can be used to specify a build artifact to upload to S3. The artifacts section of my buildspec.yml looks like the following:
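(The base-directory value here is illustrative; it should point at the holding directory created by dist.sh.)

artifacts:
  files:
    - '**/*'
  base-directory: dist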
Here, base-directory specifies the directory containing the files to be included in the build artifact, while files specifies which files from the base directory to include; in this case '**/*' specifies that all files in the base directory should be included in the build artifact.
I did not need to add an S3 upload permission to the CodeBuild project’s service-role policy, since the project’s base policy includes this permission by default.
A successful upload should result in an entry similar to the following being output to the build log:
In this post I’ve described an end-to-end process for building a Docker application for deployment to EB. This has involved several relatively straightforward but nonetheless critical steps, including: authenticating CodeBuild with ECR using an authentication token from the latter, building the Docker images for the application using a purpose-built Compose file, tagging the Docker images and pushing them to ECR, creating an EB source bundle using a custom shell script, and pushing the source bundle to S3 via buildspec configuration.
In my next post, I’ll discuss deploying the application to EB via CodePipeline.