
Understanding the AWS Certified DevOps Engineer – Professional Exam

(This is part 1 of a three-part series. This article covers an overview of the certification exam and Domain 1: SDLC Automation.)

On February 18, 2019, the new version of the AWS Certified DevOps Engineer – Professional exam (DOP-C01) was released. Upon hearing about the new version, I immediately booked the exam and began my study plan. I estimated it would take me about three (3) weeks to prepare for the certification based on the updated exam guide. I cleared the exam on March 15, and I wrote this guide to help you better understand and prepare for it.

I expected nothing less than a difficult and challenging exam for a professional-level certification like the AWS Certified DevOps Engineer – Professional. It did not disappoint: it was a challenging exam! Ultimately, your professional AWS experience and the quality of your preparation are the biggest factors in determining your success. With the right preparation and mindset, you can increase your chances of passing the exam and gaining the credential. Understanding the coverage and context of the exam guide, sample questions, and practice exam also contributes to your success.

Domains

Let’s start with the exam’s domains. Based on the guide, there are six (6) domains for the AWS Certified DevOps Engineer – Professional.

Domain | % of Examination
Domain 1: SDLC Automation | 22%
Domain 2: Configuration Management and Infrastructure as Code | 19%
Domain 3: Monitoring and Logging | 15%
Domain 4: Policies and Standards Automation | 10%
Domain 5: Incident and Event Response | 18%
Domain 6: High Availability, Fault Tolerance, and Disaster Recovery | 16%
Total | 100%

Domain 1: SDLC Automation

The domain with the most coverage revolves around automating the Software Development Lifecycle. This domain supports one of the core tenets of DevOps: removing the distinction between developers and operators by automating as much as possible.

I listed the domain details and the relevant AWS services for SDLC Automation:

SDLC Automation Task | Relevant AWS Services
Apply concepts required to automate a CI/CD pipeline | CodePipeline, CodeCommit, CodeBuild, CodeDeploy, Lambda, CloudWatch
Determine source control strategies and how to implement them | CodeCommit, CodePipeline
Apply concepts required to automate and integrate testing | Device Farm, CodeBuild, Lambda
Apply concepts required to build and manage artifacts securely | CodeBuild, CodeDeploy, S3, Elastic Container Registry
Determine deployment/delivery strategies (e.g., A/B, blue/green, canary, red/black) and how to implement them using AWS services | CodePipeline, Elastic Beanstalk, OpsWorks, Auto Scaling, Route 53, Elastic Load Balancing, CloudWatch, Lambda, Elastic Container Service

Domain 1: SDLC Automation has the highest weight in the exam, making it the most critical domain. As a DevOps Engineer, you need to understand the capabilities of the different AWS services and how they work together to automate source control strategies, deployments, rollbacks, artifact management, and testing.

Let’s look at a typical multi-environment setup for application development.

In the DevOps world, the movement of code through the development, test, staging, and production environments must be automated. The challenge for the AWS DevOps Engineer is how to set up, configure, and code the automation of deployments using AWS services. Understanding the functionalities of the relevant AWS services and how they work together is critical to automating deployments and the SDLC process.

Application Environments

The application environments and network must be immutable and provisioned from code. To provision the VPC and subnets, you can use AWS CloudFormation. Application environments ideally have separate VPCs for network segmentation, which is straightforward to implement using CloudFormation. The CloudFormation templates can also be used to replicate the environment in a disaster recovery region.
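
As a minimal sketch of this approach, the network stack for one environment can be created from a template via the CloudFormation API using boto3; the template file, stack name, and parameters here are illustrative assumptions:

```python
import boto3

cfn = boto3.client("cloudformation")

# Hypothetical template defining the VPC and subnets for one environment.
with open("network.yaml") as f:
    template_body = f.read()

# One stack per environment keeps the network immutable and reproducible;
# the same template can be launched in a disaster recovery region.
response = cfn.create_stack(
    StackName="staging-network",
    TemplateBody=template_body,
    Parameters=[
        {"ParameterKey": "EnvironmentName", "ParameterValue": "staging"},
        {"ParameterKey": "VpcCidr", "ParameterValue": "10.1.0.0/16"},
    ],
)
print(response["StackId"])
```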

Once the VPC and network are configured, the application environment needs to be provisioned. The following services are recommended based on the application architecture:

Application Architecture | Recommended Services | Approach
Web and two-tier applications | AWS Elastic Beanstalk | Use the Elastic Beanstalk environments feature for the application environments
Multi-tier applications | AWS OpsWorks | Create separate OpsWorks stacks per environment
Microservices (containers) | Amazon Elastic Container Service | Create separate clusters per environment
Microservices (serverless) | AWS Lambda, Amazon API Gateway | Leverage API Gateway stages
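
For the container row above, a one-cluster-per-environment convention is a short script away; a sketch with a hypothetical application name:

```python
import boto3

ecs = boto3.client("ecs")

# Hypothetical convention: one ECS cluster per application environment.
for env in ("development", "staging", "production"):
    cluster = ecs.create_cluster(clusterName=f"demo-app-{env}")
    print(cluster["cluster"]["clusterArn"])
```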

Orchestrating the CI/CD Pipeline

AWS CodePipeline is the essential service to master for the AWS Certified DevOps Engineer – Professional exam. CodePipeline allows you to orchestrate and control your CI/CD pipeline, automate your software releases, and speed up delivery. It can manage your pipeline end to end: source code management, building the application package, running integration tests, approval, and deployment.

While AWS CodePipeline works well with services like AWS CodeCommit for source code management, CodeBuild for continuous integration, and CodeDeploy for continuous deployment, it is also designed to work with third-party tools. For example, your development team might be using GitHub, GitLab, or Bitbucket to manage source code; CodePipeline integrates well with those third-party managed services. Many teams also use Jenkins to manage their build pipelines and build servers, and CodePipeline can work with Jenkins too. For deployment, CodeDeploy is integrated with CodePipeline, and its automated deployments work with EC2 instances, on-premises servers, Lambda functions, and Elastic Container Service. If direct deployment is needed, CodePipeline has deploy action integrations with S3, Elastic Beanstalk, CloudFormation, OpsWorks, Service Catalog, Alexa Skills Kit, and the third-party tool XebiaLabs.
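
To make the stage and action structure concrete, here is a minimal sketch of a three-stage pipeline (CodeCommit source, CodeBuild build, CodeDeploy deploy) defined through the boto3 CodePipeline API. The pipeline name, role ARN, bucket, and project names are all illustrative assumptions:

```python
import boto3

codepipeline = boto3.client("codepipeline")

# Minimal three-stage pipeline: CodeCommit source -> CodeBuild build ->
# CodeDeploy deploy. All names, the role ARN, and the bucket are placeholders.
pipeline = {
    "name": "demo-pipeline",
    "roleArn": "arn:aws:iam::123456789012:role/CodePipelineServiceRole",
    "artifactStore": {"type": "S3", "location": "demo-artifact-bucket"},
    "stages": [
        {
            "name": "Source",
            "actions": [{
                "name": "FetchSource",
                "actionTypeId": {"category": "Source", "owner": "AWS",
                                 "provider": "CodeCommit", "version": "1"},
                "configuration": {"RepositoryName": "demo-app",
                                  "BranchName": "master"},
                "outputArtifacts": [{"name": "SourceOutput"}],
            }],
        },
        {
            "name": "Build",
            "actions": [{
                "name": "BuildAndTest",
                "actionTypeId": {"category": "Build", "owner": "AWS",
                                 "provider": "CodeBuild", "version": "1"},
                "configuration": {"ProjectName": "demo-build"},
                "inputArtifacts": [{"name": "SourceOutput"}],
                "outputArtifacts": [{"name": "BuildOutput"}],
            }],
        },
        {
            "name": "Deploy",
            "actions": [{
                "name": "DeployToStaging",
                "actionTypeId": {"category": "Deploy", "owner": "AWS",
                                 "provider": "CodeDeploy", "version": "1"},
                "configuration": {"ApplicationName": "demo-app",
                                  "DeploymentGroupName": "staging"},
                "inputArtifacts": [{"name": "BuildOutput"}],
            }],
        },
    ],
}

codepipeline.create_pipeline(pipeline=pipeline)
```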

Source Code Strategies

Alongside your CI/CD pipeline, appropriate source code management and application-environment strategies are also needed to improve the delivery of software services. From a single developer's perspective with standard environments (development, staging, and production), the setup is a trivial exercise with CodePipeline and the supporting tools.

In the real world, however, a development team might have multiple developers concurrently working on multiple feature requests. In this scenario, it is the job of the DevOps Engineer to give developers the ability to create feature request branches and leverage the CI/CD pipeline to deploy and test in independent environments. If a developer needs an application environment to test a feature request, they should be able to build the code in the feature branch and deploy it to an isolated environment automatically as part of the pipeline.
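
On the source control side, cutting a feature branch from the tip of master can itself be automated; a small boto3 sketch with hypothetical repository and branch names:

```python
import boto3

codecommit = boto3.client("codecommit")

# Cut a feature branch from the tip of master so a branch-scoped pipeline
# can build and deploy it to an isolated environment. Names are hypothetical.
master = codecommit.get_branch(repositoryName="demo-app", branchName="master")
codecommit.create_branch(
    repositoryName="demo-app",
    branchName="feature/login-page",
    commitId=master["branch"]["commitId"],
)
```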

In a multi-pipeline, multi-feature-request environment, the pipeline also needs a provision to merge feature branches into the master branch, build the master branch, and deploy to the test, staging, and production environments. The pipelines need the necessary stages, transitions, and actions to facilitate the movement of the application build.
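
Transitions can also be controlled through the API, for example to hold a build out of production until a release window opens; a sketch with assumed pipeline and stage names:

```python
import boto3

codepipeline = boto3.client("codepipeline")

# Block revisions from entering the production stage until it is re-enabled.
# Pipeline and stage names are assumptions for illustration.
codepipeline.disable_stage_transition(
    pipelineName="demo-pipeline",
    stageName="DeployProduction",
    transitionType="Inbound",
    reason="Hold until the release window opens",
)
```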

Event and State Change Detection

The pipeline should be configured to react to events such as code commits, invoked actions, build completions, and state changes. Amazon CloudWatch Events is essential to understand, as it provides near real-time notifications for pipeline execution state changes. Combining CloudWatch Events with Amazon Simple Notification Service topics enables release managers and DevOps Engineers to get notified via SMS or email, as well as update systems via HTTPS endpoints. In addition, AWS CloudTrail logs all of the API calls made by CodePipeline to an S3 bucket for further review and analysis.
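
A sketch of that wiring: a CloudWatch Events rule matching CodePipeline execution state changes, targeting an SNS topic (the rule name and topic ARN are placeholders):

```python
import json
import boto3

events = boto3.client("events")

# Match pipeline executions that succeed or fail; the rule name and
# SNS topic ARN below are placeholders.
events.put_rule(
    Name="pipeline-state-change",
    EventPattern=json.dumps({
        "source": ["aws.codepipeline"],
        "detail-type": ["CodePipeline Pipeline Execution State Change"],
        "detail": {"state": ["SUCCEEDED", "FAILED"]},
    }),
    State="ENABLED",
)
events.put_targets(
    Rule="pipeline-state-change",
    Targets=[{"Id": "notify-release-team",
              "Arn": "arn:aws:sns:us-east-1:123456789012:release-alerts"}],
)
```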

Automating Builds and Testing

AWS CodeBuild provides an automated build service to run tasks like building the application, generating application assets, running automated tests, and producing artifacts for deployment. CodeBuild automatically provisions, manages, and scales the build servers.
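
Builds can also be started outside the pipeline, for example against a specific branch; a minimal sketch with a hypothetical project name:

```python
import boto3

codebuild = boto3.client("codebuild")

# Start a one-off build of a feature branch. The project name, branch,
# and environment variable are illustrative assumptions.
build = codebuild.start_build(
    projectName="demo-build",
    sourceVersion="feature/login-page",
    environmentVariablesOverride=[
        {"name": "RUN_TESTS", "value": "true", "type": "PLAINTEXT"},
    ],
)
print(build["build"]["id"], build["build"]["buildStatus"])
```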

As part of pipeline and release automation, AWS Device Farm integrates into an AWS CodePipeline pipeline for mobile and web application testing. For example, for an Android application package built by CodeBuild, a testing stage with Device Farm can be added right after the build stage to run automated tests.

Building and Managing Artifacts Securely

AWS CodePipeline stores application artifacts in an Amazon S3 bucket. To secure the artifacts, CodePipeline encrypts the artifact bucket either with the default AWS-managed SSE-KMS key or with a customer-managed AWS KMS CMK.
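
In the pipeline definition, this shows up as the optional encryptionKey on the artifact store; a sketch of the relevant fragment with a placeholder bucket and key ARN:

```python
# Fragment of a CodePipeline definition: an S3 artifact store encrypted
# with a customer-managed KMS CMK instead of the default AWS-managed key.
# The bucket name and key ARN are placeholders.
artifact_store = {
    "type": "S3",
    "location": "demo-artifact-bucket",
    "encryptionKey": {
        "id": "arn:aws:kms:us-east-1:123456789012:key/"
              "1234abcd-12ab-34cd-56ef-1234567890ab",
        "type": "KMS",
    },
}
```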

Determining Deployment/Delivery Strategies (e.g., A/B, Blue/Green, Canary, Red/Black) and How to Implement Them Using AWS Services

Once the pipeline builds the code, the deployment stage comes next. AWS provides a fantastic whitepaper that discusses the different blue/green deployment techniques. It is essential to master all of these techniques to score higher on the exam:

  1. Using Route 53 to switch traffic from the blue to the green environment
  2. Swapping the Auto Scaling group behind an AWS ELB (see the sketch after this list)
  3. Updating the Auto Scaling group launch configuration
  4. Swapping the environment URL in AWS Elastic Beanstalk
  5. Cloning an OpsWorks stack and updating Route 53 to switch traffic from the blue stack to the green stack
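
For technique 2, one way to perform the swap is to re-point the load balancer's target group from the blue Auto Scaling group to the green one; a minimal boto3 sketch with hypothetical names:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Hypothetical ARN of the target group that the load balancer routes to.
tg_arn = ("arn:aws:elasticloadbalancing:us-east-1:123456789012:"
          "targetgroup/demo-app/abc123def456")

# Attach the green ASG first so it starts receiving traffic, then detach
# the blue ASG to complete the swap.
autoscaling.attach_load_balancer_target_groups(
    AutoScalingGroupName="demo-app-green", TargetGroupARNs=[tg_arn])
autoscaling.detach_load_balancer_target_groups(
    AutoScalingGroupName="demo-app-blue", TargetGroupARNs=[tg_arn])
```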

Many questions in Domain 1 rely on the concepts described in the techniques above. While the techniques are specific to AWS services, it is also essential to understand the trade-offs of using one technique over another. For example, if client-side DNS record caching is a concern, would changing the Route 53 record set to switch from the blue to the green environment be better than changing the launch configuration of an Auto Scaling group? There are pros and cons to both techniques, and the DevOps Engineer should be able to choose the correct solution based on the scenario presented.

The rollback strategy from the green environment back to the blue environment is also critical to understand as part of the deployment. Data provides the decision point for when to invoke a rollback. Amazon CloudWatch provides the capability to collect, monitor, and analyze deployment data, application logs, and infrastructure metrics. Amazon CloudWatch Logs can provide application-level metrics by collecting and aggregating application logs, while Amazon CloudWatch metrics provide infrastructure data to tell you how well the deployment is performing.
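
As a sketch of the data side, a CloudWatch alarm on the green environment's 5XX error count can notify an SNS topic and serve as the rollback trigger; all names and thresholds below are illustrative:

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Alarm if the green environment's load balancer returns more than ten
# 5XX responses per minute for three consecutive minutes. The dimension
# value, threshold, and SNS topic are illustrative.
cloudwatch.put_metric_alarm(
    AlarmName="green-target-5xx",
    Namespace="AWS/ApplicationELB",
    MetricName="HTTPCode_Target_5XX_Count",
    Dimensions=[{"Name": "LoadBalancer", "Value": "app/demo-green/abc123def456"}],
    Statistic="Sum",
    Period=60,
    EvaluationPeriods=3,
    Threshold=10,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:rollback-alerts"],
)
```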

Independent green environments built with AWS Elastic Beanstalk environments or OpsWorks stacks rely on DNS to route traffic back to the blue environment, which may present a challenge if clients cache records from the failed green environment. On the other hand, swapping the Auto Scaling group behind an AWS ELB or updating the Auto Scaling group launch configuration removes the DNS TTL issue, but these approaches may present application performance and error detection challenges.

To minimize the deployment failure radius, canary deployments using Amazon Route 53's weighted routing policy provide the ability to shift application traffic from the blue to the green environment gradually. Weighted routing policies also allow A/B tests to validate a hypothesis or feature.
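
A canary shift with Route 53 weighted records might look like the following sketch, sending roughly 10% of traffic to green; the hosted zone ID, record names, and weights are assumptions:

```python
import boto3

route53 = boto3.client("route53")

# Two weighted records for the same name: 90% of resolutions point to
# blue, 10% to green. Adjust the weights to widen the canary gradually.
route53.change_resource_record_sets(
    HostedZoneId="Z1234567890ABC",
    ChangeBatch={"Changes": [
        {"Action": "UPSERT",
         "ResourceRecordSet": {
             "Name": "app.example.com", "Type": "CNAME", "TTL": 60,
             "SetIdentifier": "blue", "Weight": 90,
             "ResourceRecords": [{"Value": "blue.example.com"}]}},
        {"Action": "UPSERT",
         "ResourceRecordSet": {
             "Name": "app.example.com", "Type": "CNAME", "TTL": 60,
             "SetIdentifier": "green", "Weight": 10,
             "ResourceRecords": [{"Value": "green.example.com"}]}},
    ]},
)
```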

Database Schema Changes

In blue/green deployments, we talk about creating different environments for the application code that connect to the same database. For schema-based databases to work in a blue/green deployment, applying schema changes, like adding new columns or renaming columns, requires a strategy that decouples the database schema changes from the application code. There are two main approaches to consider:

Approach | When Schema Changes Are Applied | Compatibility Requirement
Database schema-first change | Before the blue/green deployment | The previous version (blue) must work with the new database schema
Database schema-last change | After the blue/green deployment | The new version (green) must work with the old database schema
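
As a concrete illustration of the schema-first approach, an additive, backward-compatible change such as a new nullable column lets the blue version keep running unchanged; the table and column names are hypothetical, with sqlite3 standing in for the shared database:

```python
import sqlite3  # stand-in for the shared application database

conn = sqlite3.connect("app.db")
conn.execute("CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, email TEXT)")  # demo setup

# Schema-first: apply the additive change before the blue/green deploy.
# The blue version simply ignores the new nullable column, while the
# green version starts reading and writing it.
conn.execute("ALTER TABLE users ADD COLUMN display_name TEXT")
conn.commit()
conn.close()
```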

Closing

Domain 1: SDLC Automation covered pretty much the entire article. That's a lot of concepts, techniques, and details for just one domain! As a DevOps Engineer, you need to focus on mastering this domain to be successful in the exam.

The next installment of the series covers Domains 2 and 3 of the AWS Certified DevOps Engineer – Professional exam: Configuration Management and Infrastructure as Code, and Monitoring and Logging. These domains cover the monitoring, management, and provisioning of applications and infrastructure from a DevOps perspective.

Diwa del Mundo

Principal Cloud Architect

Diwa holds six AWS certifications (Solutions Architect - Professional, DevOps Engineer - Professional, Security - Specialty, and the three Associate certifications). Aside from his professional interest in AWS, Diwa is active in the community as an AWS User Group Philippines Cloud Community Leader - Core. He is an IT professional with more than ten years of experience and is passionate about helping individuals and organizations get into the cloud.
