Cyberithub

AWS Certified DevOps Engineer - Professional (DOP-C02) Practice Questions Part - 1



In this article, we will go through the latest practice questions to help you succeed in the AWS Certified DevOps Engineer - Professional (DOP-C02) exam. This certification tests advanced technical skills and expertise in automating processes, managing deployments, and implementing DevOps practices on AWS. Passing the exam demonstrates proficiency in infrastructure automation, CI/CD pipeline management, and operational monitoring using AWS services. To boost your confidence, we are presenting a series of practice questions modeled closely on the real exam.

 


Also Read: AWS Certified Developer - Associate (DVA-C02) Practice Questions and Answers Part - 1

1. A company allows development teams to create their own development and test environments in the AWS Cloud. The company's security team requires the installation of specific monitoring and security software on all Amazon EC2 instances. Consequently, the DevOps team has created a set of AMIs with the required security and monitoring software pre-installed. The DevOps team wants to implement a system to automate the discovery and termination of any EC2 instance that does not use the required AMIs. How should the DevOps team implement this system to meet these requirements?

a) Create a set of approved products in AWS Service Catalog that use the approved AMIs. Grant the development users IAM permissions to provision the AWS Service Catalog products.

b) Create an SCP that explicitly denies the ec2:CreateInstance action if the AMI is not an approved AMI. Attach the SCP to an IAM group that is associated with the development users.

c) Use AWS Config rules to detect noncompliant EC2 instances. Set up a remediation action that terminates noncompliant EC2 instances.

d) Create AWS CloudFormation templates that use the approved AMIs. Grant the development users IAM permissions to create CloudFormation stacks from those templates.

Ans. c) Use AWS Config rules to detect noncompliant EC2 instances. Set up a remediation action that terminates noncompliant EC2 instances.
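To make the chosen answer concrete, here is a minimal sketch of the AWS Config setup it describes, using the managed rule APPROVED_AMIS_BY_ID with an automatic remediation that terminates noncompliant instances. The AMI IDs are placeholders, not values from the question.

```python
import json

# Parameters for the AWS Config managed rule APPROVED_AMIS_BY_ID.
# The AMI IDs below are placeholders -- substitute the approved AMIs.
config_rule = {
    "ConfigRuleName": "approved-amis-by-id",
    "Source": {
        "Owner": "AWS",
        "SourceIdentifier": "APPROVED_AMIS_BY_ID",
    },
    "Scope": {"ComplianceResourceTypes": ["AWS::EC2::Instance"]},
    "InputParameters": json.dumps({"amiIds": "ami-0abc1234,ami-0def5678"}),
}

# The remediation action terminates noncompliant instances by targeting
# the AWS-TerminateEC2Instance Systems Manager Automation runbook.
remediation = {
    "ConfigRuleName": "approved-amis-by-id",
    "TargetType": "SSM_DOCUMENT",
    "TargetId": "AWS-TerminateEC2Instance",
    "Automatic": True,
}

# In a real account these dicts would be passed to
# boto3.client("config").put_config_rule(ConfigRule=config_rule) and
# put_remediation_configurations(...) (which also needs retry settings
# such as MaximumAutomaticAttempts when Automatic is True).
```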

 

2. A company hired a third-party development team to build a web-based application. The application runs on Amazon EC2 instances across multiple Availability Zones. The application writes JSON log files locally. Each log entry is a separate JSON object. The log entries include application events and informational entries that contain personally identifiable information (PII) such as phone number and address. The third-party team wants to access the application logs on the EC2 instances to perform root cause analysis when issues and errors occur. The third-party team needs access to the application data but must not be able to access the PII. The company must grant the third-party team access to view the application log files without providing SSH access directly to the production servers. Which solution will meet these requirements MOST cost-effectively?

a) Set up a centralized syslog server on EC2 instances to collect all application server logs. Use AWS Batch to schedule a daily job to consume the logs from the syslog server. Ensure that the batch task filters the data to output only application event log entries to an Amazon S3 bucket. Grant the third-party team access to the S3 bucket that contains the logs.

b) Send the application log files from the EC2 instances to Amazon CloudWatch Logs by using the unified CloudWatch agent. Create an AWS Lambda function that receives log entries and writes them to an Amazon OpenSearch Service cluster. Subscribe the Lambda function to the CloudWatch Logs log group with a subscription filter that sends only application events. Share the OpenSearch Dashboards login URL with the third-party team.

c) Send the application log files from the EC2 instances to an Amazon Data Firehose delivery stream by using the unified CloudWatch agent. Enable data transformation on the delivery stream with an AWS Lambda function that drops all log entries except the application events. Deliver the delivery stream to an Amazon S3 bucket. Grant the third-party team access to the S3 bucket that contains the logs.

d) Send the application log files from the EC2 instances to Amazon CloudWatch Logs by using the unified CloudWatch agent. Create an AWS Lambda function that receives log entries and writes them to an Amazon S3 bucket. Subscribe the Lambda function to the CloudWatch Logs log group with a subscription filter that sends only application events. Grant the third-party team access to the S3 bucket that contains the logs.

Ans. d) Send the application log files from the EC2 instances to Amazon CloudWatch Logs by using the unified CloudWatch agent. Create an AWS Lambda function that receives log entries and writes them to an Amazon S3 bucket. Subscribe the Lambda function to the CloudWatch Logs log group with a subscription filter that sends only application events. Grant the third-party team access to the S3 bucket that contains the logs.
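A CloudWatch Logs subscription delivers records to Lambda as a base64-encoded, gzip-compressed payload. The sketch below decodes that payload and keeps only application events; the subscription filter already does this server-side in the chosen answer, but filtering defensively in the Lambda as well costs little. The `"type"` field is an assumed convention for this application's JSON log entries, and the final S3 write is left as a comment.

```python
import base64
import gzip
import json

def filter_application_events(event):
    """Decode a CloudWatch Logs subscription payload and keep only
    application event entries, dropping informational entries that
    may contain PII. The 'type' field is an assumed log convention."""
    payload = json.loads(
        gzip.decompress(base64.b64decode(event["awslogs"]["data"]))
    )
    kept = []
    for log_event in payload["logEvents"]:
        entry = json.loads(log_event["message"])
        if entry.get("type") == "application_event":
            kept.append(entry)
    # A real Lambda would now write `kept` to the S3 bucket shared with
    # the third-party team, e.g. boto3.client("s3").put_object(...).
    return kept
```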

 

3. A company runs an application on Amazon EC2 instances behind an Application Load Balancer (ALB). The instances run in an Amazon EC2 Auto Scaling group across multiple Availability Zones. The company uses an AWS CodeDeploy deployment group to manage blue/green deployments of the application. Currently, operators monitor the application logs on the EC2 instances. The operators manually instruct CodeDeploy to roll back if the error rate is too high. A DevOps engineer needs to automate this part of the deployment process. Which combination of steps should the DevOps engineer take to replace this manual task? (Select TWO.)

a) Configure an Amazon CloudWatch alarm that is invoked when the ALB's HTTPCode_Target_4XX_Count metric exceeds an acceptable threshold.

b) Configure the deployment group to monitor the alarm. If the alarm enters the ALARM state, fail the deployment and automatically roll back the deployment.

c) Create an AWS Lambda function that monitors the alarm and causes the deployment group to roll back the deployment if the alarm enters the ALARM state. Use an Amazon EventBridge scheduled event to invoke the function every hour.

d) Publish an Amazon Simple Notification Service (Amazon SNS) message if the alarm enters the ALARM state. Configure CodeDeploy to monitor the SNS topic and automatically roll back the deployment if CodeDeploy receives an error message.

e) Send the application logs to Amazon CloudWatch Logs. Create a metric filter to monitor error messages in the logs. Initiate an alarm if the error rate is unacceptable.

Ans. b) Configure the deployment group to monitor the alarm. If the alarm enters the ALARM state, fail the deployment and automatically roll back the deployment.

e) Send the application logs to Amazon CloudWatch Logs. Create a metric filter to monitor error messages in the logs. Initiate an alarm if the error rate is unacceptable.
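The two selected steps fit together as sketched below: a metric filter turns error log lines into a CloudWatch metric, an alarm watches that metric, and the deployment group rolls back when the alarm fires. The log group name, filter pattern, and threshold are assumptions for illustration.

```python
# Metric filter: count log lines containing the word "ERROR"
# (assumed log group name and pattern).
metric_filter = {
    "logGroupName": "/app/production",
    "filterName": "app-error-count",
    "filterPattern": "ERROR",
    "metricTransformations": [{
        "metricName": "AppErrorCount",
        "metricNamespace": "App/Production",
        "metricValue": "1",
    }],
}

# Alarm on the resulting metric (threshold of 10 errors/min is assumed).
alarm = {
    "AlarmName": "app-error-rate-high",
    "MetricName": "AppErrorCount",
    "Namespace": "App/Production",
    "Statistic": "Sum",
    "Period": 60,
    "EvaluationPeriods": 2,
    "Threshold": 10,
    "ComparisonOperator": "GreaterThanThreshold",
}

# Deployment group update: watch the alarm and roll back automatically.
deployment_group_update = {
    "alarmConfiguration": {
        "enabled": True,
        "alarms": [{"name": "app-error-rate-high"}],
    },
    "autoRollbackConfiguration": {
        "enabled": True,
        "events": ["DEPLOYMENT_STOP_ON_ALARM"],
    },
}

# These would be passed to boto3's logs.put_metric_filter(...),
# cloudwatch.put_metric_alarm(...), and codedeploy.update_deployment_group(...).
```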

 

4. A company runs a web application on Amazon EC2 instances behind an Application Load Balancer (ALB). The EC2 instances run in an Amazon EC2 Auto Scaling group across multiple Availability Zones. According to the company's security policy, any EC2 instance that is launched must be registered with the company’s auditing service before the EC2 instance can start processing transactions. Which solution will meet these requirements?

a) Add a lifecycle hook to the Auto Scaling group to put new instances in a Pending:Wait state. Call a custom script on the instance that registers the instance with the auditing service. Complete the lifecycle action with the CONTINUE value or the ABANDON value.

b) Write a bootstrap script in the EC2 instance's user data that registers the instance with the auditing software. Return an error code from the bootstrap script if registration fails.

c) Create an Amazon EventBridge scheduled event that runs every 5 minutes. Set an AWS Lambda function as the target of the event. In that Lambda function, scan the contents of the Auto Scaling group. If any EC2 instances have been launched, register the EC2 instances with the auditing service.

d) Create a rule for Amazon EventBridge events that runs when a new EC2 instance is launched. Set the target to be an AWS Lambda function that calls the auditing service to register the instance.

Ans. a) Add a lifecycle hook to the Auto Scaling group to put new instances in a Pending:Wait state. Call a custom script on the instance that registers the instance with the auditing service. Complete the lifecycle action with the CONTINUE value or the ABANDON value.
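The decision logic the instance-side script performs can be sketched like this: register with the auditing service while the instance is held in Pending:Wait, then complete the lifecycle action with CONTINUE on success or ABANDON (which terminates the instance) on failure. The `register_instance` callable stands in for the company's auditing-service client, which is not specified in the question.

```python
def complete_lifecycle(register_instance, instance_id):
    """Decide the lifecycle action result for an instance held in
    Pending:Wait. `register_instance` is a stand-in for the company's
    auditing-service registration call. CONTINUE lets the instance
    proceed to InService; ABANDON terminates it."""
    try:
        register_instance(instance_id)
        result = "CONTINUE"
    except Exception:
        result = "ABANDON"
    # A real script would then call
    # boto3.client("autoscaling").complete_lifecycle_action(
    #     LifecycleHookName=..., AutoScalingGroupName=...,
    #     InstanceId=instance_id, LifecycleActionResult=result)
    return result
```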

 

5. A company is developing Python applications that use a Git repository, AWS CodeBuild, and AWS CodePipeline. The company needs to incorporate an automatic assessment of the source code to identify potential issues and defects within the code before the code is compiled for use. Which solution will meet these requirements?

a) Associate a Git repository with Amazon CodeGuru Profiler. Review any recommendations from pull requests in the console.

b) Configure Amazon CodeGuru Profiler during the prebuild phase of builds in CodeBuild projects. Review any recommendations from the projects in the console.

c) Associate a Git repository with Amazon CodeGuru Reviewer. Review any recommendations from pull requests in the console.

d) Configure Amazon CodeGuru Reviewer during the prebuild phase of builds in CodeBuild projects. Review any recommendations from the projects in the console.

Ans. c) Associate a Git repository with Amazon CodeGuru Reviewer. Review any recommendations from pull requests in the console.

 

6. A software development team uses AWS CodeBuild and AWS CodePipeline to build and test code before release. The same code branch is built and deployed by two different pipelines. One pipeline builds and deploys the code into a development environment where the developers test the code. The other pipeline builds and deploys the code into a user acceptance testing (UAT) environment. Business users report a defect in the application that runs in the UAT environment. The defect is not present in the development environment that runs the same code version. Which approach should a DevOps engineer take to ensure that identical build artifacts are deployed across identical environments?

a) Create an AWS Lambda function that validates the build artifacts. Reuse a single build stage that consists of the same CodeBuild project in both pipelines. Add a stage after the build stage to initiate the Lambda function. Use CodeDeploy in the deployment stage of each pipeline to deploy the artifacts to the associated environment.

b) Create an AWS Lambda function that validates the build artifacts. Create one pipeline that builds the source code and publishes the application artifacts to a location on Amazon S3. Set up an S3 event that initiates the Lambda function. Use two other pipelines to retrieve the artifacts from Amazon S3 and deploy them to the different environments by using CodeDeploy.

c) Reuse a single build stage that consists of the same CodeBuild project in both pipelines. Create a different buildspec file for each environment. Add the --buildspec-override parameter to the CodeBuild command to specify the correct buildspec for the environment. Specify the same AWS CloudFormation template in the deployment stage of each pipeline to create the environments.

d) Create an AWS Lambda function that uses the CodeBuild BuildArtifacts API to retrieve and compare the artifacts' SHA-256 checksums across the pipelines and return an error if the artifacts' checksums do not match. Add a stage after the build stage in the UAT pipeline to initiate the Lambda function. Specify the same AWS CloudFormation template in the deployment stage of each pipeline to create the environments.

Ans. d) Create an AWS Lambda function that uses the CodeBuild BuildArtifacts API to retrieve and compare the artifacts' SHA-256 checksums across the pipelines and return an error if the artifacts' checksums do not match. Add a stage after the build stage in the UAT pipeline to initiate the Lambda function. Specify the same AWS CloudFormation template in the deployment stage of each pipeline to create the environments.
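The heart of the validation Lambda in option d is a simple checksum comparison. The sketch below assumes the artifact bytes have already been fetched from each pipeline's S3 artifact location (via boto3); only the comparison itself is shown.

```python
import hashlib

def artifact_sha256(data: bytes) -> str:
    """Hex SHA-256 digest of a build artifact's bytes."""
    return hashlib.sha256(data).hexdigest()

def checksums_match(dev_artifact: bytes, uat_artifact: bytes) -> bool:
    """Return True only when both pipelines built byte-identical
    artifacts; the Lambda would raise an error (failing the UAT
    pipeline stage) when this returns False."""
    return artifact_sha256(dev_artifact) == artifact_sha256(uat_artifact)
```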

 

7. A DevOps engineer manages several AWS accounts by using AWS Organizations. The DevOps engineer needs to monitor specific service quotas on a daily basis in all the accounts. The DevOps engineer also needs to receive notifications whenever service usage reaches 80% of the quota. Which solution will meet these requirements?

a) Enable trusted access between Service Quotas and Organizations. Create an Amazon CloudWatch alarm for each AWS service that is needed. Enter a threshold of 80% for the quota. Configure the alarm to send a message to an Amazon Simple Notification Service (Amazon SNS) topic when the alarm enters the ALARM state.

b) Create an AWS Lambda function that calls the DescribeTrustedAdvisorCheckResult AWS Support API operation. Configure the Lambda function to open a support case to increase any service quota in which the check is in WARN status. Deploy the Lambda function in every account. Create a daily scheduled event in Amazon EventBridge in a selected governing account to invoke the Lambda function in every account over the EventBridge bus.

c) Enable trusted access between Service Quotas and Organizations. Create an Amazon CloudWatch alarm for each AWS service that is needed. Enter a threshold of 80% for the quota. Configure the alarm to send a message by using Amazon EventBridge when the alarm enters the ALARM state.

d) Create an AWS Lambda function that calls the RefreshTrustedAdvisorCheck and DescribeTrustedAdvisorCheckResult AWS Support API operations. Configure the Lambda function to publish a message to an Amazon Simple Notification Service (Amazon SNS) topic if any service quota checks are 80% or higher. Deploy the Lambda function in every account. Create a daily scheduled event in Amazon EventBridge in the Organizations management account to invoke the Lambda function in every account over the EventBridge bus.

Ans. a) Enable trusted access between Service Quotas and Organizations. Create an Amazon CloudWatch alarm for each AWS service that is needed. Enter a threshold of 80% for the quota. Configure the alarm to send a message to an Amazon Simple Notification Service (Amazon SNS) topic when the alarm enters the ALARM state.
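A quota-utilization alarm like the one in the answer is typically built with CloudWatch metric math over the AWS/Usage metrics that Service Quotas publishes, using the SERVICE_QUOTA() function. The sketch below tracks on-demand EC2 vCPUs; the SNS topic ARN is a placeholder.

```python
# put_metric_alarm parameters: fire at >= 80% utilization of the
# on-demand EC2 vCPU quota, notifying an (assumed) SNS topic.
alarm = {
    "AlarmName": "ec2-vcpu-quota-80pct",
    "Metrics": [
        {
            "Id": "usage",
            "MetricStat": {
                "Metric": {
                    "Namespace": "AWS/Usage",
                    "MetricName": "ResourceCount",
                    "Dimensions": [
                        {"Name": "Service", "Value": "EC2"},
                        {"Name": "Resource", "Value": "vCPU"},
                        {"Name": "Type", "Value": "Resource"},
                        {"Name": "Class", "Value": "Standard/OnDemand"},
                    ],
                },
                "Period": 300,
                "Stat": "Maximum",
            },
            "ReturnData": False,
        },
        {
            # Metric math: usage as a percentage of the applied quota.
            "Id": "pct",
            "Expression": "(usage / SERVICE_QUOTA(usage)) * 100",
            "ReturnData": True,
        },
    ],
    "Threshold": 80,
    "ComparisonOperator": "GreaterThanOrEqualToThreshold",
    "EvaluationPeriods": 1,
    "AlarmActions": ["arn:aws:sns:us-east-1:111111111111:quota-alerts"],
}
# Passed to boto3.client("cloudwatch").put_metric_alarm(**alarm),
# repeated per service/quota of interest in each account.
```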

 

8. A company runs an externally facing RESTful web service with thousands of customers. The logic is implemented as a single AWS Lambda function that analyzes a short text document and returns the reading difficulty level. The application includes an Amazon API Gateway API with Lambda custom integration. The current request body has the following format:

{
  "text" : "<text to be evaluated>"
}

The company has updated the Lambda function to accept text in different languages. The new version of the Lambda function requires the language of the incoming text as an additional argument. The request body for an updated API will now require a new argument:

{
  "language" : "English" | "Spanish" | "French"
  "text" : "<text to be evaluated>"
}

The company's DevOps team needs to make the new release available for customers. There must be only one production version of the Lambda function to avoid redundant maintenance issues and duplicate releases of future improvements. Existing customers must be able to migrate to the new version, but customers who cannot immediately make changes on their part must continue to access the service successfully. Which process should the DevOps team use to deploy the new version?

a) Deploy the new Lambda function. Create a new API that is integrated with the new Lambda function. Deploy the new API. Keep the existing API and Lambda function unchanged.

b) Deploy the new Lambda function. Integrate the existing API with the new Lambda function. Deploy the existing API. Delete the existing Lambda function.

c) Deploy the new Lambda function. Integrate the existing API with the new Lambda function. Deploy the existing API. Create a new API that is integrated with the new Lambda function. Add a mapping template to the integration request of the new API that adds "language" : "English" to the body. Deploy the new API. Delete the existing Lambda function.

d) Deploy the new Lambda function. Create a new API that is integrated with the new Lambda function. Deploy the new API. Integrate the existing API with the new Lambda function. Add a mapping template to the integration request of the existing API that adds "language" : "English" to the body. Deploy the existing API. Delete the existing Lambda function.

Ans. d) Deploy the new Lambda function. Create a new API that is integrated with the new Lambda function. Deploy the new API. Integrate the existing API with the new Lambda function. Add a mapping template to the integration request of the existing API that adds "language" : "English" to the body. Deploy the existing API. Delete the existing Lambda function.
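The key trick in the answer is the body mapping template on the existing API's integration request, which injects `"language": "English"` so legacy requests satisfy the new Lambda contract without any client change. Below is what such a VTL template could look like, plus a plain-Python equivalent of the transformation for illustration.

```python
import json

# A VTL body mapping template for the existing API's Lambda custom
# integration (illustrative; $input.json('$.text') emits the original
# JSON value, quotes included).
MAPPING_TEMPLATE = """{
  "language": "English",
  "text": $input.json('$.text')
}"""

def apply_default_language(body: str) -> str:
    """Python equivalent of what the template does to a legacy request:
    default the language to English and pass the text through."""
    payload = json.loads(body)
    return json.dumps({"language": "English", "text": payload["text"]})
```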

 

9. A company deployed AWS resources by using an AWS CloudFormation template. The template specified only the properties necessary for the resource creation. The template inherited the default values for the remaining optional properties. To deploy the workload with proper configuration, the company had to reconfigure the default values outside of CloudFormation. The reconfiguration led to configuration inconsistencies. The company wants to maintain all the settings for the properties through the template. The company wants to identify any changes made outside of CloudFormation and remediate the template back to the original configuration. Which solution will meet these requirements with the LEAST implementation effort?

a) Update the template to specify resource configuration values for all properties, including default values. Create an AWS Lambda function that detects drift on the existing stack and updates the stack by using the existing template. Use an Amazon EventBridge rule to schedule the Lambda function to run daily.

b) Create an AWS Lambda function that detects drift on the existing stack and updates the stack by using the existing template. Use an Amazon EventBridge rule to schedule the Lambda function to run daily.

c) Update the template to specify resource configuration values for all properties, including default values. Use the cloudformation-stack-drift-detection-check AWS Config rule. Configure the rule’s automatic remediation action to run the AWS-UpdateCloudFormationStack AWS Systems Manager Automation runbook.

d) Use the cloudformation-stack-drift-detection-check AWS Config rule. Configure the rule’s automatic remediation action to run the AWS-UpdateCloudFormationStack AWS Systems Manager Automation runbook.

Ans. c) Update the template to specify resource configuration values for all properties, including default values. Use the cloudformation-stack-drift-detection-check AWS Config rule. Configure the rule’s automatic remediation action to run the AWS-UpdateCloudFormationStack AWS Systems Manager Automation runbook.

 

10. A company is writing a web application. The application includes an Amazon API Gateway API that is integrated with AWS Lambda functions. The Lambda functions store data in an Amazon DynamoDB table. The Lambda functions store session information in an Amazon ElastiCache (Redis OSS) cluster. What should a DevOps engineer do to deploy the application in the MOST operationally efficient manner?

a) Deploy all the infrastructure with an AWS Serverless Application Model (AWS SAM) template.

b) Deploy all the infrastructure with an AWS CloudFormation template. Use custom resources to deploy the API, Lambda functions, and DynamoDB table.

c) Deploy the ElastiCache cluster with an AWS CloudFormation template. Deploy the rest of the application with an AWS Serverless Application Model (AWS SAM) template.

d) Deploy all the infrastructure with an AWS CloudFormation template.

Ans. a) Deploy all the infrastructure with an AWS Serverless Application Model (AWS SAM) template.

 

11. A company runs a RESTful web service. The company deploys the API by using Amazon API Gateway. The API uses AWS Lambda custom integration to call the business logic, which is implemented in Lambda functions. Amazon Route 53 is used for DNS. The company's DevOps team has deployed a new version of the Lambda functions, and the Lambda functions are ready to receive traffic. The DevOps team wants to send 10% of the production traffic to the new Lambda functions for a week before fully converting to the new release. The deployment solution must have no impact on users of the web service. What is the MOST operationally efficient deployment solution that meets these requirements?

a) Deploy the API to a new stage. Integrate the new stage with the new Lambda functions. Create a custom domain name for the new stage in API Gateway. Use simple distribution to send 10% of API traffic to the new stage and 90% of API traffic to the existing stage.

b) Add a canary release to the existing API production stage. Configure the canary settings to send 10% of the requests to the new Lambda functions and 90% of the requests to the existing Lambda functions.

c) Create a new API for the new release. Create a second domain in Route 53 for the new API. Use a weighted routing policy to send 10% of the traffic to the new release and 90% of the traffic to the existing release.

d) Reconfigure the API Gateway API to use AWS Lambda proxy integration. Configure the Lambda proxy to send 10% of the requests to the new Lambda functions. Send the remainder of the requests to the existing Lambda functions.

Ans. b) Add a canary release to the existing API production stage. Configure the canary settings to send 10% of the requests to the new Lambda functions and 90% of the requests to the existing Lambda functions.
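The canary in the answer is configured on the existing production stage. The sketch below shows the `update_stage` patch operations, assuming (hypothetically) that the integration references the Lambda alias through a stage variable named `lambdaAlias` so the canary can point 10% of traffic at the new version; the API id is a placeholder.

```python
# update_stage parameters enabling a 10% canary on the prod stage.
# "a1b2c3d4e5" and the lambdaAlias stage variable are assumptions.
canary_update = {
    "restApiId": "a1b2c3d4e5",
    "stageName": "prod",
    "patchOperations": [
        {"op": "replace",
         "path": "/canarySettings/percentTraffic",
         "value": "10"},
        {"op": "replace",
         "path": "/canarySettings/stageVariableOverrides/lambdaAlias",
         "value": "new"},
    ],
}
# Passed to boto3.client("apigateway").update_stage(**canary_update).
# After the one-week bake, promote the canary settings to the stage
# and remove the canary.
```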

 

12. A company uses Amazon Elastic Container Service (Amazon ECS) to manage Docker containers. The company recently discovered that some of the ECS tasks reached the STOPPED state when an essential container in the task exited. The company wants to receive email notifications whenever an essential container in the task exits. A DevOps engineer creates a new Amazon Simple Notification Service (Amazon SNS) topic and subscribes the notification email address to the new SNS topic. What should the DevOps engineer do next to meet the requirements with the LEAST amount of development effort?

a) Within the ECS cluster options, configure a notification with the type as lastStatus and the stoppedReason as "Essential container in task exited". Set the new SNS topic as the target for the notifications.

b) Create an Amazon EventBridge rule. Configure the rule with aws.ecs as a source. Configure details by adding "Essential container in task exited" in the stoppedReason field and by adding "STOPPED" in the lastStatus field. Set the new SNS topic as the target for the notifications.

c) Create an AWS Lambda function to perform a DescribeTasks API call. Get the stoppedReason response element. Configure the Lambda function to publish to the SNS topic if the stoppedReason is "Essential container in task exited".

d) Edit the task definition to set the notifications parameter to true for the essential containers. Add "Essential container in task exited" as the stoppedReason option. Set the new SNS topic as the target for the notifications.

Ans. b) Create an Amazon EventBridge rule. Configure the rule with aws.ecs as a source. Configure details by adding "Essential container in task exited" in the stoppedReason field and by adding "STOPPED" in the lastStatus field. Set the new SNS topic as the target for the notifications.
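The EventBridge rule in the answer matches ECS task state change events. Here is the event pattern, plus a tiny matcher that approximates EventBridge's behavior for this exact pattern so the logic can be exercised locally.

```python
# Event pattern for the EventBridge rule; the rule's target would be
# the new SNS topic.
event_pattern = {
    "source": ["aws.ecs"],
    "detail-type": ["ECS Task State Change"],
    "detail": {
        "lastStatus": ["STOPPED"],
        "stoppedReason": ["Essential container in task exited"],
    },
}

def matches(event: dict) -> bool:
    """Tiny local approximation of EventBridge matching for this
    specific pattern (exact string membership on three fields)."""
    detail = event.get("detail", {})
    return (
        event.get("source") in event_pattern["source"]
        and detail.get("lastStatus") in event_pattern["detail"]["lastStatus"]
        and detail.get("stoppedReason") in event_pattern["detail"]["stoppedReason"]
    )
```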

 

13. A DevOps engineer used a script that calls the AWS Elastic Beanstalk CLI to launch an Elastic Beanstalk application in development and test environments during initial development. When the application was ready for release, the production environment required some custom settings. The DevOps engineer added an .ebextensions file to the application source bundle to do the following:

  • Change several environment variables (log level and other variables)
  • Change the instance type of the Amazon EC2 instances from t3.small to t3.medium

The DevOps engineer ran the script with no errors. The new production environment contained the new environment variable values. However, the new environment still ran on t3.small instances instead of the t3.medium instances specified in the .ebextensions file. What is the root cause of this issue?

a) Default values for the service cannot be overwritten by .ebextensions configuration files.

b) Values that are specified as arguments to the Elastic Beanstalk CLI cannot be overridden by .ebextensions configuration files.

c) The instance type cannot be configured by using .ebextensions configuration files.

d) The user who launched the environment does not have permissions to launch the t3.medium instance type.

Ans. b) Values that are specified as arguments to the Elastic Beanstalk CLI cannot be overridden by .ebextensions configuration files.

 

14. A company manages an application that runs on Amazon EC2 instances behind an Application Load Balancer (ALB). The instances are part of an Amazon EC2 Auto Scaling group. The application deployment process sometimes leads to an inconsistent configuration of the EC2 instances. The inconsistency results from differences when the instances are first launched or from manual changes that are performed on individual instances. The company wants a new process that ensures that all EC2 instances are identical, both at launch and throughout their lifetime. Which solution will meet these requirements?

a) Create a new AMI that is fully configured to meet company standards for each operating system update. Whenever the application or AMI changes, update the Auto Scaling group's launch configuration. Configure the launch configuration to use the NewestInstance termination policy. Disable all operating system console access on the instances. Use AWS Systems Manager to perform any emergency maintenance on the entire group.

b) Use an AMI that is provided by AWS. Download the latest security patches upon launch by using the instance user data. Configure the Auto Scaling group's launch configuration to use the OldestLaunchConfiguration termination policy. Whenever the application or AMI changes, update the launch configuration. Double the size of the Auto Scaling group. When the group is fully populated, return the group to its original size. Disable all operating system console access on the instances. Use AWS Systems Manager to perform any emergency maintenance on the entire group.

c) Create a new AMI that is fully configured to meet company standards for each operating system update. Configure the Auto Scaling group's launch configuration to use the OldestLaunchConfiguration termination policy. Whenever the application or AMI changes, update the launch configuration. Double the size of the Auto Scaling group. When the group is fully populated, return the group to its original size. Use AWS Config to perform any emergency maintenance on the entire group.

d) Create a new AMI that is fully configured to meet company standards for each operating system update. Configure the Auto Scaling group's launch configuration to use the OldestLaunchConfiguration termination policy. Whenever the application or AMI changes, update the launch configuration. Double the size of the Auto Scaling group. When the group is fully populated, return the group to its original size. Disable all operating system console access on the instances. Use AWS Systems Manager to perform any emergency maintenance on the entire group.

Ans. d) Create a new AMI that is fully configured to meet company standards for each operating system update. Configure the Auto Scaling group's launch configuration to use the OldestLaunchConfiguration termination policy. Whenever the application or AMI changes, update the launch configuration. Double the size of the Auto Scaling group. When the group is fully populated, return the group to its original size. Disable all operating system console access on the instances. Use AWS Systems Manager to perform any emergency maintenance on the entire group.

 

15. A company stores legal documents in an Amazon S3 bucket. These documents come from the AWS Management Console and several applications for which the company lacks access to the source code. The company's legal team interacts with these documents through a web application. The legal team can mark individual documents for legal hold by setting a LegalHold tag to true. A DevOps engineer must create an automated process to delete all objects that are more than 90 days old and that have not been marked for legal hold. Which combination of steps should the DevOps engineer take to meet these requirements? (Select TWO.)

a) Create an S3 bucket policy that denies PutObject where "StringNotEquals": { "s3:RequestObjectTag/LegalHold": ["true", "false"] }.

b) Create an S3 Lifecycle policy for the S3 bucket that deletes objects that are older than 90 days.

c) Set an S3 event notification to initiate an AWS Lambda function upon object creation. Configure the Lambda function to add the LegalHold tag with a value of false if the tag is not present in the request.

d) Create an S3 bucket policy that uses the "StringEquals": { "s3:ExistingObjectTag/LegalHold": "true" } condition to deny s3:DeleteObject for all objects that have been marked for legal hold.

e) Create an S3 Lifecycle policy that includes a filter rule for the LegalHold tag with a value equal to false. Configure the S3 Lifecycle policy to delete objects that are older than 90 days.

Ans. c) Set an S3 event notification to initiate an AWS Lambda function upon object creation. Configure the Lambda function to add the LegalHold tag with a value of false if the tag is not present in the request.

e) Create an S3 Lifecycle policy that includes a filter rule for the LegalHold tag with a value equal to false. Configure the S3 Lifecycle policy to delete objects that are older than 90 days.
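The two selected steps combine into a lifecycle rule filtered on the LegalHold tag; the bucket name is a placeholder. The companion Lambda (fired by the S3 object-creation event) tags any newly arrived object with `LegalHold=false` when the tag is absent, so every object eventually falls under either the rule or the legal hold.

```python
# Lifecycle rule: expire objects older than 90 days, but only those
# tagged LegalHold=false. Objects the legal team has set to
# LegalHold=true never match the filter and are retained.
lifecycle_configuration = {
    "Rules": [{
        "ID": "expire-unheld-documents",
        "Status": "Enabled",
        "Filter": {"Tag": {"Key": "LegalHold", "Value": "false"}},
        "Expiration": {"Days": 90},
    }]
}
# Applied with boto3.client("s3").put_bucket_lifecycle_configuration(
#     Bucket="legal-docs-bucket",  # placeholder bucket name
#     LifecycleConfiguration=lifecycle_configuration)
```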

 

16. A company runs a hybrid cloud environment. The company is planning the deployment strategy for an application that will run on Amazon EC2 instances and on-premises servers. The strategy will use AWS CodeDeploy. What is the MOST secure way for the company to use CodeDeploy with the on-premises servers?

a)

  • Create an IAM user that has CodeDeploy permissions.
  • On the on-premises servers, create a credentials file that includes an access key from the IAM user.
  • Install the CodeDeploy agent on the on-premises servers.
  • Set up a deployment group.
  • Register a list of target instance IDs with the deployment group.
  • Deploy the application by using the deployment group.

b)

  • Create an IAM user that has CodeDeploy permissions.
  • On the on-premises servers, create a credentials file that includes an access key from the IAM user.
  • Install the CodeDeploy agent on the on-premises servers.
  • Create tags for the on-premises servers.
  • Set up a deployment group based on the tags.
  • Deploy the application by using the deployment group.

c)

  • Create an IAM role that has CodeDeploy permissions.
  • Obtain and store a set of AWS Security Token Service (AWS STS) credentials to allow AssumeRole on the servers.
  • Set up a cron job to refresh those credentials regularly.
  • Install the CodeDeploy agent on the on-premises servers.
  • Register the on-premises servers with CodeDeploy.
  • Create tags for the on-premises servers.
  • Set up a deployment group based on the tags.
  • Deploy the application by using the deployment group.

d)

  • Create an IAM role that has CodeDeploy permissions.
  • Obtain and store a set of AWS Security Token Service (AWS STS) credentials to allow AssumeRole on the servers.
  • Set up a cron job to refresh those credentials regularly.
  • Install the CodeDeploy agent on the on-premises servers.
  • Register the on-premises servers with CodeDeploy.
  • Set up a deployment group.
  • Register a list of target instance IDs with the deployment group.
  • Deploy the application by using the deployment group.

Ans. c)

  • Create an IAM role that has CodeDeploy permissions.
  • Obtain and store a set of AWS Security Token Service (AWS STS) credentials to allow AssumeRole on the servers.
  • Set up a cron job to refresh those credentials regularly.
  • Install the CodeDeploy agent on the on-premises servers.
  • Register the on-premises servers with CodeDeploy.
  • Create tags for the on-premises servers.
  • Set up a deployment group based on the tags.
  • Deploy the application by using the deployment group.

 

17. An application runs a web tier on stateless Amazon EC2 instances behind an Application Load Balancer. The application stores data in an Amazon RDS for MySQL Multi-AZ DB instance. A DevOps engineer is creating a disaster recovery (DR) solution that includes a second AWS Region. The DR solution must have an RTO of less than 2 hours and an RPO of less than 10 minutes. Which solution will meet these requirements MOST cost-effectively?

a) Use AWS Elastic Beanstalk to deploy the application to a second environment in the DR Region. Create a cross-Region read replica for the database in the DR Region. In case of a disaster, swap the environment URLs in Elastic Beanstalk. Promote the RDS read replica to primary.

b) Maintain up-to-date AMIs for the web tier in the DR Region. Create a cross-Region read replica for the database in the DR Region. In case of a disaster, launch a new stack in the DR Region by using an AWS CloudFormation template. Promote the RDS read replica to primary. Update DNS records to point to the new load balancer.

c) Create an AWS Lambda function to take Amazon Elastic Block Store (Amazon EBS) volume snapshots and RDS snapshots and copy them to the DR Region. Create an Amazon EventBridge event to schedule the Lambda function to run once an hour. In case of disaster, recreate the resources in the DR Region by using the snapshots and the RDS backup. Update DNS records to point to the new load balancer.

d) Clone the web tier in the DR Region. Deploy application updates in each Region. Migrate the database to an Amazon Aurora MySQL global database. Designate the DR Region as the secondary Region for the database. Use Amazon Route 53 health checks to fail over the DNS to the DR Region in case of disaster.

Ans. b) Maintain up-to-date AMIs for the web tier in the DR Region. Create a cross-Region read replica for the database in the DR Region. In case of a disaster, launch a new stack in the DR Region by using an AWS CloudFormation template. Promote the RDS read replica to primary. Update DNS records to point to the new load balancer.

 

18. A company uses AWS Organizations to manage multiple accounts. Each account uses several AWS Regions. The company needs to implement an automated solution to detect when any existing or new Amazon S3 bucket in any account or Region is made public. The solution must respond by blocking public access. The solution also must detect and respond to changes in ACLs and bucket policies. Which solution will meet these requirements?

a) Create Amazon EventBridge global endpoints with event replication activated in each account. Select a designated security account for a secondary event bus. In the designated security account, configure EventBridge rules on the event bus to detect changes to object ACLs, bucket ACLs, and bucket policies. For each rule, target an AWS Lambda function to remove any public access on misconfigured S3 buckets.

b) Create an organization trail in AWS CloudTrail with the delivery S3 bucket in a designated security account. In the designated security account, configure Amazon EventBridge rules on the event bus to detect changes to object ACLs, bucket ACLs, and bucket policies. For each rule, target an AWS Lambda function to remove any public access on misconfigured S3 buckets.

c) Turn on Amazon EventBridge in all S3 buckets in each account. Configure EventBridge rules on the event bus to detect changes to object ACLs, bucket ACLs, and bucket policies. For each rule, target an AWS Lambda function to remove any public access on misconfigured S3 buckets.

d) Create an AWS Config rule in each account to detect public access in object ACLs, bucket ACLs, and bucket policies. Set up an automated remediation action for the rule. Use an AWS Systems Manager Automation runbook to remove any public access on misconfigured S3 buckets.

Ans. d) Create an AWS Config rule in each account to detect public access in object ACLs, bucket ACLs, and bucket policies. Set up an automated remediation action for the rule. Use an AWS Systems Manager Automation runbook to remove any public access on misconfigured S3 buckets.
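Whether the remediation runs through a Systems Manager Automation runbook (as in the answer) or custom code, the step that actually blocks public access boils down to a single S3 API call. The following is a minimal sketch of the parameters that call would take; the bucket name is a placeholder.

```python
# Sketch of the remediation step: the parameters a remediation runbook
# (or equivalent code) would pass to S3's PutPublicAccessBlock API to
# block all public access on a misconfigured bucket.

def build_public_access_block(bucket_name):
    """Return PutPublicAccessBlock parameters that block all public access."""
    return {
        "Bucket": bucket_name,
        "PublicAccessBlockConfiguration": {
            "BlockPublicAcls": True,
            "IgnorePublicAcls": True,
            "BlockPublicPolicy": True,
            "RestrictPublicBuckets": True,
        },
    }

params = build_public_access_block("example-bucket")
```

Setting all four flags covers both detection vectors in the question: the two ACL flags neutralize public object and bucket ACLs, and the two policy flags neutralize public bucket policies.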

 

19. A company needs to deploy a new AWS Serverless Application Model (AWS SAM) application. The SAM application consists of an Amazon API Gateway HTTP API and AWS Lambda functions. The company has an existing custom service to handle authentication. The existing authentication service is not compatible with existing standards, such as OpenID and SAML. For authentication, the SAM application needs to verify an HTTP header and token that are contained within each request by calling the existing authentication service. Which authentication mechanism for the SAM application will meet these requirements?

a) Lambda authorizer

b) Amazon Cognito user pool

c) Amazon Cognito identity pool

d) API key for API Gateway

Ans. a) Lambda authorizer
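A Lambda authorizer is the right fit because it lets arbitrary code inspect the request and decide. The sketch below uses the HTTP API "simple" response format; the call to the company's custom authentication service is stubbed out, and `validate_with_custom_service`, the header name, and the token value are hypothetical placeholders.

```python
# Minimal sketch of a Lambda authorizer for an API Gateway HTTP API,
# using the simple response format ({"isAuthorized": bool}).

def validate_with_custom_service(token):
    # Placeholder: in practice this would call the existing,
    # non-standard authentication service over the network.
    return token == "valid-token"

def handler(event, context):
    # Inspect the custom header carried on every request.
    token = event.get("headers", {}).get("x-auth-token", "")
    return {"isAuthorized": validate_with_custom_service(token)}

allowed = handler({"headers": {"x-auth-token": "valid-token"}}, None)
denied = handler({"headers": {}}, None)
```

Cognito user/identity pools require standards-based identity sources, and API keys are for usage plans rather than authentication, which is why only the Lambda authorizer can wrap the proprietary service.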

 

20. A company runs an application on Amazon EC2 instances that are spread across multiple Availability Zones. The application processes messages from an Amazon Simple Queue Service (Amazon SQS) queue. Occasionally, the number of messages increases significantly and causes the application to experience delays in processing the messages. To address the delays, the company creates an Amazon EC2 Auto Scaling group. The company implements a target tracking scaling policy that is based on average CPU utilization of the Auto Scaling group. However, the company finds that the EC2 instances are not scaling in a timely manner. Which solution will resolve this issue in the MOST reliable manner?

a) Create a custom metric to calculate the number of messages in the SQS queue divided by the number of active instances. Use this metric to define scaling activity based on an acceptable value.

b) Create an Application Load Balancer (ALB) in front of the Auto Scaling group. Use the ALBRequestCountPerTarget metric as the target metric for the Auto Scaling group.

c) Edit the existing target tracking scaling policy to be based on the NumberOfMessagesReceived metric for the SQS queue rather than the average CPU utilization for the Auto Scaling group.

d) Edit the existing target tracking scaling policy to specify a lower CPU utilization percentage and a lower warm-up period to initiate the scaling.

Ans. a) Create a custom metric to calculate the number of messages in the SQS queue divided by the number of active instances. Use this metric to define scaling activity based on an acceptable value.
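The custom metric in the answer is commonly called "backlog per instance". A minimal sketch of the calculation, which would be published to CloudWatch on a schedule and used as the target tracking metric:

```python
# Sketch of the custom scaling metric: SQS backlog per instance.
# The acceptable target value would be derived from how many messages
# one instance can work through within the latency objective.

def backlog_per_instance(queue_depth, running_instances):
    """ApproximateNumberOfMessages divided by in-service instance count."""
    return queue_depth / max(running_instances, 1)

# Example: 1,000 queued messages across 4 instances -> 250 per instance.
metric_value = backlog_per_instance(1000, 4)
```

CPU utilization lags the queue, and `NumberOfMessagesReceived` does not account for fleet size, which is why the ratio metric scales more reliably.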

 

21. A company has a mobile application that makes HTTP API calls to an Application Load Balancer (ALB). The ALB routes requests to an AWS Lambda function. Many different versions of the application are in use at any given time, including versions that are in testing by a subset of users. The version of the application is defined in the user-agent header that is sent with all requests to the API. After a series of recent changes to the API, the company has observed issues with the application. The company needs to gather a metric for each API operation by response code for each version of the application that is in use. A DevOps engineer has modified the Lambda function to extract the API operation name, the version information from the user-agent header, and the response code. Which additional set of actions should the DevOps engineer take to gather the required metrics?

a) Modify the Lambda function to write the API operation name, response code, and version number as a log line to an Amazon CloudWatch Logs log group. Configure a CloudWatch Logs metric filter that increments a metric for each API operation name. Specify response code and application version as dimensions for the metric.

b) Modify the Lambda function to write the API operation name, response code, and version number as a log line to an Amazon CloudWatch Logs log group. Configure a CloudWatch Logs Insights query to populate CloudWatch metrics from the log lines. Specify response code and application version as dimensions for the metric.

c) Configure the ALB access logs to write to an Amazon CloudWatch Logs log group. Modify the Lambda function to respond to the ALB with the API operation name, response code, and version number as response metadata. Configure a CloudWatch Logs metric filter that increments a metric for each API operation name. Specify response code and application version as dimensions for the metric.

d) Configure AWS X-Ray integration on the Lambda function. Modify the Lambda function to create an X-Ray subsegment with the API operation name, response code, and version number. Configure X-Ray insights to extract an aggregated metric for each API operation name and to publish the metric to Amazon CloudWatch. Specify response code and application version as dimensions for the metric.

Ans. a) Modify the Lambda function to write the API operation name, response code, and version number as a log line to an Amazon CloudWatch Logs log group. Configure a CloudWatch Logs metric filter that increments a metric for each API operation name. Specify response code and application version as dimensions for the metric.
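For the metric filter in the answer to extract dimensions, the Lambda function should log structured JSON so the filter can match fields such as `$.operation` and `$.statusCode`. A minimal sketch of that log line; the field names are illustrative.

```python
import json

# Sketch of the structured log line the Lambda function could write to
# CloudWatch Logs. A metric filter on the log group can then match the
# JSON fields and publish a metric with response code and application
# version as dimensions.

def build_log_line(operation, status_code, app_version):
    return json.dumps({
        "operation": operation,
        "statusCode": status_code,
        "appVersion": app_version,
    })

line = build_log_line("GetOrder", 500, "2.3.1")
parsed = json.loads(line)
```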

 

22. A company provides an application to customers. The application has an Amazon API Gateway REST API that invokes an AWS Lambda function. On initialization, the Lambda function loads a large amount of data from an Amazon DynamoDB table. The data load process results in long cold-start times of 8-10 seconds. The DynamoDB table has DynamoDB Accelerator (DAX) configured. Customers report that the application intermittently takes a long time to respond to requests. The application receives thousands of requests throughout the day. In the middle of the day, the application experiences 10 times more requests than at any other time of the day. Near the end of the day, the application’s request volume decreases to 10% of its normal total. A DevOps engineer needs to reduce the latency of the Lambda function at all times of the day. Which solution will meet these requirements?

a) Configure provisioned concurrency on the Lambda function with a concurrency value of 1. Delete the DAX cluster for the DynamoDB table.

b) Configure reserved concurrency on the Lambda function with a concurrency value of 0.

c) Configure provisioned concurrency on the Lambda function. Configure AWS Application Auto Scaling on the Lambda function with provisioned concurrency values set to a minimum of 1 and a maximum of 100.

d) Configure reserved concurrency on the Lambda function. Configure AWS Application Auto Scaling on the API Gateway API with a reserved concurrency maximum value of 100.

Ans. c) Configure provisioned concurrency on the Lambda function. Configure AWS Application Auto Scaling on the Lambda function with provisioned concurrency values set to a minimum of 1 and a maximum of 100.
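For reference, a sketch of the Application Auto Scaling scalable-target registration the answer describes, in boto3 parameter shape. The function and alias names are placeholders; provisioned concurrency must be configured on a version or alias, which is why the resource ID includes one.

```python
# Sketch of the register_scalable_target parameters for scaling Lambda
# provisioned concurrency between 1 (keeps one warm environment through
# the quiet periods) and 100 (absorbs the midday peak).

scalable_target = {
    "ServiceNamespace": "lambda",
    "ResourceId": "function:my-function:live",   # function:NAME:ALIAS
    "ScalableDimension": "lambda:function:ProvisionedConcurrency",
    "MinCapacity": 1,
    "MaxCapacity": 100,
}
```

A fixed provisioned concurrency of 1 (option a) cannot absorb the 10x midday peak, and reserved concurrency caps rather than pre-warms capacity, which is why auto scaled provisioned concurrency is the fit.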

 

23. A company is adopting AWS CodeDeploy to automate its application deployments for a Java-Apache Tomcat application with an Apache web server. The development team started with a proof of concept, created a deployment group for a developer environment, and performed functional tests within the application. After completion, the team will create additional deployment groups for staging and production. The current log level is configured within the Apache settings, but the team wants to change this configuration dynamically when the deployment occurs, so that they can set different log level configurations depending on the deployment group without having a different application revision for each group. How can these requirements be met with the LEAST management overhead and without requiring different script versions for each deployment group?

a) Tag the Amazon EC2 instances depending on the deployment group. Then place a script into the application revision that calls the metadata service and the EC2 API to identify which deployment group the instance is part of. Use this information to configure the log level settings. Reference the script as part of the AfterInstall lifecycle hook in the appspec.yml file.

b) Create a script that uses the CodeDeploy environment variable DEPLOYMENT_GROUP_NAME to identify which deployment group the instance is part of. Use this information to configure the log level settings. Reference this script as part of the BeforeInstall lifecycle hook in the appspec.yml file.

c) Create a CodeDeploy custom environment variable for each environment. Then place a script into the application revision that checks this environment variable to identify which deployment group the instance is part of. Use this information to configure the log level settings. Reference this script as part of the ValidateService lifecycle hook in the appspec.yml file.

d) Create a script that uses the CodeDeploy environment variable DEPLOYMENT_GROUP_ID to identify which deployment group the instance is part of to configure the log level settings. Reference this script as part of the Install lifecycle hook in the appspec.yml file.

Ans. b) Create a script that uses the CodeDeploy environment variable DEPLOYMENT_GROUP_NAME to identify which deployment group the instance is part of. Use this information to configure the log level settings. Reference this script as part of the BeforeInstall lifecycle hook in the appspec.yml file.
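A minimal sketch of what such a hook script could look like: one script shipped in every revision, branching on the CodeDeploy-provided `DEPLOYMENT_GROUP_NAME` environment variable. The group names and log levels are illustrative.

```python
import os

# Sketch of a BeforeInstall hook script that derives the Apache log
# level from the deployment group, so a single application revision
# works unchanged across developer, staging, and production groups.

LOG_LEVELS = {
    "developer": "debug",
    "staging": "info",
    "production": "warn",
}

def log_level_for(group_name):
    # Fall back to the most conservative level for unknown groups.
    return LOG_LEVELS.get(group_name, "warn")

level = log_level_for(os.environ.get("DEPLOYMENT_GROUP_NAME", "production"))
```

Running this in BeforeInstall means the log level is in place before the new revision's files are laid down, with no per-group script versions to maintain.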

 

24. A company requires its developers to tag all Amazon Elastic Block Store (Amazon EBS) volumes in an account to indicate a desired backup frequency. This requirement includes EBS volumes that do not require backups. The company uses custom tags named Backup_Frequency that have values of none, daily, or weekly that correspond to the desired backup frequency. An audit finds that developers are occasionally not tagging the EBS volumes. A DevOps engineer needs to ensure that all EBS volumes always have the Backup_Frequency tag so that the company can perform backups at least weekly unless a different value is specified. Which solution will meet these requirements?

a) Set up AWS Config in the account. Create a custom rule that returns a compliance failure for all Amazon EC2 resources that do not have a Backup_Frequency tag applied. Configure a remediation action that uses a custom AWS Systems Manager Automation runbook to apply the Backup_Frequency tag with a value of weekly.

b) Set up AWS Config in the account. Use a managed rule that returns a compliance failure for EC2::Volume resources that do not have a Backup_Frequency tag applied. Configure a remediation action that uses a custom AWS Systems Manager Automation runbook to apply the Backup_Frequency tag with a value of weekly.

c) Turn on AWS CloudTrail in the account. Create an Amazon EventBridge rule that reacts to EBS CreateVolume events. Configure a custom AWS Systems Manager Automation runbook to apply the Backup_Frequency tag with a value of weekly. Specify the runbook as the target of the rule.

d) Turn on AWS CloudTrail in the account. Create an Amazon EventBridge rule that reacts to EBS CreateVolume events or EBS ModifyVolume events. Configure a custom AWS Systems Manager Automation runbook to apply the Backup_Frequency tag with a value of weekly. Specify the runbook as the target of the rule.

Ans. b) Set up AWS Config in the account. Use a managed rule that returns a compliance failure for EC2::Volume resources that do not have a Backup_Frequency tag applied. Configure a remediation action that uses a custom AWS Systems Manager Automation runbook to apply the Backup_Frequency tag with a value of weekly.
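The remediation runbook's core logic is simple: if the volume's tag set lacks `Backup_Frequency`, add it with the default of `weekly`. A sketch, using the EC2 API's `[{"Key": ..., "Value": ...}]` tag shape:

```python
# Sketch of the tag-remediation logic: ensure every EBS volume carries
# a Backup_Frequency tag, defaulting to "weekly" when none is present,
# without overwriting a value a developer already set.

def ensure_backup_tag(tags):
    if any(t["Key"] == "Backup_Frequency" for t in tags):
        return tags
    return tags + [{"Key": "Backup_Frequency", "Value": "weekly"}]

tagged = ensure_backup_tag([{"Key": "Name", "Value": "data-vol"}])
untouched = ensure_backup_tag([{"Key": "Backup_Frequency", "Value": "daily"}])
```

The EventBridge options (c and d) miss volumes that already exist, while the Config rule evaluates existing and future volumes alike.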

 

25. A company is using an Amazon Aurora cluster as the data store for its application. The Aurora cluster is configured with a single DB instance. The application performs read and write operations on the database by using the cluster’s instance endpoint. The company has scheduled an update to be applied to the cluster during an upcoming maintenance window. The cluster must remain available with the least possible interruption during the maintenance window. What should a DevOps engineer do to meet these requirements?

a) Add a reader instance to the Aurora cluster. Update the application to use the Aurora cluster endpoint for write operations. Update the Aurora cluster’s reader endpoint for reads.

b) Add a reader instance to the Aurora cluster. Create a custom ANY endpoint for the cluster. Update the application to use the Aurora cluster’s custom ANY endpoint for read and write operations.

c) Turn on the Multi-AZ option on the Aurora cluster. Update the application to use the Aurora cluster endpoint for write operations. Update the Aurora cluster’s reader endpoint for reads.

d) Turn on the Multi-AZ option on the Aurora cluster. Create a custom ANY endpoint for the cluster. Update the application to use the Aurora cluster’s custom ANY endpoint for read and write operations.

Ans. a) Add a reader instance to the Aurora cluster. Update the application to use the Aurora cluster endpoint for write operations. Update the Aurora cluster’s reader endpoint for reads.

 

26. A company must encrypt all AMIs that the company shares across accounts. A DevOps engineer has access to a source account where an unencrypted custom AMI has been built. The DevOps engineer also has access to a target account where an Amazon EC2 Auto Scaling group will launch EC2 instances from the AMI. The DevOps engineer must share the AMI with the target account. The company has created an AWS Key Management Service (AWS KMS) key in the source account. Which additional steps should the DevOps engineer perform to meet the requirements? (Choose three.)

a) In the source account, copy the unencrypted AMI to an encrypted AMI. Specify the KMS key in the copy action.

b) In the source account, copy the unencrypted AMI to an encrypted AMI. Specify the default Amazon Elastic Block Store (Amazon EBS) encryption key in the copy action.

c) In the source account, create a KMS grant that delegates permissions to the Auto Scaling group service-linked role in the target account.

d) In the source account, modify the key policy to give the target account permissions to create a grant. In the target account, create a KMS grant that delegates permissions to the Auto Scaling group service-linked role.

e) In the source account, share the unencrypted AMI with the target account.

f) In the source account, share the encrypted AMI with the target account.

Ans. a) In the source account, copy the unencrypted AMI to an encrypted AMI. Specify the KMS key in the copy action.

d) In the source account, modify the key policy to give the target account permissions to create a grant. In the target account, create a KMS grant that delegates permissions to the Auto Scaling group service-linked role.

f) In the source account, share the encrypted AMI with the target account.
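Step (a) is a single CopyImage call in the source account. A sketch of its parameter shape; the AMI ID, Region, and KMS key ARN are placeholders.

```python
# Sketch of the CopyImage parameters that produce an encrypted copy of
# the unencrypted AMI, encrypted with the customer managed KMS key that
# will later be granted to the target account.

copy_image_params = {
    "SourceImageId": "ami-0123456789abcdef0",
    "SourceRegion": "us-east-1",
    "Name": "encrypted-copy-of-custom-ami",
    "Encrypted": True,
    "KmsKeyId": "arn:aws:kms:us-east-1:111122223333:key/example-key-id",
}
```

The default EBS encryption key (option b) cannot be shared across accounts, which is why the customer managed key plus the cross-account grant in (d) is required for the Auto Scaling service-linked role to decrypt the snapshots at launch.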

 

27. A company uses AWS CodePipeline pipelines to automate releases of its application. A typical pipeline consists of three stages: build, test, and deployment. The company has been using a separate AWS CodeBuild project to run scripts for each stage. However, the company now wants to use AWS CodeDeploy to handle the deployment stage of the pipelines. The company has packaged the application as an RPM package and must deploy the application to a fleet of Amazon EC2 instances. The EC2 instances are in an EC2 Auto Scaling group and are launched from a common AMI. Which combination of steps should a DevOps engineer perform to meet these requirements? (Choose two.)

a) Create a new version of the common AMI with the CodeDeploy agent installed. Update the IAM role of the EC2 instances to allow access to CodeDeploy.

b) Create a new version of the common AMI with the CodeDeploy agent installed. Create an AppSpec file that contains application deployment scripts and grants access to CodeDeploy.

c) Create an application in CodeDeploy. Configure an in-place deployment type. Specify the Auto Scaling group as the deployment target. Add a step to the CodePipeline pipeline to use EC2 Image Builder to create a new AMI. Configure CodeDeploy to deploy the newly created AMI.

d) Create an application in CodeDeploy. Configure an in-place deployment type. Specify the Auto Scaling group as the deployment target. Update the CodePipeline pipeline to use the CodeDeploy action to deploy the application.

e) Create an application in CodeDeploy. Configure an in-place deployment type. Specify the EC2 instances that are launched from the common AMI as the deployment target. Update the CodePipeline pipeline to use the CodeDeploy action to deploy the application.

Ans. a) Create a new version of the common AMI with the CodeDeploy agent installed. Update the IAM role of the EC2 instances to allow access to CodeDeploy.

d) Create an application in CodeDeploy. Configure an in-place deployment type. Specify the Auto Scaling group as the deployment target. Update the CodePipeline pipeline to use the CodeDeploy action to deploy the application.

 

28. A company’s security team requires that all external Application Load Balancers (ALBs) and Amazon API Gateway APIs are associated with AWS WAF web ACLs. The company has hundreds of AWS accounts, all of which are included in a single organization in AWS Organizations. The company has configured AWS Config for the organization. During an audit, the company finds some externally facing ALBs that are not associated with AWS WAF web ACLs. Which combination of steps should a DevOps engineer take to prevent future violations? (Choose two.)

a) Delegate AWS Firewall Manager to a security account.

b) Delegate Amazon GuardDuty to a security account.

c) Create an AWS Firewall Manager policy to attach AWS WAF web ACLs to any newly created ALBs and API Gateway APIs.

d) Create an Amazon GuardDuty policy to attach AWS WAF web ACLs to any newly created ALBs and API Gateway APIs.

e) Configure an AWS Config managed rule to attach AWS WAF web ACLs to any newly created ALBs and API Gateway APIs.

Ans. a) Delegate AWS Firewall Manager to a security account.

c) Create an AWS Firewall Manager policy to attach AWS WAF web ACLs to any newly created ALBs and API Gateway APIs.

 

29. A security review has identified that an AWS CodeBuild project is downloading a database population script from an Amazon S3 bucket using an unauthenticated request. The security team does not allow unauthenticated requests to S3 buckets for this project. How can this issue be corrected in the MOST secure manner?

a) Add the bucket name to the AllowedBuckets section of the CodeBuild project settings. Update the build spec to use the AWS CLI to download the database population script.

b) Modify the S3 bucket settings to enable HTTPS basic authentication and specify a token. Update the build spec to use cURL to pass the token and download the database population script.

c) Remove unauthenticated access from the S3 bucket with a bucket policy. Modify the service role for the CodeBuild project to include Amazon S3 access. Use the AWS CLI to download the database population script.

d) Remove unauthenticated access from the S3 bucket with a bucket policy. Use the AWS CLI to download the database population script using an IAM access key and a secret access key.

Ans. c) Remove unauthenticated access from the S3 bucket with a bucket policy. Modify the service role for the CodeBuild project to include Amazon S3 access. Use the AWS CLI to download the database population script.
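After the bucket policy removes anonymous access, the CodeBuild service role needs read access to the script object so that `aws s3 cp` in the buildspec authenticates with the role's temporary credentials. A sketch of the policy statement to add; the bucket name and object key are placeholders.

```python
# Sketch of the IAM policy statement for the CodeBuild service role,
# granting read access to just the database population script.

s3_read_statement = {
    "Effect": "Allow",
    "Action": ["s3:GetObject"],
    "Resource": ["arn:aws:s3:::example-bucket/scripts/populate-db.sql"],
}
```

This is why option (c) beats (d): the service role supplies short-lived credentials automatically, with no long-lived access keys to store or rotate.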

 

30. An ecommerce company has chosen AWS to host its new platform. The company’s DevOps team has started building an AWS Control Tower landing zone. The DevOps team has set the identity store within AWS IAM Identity Center (AWS Single Sign-On) to external identity provider (IdP) and has configured SAML 2.0. The DevOps team wants a robust permission model that applies the principle of least privilege. The model must allow the team to build and manage only the team’s own resources. Which combination of steps will meet these requirements? (Choose three.)

a) Create IAM policies that include the required permissions. Include the aws:PrincipalTag condition key.

b) Create permission sets. Attach an inline policy that includes the required permissions and uses the aws:PrincipalTag condition key to scope the permissions.

c) Create a group in the IdP. Place users in the group. Assign the group to accounts and the permission sets in IAM Identity Center.

d) Create a group in the IdP. Place users in the group. Assign the group to OUs and IAM policies.

e) Enable attributes for access control in IAM Identity Center. Apply tags to users. Map the tags as key-value pairs.

f) Enable attributes for access control in IAM Identity Center. Map attributes from the IdP as key-value pairs.

Ans. b) Create permission sets. Attach an inline policy that includes the required permissions and uses the aws:PrincipalTag condition key to scope the permissions.

c) Create a group in the IdP. Place users in the group. Assign the group to accounts and the permission sets in IAM Identity Center.

f) Enable attributes for access control in IAM Identity Center. Map attributes from the IdP as key-value pairs.

 

31. An ecommerce company is receiving reports that its order history page is experiencing delays in reflecting the processing status of orders. The order processing system consists of an AWS Lambda function that uses reserved concurrency. The Lambda function processes order messages from an Amazon Simple Queue Service (Amazon SQS) queue and inserts processed orders into an Amazon DynamoDB table. The DynamoDB table has auto scaling enabled for read and write capacity. Which actions should a DevOps engineer take to resolve this delay? (Choose two.)

a) Check the ApproximateAgeOfOldestMessage metric for the SQS queue. Increase the Lambda function concurrency limit.

b) Check the ApproximateAgeOfOldestMessage metric for the SQS queue. Configure a redrive policy on the SQS queue.

c) Check the NumberOfMessagesSent metric for the SQS queue. Increase the SQS queue visibility timeout.

d) Check the WriteThrottleEvents metric for the DynamoDB table. Increase the maximum write capacity units (WCUs) for the table’s scaling policy.

e) Check the Throttles metric for the Lambda function. Increase the Lambda function timeout.

Ans. a) Check the ApproximateAgeOfOldestMessage metric for the SQS queue. Increase the Lambda function concurrency limit.

d) Check the WriteThrottleEvents metric for the DynamoDB table. Increase the maximum write capacity units (WCUs) for the table’s scaling policy.

 

32. A company has a single AWS account that runs hundreds of Amazon EC2 instances in a single AWS Region. New EC2 instances are launched and terminated each hour in the account. The account also includes existing EC2 instances that have been running for longer than a week. The company’s security policy requires all running EC2 instances to use an EC2 instance profile. If an EC2 instance does not have an instance profile attached, the EC2 instance must use a default instance profile that has no IAM permissions assigned. A DevOps engineer reviews the account and discovers EC2 instances that are running without an instance profile. During the review, the DevOps engineer also observes that new EC2 instances are being launched without an instance profile. Which solution will ensure that an instance profile is attached to all existing and future EC2 instances in the Region?

a) Configure an Amazon EventBridge rule that reacts to EC2 RunInstances API calls. Configure the rule to invoke an AWS Lambda function to attach the default instance profile to the EC2 instances.

b) Configure the ec2-instance-profile-attached AWS Config managed rule with a trigger type of configuration changes. Configure an automatic remediation action that invokes an AWS Systems Manager Automation runbook to attach the default instance profile to the EC2 instances.

c) Configure an Amazon EventBridge rule that reacts to EC2 StartInstances API calls. Configure the rule to invoke an AWS Systems Manager Automation runbook to attach the default instance profile to the EC2 instances.

d) Configure the iam-role-managed-policy-check AWS Config managed rule with a trigger type of configuration changes. Configure an automatic remediation action that invokes an AWS Lambda function to attach the default instance profile to the EC2 instances.

Ans. b) Configure the ec2-instance-profile-attached AWS Config managed rule with a trigger type of configuration changes. Configure an automatic remediation action that invokes an AWS Systems Manager Automation runbook to attach the default instance profile to the EC2 instances.

 

33. A DevOps engineer is building a continuous deployment pipeline for a serverless application that uses AWS Lambda functions. The company wants to reduce the customer impact of an unsuccessful deployment. The company also wants to monitor for issues. Which deploy stage configuration will meet these requirements?

a) Use an AWS Serverless Application Model (AWS SAM) template to define the serverless application. Use AWS CodeDeploy to deploy the Lambda functions with the Canary10Percent15Minutes Deployment Preference Type. Use Amazon CloudWatch alarms to monitor the health of the functions.

b) Use AWS CloudFormation to publish a new stack update, and include Amazon CloudWatch alarms on all resources. Set up an AWS CodePipeline approval action for a developer to verify and approve the AWS CloudFormation change set.

c) Use AWS CloudFormation to publish a new version on every stack update, and include Amazon CloudWatch alarms on all resources. Use the RoutingConfig property of the AWS::Lambda::Alias resource to update the traffic routing during the stack update.

d) Use AWS CodeBuild to add sample event payloads for testing to the Lambda functions. Publish a new version of the functions, and include Amazon CloudWatch alarms. Update the production alias to point to the new version. Configure rollbacks to occur when an alarm is in the ALARM state.

Ans. a) Use an AWS Serverless Application Model (AWS SAM) template to define the serverless application. Use AWS CodeDeploy to deploy the Lambda functions with the Canary10Percent15Minutes Deployment Preference Type. Use Amazon CloudWatch alarms to monitor the health of the functions.

 

34. To run an application, a DevOps engineer launches Amazon EC2 instances with public IP addresses in a public subnet. A user data script obtains the application artifacts and installs them on the instances upon launch. A change to the security classification of the application now requires the instances to run with no access to the internet. While the instances launch successfully and show as healthy, the application does not seem to be installed. Which of the following should successfully install the application while complying with the new rule?

a) Launch the instances in a public subnet with Elastic IP addresses attached. Once the application is installed and running, run a script to disassociate the Elastic IP addresses afterward.

b) Set up a NAT gateway. Deploy the EC2 instances to a private subnet. Update the private subnet’s route table to use the NAT gateway as the default route.

c) Publish the application artifacts to an Amazon S3 bucket and create a VPC endpoint for S3. Assign an IAM instance profile to the EC2 instances so they can read the application artifacts from the S3 bucket.

d) Create a security group for the application instances and allow only outbound traffic to the artifact repository. Remove the security group rule once the install is complete.

Ans. c) Publish the application artifacts to an Amazon S3 bucket and create a VPC endpoint for S3. Assign an IAM instance profile to the EC2 instances so they can read the application artifacts from the S3 bucket.
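The S3 gateway endpoint in option (c) can be sketched as the following CloudFormation resource, shown here as a Python dict mirroring the template structure (the logical IDs, the Region in the service name, and the route table reference are placeholders):

```python
# Sketch of a gateway VPC endpoint for S3, which lets instances in a private
# subnet reach S3 without any internet access.
s3_gateway_endpoint = {
    "Type": "AWS::EC2::VPCEndpoint",
    "Properties": {
        "VpcId": {"Ref": "AppVpc"},
        # Gateway endpoints for S3 use the regional service name
        "ServiceName": "com.amazonaws.eu-west-1.s3",
        "VpcEndpointType": "Gateway",
        # Route table of the subnet where the instances run; the endpoint
        # injects an S3 prefix-list route into it
        "RouteTableIds": [{"Ref": "PrivateRouteTable"}],
    },
}
```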

 

35. A development team is using AWS CodeCommit to version control application code and AWS CodePipeline to orchestrate software deployments. The team has decided to use a remote main branch as the trigger for the pipeline to integrate code changes. A developer has pushed code changes to the CodeCommit repository but noticed that the pipeline did not react, even after 10 minutes. Which of the following actions should be taken to troubleshoot this issue?

a) Check that an Amazon EventBridge rule has been created for the main branch to trigger the pipeline.

b) Check that the CodePipeline service role has permission to access the CodeCommit repository.

c) Check that the developer’s IAM role has permission to push to the CodeCommit repository.

d) Check to see if the pipeline failed to start because of CodeCommit errors in Amazon CloudWatch Logs.

Ans. a) Check that an Amazon EventBridge rule has been created for the main branch to trigger the pipeline.
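The EventBridge rule in option (a) matches pushes to the main branch with an event pattern like the following (shown as a Python dict; the repository ARN is a placeholder):

```python
# Sketch of the EventBridge event pattern that starts a pipeline on pushes
# to the main branch of a CodeCommit repository.
event_pattern = {
    "source": ["aws.codecommit"],
    "detail-type": ["CodeCommit Repository State Change"],
    "resources": ["arn:aws:codecommit:us-east-1:111111111111:MyRepo"],  # placeholder
    "detail": {
        "event": ["referenceCreated", "referenceUpdated"],
        "referenceType": ["branch"],
        "referenceName": ["main"],  # only pushes to main start the pipeline
    },
}
```

If this rule (and its target pointing at the pipeline) is missing, pushes never reach CodePipeline, which matches the observed behavior.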

 

36. A DevOps engineer is creating an AWS CloudFormation template to deploy a web service. The web service will run on Amazon EC2 instances in a private subnet behind an Application Load Balancer (ALB). The DevOps engineer must ensure that the service can accept requests from clients that have IPv6 addresses. What should the DevOps engineer do with the CloudFormation template so that IPv6 clients can access the web service?

a) Add an IPv6 CIDR block to the VPC and the private subnet for the EC2 instances. Create route table entries for the IPv6 network, use EC2 instance types that support IPv6, and assign IPv6 addresses to each EC2 instance.

b) Assign each EC2 instance an IPv6 Elastic IP address. Create a target group, and add the EC2 instances as targets. Create a listener on port 443 of the ALB, and associate the target group with the ALB.

c) Replace the ALB with a Network Load Balancer (NLB). Add an IPv6 CIDR block to the VPC and subnets for the NLB, and assign the NLB an IPv6 Elastic IP address.

d) Add an IPv6 CIDR block to the VPC and subnets for the ALB. Create a listener on port 443, and specify the dualstack IP address type on the ALB. Create a target group, and add the EC2 instances as targets. Associate the target group with the ALB.

Ans. d) Add an IPv6 CIDR block to the VPC and subnets for the ALB. Create a listener on port 443, and specify the dualstack IP address type on the ALB. Create a target group, and add the EC2 instances as targets. Associate the target group with the ALB.
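Option (d) can be sketched as the two CloudFormation resources below, shown as Python dicts mirroring the template structure (logical IDs and subnet references are placeholders):

```python
# Sketch of a dualstack ALB with an HTTPS listener forwarding to a target
# group of EC2 instances. The instances themselves can stay IPv4-only;
# the ALB terminates the IPv6 client connections.
alb = {
    "Type": "AWS::ElasticLoadBalancingV2::LoadBalancer",
    "Properties": {
        "Scheme": "internet-facing",
        "IpAddressType": "dualstack",  # serves both IPv4 and IPv6 clients
        "Subnets": [{"Ref": "PublicSubnetA"}, {"Ref": "PublicSubnetB"}],
    },
}
listener = {
    "Type": "AWS::ElasticLoadBalancingV2::Listener",
    "Properties": {
        "LoadBalancerArn": {"Ref": "Alb"},
        "Port": 443,
        "Protocol": "HTTPS",
        "DefaultActions": [
            {"Type": "forward", "TargetGroupArn": {"Ref": "AppTargetGroup"}}
        ],
    },
}
```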

 

37. A company uses AWS Organizations and AWS Control Tower to manage all the company’s AWS accounts. The company uses the Enterprise Support plan. A DevOps engineer is using Account Factory for Terraform (AFT) to provision new accounts. When new accounts are provisioned, the DevOps engineer notices that the support plan for the new accounts is set to the Basic Support plan. The DevOps engineer needs to implement a solution to provision the new accounts with the Enterprise Support plan. Which solution will meet these requirements?

a) Use an AWS Config conformance pack to deploy the account-part-of-organizations AWS Config rule and to automatically remediate any noncompliant accounts.

b) Create an AWS Lambda function to create a ticket for AWS Support to add the account to the Enterprise Support plan. Grant the Lambda function the support:ResolveCase permission.

c) Add an additional value to the control_tower_parameters input to set the AWSEnterpriseSupport parameter as the organization’s management account number.

d) Set the aft_feature_enterprise_support feature flag to True in the AFT deployment input configuration. Redeploy AFT and apply the changes.

Ans. d) Set the aft_feature_enterprise_support feature flag to True in the AFT deployment input configuration. Redeploy AFT and apply the changes.
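For reference, the flag in option (d) is an input to the public AFT Terraform module; a minimal sketch, with all other module inputs elided:

```hcl
module "aft" {
  source = "github.com/aws-ia/terraform-aws-control_tower_account_factory"

  # ...existing AFT inputs...

  # Newly vended accounts get a support case opened automatically to
  # enroll them in the Enterprise Support plan
  aft_feature_enterprise_support = true
}
```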

 

38. A company’s DevOps engineer uses AWS Systems Manager to perform maintenance tasks during maintenance windows. The company has a few Amazon EC2 instances that require a restart after notifications from AWS Health. The DevOps engineer needs to implement an automated solution to remediate these notifications. The DevOps engineer creates an Amazon EventBridge rule. How should the DevOps engineer configure the EventBridge rule to meet these requirements?

a) Configure an event source of AWS Health, a service of EC2, and an event type that indicates instance maintenance. Target a Systems Manager document to restart the EC2 instance.

b) Configure an event source of Systems Manager and an event type that indicates a maintenance window. Target a Systems Manager document to restart the EC2 instance.

c) Configure an event source of AWS Health, a service of EC2, and an event type that indicates instance maintenance. Target a newly created AWS Lambda function that registers an automation task to restart the EC2 instance during a maintenance window.

d) Configure an event source of EC2 and an event type that indicates instance maintenance. Target a newly created AWS Lambda function that registers an automation task to restart the EC2 instance during a maintenance window.

Ans. a) Configure an event source of AWS Health, a service of EC2, and an event type that indicates instance maintenance. Target a Systems Manager document to restart the EC2 instance.
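The event pattern for option (a) matches scheduled-change notifications from AWS Health for EC2, roughly as follows (shown as a Python dict; the exact eventTypeCode values vary by maintenance type, so only the category is matched here):

```python
# Sketch of the EventBridge event pattern for AWS Health scheduled-maintenance
# notifications affecting EC2. The rule's target would be a Systems Manager
# Automation document that restarts the affected instance.
health_pattern = {
    "source": ["aws.health"],
    "detail-type": ["AWS Health Event"],
    "detail": {
        "service": ["EC2"],
        "eventTypeCategory": ["scheduledChange"],  # instance maintenance events
    },
}
```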

 

39. A company has containerized all of its in-house quality control applications. The company is running Jenkins on Amazon EC2 instances, which require patching and upgrading. The compliance officer has requested that a DevOps engineer begin encrypting build artifacts, since they contain company intellectual property. What should the DevOps engineer do to accomplish this in the MOST maintainable manner?

a) Automate patching and upgrading using AWS Systems Manager on EC2 instances and encrypt Amazon EBS volumes by default.

b) Deploy Jenkins to an Amazon ECS cluster and copy build artifacts to an Amazon S3 bucket with default encryption enabled.

c) Leverage AWS CodePipeline with a build action and encrypt the artifacts using AWS Secrets Manager.

d) Use AWS CodeBuild with artifact encryption to replace the Jenkins instance running on EC2 instances.

Ans. d) Use AWS CodeBuild with artifact encryption to replace the Jenkins instance running on EC2 instances.
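With option (d), artifact encryption is a property of the CodeBuild project itself. A sketch of the artifact settings as they would be passed to the CreateProject API (the bucket name and key ARN are placeholders):

```python
# Sketch of CodeBuild project artifact settings. CodeBuild encrypts output
# artifacts by default; encryptionDisabled=False keeps that behavior, using
# the key given in encryption_key (or the AWS-managed S3 key if omitted).
artifacts = {
    "type": "S3",
    "location": "my-build-artifacts-bucket",  # placeholder bucket name
    "packaging": "ZIP",
    "encryptionDisabled": False,
}
encryption_key = "arn:aws:kms:us-east-1:111111111111:key/EXAMPLE"  # placeholder
```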

 

40. An IT team has built an AWS CloudFormation template so others in the company can quickly and reliably deploy and terminate an application. The template creates an Amazon EC2 instance with a user data script to install the application and an Amazon S3 bucket that the application uses to serve static webpages while it is running. All resources should be removed when the CloudFormation stack is deleted. However, the team observes that CloudFormation reports an error during stack deletion, and the S3 bucket created by the stack is not deleted. How can the team resolve the error in the MOST efficient manner to ensure that all resources are deleted without errors?

a) Add a DeletionPolicy attribute to the S3 bucket resource with the value Delete, forcing the bucket to be removed when the stack is deleted.

b) Add a custom resource with an AWS Lambda function with the DependsOn attribute specifying the S3 bucket, and an IAM role. Write the Lambda function to delete all objects from the bucket when RequestType is Delete.

c) Identify the resource that was not deleted. Manually empty the S3 bucket and then delete it.

d) Replace the EC2 and S3 bucket resources with a single AWS OpsWorks Stacks resource. Define a custom recipe for the stack to create and delete the EC2 instance and the S3 bucket.

Ans. b) Add a custom resource with an AWS Lambda function with the DependsOn attribute specifying the S3 bucket, and an IAM role. Write the Lambda function to delete all objects from the bucket when RequestType is Delete.
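A minimal sketch of the custom-resource Lambda function from option (b), assuming an unversioned bucket (a versioned bucket would need list_object_versions instead). The real function must also POST a success or failure response to the pre-signed URL in event["ResponseURL"], which is omitted here; the client parameter is injectable only to allow local testing:

```python
def handler(event, context, s3=None):
    """Empty the bucket when the stack (and this custom resource) is deleted."""
    if s3 is None:  # allow a stub client to be injected for local testing
        import boto3
        s3 = s3.client("s3") if s3 else boto3.client("s3")
    bucket = event["ResourceProperties"]["BucketName"]
    if event["RequestType"] == "Delete":
        token = None
        while True:  # page through and delete all objects
            kwargs = {"Bucket": bucket}
            if token:
                kwargs["ContinuationToken"] = token
            page = s3.list_objects_v2(**kwargs)
            objects = [{"Key": o["Key"]} for o in page.get("Contents", [])]
            if objects:
                s3.delete_objects(Bucket=bucket, Delete={"Objects": objects})
            if not page.get("IsTruncated"):
                break
            token = page.get("NextContinuationToken")
    # A real custom resource must also send SUCCESS/FAILED to event["ResponseURL"]
    return {"Status": "SUCCESS"}
```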

 

41. A company has an AWS CodePipeline pipeline that is configured with an Amazon S3 bucket in the eu-west-1 Region. The pipeline deploys an AWS Lambda application to the same Region. The pipeline consists of an AWS CodeBuild project build action and an AWS CloudFormation deploy action. The CodeBuild project uses the aws cloudformation package AWS CLI command to build an artifact that contains the Lambda function code’s .zip file and the CloudFormation template. The CloudFormation deploy action references the CloudFormation template from the output artifact of the CodeBuild project’s build action. The company wants to also deploy the Lambda application to the us-east-1 Region by using the pipeline in eu-west-1. A DevOps engineer has already updated the CodeBuild project to use the aws cloudformation package command to produce an additional output artifact for us-east-1. Which combination of additional steps should the DevOps engineer take to meet these requirements? (Choose two.)

a) Modify the CloudFormation template to include a parameter for the Lambda function code’s zip file location. Create a new CloudFormation deploy action for us-east-1 in the pipeline. Configure the new deploy action to pass in the us-east-1 artifact location as a parameter override.

b) Create a new CloudFormation deploy action for us-east-1 in the pipeline. Configure the new deploy action to use the CloudFormation template from the us-east-1 output artifact.

c) Create an S3 bucket in us-east-1. Configure the S3 bucket policy to allow CodePipeline to have read and write access.

d) Create an S3 bucket in us-east-1. Configure S3 Cross-Region Replication (CRR) from the S3 bucket in eu-west-1 to the S3 bucket in us-east-1.

e) Modify the pipeline to include the S3 bucket for us-east-1 as an artifact store. Create a new CloudFormation deploy action for us-east-1 in the pipeline. Configure the new deploy action to use the CloudFormation template from the us-east-1 output artifact.

Ans. c) Create an S3 bucket in us-east-1. Configure the S3 bucket policy to allow CodePipeline to have read and write access.

e) Modify the pipeline to include the S3 bucket for us-east-1 as an artifact store. Create a new CloudFormation deploy action for us-east-1 in the pipeline. Configure the new deploy action to use the CloudFormation template from the us-east-1 output artifact.
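The two chosen steps translate into a per-Region artifact store map on the pipeline plus a Region-scoped deploy action, roughly as follows (shown as Python dicts mirroring the CodePipeline structure; bucket, stack, and artifact names are placeholders):

```python
# Sketch of the pipeline-level artifact stores: a cross-Region pipeline needs
# one artifact bucket per Region, keyed by Region name.
artifact_stores = {
    "eu-west-1": {"type": "S3", "location": "codepipeline-artifacts-eu-west-1"},
    "us-east-1": {"type": "S3", "location": "codepipeline-artifacts-us-east-1"},
}

# Sketch of the additional deploy action, pinned to us-east-1 and consuming
# the us-east-1 output artifact produced by the CodeBuild packaging step.
deploy_us_east_1 = {
    "name": "DeployUsEast1",
    "region": "us-east-1",  # the action runs in the second Region
    "actionTypeId": {"category": "Deploy", "owner": "AWS",
                     "provider": "CloudFormation", "version": "1"},
    "inputArtifacts": [{"name": "BuildOutputUsEast1"}],
    "configuration": {
        "ActionMode": "CREATE_UPDATE",
        "StackName": "lambda-app",  # placeholder
        "TemplatePath": "BuildOutputUsEast1::packaged-us-east-1.yaml",
    },
}
```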

 

42. A company runs an application on one Amazon EC2 instance. Application metadata is stored in Amazon S3 and must be retrieved if the instance is restarted. The instance must restart or relaunch automatically if the instance becomes unresponsive. Which solution will meet these requirements?

a) Create an Amazon CloudWatch alarm for the StatusCheckFailed metric. Use the recover action to stop and start the instance. Use an S3 event notification to push the metadata to the instance when the instance is back up and running.

b) Configure AWS OpsWorks, and use the auto-healing feature to stop and start the instance. Use a lifecycle event in OpsWorks to pull the metadata from Amazon S3 and update it on the instance.

c) Use EC2 Auto Recovery to automatically stop and start the instance in case of a failure. Use an S3 event notification to push the metadata to the instance when the instance is back up and running.

d) Use AWS CloudFormation to create an EC2 instance that includes the UserData property for the EC2 resource. Add a command in UserData to retrieve the application metadata from Amazon S3.

Ans. b) Configure AWS OpsWorks, and use the auto-healing feature to stop and start the instance. Use a lifecycle event in OpsWorks to pull the metadata from Amazon S3 and update it on the instance.

 

43. A company has multiple AWS accounts. The company uses AWS IAM Identity Center (AWS Single Sign-On) that is integrated with AWS Toolkit for Microsoft Azure DevOps. The attributes for access control feature is enabled in IAM Identity Center. The attribute mapping list contains two entries. The department key is mapped to ${path:enterprise.department}. The costCenter key is mapped to ${path:enterprise.costCenter}. All existing Amazon EC2 instances have a department tag that corresponds to three company departments (d1, d2, d3). A DevOps engineer must create policies based on the matching attributes. The policies must minimize administrative effort and must grant each Azure AD user access to only the EC2 instances that are tagged with the user’s respective department name. Which condition key should the DevOps engineer include in the custom permissions policies to meet these requirements?

a)

"Condition": {
    "ForAllValues:StringEquals": {
        "aws:TagKeys": ["department"]
     )
}

b)

 "Condition": {
     "StringEquals": {
         "aws:PrincipalTag/department": "$(aws:ResourceTag/department)"
     )
}

c)

"Condition": {
    "StringEquals": {
        "ec2:ResourceTag/department": "$(aws:PrincipalTag/department)"
    )
}

d)

 "Condition": {
     "ForAllValues:StringEquals": {
         "ec2:ResourceTag/department": ["d1","d2","d3"]
     )
}

Ans. c)

"Condition": { 
    "StringEquals": { 
        "ec2:ResourceTag/department": "$(aws:PrincipalTag/department)"
    ) 
}
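In a full permissions-set policy, the condition sits inside a statement such as the following (shown as a Python dict; the action list is illustrative):

```python
# Sketch of a permissions policy statement built around the matching
# condition key. Access is granted only when the instance's department tag
# equals the department attribute passed in from Azure AD via IAM Identity
# Center's attributes-for-access-control mapping.
statement = {
    "Effect": "Allow",
    "Action": ["ec2:StartInstances", "ec2:StopInstances"],  # illustrative
    "Resource": "arn:aws:ec2:*:*:instance/*",
    "Condition": {
        "StringEquals": {
            "ec2:ResourceTag/department": "${aws:PrincipalTag/department}"
        }
    },
}
```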

 

44. A company has an on-premises application that is written in Go. A DevOps engineer must move the application to AWS. The company’s development team wants to enable blue/green deployments and perform A/B testing. Which solution will meet these requirements?

a) Deploy the application on an Amazon EC2 instance, and create an AMI of the instance. Use the AMI to create an automatic scaling launch configuration that is used in an Auto Scaling group. Use Elastic Load Balancing to distribute traffic. When changes are made to the application, a new AMI will be created, which will initiate an EC2 instance refresh.

b) Use Amazon Lightsail to deploy the application. Store the application in a zipped format in an Amazon S3 bucket. Use this zipped version to deploy new versions of the application to Lightsail. Use Lightsail deployment options to manage the deployment.

c) Use AWS CodeArtifact to store the application code. Use AWS CodeDeploy to deploy the application to a fleet of Amazon EC2 instances. Use Elastic Load Balancing to distribute the traffic to the EC2 instances. When making changes to the application, upload a new version to CodeArtifact and create a new CodeDeploy deployment.

d) Use AWS Elastic Beanstalk to host the application. Store a zipped version of the application in Amazon S3. Use that location to deploy new versions of the application. Use Elastic Beanstalk to manage the deployment options.

Ans. d) Use AWS Elastic Beanstalk to host the application. Store a zipped version of the application in Amazon S3. Use that location to deploy new versions of the application. Use Elastic Beanstalk to manage the deployment options.

 

45. A developer is maintaining a fleet of 50 Amazon EC2 Linux servers. The servers are part of an Amazon EC2 Auto Scaling group and also use Elastic Load Balancing for load balancing. Occasionally, some application servers are terminated after failing ELB HTTP health checks. The developer would like to perform a root cause analysis on the issue, but the servers are terminated before the application logs can be accessed. How can log collection be automated?

a) Use Auto Scaling lifecycle hooks to put instances in a Pending:Wait state. Create an Amazon CloudWatch alarm for EC2 Instance Terminate Successful and trigger an AWS Lambda function that invokes an SSM Run Command script to collect logs, push them to Amazon S3, and complete the lifecycle action once logs are collected.

b) Use Auto Scaling lifecycle hooks to put instances in a Terminating:Wait state. Create an AWS Config rule for EC2 Instance-terminate Lifecycle Action and trigger a step function that invokes a script to collect logs, push them to Amazon S3, and complete the lifecycle action once logs are collected.

c) Use Auto Scaling lifecycle hooks to put instances in a Terminating:Wait state. Create an Amazon CloudWatch subscription filter for EC2 Instance Terminate Successful and trigger a CloudWatch agent that invokes a script to collect logs, push them to Amazon S3, and complete the lifecycle action once logs are collected.

d) Use Auto Scaling lifecycle hooks to put instances in a Terminating:Wait state. Create an Amazon EventBridge rule for EC2 Instance-terminate Lifecycle Action and trigger an AWS Lambda function that invokes an SSM Run Command script to collect logs, push them to Amazon S3, and complete the lifecycle action once logs are collected.

Ans. d) Use Auto Scaling lifecycle hooks to put instances in a Terminating:Wait state. Create an Amazon EventBridge rule for EC2 Instance-terminate Lifecycle Action and trigger an AWS Lambda function that invokes an SSM Run Command script to collect logs, push them to Amazon S3, and complete the lifecycle action once logs are collected.
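Option (d) combines a lifecycle hook with an EventBridge rule; both can be sketched as follows (shown as Python dicts; the Auto Scaling group name is a placeholder). The Lambda function would finish by calling complete_lifecycle_action once the logs are in S3:

```python
# Sketch of the lifecycle hook that holds terminating instances in the
# Terminating:Wait state long enough to collect logs.
lifecycle_hook = {
    "LifecycleHookName": "collect-logs-before-terminate",
    "AutoScalingGroupName": "app-asg",  # placeholder
    "LifecycleTransition": "autoscaling:EC2_INSTANCE_TERMINATING",
    "HeartbeatTimeout": 900,  # seconds the instance stays in Terminating:Wait
    "DefaultResult": "ABANDON",
}

# Sketch of the EventBridge pattern that fires when the hook pauses an
# instance, triggering the log-collection Lambda function.
terminate_pattern = {
    "source": ["aws.autoscaling"],
    "detail-type": ["EC2 Instance-terminate Lifecycle Action"],
    "detail": {"AutoScalingGroupName": ["app-asg"]},
}
```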

 

46. A company has an organization in AWS Organizations. The organization includes workload accounts that contain enterprise applications. The company centrally manages users from an operations account. No users can be created in the workload accounts. The company recently added an operations team and must provide the operations team members with administrator access to each workload account. Which combination of actions will provide this access? (Choose three.)

a) Create a SysAdmin role in the operations account. Attach the AdministratorAccess policy to the role. Modify the trust relationship to allow the sts:AssumeRole action from the workload accounts.

b) Create a SysAdmin role in each workload account. Attach the AdministratorAccess policy to the role. Modify the trust relationship to allow the sts:AssumeRole action from the operations account.

c) Create an Amazon Cognito identity pool in the operations account. Attach the SysAdmin role as an authenticated role.

d) In the operations account, create an IAM user for each operations team member.

e) In the operations account, create an IAM user group that is named SysAdmins. Add an IAM policy that allows the sts:AssumeRole action for the SysAdmin role in each workload account. Add all operations team members to the group.

f) Create an Amazon Cognito user pool in the operations account. Create an Amazon Cognito user for each operations team member.

Ans. b) Create a SysAdmin role in each workload account. Attach the AdministratorAccess policy to the role. Modify the trust relationship to allow the sts:AssumeRole action from the operations account.

d) In the operations account, create an IAM user for each operations team member.

e) In the operations account, create an IAM user group that is named SysAdmins. Add an IAM policy that allows the sts:AssumeRole action for the SysAdmin role in each workload account. Add all operations team members to the group.
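The role trust policy from option (b) and the group policy from option (e) can be sketched as follows (account IDs and the role name are placeholders):

```python
# Trust policy on the SysAdmin role in EACH workload account: it lets
# principals from the operations account assume the role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::111111111111:root"},  # operations acct
        "Action": "sts:AssumeRole",
    }],
}

# Policy attached to the SysAdmins group in the operations account: it lets
# group members assume the SysAdmin role in every workload account.
group_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "sts:AssumeRole",
        "Resource": "arn:aws:iam::*:role/SysAdmin",
    }],
}
```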

 

47. A company has multiple accounts in an organization in AWS Organizations. The company’s SecOps team needs to receive an Amazon Simple Notification Service (Amazon SNS) notification if any account in the organization turns off the Block Public Access feature on an Amazon S3 bucket. A DevOps engineer must implement this change without affecting the operation of any AWS accounts. The implementation must ensure that individual member accounts in the organization cannot turn off the notification. Which solution will meet these requirements?

a) Designate an account to be the delegated Amazon GuardDuty administrator account. Turn on GuardDuty for all accounts across the organization. In the GuardDuty administrator account, create an SNS topic. Subscribe the SecOps team’s email address to the SNS topic. In the same account, create an Amazon EventBridge rule that uses an event pattern for GuardDuty findings and a target of the SNS topic.

b) Create an AWS CloudFormation template that creates an SNS topic and subscribes the SecOps team’s email address to the SNS topic. In the template, include an Amazon EventBridge rule that uses an event pattern of CloudTrail activity for s3:PutBucketPublicAccessBlock and a target of the SNS topic. Deploy the stack to every account in the organization by using CloudFormation StackSets.

c) Turn on AWS Config across the organization. In the delegated administrator account, create an SNS topic. Subscribe the SecOps team’s email address to the SNS topic. Deploy a conformance pack that uses the s3-bucket-level-public-access-prohibited AWS Config managed rule in each account and uses an AWS Systems Manager document to publish an event to the SNS topic to notify the SecOps team.

d) Turn on Amazon Inspector across the organization. In the Amazon Inspector delegated administrator account, create an SNS topic. Subscribe the SecOps team’s email address to the SNS topic. In the same account, create an Amazon EventBridge rule that uses an event pattern for public network exposure of the S3 bucket and publishes an event to the SNS topic to notify the SecOps team.

Ans. c) Turn on AWS Config across the organization. In the delegated administrator account, create an SNS topic. Subscribe the SecOps team’s email address to the SNS topic. Deploy a conformance pack that uses the s3-bucket-level-public-access-prohibited AWS Config managed rule in each account and uses an AWS Systems Manager document to publish an event to the SNS topic to notify the SecOps team.

 

48. A company has migrated its container-based applications to Amazon EKS and wants to establish automated email notifications. The notifications sent to each email address are for specific activities related to EKS components. The solution will include Amazon SNS topics and an AWS Lambda function to evaluate incoming log events and publish messages to the correct SNS topic. Which logging solution will support these requirements?

a) Enable Amazon CloudWatch Logs to log the EKS components. Create a CloudWatch subscription filter for each component with Lambda as the subscription feed destination.

b) Enable Amazon CloudWatch Logs to log the EKS components. Create CloudWatch Logs Insights queries linked to Amazon EventBridge events that invoke Lambda.

c) Enable Amazon S3 logging for the EKS components. Configure an Amazon CloudWatch subscription filter for each component with Lambda as the subscription feed destination.

d) Enable Amazon S3 logging for the EKS components. Configure S3 PUT Object event notifications with AWS Lambda as the destination.

Ans. a) Enable Amazon CloudWatch Logs to log the EKS components. Create a CloudWatch subscription filter for each component with Lambda as the subscription feed destination.
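The subscription filter in option (a) can be sketched as the parameters passed to the CloudWatch Logs PutSubscriptionFilter API (the log group name and function ARN are placeholders):

```python
# Sketch of a CloudWatch Logs subscription filter that streams log events
# from an EKS component's log group to the routing Lambda function.
subscription_filter = {
    "logGroupName": "/aws/eks/quality-control/cluster",  # placeholder
    "filterName": "route-to-sns-lambda",
    "filterPattern": "",  # an empty pattern forwards every log event
    "destinationArn": "arn:aws:lambda:us-east-1:111111111111:function:log-router",
}
```

One such filter is created per component log group; the Lambda function then inspects each event and publishes to the matching SNS topic.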

 

49. A company is implementing an Amazon Elastic Container Service (Amazon ECS) cluster to run its workload. The company's architecture will run multiple ECS services on the cluster. The architecture includes an Application Load Balancer on the front end and uses multiple target groups to route traffic. A DevOps engineer must collect application and access logs. The DevOps engineer then needs to send the logs to an Amazon S3 bucket for near-real-time analysis. Which combination of steps must the DevOps engineer take to meet these requirements? (Choose three.)

a) Download the Amazon CloudWatch Logs container instance from AWS. Configure this instance as a task. Update the application service definitions to include the logging task.

b) Install the Amazon CloudWatch Logs agent on the ECS instances. Change the logging driver in the ECS task definition to awslogs.

c) Use Amazon EventBridge to schedule an AWS Lambda function that will run every 60 seconds and will run the Amazon CloudWatch Logs create-export-task command. Then point the output to the logging S3 bucket.

d) Activate access logging on the ALB. Then point the ALB directly to the logging S3 bucket.

e) Activate access logging on the target groups that the ECS services use. Then send the logs directly to the logging S3 bucket.

f) Create an Amazon Kinesis Data Firehose delivery stream that has a destination of the logging S3 bucket. Then create an Amazon CloudWatch Logs subscription filter for Kinesis Data Firehose.

Ans. b) Install the Amazon CloudWatch Logs agent on the ECS instances. Change the logging driver in the ECS task definition to awslogs.

d) Activate access logging on the ALB. Then point the ALB directly to the logging S3 bucket.

f) Create an Amazon Kinesis Data Firehose delivery stream that has a destination of the logging S3 bucket. Then create an Amazon CloudWatch Logs subscription filter for Kinesis Data Firehose.
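The awslogs driver from option (b) is configured per container in the task definition, roughly as follows (group, Region, and prefix values are placeholders):

```python
# Sketch of the logConfiguration block inside an ECS container definition:
# the awslogs driver ships container stdout/stderr to CloudWatch Logs, from
# which the subscription filter streams them onward to Firehose and S3.
log_configuration = {
    "logDriver": "awslogs",
    "options": {
        "awslogs-group": "/ecs/quality-control",  # placeholder
        "awslogs-region": "us-east-1",
        "awslogs-stream-prefix": "app",
    },
}
```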

 

50. A company that uses electronic health records is running a fleet of Amazon EC2 instances with an Amazon Linux operating system. As part of patient privacy requirements, the company must ensure continuous compliance for patches for operating system and applications running on the EC2 instances. How can the deployments of the operating system and application patches be automated using a default and custom repository?

a) Use AWS Systems Manager to create a new patch baseline including the custom repository. Run the AWS-RunPatchBaseline document using the run command to verify and install patches.

b) Use AWS Direct Connect to integrate the corporate repository and deploy the patches using Amazon CloudWatch scheduled events, then use the CloudWatch dashboard to create reports.

c) Use yum-config-manager to add the custom repository under /etc/yum.repos.d, and run yum-config-manager --enable to activate the repository.

d) Use AWS Systems Manager to create a new patch baseline including the corporate repository. Run the AWS-AmazonLinuxDefaultPatchBaseline document using the run command to verify and install patches.

Ans. a) Use AWS Systems Manager to create a new patch baseline including the custom repository. Run the AWS-RunPatchBaseline document using the run command to verify and install patches.
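The Run Command invocation in option (a) can be sketched as the parameters for the SSM SendCommand API (the patch-group tag value is a placeholder):

```python
# Sketch of the Run Command call that applies the custom patch baseline.
# The instances pick up the baseline through their patch group; the
# Operation parameter chooses between compliance scan and installation.
send_command_params = {
    "DocumentName": "AWS-RunPatchBaseline",
    "Targets": [{"Key": "tag:PatchGroup", "Values": ["ehr-fleet"]}],  # placeholder
    # "Scan" only reports compliance; "Install" applies missing patches
    "Parameters": {"Operation": ["Install"]},
}
```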
