Cyberithub

AWS Certified DevOps Engineer - Professional (DOP-C02) Practice Questions Part - 2

In our last article, we went through multiple practice questions for the AWS Certified DevOps Engineer - Professional (DOP-C02) exam. If you haven't seen it yet, you can check AWS Certified DevOps Engineer - Professional (DOP-C02) Practice Questions Part - 1. Here we will continue with more practice questions to further boost your confidence. It is highly recommended to go through the practice questions below as well before attempting the certification exam. It will surely bring you success!!

 

AWS Certified DevOps Engineer - Professional (DOP-C02) Practice Questions Part - 2


Also Read: AWS Certified Developer - Associate(DVA-C02) Practice Questions and Answers Part - 1

1. A company is using AWS CodePipeline to automate its release pipeline. AWS CodeDeploy is being used in the pipeline to deploy an application to Amazon Elastic Container Service (Amazon ECS) using the blue/green deployment model. The company wants to implement scripts to test the green version of the application before shifting traffic. These scripts will complete in 5 minutes or less. If errors are discovered during these tests, the application must be rolled back. Which strategy will meet these requirements?

a) Add a stage to the CodePipeline pipeline between the source and deploy stages. Use AWS CodeBuild to create a runtime environment and build commands in the buildspec file to invoke test scripts. If errors are found, use the aws deploy stop-deployment command to stop the deployment.

b) Add a stage to the CodePipeline pipeline between the source and deploy stages. Use this stage to invoke an AWS Lambda function that will run the test scripts. If errors are found, use the aws deploy stop-deployment command to stop the deployment.

c) Add a hooks section to the CodeDeploy AppSpec file. Use the AfterAllowTestTraffic lifecycle event to invoke an AWS Lambda function to run the test scripts. If errors are found, exit the Lambda function with an error to initiate rollback.

d) Add a hooks section to the CodeDeploy AppSpec file. Use the AfterAllowTraffic lifecycle event to invoke the test scripts. If errors are found, use the aws deploy stop-deployment CLI command to stop the deployment.

Ans. c) Add a hooks section to the CodeDeploy AppSpec file. Use the AfterAllowTestTraffic lifecycle event to invoke an AWS Lambda function to run the test scripts. If errors are found, exit the Lambda function with an error to initiate rollback.
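As context for the correct answer, the `AfterAllowTestTraffic` hook names a Lambda function in the AppSpec file; the function runs the tests against the green task set and reports the result back to CodeDeploy. A minimal sketch is below; the health-check URL is a placeholder, and the boto3 import is deferred so the helper can be tried without AWS credentials:

```python
import urllib.request

def run_smoke_test(url):
    """Hit the green target group's test listener and report pass/fail."""
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except Exception:
        return False

def lambda_handler(event, context):
    # CodeDeploy passes these IDs so the hook can report its result back.
    deployment_id = event["DeploymentId"]
    execution_id = event["LifecycleEventHookExecutionId"]

    # Reporting "Failed" makes CodeDeploy roll back the blue/green deployment.
    status = "Succeeded" if run_smoke_test("http://example.internal/health") else "Failed"

    import boto3  # deferred so run_smoke_test stays testable offline
    boto3.client("codedeploy").put_lifecycle_event_hook_execution_status(
        deploymentId=deployment_id,
        lifecycleEventHookExecutionId=execution_id,
        status=status,
    )
    return status
```

Because the hook runs before production traffic shifts, a failed test rolls back without any customer impact.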

 

2. A company uses AWS Storage Gateway in file gateway mode in front of an Amazon S3 bucket that is used by multiple resources. In the morning when business begins, users do not see the objects processed by a third party the previous evening. When a DevOps engineer looks directly at the S3 bucket, the data is there, but it is missing in Storage Gateway. Which solution ensures that all the updated third-party files are available in the morning?

a) Configure a nightly Amazon EventBridge event to invoke an AWS Lambda function to run the RefreshCache command for Storage Gateway.

b) Instruct the third party to put data into the S3 bucket using AWS Transfer for SFTP.

c) Modify Storage Gateway to run in volume gateway mode.

d) Use S3 Same-Region Replication to replicate any changes made directly in the S3 bucket to Storage Gateway.

Ans. a) Configure a nightly Amazon EventBridge event to invoke an AWS Lambda function to run the RefreshCache command for Storage Gateway.
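The nightly Lambda function in the correct answer would call the Storage Gateway `RefreshCache` API so the file share picks up objects written directly to S3. A minimal sketch, with a placeholder file share ARN and a deferred boto3 import:

```python
def refresh_cache_params(file_share_arn, folders=None):
    """Build the RefreshCache arguments; by default, refresh the whole share."""
    return {
        "FileShareARN": file_share_arn,
        "FolderList": folders or ["/"],
        "Recursive": True,
    }

def lambda_handler(event, context):
    import boto3  # deferred so the helper above is testable without AWS credentials
    client = boto3.client("storagegateway")
    # Placeholder ARN; use the real file share ARN in practice.
    return client.refresh_cache(
        **refresh_cache_params(
            "arn:aws:storagegateway:us-east-1:111122223333:share/share-example"
        )
    )
```

The EventBridge schedule (for example `cron(0 4 * * ? *)`) would invoke this function after the third party finishes its evening uploads.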

 

3. A DevOps engineer needs to back up sensitive Amazon S3 objects that are stored within an S3 bucket with a private bucket policy using S3 cross-Region replication functionality. The objects need to be copied to a target bucket in a different AWS Region and account. Which combination of actions should be performed to enable this replication? (Choose three.)

a) Create a replication IAM role in the source account.

b) Create a replication IAM role in the target account.

c) Add statements to the source bucket policy allowing the replication IAM role to replicate objects.

d) Add statements to the target bucket policy allowing the replication IAM role to replicate objects.

e) Create a replication rule in the source bucket to enable the replication.

f) Create a replication rule in the target bucket to enable the replication.

Ans. a) Create a replication IAM role in the source account.

d) Add statements to the target bucket policy allowing the replication IAM role to replicate objects.

e) Create a replication rule in the source bucket to enable the replication.
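The target-account bucket policy from answer d) has to allow the source account's replication role to write replicas. A sketch of such a policy, built in Python for readability; the role ARN and bucket name are placeholders:

```python
import json

def replication_bucket_policy(replication_role_arn, dest_bucket):
    """Destination-account bucket policy letting the source account's
    replication IAM role replicate objects into the bucket."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "AllowReplication",
                "Effect": "Allow",
                "Principal": {"AWS": replication_role_arn},
                "Action": [
                    "s3:ReplicateObject",
                    "s3:ReplicateDelete",
                    "s3:ReplicateTags",
                ],
                "Resource": f"arn:aws:s3:::{dest_bucket}/*",
            }
        ],
    }

policy = replication_bucket_policy(
    "arn:aws:iam::111122223333:role/s3-replication-role",  # role in the source account
    "target-bucket-example",
)
print(json.dumps(policy, indent=2))
```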

 

4. A company has multiple member accounts that are part of an organization in AWS Organizations. The security team needs to review every Amazon EC2 security group and their inbound and outbound rules. The security team wants to programmatically retrieve this information from the member accounts using an AWS Lambda function in the management account of the organization. Which combination of access changes will meet these requirements? (Choose three.)

a) Create a trust relationship that allows users in the member accounts to assume the management account IAM role.

b) Create a trust relationship that allows users in the management account to assume the IAM roles of the member accounts.

c) Create an IAM role in each member account that has access to the AmazonEC2ReadOnlyAccess managed policy.

d) Create an IAM role in each member account to allow the sts:AssumeRole action against the management account IAM role’s ARN.

e) Create an IAM role in the management account that allows the sts:AssumeRole action against the member account IAM role’s ARN.

f) Create an IAM role in the management account that has access to the AmazonEC2ReadOnlyAccess managed policy.

Ans. b) Create a trust relationship that allows users in the management account to assume the IAM roles of the member accounts.

c) Create an IAM role in each member account that has access to the AmazonEC2ReadOnlyAccess managed policy.

e) Create an IAM role in the management account that allows the sts:AssumeRole action against the member account IAM role’s ARN.
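Concretely, each member-account role needs a trust policy naming the management account, and the management-account Lambda then chains `sts:AssumeRole` into an EC2 call. A sketch with placeholder account IDs; the boto3 import is deferred so the trust-policy helper runs offline:

```python
def member_role_trust_policy(management_account_id):
    """Trust policy on each member-account role so principals in the
    management account can assume it."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"AWS": f"arn:aws:iam::{management_account_id}:root"},
            "Action": "sts:AssumeRole",
        }],
    }

def describe_member_security_groups(member_role_arn):
    """From the management account: assume the member role, then list
    security groups using the AmazonEC2ReadOnlyAccess permissions it carries."""
    import boto3  # deferred so member_role_trust_policy is testable offline
    creds = boto3.client("sts").assume_role(
        RoleArn=member_role_arn, RoleSessionName="sg-audit"
    )["Credentials"]
    ec2 = boto3.client(
        "ec2",
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )
    return ec2.describe_security_groups()["SecurityGroups"]
```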

 

5. A DevOps engineer is building a multistage pipeline with AWS CodePipeline to build, verify, stage, test, and deploy an application. A manual approval stage is required between the test stage and the deploy stage. The development team uses a custom chat tool with webhook support that requires near-real-time notifications. How should the DevOps engineer configure status updates for pipeline activity and approval requests to post to the chat tool?

a) Create an Amazon CloudWatch Logs subscription that filters on CodePipeline Pipeline Execution State Change. Publish subscription events to an Amazon Simple Notification Service (Amazon SNS) topic. Subscribe the chat webhook URL to the SNS topic, and complete the subscription validation.

b) Create an AWS Lambda function that is invoked by AWS CloudTrail events. When a CodePipeline Pipeline Execution State Change event is detected, send the event details to the chat webhook URL.

c) Create an Amazon EventBridge rule that filters on CodePipeline Pipeline Execution State Change. Publish the events to an Amazon Simple Notification Service (Amazon SNS) topic. Create an AWS Lambda function that sends event details to the chat webhook URL. Subscribe the function to the SNS topic.

d) Modify the pipeline code to send the event details to the chat webhook URL at the end of each stage. Parameterize the URL so that each pipeline can send to a different URL based on the pipeline environment.

Ans. c) Create an Amazon EventBridge rule that filters on CodePipeline Pipeline Execution State Change. Publish the events to an Amazon Simple Notification Service (Amazon SNS) topic. Create an AWS Lambda function that sends event details to the chat webhook URL. Subscribe the function to the SNS topic.
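The EventBridge rule in the correct answer filters on the documented CodePipeline detail type, and the Lambda subscriber formats the event for the webhook. A minimal sketch; the message format is illustrative, not part of the scenario:

```python
# Event pattern for the EventBridge rule: matches every pipeline
# execution state change, including transitions into manual approval.
PIPELINE_EVENT_PATTERN = {
    "source": ["aws.codepipeline"],
    "detail-type": ["CodePipeline Pipeline Execution State Change"],
}

def format_chat_message(event):
    """Turn the EventBridge event into a short chat message for the
    webhook; field names follow the CodePipeline event shape."""
    detail = event.get("detail", {})
    return f"Pipeline {detail.get('pipeline')} is now {detail.get('state')}"
```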

 

6. A company’s application development team uses Linux-based Amazon EC2 instances as bastion hosts. Inbound SSH access to the bastion hosts is restricted to specific IP addresses, as defined in the associated security groups. The company’s security team wants to receive a notification if the security group rules are modified to allow SSH access from any IP address. What should a DevOps engineer do to meet this requirement?

a) Create an Amazon EventBridge rule with a source of aws.cloudtrail and the event name AuthorizeSecurityGroupIngress. Define an Amazon Simple Notification Service (Amazon SNS) topic as the target.

b) Enable Amazon GuardDuty and check the findings for security groups in AWS Security Hub. Configure an Amazon EventBridge rule with a custom pattern that matches GuardDuty events with an output of NON_COMPLIANT. Define an Amazon Simple Notification Service (Amazon SNS) topic as the target.

c) Create an AWS Config rule by using the restricted-ssh managed rule to check whether security groups disallow unrestricted incoming SSH traffic. Configure automatic remediation to publish a message to an Amazon Simple Notification Service (Amazon SNS) topic.

d) Enable Amazon Inspector. Include the Common Vulnerabilities and Exposures-1.1 rules package to check the security groups that are associated with the bastion hosts. Configure Amazon Inspector to publish a message to an Amazon Simple Notification Service (Amazon SNS) topic.

Ans. a) Create an Amazon EventBridge rule with a source of aws.cloudtrail and the event name AuthorizeSecurityGroupIngress. Define an Amazon Simple Notification Service (Amazon SNS) topic as the target.
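As a sketch of that rule: CloudTrail management events arrive in EventBridge with the calling service as the source and a detail type of "AWS API Call via CloudTrail", so the pattern below filters on the `AuthorizeSecurityGroupIngress` event name (verify the field names against an actual delivered event). A small helper shows the kind of check a downstream Lambda could apply:

```python
# Pattern for the EventBridge rule; narrow it further (e.g. to the bastion
# security group IDs) once the delivered event shape is confirmed.
SSH_RULE_CHANGE_PATTERN = {
    "source": ["aws.ec2"],
    "detail-type": ["AWS API Call via CloudTrail"],
    "detail": {"eventName": ["AuthorizeSecurityGroupIngress"]},
}

def allows_ssh_from_anywhere(ip_permissions):
    """True if any rule opens port 22 to 0.0.0.0/0; the input follows
    the EC2 IpPermissions structure."""
    for perm in ip_permissions:
        covers_22 = perm.get("FromPort", 0) <= 22 <= perm.get("ToPort", 65535)
        open_world = any(
            r.get("CidrIp") == "0.0.0.0/0" for r in perm.get("IpRanges", [])
        )
        if covers_22 and open_world:
            return True
    return False
```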

 

7. A DevOps team manages an API running on-premises that serves as a backend for an Amazon API Gateway endpoint. Customers have been complaining about high response latencies, which the development team has verified using the API Gateway latency metrics in Amazon CloudWatch. To identify the cause, the team needs to collect relevant data without introducing additional latency. Which actions should be taken to accomplish this? (Choose two.)

a) Install the CloudWatch agent server side and configure the agent to upload relevant logs to CloudWatch.

b) Enable AWS X-Ray tracing in API Gateway, modify the application to capture request segments, and upload those segments to X-Ray during each request.

c) Enable AWS X-Ray tracing in API Gateway, modify the application to capture request segments, and use the X-Ray daemon to upload segments to X-Ray.

d) Modify the on-premises application to send log information back to API Gateway with each request.

e) Modify the on-premises application to calculate and upload statistical data relevant to the API service requests to CloudWatch metrics.

Ans. a) Install the CloudWatch agent server side and configure the agent to upload relevant logs to CloudWatch.

c) Enable AWS X-Ray tracing in API Gateway, modify the application to capture request segments, and use the X-Ray daemon to upload segments to X-Ray.

 

8. A company has an application that is using a MySQL-compatible Amazon Aurora Multi-AZ DB cluster as the database. A cross-Region read replica has been created for disaster recovery purposes. A DevOps engineer wants to automate the promotion of the replica so it becomes the primary database instance in the event of a failure. Which solution will accomplish this?

a) Configure a latency-based Amazon Route 53 CNAME with health checks so it points to both the primary and replica endpoints. Subscribe an Amazon SNS topic to Amazon RDS failure notifications from AWS CloudTrail and use that topic to invoke an AWS Lambda function that will promote the replica instance as the primary.

b) Create an Aurora custom endpoint to point to the primary database instance. Configure the application to use this endpoint. Configure AWS CloudTrail to run an AWS Lambda function to promote the replica instance and modify the custom endpoint to point to the newly promoted instance.

c) Create an AWS Lambda function to modify the application’s AWS CloudFormation template to promote the replica, apply the template to update the stack, and point the application to the newly promoted instance. Create an Amazon CloudWatch alarm to invoke this Lambda function after the failure event occurs.

d) Store the Aurora endpoint in AWS Systems Manager Parameter Store. Create an Amazon EventBridge event that detects the database failure and runs an AWS Lambda function to promote the replica instance and update the endpoint URL stored in AWS Systems Manager Parameter Store. Code the application to reload the endpoint from Parameter Store if a database connection fails.

Ans. d) Store the Aurora endpoint in AWS Systems Manager Parameter Store. Create an Amazon EventBridge event that detects the database failure and runs an AWS Lambda function to promote the replica instance and update the endpoint URL stored in AWS Systems Manager Parameter Store. Code the application to reload the endpoint from Parameter Store if a database connection fails.
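The application-side piece of the correct answer is "reload the endpoint on connection failure". That logic can be written independently of AWS by passing in the lookup and connect steps as callables; in production, `fetch_endpoint` would wrap `ssm.get_parameter(...)["Parameter"]["Value"]`. A minimal sketch:

```python
def connect_with_failover(fetch_endpoint, connect, max_attempts=2):
    """Try to connect; on failure, re-read the endpoint (e.g. from
    Systems Manager Parameter Store) and retry, so a promoted replica's
    endpoint is picked up. Callables keep the logic testable without AWS."""
    last_error = None
    for _ in range(max_attempts):
        endpoint = fetch_endpoint()
        try:
            return connect(endpoint)
        except ConnectionError as err:
            last_error = err
    raise last_error
```

In the failure scenario, the EventBridge-triggered Lambda promotes the replica and rewrites the parameter, so the second attempt connects to the new primary.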

 

9. A company hosts its staging website using an Amazon EC2 instance backed with Amazon EBS storage. The company wants to recover quickly with minimal data losses in the event of network connectivity issues or power failures on the EC2 instance. Which solution will meet these requirements?

a) Add the instance to an EC2 Auto Scaling group with the minimum, maximum, and desired capacity set to 1.

b) Add the instance to an EC2 Auto Scaling group with a lifecycle hook to detach the EBS volume when the EC2 instance shuts down or terminates.

c) Create an Amazon CloudWatch alarm for the StatusCheckFailed System metric and select the EC2 action to recover the instance.

d) Create an Amazon CloudWatch alarm for the StatusCheckFailed Instance metric and select the EC2 action to reboot the instance.

Ans. c) Create an Amazon CloudWatch alarm for the StatusCheckFailed System metric and select the EC2 action to recover the instance.
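The alarm in the correct answer wires the `StatusCheckFailed_System` metric to the built-in EC2 recover action. A sketch of the `put_metric_alarm` arguments; the instance ID, region, and thresholds are placeholders:

```python
def recover_alarm_params(instance_id, region="us-east-1"):
    """Arguments for cloudwatch.put_metric_alarm that trigger the EC2
    recover action when the System status check fails."""
    return {
        "AlarmName": f"recover-{instance_id}",
        "Namespace": "AWS/EC2",
        "MetricName": "StatusCheckFailed_System",
        "Dimensions": [{"Name": "InstanceId", "Value": instance_id}],
        "Statistic": "Maximum",
        "Period": 60,
        "EvaluationPeriods": 2,
        "Threshold": 1,
        "ComparisonOperator": "GreaterThanOrEqualToThreshold",
        # Built-in action: migrates the instance to healthy hardware,
        # preserving instance ID and attached EBS volumes.
        "AlarmActions": [f"arn:aws:automate:{region}:ec2:recover"],
    }
```

Recovery keeps the same EBS volumes, which is why it loses less data than replacing the instance through an Auto Scaling group.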

 

10. A company wants to use AWS development tools to replace its current bash deployment scripts. The company currently deploys a LAMP application to a group of Amazon EC2 instances behind an Application Load Balancer (ALB). During the deployments, the company unit tests the committed application, stops and starts services, unregisters and re-registers instances with the load balancer, and updates file permissions. The company wants to maintain the same deployment functionality through the shift to using AWS services. Which solution will meet these requirements?

a) Use AWS CodeBuild to test the application. Use bash scripts invoked by AWS CodeDeploy’s appspec.yml file to restart services, and deregister and register instances with the ALB. Use the appspec.yml file to update file permissions without a custom script.

b) Use AWS CodePipeline to move the application from the AWS CodeCommit repository to AWS CodeDeploy. Use CodeDeploy’s deployment group to test the application, unregister and re-register instances with the ALB, and restart services. Use the appspec.yml file to update file permissions without a custom script.

c) Use AWS CodePipeline to move the application source code from the AWS CodeCommit repository to AWS CodeDeploy. Use CodeDeploy to test the application. Use CodeDeploy’s appspec.yml file to restart services and update permissions without a custom script. Use AWS CodeBuild to unregister and re-register instances with the ALB.

d) Use AWS CodePipeline to trigger AWS CodeBuild to test the application. Use bash scripts invoked by AWS CodeDeploy’s appspec.yml file to restart services. Unregister and re-register the instances in the AWS CodeDeploy deployment group with the ALB. Update the appspec.yml file to update file permissions without a custom script.

Ans. d) Use AWS CodePipeline to trigger AWS CodeBuild to test the application. Use bash scripts invoked by AWS CodeDeploy’s appspec.yml file to restart services. Unregister and re-register the instances in the AWS CodeDeploy deployment group with the ALB. Update the appspec.yml file to update file permissions without a custom script.

 

11. A company runs an application with an Amazon EC2 and on-premises configuration. A DevOps engineer needs to standardize patching across both environments. Company policy dictates that patching only happens during non-business hours. Which combination of actions will meet these requirements? (Choose three.)

a) Add the physical machines into AWS Systems Manager using Systems Manager Hybrid Activations.

b) Attach an IAM role to the EC2 instances, allowing them to be managed by AWS Systems Manager.

c) Create IAM access keys for the on-premises machines to interact with AWS Systems Manager.

d) Run an AWS Systems Manager Automation document to patch the systems every hour.

e) Use Amazon EventBridge scheduled events to schedule a patch window.

f) Use AWS Systems Manager Maintenance Windows to schedule a patch window.

Ans. a) Add the physical machines into AWS Systems Manager using Systems Manager Hybrid Activations.

b) Attach an IAM role to the EC2 instances, allowing them to be managed by AWS Systems Manager.

f) Use AWS Systems Manager Maintenance Windows to schedule a patch window.
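The maintenance window from answer f) is created with a cron schedule set to the company's non-business hours, and the `AWS-RunPatchBaseline` task is then registered against it. A sketch of the `create_maintenance_window` arguments; the schedule and timings are placeholders:

```python
def patch_window_params(name="nightly-patching"):
    """Arguments for ssm.create_maintenance_window; the cron below runs
    daily at 02:00 UTC (adjust to actual non-business hours)."""
    return {
        "Name": name,
        "Schedule": "cron(0 2 * * ? *)",
        "Duration": 3,   # hours the window stays open
        "Cutoff": 1,     # stop starting new tasks 1 hour before close
        "AllowUnassociatedTargets": False,
    }
```

Because both the EC2 instances (via their IAM role) and the on-premises machines (via a hybrid activation) are Systems Manager managed nodes, one window patches both environments.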

 

12. A company has chosen AWS to host a new application. The company needs to implement a multi-account strategy. A DevOps engineer creates a new AWS account and an organization in AWS Organizations. The DevOps engineer also creates the OU structure for the organization and sets up a landing zone by using AWS Control Tower. The DevOps engineer must implement a solution that automatically deploys resources for new accounts that users create through AWS Control Tower Account Factory. When a user creates a new account, the solution must apply AWS CloudFormation templates and SCPs that are customized for the OU or the account to automatically deploy all the resources that are attached to the account. All the OUs are enrolled in AWS Control Tower. Which solution will meet these requirements in the MOST automated way?

a) Use AWS Service Catalog with AWS Control Tower. Create portfolios and products in AWS Service Catalog. Grant granular permissions to provision these resources. Deploy SCPs by using the AWS CLI and JSON documents.

b) Deploy CloudFormation stack sets by using the required templates. Enable automatic deployment. Deploy stack instances to the required accounts. Deploy a CloudFormation stack set to the organization’s management account to deploy SCPs.

c) Create an Amazon EventBridge rule to detect the CreateManagedAccount event. Configure AWS Service Catalog as the target to deploy resources to any new accounts. Deploy SCPs by using the AWS CLI and JSON documents.

d) Deploy the Customizations for AWS Control Tower (CfCT) solution. Use an AWS CodeCommit repository as the source. In the repository, create a custom package that includes the CloudFormation templates and the SCP JSON documents.

Ans. d) Deploy the Customizations for AWS Control Tower (CfCT) solution. Use an AWS CodeCommit repository as the source. In the repository, create a custom package that includes the CloudFormation templates and the SCP JSON documents.

 

13. An online retail company based in the United States plans to expand its operations to Europe and Asia in the next six months. Its product currently runs on Amazon EC2 instances behind an Application Load Balancer. The instances run in an Amazon EC2 Auto Scaling group across multiple Availability Zones. All data is stored in an Amazon Aurora database instance. When the product is deployed in multiple regions, the company wants a single product catalog across all regions, but for compliance purposes, its customer information and purchases must be kept in each region. How should the company meet these requirements with the LEAST amount of application changes?

a) Use Amazon Redshift for the product catalog and Amazon DynamoDB tables for the customer information and purchases.

b) Use Amazon DynamoDB global tables for the product catalog and regional tables for the customer information and purchases.

c) Use Aurora with read replicas for the product catalog and additional local Aurora instances in each region for the customer information and purchases.

d) Use Aurora for the product catalog and Amazon DynamoDB global tables for the customer information and purchases.

Ans. c) Use Aurora with read replicas for the product catalog and additional local Aurora instances in each region for the customer information and purchases.

 

14. A rapidly growing company wants to scale for developer demand for AWS development environments. Development environments are created manually in the AWS Management Console. The networking team uses AWS CloudFormation to manage the networking infrastructure, exporting stack output values for the Amazon VPC and all subnets. The development environments have common standards, such as Application Load Balancers, Amazon EC2 Auto Scaling groups, security groups, and Amazon DynamoDB tables. To keep up with demand, the DevOps engineer wants to automate the creation of development environments. Because the infrastructure required to support the application is expected to grow, there must be a way to easily update the deployed infrastructure. CloudFormation will be used to create a template for the development environments. Which approach will meet these requirements and quickly provide consistent AWS environments for developers?

a) Use Fn::ImportValue intrinsic functions in the Resources section of the template to retrieve Virtual Private Cloud (VPC) and subnet values. Use CloudFormation StackSets for the development environments, using the Count input parameter to indicate the number of environments needed. Use the UpdateStackSet command to update existing development environments.

b) Use nested stacks to define common infrastructure components. To access the exported values, use TemplateURL to reference the networking team’s template. To retrieve Virtual Private Cloud (VPC) and subnet values, use Fn::ImportValue intrinsic functions in the Parameters section of the root template. Use the CreateChangeSet and ExecuteChangeSet commands to update existing development environments.

c) Use nested stacks to define common infrastructure components. Use Fn::ImportValue intrinsic functions with the resources of the nested stack to retrieve Virtual Private Cloud (VPC) and subnet values. Use the CreateChangeSet and ExecuteChangeSet commands to update existing development environments.

d) Use Fn::ImportValue intrinsic functions in the Parameters section of the root template to retrieve Virtual Private Cloud (VPC) and subnet values. Define the development resources in the order they need to be created in the CloudFormation nested stacks. Use the CreateChangeSet and ExecuteChangeSet commands to update existing development environments.

Ans. c) Use nested stacks to define common infrastructure components. Use Fn::ImportValue intrinsic functions with the resources of the nested stack to retrieve Virtual Private Cloud (VPC) and subnet values. Use the CreateChangeSet and ExecuteChangeSet commands to update existing development environments.
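To illustrate the correct answer, the root template's nested-stack resource can pass the networking team's exported values into the environment stack via `Fn::ImportValue`. A sketch of the fragment, written as a Python dict for readability; the export names and template URL are placeholders:

```python
import json

# Sketch of a nested-stack resource in the root template; the subnet and
# VPC parameters import the networking stack's exported values.
template_fragment = {
    "Resources": {
        "DevEnvironment": {
            "Type": "AWS::CloudFormation::Stack",
            "Properties": {
                "TemplateURL": "https://s3.amazonaws.com/templates/dev-env.yaml",
                "Parameters": {
                    "VpcId": {"Fn::ImportValue": "network-vpc-id"},
                    "SubnetIds": {"Fn::ImportValue": "network-private-subnets"},
                },
            },
        }
    }
}
print(json.dumps(template_fragment, indent=2))
```

Updating all environments then reduces to `CreateChangeSet`/`ExecuteChangeSet` against each root stack.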

 

15. A company is performing vulnerability scanning for all Amazon EC2 instances across many accounts. The accounts are in an organization in AWS Organizations. Each account’s VPCs are attached to a shared transit gateway. The VPCs send traffic to the internet through a central egress VPC. The company has enabled Amazon Inspector in a delegated administrator account and has enabled scanning for all member accounts. A DevOps engineer discovers that some EC2 instances are listed in the “not scanning” tab in Amazon Inspector. Which combination of actions should the DevOps engineer take to resolve this issue? (Choose three.)

a) Verify that AWS Systems Manager Agent is installed and is running on the EC2 instances that Amazon Inspector is not scanning.

b) Associate the target EC2 instances with security groups that allow outbound communication on port 443 to the AWS Systems Manager service endpoint.

c) Grant inspector:StartAssessmentRun permissions to the IAM role that the DevOps engineer is using.

d) Configure EC2 Instance Connect for the EC2 instances that Amazon Inspector is not scanning.

e) Associate the target EC2 instances with instance profiles that grant permissions to communicate with AWS Systems Manager.

f) Create a managed-instance activation. Use the Activation Code and the Activation ID to register the EC2 instances.

Ans. a) Verify that AWS Systems Manager Agent is installed and is running on the EC2 instances that Amazon Inspector is not scanning.

b) Associate the target EC2 instances with security groups that allow outbound communication on port 443 to the AWS Systems Manager service endpoint.

e) Associate the target EC2 instances with instance profiles that grant permissions to communicate with AWS Systems Manager.

 

16. A development team uses AWS CodeCommit for version control for applications. The development team uses AWS CodePipeline, AWS CodeBuild and AWS CodeDeploy for CI/CD infrastructure. In CodeCommit, the development team recently merged pull requests that did not pass long-running tests in the code base. The development team needed to perform rollbacks to branches in the codebase, resulting in lost time and wasted effort. A DevOps engineer must automate testing of pull requests in CodeCommit to ensure that reviewers more easily see the results of automated tests as part of the pull request review. What should the DevOps engineer do to meet this requirement?

a) Create an Amazon EventBridge rule that reacts to the pullRequestStatusChanged event. Create an AWS Lambda function that invokes a CodePipeline pipeline with a CodeBuild action that runs the tests for the application. Program the Lambda function to post the CodeBuild badge as a comment on the pull request so that developers will see the badge in their code review.

b) Create an Amazon EventBridge rule that reacts to the pullRequestCreated event. Create an AWS Lambda function that invokes a CodePipeline pipeline with a CodeBuild action that runs the tests for the application. Program the Lambda function to post the CodeBuild test results as a comment on the pull request when the test results are complete.

c) Create an Amazon EventBridge rule that reacts to pullRequestCreated and pullRequestSourceBranchUpdated events. Create an AWS Lambda function that invokes a CodePipeline pipeline with a CodeBuild action that runs the tests for the application. Program the Lambda function to post the CodeBuild badge as a comment on the pull request so that developers will see the badge in their code review.

d) Create an Amazon EventBridge rule that reacts to the pullRequestStatusChanged event. Create an AWS Lambda function that invokes a CodePipeline pipeline with a CodeBuild action that runs the tests for the application. Program the Lambda function to post the CodeBuild test results as a comment on the pull request when the test results are complete.

Ans. c) Create an Amazon EventBridge rule that reacts to pullRequestCreated and pullRequestSourceBranchUpdated events. Create an AWS Lambda function that invokes a CodePipeline pipeline with a CodeBuild action that runs the tests for the application. Program the Lambda function to post the CodeBuild badge as a comment on the pull request so that developers will see the badge in their code review.
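The EventBridge rule in the correct answer matches both pull-request events so tests re-run for every revision, not just the first one. A sketch of the event pattern (field names follow the CodeCommit event shape; verify against a delivered event):

```python
# Fire on new pull requests and on source-branch updates, so every
# pushed revision gets tested before review.
PR_EVENT_PATTERN = {
    "source": ["aws.codecommit"],
    "detail-type": ["CodeCommit Pull Request State Change"],
    "detail": {
        "event": ["pullRequestCreated", "pullRequestSourceBranchUpdated"],
    },
}
```

The downstream Lambda would then post the badge back with `codecommit.post_comment_for_pull_request` so reviewers see the result inline.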

 

17. A company has deployed an application in a production VPC in a single AWS account. The application is popular and is experiencing heavy usage. The company’s security team wants to add additional security, such as AWS WAF, to the application deployment. However, the application’s product manager is concerned about cost and does not want to approve the change unless the security team can prove that additional security is necessary. The security team believes that some of the application’s demand might come from users that have IP addresses that are on a deny list. The security team provides the deny list to a DevOps engineer. If any of the IP addresses on the deny list access the application, the security team wants to receive automated notification in near real time so that the security team can document that the application needs additional security. The DevOps engineer creates a VPC flow log for the production VPC. Which set of additional steps should the DevOps engineer take to meet these requirements MOST cost-effectively?

a) Create a log group in Amazon CloudWatch Logs. Configure the VPC flow log to capture accepted traffic and to send the data to the log group. Create an Amazon CloudWatch metric filter for IP addresses on the deny list. Create a CloudWatch alarm with the metric filter as input. Set the period to 5 minutes and the datapoints to alarm to 1. Use an Amazon Simple Notification Service (Amazon SNS) topic to send alarm notices to the security team.

b) Create an Amazon S3 bucket for log files. Configure the VPC flow log to capture all traffic and to send the data to the S3 bucket. Configure Amazon Athena to return all log files in the S3 bucket for IP addresses on the deny list. Configure Amazon QuickSight to accept data from Athena and to publish the data as a dashboard that the security team can access. Create a threshold alert of 1 for successful access. Configure the alert to automatically notify the security team as frequently as possible when the alert threshold is met.

c) Create an Amazon S3 bucket for log files. Configure the VPC flow log to capture accepted traffic and to send the data to the S3 bucket. Configure an Amazon OpenSearch Service cluster and domain for the log files. Create an AWS Lambda function to retrieve the logs from the S3 bucket, format the logs, and load the logs into the OpenSearch Service cluster. Schedule the Lambda function to run every 5 minutes. Configure an alert and condition in OpenSearch Service to send alerts to the security team through an Amazon Simple Notification Service (Amazon SNS) topic when access from the IP addresses on the deny list is detected.

d) Create a log group in Amazon CloudWatch Logs. Create an Amazon S3 bucket to hold query results. Configure the VPC flow log to capture all traffic and to send the data to the log group. Deploy an Amazon Athena CloudWatch connector in AWS Lambda. Connect the connector to the log group. Configure Athena to periodically query for all accepted traffic from the IP addresses on the deny list and to store the results in the S3 bucket. Configure an S3 event notification to automatically notify the security team through an Amazon Simple Notification Service (Amazon SNS) topic when new objects are added to the S3 bucket.

Ans. a) Create a log group in Amazon CloudWatch Logs. Configure the VPC flow log to capture accepted traffic and to send the data to the log group. Create an Amazon CloudWatch metric filter for IP addresses on the deny list. Create a CloudWatch alarm with the metric filter as input. Set the period to 5 minutes and the datapoints to alarm to 1. Use an Amazon Simple Notification Service (Amazon SNS) topic to send alarm notices to the security team.
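The metric filter from the correct answer matches accepted flow-log records whose source address is on the deny list. A sketch that builds such a filter over the default flow log fields; the exact OR syntax should be verified against the CloudWatch Logs filter pattern documentation:

```python
def deny_list_filter_pattern(deny_list):
    """CloudWatch Logs filter pattern over the default VPC flow log
    fields, matching ACCEPTed traffic from deny-listed source IPs."""
    ip_clause = " || ".join(f'srcaddr = "{ip}"' for ip in deny_list)
    return (
        f"[version, account_id, interface_id, ({ip_clause}), "
        "dstaddr, srcport, dstport, protocol, packets, bytes, "
        'start, end, action = "ACCEPT", log_status]'
    )
```

With a 5-minute period and 1 datapoint to alarm, a single match from the deny list notifies the security team via SNS, all with pay-per-use services and no cluster to run.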

 

18. A DevOps engineer has automated a web service deployment by using AWS CodePipeline with the following steps:

  • An AWS CodeBuild project compiles the deployment artifact and runs unit tests.
  • An AWS CodeDeploy deployment group deploys the web service to Amazon EC2 instances in the staging environment.
  • A CodeDeploy deployment group deploys the web service to EC2 instances in the production environment.

The quality assurance (QA) team requests permission to inspect the build artifact before the deployment to the production environment occurs. The QA team wants to run an internal penetration testing tool to conduct manual tests. The tool will be invoked by a REST API call. Which combination of actions should the DevOps engineer take to fulfill this request? (Choose two.)

a) Insert a manual approval action between the test actions and deployment actions of the pipeline.

b) Modify the buildspec.yml file for the compilation stage to require manual approval before completion.

c) Update the CodeDeploy deployment groups so that they require manual approval to proceed.

d) Update the pipeline to directly call the REST API for the penetration testing tool.

e) Update the pipeline to invoke an AWS Lambda function that calls the REST API for the penetration testing tool.

Ans. a) Insert a manual approval action between the test actions and deployment actions of the pipeline.

e) Update the pipeline to invoke an AWS Lambda function that calls the REST API for the penetration testing tool.
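A manual approval action in CodePipeline uses the built-in `Approval`/`Manual` action type. As a sketch of the action declaration that `create_pipeline`/`update_pipeline` accept (the action name and SNS topic ARN are illustrative):

```python
# Sketch: a CodePipeline manual-approval action declaration, placed
# between the test and production-deploy stages. Names are illustrative.
def manual_approval_action(name: str, sns_topic_arn: str) -> dict:
    return {
        "name": name,
        "actionTypeId": {
            "category": "Approval",
            "owner": "AWS",
            "provider": "Manual",
            "version": "1",
        },
        # Notifies the QA team that an approval is pending.
        "configuration": {"NotificationArn": sns_topic_arn},
        "runOrder": 1,
    }
```

The Lambda invoke action for the penetration testing tool would sit alongside this, with the QA team approving or rejecting after reviewing the tool's results.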

 

19. A company is hosting a web application in an AWS Region. For disaster recovery purposes, a second region is being used as a standby. Disaster recovery requirements state that session data must be replicated between regions in near-real time and 1% of requests should route to the secondary region to continuously verify system functionality. Additionally, if there is a disruption in service in the main region, traffic should be automatically routed to the secondary region, and the secondary region must be able to scale up to handle all traffic. How should a DevOps engineer meet these requirements?

a) In both regions, deploy the application on AWS Elastic Beanstalk and use Amazon DynamoDB global tables for session data. Use an Amazon Route 53 weighted routing policy with health checks to distribute the traffic across the regions.

b) In both regions, launch the application in Auto Scaling groups and use DynamoDB for session data. Use a Route 53 failover routing policy with health checks to distribute the traffic across the regions.

c) In both regions, deploy the application in AWS Lambda, exposed by Amazon API Gateway, and use Amazon RDS for PostgreSQL with cross-region replication for session data. Deploy the web application with client-side logic to call the API Gateway directly.

d) In both regions, launch the application in Auto Scaling groups and use DynamoDB global tables for session data. Enable an Amazon CloudFront weighted distribution across regions. Point the Amazon Route 53 DNS record at the CloudFront distribution.

Ans. a) In both regions, deploy the application on AWS Elastic Beanstalk and use Amazon DynamoDB global tables for session data. Use an Amazon Route 53 weighted routing policy with health checks to distribute the traffic across the regions.

 

20. A company runs an application on Amazon EC2 instances. The company uses a series of AWS CloudFormation stacks to define the application resources. A developer performs updates by building and testing the application on a laptop and then uploading the build output and CloudFormation stack templates to Amazon S3. The developer’s peers review the changes before the developer performs the CloudFormation stack update and installs a new version of the application onto the EC2 instances. The deployment process is prone to errors and is time-consuming when the developer updates each EC2 instance with the new application. The company wants to automate as much of the application deployment process as possible while retaining a final manual approval step before the modification of the application or resources. The company already has moved the source code for the application and the CloudFormation templates to AWS CodeCommit. The company also has created an AWS CodeBuild project to build and test the application. Which combination of steps will meet the company’s requirements? (Choose two.)

a) Create an application and a deployment group in AWS CodeDeploy. Install the CodeDeploy agent on the EC2 instances.

b) Create an application revision and a deployment group in AWS CodeDeploy. Create an environment in CodeDeploy. Register the EC2 instances to the CodeDeploy environment.

c) Use AWS CodePipeline to invoke the CodeBuild job, run the CloudFormation update, and pause for a manual approval step. After approval, start the AWS CodeDeploy deployment.

d) Use AWS CodePipeline to invoke the CodeBuild job, create CloudFormation change sets for each of the application stacks, and pause for a manual approval step. After approval, run the CloudFormation change sets and start the AWS CodeDeploy deployment.

e) Use AWS CodePipeline to invoke the CodeBuild job, create CloudFormation change sets for each of the application stacks, and pause for a manual approval step. After approval, start the AWS CodeDeploy deployment.

Ans. a) Create an application and a deployment group in AWS CodeDeploy. Install the CodeDeploy agent on the EC2 instances.

d) Use AWS CodePipeline to invoke the CodeBuild job, create CloudFormation change sets for each of the application stacks, and pause for a manual approval step. After approval, run the CloudFormation change sets and start the AWS CodeDeploy deployment.

 

21. A development team wants to use AWS CloudFormation stacks to deploy an application. However, the developer IAM role does not have the required permissions to provision the resources that are specified in the AWS CloudFormation template. A DevOps engineer needs to implement a solution that allows the developers to deploy the stacks. The solution must follow the principle of least privilege. Which solution will meet these requirements?

a) Create an IAM policy that allows the developers to provision the required resources. Attach the policy to the developer IAM role.

b) Create an IAM policy that allows full access to AWS CloudFormation. Attach the policy to the developer IAM role.

c) Create an AWS CloudFormation service role that has the required permissions. Grant the developer IAM role a cloudformation:* action. Use the new service role during stack deployments.

d) Create an AWS CloudFormation service role that has the required permissions. Grant the developer IAM role the iam:PassRole permission. Use the new service role during stack deployments.

Ans. d) Create an AWS CloudFormation service role that has the required permissions. Grant the developer IAM role the iam:PassRole permission. Use the new service role during stack deployments.
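The least-privilege piece here is that developers only need `iam:PassRole` on the specific service role; CloudFormation then provisions resources with the role's permissions. A hedged sketch of such a policy (the role ARN is an illustrative parameter, and the `iam:PassedToService` condition further restricts which service may receive the role):

```python
import json

# Sketch: a policy letting developers pass only the CloudFormation
# service role. The ARN is supplied by the caller; names illustrative.
def passrole_policy(service_role_arn: str) -> str:
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": "iam:PassRole",
            "Resource": service_role_arn,
            "Condition": {
                "StringEquals": {
                    "iam:PassedToService": "cloudformation.amazonaws.com"
                }
            },
        }],
    }
    return json.dumps(policy, indent=2)
```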

 

22. A production account has a requirement that any Amazon EC2 instance that has been logged in to manually must be terminated within 24 hours. All applications in the production account are using Auto Scaling groups with the Amazon CloudWatch Logs agent configured. How can this process be automated?

a) Create a CloudWatch Logs subscription to an AWS Step Functions application. Configure an AWS Lambda function to add a tag to the EC2 instance that produced the login event and mark the instance to be decommissioned. Create an Amazon EventBridge rule to invoke a second Lambda function once a day that will terminate all instances with this tag.

b) Create an Amazon CloudWatch alarm that will be invoked by the login event. Send the notification to an Amazon Simple Notification Service (Amazon SNS) topic that the operations team is subscribed to, and have them terminate the EC2 instance within 24 hours.

c) Create an Amazon CloudWatch alarm that will be invoked by the login event. Configure the alarm to send to an Amazon Simple Queue Service (Amazon SQS) queue. Use a group of worker instances to process messages from the queue, which then schedules an Amazon EventBridge rule to be invoked.

d) Create a CloudWatch Logs subscription to an AWS Lambda function. Configure the function to add a tag to the EC2 instance that produced the login event and mark the instance to be decommissioned. Create an Amazon EventBridge rule to invoke a daily Lambda function that terminates all instances with this tag.

Ans. d) Create a CloudWatch Logs subscription to an AWS Lambda function. Configure the function to add a tag to the EC2 instance that produced the login event and mark the instance to be decommissioned. Create an Amazon EventBridge rule to invoke a daily Lambda function that terminates all instances with this tag.
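CloudWatch Logs subscriptions deliver log batches to Lambda as a base64-encoded, gzipped JSON document under `event["awslogs"]["data"]`. A minimal decode helper the tagging function would start with (assuming, as is common, that the agent names each log stream after the instance ID):

```python
import base64, gzip, json

# Sketch: decoding a CloudWatch Logs subscription payload inside the
# Lambda function. The log stream name is assumed to carry the
# instance ID (a typical CloudWatch agent configuration).
def decode_log_event(event: dict) -> dict:
    payload = base64.b64decode(event["awslogs"]["data"])
    return json.loads(gzip.decompress(payload))
```

From the decoded document, the function would read the log stream name, tag the corresponding instance (for example with a hypothetical `decommission=true` tag), and the daily EventBridge-invoked function would terminate instances carrying that tag.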

 

23. A company has enabled all features for its organization in AWS Organizations. The organization contains 10 AWS accounts. The company has turned on AWS CloudTrail in all the accounts. The company expects the number of AWS accounts in the organization to increase to 500 during the next year. The company plans to use multiple OUs for these accounts. The company has enabled AWS Config in each existing AWS account in the organization. A DevOps engineer must implement a solution that enables AWS Config automatically for all future AWS accounts that are created in the organization. Which solution will meet this requirement?

a) In the organization’s management account, create an Amazon EventBridge rule that reacts to a CreateAccount API call. Configure the rule to invoke an AWS Lambda function that enables trusted access to AWS Config for the organization.

b) In the organization’s management account, create an AWS CloudFormation stack set to enable AWS Config. Configure the stack set to deploy automatically when an account is created through Organizations.

c) In the organization’s management account, create an SCP that allows the appropriate AWS Config API calls to enable AWS Config. Apply the SCP to the root-level OU.

d) In the organization’s management account, create an Amazon EventBridge rule that reacts to a CreateAccount API call. Configure the rule to invoke an AWS Systems Manager Automation runbook to enable AWS Config for the account.

Ans. b) In the organization’s management account, create an AWS CloudFormation stack set to enable AWS Config. Configure the stack set to deploy automatically when an account is created through Organizations.
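Service-managed StackSets can deploy automatically to accounts as they join target OUs, which is what makes option b hands-off. A sketch of the `create_stack_set` parameters involved (stack set name and template URL are illustrative):

```python
# Sketch: create_stack_set parameters for a service-managed stack set
# that auto-deploys to new organization accounts. Names illustrative.
def auto_deploy_stackset_request(name: str, template_url: str) -> dict:
    return {
        "StackSetName": name,
        "TemplateURL": template_url,
        # SERVICE_MANAGED lets Organizations handle cross-account roles.
        "PermissionModel": "SERVICE_MANAGED",
        "AutoDeployment": {
            "Enabled": True,
            "RetainStacksOnAccountRemoval": False,
        },
    }
```

Target OUs are then supplied via `create_stack_instances` with `DeploymentTargets`.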

 

24. A company has many applications. Different teams in the company developed the applications by using multiple languages and frameworks. The applications run on premises and on different servers with different operating systems. Each team has its own release protocol and process. The company wants to reduce the complexity of the release and maintenance of these applications. The company is migrating its technology stacks, including these applications, to AWS. The company wants centralized control of source code, a consistent and automatic delivery pipeline, and as few maintenance tasks as possible on the underlying infrastructure. What should a DevOps engineer do to meet these requirements?

a) Create one AWS CodeCommit repository for all applications. Put each application’s code in a different branch. Merge the branches, and use AWS CodeBuild to build the applications. Use AWS CodeDeploy to deploy the applications to one centralized application server.

b) Create one AWS CodeCommit repository for each of the applications. Use AWS CodeBuild to build the applications one at a time. Use AWS CodeDeploy to deploy the applications to one centralized application server.

c) Create one AWS CodeCommit repository for each of the applications. Use AWS CodeBuild to build the applications one at a time and to create one AMI for each server. Use AWS CloudFormation StackSets to automatically provision and decommission Amazon EC2 fleets by using these AMIs.

d) Create one AWS CodeCommit repository for each of the applications. Use AWS CodeBuild to build one Docker image for each application in Amazon Elastic Container Registry (Amazon ECR). Use AWS CodeDeploy to deploy the applications to Amazon Elastic Container Service (Amazon ECS) on infrastructure that AWS Fargate manages.

Ans. d) Create one AWS CodeCommit repository for each of the applications. Use AWS CodeBuild to build one Docker image for each application in Amazon Elastic Container Registry (Amazon ECR). Use AWS CodeDeploy to deploy the applications to Amazon Elastic Container Service (Amazon ECS) on infrastructure that AWS Fargate manages.

 

25. A DevOps engineer needs to apply a core set of security controls to an existing set of AWS accounts. The accounts are in an organization in AWS Organizations. Individual teams will administer individual accounts by using the AdministratorAccess AWS managed policy. For all accounts, AWS CloudTrail and AWS Config must be turned on in all available AWS Regions. Individual account administrators must not be able to edit or delete any of the baseline resources. However, individual account administrators must be able to edit or delete their own CloudTrail trails and AWS Config rules. Which solution will meet these requirements in the MOST operationally efficient way?

a) Create an AWS CloudFormation template that defines the standard account resources. Deploy the template to all accounts from the organization’s management account by using CloudFormation StackSets. Set the stack policy to deny Update:Delete actions.

b) Enable AWS Control Tower. Enroll the existing accounts in AWS Control Tower. Grant the individual account administrators access to CloudTrail and AWS Config.

c) Designate an AWS Config management account. Create AWS Config recorders in all accounts by using AWS CloudFormation StackSets. Deploy AWS Config rules to the organization by using the AWS Config management account. Create a CloudTrail organization trail in the organization’s management account. Deny modification or deletion of the AWS Config recorders by using an SCP.

d) Create an AWS CloudFormation template that defines the standard account resources. Deploy the template to all accounts from the organization’s management account by using CloudFormation StackSets. Create an SCP that prevents updates or deletions to CloudTrail resources or AWS Config resources unless the principal is an administrator of the organization’s management account.

Ans. c) Designate an AWS Config management account. Create AWS Config recorders in all accounts by using AWS CloudFormation StackSets. Deploy AWS Config rules to the organization by using the AWS Config management account. Create a CloudTrail organization trail in the organization’s management account. Deny modification or deletion of the AWS Config recorders by using an SCP.

 

26. A company has its AWS accounts in an organization in AWS Organizations. AWS Config is manually configured in each AWS account. The company needs to implement a solution to centrally configure AWS Config for all accounts in the organization. The solution also must record resource changes to a central account. Which combination of actions should a DevOps engineer perform to meet these requirements? (Choose two.)

a) Configure a delegated administrator account for AWS Config. Enable trusted access for AWS Config in the organization.

b) Configure a delegated administrator account for AWS Config. Create a service-linked role for AWS Config in the organization’s management account.

c) Create an AWS CloudFormation template to create an AWS Config aggregator. Configure a CloudFormation stack set to deploy the template to all accounts in the organization.

d) Create an AWS Config organization aggregator in the organization’s management account. Configure data collection from all AWS accounts in the organization and from all AWS Regions.

e) Create an AWS Config organization aggregator in the delegated administrator account. Configure data collection from all AWS accounts in the organization and from all AWS Regions.

Ans. a) Configure a delegated administrator account for AWS Config. Enable trusted access for AWS Config in the organization.

e) Create an AWS Config organization aggregator in the delegated administrator account. Configure data collection from all AWS accounts in the organization and from all AWS Regions.
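The organization aggregator in the delegated administrator account is created with `put_configuration_aggregator`. A sketch of the request parameters (aggregator name and role ARN are illustrative):

```python
# Sketch: put_configuration_aggregator parameters for an
# organization-wide AWS Config aggregator. Names illustrative.
def organization_aggregator_request(name: str, role_arn: str) -> dict:
    return {
        "ConfigurationAggregatorName": name,
        "OrganizationAggregationSource": {
            # Role AWS Config assumes to read data across the org.
            "RoleArn": role_arn,
            # Collect from every Region, per the requirement.
            "AllAwsRegions": True,
        },
    }
```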

 

27. A company wants to migrate its content sharing web application hosted on Amazon EC2 to a serverless architecture. The company currently deploys changes to its application by creating a new Auto Scaling group of EC2 instances and a new Elastic Load Balancer, and then shifting traffic away from the old resources by using an Amazon Route 53 weighted routing policy. For its new serverless application, the company is planning to use Amazon API Gateway and AWS Lambda. The company will need to update its deployment processes to work with the new application. It will also need to retain the ability to test new features on a small number of users before rolling the features out to the entire user base. Which deployment strategy will meet these requirements?

a) Use AWS CDK to deploy API Gateway and Lambda functions. When code needs to be changed, update the AWS CloudFormation stack and deploy the new version of the APIs and Lambda functions. Use a Route 53 failover routing policy for the canary release strategy.

b) Use AWS CloudFormation to deploy API Gateway and Lambda functions using Lambda function versions. When code needs to be changed, update the CloudFormation stack with the new Lambda code and update the API versions using a canary release strategy. Promote the new version when testing is complete.

c) Use AWS Elastic Beanstalk to deploy API Gateway and Lambda functions. When code needs to be changed, deploy a new version of the API and Lambda functions. Shift traffic gradually using an Elastic Beanstalk blue/green deployment.

d) Use AWS OpsWorks to deploy API Gateway in the service layer and Lambda functions in a custom layer. When code needs to be changed, use OpsWorks to perform a blue/green deployment and shift traffic gradually.

Ans. b) Use AWS CloudFormation to deploy API Gateway and Lambda functions using Lambda function versions. When code needs to be changed, update the CloudFormation stack with the new Lambda code and update the API versions using a canary release strategy. Promote the new version when testing is complete.
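API Gateway supports canary releases natively on a stage deployment: a small share of traffic is routed to the new deployment until it is promoted. A sketch of the `create_deployment` parameters for a REST API canary (API ID, stage name, and percentage are illustrative):

```python
# Sketch: create_deployment parameters that send a small percentage of
# stage traffic to the new API Gateway deployment. Names illustrative.
def canary_deployment_request(api_id: str, stage: str, percent: float) -> dict:
    return {
        "restApiId": api_id,
        "stageName": stage,
        # Only this share of requests hits the canary deployment.
        "canarySettings": {"percentTraffic": percent},
    }
```

Once testing passes, promoting the canary makes the new deployment take 100% of traffic.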

 

28. A development team uses AWS CodeCommit, AWS CodePipeline, and AWS CodeBuild to develop and deploy an application. Changes to the code are submitted by pull requests. The development team reviews and merges the pull requests, and then the pipeline builds and tests the application. Over time, the number of pull requests has increased. The pipeline is frequently blocked because of failing tests. To prevent this blockage, the development team wants to run the unit and integration tests on each pull request before it is merged. Which solution will meet these requirements?

a) Create a CodeBuild project to run the unit and integration tests. Create a CodeCommit approval rule template. Configure the template to require the successful invocation of the CodeBuild project. Attach the approval rule to the project’s CodeCommit repository.

b) Create an Amazon EventBridge rule to match pullRequestCreated events from CodeCommit. Create a CodeBuild project to run the unit and integration tests. Configure the CodeBuild project as a target of the EventBridge rule that includes a custom event payload with the CodeCommit repository and branch information from the event.

c) Create an Amazon EventBridge rule to match pullRequestCreated events from CodeCommit. Modify the existing CodePipeline pipeline to not run the deploy steps if the build is started from a pull request. Configure the EventBridge rule to run the pipeline with a custom payload that contains the CodeCommit repository and branch information from the event.

d) Create a CodeBuild project to run the unit and integration tests. Create a CodeCommit notification rule that matches when a pull request is created or updated. Configure the notification rule to invoke the CodeBuild project.

Ans. b) Create an Amazon EventBridge rule to match pullRequestCreated events from CodeCommit. Create a CodeBuild project to run the unit and integration tests. Configure the CodeBuild project as a target of the EventBridge rule that includes a custom event payload with the CodeCommit repository and branch information from the event.
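The EventBridge rule in the correct answer matches CodeCommit's pull request state-change events. A sketch of the event pattern (the repository ARN is an illustrative parameter):

```python
import json

# Sketch: EventBridge event pattern matching newly created pull
# requests in one CodeCommit repository. Repo ARN illustrative.
def pull_request_created_pattern(repo_arn: str) -> str:
    return json.dumps({
        "source": ["aws.codecommit"],
        "detail-type": ["CodeCommit Pull Request State Change"],
        "resources": [repo_arn],
        "detail": {"event": ["pullRequestCreated"]},
    })
```

The rule's CodeBuild target then receives the repository and source branch through the event payload, so tests run per pull request without touching the main pipeline.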

 

29. A company has an application that runs on a fleet of Amazon EC2 instances. The application requires frequent restarts. The application logs contain error messages when a restart is required. The application logs are published to a log group in Amazon CloudWatch Logs. An Amazon CloudWatch alarm notifies an application engineer through an Amazon Simple Notification Service (Amazon SNS) topic when the logs contain a large number of restart-related error messages. The application engineer manually restarts the application on the instances after the application engineer receives a notification from the SNS topic. A DevOps engineer needs to implement a solution to automate the application restart on the instances without restarting the instances. Which solution will meet these requirements in the MOST operationally efficient manner?

a) Configure an AWS Systems Manager Automation runbook that runs a script to restart the application on the instances. Configure the SNS topic to invoke the runbook.

b) Create an AWS Lambda function that restarts the application on the instances. Configure the Lambda function as an event destination of the SNS topic.

c) Configure an AWS Systems Manager Automation runbook that runs a script to restart the application on the instances. Create an AWS Lambda function to invoke the runbook. Configure the Lambda function as an event destination of the SNS topic.

d) Configure an AWS Systems Manager Automation runbook that runs a script to restart the application on the instances. Configure an Amazon EventBridge rule that reacts when the CloudWatch alarm enters ALARM state. Specify the runbook as a target of the rule.

Ans. d) Configure an AWS Systems Manager Automation runbook that runs a script to restart the application on the instances. Configure an Amazon EventBridge rule that reacts when the CloudWatch alarm enters ALARM state. Specify the runbook as a target of the rule.
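SNS topics cannot target Automation runbooks directly, which is why the EventBridge route is the operationally efficient one: alarm state changes are native EventBridge events. A sketch of the rule's event pattern (the alarm name is illustrative):

```python
# Sketch: EventBridge pattern that fires when the restart-error alarm
# enters ALARM state; the rule's target is the SSM Automation runbook.
# Alarm name is illustrative.
def alarm_to_runbook_pattern(alarm_name: str) -> dict:
    return {
        "source": ["aws.cloudwatch"],
        "detail-type": ["CloudWatch Alarm State Change"],
        "detail": {
            "alarmName": [alarm_name],
            "state": {"value": ["ALARM"]},
        },
    }
```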

 

30. A DevOps engineer at a company is supporting an AWS environment in which all users use AWS IAM Identity Center (AWS Single Sign-On). The company wants to immediately disable credentials of any new IAM user and wants the security team to receive a notification. Which combination of steps should the DevOps engineer take to meet these requirements? (Choose three.)

a) Create an Amazon EventBridge rule that reacts to an IAM CreateUser API call in AWS CloudTrail.

b) Create an Amazon EventBridge rule that reacts to an IAM GetLoginProfile API call in AWS CloudTrail.

c) Create an AWS Lambda function that is a target of the EventBridge rule. Configure the Lambda function to disable any access keys and delete the login profiles that are associated with the IAM user.

d) Create an AWS Lambda function that is a target of the EventBridge rule. Configure the Lambda function to delete the login profiles that are associated with the IAM user.

e) Create an Amazon Simple Notification Service (Amazon SNS) topic that is a target of the EventBridge rule. Subscribe the security team’s group email address to the topic.

f) Create an Amazon Simple Queue Service (Amazon SQS) queue that is a target of the Lambda function. Subscribe the security team’s group email address to the queue.

Ans. a) Create an Amazon EventBridge rule that reacts to an IAM CreateUser API call in AWS CloudTrail.

c) Create an AWS Lambda function that is a target of the EventBridge rule. Configure the Lambda function to disable any access keys and delete the login profiles that are associated with the IAM user.

e) Create an Amazon Simple Notification Service (Amazon SNS) topic that is a target of the EventBridge rule. Subscribe the security team’s group email address to the topic.
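The rule matches the CloudTrail `CreateUser` record as it arrives through EventBridge, and the Lambda function reads the new user's name from the event detail. A sketch of both pieces (the pattern shape follows "AWS API Call via CloudTrail" events; note IAM's CloudTrail events are delivered in us-east-1):

```python
# Sketch: EventBridge pattern for IAM CreateUser via CloudTrail, plus
# the field the disabling Lambda would read. Shapes per the
# "AWS API Call via CloudTrail" event format.
CREATE_USER_PATTERN = {
    "source": ["aws.iam"],
    "detail-type": ["AWS API Call via CloudTrail"],
    "detail": {
        "eventSource": ["iam.amazonaws.com"],
        "eventName": ["CreateUser"],
    },
}

def new_iam_user_name(event: dict) -> str:
    """Extract the newly created user's name from the wrapped record."""
    return event["detail"]["requestParameters"]["userName"]
```

The Lambda function would then list and deactivate the user's access keys and delete the login profile, while the SNS target notifies the security team in parallel.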

 

31. A company wants to set up a continuous delivery pipeline. The company stores application code in a private GitHub repository. The company needs to deploy the application components to Amazon Elastic Container Service (Amazon ECS), Amazon EC2, and AWS Lambda. The pipeline must support manual approval actions. Which solution will meet these requirements?

a) Use AWS CodePipeline with Amazon ECS, Amazon EC2, and Lambda as deploy providers.

b) Use AWS CodePipeline with AWS CodeDeploy as the deploy provider.

c) Use AWS CodePipeline with AWS Elastic Beanstalk as the deploy provider.

d) Use AWS CodeDeploy with GitHub integration to deploy the application.

Ans. b) Use AWS CodePipeline with AWS CodeDeploy as the deploy provider.

 

32. A company has an application that runs on Amazon EC2 instances that are in an Auto Scaling group. When the application starts up, the application needs to process data from an Amazon S3 bucket before the application can start to serve requests. The size of the data that is stored in the S3 bucket is growing. When the Auto Scaling group adds new instances, the application now takes several minutes to download and process the data before the application can serve requests. The company must reduce the time that elapses before new EC2 instances are ready to serve requests. Which solution is the MOST cost-effective way to reduce the application startup time?

a) Configure a warm pool for the Auto Scaling group with warmed EC2 instances in the Stopped state. Configure an autoscaling:EC2_INSTANCE_LAUNCHING lifecycle hook on the Auto Scaling group. Modify the application to complete the lifecycle hook when the application is ready to serve requests.

b) Increase the maximum instance count of the Auto Scaling group. Configure an autoscaling:EC2_INSTANCE_LAUNCHING lifecycle hook on the Auto Scaling group. Modify the application to complete the lifecycle hook when the application is ready to serve requests.

c) Configure a warm pool for the Auto Scaling group with warmed EC2 instances in the Running state. Configure an autoscaling:EC2_INSTANCE_LAUNCHING lifecycle hook on the Auto Scaling group. Modify the application to complete the lifecycle hook when the application is ready to serve requests.

d) Increase the maximum instance count of the Auto Scaling group. Configure an autoscaling:EC2_INSTANCE_LAUNCHING lifecycle hook on the Auto Scaling group. Modify the application to complete the lifecycle hook and to place the new instance in the Standby state when the application is ready to serve requests.

Ans. a) Configure a warm pool for the Auto Scaling group with warmed EC2 instances in the Stopped state. Configure an autoscaling:EC2_INSTANCE_LAUNCHING lifecycle hook on the Auto Scaling group. Modify the application to complete the lifecycle hook when the application is ready to serve requests.
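Completing the lifecycle hook is a single Auto Scaling API call the application makes once its data is loaded. A sketch of the `complete_lifecycle_action` parameters (group, hook, and instance names are illustrative):

```python
# Sketch: complete_lifecycle_action parameters the application sends
# once it is ready to serve requests. Names illustrative.
def complete_hook_request(group: str, hook: str, instance_id: str) -> dict:
    return {
        "AutoScalingGroupName": group,
        "LifecycleHookName": hook,
        "InstanceId": instance_id,
        # CONTINUE moves the instance to InService; ABANDON terminates it.
        "LifecycleActionResult": "CONTINUE",
    }
```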

 

33. A company is using an AWS CodeBuild project to build and package an application. The packages are copied to a shared Amazon S3 bucket before being deployed across multiple AWS accounts. The buildspec.yml file contains the following:

 version: 0.2
 phases:
   build:
     commands:
       - go build -o myapp
   post_build:
     commands:
       - aws s3 cp --acl authenticated-read myapp s3://artifacts/

The DevOps engineer has noticed that anybody with an AWS account is able to download the artifacts. What steps should the DevOps engineer take to stop this?

a) Modify the post_build command to use --acl public-read and configure a bucket policy that grants read access to the relevant AWS accounts only.

b) Configure a default ACL for the S3 bucket that defines the set of authenticated users as the relevant AWS accounts only and grants read-only access.

c) Create an S3 bucket policy that grants read access to the relevant AWS accounts and denies read access to the principal “*”.

d) Modify the post_build command to remove --acl authenticated-read and configure a bucket policy that allows read access to the relevant AWS accounts only.

Ans. d) Modify the post_build command to remove --acl authenticated-read and configure a bucket policy that allows read access to the relevant AWS accounts only.
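With the ACL removed, a bucket policy scopes read access to the deployment accounts. A sketch of such a policy (bucket name and account IDs are illustrative placeholders):

```python
import json

# Sketch: bucket policy granting object reads only to the listed AWS
# accounts. Bucket name and account IDs are illustrative.
def artifact_bucket_policy(bucket: str, account_ids: list) -> str:
    return json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {
                "AWS": [f"arn:aws:iam::{a}:root" for a in account_ids]
            },
            "Action": "s3:GetObject",
            "Resource": f"arn:aws:s3:::{bucket}/*",
        }],
    })
```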

 

34. A company has developed a serverless web application that is hosted on AWS. The application consists of Amazon S3, Amazon API Gateway, several AWS Lambda functions, and an Amazon RDS for MySQL database. The company is using AWS CodeCommit to store the source code. The source code is a combination of AWS Serverless Application Model (AWS SAM) templates and Python code. A security audit and penetration test reveal that user names and passwords for authentication to the database are hardcoded within CodeCommit repositories. A DevOps engineer must implement a solution to automatically detect and prevent hardcoded secrets. What is the MOST secure solution that meets these requirements?

a) Enable Amazon CodeGuru Profiler. Decorate the handler function with @with_lambda_profiler(). Manually review the recommendation report. Write the secret to AWS Systems Manager Parameter Store as a secure string. Update the SAM templates and the Python code to pull the secret from Parameter Store.

b) Associate the CodeCommit repository with Amazon CodeGuru Reviewer. Manually check the code review for any recommendations. Choose the option to protect the secret. Update the SAM templates and the Python code to pull the secret from AWS Secrets Manager.

c) Enable Amazon CodeGuru Profiler. Decorate the handler function with @with_lambda_profiler(). Manually review the recommendation report. Choose the option to protect the secret. Update the SAM templates and the Python code to pull the secret from AWS Secrets Manager.

d) Associate the CodeCommit repository with Amazon CodeGuru Reviewer. Manually check the code review for any recommendations. Write the secret to AWS Systems Manager Parameter Store as a string. Update the SAM templates and the Python code to pull the secret from Parameter Store.

Ans. b) Associate the CodeCommit repository with Amazon CodeGuru Reviewer. Manually check the code review for any recommendations. Choose the option to protect the secret. Update the SAM templates and the Python code to pull the secret from AWS Secrets Manager.

 

35. A company is using Amazon S3 buckets to store important documents. The company discovers that some S3 buckets are not encrypted. Currently, the company’s IAM users can create new S3 buckets without encryption. The company is implementing a new requirement that all S3 buckets must be encrypted. A DevOps engineer must implement a solution to ensure that server-side encryption is enabled on all existing S3 buckets and all new S3 buckets. The encryption must be enabled on new S3 buckets as soon as the S3 buckets are created. The default encryption type must be 256-bit Advanced Encryption Standard (AES-256). Which solution will meet these requirements?

a) Create an AWS Lambda function that is invoked periodically by an Amazon EventBridge scheduled rule. Program the Lambda function to scan all current S3 buckets for encryption status and to set AES-256 as the default encryption for any S3 bucket that does not have an encryption configuration.

b) Set up and activate the s3-bucket-server-side-encryption-enabled AWS Config managed rule. Configure the rule to use the AWS-EnableS3BucketEncryption AWS Systems Manager Automation runbook as the remediation action. Manually run the re-evaluation process to ensure that existing S3 buckets are compliant.

c) Create an AWS Lambda function that is invoked by an Amazon EventBridge event rule. Define the rule with an event pattern that matches the creation of new S3 buckets. Program the Lambda function to parse the EventBridge event, check the configuration of the S3 buckets from the event, and set AES-256 as the default encryption.

d) Configure an IAM policy that denies the s3:CreateBucket action if the s3:x-amz-server-side-encryption condition key has a value that is not AES-256. Create an IAM group for all the company’s IAM users. Associate the IAM policy with the IAM group.

Ans. b) Set up and activate the s3-bucket-server-side-encryption-enabled AWS Config managed rule. Configure the rule to use the AWS-EnableS3BucketEncryption AWS Systems Manager Automation runbook as the remediation action. Manually run the re-evaluation process to ensure that existing S3 buckets are compliant.
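The remediation effectively applies S3's default bucket encryption. A sketch of the `put_bucket_encryption` parameters the runbook would end up applying (bucket name illustrative):

```python
# Sketch: put_bucket_encryption parameters that set AES-256 as the
# bucket's default server-side encryption. Bucket name illustrative.
def default_encryption_request(bucket: str) -> dict:
    return {
        "Bucket": bucket,
        "ServerSideEncryptionConfiguration": {
            "Rules": [{
                "ApplyServerSideEncryptionByDefault": {
                    "SSEAlgorithm": "AES256"
                }
            }]
        },
    }
```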

 

36. A DevOps engineer is architecting a continuous development strategy for a company’s software as a service (SaaS) web application running on AWS. For application and security reasons, users subscribing to this application are distributed across multiple Application Load Balancers (ALBs), each of which has a dedicated Auto Scaling group and fleet of Amazon EC2 instances. The application does not require a build stage, and when it is committed to AWS CodeCommit, the application must trigger a simultaneous deployment to all ALBs, Auto Scaling groups, and EC2 fleets. Which architecture will meet these requirements with the LEAST amount of configuration?

a) Create a single AWS CodePipeline pipeline that deploys the application in parallel using unique AWS CodeDeploy applications and deployment groups created for each ALB-Auto Scaling group pair.

b) Create a single AWS CodePipeline pipeline that deploys the application using a single AWS CodeDeploy application and single deployment group.

c) Create a single AWS CodePipeline pipeline that deploys the application in parallel using a single AWS CodeDeploy application and unique deployment group for each ALB-Auto Scaling group pair.

d) Create an AWS CodePipeline pipeline for each ALB-Auto Scaling group pair that deploys the application using an AWS CodeDeploy application and deployment group created for the same ALB-Auto Scaling group pair.

Ans. c) Create a single AWS CodePipeline pipeline that deploys the application in parallel using a single AWS CodeDeploy application and unique deployment group for each ALB-Auto Scaling group pair.
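In a CodePipeline stage, actions that share the same runOrder run in parallel. A sketch of the deploy stage with one CodeDeploy deployment group per ALB-Auto Scaling group pair might look like the following (application, deployment group, and artifact names are hypothetical):

```json
{
  "name": "Deploy",
  "actions": [
    {
      "name": "DeployFleetA",
      "actionTypeId": { "category": "Deploy", "owner": "AWS", "provider": "CodeDeploy", "version": "1" },
      "configuration": {
        "ApplicationName": "saas-web-app",
        "DeploymentGroupName": "fleet-a-deployment-group"
      },
      "inputArtifacts": [{ "name": "SourceOutput" }],
      "runOrder": 1
    },
    {
      "name": "DeployFleetB",
      "actionTypeId": { "category": "Deploy", "owner": "AWS", "provider": "CodeDeploy", "version": "1" },
      "configuration": {
        "ApplicationName": "saas-web-app",
        "DeploymentGroupName": "fleet-b-deployment-group"
      },
      "inputArtifacts": [{ "name": "SourceOutput" }],
      "runOrder": 1
    }
  ]
}
```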

 

37. A company is hosting a static website from an Amazon S3 bucket. The website is available to customers at example.com. The company uses an Amazon Route 53 weighted routing policy with a TTL of 1 day. The company has decided to replace the existing static website with a dynamic web application. The dynamic web application uses an Application Load Balancer (ALB) in front of a fleet of Amazon EC2 instances. On the day of production launch to customers, the company creates an additional Route 53 weighted DNS record entry that points to the ALB with a weight of 255 and a TTL of 1 hour. Two days later, a DevOps engineer notices that the previous static website is displayed sometimes when customers navigate to example.com. How can the DevOps engineer ensure that the company serves only dynamic content for example.com?

a) Delete all objects, including previous versions, from the S3 bucket that contains the static website content.

b) Update the weighted DNS record entry that points to the S3 bucket. Apply a weight of 0. Specify the domain reset option to propagate changes immediately.

c) Configure webpage redirect requests on the S3 bucket with a hostname that redirects to the ALB.

d) Remove the weighted DNS record entry that points to the S3 bucket from the example.com hosted zone. Wait for DNS propagation to become complete.

Ans. d) Remove the weighted DNS record entry that points to the S3 bucket from the example.com hosted zone. Wait for DNS propagation to become complete.
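Removing the weighted record can be done with the Route 53 ChangeResourceRecordSets API. A sketch of the change batch is shown below; a DELETE must exactly match the existing record, so the SetIdentifier, Weight, and alias target shown here are assumptions (the alias hosted zone ID is the one commonly used for S3 website endpoints in us-east-1):

```json
{
  "Changes": [
    {
      "Action": "DELETE",
      "ResourceRecordSet": {
        "Name": "example.com.",
        "Type": "A",
        "SetIdentifier": "static-site",
        "Weight": 100,
        "AliasTarget": {
          "HostedZoneId": "Z3AQBSTGFYJSTF",
          "DNSName": "s3-website-us-east-1.amazonaws.com.",
          "EvaluateTargetHealth": false
        }
      }
    }
  ]
}
```

Even after the record is removed, resolvers may keep serving the old answer until the 1-day TTL expires.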

 

38. A company is implementing AWS CodePipeline to automate its testing process. The company wants to be notified when the execution state fails, and has used the following custom event pattern in Amazon EventBridge:

{
  "source": ["aws.codepipeline"],
  "detail-type": ["CodePipeline Action Execution State Change"],
  "detail": {
    "state": ["FAILED"],
    "type": {
      "category": ["Approval"]
    }
  }
}

Which type of events will match this event pattern?

a) Failed deploy and build actions across all the pipelines

b) All rejected or failed approval actions across all the pipelines

c) All the events across all pipelines

d) Approval actions across all the pipelines

Ans. b) All rejected or failed approval actions across all the pipelines

 

39. An application running on a set of Amazon EC2 instances in an Auto Scaling group requires a configuration file to operate. The instances are created and maintained with AWS CloudFormation. A DevOps engineer wants the instances to have the latest configuration file when launched, and wants changes to the configuration file to be reflected on all the instances with a minimal delay when the CloudFormation template is updated. Company policy requires that application configuration files be maintained along with AWS infrastructure configuration files in source control. Which solution will accomplish this?

a) In the CloudFormation template, add an AWS Config rule. Place the configuration file content in the rule’s InputParameters property, and set the Scope property to the EC2 Auto Scaling group. Add an AWS Systems Manager Resource Data Sync resource to the template to poll for updates to the configuration.

b) In the CloudFormation template, add an EC2 launch template resource. Place the configuration file content in the launch template. Configure the cfn-init script to run when the instance is launched, and configure the cfn-hup script to poll for updates to the configuration.

c) In the CloudFormation template, add an EC2 launch template resource. Place the configuration file content in the launch template. Add an AWS Systems Manager Resource Data Sync resource to the template to poll for updates to the configuration.

d) In the CloudFormation template, add CloudFormation init metadata. Place the configuration file content in the metadata. Configure the cfn-init script to run when the instance is launched, and configure the cfn-hup script to poll for updates to the configuration.

Ans. d) In the CloudFormation template, add CloudFormation init metadata. Place the configuration file content in the metadata. Configure the cfn-init script to run when the instance is launched, and configure the cfn-hup script to poll for updates to the configuration.
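A minimal sketch of this pattern is shown below (resource names, file paths, and the configuration file content are assumptions). The configuration file lives in the template's init metadata, cfn-init writes it at launch, and cfn-hup polls the stack metadata and re-runs cfn-init when it changes; the user data that invokes cfn-init on first boot is omitted here:

```yaml
Resources:
  AppLaunchTemplate:
    Type: AWS::EC2::LaunchTemplate
    Metadata:
      AWS::CloudFormation::Init:
        config:
          files:
            /etc/myapp/app.conf:            # application config kept in source control
              content: |
                log_level = info
              mode: "000644"
            /etc/cfn/cfn-hup.conf:          # cfn-hup polls for metadata changes
              content: !Sub |
                [main]
                stack=${AWS::StackName}
                region=${AWS::Region}
                interval=2
            /etc/cfn/hooks.d/reload.conf:   # re-run cfn-init when metadata changes
              content: !Sub |
                [cfn-auto-reloader-hook]
                triggers=post.update
                path=Resources.AppLaunchTemplate.Metadata.AWS::CloudFormation::Init
                action=/opt/aws/bin/cfn-init -v --stack ${AWS::StackName} --resource AppLaunchTemplate --region ${AWS::Region}
          services:
            sysvinit:
              cfn-hup:
                enabled: true
                ensureRunning: true
                files: ["/etc/cfn/cfn-hup.conf", "/etc/cfn/hooks.d/reload.conf"]
```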

 

40. A company recently experienced a disruption for multiple applications. A modification to an Amazon Route 53 public hosted zone caused the disruption. The company must identify any user activity related to the Route 53 hosted zone. The company has logging activated in Amazon VPC, AWS Config, and Route 53. Which solution will meet these requirements?

a) Evaluate VPC Flow Logs records in Amazon CloudWatch Logs for any activity related to Route 53.

b) Evaluate DNS query logs in Amazon CloudWatch Logs for any activity related to Route 53.

c) Evaluate AWS CloudTrail Insights events in the AWS Management Console for any activity related to Route 53.

d) Evaluate the resource timeline for the Route 53 hosted zone in AWS Config for any related activity.

Ans. d) Evaluate the resource timeline for the Route 53 hosted zone in AWS Config for any related activity.

 

41. A company decided to adopt a data classification scheme to help the company assess data protection. The data is classified into three levels: low, medium, and high. The data is stored across multiple AWS services, such as Amazon Elastic Block Store (Amazon EBS), Amazon S3, Amazon Elastic File System (Amazon EFS), and Amazon RDS. All data must be classified in accordance with this scheme. Any data that is identified as part of the high level must be backed up every hour. Data that is identified as part of the medium level must be backed up every 6 hours. Data that is identified as part of the low level must be backed up once a day. Which solution will meet these requirements?

a) Use AWS CloudFormation to create each resource and include the resource tag properties in accordance with the classification scheme of low, medium, or high. Use AWS CloudFormation Linter to validate the tag values. Create 3 backup plans in AWS Backup with each plan able to select resources based on the 3 classification tag values. Define the backup frequency as 1 hour, 6 hours, and 24 hours for each backup plan.

b) Define tags for all resources based on the classification scheme of low, medium, or high. Configure an AWS Config rule to ensure that the classification tag is present on all resources. Create 3 backup plans in AWS Backup with each plan able to select resources based on the 3 classification tag values. Define the backup frequency as 1 hour, 6 hours, and 24 hours for each backup plan.

c) Set up 3 separate OUs in AWS Organizations based on the classification scheme of low, medium, or high. Assign workloads and store data within accounts assigned to 1 of the 3 OUs. Use Amazon Data Lifecycle Manager to create a lifecycle policy in each account. Schedule the policies' frequency as 1 hour, 6 hours, and 24 hours, in accordance with the classification scheme.

d) Create 3 custom data identifiers in Amazon Macie that match the low, medium, and high classification levels. Schedule a sensitive data discovery job. Evaluate the findings for noncompliant resources in accordance with the classification scheme. Use Amazon Data Lifecycle Manager to create a lifecycle policy in each account. Schedule the policies' frequency as 1 hour, 6 hours, or 24 hours in accordance with the classification scheme.

Ans. b) Define tags for all resources based on the classification scheme of low, medium, or high. Configure an AWS Config rule to ensure that the classification tag is present on all resources. Create 3 backup plans in AWS Backup with each plan able to select resources based on the 3 classification tag values. Define the backup frequency as 1 hour, 6 hours, and 24 hours for each backup plan.
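For illustration, the backup plan for the high classification level would use an hourly schedule expression such as "cron(0 * * * ? *)", and its tag-based resource selection might look like the following input to the AWS Backup CreateBackupSelection API (selection name, tag key, and role ARN are assumptions):

```json
{
  "BackupSelection": {
    "SelectionName": "high-classification-resources",
    "IamRoleArn": "arn:aws:iam::111122223333:role/service-role/AWSBackupDefaultServiceRole",
    "ListOfTags": [
      {
        "ConditionType": "STRINGEQUALS",
        "ConditionKey": "Classification",
        "ConditionValue": "high"
      }
    ]
  }
}
```

The medium and low plans would reuse the same shape with ConditionValue "medium" and "low" and 6-hour and daily schedules.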

 

42. A company runs a web backend microservices application on a fleet of Amazon EC2 instances. The application recently had an outage that was caused by a misconfigured Auto Scaling group. An engineer manually configured the Auto Scaling group's minimum, desired, and maximum capacity settings. These changes caused the Auto Scaling group to not scale correctly to meet demand. The changes went unnoticed until the outage occurred. A DevOps engineer must implement a solution that enforces correct configurations for the Auto Scaling group in near real time. Which solution will meet these requirements with the LEAST operational overhead?

a) Create an AWS Systems Manager Automation runbook that contains logic to enforce correct Auto Scaling group configurations. Create an AWS Lambda function with logic that evaluates the configurations for an Auto Scaling group. Use Amazon EventBridge to schedule the AWS Lambda function to run hourly. Set up the Lambda function to run the runbook when misconfigurations are found.

b) Use a Guard Custom policy to create an AWS Config custom rule to monitor the Auto Scaling group configurations. Create an AWS Lambda function that contains the logic to enforce correct Auto Scaling group configurations. Use Amazon EventBridge to create a rule to monitor AWS Config events. Run the Lambda function in response to the AWS Config event.

c) Turn on AWS CloudTrail in the AWS account. Create an AWS Systems Manager Automation runbook that contains the logic to enforce correct Auto Scaling group configurations. Create an AWS Lambda function that contains logic that ingests AWS CloudTrail logs, evaluates CloudTrail events related to Auto Scaling groups, and runs the runbook for automatic remediation. Use Amazon EventBridge to schedule the Lambda function to run hourly.

d) Use a Guard Custom policy to create an AWS Config custom rule to monitor the Auto Scaling group configurations. Create an AWS Systems Manager Automation runbook that contains the logic to enforce correct Auto Scaling group configurations. Select the Systems Manager Automation runbook to be used for automatic remediation in AWS Config.

Ans. d) Use a Guard Custom policy to create an AWS Config custom rule to monitor the Auto Scaling group configurations. Create an AWS Systems Manager Automation runbook that contains the logic to enforce correct Auto Scaling group configurations. Select the Systems Manager Automation runbook to be used for automatic remediation in AWS Config.
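As a rough sketch, a Guard custom policy for the Config rule could evaluate the Auto Scaling group's configuration item like this (the rule name and capacity values are placeholders for the company's correct settings):

```
# Guard custom policy evaluated by an AWS Config custom rule.
# Flags any Auto Scaling group whose capacity settings drift from policy.
rule asg_capacity_is_correct when resourceType == "AWS::AutoScaling::AutoScalingGroup" {
    configuration.minSize == 2
    configuration.maxSize == 20
    configuration.desiredCapacity >= configuration.minSize
}
```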

 

43. A company runs an online store that receives an increase in web traffic after the launch of a successful online marketing campaign. The company runs 10 Amazon EC2 instances across 2 Availability Zones. A public Application Load Balancer (ALB) distributes the traffic across these instances. The EC2 instances are in an Amazon EC2 Auto Scaling group. The Auto Scaling group can scale up to 20 instances during peak hours. The increased traffic puts a strain on a third-party payment gateway that the company uses for payments. When an order is placed during checkout, the payment data is immediately sent from the EC2 instances to the third-party API. The payment gateway company cannot handle the volume of payments coming into its system during peak hours. Many payments are not being processed and are being lost. Orders are failing when customers attempt to complete the checkout process. The company must implement a solution to stop the payments from failing. Which solution will meet these requirements?

a) Create an Amazon Simple Notification Service (Amazon SNS) topic. Modify the checkout process to send the orders to the topic. Create an AWS Lambda function to subscribe to the topic and send the orders to the third-party API. Subscribe the Lambda function to the topic.

b) Change the Auto Scaling group maximum from 20 to 10. Create an AWS Lambda function that connects to the third-party API. Modify the checkout process to send the orders to the Lambda function that connects to the third-party API.

c) Create an ALB to balance the load from the EC2 instances to the third-party API. Use a health check to verify that the API is healthy before sending the orders.

d) Create a FIFO queue in Amazon Simple Queue Service (Amazon SQS). Modify the checkout process to send the orders to the queue. Create an AWS Lambda function to poll the queue and send the orders to the third-party API.

Ans. d) Create a FIFO queue in Amazon Simple Queue Service (Amazon SQS). Modify the checkout process to send the orders to the queue. Create an AWS Lambda function to poll the queue and send the orders to the third-party API.
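The queue decouples checkout from the third-party API so orders are buffered rather than lost. A FIFO queue name must end in .fifo, and every send must include a MessageGroupId; a sketch of the SendMessage parameters (queue URL, group ID, and order payload are hypothetical) might be:

```json
{
  "QueueUrl": "https://sqs.us-east-1.amazonaws.com/111122223333/orders.fifo",
  "MessageBody": "{\"orderId\": \"1234\", \"amount\": 49.99}",
  "MessageGroupId": "checkout-orders",
  "MessageDeduplicationId": "order-1234"
}
```

The Lambda poller can then forward orders to the payment gateway at a rate the gateway can handle, retrying failures instead of dropping them.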

 

44. A company plans to deploy an application on a set of Amazon EC2 instances and on-premises servers in its development environment. The development team wants to collect custom metrics and logs from the application hosts. Which solution will meet this requirement with the LEAST operational overhead?

a) Store the Amazon CloudWatch agent configuration file in Amazon S3. Use EC2 user data to install and configure the CloudWatch agent.

b) Store the Amazon CloudWatch agent configuration file in the AWS Systems Manager Parameter Store. Use AWS Systems Manager State Manager to install and configure the Amazon CloudWatch agent.

c) Store the Amazon CloudWatch agent configuration file in AWS AppConfig, a capability of AWS Systems Manager. Use AppConfig to install and configure the CloudWatch agent.

d) Store the Amazon CloudWatch agent configuration file in the AWS Systems Manager Parameter Store. Use AWS Systems Manager Quick Setup Host Management to install and configure the CloudWatch agent.

Ans. b) Store the Amazon CloudWatch agent configuration file in the AWS Systems Manager Parameter Store. Use AWS Systems Manager State Manager to install and configure the Amazon CloudWatch agent.
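A minimal CloudWatch agent configuration file, as it might be stored in a Parameter Store parameter, is sketched below (namespace, log file path, and log group name are assumptions):

```json
{
  "metrics": {
    "namespace": "DevApp",
    "metrics_collected": {
      "mem": { "measurement": ["mem_used_percent"] },
      "disk": { "measurement": ["used_percent"], "resources": ["/"] }
    }
  },
  "logs": {
    "logs_collected": {
      "files": {
        "collect_list": [
          {
            "file_path": "/var/log/myapp/app.log",
            "log_group_name": "dev-app-logs",
            "log_stream_name": "{instance_id}"
          }
        ]
      }
    }
  }
}
```

A State Manager association can then install the agent and start it with this parameter on both the EC2 instances and the hybrid-activated on-premises servers.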

 

45. A company has an application that runs on Amazon EC2 instances in AWS. The application writes error messages to an error.log file that gets overwritten. The application administrator must be notified immediately when errors in the application occur so that the administrator can resolve problems and, therefore, meet their service-level agreement (SLA). The company also wants to keep a copy of the error log data for future reference. The company created an Amazon S3 bucket for long-term storage of the error logs. Which solution will meet these requirements?

a) Install and configure an Amazon Kinesis agent on the EC2 instances to monitor the error log and forward error records to an Amazon Kinesis Data Stream. Configure Kinesis Data Stream with an AWS Lambda function consumer that will send the notification to the application administrator. Configure the S3 bucket as the second consumer for the stream.

b) Install and configure an Amazon Kinesis Agent on the EC2 instances to monitor the error log and forward error records to an Amazon Kinesis Data Stream. Configure the Kinesis Data Stream with an AWS Lambda function consumer that will send the notification to the application administrator. As a second consumer, configure an Amazon Kinesis Data Firehose to transfer the records to the S3 bucket.

c) Install and configure an Amazon Kinesis agent on the EC2 instances to monitor the error log and forward error records to an Amazon Kinesis Data Firehose Stream. Configure the Kinesis Data Stream with an AWS Lambda function consumer that will send the notification to the application administrator. Configure the S3 bucket as the second consumer for the stream.

d) Install and configure an Amazon Kinesis agent on the EC2 instances to monitor the error log and forward error records to an Amazon Kinesis Data Firehose stream. Configure the S3 bucket as the consumer for the stream. Configure an Amazon S3 Event Notification to send a notification to the application administrator whenever an object is loaded to the bucket.

Ans. b) Install and configure an Amazon Kinesis Agent on the EC2 instances to monitor the error log and forward error records to an Amazon Kinesis Data Stream. Configure the Kinesis Data Stream with an AWS Lambda function consumer that will send the notification to the application administrator. As a second consumer, configure an Amazon Kinesis Data Firehose to transfer the records to the S3 bucket.
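The Kinesis agent is configured through a JSON file (typically /etc/aws-kinesis/agent.json) that maps log files to streams. A sketch of the flow for this scenario, with an assumed log path and stream name, might be:

```json
{
  "cloudwatch.emitMetrics": true,
  "flows": [
    {
      "filePattern": "/var/log/myapp/error.log*",
      "kinesisStream": "error-log-stream",
      "partitionKeyOption": "RANDOM"
    }
  ]
}
```

Because the stream retains records independently of the file, errors survive even though error.log itself gets overwritten.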

 

46. A company deploys a web application with AWS Elastic Beanstalk. The web application experiences predictable and immediate increases in traffic each day at 09:00. The increased traffic requires three times more Amazon EC2 instances than normal traffic requires. Traffic gradually returns to previous levels after a few hours. The company needs to avoid losing requests because of insufficient capacity. Which solution will meet the web application's scaling requirements?

a) Configure Elastic Beanstalk to include an Amazon EC2 Auto Scaling group. Configure the Auto Scaling group to use a step scaling policy with periods set at 1-minute intervals. Set the highest step to 3 times the number of instances.

b) Configure Elastic Beanstalk to include an Amazon EC2 Auto Scaling group. Include a .ebextensions/schedule.config file within the application's source code. Include a scheduled scaling policy in the file to increase the initial desired capacity on a recurring schedule at 08:50 each day.

c) Configure Elastic Beanstalk to not include an Amazon EC2 Auto Scaling group. Use AWS CloudFormation to create a separate Auto Scaling group. Schedule an AWS Lambda function to triple the initial desired capacity of the Auto Scaling group at 08:50 each day.

d) Configure Elastic Beanstalk to not include an Amazon EC2 Auto Scaling group. Use AWS CloudFormation to create a separate Auto Scaling group. Schedule an Amazon EventBridge event rule to run at 08:50 each day. Set the rule to triple the initial desired capacity of the Auto Scaling group.

Ans. b) Configure Elastic Beanstalk to include an Amazon EC2 Auto Scaling group. Include a .ebextensions/schedule.config file within the application's source code. Include a scheduled scaling policy in the file to increase the initial desired capacity on a recurring schedule at 08:50 each day.
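A sketch of the .ebextensions/schedule.config file is shown below; the scheduled action name and capacity values are assumptions (the Recurrence cron expression fires at 08:50 each day, just before the spike):

```yaml
option_settings:
  - namespace: aws:autoscaling:scheduledaction
    resource_name: MorningScaleOut
    option_name: MinSize
    value: '6'
  - namespace: aws:autoscaling:scheduledaction
    resource_name: MorningScaleOut
    option_name: DesiredCapacity
    value: '6'
  - namespace: aws:autoscaling:scheduledaction
    resource_name: MorningScaleOut
    option_name: Recurrence
    value: '50 8 * * *'
```

A second scheduled action could scale capacity back down once traffic returns to normal levels.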

 

47. A company has an application that runs in an Amazon EC2 Auto Scaling group behind an Application Load Balancer (ALB). Data for the application is stored in Amazon DynamoDB tables. All resources are currently located in a single AWS Region. The company uses Amazon Route 53 with a simple routing policy. The company wants to implement Regional protection for the application so that traffic is automatically routed to a secondary Region if the primary application becomes unavailable. Which solution will meet these requirements with the LEAST implementation effort?

a) Create new DynamoDB global tables with a replica in a secondary Region. Copy the current data to the global tables. Create an EC2 Auto Scaling group and an ALB in the secondary Region. Configure Route 53 with a failover routing policy that points to the primary and secondary ALBs.

b) Create new DynamoDB global tables with a replica in a secondary Region. Copy the current data to the global tables. Create an EC2 Auto Scaling group and an ALB in the secondary Region. Configure Route 53 with a latency-based routing policy that points to the primary and secondary ALBs.

c) Convert the current DynamoDB tables to DynamoDB global tables. Create an EC2 Auto Scaling group and an ALB in the secondary Region. Configure Route 53 with a latency-based routing policy that points to the primary and secondary ALBs.

d) Convert the current DynamoDB tables to DynamoDB global tables. Create an EC2 Auto Scaling group and an ALB in the secondary Region. Configure Route 53 with a failover routing policy that points to the primary and secondary ALBs.

Ans. d) Convert the current DynamoDB tables to DynamoDB global tables. Create an EC2 Auto Scaling group and an ALB in the secondary Region. Configure Route 53 with a failover routing policy that points to the primary and secondary ALBs.
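A failover routing policy uses a pair of records with the same name, distinguished by Failover roles. A sketch of the Route 53 change batch is below; the record name, ALB DNS names, and the bracketed IDs are placeholders:

```json
{
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "app.example.com.",
        "Type": "A",
        "SetIdentifier": "primary",
        "Failover": "PRIMARY",
        "AliasTarget": {
          "HostedZoneId": "<primary-alb-hosted-zone-id>",
          "DNSName": "primary-alb-123.us-east-1.elb.amazonaws.com.",
          "EvaluateTargetHealth": true
        }
      }
    },
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "app.example.com.",
        "Type": "A",
        "SetIdentifier": "secondary",
        "Failover": "SECONDARY",
        "AliasTarget": {
          "HostedZoneId": "<secondary-alb-hosted-zone-id>",
          "DNSName": "secondary-alb-456.us-west-2.elb.amazonaws.com.",
          "EvaluateTargetHealth": true
        }
      }
    }
  ]
}
```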

 

48. A company is in its initial migration process to the AWS Cloud. The company plans to create one account for each workload. The company mandates that each account must have only one VPC. A DevOps team must customize each VPC for the workload requirements. Additionally, the DevOps team must implement custom security controls for each workload. The DevOps team must implement a solution that includes an account creation process that meets the account and VPC requirements. The solution also must give the DevOps team the ability to restrict additional VPC creation in the future. Which solution should the DevOps team use to meet these requirements?

a) Configure AWS Organizations for account creation. Create stack sets in AWS CloudFormation StackSets to create a custom VPC and security controls for each account.

b) Configure AWS Organizations for account creation. Create an AWS Service Catalog product in portfolios to create a custom VPC and security controls for each account.

c) Configure AWS Organizations for account creation. Edit the AWS Control Tower Account Factory network configuration to allow administrators to define a custom VPC later during the account creation. Create an AWS Service Catalog product in portfolios to create custom security controls for each account.

d) Configure AWS Control Tower for account creation. Edit the AWS Control Tower Account Factory network configuration to create accounts without an associated VPC. Create an AWS Service Catalog product in portfolios to create a custom VPC and security controls for each account.

Ans. d) Configure AWS Control Tower for account creation. Edit the AWS Control Tower Account Factory network configuration to create accounts without an associated VPC. Create an AWS Service Catalog product in portfolios to create a custom VPC and security controls for each account.

 

49. A developer deploys a web application by using AWS Elastic Beanstalk. The application uses an Amazon DynamoDB table to cache user session data. The developer wants to ensure that the DynamoDB table is not deleted when the Elastic Beanstalk environment is updated or deleted. Which solution will meet these requirements with the LEAST operational overhead?

a) Create the DynamoDB table from the AWS Management Console. Deploy a web environment to run the application. Log in to the Amazon EC2 instances that were created in the environment. Configure the connection to the DynamoDB table.

b) Define the DynamoDB table while configuring the database in the Elastic Beanstalk console. Deploy a web environment to run the application.

c) Use the .ebextensions configuration files in Elastic Beanstalk to create the DynamoDB table. Deploy a web environment to run the application.

d) Define the DynamoDB table while configuring the database in the Elastic Beanstalk console. Deploy a worker environment to run the application.

Ans. c) Use the .ebextensions configuration files in Elastic Beanstalk to create the DynamoDB table. Deploy a web environment to run the application.
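A sketch of such an .ebextensions config file is shown below (table and attribute names are hypothetical). The DeletionPolicy: Retain attribute is what keeps the table when the environment's CloudFormation stack is updated or deleted:

```yaml
Resources:
  SessionCacheTable:
    Type: AWS::DynamoDB::Table
    # Retain keeps the table even if the Elastic Beanstalk
    # environment (and its stack) is updated or deleted.
    DeletionPolicy: Retain
    Properties:
      BillingMode: PAY_PER_REQUEST
      AttributeDefinitions:
        - AttributeName: sessionId
          AttributeType: S
      KeySchema:
        - AttributeName: sessionId
          KeyType: HASH
```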

 

50. A development team uses AWS CodeBuild to build an application artifact. The application will run in a large Docker container. The CodeBuild project runs one build at a time at a frequent interval. The builds take a long time to finish running. The team wants to reduce the build runtime. How should the team update the CodeBuild project to create artifacts in the LEAST amount of time?

a) Specify an Amazon S3 cache type with a cache bucket.

b) Specify an Amazon S3 cache type with Docker layer cache mode.

c) Specify a local cache type with source cache mode.

d) Specify a local cache type with Docker layer cache mode.

Ans. d) Specify a local cache type with Docker layer cache mode.
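For illustration, the cache setting in the CodeBuild project configuration (for example, in an UpdateProject call) might look like this fragment; note that Docker layer caching also requires the build environment to run in privileged mode:

```json
{
  "cache": {
    "type": "LOCAL",
    "modes": ["LOCAL_DOCKER_LAYER_CACHE"]
  }
}
```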
