
Free Amazon Web Services DOP-C02 Practice Exam with Questions & Answers | Set: 7

Question 61

A company has set up AWS CodeArtifact repositories with public upstream repositories. The company's development team consumes open source dependencies from the repositories in the company's internal network.

The company's security team recently discovered a critical vulnerability in the most recent version of a package that the development team consumes. The security team has produced a patched version to fix the vulnerability. The company needs to prevent the vulnerable version from being downloaded. The company also needs to allow the security team to publish the patched version.

Which combination of steps will meet these requirements? (Select TWO.)

Options:
A.

Update the status of the affected CodeArtifact package version to unlisted.

B.

Update the status of the affected CodeArtifact package version to deleted.

C.

Update the status of the affected CodeArtifact package version to archived.

D.

Update the CodeArtifact package origin control settings to allow direct publishing and to block upstream operations.

E.

Update the CodeArtifact package origin control settings to block direct publishing and to allow upstream operations.
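For illustration, the origin control change described in option D and the version removal described in option B can be scripted with boto3. A minimal sketch, assuming hypothetical domain, repository, and package names:

    import boto3

    codeartifact = boto3.client("codeartifact")

    # Hypothetical identifiers for illustration.
    DOMAIN, REPO, PACKAGE, BAD_VERSION = "my-domain", "my-repo", "examplepkg", "1.2.3"

    # Block new versions from the public upstream while allowing the security
    # team to publish the patched version directly into the repository.
    codeartifact.put_package_origin_configuration(
        domain=DOMAIN,
        repository=REPO,
        format="pypi",
        package=PACKAGE,
        restrictions={"publish": "ALLOW", "upstream": "BLOCK"},
    )

    # Remove the vulnerable version so that it can no longer be downloaded.
    codeartifact.delete_package_versions(
        domain=DOMAIN,
        repository=REPO,
        format="pypi",
        package=PACKAGE,
        versions=[BAD_VERSION],
    )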

Question 62

A company is developing a web application that runs on Amazon EC2 Linux instances. The application requires monitoring of custom performance metrics. The company must collect metrics for API response times and database query latency across multiple instances.

Which solution will generate the custom metrics with the LEAST operational overhead?

Options:
A.

Install the Amazon CloudWatch agent on the instances. Configure the agent to collect the custom metrics. Instrument the application to send the metrics to the agent.

B.

Use Amazon Managed Service for Prometheus to scrape the custom metrics from the application. Use the Amazon CloudWatch agent to forward the metrics to CloudWatch.

C.

Create a custom AWS Lambda function that polls the application endpoints and database at regular intervals. Program the Lambda function to calculate the custom metrics and to send the metrics to Amazon CloudWatch by using PutMetricData API calls.

D.

Implement custom logging in the application code to record the custom metrics. Use Amazon CloudWatch Logs Insights to extract and analyze the metrics.
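For context, the CloudWatch agent in option A can ingest custom application metrics through its embedded StatsD listener, so the application only needs to emit plain UDP lines. A minimal sketch, assuming the agent's configuration file enables a "statsd" section on the default port 8125:

    import socket
    import time

    # Assumed agent configuration (amazon-cloudwatch-agent.json) contains:
    # {"metrics": {"metrics_collected": {"statsd": {"service_address": ":8125"}}}}
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

    def put_timing(metric_name: str, millis: float) -> None:
        # StatsD timing line, for example "api.response_time:42.7|ms".
        sock.sendto(f"{metric_name}:{millis:.1f}|ms".encode(), ("127.0.0.1", 8125))

    start = time.monotonic()
    # ... handle the API request or run the database query here ...
    put_timing("api.response_time", (time.monotonic() - start) * 1000)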

Question 63

A company uses AWS CodeArtifact to centrally store Python packages. The CodeArtifact repository is configured with the following repository policy:

"Version": ”2012-10-17”,

"Statement”: [

{

"Action": [

"codeartifact:DescribePackageVersion", "codeartifact:DescribeRepository",

"codeartifact:GetPackageVersionReadme", "codeartifact:GetRepositoryEndpoint", "codeartifact:ListPackageVersionAssets", '’codeartifact: ListPackageVersionDependencies", "codeartifact:ListPackageVersions", '’codeartifact :ListPackages",

'’codeartifact: ReadFromRepository"

],

"Effect": "Allow",

"Resource": "*",

"Principal":

"Condition": {

"StringEquals": {

"aws:PrincipalOrglD": [ "o-xxxxxxxxxxx"

]

}

}

}

]

A development team is building a new project in an account that is in an organization in AWS Organizations. The development team wants to use a Python library that has already been stored in the CodeArtifact repository in the organization. The development team uses AWS CodePipeline and AWS CodeBuild to build the new application. The CodeBuild job that the development team uses to build the application is configured to run in a VPC. Because of compliance requirements, the VPC has no internet connectivity.

The development team creates the VPC endpoints for CodeArtifact and updates the CodeBuild buildspec.yaml file. However, the development team cannot download the Python library from the repository.

Which combination of steps should a DevOps engineer take so that the development team can use CodeArtifact? (Select TWO.)

Options:
A.

Create an Amazon S3 gateway endpoint. Update the route tables for the subnets that are running the CodeBuild job.

B.

Update the repository policy's Principal statement to include the ARN of the role that the CodeBuild project uses.

C.

Share the CodeArtifact repository with the organization by using AWS Resource Access Manager (AWS RAM).

D.

Update the role that the CodeBuild project uses so that the role has sufficient permissions to use the CodeArtifact repository.

E.

Specify the account that hosts the repository as the delegated administrator for CodeArtifact in the organization.
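For illustration, option A's gateway endpoint and option D's role permissions can be applied with boto3. A sketch with hypothetical VPC, route table, Region, and role names; the sts:GetServiceBearerToken statement is the documented companion to codeartifact:GetAuthorizationToken:

    import json

    import boto3

    ec2 = boto3.client("ec2")
    iam = boto3.client("iam")

    # CodeArtifact stores package assets in Amazon S3, so the VPC also needs
    # an S3 gateway endpoint (hypothetical VPC and route table IDs).
    ec2.create_vpc_endpoint(
        VpcId="vpc-0123456789abcdef0",
        ServiceName="com.amazonaws.us-east-1.s3",  # match the build's Region
        VpcEndpointType="Gateway",
        RouteTableIds=["rtb-0123456789abcdef0"],
    )

    # Grant the CodeBuild service role the permissions a CodeArtifact
    # consumer needs (role and policy names are hypothetical).
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                    "codeartifact:GetAuthorizationToken",
                    "codeartifact:GetRepositoryEndpoint",
                    "codeartifact:ReadFromRepository",
                ],
                "Resource": "*",
            },
            {
                "Effect": "Allow",
                "Action": "sts:GetServiceBearerToken",
                "Resource": "*",
                "Condition": {
                    "StringEquals": {"sts:AWSServiceName": "codeartifact.amazonaws.com"}
                },
            },
        ],
    }
    iam.put_role_policy(
        RoleName="codebuild-project-role",
        PolicyName="CodeArtifactConsumer",
        PolicyDocument=json.dumps(policy),
    )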

Question 64

A company deployed an Amazon CloudFront distribution that accepts requests and routes to an Amazon API Gateway HTTP API. During a recent security audit, the company discovered that requests from the internet could reach the HTTP API without using the CloudFront distribution.

A DevOps engineer must ensure that connections to the HTTP API use the CloudFront distribution.

Which solution will meet these requirements?

Options:
A.

Enable VPC Flow Logs to identify requests that reach the HTTP API.

B.

Deploy AWS WAF in front of the CloudFront distribution.

C.

Implement an identity-based policy on the CloudFront distribution that requires authentication to make requests to the HTTP API.

D.

Implement a custom header in the CloudFront distribution. Implement an AWS Lambda authorizer associated with the HTTP API that verifies the custom header.
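For context, the authorizer in option D is typically only a few lines: CloudFront adds a secret custom header to every origin request, and a Lambda authorizer on the HTTP API (payload format 2.0 with simple responses) verifies it. A sketch; the header name and environment variable are assumptions, and in practice the secret would live in AWS Secrets Manager and be rotated:

    import os

    # Hypothetical environment variable that holds the shared secret, which
    # must match the custom origin header configured on the distribution.
    EXPECTED = os.environ["ORIGIN_VERIFY_SECRET"]

    def handler(event, context):
        # HTTP API payload format 2.0 delivers lowercase header names.
        supplied = event.get("headers", {}).get("x-origin-verify", "")
        # Simple-response format: allow the request only when the header matches.
        return {"isAuthorized": supplied == EXPECTED}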

Question 65

A company wants to use a grid system for a proprietary enterprise in-memory data store on top of AWS. This system can run on multiple server nodes in any Linux-based distribution. The system must be able to reconfigure the entire cluster every time a node is added or removed. When nodes are added or removed, an /etc/cluster/nodes.config file must be updated to list the IP addresses of the current node members of that cluster.

The company wants to automate the task of adding new nodes to a cluster.

What can a DevOps engineer do to meet these requirements?

Options:
A.

Use AWS OpsWorks Stacks to layer the server nodes of that cluster. Create a Chef recipe that populates the content of the /etc/cluster/nodes.config file and restarts the service by using the current members of the layer. Assign that recipe to the Configure lifecycle event.

B.

Put the nodes.config file in version control. Create an AWS CodeDeploy deployment configuration and deployment group based on an Amazon EC2 tag value for the cluster nodes. When adding a new node to the cluster, update the file with all tagged instances and commit the change in version control. Deploy the new file and restart the services.

C.

Create an Amazon S3 bucket and upload a version of the /etc/cluster/nodes.config file. Create a crontab script that polls for the S3 file and downloads it frequently. Use a process manager, such as Monit or systemd, to restart the cluster services when it detects that the file was modified. When adding a node to the cluster, edit the file to include the most recent members, and upload the new file to the S3 bucket.

D.

Create a user data script that lists all members of the cluster's current security group and automatically updates the /etc/cluster/nodes.config file whenever a new instance is added to the cluster.
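For context, the logic that option A's Configure-event recipe implements (in Chef, in practice) reduces to writing the current members' IP addresses into the config file and restarting the service. A Python sketch of that logic with a hypothetical layer ID; a real recipe would read the member list from the lifecycle event's node attributes instead of calling the OpsWorks Stacks API:

    import boto3

    opsworks = boto3.client("opsworks", region_name="us-east-1")

    # Hypothetical layer ID for the cluster's OpsWorks Stacks layer.
    layer_id = "11111111-2222-3333-4444-555555555555"
    instances = opsworks.describe_instances(LayerId=layer_id)["Instances"]
    ips = sorted(i["PrivateIp"] for i in instances if i.get("PrivateIp"))

    # Rewrite the membership file; the recipe would then restart the data
    # store service so that it picks up the new cluster membership.
    with open("/etc/cluster/nodes.config", "w") as f:
        f.write("\n".join(ips) + "\n")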

Question 66

A company runs several applications in the same AWS account. The applications send logs to Amazon CloudWatch.

A data analytics team needs to collect performance metrics and custom metrics from the applications. The analytics team needs to transform the metrics data before storing the data in an Amazon S3 bucket. The analytics team must automatically collect any new metrics that are added to the CloudWatch namespace.

Which solution will meet these requirements with the LEAST operational overhead?

Options:
A.

Configure a CloudWatch metric stream to include metrics from the application and the CloudWatch namespace. Configure the metric stream to deliver the metrics to an Amazon Data Firehose delivery stream. Configure the Firehose delivery stream to invoke an AWS Lambda function to transform the data. Configure the delivery stream to send the transformed data to the S3 bucket.

B.

Configure a CloudWatch metric stream to include all the metrics and to deliver the metrics to an Amazon Data Firehose delivery stream. Configure the Firehose delivery stream to invoke an AWS Lambda function to transform the data. Configure the delivery stream to send the transformed data to the S3 bucket.

C.

Configure metric filters for the CloudWatch logs to create custom metrics. Configure a CloudWatch metric stream to deliver the application metrics to the S3 bucket.

D.

Configure subscription filters on the application log groups to target an Amazon Data Firehose delivery stream. Configure the Firehose delivery stream to invoke an AWS Lambda function to transform the data. Configure the delivery stream to send the transformed data to the S3 bucket.
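For context, the transformation Lambda that options A and B attach to the Firehose delivery stream follows the standard Firehose record-transformation contract: each incoming record is decoded, reshaped, re-encoded, and returned with its recordId and a result. A sketch; the reshaping applied to each metric record is a placeholder:

    import base64
    import json

    def handler(event, context):
        output = []
        for record in event["records"]:
            # Metric stream records arrive base64-encoded.
            metric = json.loads(base64.b64decode(record["data"]))
            # Placeholder transformation: keep only a few fields.
            transformed = {
                "namespace": metric.get("namespace"),
                "name": metric.get("metric_name"),
                "value": metric.get("value"),
                "timestamp": metric.get("timestamp"),
            }
            output.append({
                "recordId": record["recordId"],
                "result": "Ok",
                "data": base64.b64encode(
                    (json.dumps(transformed) + "\n").encode()
                ).decode(),
            })
        return {"records": output}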

Question 67

A company uses AWS Organizations to manage multiple AWS accounts. The accounts are in an OU that has a policy attached to allow all actions. The company is migrating several Git repositories to a specified AWS CodeConnections-supported Git provider. The Git repositories manage AWS CloudFormation stacks for application infrastructure that the company deploys across multiple AWS Regions.

The company wants a DevOps team to integrate CodeConnections into the CloudFormation stacks. The DevOps team must ensure that company staff members can integrate only with the specified Git provider. The deployment process must be highly available across Regions.

Which combination of steps will meet these requirements? (Select THREE.)

Options:
A.

Add a new SCP statement to the OU that denies the CodeConnections CreateConnection action when the provider type is not the specified Git provider.

B.

Add a new SCP statement to the OU that allows the CodeConnections CreateConnection action when the provider type is the specified Git provider.

C.

Use CodeConnections to configure a single CodeConnections connection to each Git repository.

D.

Use CodeConnections to create a CodeConnections connection from each Region where the company operates to each Git repository.

E.

Use CodeConnections to create a CodeConnections repository link. Update each CloudFormation stack to sync from the Git repository.

F.

For each Git repository, create a pipeline in AWS CodePipeline that has the Git repository set as the source and a CloudFormation deployment stage.
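For illustration, the deny-style SCP in option A can be created and attached with boto3. A sketch that assumes GitHub as the approved provider and a hypothetical OU ID; the codeconnections:ProviderType condition key name is an assumption to verify against the provider's documentation:

    import json

    import boto3

    # Deny creating connections for any provider other than the approved one.
    # "GitHub" and the condition key name are assumptions to adapt.
    scp = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Deny",
                "Action": "codeconnections:CreateConnection",
                "Resource": "*",
                "Condition": {
                    "StringNotEquals": {"codeconnections:ProviderType": "GitHub"}
                },
            }
        ],
    }

    orgs = boto3.client("organizations")
    policy = orgs.create_policy(
        Name="RestrictGitProvider",
        Description="Allow connections only to the approved Git provider",
        Type="SERVICE_CONTROL_POLICY",
        Content=json.dumps(scp),
    )
    # Hypothetical OU ID for the OU that holds the accounts.
    orgs.attach_policy(
        PolicyId=policy["Policy"]["PolicySummary"]["Id"],
        TargetId="ou-xxxx-xxxxxxxx",
    )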

Question 68

An online retail company based in the United States plans to expand its operations to Europe and Asia in the next six months. Its product currently runs on Amazon EC2 instances behind an Application Load Balancer. The instances run in an Amazon EC2 Auto Scaling group across multiple Availability Zones. All data is stored in an Amazon Aurora database instance.

When the product is deployed in multiple Regions, the company wants a single product catalog across all Regions, but for compliance purposes, its customer information and purchases must be kept in each Region.

How should the company meet these requirements with the LEAST amount of application changes?

Options:
A.

Use Amazon Redshift for the product catalog and Amazon DynamoDB tables for the customer information and purchases.

B.

Use Amazon DynamoDB global tables for the product catalog and regional tables for the customer information and purchases.

C.

Use Aurora with read replicas for the product catalog and additional local Aurora instances in each Region for the customer information and purchases.

D.

Use Aurora for the product catalog and Amazon DynamoDB global tables for the customer information and purchases.
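For illustration, converting an existing product catalog table into the global table that option B describes is one UpdateTable call per replica Region, while the customer and purchase tables stay Regional. A sketch with hypothetical table and Region names:

    import boto3

    dynamodb = boto3.client("dynamodb", region_name="us-east-1")

    # Add a replica per expansion Region (global tables version 2019.11.21).
    for region in ("eu-west-1", "ap-northeast-1"):
        dynamodb.update_table(
            TableName="ProductCatalog",  # hypothetical table name
            ReplicaUpdates=[{"Create": {"RegionName": region}}],
        )
    # Customer and purchase tables would be created independently in each
    # Region and never replicated across Regions.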

Question 69

A company builds an application that uses an Application Load Balancer in front of Amazon EC2 instances that are in an Auto Scaling group. The application is stateless. The Auto Scaling group uses a custom AMI that is fully prebuilt. The EC2 instances do not have a custom bootstrapping process.

The AMI that the Auto Scaling group uses was recently deleted. The Auto Scaling group's scaling activities show failures because the AMI ID does not exist.

Which combination of steps should a DevOps engineer take to resolve this issue? (Select THREE.)

Options:
A.

Create a new launch template that uses the new AMI.

B.

Update the Auto Scaling group to use the new launch template.

C.

Reduce the Auto Scaling group's desired capacity to 0.

D.

Increase the Auto Scaling group's desired capacity by 1.

E.

Create a new AMI from a running EC2 instance in the Auto Scaling group.

F.

Create a new AMI by copying the most recent public AMI of the operating system that the EC2 instances use.
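For illustration, the recovery path that options E, A, and B describe can be scripted end to end with boto3. A sketch with hypothetical resource names and IDs:

    import boto3

    ec2 = boto3.client("ec2")
    autoscaling = boto3.client("autoscaling")

    # 1. Re-create the AMI from a healthy instance still running in the group.
    image = ec2.create_image(
        InstanceId="i-0123456789abcdef0",
        Name="app-ami-recovered",
    )

    # 2. Create a launch template that uses the new AMI.
    template = ec2.create_launch_template(
        LaunchTemplateName="app-template",
        LaunchTemplateData={
            "ImageId": image["ImageId"],
            "InstanceType": "m5.large",
        },
    )

    # 3. Point the Auto Scaling group at the new launch template.
    autoscaling.update_auto_scaling_group(
        AutoScalingGroupName="app-asg",
        LaunchTemplate={
            "LaunchTemplateId": template["LaunchTemplate"]["LaunchTemplateId"],
            "Version": "$Latest",
        },
    )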

Question 70

A company has a search application that has a web interface. The company uses Amazon CloudFront, Application Load Balancers (ALBs), and Amazon EC2 instances in an Auto Scaling group with a desired capacity of 3. The company uses prebaked AMIs. The application starts in 1 minute. The application queries an Amazon OpenSearch Service cluster. The application is deployed to multiple Availability Zones.

Because of compliance requirements, the application needs to have a disaster recovery (DR) environment in a separate AWS Region. The company wants to minimize the ongoing cost of the DR environment and requires an RTO and an RPO of under 30 minutes. The company has created an ALB in the DR Region.

Which solution will meet these requirements?

Options:
A.

Add the new ALB as an origin in the CloudFront distribution. Configure origin failover functionality. Copy the AMI to the DR Region. Create a launch template and an Auto Scaling group with a desired capacity of 0 in the DR Region. Create a new OpenSearch Service cluster in the DR Region. Set up cross-cluster replication for the cluster.

B.

Create a new CloudFront distribution in the DR Region and add the new ALB as an origin. Use Amazon Route 53 DNS for Regional failover. Copy the AMI to the DR Region. Create a launch template and an Auto Scaling group with a desired capacity of 0 in the DR Region. Reconfigure the OpenSearch Service cluster as a Multi-AZ with Standby deployment. Ensure that the standby nodes are in the DR Region.

C.

Create a new CloudFront distribution in the DR Region and add the new ALB as an origin. Use Amazon Route 53 DNS for Regional failover. Copy the AMI to the DR Region. Create a launch template and an Auto Scaling group with a desired capacity of 3 in the DR Region. Reconfigure the OpenSearch Service cluster as a Multi-AZ with Standby deployment. Ensure that the standby nodes are in the DR Region.

D.

Add the new ALB as an origin in the CloudFront distribution. Configure origin failover functionality. Copy the AMI to the DR Region. Create a launch template and an Auto Scaling group with a desired capacity of 3 in the DR Region. Create a new OpenSearch Service cluster in the DR Region. Set up cross-cluster replication for the cluster.
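For context, the pilot-light pieces that options A and B share (copying the AMI and keeping a zero-capacity Auto Scaling group in the DR Region) can be set up with boto3. A sketch with hypothetical IDs and Regions; keeping desired capacity at 0 keeps ongoing cost near zero, and the 1-minute application start time leaves ample room inside a 30-minute RTO:

    import boto3

    # Copy the prebaked AMI into the DR Region (IDs and Regions are hypothetical).
    ec2_dr = boto3.client("ec2", region_name="us-west-2")
    ec2_dr.copy_image(
        Name="search-app-dr",
        SourceImageId="ami-0123456789abcdef0",
        SourceRegion="us-east-1",
    )

    # Keep the DR Auto Scaling group empty until failover.
    autoscaling_dr = boto3.client("autoscaling", region_name="us-west-2")
    autoscaling_dr.update_auto_scaling_group(
        AutoScalingGroupName="search-app-dr-asg",
        MinSize=0,
        DesiredCapacity=0,
    )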