
Free Amazon Web Services SAA-C03 Practice Exam with Questions & Answers | Set: 5

Question 61

A solutions architect needs to implement a solution that can handle up to 5,000 messages per second. The solution must publish messages as events to multiple consumers. The messages are up to 500 KB in size. The consumers must be able to use multiple programming languages to consume the messages with minimal latency. The solution must retain published messages for more than 3 months and must enforce strict ordering of the messages.

Which solution will meet these requirements?

Options:
A.

Publish messages to an Amazon Kinesis Data Streams data stream. Enable enhanced fan-out. Ensure that consumers ingest the data stream by using dedicated throughput.

B.

Publish messages to an Amazon Simple Notification Service (Amazon SNS) topic. Ensure that consumers use an Amazon Simple Queue Service (Amazon SQS) FIFO queue to subscribe to the topic.

C.

Publish messages to Amazon EventBridge. Allow each consumer to create rules to deliver messages to the consumer's own target.

D.

Publish messages to an Amazon Simple Notification Service (Amazon SNS) topic. Ensure that consumers use Amazon Data Firehose to subscribe to the topic.
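
As context for option A, enhanced fan-out gives each registered consumer dedicated read throughput (up to 2 MB/s per shard), and Kinesis records can be up to 1 MB, which covers 500 KB messages. A minimal boto3 sketch, assuming a hypothetical stream named orders-stream in account 111122223333:

import boto3

kinesis = boto3.client("kinesis")

# Retain records beyond 3 months (Kinesis supports up to 365 days).
kinesis.increase_stream_retention_period(
    StreamName="orders-stream",
    RetentionPeriodHours=24 * 120,  # roughly 4 months
)

# Register a dedicated-throughput (enhanced fan-out) consumer.
kinesis.register_stream_consumer(
    StreamARN="arn:aws:kinesis:us-east-1:111122223333:stream/orders-stream",
    ConsumerName="analytics-service",
)

# Producers publish events; the partition key preserves per-key ordering.
kinesis.put_record(
    StreamName="orders-stream",
    Data=b'{"event": "order_created"}',
    PartitionKey="customer-42",
)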

Question 62

A company runs a NetApp storage array in an on-premises data center. The company wants to migrate the storage array to Amazon FSx for NetApp ONTAP. The company has a mix of NFS and SMB file shares with complex directory structures and over 60 million small files. The company has 10 Gbps of network bandwidth available. The company wants to optimize migration efficiency for the file system.

Which solution will meet these requirements?

Options:
A.

Use AWS DataSync with a bandwidth throttle. Use the All tiering policy.

B.

Provision an AWS Storage Gateway Volume Gateway. Configure a zero-ETL integration with the FSx for NetApp ONTAP file system.

C.

Set up NetApp SnapMirror replication between the on-premises array and the FSx for ONTAP file system.

D.

Use AWS Snowball Edge to perform an offline migration.
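
As context for option A, a DataSync migration is driven by a task between a source location and a destination location. A minimal boto3 sketch with hypothetical location ARNs (created beforehand with create_location_nfs and create_location_fsx_ontap); SnapMirror, by contrast, is configured on the ONTAP systems themselves rather than through an AWS API:

import boto3

datasync = boto3.client("datasync")

task = datasync.create_task(
    SourceLocationArn="arn:aws:datasync:us-east-1:111122223333:location/loc-0f1a2b3c4d5e6f7a8",
    DestinationLocationArn="arn:aws:datasync:us-east-1:111122223333:location/loc-08a7f6e5d4c3b2a1f",
    Name="netapp-to-fsx-ontap",
    Options={
        "TransferMode": "CHANGED",               # copy only changed files on reruns
        "VerifyMode": "ONLY_FILES_TRANSFERRED",  # lighter verification for many small files
    },
)
datasync.start_task_execution(TaskArn=task["TaskArn"])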

Question 63

A company runs an HPC workload that uses a 200-TB file system on premises. The company needs to migrate this data to Amazon FSx for Lustre. Internet capacity is 10 Mbps, and all data must be migrated within 30 days.

Which solution will meet this requirement?

Options:
A.

Use AWS DMS to transfer data into S3 and link FSx for Lustre to the bucket.

B.

Deploy AWS DataSync on premises and transfer directly into FSx for Lustre.

C.

Use AWS Storage Gateway Volume Gateway to move data into FSx for Lustre.

D.

Use an AWS Snowball Edge storage-optimized device to transfer data into S3 and link FSx for Lustre to the bucket.
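
As context for option D, once the Snowball Edge data lands in Amazon S3, the bucket can be linked to an FSx for Lustre file system through a data repository association so the migrated objects appear as files. A minimal boto3 sketch with hypothetical IDs:

import boto3

fsx = boto3.client("fsx")

fsx.create_data_repository_association(
    FileSystemId="fs-0123456789abcdef0",
    FileSystemPath="/migrated",                    # where the bucket mounts in the file system
    DataRepositoryPath="s3://hpc-migration-bucket",
    BatchImportMetaDataOnCreate=True,              # import S3 object metadata at creation
)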

Question 64

A company has a VPC with multiple private subnets that host multiple applications. The applications must not be accessible to the internet. However, the applications need to access multiple AWS services. The applications must not use public IP addresses to access the AWS services.

Which solution will meet these requirements MOST cost-effectively?

Options:
A.

Configure interface VPC endpoints for the required AWS services. Route traffic from the private subnets through the interface VPC endpoints.

B.

Deploy a NAT gateway in each private subnet. Route traffic from the private subnets through the NAT gateways.

C.

Deploy internet gateways in each private subnet. Route traffic from the private subnets through the internet gateways.

D.

Set up an AWS Direct Connect connection between the private subnets. Route traffic from the private subnets through the Direct Connect connection.
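
For option A, one interface endpoint is created per required service; with private DNS enabled, applications keep using the service's normal hostname while traffic stays on private IP addresses. A minimal boto3 sketch with hypothetical IDs, shown for Amazon SQS:

import boto3

ec2 = boto3.client("ec2")

ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.sqs",
    SubnetIds=["subnet-0123456789abcdef0"],
    SecurityGroupIds=["sg-0123456789abcdef0"],
    PrivateDnsEnabled=True,  # resolve sqs.us-east-1.amazonaws.com to the endpoint
)

For Amazon S3 and DynamoDB, gateway endpoints are available at no additional charge, which can reduce cost further.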

Question 65

A company collects data for temperature, humidity, and atmospheric pressure in cities across multiple continents. The average volume of data that the company collects from each site daily is 500 GB. Each site has a high-speed internet connection.

The company wants to aggregate the data from all these global sites as quickly as possible in a single Amazon S3 bucket. The solution must minimize operational complexity.

Which solution meets these requirements?

Options:
A.

Turn on S3 Transfer Acceleration on the destination S3 bucket. Use multipart uploads to directly upload site data to the destination S3 bucket.

B.

Upload the data from each site to an S3 bucket in the closest Region. Use S3 Cross-Region Replication to copy objects to the destination S3 bucket. Then remove the data from the origin S3 bucket.

C.

Schedule AWS Snowball Edge Storage Optimized device jobs daily to transfer data from each site to the closest Region. Use S3 Cross-Region Replication to copy objects to the destination S3 bucket.

D.

Upload the data from each site to an Amazon EC2 instance in the closest Region. Store the data in an Amazon Elastic Block Store (Amazon EBS) volume. At regular intervals, take an EBS snapshot and copy it to the Region that contains the destination S3 bucket. Restore the EBS volume in that Region.
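
Option A maps directly onto the SDK: acceleration is enabled once on the bucket, clients then target the accelerate endpoint, and boto3's managed transfer switches to multipart uploads above a configurable threshold. A minimal sketch with a hypothetical bucket name:

import boto3
from boto3.s3.transfer import TransferConfig
from botocore.config import Config

# One-time setup: enable Transfer Acceleration on the destination bucket.
boto3.client("s3").put_bucket_accelerate_configuration(
    Bucket="global-weather-data",
    AccelerateConfiguration={"Status": "Enabled"},
)

# Each site uploads through the accelerate endpoint.
s3 = boto3.client("s3", config=Config(s3={"use_accelerate_endpoint": True}))
s3.upload_file(
    "readings-2024-06-01.parquet",
    "global-weather-data",
    "site-tokyo/readings-2024-06-01.parquet",
    Config=TransferConfig(multipart_threshold=64 * 1024 * 1024),  # multipart above 64 MB
)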

Question 66

A software company needs to upgrade a critical web application. The application is hosted on an Amazon EC2 instance in a public subnet, and the same instance runs a MySQL database. The application's DNS records are published in an Amazon Route 53 zone.

A solutions architect must reconfigure the application to be scalable and highly available. The solutions architect must also reduce MySQL read latency.

Which combination of solutions will meet these requirements? (Select TWO.)

Options:
A.

Launch a second EC2 instance in a second AWS Region. Use a Route 53 failover routing policy to redirect the traffic to the second EC2 instance.

B.

Create and configure an Auto Scaling group to launch private EC2 instances in multiple Availability Zones. Add the instances to a target group behind a new Application Load Balancer.

C.

Migrate the database to an Amazon Aurora MySQL cluster. Create the primary DB instance and reader DB instance in separate Availability Zones.

D.

Create and configure an Auto Scaling group to launch private EC2 instances in multiple AWS Regions. Add the instances to a target group behind a new Application Load Balancer.

E.

Migrate the database to an Amazon Aurora MySQL cluster with cross-Region read replicas.
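
As context for options B and C together, the Auto Scaling group is attached to the load balancer's target group, and the application sends read queries to the Aurora cluster's reader endpoint. A minimal boto3 sketch with hypothetical names and ARNs:

import boto3

autoscaling = boto3.client("autoscaling")

# Send ALB traffic to whatever instances the Auto Scaling group launches
# across the Availability Zones.
autoscaling.attach_load_balancer_target_groups(
    AutoScalingGroupName="web-asg",
    TargetGroupARNs=[
        "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/web-tg/0123456789abcdef",
    ],
)

# Read latency then drops by pointing read queries at the Aurora reader
# endpoint (e.g. my-cluster.cluster-ro-xxxx.us-east-1.rds.amazonaws.com).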

Question 67

A global ecommerce company is designing a three-tier application on AWS. The application includes a web tier that serves static content, an application tier that handles business logic, and a database tier that stores product information and user data. The application interacts with a relational database.

The company needs a highly available application architecture to serve global users with low latency, with the least operational overhead.

Which solution will meet these requirements?

Options:
A.

Deploy Amazon EC2 instances in an Auto Scaling group for the application tier and web tier in a single AWS Region. Use an Application Load Balancer to distribute web traffic. Use an Amazon RDS database and Multi-AZ deployments for the database tier.

B.

Set up an Amazon CloudFront distribution that uses an Amazon S3 bucket as the origin. Use Amazon Elastic Container Service (Amazon ECS) containers on AWS Fargate to deploy the application tier to each AWS Region where the company operates. Use an Amazon Aurora global database for the database tier.

C.

Use an Amazon S3 bucket to store the static web content. Use Amazon EC2 Auto Scaling and EC2 Spot Instances for the application tier. Use Amazon RDS for MySQL with read replicas for the database tier. Use AWS Database Migration Service (AWS DMS) to replicate data to secondary AWS Regions.

D.

Use an Amazon S3 bucket to store static web content. Use AWS Lambda functions to handle serverless backend logic in the application tier. Use Amazon API Gateway to invoke the Lambda functions for web requests. Use an Amazon DynamoDB database for the database tier. Deploy the DynamoDB database across multiple AWS Regions.
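
As context for option B's database tier, an Aurora global database is created from an existing primary cluster and then extended with read-only secondary clusters in other Regions. A minimal boto3 sketch with hypothetical identifiers:

import boto3

# Promote the existing Aurora MySQL cluster to a global database.
rds_primary = boto3.client("rds", region_name="us-east-1")
rds_primary.create_global_cluster(
    GlobalClusterIdentifier="ecommerce-global",
    SourceDBClusterIdentifier="arn:aws:rds:us-east-1:111122223333:cluster:ecommerce-primary",
)

# Add a read-only secondary cluster in another Region for low-latency reads.
rds_secondary = boto3.client("rds", region_name="eu-west-1")
rds_secondary.create_db_cluster(
    DBClusterIdentifier="ecommerce-eu",
    Engine="aurora-mysql",
    GlobalClusterIdentifier="ecommerce-global",
)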

Question 68

An application uses an Amazon SQS queue and two AWS Lambda functions. One of the Lambda functions pushes messages to the queue, and the other function polls the queue and receives queued messages.

A solutions architect needs to ensure that only the two Lambda functions can write to or read from the queue.

Which solution will meet these requirements?

Options:
A.

Attach an IAM policy to the SQS queue that grants the Lambda function principals read and write access. Attach an IAM policy to the execution role of each Lambda function that denies all access to the SQS queue except for the principal of each function.

B.

Attach a resource-based policy to the SQS queue to deny read and write access to the queue for any entity except the principal of each Lambda function. Attach an IAM policy to the execution role of each Lambda function that allows read and write access to the queue.

C.

Attach a resource-based policy to the SQS queue that grants the Lambda function principals read and write access to the queue. Attach an IAM policy to the execution role of each Lambda function that allows read and write access to the queue.

D.

Attach a resource-based policy to the SQS queue to deny all access to the queue. Attach an IAM policy to the execution role of each Lambda function that grants read and write access to the queue.
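
The pattern in option C pairs a resource-based policy on the queue with matching allows on each function's execution role. A minimal boto3 sketch of the queue-policy half, assuming hypothetical role names in account 111122223333:

import json
import boto3

sqs = boto3.client("sqs")
queue_url = sqs.get_queue_url(QueueName="app-queue")["QueueUrl"]
queue_arn = "arn:aws:sqs:us-east-1:111122223333:app-queue"

# Grant the producer function write access and the consumer function
# read access; no other principals are granted anything.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111122223333:role/producer-fn-role"},
            "Action": "sqs:SendMessage",
            "Resource": queue_arn,
        },
        {
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111122223333:role/consumer-fn-role"},
            "Action": ["sqs:ReceiveMessage", "sqs:DeleteMessage", "sqs:GetQueueAttributes"],
            "Resource": queue_arn,
        },
    ],
}

sqs.set_queue_attributes(QueueUrl=queue_url, Attributes={"Policy": json.dumps(policy)})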

Question 69

A company is migrating an online marketplace application from a mainframe system to an Auto Scaling group of Amazon EC2 instances. The EC2 instances access an Amazon Aurora cluster. The application requires a scalable, persistent caching solution to store the results of in-progress transactions and SQL queries.

Which solution will meet these requirements?

Options:
A.

Use an Amazon ElastiCache (Redis OSS) cluster to serve transaction and query results.

B.

Use an Amazon CloudFront distribution with an Amazon S3 bucket as the origin to cache the transactions. Add an Amazon EC2 instance store volume to the EC2 instances for query result caching.

C.

Use an Amazon ElastiCache (Memcached) cluster to serve transaction and query results.

D.

Use an Amazon ElastiCache (Redis OSS) cluster to cache the transactions. Add an Amazon EC2 instance store volume to the EC2 instances for query result caching.
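
A cache such as ElastiCache (Redis OSS) is typically used in a cache-aside pattern; unlike Memcached, Redis OSS supports replication and snapshot persistence, which is what makes it suitable when cached results must survive node failures. A minimal sketch using the third-party redis-py client against a hypothetical cluster endpoint:

import hashlib
import json

import redis  # third-party redis-py package

cache = redis.Redis(
    host="marketplace-cache.abc123.ng.0001.use1.cache.amazonaws.com",  # hypothetical endpoint
    port=6379,
)

def get_query_result(sql, run_query):
    """Cache-aside: return a cached result, or run the query and cache it."""
    key = "query:" + hashlib.sha256(sql.encode()).hexdigest()
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)
    result = run_query(sql)
    cache.setex(key, 300, json.dumps(result))  # expire after 5 minutes
    return result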

Question 70

A company is migrating a distributed application to AWS. The application serves variable workloads. The legacy platform consists of a primary server that coordinates jobs across multiple compute nodes. The company wants to modernize the application with a solution that maximizes resiliency and scalability.

How should a solutions architect design the architecture to meet these requirements?

Options:
A.

Configure an Amazon Simple Queue Service (Amazon SQS) queue as a destination for the jobs. Implement the compute nodes with Amazon EC2 instances that are managed in an Auto Scaling group. Configure EC2 Auto Scaling to use scheduled scaling.

B.

Configure an Amazon Simple Queue Service (Amazon SQS) queue as a destination for the jobs. Implement the compute nodes with Amazon EC2 instances that are managed in an Auto Scaling group. Configure EC2 Auto Scaling based on the size of the queue.

C.

Implement the primary server and the compute nodes with Amazon EC2 instances that are managed in an Auto Scaling group. Configure AWS CloudTrail as a destination for the jobs. Configure EC2 Auto Scaling based on the load on the primary server.

D.

Implement the primary server and the compute nodes with Amazon EC2 instances that are managed in an Auto Scaling group. Configure Amazon EventBridge as a destination for the jobs. Configure EC2 Auto Scaling based on the load on the compute nodes.
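
Option B's scaling trigger can be expressed as a target tracking policy over the queue's CloudWatch metric. A minimal boto3 sketch with hypothetical names; AWS guidance often recommends tracking a derived backlog-per-instance metric instead of raw queue depth, but the raw metric keeps the sketch short:

import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.put_scaling_policy(
    AutoScalingGroupName="compute-nodes",
    PolicyName="scale-on-queue-depth",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "CustomizedMetricSpecification": {
            "Namespace": "AWS/SQS",
            "MetricName": "ApproximateNumberOfMessagesVisible",
            "Dimensions": [{"Name": "QueueName", "Value": "jobs-queue"}],
            "Statistic": "Average",
        },
        # Add or remove compute nodes to hold the visible backlog near 100.
        "TargetValue": 100.0,
    },
)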

Question 71

A company wants to create an API to authorize users by using JSON Web Tokens (JWTs). The company needs to support dynamic access to multiple AWS services by using path-based routing.

Which solution will meet these requirements?

Options:
A.

Deploy an Application Load Balancer behind an Amazon API Gateway REST API. Configure IAM authorization.

B.

Deploy an Application Load Balancer behind an Amazon API Gateway HTTP API. Use Amazon Cognito for authorization.

C.

Deploy a Network Load Balancer behind an Amazon API Gateway REST API. Use an AWS Lambda function as a custom authorizer.

D.

Deploy a Network Load Balancer behind an Amazon API Gateway HTTP API. Use Amazon Cognito for authorization.
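
API Gateway HTTP APIs include a built-in JWT authorizer that validates tokens issued by an OpenID Connect provider such as Amazon Cognito, the mechanism behind options B and D. A minimal boto3 sketch with hypothetical IDs:

import boto3

apigw = boto3.client("apigatewayv2")

apigw.create_authorizer(
    ApiId="a1b2c3d4e5",
    Name="cognito-jwt",
    AuthorizerType="JWT",
    IdentitySource=["$request.header.Authorization"],  # where the token arrives
    JwtConfiguration={
        "Audience": ["example-app-client-id"],
        "Issuer": "https://cognito-idp.us-east-1.amazonaws.com/us-east-1_EXAMPLE",
    },
)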

Question 72

A company uses an Amazon EC2 instance to handle requests for a public web application. The application routes traffic to multiple application pages by using URL paths.

The company begins to experience large surges of traffic at unpredictable times. The traffic surges cause the web application to experience issues and to occasionally become unavailable.

The company needs to make the web application more scalable to handle sudden increases in traffic.

Which solution will meet this requirement?

Options:
A.

Create an Amazon Machine Image (AMI) of the web application instance. Use the AMI to create an Auto Scaling group of EC2 instances that has a minimum capacity of two. Create an Application Load Balancer. Set the Auto Scaling group as the target group.

B.

Create a Docker image of the application. Use Amazon Elastic Container Service (Amazon ECS) to create an Auto Scaling ECS cluster. Enable managed scaling. Create a Network Load Balancer. Set the ECS cluster as the target group.

C.

Create an Amazon Machine Image (AMI) of the web application instance. Use the AMI to create two more web application instances in separate Availability Zones. Update the website DNS record to refer to all three instances.

D.

Create an Application Load Balancer (ALB). Set the web application instance as the target. Create an Amazon CloudWatch alarm based on ALB traffic metrics. Configure the alert to activate when traffic spikes.
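
Because the application routes traffic by URL path, an ALB would carry path-based listener rules in front of the target groups. A minimal boto3 sketch with hypothetical ARNs:

import boto3

elbv2 = boto3.client("elbv2")

# Forward /checkout/* requests to a dedicated target group; other paths
# fall through to the listener's default action.
elbv2.create_rule(
    ListenerArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:listener/app/web-alb/0123456789abcdef/fedcba9876543210",
    Priority=10,
    Conditions=[
        {"Field": "path-pattern", "PathPatternConfig": {"Values": ["/checkout/*"]}},
    ],
    Actions=[
        {
            "Type": "forward",
            "TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/checkout-tg/0123456789abcdef",
        },
    ],
)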

Question 73

A company has a web application that stores user transactions in an Amazon DynamoDB table. To comply with regulations, the company must retain a copy of user transaction data for 7 years.

Which solution will meet these requirements with the LEAST operational overhead?

Options:
A.

Use DynamoDB point-in-time recovery to back up the table continuously.

B.

Use AWS Backup to create backup schedules and retention policies for the table.

C.

Create an on-demand backup of the table by using DynamoDB. Store the backup in an Amazon S3 bucket. Set an S3 Lifecycle configuration for the S3 bucket.

D.

Create an Amazon EventBridge rule to invoke an AWS Lambda function. Configure the Lambda function to back up the table and to store the backup in an Amazon S3 bucket. Set an S3 Lifecycle configuration for the S3 bucket.
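
Option B's schedule and retention live in a single AWS Backup plan. A minimal boto3 sketch with hypothetical names, using roughly 2,557 days for 7 years:

import boto3

backup = boto3.client("backup")

plan = backup.create_backup_plan(
    BackupPlan={
        "BackupPlanName": "transactions-7-year-retention",
        "Rules": [
            {
                "RuleName": "daily-backup",
                "TargetBackupVaultName": "Default",
                "ScheduleExpression": "cron(0 5 * * ? *)",  # daily at 05:00 UTC
                "Lifecycle": {"DeleteAfterDays": 2557},     # retain about 7 years
            },
        ],
    },
)

# Assign the DynamoDB table to the plan.
backup.create_backup_selection(
    BackupPlanId=plan["BackupPlanId"],
    BackupSelection={
        "SelectionName": "user-transactions-table",
        "IamRoleArn": "arn:aws:iam::111122223333:role/service-role/AWSBackupDefaultServiceRole",
        "Resources": ["arn:aws:dynamodb:us-east-1:111122223333:table/UserTransactions"],
    },
)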

Question 74

A medical company wants to perform transformations on a large amount of clinical trial data that comes from several customers. The company must extract the data from a relational database that contains the customer data. Then the company will transform the data by using a series of complex rules. The company will load the data to Amazon S3 when the transformations are complete.

All data must be encrypted where it is processed before the company stores the data in Amazon S3. All data must be encrypted by using customer-specific keys.

Which solution will meet these requirements with the LEAST amount of operational effort?

Options:
A.

Create one AWS Glue job for each customer. Attach a security configuration to each job that uses server-side encryption with Amazon S3 managed keys (SSE-S3) to encrypt the data.

B.

Create one Amazon EMR cluster for each customer. Attach a security configuration to each cluster that uses client-side encryption with a custom client-side root key (CSE-Custom) to encrypt the data.

C.

Create one AWS Glue job for each customer. Attach a security configuration to each job that uses client-side encryption with AWS KMS managed keys (CSE-KMS) to encrypt the data.

D.

Create one Amazon EMR cluster for each customer. Attach a security configuration to each cluster that uses server-side encryption with AWS KMS keys (SSE-KMS) to encrypt the data.
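
In AWS Glue, per-customer encryption is carried by a security configuration that is attached to each customer's job. A minimal boto3 sketch with a hypothetical per-customer KMS key; the CreateSecurityConfiguration API exposes SSE-KMS and SSE-S3 as its S3 encryption modes, so SSE-KMS is shown here to illustrate attaching a per-customer key:

import boto3

glue = boto3.client("glue")

# One security configuration per customer, each pointing at that
# customer's own KMS key.
glue.create_security_configuration(
    Name="customer-acme-encryption",
    EncryptionConfiguration={
        "S3Encryption": [
            {
                "S3EncryptionMode": "SSE-KMS",
                "KmsKeyArn": "arn:aws:kms:us-east-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab",
            },
        ],
    },
)

# The configuration is then referenced by name through the
# SecurityConfiguration parameter when the customer's job is created.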

Question 75

A company needs a solution to back up and protect critical AWS resources. The company needs to regularly take backups of several Amazon EC2 instances and Amazon RDS for PostgreSQL databases. To ensure high resiliency, the company must have the ability to validate and restore backups.

Which solution will meet these requirements with the LEAST operational overhead?

Options:
A.

Use AWS Backup to create a backup schedule for the resources. Use AWS Backup to create a restoration testing plan for the required resources.

B.

Take snapshots of the EC2 instances and RDS DB instances. Create AWS Batch jobs to validate and restore the snapshots.

C.

Create a custom AWS Lambda function to take snapshots of the EC2 instances and RDS DB instances. Create a second Lambda function to restore the snapshots periodically to validate the backups.

D.

Take snapshots of the EC2 instances and RDS DB instances. Create an AWS Lambda function to restore the snapshots periodically to validate the backups.
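
Option A combines a backup plan (as in the previous question) with AWS Backup restore testing, which periodically restores recovery points to validate them. A minimal boto3 sketch with hypothetical names, based on the CreateRestoreTestingPlan API:

import boto3

backup = boto3.client("backup")

backup.create_restore_testing_plan(
    RestoreTestingPlan={
        "RestoreTestingPlanName": "weekly-restore-validation",
        "ScheduleExpression": "cron(0 6 ? * 1 *)",  # weekly at 06:00 UTC
        "RecoveryPointSelection": {
            "Algorithm": "LATEST_WITHIN_WINDOW",
            "IncludeVaults": ["*"],
            "RecoveryPointTypes": ["SNAPSHOT"],
        },
    },
)

# Restore testing selections (create_restore_testing_selection) then map
# the EC2 and RDS resources into the plan.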