We do not compromise on the bright future of our respected customers. PassExam4Sure takes its clients' success seriously, and we make sure that our SAA-C03 exam dumps get you over the line. If our exam questions and answers do not help you with the exam and you fail it anyway, we will happily return your money with a full 100% refund.
100% Real Questions
We verify the authenticity of our Amazon SAA-C03 exam dump PDFs, which contain 100% real, exam-oriented questions. Our questions and answers are drawn from the latest and most recent exams, the very exams you are going to sit. Our extensive library of Amazon SAA-C03 exam dumps will push you forward on the path to success.
Security & Privacy
Free Amazon SAA-C03 demo papers are available for download so that customers can verify the authenticity of our exam paper samples and see exactly what they will be getting from PassExam4Sure. Many visitors try this process every day before purchasing the Amazon SAA-C03 exam dumps.
Last Week SAA-C03 Exam Results
242
Customers Passed Amazon SAA-C03 Exam
96%
Average Score In Real SAA-C03 Exam
95%
Questions Came From Our SAA-C03 Dumps
Authentic SAA-C03 Exam Dumps
Prepare for Amazon SAA-C03 Exam like a Pro
PassExam4Sure is known for its top-notch service: the most helpful, accurate, and up-to-date material for the Amazon SAA-C03 exam in PDF format. Our SAA-C03 dumps are reviewed regularly for content updates, format changes, and new questions drawn from recently conducted exams. Our highly qualified professionals guarantee that you will pass your exam with at least 85% marks overall. PassExam4Sure Amazon SAA-C03 ProvenDumps is the best possible way to prepare for and pass your certification exam.
Easy Access and Friendly UI
PassExam4Sure is your best buddy, providing you with the latest and most accurate material without any hidden charges or pointless scrolling. We value your time and work hard to give you the best possible PDF formatting, with accurate, to-the-point, and vital information about Amazon SAA-C03. PassExam4Sure is your 24/7 guide, and our exam material is curated so that it reads easily on smartphones, tablets, and laptop PCs.
PassExam4Sure - The Undisputed King for Preparing SAA-C03 Exam
We focus squarely on providing you with the best course material for Amazon SAA-C03 so that you can prepare for your exam like a pro and get certified in no time. Our practice exam material gives you the confidence to sit down, relax, and take the exam as if in a real exam environment. If you truly want success, simply sign up for the PassExam4Sure Amazon SAA-C03 exam material. Millions of people all over the globe have completed their certification using PassExam4Sure exam dumps for Amazon SAA-C03.
100% Authentic Amazon SAA-C03 – Study Guide (Updated 2024)
Our Amazon SAA-C03 exam questions and answers are reviewed on a weekly basis. A team of highly qualified Amazon professionals, who themselves cleared the exams using our certification content, analyzes our most recent exam dumps. The team makes sure that you get the latest and greatest exam content to practice with so you can polish your skills the right way. All you have to do now is practice, and practice a lot, by taking our demo question exams and making sure you are well prepared for the final examination. The Amazon SAA-C03 test will test you and play with your mind and psychology, so be prepared for what is coming. PassExam4Sure is here to help and guide you through every step of your preparation for glory. You can check out our free downloadable demo content if you feel like testing us before investing your hard-earned money. PassExam4Sure guarantees your success in the Amazon SAA-C03 exam because we have the newest and most authentic exam material, which cannot be found anywhere else on the internet.
Amazon SAA-C03 Sample Questions
Question # 1
A company is developing a mobile game that streams score updates to a backend processor and then posts results on a leaderboard. A solutions architect needs to design a solution that can handle large traffic spikes, process the mobile game updates in order of receipt, and store the processed updates in a highly available database. The company also wants to minimize the management overhead required to maintain the solution.
What should the solutions architect do to meet these requirements?
A. Push score updates to Amazon Kinesis Data Streams. Process the updates in Kinesis Data Streams with AWS Lambda. Store the processed updates in Amazon DynamoDB.
B. Push score updates to Amazon Kinesis Data Streams. Process the updates with a fleet of Amazon EC2 instances set up for Auto Scaling. Store the processed updates in Amazon Redshift.
C. Push score updates to an Amazon Simple Notification Service (Amazon SNS) topic. Subscribe an AWS Lambda function to the SNS topic to process the updates. Store the processed updates in a SQL database running on Amazon EC2.
D. Push score updates to an Amazon Simple Queue Service (Amazon SQS) queue. Use a fleet of Amazon EC2 instances with Auto Scaling to process the updates in the SQS queue. Store the processed updates in an Amazon RDS Multi-AZ DB instance.
Answer: A
Explanation: Amazon Kinesis Data Streams is a scalable and reliable service that can
ingest, buffer, and process streaming data in real-time. It can handle large traffic spikes
and preserve the order of the incoming data records. AWS Lambda is a serverless
compute service that can process the data streams from Kinesis Data Streams without
requiring any infrastructure management. It can also scale automatically to match the
throughput of the data stream. Amazon DynamoDB is a fully managed, highly available,
and fast NoSQL database that can store the processed updates from Lambda. It can also
handle high write throughput and provide consistent performance. By using these services,
the solutions architect can design a solution that meets the requirements of the company
with the least operational overhead.
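For illustration, a minimal Lambda handler for option A might look like the following sketch in Python with boto3; the table name "Leaderboard" and the record fields are hypothetical assumptions, not part of the question.

import base64
import json

import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("Leaderboard")  # hypothetical table name

def handler(event, context):
    # Lambda polls the Kinesis shard and delivers records in order of receipt.
    for record in event["Records"]:
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
        table.put_item(
            Item={
                "player_id": payload["player_id"],  # assumed partition key
                "score": payload["score"],
            }
        )

The Lambda event source mapping for the Kinesis stream handles polling, batching, and retries, so no servers need to be managed.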
Question # 2
A company runs an SMB file server in its data center. The file server stores large files that the company frequently accesses for up to 7 days after the file creation date. After 7 days, the company needs to be able to access the files with a maximum retrieval time of 24 hours.
Which solution will meet these requirements?
A. Use AWS DataSync to copy data that is older than 7 days from the SMB file server to AWS.
B. Create an Amazon S3 File Gateway to increase the company's storage space. Create an S3 Lifecycle policy to transition the data to S3 Glacier Deep Archive after 7 days.
C. Create an Amazon FSx File Gateway to increase the company's storage space. Create an Amazon S3 Lifecycle policy to transition the data after 7 days.
D. Configure access to Amazon S3 for each user. Create an S3 Lifecycle policy to transition the data to S3 Glacier Flexible Retrieval after 7 days.
Answer: B
Explanation:
Amazon S3 File Gateway is a service that provides a file-based interface to Amazon S3,
which appears as a network file share. It enables you to store and retrieve Amazon S3
objects through standard file storage protocols such as SMB. S3 File Gateway can also
cache frequently accessed data locally for low-latency access. S3 Lifecycle policy is a
feature that allows you to define rules that automate the management of your objects
throughout their lifecycle. You can use S3 Lifecycle policy to transition objects to different
storage classes based on their age and access patterns. S3 Glacier Deep Archive is a
storage class that offers the lowest cost for long-term data archiving, with a retrieval time of
12 hours or 48 hours. This solution will meet the requirements, as it allows the company to
store large files in S3 with SMB file access, and to move the files to S3 Glacier Deep
Archive after 7 days for cost savings and compliance.
References: 1 provides an overview of Amazon S3 File Gateway and its benefits.
2 explains how to use S3 Lifecycle policy to manage object storage lifecycle.
3 describes the features and use cases of S3 Glacier Deep Archive storage class.
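As a rough illustration of the lifecycle rule in option B, a sketch using boto3 follows; the bucket name is a hypothetical placeholder.

import boto3

s3 = boto3.client("s3")
s3.put_bucket_lifecycle_configuration(
    Bucket="company-file-share",  # hypothetical bucket behind the S3 File Gateway
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "deep-archive-after-7-days",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # apply the rule to every object
                "Transitions": [{"Days": 7, "StorageClass": "DEEP_ARCHIVE"}],
            }
        ]
    },
)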
Question # 3
A company has an organization in AWS Organizations that has all features enabled. The company requires that all API calls and logins in any existing or new AWS account must be audited. The company needs a managed solution to prevent additional work and to minimize costs. The company also needs to know when any AWS account is not compliant with the AWS Foundational Security Best Practices (FSBP) standard.
Which solution will meet these requirements with the LEAST operational overhead?
A. Deploy an AWS Control Tower environment in the Organizations management account. Enable AWS Security Hub and AWS Control Tower Account Factory in the environment.
B. Deploy an AWS Control Tower environment in a dedicated Organizations member account. Enable AWS Security Hub and AWS Control Tower Account Factory in the environment.
C. Use AWS Managed Services (AMS) Accelerate to build a multi-account landing zone (MALZ). Submit an RFC to self-service provision Amazon GuardDuty in the MALZ.
D. Use AWS Managed Services (AMS) Accelerate to build a multi-account landing zone (MALZ). Submit an RFC to self-service provision AWS Security Hub in the MALZ.
Answer: A
Explanation: AWS Control Tower is a fully managed service that simplifies the setup and
governance of a secure, compliant, multi-account AWS environment. It establishes a
landing zone that is based on best-practices blueprints, and it enables governance using
controls you can choose from a pre-packaged list. The landing zone is a well-architected,
multi-account baseline that follows AWS best practices. Controls implement governance
rules for security, compliance, and operations. AWS Security Hub is a service that provides
a comprehensive view of your security posture across your AWS accounts. It aggregates,
organizes, and prioritizes security alerts and findings from multiple AWS services, such as
IAM Access Analyzer, as well as from AWS Partner solutions. AWS Security Hub
continuously monitors your environment using automated compliance checks based on the
AWS best practices and industry standards, such as the AWS Foundational Security Best
Practices (FSBP) standard. AWS Control Tower Account Factory is a feature that
automates the provisioning of new AWS accounts that are preconfigured to meet your
business, security, and compliance requirements. By deploying an AWS Control Tower
environment in the Organizations management account, you can leverage the existing
organization structure and policies, and enable AWS Security Hub and AWS Control Tower
Account Factory in the environment. This way, you can audit all API calls and logins in any
existing or new AWS account, monitor the compliance status of each account with the FSBP standard, and provision new accounts with ease and consistency. This solution
meets the requirements with the least operational overhead, as you do not need to manage
any infrastructure, perform any data migration, or submit any requests for changes.
References:
AWS Control Tower
AWS Security Hub
AWS Control Tower Account Factory
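As a hedged sketch of the Security Hub side of option A, the FSBP standard can also be enabled through the API; the standard ARN below is an assumption and varies by Region.

import boto3

securityhub = boto3.client("securityhub", region_name="us-east-1")
fsbp_arn = (
    "arn:aws:securityhub:us-east-1::standards/"
    "aws-foundational-security-best-practices/v/1.0.0"  # assumed ARN format
)
securityhub.batch_enable_standards(
    StandardsSubscriptionRequests=[{"StandardsArn": fsbp_arn}]
)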
Question # 4
A solutions architect is designing a user authentication solution for a company. The solution must invoke two-factor authentication for users that log in from inconsistent geographical locations, IP addresses, or devices. The solution must also be able to scale up to accommodate millions of users.
Which solution will meet these requirements?
A. Configure Amazon Cognito user pools for user authentication. Enable the risk-based adaptive authentication feature with multi-factor authentication (MFA).
B. Configure Amazon Cognito identity pools for user authentication. Enable multi-factor authentication (MFA).
C. Configure AWS Identity and Access Management (IAM) users for user authentication. Attach an IAM policy that allows the AllowManageOwnUserMFA action.
D. Configure AWS IAM Identity Center (AWS Single Sign-On) authentication for user authentication. Configure the permission sets to require multi-factor authentication (MFA).
Answer: A
Explanation: Amazon Cognito user pools provide a secure and scalable user directory for
user authentication and management. User pools support various authentication methods,
such as username and password, email and password, phone number and password, and
social identity providers. User pools also support multi-factor authentication (MFA), which
adds an extra layer of security by requiring users to provide a verification code or a
biometric factor in addition to their credentials. User pools can also enable risk-based
adaptive authentication, which dynamically adjusts the authentication challenge based on
the risk level of the sign-in attempt. For example, if a user tries to sign in from an unfamiliar
device or location, the user pool can require a stronger authentication factor, such as SMS
or email verification code. This feature helps to protect user accounts from unauthorized
access and reduce the friction for legitimate users. User pools can scale up to millions of
users and integrate with other AWS services, such as Amazon SNS, Amazon SES, AWS
Lambda, and AWS KMS.
Amazon Cognito identity pools provide a way to federate identities from multiple identity
providers, such as user pools, social identity providers, and corporate identity providers.
Identity pools allow users to access AWS resources with temporary, limited-privilege
credentials. Identity pools do not provide user authentication or management features,
such as MFA or adaptive authentication. Therefore, option B is not correct.
AWS Identity and Access Management (IAM) is a service that helps to manage access to
AWS resources. IAM users are entities that represent people or applications that need to
interact with AWS. IAM users can be authenticated with a password or an access key. IAM
users can also enable MFA for their own accounts, by using the
AllowManageOwnUserMFA action in an IAM policy. However, IAM users are not suitable
for user authentication for web or mobile applications, as they are intended for
administrative purposes. IAM users also do not support adaptive authentication based on
risk factors. Therefore, option C is not correct.
AWS IAM Identity Center (AWS Single Sign-On) is a service that enables users to sign in
to multiple AWS accounts and applications with a single set of credentials. AWS SSO
supports various identity sources, such as AWS SSO directory, AWS Managed Microsoft
AD, and external identity providers. AWS SSO also supports MFA for user authentication,
which can be configured in the permission sets that define the level of access for each
user. However, AWS SSO does not support adaptive authentication based on risk factors.
Therefore, option D is not correct.
References:
Amazon Cognito User Pools
Adding Multi-Factor Authentication (MFA) to a User Pool
Risk-Based Adaptive Authentication
Amazon Cognito Identity Pools
IAM Users
Enabling MFA Devices
AWS Single Sign-On
How AWS SSO Works
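A hedged sketch of option A's risk configuration using boto3 follows; the user pool ID is a placeholder, and the pool is assumed to already have advanced security features (which power adaptive authentication) turned on.

import boto3

cognito = boto3.client("cognito-idp")
cognito.set_risk_configuration(
    UserPoolId="us-east-1_EXAMPLE",  # hypothetical user pool ID
    AccountTakeoverRiskConfiguration={
        "Actions": {
            # Escalate the challenge as the sign-in risk level rises.
            "LowAction": {"Notify": False, "EventAction": "NO_ACTION"},
            "MediumAction": {"Notify": False, "EventAction": "MFA_IF_CONFIGURED"},
            "HighAction": {"Notify": False, "EventAction": "MFA_REQUIRED"},
        }
    },
)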
Question # 5
A solutions architect needs to design the architecture for an application that a vendor provides as a Docker container image. The container needs 50 GB of storage available for temporary files. The infrastructure must be serverless.
Which solution meets these requirements with the LEAST operational overhead?
A. Create an AWS Lambda function that uses the Docker container image with an Amazon S3 mounted volume that has more than 50 GB of space.
B. Create an AWS Lambda function that uses the Docker container image with an Amazon Elastic Block Store (Amazon EBS) volume that has more than 50 GB of space.
C. Create an Amazon Elastic Container Service (Amazon ECS) cluster that uses the AWS Fargate launch type. Create a task definition for the container image with an Amazon Elastic File System (Amazon EFS) volume. Create a service with that task definition.
D. Create an Amazon Elastic Container Service (Amazon ECS) cluster that uses the Amazon EC2 launch type with an Amazon Elastic Block Store (Amazon EBS) volume that has more than 50 GB of space. Create a task definition for the container image. Create a service with that task definition.
Answer: C
Explanation:
The AWS Fargate launch type is a serverless way to run containers on Amazon ECS,
without having to manage any underlying infrastructure. You only pay for the resources
required to run your containers, and AWS handles the provisioning, scaling, and security of
the cluster. Amazon EFS is a fully managed, elastic, and scalable file system that can be
mounted to multiple containers, and provides high availability and durability. By using AWS
Fargate and Amazon EFS, you can run your Docker container image with 50 GB of storage available for temporary files, with the least operational overhead. This solution meets the
requirements of the question.
References:
AWS Fargate
Amazon Elastic File System
Using Amazon EFS file systems with Amazon ECS
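A minimal sketch of the task definition in option C follows (boto3); the image URI and EFS file system ID are placeholders.

import boto3

ecs = boto3.client("ecs")
ecs.register_task_definition(
    family="vendor-app",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="1024",
    memory="2048",
    containerDefinitions=[
        {
            "name": "vendor-app",
            "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/vendor-app:latest",
            "mountPoints": [{"sourceVolume": "scratch", "containerPath": "/scratch"}],
        }
    ],
    volumes=[
        {
            "name": "scratch",  # temporary-file storage backed by EFS
            "efsVolumeConfiguration": {
                "fileSystemId": "fs-0123456789abcdef0",  # hypothetical EFS ID
                "transitEncryption": "ENABLED",
            },
        }
    ],
)

A service created from this task definition then runs entirely on Fargate, with no EC2 capacity to manage.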
Question # 6
A company uses AWS Organizations to run workloads within multiple AWS accounts. A tagging policy adds department tags to AWS resources when the company creates tags. An accounting team needs to determine spending on Amazon EC2 consumption. The accounting team must determine which departments are responsible for the costs regardless of AWS account. The accounting team has access to AWS Cost Explorer for all AWS accounts within the organization and needs to access all reports from Cost Explorer.
Which solution meets these requirements in the MOST operationally efficient way?
A. From the Organizations management account billing console, activate a user-defined cost allocation tag named department. Create one cost report in Cost Explorer grouping by tag name, and filter by EC2.
B. From the Organizations management account billing console, activate an AWS-defined cost allocation tag named department. Create one cost report in Cost Explorer grouping by tag name, and filter by EC2.
C. From the Organizations member account billing console, activate a user-defined cost allocation tag named department. Create one cost report in Cost Explorer grouping by the tag name, and filter by EC2.
D. From the Organizations member account billing console, activate an AWS-defined cost allocation tag named department. Create one cost report in Cost Explorer grouping by tag name and filter by EC2.
Answer: B
Explanation: This solution meets the following requirements:
It is operationally efficient, as it only requires one activation of the cost allocation
tag and one creation of the cost report from the management account, which has
access to all the member accounts’ data and billing preferences.
It is consistent, as it uses the AWS-defined cost allocation tag named department,
which is automatically applied to resources when the company creates tags using
the tagging policy enforced by AWS Organizations. This ensures that the tag name
and value are the same across all the resources and accounts, and avoids any
discrepancies or errors that might arise from user-defined tags.
It is informative, as it creates one cost report in Cost Explorer grouping by the tag
name, and filters by EC2. This allows the accounting team to see the breakdown
of EC2 consumption and costs by department, regardless of the AWS account.
The team can also use other features of Cost Explorer, such as charts, filters, and
forecasts, to analyze and optimize the spending.
References:
Using AWS cost allocation tags - AWS Billing
User-defined cost allocation tags - AWS Billing
Cost Tagging and Reporting with AWS Organizations
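The Cost Explorer report from option B can also be reproduced with the Cost Explorer API; a hedged sketch follows, run from the management account, where the EC2 service name string is an assumption.

import boto3

ce = boto3.client("ce")
response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2024-01-01", "End": "2024-02-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "TAG", "Key": "department"}],  # the activated cost allocation tag
    Filter={
        "Dimensions": {
            "Key": "SERVICE",
            "Values": ["Amazon Elastic Compute Cloud - Compute"],  # assumed service label
        }
    },
)
for group in response["ResultsByTime"][0]["Groups"]:
    print(group["Keys"], group["Metrics"]["UnblendedCost"]["Amount"])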
Question # 7
A company is building an Amazon Elastic Kubernetes Service (Amazon EKS) cluster for its workloads. All secrets that are stored in Amazon EKS must be encrypted in the Kubernetes etcd key-value store.
Which solution will meet these requirements?
A. Create a new AWS Key Management Service (AWS KMS) key. Use AWS Secrets Manager to manage, rotate, and store all secrets in Amazon EKS.
B. Create a new AWS Key Management Service (AWS KMS) key. Enable Amazon EKS KMS secrets encryption on the Amazon EKS cluster.
C. Create the Amazon EKS cluster with default options. Use the Amazon Elastic Block Store (Amazon EBS) Container Storage Interface (CSI) driver as an add-on.
D. Create a new AWS Key Management Service (AWS KMS) key with the aws/ebs alias. Enable default Amazon Elastic Block Store (Amazon EBS) volume encryption for the account.
Answer: B
Explanation: This option is the most secure and simple way to encrypt the secrets that are
stored in Amazon EKS. AWS Key Management Service (AWS KMS) is a service that
allows you to create and manage encryption keys that can be used to encrypt your data.
Amazon EKS KMS secrets encryption is a feature that enables you to use a KMS key to
encrypt the secrets that are stored in the Kubernetes etcd key-value store. This provides an
additional layer of protection for your sensitive data, such as passwords, tokens, and keys.
You can create a new KMS key or use an existing one, and then enable the Amazon EKS
KMS secrets encryption on the Amazon EKS cluster. You can also use IAM policies to
control who can access or use the KMS key.
Option A is not correct because using AWS Secrets Manager to manage, rotate, and store
all secrets in Amazon EKS is not necessary or efficient. AWS Secrets Manager is a service
that helps you securely store, retrieve, and rotate your secrets, such as database
credentials, API keys, and passwords. You can use it to manage secrets that are used by
your applications or services outside of Amazon EKS, but it is not designed to encrypt the
secrets that are stored in the Kubernetes etcd key-value store. Moreover, using AWS
Secrets Manager would incur additional costs and complexity, and it would not leverage the native Amazon EKS KMS secrets encryption feature.
Option C is not correct because using the Amazon EBS Container Storage Interface (CSI)
driver as an add-on does not encrypt the secrets that are stored in Amazon EKS. The
Amazon EBS CSI driver is a plugin that allows you to use Amazon EBS volumes as
persistent storage for your Kubernetes pods. It is useful for providing durable and scalable
storage for your applications, but it does not affect the encryption of the secrets that are
stored in the Kubernetes etcd key-value store. Moreover, using the Amazon EBS CSI
driver would require additional configuration and resources, and it would not provide the
same level of security as using a KMS key.
Option D is not correct because creating a new AWS KMS key with the alias aws/ebs and
enabling default Amazon EBS volume encryption for the account does not encrypt the
secrets that are stored in Amazon EKS. The alias aws/ebs is a reserved alias that is used
by AWS to create a default KMS key for your account. This key is used to encrypt the
Amazon EBS volumes that are created in your account, unless you specify a different KMS
key. Enabling default Amazon EBS volume encryption for the account is a setting that ensures that all new Amazon EBS volumes are encrypted by default. However, these
features do not affect the encryption of the secrets that are stored in the Kubernetes etcd
key-value store. Moreover, using the default KMS key or the default encryption setting
would not provide the same level of control and security as using a custom KMS key and
enabling the Amazon EKS KMS secrets encryption feature.
References:
Encrypting secrets used in Amazon EKS
What Is AWS Key Management Service?
What Is AWS Secrets Manager?
Amazon EBS CSI driver
Encryption at rest
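To make option B concrete, a minimal sketch of creating the cluster with secrets encryption follows; the role ARN, subnet IDs, and KMS key ARN are placeholders.

import boto3

eks = boto3.client("eks")
eks.create_cluster(
    name="workload-cluster",
    roleArn="arn:aws:iam::123456789012:role/eksClusterRole",  # placeholder cluster role
    resourcesVpcConfig={"subnetIds": ["subnet-0abc1234", "subnet-0def5678"]},
    encryptionConfig=[
        {
            "resources": ["secrets"],  # envelope-encrypt Kubernetes secrets in etcd
            "provider": {
                "keyArn": "arn:aws:kms:us-east-1:123456789012:key/1234abcd-12ab-34cd-56ef-1234567890ab"
            },
        }
    ],
)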
Question # 8
A retail company has several businesses. The IT team for each business manages its own AWS account. Each team account is part of an organization in AWS Organizations. Each team monitors its product inventory levels in an Amazon DynamoDB table in the team's own AWS account.
The company is deploying a central inventory reporting application into a shared AWS account. The application must be able to read items from all the teams' DynamoDB tables.
Which authentication option will meet these requirements MOST securely?
A. Integrate DynamoDB with AWS Secrets Manager in the inventory application account. Configure the application to use the correct secret from Secrets Manager to authenticate and read the DynamoDB table. Schedule secret rotation for every 30 days.
B. In every business account, create an IAM user that has programmatic access. Configure the application to use the correct IAM user access key ID and secret access key to authenticate and read the DynamoDB table. Manually rotate IAM access keys every 30 days.
C. In every business account, create an IAM role named BU_ROLE with a policy that gives the role access to the DynamoDB table and a trust policy to trust a specific role in the inventory application account. In the inventory account, create a role named APP_ROLE that allows access to the STS AssumeRole API operation. Configure the application to use APP_ROLE and assume the cross-account role BU_ROLE to read the DynamoDB table.
D. Integrate DynamoDB with AWS Certificate Manager (ACM). Generate identity certificates to authenticate DynamoDB. Configure the application to use the correct certificate to authenticate and read the DynamoDB table.
Answer: C
Explanation: This solution meets the requirements most securely because it uses IAM
roles and the STS AssumeRole API operation to authenticate and authorize the inventory
application to access the DynamoDB tables in different accounts. IAM roles are more
secure than IAM users or certificates because they do not require long-term credentials or
passwords. Instead, IAM roles provide temporary security credentials that are automatically
rotated and can be configured with a limited duration. The STS AssumeRole API operation
enables you to request temporary credentials for a role that you are allowed to assume. By
using this operation, you can delegate access to resources that are in different AWS
accounts that you own or that are owned by third parties. The trust policy of the role defines
which entities can assume the role, and the permissions policy of the role defines which
actions can be performed on the resources. By using this solution, you can avoid hardcoding
credentials or certificates in the inventory application, and you can also avoid
storing them in Secrets Manager or ACM. You can also leverage the built-in security
features of IAM and STS, such as MFA, access logging, and policy conditions.
References:
IAM Roles
STS AssumeRole
Tutorial: Delegate Access Across AWS Accounts Using IAM Roles
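A minimal sketch of option C from the inventory application's point of view follows; account IDs, role names, and the table name are placeholders.

import boto3

sts = boto3.client("sts")
creds = sts.assume_role(
    RoleArn="arn:aws:iam::111122223333:role/BU_ROLE",  # role in one business account
    RoleSessionName="inventory-report",
)["Credentials"]

# Temporary credentials from STS replace any long-term access keys.
dynamodb = boto3.client(
    "dynamodb",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
items = dynamodb.scan(TableName="ProductInventory")["Items"]

The application repeats the AssumeRole call for each business account's BU_ROLE, so no credentials ever need to be stored or rotated manually.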
Question # 9
A company built an application with Docker containers and needs to run the application in the AWS Cloud. The company wants to use a managed service to host the application. The solution must scale in and out appropriately according to demand on the individual container services. The solution also must not result in additional operational overhead or infrastructure to manage.
Which solutions will meet these requirements? (Select TWO)
A. Use Amazon Elastic Container Service (Amazon ECS) with AWS Fargate.
B. Use Amazon Elastic Kubernetes Service (Amazon EKS) with AWS Fargate.
C. Provision an Amazon API Gateway API. Connect the API to AWS Lambda to run the containers.
D. Use Amazon Elastic Container Service (Amazon ECS) with Amazon EC2 worker nodes.
E. Use Amazon Elastic Kubernetes Service (Amazon EKS) with Amazon EC2 worker nodes.
Answer: A,B
Explanation: These options are the best solutions because they allow the company to run
the application with Docker containers in the AWS Cloud using a managed service that
scales automatically and does not require any infrastructure to manage. By using AWS
Fargate, the company can launch and run containers without having to provision, configure,
or scale clusters of EC2 instances. Fargate allocates the right amount of compute
resources for each container and scales them up or down as needed. By using Amazon
ECS or Amazon EKS, the company can choose the container orchestration platform that
suits its needs. Amazon ECS is a fully managed service that integrates with other AWS
services and simplifies the deployment and management of containers. Amazon EKS is a
managed service that runs Kubernetes on AWS and provides compatibility with existing
Kubernetes tools and plugins.
C. Provision an Amazon API Gateway API Connect the API to AWS Lambda to run the
containers. This option is not feasible because AWS Lambda does not support running
Docker containers directly. Lambda functions are executed in a sandboxed environment
that is isolated from other functions and resources. To run Docker containers on Lambda,
the company would need to use a custom runtime or a wrapper library that emulates the
Docker API, which can introduce additional complexity and overhead.
D. Use Amazon Elastic Container Service (Amazon ECS) with Amazon EC2 worker nodes.
This option is not optimal because it requires the company to manage the EC2 instances
that host the containers. The company would need to provision, configure, scale, patch,
and monitor the EC2 instances, which can increase the operational overhead and
infrastructure costs.
E. Use Amazon Elastic Kubernetes Service (Amazon EKS) with Amazon EC2 worker
nodes. This option is not ideal because it requires the company to manage the EC2
instances that host the containers. The company would need to provision, configure, scale,
patch, and monitor the EC2 instances, which can increase the operational overhead and
infrastructure costs.
References:
1 AWS Fargate - Amazon Web Services
2 Amazon Elastic Container Service - Amazon Web Services
3 Amazon Elastic Kubernetes Service - Amazon Web Services
4 AWS Lambda FAQs - Amazon Web Services
Question # 10
A company uses Amazon S3 as its data lake. The company has a new partner that must use SFTP to upload data files. A solutions architect needs to implement a highly available SFTP solution that minimizes operational overhead.
Which solution will meet these requirements?
A. Use AWS Transfer Family to configure an SFTP-enabled server with a publicly accessible endpoint. Choose the S3 data lake as the destination.
B. Use Amazon S3 File Gateway as an SFTP server. Expose the S3 File Gateway endpoint URL to the new partner. Share the S3 File Gateway endpoint with the new partner.
C. Launch an Amazon EC2 instance in a private subnet in a VPC. Instruct the new partner to upload files to the EC2 instance by using a VPN. Run a cron job script on the EC2 instance to upload files to the S3 data lake.
D. Launch Amazon EC2 instances in a private subnet in a VPC. Place a Network Load Balancer (NLB) in front of the EC2 instances. Create an SFTP listener port for the NLB. Share the NLB hostname with the new partner. Run a cron job script on the EC2 instances to upload files to the S3 data lake.
Answer: A
Explanation: This option is the most cost-effective and simple way to enable SFTP access
to the S3 data lake. AWS Transfer Family is a fully managed service that supports secure
file transfers over SFTP, FTPS, and FTP protocols. You can create an SFTP-enabled
server with a public endpoint and associate it with your S3 bucket. You can also use AWS
Identity and Access Management (IAM) roles and policies to control access to your S3 data
lake. The service scales automatically to handle any volume of file transfers and provides
high availability and durability. You do not need to provision, manage, or patch any servers
or load balancers.
Option B is not correct because Amazon S3 File Gateway is not an SFTP server. It is a
hybrid cloud storage service that provides a local file system interface to S3. You can use it
to store and retrieve files as objects in S3 using standard file protocols such as NFS and
SMB. However, it does not support SFTP protocol, and it requires deploying a file gateway
appliance on-premises or on EC2.
Option C is not cost-effective or scalable because it requires launching and managing an
EC2 instance in a private subnet and setting up a VPN connection for the new partner. This
would incur additional costs for the EC2 instance, the VPN connection, and the data
transfer. It would also introduce complexity and security risks to the solution. Moreover, it
would require running a cron job script on the EC2 instance to upload files to the S3 data
lake, which is not efficient or reliable.
Option D is not cost-effective or scalable because it requires launching and managing
multiple EC2 instances in a private subnet and placing an NLB in front of them. This would
incur additional costs for the EC2 instances, the NLB, and the data transfer. It would also
introduce complexity and security risks to the solution. Moreover, it would require running a
cron job script on the EC2 instances to upload files to the S3 data lake, which is not
efficient or reliable.
References:
What Is AWS Transfer Family?
What Is Amazon S3 File Gateway?
What Is Amazon EC2?
What Is Amazon Virtual Private Cloud?
What Is a Network Load Balancer?
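A hedged sketch of option A using the AWS Transfer Family API follows; the IAM role, bucket, and user names are placeholders.

import boto3

transfer = boto3.client("transfer")
server = transfer.create_server(
    Protocols=["SFTP"],
    Domain="S3",                      # store transferred files directly in Amazon S3
    EndpointType="PUBLIC",            # publicly accessible endpoint for the partner
    IdentityProviderType="SERVICE_MANAGED",
)
transfer.create_user(
    ServerId=server["ServerId"],
    UserName="partner",
    Role="arn:aws:iam::123456789012:role/TransferS3AccessRole",  # placeholder role
    HomeDirectory="/company-data-lake/partner-uploads",          # placeholder path
)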
Question # 11
A company hosts an application used to upload files to an Amazon S3 bucket. Once uploaded, the files are processed to extract metadata, which takes less than 5 seconds. The volume and frequency of the uploads varies from a few files each hour to hundreds of concurrent uploads. The company has asked a solutions architect to design a cost-effective architecture that will meet these requirements.
What should the solutions architect recommend?
A. Configure AWS CloudTrail trails to log S3 API calls. Use AWS AppSync to process the files.
B. Configure an object-created event notification within the S3 bucket to invoke an AWS Lambda function to process the files.
C. Configure Amazon Kinesis Data Streams to process and send data to Amazon S3. Invoke an AWS Lambda function to process the files.
D. Configure an Amazon Simple Notification Service (Amazon SNS) topic to process the files uploaded to Amazon S3. Invoke an AWS Lambda function to process the files.
Answer: B
Explanation: This option is the most cost-effective and scalable way to process the files
uploaded to S3. AWS CloudTrail is used to log API calls, not to trigger actions based on
them. AWS AppSync is a service for building GraphQL APIs, not for processing files.
Amazon Kinesis Data Streams is used to ingest and process streaming data, not to send
data to S3. Amazon SNS is a pub/sub service that can be used to notify subscribers of
events, not to process files. References:
Using AWS Lambda with Amazon S3
AWS CloudTrail FAQs
What Is AWS AppSync?
What Is Amazon Kinesis Data Streams?
What Is Amazon Simple Notification Service?
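A minimal sketch of option B follows; the bucket and function names are placeholders, and the Lambda function is assumed to already allow s3.amazonaws.com to invoke it.

import boto3

s3 = boto3.client("s3")
s3.put_bucket_notification_configuration(
    Bucket="upload-bucket",  # hypothetical bucket name
    NotificationConfiguration={
        "LambdaFunctionConfigurations": [
            {
                "LambdaFunctionArn": (
                    "arn:aws:lambda:us-east-1:123456789012:function:extract-metadata"
                ),
                "Events": ["s3:ObjectCreated:*"],  # invoke on every new upload
            }
        ]
    },
)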
Question # 12
A company runs analytics software on Amazon EC2 instances. The software accepts job requests from users to process data that has been uploaded to Amazon S3. Users report that some submitted data is not being processed. Amazon CloudWatch reveals that the EC2 instances have a consistent CPU utilization at or near 100%. The company wants to improve system performance and scale the system based on user load.
What should a solutions architect do to meet these requirements?
A. Create a copy of the instance. Place all instances behind an Application Load Balancer.
B. Create an S3 VPC endpoint for Amazon S3. Update the software to reference the endpoint.
C. Stop the EC2 instances. Modify the instance type to one with a more powerful CPU and more memory. Restart the instances.
D. Route incoming requests to Amazon Simple Queue Service (Amazon SQS). Configure an EC2 Auto Scaling group based on queue size. Update the software to read from the queue.
Answer: D
Explanation: This option is the best solution because it allows the company to decouple
the analytics software from the user requests and scale the EC2 instances dynamically
based on the demand. By using Amazon SQS, the company can create a queue that
stores the user requests and acts as a buffer between the users and the analytics software.
This way, the software can process the requests at its own pace without losing any data or
overloading the EC2 instances. By using EC2 Auto Scaling, the company can create an
Auto Scaling group that launches or terminates EC2 instances automatically based on the
size of the queue. This way, the company can ensure that there are enough instances to
handle the load and optimize the cost and performance of the system. By updating the
software to read from the queue, the company can enable the analytics software to
consume the requests from the queue and process the data from Amazon S3.
A. Create a copy of the instance Place all instances behind an Application Load Balancer.
This option is not optimal because it does not address the root cause of the problem, which
is the high CPU utilization of the EC2 instances. An Application Load Balancer can
distribute the incoming traffic across multiple instances, but it cannot scale the instances
based on the load or reduce the processing time of the analytics software. Moreover, this
option can incur additional costs for the load balancer and the extra instances.
B. Create an S3 VPC endpoint for Amazon S3 Update the software to reference the
endpoint. This option is not effective because it does not solve the issue of the high CPU
utilization of the EC2 instances. An S3 VPC endpoint can enable the EC2 instances to
access Amazon S3 without going through the internet, which can improve the network
performance and security. However, it cannot reduce the processing time of the analytics
software or scale the instances based on the load.
C. Stop the EC2 instances. Modify the instance type to one with a more powerful CPU and
more memory. Restart the instances. This option is not scalable because it does not
account for the variability of the user load. Changing the instance type to a more powerful
one can improve the performance of the analytics software, but it cannot adjust the number
of instances based on the demand. Moreover, this option can increase the cost of the
system and cause downtime during the instance modification.
References:
1 Using Amazon SQS queues with Amazon EC2 Auto Scaling - Amazon EC2 Auto
Scaling
2 Tutorial: Set up a scaled and load-balanced application - Amazon EC2 Auto
Scaling
3 Amazon EC2 Auto Scaling FAQs
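The worker side of option D could look like the following sketch; the queue URL is a placeholder and process_job stands in for the company's existing analytics routine.

import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/analytics-jobs"

while True:
    messages = sqs.receive_message(
        QueueUrl=queue_url,
        MaxNumberOfMessages=10,
        WaitTimeSeconds=20,  # long polling cuts down on empty responses
    ).get("Messages", [])
    for message in messages:
        process_job(message["Body"])  # hypothetical call into the analytics software
        sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=message["ReceiptHandle"])

The Auto Scaling group can then scale on the queue's ApproximateNumberOfMessagesVisible metric (or a backlog-per-instance metric) so the fleet grows and shrinks with the backlog.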
Question # 13
A company is deploying an application that processes streaming data in near-real time. The company plans to use Amazon EC2 instances for the workload. The network architecture must be configurable to provide the lowest possible latency between nodes.
Which combination of network solutions will meet these requirements? (Select TWO)
A. Enable and configure enhanced networking on each EC2 instance.
B. Group the EC2 instances in separate accounts.
C. Run the EC2 instances in a cluster placement group.
D. Attach multiple elastic network interfaces to each EC2 instance.
E. Use Amazon Elastic Block Store (Amazon EBS) optimized instance types.
Answer: A,C
Explanation: These options are the most suitable ways to configure the network
architecture to provide the lowest possible latency between nodes. Option A enables and
configures enhanced networking on each EC2 instance, which is a feature that improves
the network performance of the instance by providing higher bandwidth, lower latency, and
lower jitter. Enhanced networking uses single root I/O virtualization (SR-IOV) or Elastic
Fabric Adapter (EFA) to provide direct access to the network hardware. You can enable
and configure enhanced networking by choosing a supported instance type and a
compatible operating system, and installing the required drivers. Option C runs the EC2
instances in a cluster placement group, which is a logical grouping of instances within a
single Availability Zone that are placed close together on the same underlying hardware.
Cluster placement groups provide the lowest network latency and the highest network
throughput among the placement group options. You can run the EC2 instances in a
cluster placement group by creating a placement group and launching the instances into it.
Option B is not suitable because grouping the EC2 instances in separate accounts does
not provide the lowest possible latency between nodes. Separate accounts are used to
isolate and organize resources for different purposes, such as security, billing, or
compliance. However, they do not affect the network performance or proximity of the
instances. Moreover, grouping the EC2 instances in separate accounts would incur
additional costs and complexity, and it would require setting up cross-account networking
and permissions.
Option D is not suitable because attaching multiple elastic network interfaces to each EC2
instance does not provide the lowest possible latency between nodes. Elastic network
interfaces are virtual network interfaces that can be attached to EC2 instances to provide
additional network capabilities, such as multiple IP addresses, multiple subnets, or
enhanced security. However, they do not affect the network performance or proximity of the
instances. Moreover, attaching multiple elastic network interfaces to each EC2 instance
would consume additional resources and limit the instance type choices.
Option E is not suitable because using Amazon EBS optimized instance types does not
provide the lowest possible latency between nodes. Amazon EBS optimized instance types
are instances that provide dedicated bandwidth for Amazon EBS volumes, which are block
storage volumes that can be attached to EC2 instances. EBS optimized instance types
improve the performance and consistency of the EBS volumes, but they do not affect the
network performance or proximity of the instances. Moreover, using EBS optimized
instance types would incur additional costs and may not be necessary for the streaming
data workload.
References:
Enhanced networking on Linux
Placement groups
Elastic network interfaces
Amazon EBS-optimized instances
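A minimal sketch of option C follows; the AMI ID is a placeholder, and a current-generation instance type such as c5n is assumed, since it supports ENA-based enhanced networking.

import boto3

ec2 = boto3.client("ec2")
ec2.create_placement_group(GroupName="streaming-nodes", Strategy="cluster")
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI
    InstanceType="c5n.xlarge",         # ENA enhanced networking is available here
    MinCount=4,
    MaxCount=4,
    Placement={"GroupName": "streaming-nodes"},  # keep the nodes physically close
)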
Question # 14
A company runs a container application on a Kubernetes cluster in the company's data center. The application uses Advanced Message Queuing Protocol (AMQP) to communicate with a message queue. The data center cannot scale fast enough to meet the company's expanding business needs. The company wants to migrate the workloads to AWS.
Which solution will meet these requirements with the LEAST operational overhead?
A. Migrate the container application to Amazon Elastic Container Service (Amazon ECS). Use Amazon Simple Queue Service (Amazon SQS) to retrieve the messages.
B. Migrate the container application to Amazon Elastic Kubernetes Service (Amazon EKS). Use Amazon MQ to retrieve the messages.
C. Use highly available Amazon EC2 instances to run the application. Use Amazon MQ to retrieve the messages.
D. Use AWS Lambda functions to run the application. Use Amazon Simple Queue Service (Amazon SQS) to retrieve the messages.
Answer: B
Explanation: This option is the best solution because it allows the company to migrate the
container application to AWS with minimal changes and leverage a managed service to run
the Kubernetes cluster and the message queue. By using Amazon EKS, the company can
run the container application on a fully managed Kubernetes control plane that is
compatible with the existing Kubernetes tools and plugins. Amazon EKS handles the
provisioning, scaling, patching, and security of the Kubernetes cluster, reducing the
operational overhead and complexity. By using Amazon MQ, the company can use a fully
managed message broker service that supports AMQP and other popular messaging
protocols. Amazon MQ handles the administration, maintenance, and scaling of the
message broker, ensuring high availability, durability, and security of the messages.
A. Migrate the container application to Amazon Elastic Container Service (Amazon ECS)
Use Amazon Simple Queue Service (Amazon SQS) to retrieve the messages. This option
is not optimal because it requires the company to change the container orchestration
platform from Kubernetes to ECS, which can introduce additional complexity and risk.
Moreover, it requires the company to change the messaging protocol from AMQP to SQS,
which can also affect the application logic and performance. Amazon ECS and Amazon
SQS are both fully managed services that simplify the deployment and management of
containers and messages, but they may not be compatible with the existing application
architecture and requirements.
C. Use highly available Amazon EC2 instances to run the application Use Amazon MQ to
retrieve the messages. This option is not ideal because it requires the company to manage
the EC2 instances that host the container application. The company would need to
provision, configure, scale, patch, and monitor the EC2 instances, which can increase the
operational overhead and infrastructure costs. Moreover, the company would need to
install and maintain the Kubernetes software on the EC2 instances, which can also add
complexity and risk. Amazon MQ is a fully managed message broker service that supports
AMQP and other popular messaging protocols, but it cannot compensate for the lack of a
managed Kubernetes service.
D. Use AWS Lambda functions to run the application Use Amazon Simple Queue Service
(Amazon SQS) to retrieve the messages. This option is not feasible because AWS Lambda
does not support running container applications directly. Lambda functions are executed in
a sandboxed environment that is isolated from other functions and resources. To run container applications on Lambda, the company would need to use a custom runtime or a
wrapper library that emulates the container API, which can introduce additional complexity
and overhead. Moreover, Lambda functions have limitations in terms of available CPU,
memory, and runtime, which may not suit the application needs. Amazon SQS is a fully
managed message queue service that supports asynchronous communication, but it does
not support AMQP or other messaging protocols.
References:
1 Amazon Elastic Kubernetes Service - Amazon Web Services
2 Amazon MQ - Amazon Web Services
3 Amazon Elastic Container Service - Amazon Web Services
4 AWS Lambda FAQs - Amazon Web Services
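For illustration, the application side of option B can keep using plain AMQP; the sketch below assumes an Amazon MQ for RabbitMQ broker and the third-party pika client, with a placeholder endpoint and credentials.

import pika

params = pika.URLParameters(
    "amqps://app_user:app_password@b-1234abcd.mq.us-east-1.amazonaws.com:5671"
)
connection = pika.BlockingConnection(params)
channel = connection.channel()
channel.queue_declare(queue="orders", durable=True)

# Fetch a single message, if one is available, exactly as the on-premises code would.
method, header, body = channel.basic_get(queue="orders", auto_ack=True)
if body is not None:
    print(body.decode())
connection.close()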
Question # 15
A company runs a real-time data ingestion solution on AWS. The solution consists of the most recent version of Amazon Managed Streaming for Apache Kafka (Amazon MSK). The solution is deployed in a VPC in private subnets across three Availability Zones.
A solutions architect needs to redesign the data ingestion solution to be publicly available over the internet. The data in transit must also be encrypted.
Which solution will meet these requirements with the MOST operational efficiency?
A. Configure public subnets in the existing VPC. Deploy an MSK cluster in the public subnets. Update the MSK cluster security settings to enable mutual TLS authentication.
B. Create a new VPC that has public subnets. Deploy an MSK cluster in the public subnets. Update the MSK cluster security settings to enable mutual TLS authentication.
C. Deploy an Application Load Balancer (ALB) that uses private subnets. Configure an ALB security group inbound rule to allow inbound traffic from the VPC CIDR block for HTTPS protocol.
D. Deploy a Network Load Balancer (NLB) that uses private subnets. Configure an NLB listener for HTTPS communication over the internet.
Answer: A
Explanation: The solution that meets the requirements with the most operational efficiency
is to configure public subnets in the existing VPC and deploy an MSK cluster in the public
subnets. This solution allows the data ingestion solution to be publicly available over the
internet without creating a new VPC or deploying a load balancer. The solution also
ensures that the data in transit is encrypted by enabling mutual TLS authentication, which
requires both the client and the server to present certificates for verification. This solution
leverages the public access feature of Amazon MSK, which is available for clusters running
Apache Kafka 2.6.0 or later versions.
The other solutions are not as efficient as the first one because they either create
unnecessary resources or do not encrypt the data in transit. Creating a new VPC with
public subnets would incur additional costs and complexity for managing network resources
and routing. Deploying an ALB or an NLB would also add more costs and latency for the
data ingestion solution. Moreover, an ALB or an NLB would not encrypt the data in transit
by itself, unless they are configured with HTTPS listeners and certificates, which would
require additional steps and maintenance. Therefore, these solutions are not optimal for the
given requirements.
References:
Public access - Amazon Managed Streaming for Apache Kafka
Question # 16
A company runs a Java-based job on an Amazon EC2 instance. The job runs every hour and takes 10 seconds to run. The job runs on a scheduled interval and consumes 1 GB of memory. The CPU utilization of the instance is low except for short surges during which the job uses the maximum CPU available. The company wants to optimize the costs to run the job.
Which solution will meet these requirements?
A. Use AWS App2Container (A2C) to containerize the job. Run the job as an Amazon Elastic Container Service (Amazon ECS) task on AWS Fargate with 0.5 virtual CPU (vCPU) and 1 GB of memory.
B. Copy the code into an AWS Lambda function that has 1 GB of memory. Create an Amazon EventBridge scheduled rule to run the code each hour.
C. Use AWS App2Container (A2C) to containerize the job. Install the container in the existing Amazon Machine Image (AMI). Ensure that the schedule stops the container when the task finishes.
D. Configure the existing schedule to stop the EC2 instance at the completion of the job and restart the EC2 instance when the next job starts.
Answer: B
Explanation: AWS Lambda is a serverless compute service that allows you to run code
without provisioning or managing servers. You can create Lambda functions using various
languages, including Java, and specify the amount of memory allocated to your function, with CPU allocated in proportion to that memory. Lambda charges you only for the compute time you consume, which is calculated
based on the number of requests and the duration of your code execution. You can use
Amazon EventBridge to trigger your Lambda function on a schedule, such as every hour,
using cron or rate expressions. This solution will optimize the costs to run the job, as you
will not pay for any idle time or unused resources, unlike running the job on an EC2
instance.
References:
AWS Lambda FAQs, General Information section
Tutorial: Schedule AWS Lambda functions using EventBridge, Introduction section
Schedule expressions using rate or cron - AWS Lambda, Introduction section
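A minimal sketch of option B's schedule follows; the function ARN is a placeholder.

import boto3

events = boto3.client("events")
lambda_client = boto3.client("lambda")

function_arn = "arn:aws:lambda:us-east-1:123456789012:function:hourly-java-job"
rule = events.put_rule(Name="hourly-job", ScheduleExpression="rate(1 hour)")

# Allow EventBridge to invoke the function, then wire the rule to it.
lambda_client.add_permission(
    FunctionName=function_arn,
    StatementId="allow-eventbridge",
    Action="lambda:InvokeFunction",
    Principal="events.amazonaws.com",
    SourceArn=rule["RuleArn"],
)
events.put_targets(Rule="hourly-job", Targets=[{"Id": "1", "Arn": function_arn}])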
Question # 17
An ecommerce company runs applications in AWS accounts that are part of an organization in AWS Organizations. The applications run on Amazon Aurora PostgreSQL databases across all the accounts. The company needs to prevent malicious activity and must identify abnormal failed and incomplete login attempts to the databases.
Which solution will meet these requirements in the MOST operationally efficient way?
A. Attach service control policies (SCPs) to the root of the organization to identify the failed login attempts.
B. Enable the Amazon RDS Protection feature in Amazon GuardDuty for the member accounts of the organization.
C. Publish the Aurora general logs to a log group in Amazon CloudWatch Logs. Export the log data to a central Amazon S3 bucket.
D. Publish all the Aurora PostgreSQL database events in AWS CloudTrail to a central Amazon S3 bucket.
Answer: C
Explanation: This option is the most operationally efficient way to meet the requirements
because it allows the company to monitor and analyze the database login activity across all
the accounts in the organization. By publishing the Aurora general logs to a log group in
Amazon CloudWatch Logs, the company can enable the logging of the database
connections, disconnections, and failed authentication attempts. By exporting the log data
to a central Amazon S3 bucket, the company can store the log data in a durable and cost-effective
way and use other AWS services or tools to perform further analysis or alerting on
the log data. For example, the company can use Amazon Athena to query the log data in
Amazon S3, or use Amazon SNS to send notifications based on the log data.
A. Attach service control policies (SCPs) to the root of the organization to identify the failed
login attempts. This option is not effective because SCPs are not designed to identify the
failed login attempts, but to restrict the actions that the users and roles can perform in the
member accounts of the organization. SCPs are applied to the AWS API calls, not to the
database login attempts. Moreover, SCPs do not provide any logging or analysis
capabilities for the database activity.
B. Enable the Amazon RDS Protection feature in Amazon GuardDuty for the member
accounts of the organization. This option is not optimal because the Amazon RDS
Protection feature in Amazon GuardDuty is not available for Aurora PostgreSQL
databases, but only for Amazon RDS for MySQL and Amazon RDS for MariaDB databases. Moreover, the Amazon RDS Protection feature does not monitor the database
login attempts, but the network and API activity related to the RDS instances.
D. Publish all the Aurora PostgreSQL database events in AWS CloudTrail to a central
Amazon S3 bucket. This option is not sufficient because AWS CloudTrail does not capture
the database login attempts, but only the AWS API calls made by or on behalf of the
Aurora PostgreSQL database. For example, AWS CloudTrail can record the events such
as creating, modifying, or deleting the database instances, clusters, or snapshots, but not
the events such as connecting, disconnecting, or failing to authenticate to the database.
References:
1 Working with Amazon Aurora PostgreSQL - Amazon Aurora
2 Working with log groups and log streams - Amazon CloudWatch Logs
3 Exporting Log Data to Amazon S3 - Amazon CloudWatch Logs
4 Amazon GuardDuty FAQs
5 Logging Amazon RDS API Calls with AWS CloudTrail - Amazon Relational Database Service
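The export step of option C can be automated with the CloudWatch Logs API; the sketch below uses placeholder log group and bucket names, and assumes the central bucket's policy already allows CloudWatch Logs to write to it.

import time

import boto3

logs = boto3.client("logs")
now_ms = int(time.time() * 1000)
logs.create_export_task(
    taskName="aurora-login-audit",
    logGroupName="/aws/rds/cluster/app-cluster/postgresql",  # placeholder log group
    fromTime=now_ms - 24 * 60 * 60 * 1000,  # last 24 hours, in epoch milliseconds
    to=now_ms,
    destination="central-db-audit-logs",     # placeholder central S3 bucket
    destinationPrefix="aurora-general-logs",
)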
Question # 18
A company needs to provide customers with secure access to its data. The company processes customer data and stores the results in an Amazon S3 bucket.
All the data is subject to strong regulations and security requirements. The data must be encrypted at rest. Each customer must be able to access only their data from their AWS account. Company employees must not be able to access the data.
Which solution will meet these requirements?
A. Provision an AWS Certificate Manager (ACM) certificate for each customer. Encrypt the data client-side. In the private certificate policy, deny access to the certificate for all principals except an IAM role that the customer provides.
B. Provision a separate AWS Key Management Service (AWS KMS) key for each customer. Encrypt the data server-side. In the S3 bucket policy, deny decryption of data for all principals except an IAM role that the customer provides.
C. Provision a separate AWS Key Management Service (AWS KMS) key for each customer. Encrypt the data server-side. In each KMS key policy, deny decryption of data for all principals except an IAM role that the customer provides.
D. Provision an AWS Certificate Manager (ACM) certificate for each customer. Encrypt the data client-side. In the public certificate policy, deny access to the certificate for all principals except an IAM role that the customer provides.
Answer: C
Explanation: The correct solution is to provision a separate AWS KMS key for each
customer and encrypt the data server-side. This way, the company can use the S3
encryption feature to protect the data at rest and delegate the control of the encryption keys
to the customers. The customers can then use their own IAM roles to access and decrypt
their data. The company employees will not be able to access the data because they are
not authorized by the KMS key policies. The other options are incorrect because:
Option A and D are using ACM certificates to encrypt the data client-side. This is
not a recommended practice for S3 encryption because it adds complexity and
overhead to the encryption process. Moreover, the company will have to manage
the certificates and their policies for each customer, which is neither scalable nor secure.
Option B uses a separate KMS key for each customer, but it relies on the S3
bucket policy to control decryption. A bucket policy governs access to the bucket
and its objects; it cannot deny the use of a KMS key for decryption. Permission to
decrypt is evaluated against the KMS key policy, so placing the restriction in the
bucket policy leaves each key usable by any principal that the key policy allows,
including company employees. Option C puts the restriction where it belongs: in
each customer's KMS key policy.
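As a rough illustration of option C only (Python with boto3; the account ID, role ARN, and statement wording are placeholders, not a reference implementation), the sketch below creates a per-customer key whose key policy lets only the customer-supplied IAM role use the key, while the company account keeps administrative, non-cryptographic permissions:

    import json
    import boto3

    kms = boto3.client("kms")

    COMPANY_ACCOUNT = "111111111111"                                        # placeholder
    CUSTOMER_ROLE = "arn:aws:iam::222222222222:role/CustomerDataAccess"     # placeholder

    key_policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                # Company admins can manage the key but cannot use it to decrypt data.
                "Sid": "AllowKeyAdministration",
                "Effect": "Allow",
                "Principal": {"AWS": f"arn:aws:iam::{COMPANY_ACCOUNT}:root"},
                "Action": ["kms:Create*", "kms:Describe*", "kms:Enable*", "kms:Put*",
                           "kms:Update*", "kms:Revoke*", "kms:Disable*", "kms:Get*",
                           "kms:List*", "kms:ScheduleKeyDeletion", "kms:CancelKeyDeletion"],
                "Resource": "*",
            },
            {
                # Only the customer-provided role may use the key for decryption.
                "Sid": "AllowCustomerUse",
                "Effect": "Allow",
                "Principal": {"AWS": CUSTOMER_ROLE},
                "Action": ["kms:Decrypt", "kms:GenerateDataKey", "kms:DescribeKey"],
                "Resource": "*",
            },
        ],
    }

    response = kms.create_key(
        Description="Per-customer key for SSE-KMS encrypted objects",
        Policy=json.dumps(key_policy),
    )
    print(response["KeyMetadata"]["Arn"])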
References:
S3 encryption
KMS key policies
ACM certificates
Question # 19
A company has a nightly batch processing routine that analyzes report files that an on-premises file system receives daily through SFTP. The company wants to move the solution to the AWS Cloud. The solution must be highly available and resilient. The solution also must minimize operational effort. Which solution meets these requirements?
A. Deploy AWS Transfer for SFTP and an Amazon Elastic File System (Amazon EFS) file system for storage. Use an Amazon EC2 instance in an Auto Scaling group with a scheduled scaling policy to run the batch operation.
B. Deploy an Amazon EC2 instance that runs Linux and an SFTP service. Use an Amazon Elastic Block Store (Amazon EBS) volume for storage. Use an Auto Scaling group with the minimum number of instances and desired number of instances set to 1.
C. Deploy an Amazon EC2 instance that runs Linux and an SFTP service. Use an Amazon Elastic File System (Amazon EFS) file system for storage. Use an Auto Scaling group with the minimum number of instances and desired number of instances set to 1.
D. Deploy AWS Transfer for SFTP and an Amazon S3 bucket for storage. Modify the application to pull the batch files from Amazon S3 to an Amazon EC2 instance for processing. Use an EC2 instance in an Auto Scaling group with a scheduled scaling policy to run the batch operation.
Answer: D
Explanation: AWS Transfer Family provides a fully managed, highly available SFTP
endpoint, and Amazon S3 provides durable, multi-AZ object storage, so the company does
not have to build, patch, or fail over its own file-transfer servers. Modifying the application
to pull the batch files from Amazon S3 keeps the processing tier simple, and an EC2
instance in an Auto Scaling group with a scheduled scaling policy runs only during the
nightly batch window, which minimizes operational effort. Options B and C run a
self-managed SFTP service on a single EC2 instance; even with an Auto Scaling group
whose minimum and desired capacity are set to 1, the SFTP endpoint is a single point of
failure and the instance must be managed and patched, so those designs are neither
highly available nor low-overhead. Option A also uses AWS Transfer for SFTP, but pairing
it with Amazon EFS and mounting the file system on the batch instances adds more
moving parts than the S3-based design in option D.
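As a minimal illustration of option D (Python with boto3; the role ARN, bucket prefix, user name, and key material are placeholders), the following sketch creates a managed SFTP endpoint backed by Amazon S3 and maps one SFTP user to a bucket prefix:

    import boto3

    transfer = boto3.client("transfer")

    # Fully managed SFTP endpoint; AWS operates the fleet behind it.
    server = transfer.create_server(
        Protocols=["SFTP"],
        Domain="S3",
        IdentityProviderType="SERVICE_MANAGED",
        EndpointType="PUBLIC",
    )

    # The partner maps to an IAM role and an S3 prefix instead of a local directory.
    transfer.create_user(
        ServerId=server["ServerId"],
        UserName="report-uploader",
        Role="arn:aws:iam::111111111111:role/TransferS3AccessRole",   # placeholder
        HomeDirectory="/example-report-bucket/incoming",              # placeholder
        SshPublicKeyBody="ssh-rsa AAAA...",                           # uploader's public key
    )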
References:
AWS Transfer Family
Amazon S3
Scheduled scaling for Amazon EC2 Auto Scaling
Question # 20
A company uses high concurrency AWS Lambda functions to process a constantly increasing number of messages in a message queue during marketing events. The Lambda functions use CPU intensive code to process the messages. The company wants to reduce the compute costs and to maintain service latency for its customers. Which solution will meet these requirements?
A. Configure reserved concurrency for the Lambda functions. Decrease the memory allocated to the Lambda functions.
B. Configure reserved concurrency for the Lambda functions. Increase the memory according to AWS Compute Optimizer recommendations.
C. Configure provisioned concurrency for the Lambda functions. Decrease the memory allocated to the Lambda functions.
D. Configure provisioned concurrency for the Lambda functions. Increase the memory according to AWS Compute Optimizer recommendations.
Answer: D
Explanation: The company wants to reduce the compute costs and maintain service
latency for its Lambda functions that process a constantly increasing number of messages
in a message queue. The Lambda functions use CPU intensive code to process the
messages. To meet these requirements, a solutions architect should recommend the
following solution:
Configure provisioned concurrency for the Lambda functions. Provisioned
concurrency is the number of pre-initialized execution environments that are
allocated to the Lambda functions. These execution environments are prepared to
respond immediately to incoming function requests, reducing the cold start latency.
Configuring provisioned concurrency also helps to avoid throttling errors due to
reaching the concurrency limit of the Lambda service.
Increase the memory according to AWS Compute Optimizer recommendations.
AWS Compute Optimizer is a service that provides recommendations for optimal
AWS resource configurations based on your utilization data. By increasing the
memory allocated to the Lambda functions, you can also increase the CPU power
and improve the performance of your CPU intensive code. AWS Compute
Optimizer can help you find the optimal memory size for your Lambda functions
based on your workload characteristics and performance goals.
This solution will reduce the compute costs by avoiding unnecessary over-provisioning of
memory and CPU resources, and maintain service latency by using provisioned
concurrency and optimal memory size for the Lambda functions.
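For reference, both configuration changes in option D can be applied with two boto3 calls; the function name, alias, and values below are placeholders rather than recommended settings:

    import boto3

    lambda_client = boto3.client("lambda")

    # More memory also means proportionally more CPU for the CPU-intensive code.
    lambda_client.update_function_configuration(
        FunctionName="process-messages",
        MemorySize=2048,        # value taken from the Compute Optimizer recommendation
    )

    # Pre-initialized execution environments keep latency steady during events.
    lambda_client.put_provisioned_concurrency_config(
        FunctionName="process-messages",
        Qualifier="live",                      # published alias or version
        ProvisionedConcurrentExecutions=100,
    )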
References:
Provisioned Concurrency
AWS Compute Optimizer
Question # 21
A company runs applications on AWS that connect to the company's Amazon RDS database. The applications scale on weekends and at peak times of the year. The company wants to scale the database more effectively for its applications that connect to the database. Which solution will meet these requirements with the LEAST operational overhead?
A. Use Amazon DynamoDB with connection pooling with a target group configuration for the database. Change the applications to use the DynamoDB endpoint.
B. Use Amazon RDS Proxy with a target group for the database. Change the applications to use the RDS Proxy endpoint.
C. Use a custom proxy that runs on Amazon EC2 as an intermediary to the database. Change the applications to use the custom proxy endpoint.
D. Use an AWS Lambda function to provide connection pooling with a target group configuration for the database. Change the applications to use the Lambda function.
Answer: B
Explanation:
Amazon RDS Proxy is a fully managed, highly available database proxy for Amazon
Relational Database Service (RDS) that makes applications more scalable, more resilient
to database failures, and more secure1. RDS Proxy allows applications to pool and share
connections established with the database, improving database efficiency and application
scalability2. RDS Proxy also reduces failover times for Aurora and RDS databases by up to
66% and enables IAM authentication and Secrets Manager integration for database
access1. RDS Proxy can be enabled for most applications with no code changes2.
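A minimal sketch of setting up option B with boto3 follows; the proxy name, secret ARN, role, subnets, and instance identifier are placeholders, and the applications would then point their connection strings at the proxy endpoint:

    import boto3

    rds = boto3.client("rds")

    proxy = rds.create_db_proxy(
        DBProxyName="app-db-proxy",
        EngineFamily="MYSQL",
        Auth=[{
            "AuthScheme": "SECRETS",
            "SecretArn": "arn:aws:secretsmanager:us-east-1:111111111111:secret:app-db",  # placeholder
            "IAMAuth": "DISABLED",
        }],
        RoleArn="arn:aws:iam::111111111111:role/RdsProxySecretsRole",   # placeholder
        VpcSubnetIds=["subnet-aaa", "subnet-bbb"],                      # placeholder subnets
    )

    # Register the RDS instance in the proxy's default target group.
    rds.register_db_proxy_targets(
        DBProxyName="app-db-proxy",
        DBInstanceIdentifiers=["app-db-instance"],
    )

    print(proxy["DBProxy"]["Endpoint"])   # applications connect to this endpoint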
Question # 22
A company wants to run its payment application on AWS. The application receives payment notifications from mobile devices. Payment notifications require a basic validation before they are sent for further processing. The backend processing application is long running and requires compute and memory to be adjusted. The company does not want to manage the infrastructure. Which solution will meet these requirements with the LEAST operational overhead?
A. Create an Amazon Simple Queue Service (Amazon SQS) queue. Integrate the queue with an Amazon EventBridge rule to receive payment notifications from mobile devices. Configure the rule to validate payment notifications and send the notifications to the backend application. Deploy the backend application on Amazon Elastic Kubernetes Service (Amazon EKS) Anywhere. Create a standalone cluster.
B. Create an Amazon API Gateway API. Integrate the API with an AWS Step Functions state machine to receive payment notifications from mobile devices. Invoke the state machine to validate payment notifications and send the notifications to the backend application. Deploy the backend application on Amazon Elastic Kubernetes Service (Amazon EKS). Configure an EKS cluster with self-managed nodes.
C. Create an Amazon Simple Queue Service (Amazon SQS) queue. Integrate the queue with an Amazon EventBridge rule to receive payment notifications from mobile devices. Configure the rule to validate payment notifications and send the notifications to the backend application. Deploy the backend application on Amazon EC2 Spot Instances. Configure a Spot Fleet with a default allocation strategy.
D. Create an Amazon API Gateway API. Integrate the API with AWS Lambda to receive payment notifications from mobile devices. Invoke a Lambda function to validate payment notifications and send the notifications to the backend application. Deploy the backend application on Amazon Elastic Container Service (Amazon ECS). Configure Amazon ECS with an AWS Fargate launch type.
Answer: D
Explanation:
This option is the best solution because it allows the company to run its payment
application on AWS with minimal operational overhead and infrastructure management. By
using Amazon API Gateway, the company can create a secure and scalable API to receive
payment notifications from mobile devices. By using AWS Lambda, the company can run a
serverless function to validate the payment notifications and send them to the backend
application. Lambda handles the provisioning, scaling, and security of the function,
reducing the operational complexity and cost. By using Amazon ECS with AWS Fargate,
the company can run the backend application on a fully managed container service that
scales the compute resources automatically and does not require any EC2 instances to
manage. Fargate allocates the right amount of CPU and memory for each container and
adjusts them as needed.
A. Create an Amazon Simple Queue Service (Amazon SQS) queue. Integrate the queue
with an Amazon EventBridge rule to receive payment notifications from mobile devices.
Configure the rule to validate payment notifications and send the notifications to the
backend application. Deploy the backend application on Amazon Elastic Kubernetes
Service (Amazon EKS) Anywhere. Create a standalone cluster. This option is not optimal
because it requires the company to manage the Kubernetes cluster that runs the backend
application. Amazon EKS Anywhere is a deployment option that allows the company to
create and operate Kubernetes clusters on-premises or in other environments outside
AWS. The company would need to provision, configure, scale, patch, and monitor the
cluster nodes, which can increase the operational overhead and complexity. Moreover, the
company would need to ensure the connectivity and security between the AWS services
and the EKS Anywhere cluster, which can also add challenges and risks.
B. Create an Amazon API Gateway API. Integrate the API with an AWS Step Functions
state machine to receive payment notifications from mobile devices. Invoke the state
machine to validate payment notifications and send the notifications to the backend
application. Deploy the backend application on Amazon Elastic Kubernetes Service
(Amazon EKS). Configure an EKS cluster with self-managed nodes. This option is not ideal
because it requires the company to manage the EC2 instances that host the Kubernetes
cluster that runs the backend application. Amazon EKS is a fully managed service that runs
Kubernetes on AWS, but it still requires the company to manage the worker nodes that run
the containers. The company would need to provision, configure, scale, patch, and monitor
the EC2 instances, which can increase the operational overhead and infrastructure costs.
Moreover, using AWS Step Functions to validate the payment notifications may be
unnecessary and complex, as the validation logic can be implemented in a simpler way
with Lambda or other services.
C. Create an Amazon Simple Queue Service (Amazon SQS) queue. Integrate the queue
with an Amazon EventBridge rule to receive payment notifications from mobile devices.
Configure the rule to validate payment notifications and send the notifications to the
backend application. Deploy the backend application on Amazon EC2 Spot Instances.
Configure a Spot Fleet with a default allocation strategy. This option is not cost-effective
because it requires the company to manage the EC2 instances that run the backend
application. The company would need to provision, configure, scale, patch, and monitor the
EC2 instances, which can increase the operational overhead and infrastructure costs.
Moreover, using Spot Instances can introduce the risk of interruptions, as Spot Instances
are reclaimed by AWS when the demand for On-Demand Instances increases. The
company would need to handle the interruptions gracefully and ensure the availability and
reliability of the backend application.
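To make option D concrete, here is a sketch of the validation Lambda function behind API Gateway (Python). The required fields and the SQS hand-off to the Fargate backend are illustrative assumptions, not part of the question:

    import json
    import os
    import boto3

    # Hypothetical queue that the Fargate backend polls; any hand-off mechanism would do.
    QUEUE_URL = os.environ.get("BACKEND_QUEUE_URL", "")
    sqs = boto3.client("sqs")

    REQUIRED_FIELDS = {"payment_id", "amount", "currency", "device_id"}

    def handler(event, context):
        """Basic validation of a payment notification received through API Gateway."""
        try:
            body = json.loads(event.get("body") or "{}")
        except json.JSONDecodeError:
            return {"statusCode": 400, "body": "malformed JSON"}

        missing = REQUIRED_FIELDS - body.keys()
        if missing:
            return {"statusCode": 422, "body": f"missing fields: {sorted(missing)}"}
        try:
            amount = float(body["amount"])
        except (TypeError, ValueError):
            return {"statusCode": 422, "body": "amount must be numeric"}
        if amount <= 0:
            return {"statusCode": 422, "body": "amount must be positive"}

        # Hand the validated notification to the long-running backend.
        sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=json.dumps(body))
        return {"statusCode": 202, "body": "accepted"}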
References:
1 Amazon API Gateway - Amazon Web Services
2 AWS Lambda - Amazon Web Services
3 Amazon Elastic Container Service - Amazon Web Services
4 AWS Fargate - Amazon Web Services
Question # 23
A company has multiple AWS accounts with applications deployed in the us-west-2 Region. Application logs are stored within Amazon S3 buckets in each account. The company wants to build a centralized log analysis solution that uses a single S3 bucket. Logs must not leave us-west-2, and the company wants to incur minimal operational overhead. Which solution meets these requirements and is MOST cost-effective?
A. Create an S3 Lifecycle policy that copies the objects from one of the application S3 buckets to the centralized S3 bucket.
B. Use S3 Same-Region Replication to replicate logs from the S3 buckets to another S3 bucket in us-west-2. Use this S3 bucket for log analysis.
C. Write a script that uses the PutObject API operation every day to copy the entire contents of the buckets to another S3 bucket in us-west-2. Use this S3 bucket for log analysis.
D. Write AWS Lambda functions in these accounts that are triggered every time logs are delivered to the S3 buckets (s3:ObjectCreated:* event). Copy the logs to another S3 bucket in us-west-2. Use this S3 bucket for log analysis.
Answer: B
Explanation: This solution meets the following requirements:
It is cost-effective, as it only charges for the storage and data transfer of the
replicated objects, and does not require any additional AWS services or custom
scripts. S3 Same-Region Replication (SRR) is a feature that automatically
replicates objects across S3 buckets within the same AWS Region. SRR can help
you aggregate logs from multiple sources to a single destination for analysis and
auditing. SRR also preserves the metadata, encryption, and access control of the
source objects.
It is operationally efficient, as it does not require any manual intervention or
scheduling. SRR replicates objects as soon as they are uploaded to the source
bucket, ensuring that the destination bucket always has the latest log data. SRR
also handles any updates or deletions of the source objects, keeping the
destination bucket in sync. SRR can be enabled with a few clicks in the S3 console
or with a simple API call.
It is secure, as it does not allow the logs to leave the us-west-2 Region. SRR only
replicates objects within the same AWS Region, ensuring that the data sovereignty
and compliance requirements are met. SRR also supports encryption of the source
and destination objects, using either server-side encryption with AWS KMS or S3-
managed keys, or client-side encryption.
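A minimal sketch of enabling Same-Region Replication from one application bucket to the central bucket is shown below (boto3; bucket names and the replication role are placeholders, and versioning must already be enabled on both buckets):

    import boto3

    s3 = boto3.client("s3")

    s3.put_bucket_replication(
        Bucket="app-account-logs-usw2",                  # source bucket (placeholder)
        ReplicationConfiguration={
            "Role": "arn:aws:iam::111111111111:role/S3ReplicationRole",   # placeholder
            "Rules": [{
                "ID": "replicate-logs-to-central",
                "Status": "Enabled",
                "Priority": 1,
                "Filter": {"Prefix": ""},                # replicate every object
                "DeleteMarkerReplication": {"Status": "Disabled"},
                "Destination": {"Bucket": "arn:aws:s3:::central-log-analysis-usw2"},
            }],
        },
    )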
References:
Same-Region Replication - Amazon Simple Storage Service
How do I replicate objects across S3 buckets in the same AWS Region?
Question # 24
A company runs a highly available web application on Amazon EC2 instances behind an Application Load Balancer. The company uses Amazon CloudWatch metrics. As the traffic to the web application increases, some EC2 instances become overloaded with many outstanding requests. The CloudWatch metrics show that the number of requests processed and the time to receive the responses from some EC2 instances are both higher compared to other EC2 instances. The company does not want new requests to be forwarded to the EC2 instances that are already overloaded. Which solution will meet these requirements?
A. Use the round robin routing algorithm based on the RequestCountPerTarget and ActiveConnectionCount CloudWatch metrics.
B. Use the least outstanding requests algorithm based on the RequestCountPerTarget and ActiveConnectionCount CloudWatch metrics.
C. Use the round robin routing algorithm based on the RequestCount and TargetResponseTime CloudWatch metrics.
D. Use the least outstanding requests algorithm based on the RequestCount and TargetResponseTime CloudWatch metrics.
Answer: D
Explanation: The least outstanding requests (LOR) algorithm is a load balancing algorithm
that distributes incoming requests to the target with the fewest outstanding requests. This
helps to avoid overloading any single target and improves the overall performance and
availability of the web application. The LOR algorithm can use the RequestCount and
TargetResponseTime CloudWatch metrics to determine the number of outstanding
requests and the response time of each target. These metrics measure the number of
requests processed by each target and the time elapsed after the request leaves the load
balancer until a response from the target is received by the load balancer, respectively. By
using these metrics, the LOR algorithm can route new requests to the targets that are less
busy and more responsive, and avoid sending requests to the targets that are already
overloaded or slow. This solution meets the requirements of the company.
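Switching an existing ALB target group to least outstanding requests is a single attribute change; a brief boto3 sketch (the target group ARN is a placeholder):

    import boto3

    elbv2 = boto3.client("elbv2")

    elbv2.modify_target_group_attributes(
        TargetGroupArn="arn:aws:elasticloadbalancing:us-east-1:111111111111:"
                       "targetgroup/web-app/0123456789abcdef",          # placeholder
        Attributes=[{
            "Key": "load_balancing.algorithm.type",
            "Value": "least_outstanding_requests",
        }],
    )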
References:
Application Load Balancer now supports Least Outstanding Requests algorithm for load balancing requests
Question # 25
An analytics company uses Amazon VPC to run its multi-tier services. The company wants to use RESTful APIs to offer a web analytics service to millions of users. Users must be verified by using an authentication service to access the APIs. Which solution will meet these requirements with the MOST operational efficiency?
A. Configure an Amazon Cognito user pool for user authentication. Implement Amazon API Gateway REST APIs with a Cognito authorizer.
B. Configure an Amazon Cognito identity pool for user authentication. Implement Amazon API Gateway HTTP APIs with a Cognito authorizer.
C. Configure an AWS Lambda function to handle user authentication. Implement Amazon API Gateway REST APIs with a Lambda authorizer.
D. Configure an IAM user to handle user authentication. Implement Amazon API Gateway HTTP APIs with an IAM authorizer.
Answer: A
Explanation: This solution will meet the requirements with the most operational efficiency
because:
Amazon Cognito user pools provide a secure and scalable user directory that can
store and manage user profiles, and handle user sign-up, sign-in, and access
control. User pools can also integrate with social identity providers and enterprise
identity providers via SAML or OIDC. User pools can issue JSON Web Tokens
(JWTs) that can be used to authenticate users and authorize API requests.
Amazon API Gateway REST APIs enable you to create and deploy APIs that
expose your backend services to your clients. REST APIs support multiple
authorization mechanisms, including Cognito user pools, IAM, Lambda, and
custom authorizers. A Cognito authorizer is an API Gateway authorizer type that uses a
Cognito user pool as the identity source. When a client makes a request to a
REST API method that is configured with a Cognito authorizer, API Gateway
verifies the JWTs that are issued by the user pool and grants access based on the
token’s claims and the authorizer’s configuration.
By using Cognito user pools and API Gateway REST APIs with a Cognito
authorizer, you can achieve a high level of security, scalability, and performance
for your web analytics service. You can also leverage the built-in features of
Cognito and API Gateway, such as user management, token validation, caching,
throttling, and monitoring, without having to implement them yourself. This reduces
the operational overhead and complexity of your solution.
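As a sketch of option A (boto3; the API ID, user pool ARN, and resource details are placeholders), a Cognito user pool authorizer is created on the REST API and attached to a method:

    import boto3

    apigateway = boto3.client("apigateway")

    authorizer = apigateway.create_authorizer(
        restApiId="a1b2c3d4e5",                                          # placeholder API ID
        name="analytics-user-pool-authorizer",
        type="COGNITO_USER_POOLS",
        providerARNs=["arn:aws:cognito-idp:us-east-1:111111111111:userpool/us-east-1_EXAMPLE"],
        identitySource="method.request.header.Authorization",
    )

    # Require a valid user pool token on the GET method of one resource.
    apigateway.put_method(
        restApiId="a1b2c3d4e5",
        resourceId="abc123",                                             # placeholder resource ID
        httpMethod="GET",
        authorizationType="COGNITO_USER_POOLS",
        authorizerId=authorizer["id"],
    )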
References:
Amazon Cognito User Pools
Amazon API Gateway REST APIs
Use API Gateway Lambda authorizers
Question # 26
A company has an AWS Direct Connect connection from its on-premises location to an AWS account. The AWS account has 30 different VPCs in the same AWS Region. The VPCs use private virtual interfaces (VIFs). Each VPC has a CIDR block that does not overlap with other networks under the company's control. The company wants to centrally manage the networking architecture while still allowing each VPC to communicate with all other VPCs and on-premises networks. Which solution will meet these requirements with the LEAST amount of operational overhead?
A. Create a transit gateway and associate the Direct Connect connection with a new transit VIF. Turn on the transit gateway's route propagation feature.
B. Create a Direct Connect gateway. Recreate the private VIFs to use the new gateway. Associate each VPC by creating new virtual private gateways.
C. Create a transit VPC. Connect the Direct Connect connection to the transit VPC. Create a peering connection between all other VPCs in the Region. Update the route tables.
D. Create AWS Site-to-Site VPN connections from on premises to each VPC. Ensure that both VPN tunnels are UP for each connection. Turn on the route propagation feature.
Answer: A
Explanation: This solution meets the following requirements:
It is operationally efficient, as it only requires one transit gateway and one transit
VIF to connect the Direct Connect connection to all the VPCs in the same AWS
Region. The transit gateway acts as a regional network hub that simplifies the
network management and reduces the number of VIFs and gateways needed.
It is scalable, as it can support up to 5000 attachments per transit gateway, which
can include VPCs, VPNs, Direct Connect gateways, and peering connections. The
transit gateway can also be connected to other transit gateways in different
Regions or accounts using peering connections, enabling cross-Region and cross-account connectivity.
It is flexible, as it allows each VPC to communicate with all other VPCs and on-premises
networks using dynamic routing protocols such as Border Gateway
Protocol (BGP). The transit gateway’s route propagation feature automatically
propagates the routes from the attached VPCs and VPNs to the transit gateway
route table, eliminating the need to manually update the route tables.
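A condensed boto3 sketch of option A follows; the VPC and subnet IDs are placeholders, and the Direct Connect side is left as a comment because associating the transit VIF involves additional Direct Connect API calls:

    import boto3

    ec2 = boto3.client("ec2")

    tgw = ec2.create_transit_gateway(
        Description="Regional hub for 30 VPCs and Direct Connect",
        Options={
            "DefaultRouteTableAssociation": "enable",
            "DefaultRouteTablePropagation": "enable",   # routes propagate automatically
        },
    )
    tgw_id = tgw["TransitGateway"]["TransitGatewayId"]

    # Attach each VPC to the transit gateway (repeat for all 30 VPCs).
    for vpc_id, subnet_ids in {"vpc-0aaa": ["subnet-0aaa"], "vpc-0bbb": ["subnet-0bbb"]}.items():
        ec2.create_transit_gateway_vpc_attachment(
            TransitGatewayId=tgw_id,
            VpcId=vpc_id,
            SubnetIds=subnet_ids,
        )

    # The Direct Connect side is associated separately: create a transit VIF on a
    # Direct Connect gateway and associate that gateway with this transit gateway.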
References:
Transit Gateways - Amazon Virtual Private Cloud
Working with transit gateways - AWS Direct Connect
Question # 27
A solutions architect is designing a shared storage solution for a web application that is deployed across multiple Availability Zones. The web application runs on Amazon EC2 instances that are in an Auto Scaling group. The company plans to make frequent changes to the content. The solution must have strong consistency in returning the new content as soon as the changes occur. Which solutions meet these requirements? (Select TWO)
A. Use AWS Storage Gateway Volume Gateway Internet Small Computer Systems Interface (iSCSI) block storage that is mounted to the individual EC2 instances.
B. Create an Amazon Elastic File System (Amazon EFS) file system. Mount the EFS file system on the individual EC2 instances.
C. Create a shared Amazon Elastic Block Store (Amazon EBS) volume. Mount the EBS volume on the individual EC2 instances.
D. Use AWS DataSync to perform continuous synchronization of data between EC2 hosts in the Auto Scaling group.
E. Create an Amazon S3 bucket to store the web content. Set the metadata for the Cache-Control header to no-cache. Use Amazon CloudFront to deliver the content.
Answer: B,E
Explanation: These options are the most suitable ways to design a shared storage
solution for a web application that is deployed across multiple Availability Zones and
requires strong consistency. Option B uses Amazon Elastic File System (Amazon EFS) as
a shared file system that can be mounted on multiple EC2 instances in different Availability
Zones. Amazon EFS provides high availability, durability, scalability, and performance for
file-based workloads. It also supports strong consistency, which means that any changes
made to the file system are immediately visible to all clients. Option E uses Amazon S3 as
a shared object store that can store the web content and serve it through Amazon
CloudFront, a content delivery network (CDN). Amazon S3 provides high availability,
durability, scalability, and performance for object-based workloads. It also supports strong
consistency for read-after-write and list operations, which means that any changes made to
the objects are immediately visible to all clients. By setting the metadata for the Cache-
Control header to no-cache, the web content can be prevented from being cached by the
browsers or the CDN edge locations, ensuring that the latest content is always delivered to
the users.
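For option E, the Cache-Control metadata is set when the object is written; a small boto3 sketch (the bucket and key are placeholders):

    import boto3

    s3 = boto3.client("s3")

    # no-cache makes CloudFront and browsers revalidate, so updated content is
    # returned as soon as it changes in the bucket.
    with open("index.html", "rb") as body:
        s3.put_object(
            Bucket="example-web-content",
            Key="index.html",
            Body=body,
            ContentType="text/html",
            CacheControl="no-cache",
        )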
Option A is not suitable because using AWS Storage Gateway Volume Gateway as a
shared storage solution for a web application is not efficient or scalable. AWS Storage
Gateway Volume Gateway is a hybrid cloud storage service that provides block storage
volumes that can be mounted on-premises or on EC2 instances as iSCSI devices. It is
useful for migrating or backing up data to AWS, but it is not designed for serving web
content or providing strong consistency. Moreover, using Volume Gateway would incur
additional costs and complexity, and it would not leverage the native AWS storage
services.
Option C is not suitable because creating a shared Amazon EBS volume and mounting it
on multiple EC2 instances is not possible or reliable. Amazon EBS is a block storage
service that provides persistent and high-performance volumes for EC2 instances.
However, EBS volumes can only be attached to one EC2 instance at a time, and they are
constrained to a single Availability Zone. Therefore, creating a shared EBS volume for a
web application that is deployed across multiple Availability Zones is not feasible.
Moreover, EBS volumes do not support strong consistency, which means that any changes
made to the volume may not be immediately visible to other clients.
Option D is not suitable because using AWS DataSync to perform continuous
synchronization of data between EC2 hosts in the Auto Scaling group is not efficient or
scalable. AWS DataSync is a data transfer service that helps you move large amounts of
data to and from AWS storage services. It is useful for migrating or archiving data, but it is
not designed for serving web content or providing strong consistency. Moreover, using
DataSync would incur additional costs and complexity, and it would not leverage the native AWS storage services.
References:
What Is Amazon Elastic File System?
What Is Amazon Simple Storage Service?
What Is Amazon CloudFront?
What Is AWS Storage Gateway?
What Is Amazon Elastic Block Store?
What Is AWS DataSync?
Question # 28
A company needs to extract the names of ingredients from recipe records that are stored as text files in an Amazon S3 bucket. A web application will use the ingredient names to query an Amazon DynamoDB table and determine a nutrition score. The application can handle non-food records and errors. The company does not have any employees who have machine learning knowledge to develop this solution. Which solution will meet these requirements MOST cost-effectively?
A. Use S3 Event Notifications to invoke an AWS Lambda function when PutObject requests occur. Program the Lambda function to analyze the object and extract the ingredient names by using Amazon Comprehend. Store the Amazon Comprehend output in the DynamoDB table.
B. Use an Amazon EventBridge rule to invoke an AWS Lambda function when PutObject requests occur. Program the Lambda function to analyze the object by using Amazon Forecast to extract the ingredient names. Store the Forecast output in the DynamoDB table.
C. Use S3 Event Notifications to invoke an AWS Lambda function when PutObject requests occur. Use Amazon Polly to create audio recordings of the recipe records. Save the audio files in the S3 bucket. Use Amazon Simple Notification Service (Amazon SNS) to send a URL as a message to employees. Instruct the employees to listen to the audio files and calculate the nutrition score. Store the ingredient names in the DynamoDB table.
D. Use an Amazon EventBridge rule to invoke an AWS Lambda function when a PutObject request occurs. Program the Lambda function to analyze the object and extract the ingredient names by using Amazon SageMaker. Store the inference output from the SageMaker endpoint in the DynamoDB table.
Answer: A
Explanation: This solution meets the following requirements:
It is cost-effective, as it only uses serverless components that are charged based
on usage and do not require any upfront provisioning or maintenance.
It is scalable, as it can handle any number of recipe records that are uploaded to
the S3 bucket without any performance degradation or manual intervention.
It is easy to implement, as it does not require any machine learning knowledge or
complex data processing logic. Amazon Comprehend is a natural language
processing service that can automatically extract entities such as ingredients from
text files. The Lambda function can simply invoke the Comprehend API and store
the results in the DynamoDB table.
It is reliable, as it can handle non-food records and errors gracefully. Amazon
Comprehend can detect the language and domain of the text files and return an
appropriate response. The Lambda function can also implement error handling
and logging mechanisms to ensure the data quality and integrity.
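A sketch of the Lambda function in option A is shown below (Python). Comprehend's built-in entity types do not include an "ingredient" type, so a production solution might train a custom entity recognizer; this illustrative version simply stores whichever entities Comprehend detects, and the table and attribute names are placeholders:

    import urllib.parse
    import boto3

    s3 = boto3.client("s3")
    comprehend = boto3.client("comprehend")
    table = boto3.resource("dynamodb").Table("RecipeIngredients")   # placeholder table

    def handler(event, context):
        for record in event["Records"]:                 # S3 event notification records
            bucket = record["s3"]["bucket"]["name"]
            key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])

            text = s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode("utf-8")

            # Truncate very long documents for this simple example.
            entities = comprehend.detect_entities(Text=text[:4500], LanguageCode="en")

            names = sorted({e["Text"].lower() for e in entities["Entities"]})
            table.put_item(Item={
                "recipe_key": key,
                "candidate_ingredients": names,
                "entity_count": len(names),
            })
        return {"processed": len(event["Records"])}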
References:
Using AWS Lambda with Amazon S3 - AWS Lambda
What Is Amazon Comprehend? - Amazon Comprehend
Working with Tables - Amazon DynamoDB
Question # 29
A company has a new mobile app. Anywhere in the world, users can see local news on topics they choose. Users also can post photos and videos from inside the app. Users access content often in the first minutes after the content is posted. New content quickly replaces older content, and then the older content disappears. The local nature of the news means that users consume 90% of the content within the AWS Region where it is uploaded. Which solution will optimize the user experience by providing the LOWEST latency for content uploads?
A. Upload and store content in Amazon S3. Use Amazon CloudFront for the uploads.
B. Upload and store content in Amazon S3. Use S3 Transfer Acceleration for the uploads.
C. Upload content to Amazon EC2 instances in the Region that is closest to the user. Copy the data to Amazon S3.
D. Upload and store content in Amazon S3 in the Region that is closest to the user. Use multiple distributions of Amazon CloudFront.
Answer: B
Explanation: The most suitable solution for optimizing the user experience by providing
the lowest latency for content uploads is to upload and store content in Amazon S3 and
use S3 Transfer Acceleration for the uploads. This solution will enable the company to
leverage the AWS global network and edge locations to speed up the data transfer
between the users and the S3 buckets.
Amazon S3 is a storage service that provides scalable, durable, and highly available object
storage for any type of data. Amazon S3 allows users to store and retrieve data from
anywhere on the web, and offers various features such as encryption, versioning, lifecycle
management, and replication1.
S3 Transfer Acceleration is a feature of Amazon S3 that helps users transfer data to and
from S3 buckets more quickly. S3 Transfer Acceleration works by using optimized network
paths and Amazon’s backbone network to accelerate data transfer speeds. Users can
enable S3 Transfer Acceleration for their buckets and use a distinct URL to access them,
such as <bucket>.s3-accelerate.amazonaws.com2.
The other options are not correct because they either do not provide the lowest latency or
are not suitable for the use case. Uploading and storing content in Amazon S3 and using Amazon CloudFront for the uploads is not correct because this solution is not designed for
optimizing uploads, but rather for optimizing downloads. Amazon CloudFront is a content
delivery network (CDN) that helps users distribute their content globally with low latency
and high transfer speeds. CloudFront works by caching the content at edge locations
around the world, so that users can access it quickly and easily from anywhere3. Uploading
content to Amazon EC2 instances in the Region that is closest to the user and copying the
data to Amazon S3 is not correct because this solution adds unnecessary complexity and
cost to the process. Amazon EC2 is a computing service that provides scalable and secure
virtual servers in the cloud. Users can launch, stop, or terminate EC2 instances as needed,
and choose from various instance types, operating systems, and configurations4.
Uploading and storing content in Amazon S3 in the Region that is closest to the user and
using multiple distributions of Amazon CloudFront is not correct because this solution is not
cost-effective or efficient for the use case. As mentioned above, Amazon CloudFront is a
CDN that helps users distribute their content globally with low latency and high transfer
speeds. However, creating multiple CloudFront distributions for each Region would incur
additional charges and management overhead, and would not be necessary since 90% of
the content is consumed within the same Region where it is uploaded3.
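A short boto3 sketch of option B (bucket and file names are placeholders): enable acceleration once on the bucket, then upload through the accelerate endpoint:

    import boto3
    from botocore.config import Config

    s3 = boto3.client("s3")
    s3.put_bucket_accelerate_configuration(
        Bucket="example-media-uploads",                     # placeholder bucket
        AccelerateConfiguration={"Status": "Enabled"},
    )

    # Client configured to use the s3-accelerate endpoint for uploads.
    s3_accel = boto3.client("s3", config=Config(s3={"use_accelerate_endpoint": True}))
    s3_accel.upload_file("clip.mp4", "example-media-uploads", "uploads/clip.mp4")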
References:
What Is Amazon Simple Storage Service? - Amazon Simple Storage Service
Amazon S3 Transfer Acceleration - Amazon Simple Storage Service
What Is Amazon CloudFront? - Amazon CloudFront
What Is Amazon EC2? - Amazon Elastic Compute Cloud
Question # 30
An ecommerce application uses a PostgreSQL database that runs on an Amazon EC2 instance. During a monthly sales event, database usage increases and causes database connection issues for the application. The traffic is unpredictable for subsequent monthly sales events, which impacts the sales forecast. The company needs to maintain performance when there is an unpredictable increase in traffic. Which solution resolves this issue in the MOST cost-effective way?
A. Migrate the PostgreSQL database to Amazon Aurora Serverless v2.
B. Enable auto scaling for the PostgreSQL database on the EC2 instance to accommodate increased usage.
C. Migrate the PostgreSQL database to Amazon RDS for PostgreSQL with a larger instance type.
D. Migrate the PostgreSQL database to Amazon Redshift to accommodate increased usage.
Answer: A
Explanation: Amazon Aurora Serverless v2 is a cost-effective solution that can
automatically scale the database capacity up and down based on the application’s needs. It
can handle unpredictable traffic spikes without requiring any provisioning or management
of database instances. It is compatible with PostgreSQL and offers high performance and
availability without paying for peak capacity that sits idle between sales events.
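A trimmed boto3 sketch of option A is below; the identifiers, credential handling, and capacity range are placeholders that would need tuning for the real workload:

    import boto3

    rds = boto3.client("rds")

    rds.create_db_cluster(
        DBClusterIdentifier="ecommerce-aurora",
        Engine="aurora-postgresql",
        MasterUsername="appadmin",
        ManageMasterUserPassword=True,        # store the password in Secrets Manager
        ServerlessV2ScalingConfiguration={"MinCapacity": 0.5, "MaxCapacity": 32},
    )

    # Serverless v2 capacity is used by instances of the special db.serverless class.
    rds.create_db_instance(
        DBInstanceIdentifier="ecommerce-aurora-writer",
        DBClusterIdentifier="ecommerce-aurora",
        DBInstanceClass="db.serverless",
        Engine="aurora-postgresql",
    )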
Question # 31
A company's marketing data is uploaded from multiple sources to an Amazon S3 bucket. A series of data preparation jobs aggregate the data for reporting. The data preparation jobs need to run at regular intervals in parallel. A few jobs need to run in a specific order later. The company wants to remove the operational overhead of job error handling, retry logic, and state management. Which solution will meet these requirements?
A. Use an AWS Lambda function to process the data as soon as the data is uploaded to the S3 bucket. Invoke other Lambda functions at regularly scheduled intervals.
B. Use Amazon Athena to process the data. Use Amazon EventBridge Scheduler to invoke Athena on a regular interval.
C. Use AWS Glue DataBrew to process the data. Use an AWS Step Functions state machine to run the DataBrew data preparation jobs.
D. Use AWS Data Pipeline to process the data. Schedule Data Pipeline to process the data once at midnight.
Answer: C
Explanation: AWS Glue DataBrew is a visual data preparation tool that allows you to
easily clean, normalize, and transform your data without writing any code. You can create
and run data preparation jobs on your data stored in Amazon S3, Amazon Redshift, or
other data sources. AWS Step Functions is a service that lets you coordinate multiple AWS
services into serverless workflows. You can use Step Functions to orchestrate your
DataBrew jobs, define the order and parallelism of execution, handle errors and retries, and
monitor the state of your workflow. By using AWS Glue DataBrew and AWS Step
Functions, you can meet the requirements of the company with minimal operational
overhead, as you do not need to write any code, manage any servers, or deal with complex
dependencies.
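A minimal sketch of option C follows (boto3); the DataBrew job names, role ARN, and retry settings are placeholders, and it assumes Step Functions' optimized DataBrew integration for running a job and waiting for it to finish:

    import json
    import boto3

    sfn = boto3.client("stepfunctions")

    def databrew_task(job_name):
        """State that runs one DataBrew job and waits for completion, with retries."""
        return {
            "Type": "Task",
            "Resource": "arn:aws:states:::databrew:startJobRun.sync",
            "Parameters": {"Name": job_name},
            "Retry": [{"ErrorEquals": ["States.ALL"], "MaxAttempts": 2, "IntervalSeconds": 60}],
            "End": True,
        }

    definition = {
        "Comment": "Run DataBrew preparation jobs in parallel, then an ordered aggregation job",
        "StartAt": "ParallelPrep",
        "States": {
            "ParallelPrep": {
                "Type": "Parallel",
                "Branches": [
                    {"StartAt": "PrepJobA", "States": {"PrepJobA": databrew_task("prep-job-a")}},
                    {"StartAt": "PrepJobB", "States": {"PrepJobB": databrew_task("prep-job-b")}},
                ],
                "Next": "AggregateJob",
            },
            "AggregateJob": databrew_task("aggregate-job"),
        },
    }

    sfn.create_state_machine(
        name="marketing-data-prep",
        roleArn="arn:aws:iam::111111111111:role/StepFunctionsDataBrewRole",  # placeholder
        definition=json.dumps(definition),
    )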
References:
AWS Glue DataBrew
AWS Step Functions
Orchestrate AWS Glue DataBrew jobs using AWS Step Functions
Question # 32
A research company uses on-premises devices to generate data for analysis. The company wants to use the AWS Cloud to analyze the data. The devices generate .csv files and support writing the data to an SMB file share. Company analysts must be able to use SQL commands to query the data. The analysts will run queries periodically throughout the day. Which combination of steps will meet these requirements MOST cost-effectively? (Select THREE.)
A. Deploy an AWS Storage Gateway on premises in Amazon S3 File Gateway mode.
B. Deploy an AWS Storage Gateway on premises in Amazon FSx File Gateway mode.
C. Set up an AWS Glue crawler to create a table based on the data that is in Amazon S3.
D. Set up an Amazon EMR cluster with EMR File System (EMRFS) to query the data that is in Amazon S3. Provide access to analysts.
E. Set up an Amazon Redshift cluster to query the data that is in Amazon S3. Provide access to analysts.
F. Set up Amazon Athena to query the data that is in Amazon S3. Provide access to analysts.
Answer: A,C,F
Explanation: To meet the requirements of the use case in a cost-effective way, the
following steps are recommended:
Deploy an AWS Storage Gateway on premises in Amazon S3 File Gateway mode.
This will allow the company to write the .csv files generated by the devices to an
SMB file share, which will be stored as objects in Amazon S3 buckets. AWS
Storage Gateway is a hybrid cloud storage service that integrates on-premises
environments with AWS storage. Amazon S3 File Gateway mode provides a
seamless way to connect to Amazon S3 and access a virtually unlimited amount of
cloud storage1.
Set up an AWS Glue crawler to create a table based on the data that is in Amazon
S3. This will enable the company to use standard SQL to query the data stored in
Amazon S3 buckets. AWS Glue is a serverless data integration service that
simplifies data preparation and analysis. AWS Glue crawlers can automatically
discover and classify data from various sources, and create metadata tables in the
AWS Glue Data Catalog2. The Data Catalog is a central repository that stores
information about data sources and how to access them3.
Set up Amazon Athena to query the data that is in Amazon S3. This will provide
the company analysts with a serverless and interactive query service that can
analyze data directly in Amazon S3 using standard SQL. Amazon Athena is
integrated with the AWS Glue Data Catalog, so users can easily point Athena at
the data source tables defined by the crawlers. Amazon Athena charges only for
the queries that are run, and offers a pay-per-query pricing model, which makes it
a cost-effective option for periodic queries4.
The other options are not correct because they are either not cost-effective or not suitable
for the use case. Deploying an AWS Storage Gateway on premises in Amazon FSx File
Gateway mode is not correct because this mode provides low-latency access to fully
managed Windows file shares in AWS, which is not required for the use case. Setting up
an Amazon EMR cluster with EMR File System (EMRFS) to query the data that is in
Amazon S3 is not correct because this option involves setting up and managing a cluster of
EC2 instances, which adds complexity and cost to the solution. Setting up an Amazon
Redshift cluster to query the data that is in Amazon S3 is not correct because this option
also involves provisioning and managing a cluster of nodes, which adds overhead and cost
to the solution.
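A compact boto3 sketch of steps C and F is shown below; the crawler role, S3 paths, database, and table names are placeholders (the crawled table name is derived from the S3 prefix):

    import boto3

    glue = boto3.client("glue")
    athena = boto3.client("athena")

    # Crawler that catalogs the .csv files written through the S3 File Gateway.
    glue.create_crawler(
        Name="device-csv-crawler",
        Role="arn:aws:iam::111111111111:role/GlueCrawlerRole",      # placeholder
        DatabaseName="device_data",
        Targets={"S3Targets": [{"Path": "s3://example-device-share/device_readings/"}]},
    )
    glue.start_crawler(Name="device-csv-crawler")

    # Analysts can then query the crawled table with standard SQL in Athena.
    athena.start_query_execution(
        QueryString=("SELECT device_id, AVG(reading) AS avg_reading "
                     "FROM device_data.device_readings GROUP BY device_id"),
        QueryExecutionContext={"Database": "device_data"},
        ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
    )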
References:
What is AWS Storage Gateway?
What is AWS Glue?
AWS Glue Data Catalog
What is Amazon Athena?
Question # 33
A company website hosted on Amazon EC2 instances processes classified data. The application writes data to Amazon Elastic Block Store (Amazon EBS) volumes. The company needs to ensure that all data that is written to the EBS volumes is encrypted at rest. Which solution will meet this requirement?
A. Create an IAM role that specifies EBS encryption. Attach the role to the EC2 instances.
B. Create the EBS volumes as encrypted volumes. Attach the EBS volumes to the EC2 instances.
C. Create an EC2 instance tag that has a key of Encrypt and a value of True. Tag all instances that require encryption at the EBS level.
D. Create an AWS Key Management Service (AWS KMS) key policy that enforces EBS encryption in the account. Ensure that the key policy is active.
Answer: B
Explanation: The simplest and most effective way to ensure that all data that is written to
the EBS volumes is encrypted at rest is to create the EBS volumes as encrypted volumes.
You can do this by selecting the encryption option when you create a new EBS volume, or
by copying an existing unencrypted volume to a new encrypted volume. You can also
specify the AWS KMS key that you want to use for encryption, or use the default AWS
managed key. When you attach the encrypted EBS volumes to the EC2 instances, the data
will be automatically encrypted and decrypted by the EC2 host. This solution does not
require any additional IAM roles, tags, or policies.
References:
Amazon EBS encryption
Creating an encrypted EBS volume
Encrypting an unencrypted EBS volume
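Continuing the explanation above, a minimal boto3 sketch of creating and attaching an encrypted volume (the Availability Zone, size, and instance ID are placeholders):

    import boto3

    ec2 = boto3.client("ec2")

    # Create the volume as an encrypted volume; KmsKeyId is optional and defaults
    # to the AWS managed key aws/ebs. Account-wide, enable_ebs_encryption_by_default()
    # can make every new volume encrypted automatically.
    volume = ec2.create_volume(
        AvailabilityZone="us-east-1a",
        Size=100,
        VolumeType="gp3",
        Encrypted=True,
    )

    ec2.get_waiter("volume_available").wait(VolumeIds=[volume["VolumeId"]])
    ec2.attach_volume(
        VolumeId=volume["VolumeId"],
        InstanceId="i-0123456789abcdef0",   # placeholder instance ID
        Device="/dev/sdf",
    )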
Question # 34
A company has Amazon EC2 instances that run nightly batch jobs to process data. The EC2 instances run in an Auto Scaling group that uses On-Demand billing. If a job fails on one instance, another instance will reprocess the job. The batch jobs run between 12:00 AM and 06:00 AM local time every day. Which solution will provide EC2 instances to meet these requirements MOST cost-effectively?
A. Purchase a 1-year Savings Plan for Amazon EC2 that covers the instance family of the Auto Scaling group that the batch job uses.
B. Purchase a 1-year Reserved Instance for the specific instance type and operating system of the instances in the Auto Scaling group that the batch job uses.
C. Create a new launch template for the Auto Scaling group. Set the instances to Spot Instances. Set a policy to scale out based on CPU usage.
D. Create a new launch template for the Auto Scaling group. Increase the instance size. Set a policy to scale out based on CPU usage.
Answer: C
Explanation: This option is the most cost-effective solution because it leverages the Spot
Instances, which are unused EC2 instances that are available at up to 90% discount
compared to On-Demand prices. Spot Instances can be interrupted by AWS when the
demand for On-Demand instances increases, but since the batch jobs are fault-tolerant and
can be reprocessed by another instance, this is not a major issue. By using a launch
template, the company can specify the configuration of the Spot Instances, such as the
instance type, the operating system, and the user data. By using an Auto Scaling group,
the company can automatically scale the number of Spot Instances based on the CPU
usage, which reflects the load of the batch jobs. This way, the company can optimize the
performance and the cost of the EC2 instances for the nightly batch jobs.
A. Purchase a 1-year Savings Plan for Amazon EC2 that covers the instance family of the
Auto Scaling group that the batch job uses. This option is not optimal because it requires a
commitment to a consistent amount of compute usage per hour for a one-year term,
regardless of the instance type, size, region, or operating system. This can limit the flexibility and scalability of the Auto Scaling group and result in overpaying for unused
compute capacity. Moreover, Savings Plans do not provide a capacity reservation, which
means the company still needs to reserve capacity with On-Demand Capacity
Reservations and pay lower prices with Savings Plans.
B. Purchase a 1-year Reserved Instance for the specific instance type and operating
system of the instances in the Auto Scaling group that the batch job uses. This option is not
ideal because it requires a commitment to a specific instance configuration for a one-year
term, which can reduce the flexibility and scalability of the Auto Scaling group and result in
overpaying for unused compute capacity. Moreover, Reserved Instances do not provide a
capacity reservation, which means the company still needs to reserve capacity with On-
Demand Capacity Reservations and pay lower prices with Reserved Instances.
D. Create a new launch template for the Auto Scaling group. Increase the instance size. Set
a policy to scale out based on CPU usage. This option is not cost-effective because it does
not take advantage of the lower prices of Spot Instances. Increasing the instance size can
improve the performance of the batch jobs, but it can also increase the cost of the On-
Demand instances. Moreover, scaling out based on CPU usage can result in launching
more instances than needed, which can also increase the cost of the system.
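A condensed boto3 sketch of option C follows; the AMI, instance type, subnets, capacity limits, and target value are placeholders:

    import boto3

    ec2 = boto3.client("ec2")
    autoscaling = boto3.client("autoscaling")

    # Launch template that requests Spot capacity for the nightly batch workers.
    ec2.create_launch_template(
        LaunchTemplateName="nightly-batch-spot",
        LaunchTemplateData={
            "ImageId": "ami-0123456789abcdef0",          # placeholder AMI
            "InstanceType": "c6i.large",
            "InstanceMarketOptions": {"MarketType": "spot"},
        },
    )

    autoscaling.create_auto_scaling_group(
        AutoScalingGroupName="nightly-batch",
        LaunchTemplate={"LaunchTemplateName": "nightly-batch-spot", "Version": "$Latest"},
        MinSize=0,
        MaxSize=20,
        DesiredCapacity=0,
        VPCZoneIdentifier="subnet-aaa,subnet-bbb",       # placeholder subnets
    )

    # Scale out on CPU load while the 12:00 AM - 6:00 AM batch window is active.
    autoscaling.put_scaling_policy(
        AutoScalingGroupName="nightly-batch",
        PolicyName="cpu-target-tracking",
        PolicyType="TargetTrackingScaling",
        TargetTrackingConfiguration={
            "PredefinedMetricSpecification": {"PredefinedMetricType": "ASGAverageCPUUtilization"},
            "TargetValue": 70.0,
        },
    )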
References:
1 Spot Instances - Amazon Elastic Compute Cloud
2 Launch templates - Amazon Elastic Compute Cloud
3 Auto Scaling groups - Amazon EC2 Auto Scaling
4 Savings Plans - Amazon EC2 Reserved Instances and Other AWS Reservation Models
Question # 35
A company hosts a three-tier web application in the AWS Cloud. A Multi-AZ Amazon RDS for MySQL server forms the database layer. Amazon ElastiCache forms the cache layer. The company wants a caching strategy that adds or updates data in the cache when a customer adds an item to the database. The data in the cache must always match the data in the database. Which solution will meet these requirements?
A. Implement the lazy loading caching strategy.
B. Implement the write-through caching strategy.
C. Implement the adding TTL caching strategy.
D. Implement the AWS AppConfig caching strategy.
Answer: B
Explanation: A write-through caching strategy adds or updates data in the cache
whenever data is written to the database. This ensures that the data in the cache is always
consistent with the data in the database. A write-through caching strategy also reduces the
cache miss penalty, as data is always available in the cache when it is requested.
However, a write-through caching strategy can increase the write latency, as data has to be
written to both the cache and the database. A write-through caching strategy is suitable for
applications that require high data consistency and low read latency.
A lazy loading caching strategy only loads data into the cache when it is requested, and
updates the cache when there is a cache miss. This can result in stale data in the cache,
as data is not updated in the cache when it is changed in the database. A lazy loading
caching strategy is suitable for applications that can tolerate some data inconsistency and
have a low cache miss rate.
An adding TTL caching strategy assigns a time-to-live (TTL) value to each data item in the cache, and removes the data from the cache when the TTL expires. This can help prevent
stale data in the cache, as data is periodically refreshed from the database. However, an
adding TTL caching strategy can also increase the cache miss rate, as data can be evicted
from the cache before it is requested. An adding TTL caching strategy is suitable for
applications that have a high cache hit rate and can tolerate some data inconsistency.
An AWS AppConfig caching strategy is not a valid option, as AWS AppConfig is a service
that enables customers to quickly deploy validated configurations to applications of any
size and scale. AWS AppConfig does not provide a caching layer for web applications.
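A sketch of the write-through pattern in application code is shown below (Python, assuming the redis-py client against an ElastiCache for Redis endpoint and a PyMySQL-style MySQL connection; the endpoint, table, and column names are placeholders):

    import json
    import redis

    cache = redis.Redis(host="example-cache.abc123.use1.cache.amazonaws.com", port=6379)

    def add_item(db_connection, item_id, item):
        """Write-through: persist to the database first, then update the cache."""
        with db_connection.cursor() as cur:
            cur.execute(
                "INSERT INTO items (id, payload) VALUES (%s, %s) "
                "ON DUPLICATE KEY UPDATE payload = VALUES(payload)",
                (item_id, json.dumps(item)),
            )
        db_connection.commit()
        # The cache is updated on every write, so reads never see stale data.
        cache.set(f"item:{item_id}", json.dumps(item))

    def get_item(db_connection, item_id):
        cached = cache.get(f"item:{item_id}")
        if cached is not None:
            return json.loads(cached)
        with db_connection.cursor() as cur:
            cur.execute("SELECT payload FROM items WHERE id = %s", (item_id,))
            row = cur.fetchone()
        return json.loads(row[0]) if row else None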
References:
Caching strategies - Amazon ElastiCache
Caching for high-volume workloads with Amazon ElastiCache
Question # 36
A company wants to analyze and troubleshoot Access Denied errors and Unauthorized errors that are related to IAM permissions. The company has AWS CloudTrail turned on. Which solution will meet these requirements with the LEAST effort?
A. Use AWS Glue and write custom scripts to query CloudTrail logs for the errors.
B. Use AWS Batch and write custom scripts to query CloudTrail logs for the errors.
C. Search CloudTrail logs with Amazon Athena queries to identify the errors.
D. Search CloudTrail logs with Amazon QuickSight. Create a dashboard to identify the errors.
Answer: C
Explanation: This solution meets the following requirements:
It is the least effort, as it does not require any additional AWS services, custom
scripts, or data processing steps. Amazon Athena is a serverless interactive query
service that allows you to analyze data in Amazon S3 using standard SQL. You
can use Athena to query CloudTrail logs directly from the S3 bucket where they
are stored, without any data loading or transformation. You can also use the AWS
Management Console, the AWS CLI, or the Athena API to run and manage your
queries.
It is effective, as it allows you to filter, aggregate, and join CloudTrail log data using
SQL syntax. You can use various SQL functions and operators to specify the
criteria for identifying Access Denied and Unauthorized errors, such as the error
code, the user identity, the event source, the event name, the event time, and the
resource ARN. You can also use subqueries, views, and common table
expressions to simplify and optimize your queries.
It is flexible, as it allows you to customize and save your queries for future use.
You can also export the query results to other formats, such as CSV or JSON, or
integrate them with other AWS services, such as Amazon QuickSight, for further
analysis and visualization.
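An example of the kind of query described above (Python with boto3; it assumes a CloudTrail table named cloudtrail_logs has already been defined in the Glue Data Catalog over the trail's S3 location, and the results bucket is a placeholder):

    import boto3

    athena = boto3.client("athena")

    query = """
    SELECT eventtime, useridentity.arn, eventsource, eventname, errorcode, errormessage
    FROM cloudtrail_logs
    WHERE errorcode IN ('AccessDenied', 'UnauthorizedOperation')
    ORDER BY eventtime DESC
    LIMIT 100
    """

    athena.start_query_execution(
        QueryString=query,
        QueryExecutionContext={"Database": "default"},
        ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},  # placeholder
    )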
References:
Querying AWS CloudTrail Logs - Amazon Athena
Analyzing Data in S3 using Amazon Athena | AWS Big Data Blog
Troubleshoot IAM permission access denied or unauthorized errors | AWS re:Post
Question # 37
A global company runs its applications in multiple AWS accounts in AWS Organizations. The company's applications use multipart uploads to upload data to multiple Amazon S3 buckets across AWS Regions. The company wants to report on incomplete multipart uploads for cost compliance purposes. Which solution will meet these requirements with the LEAST operational overhead?
A. Configure AWS Config with a rule to report the incomplete multipart upload object count.
B. Create a service control policy (SCP) to report the incomplete multipart upload object count.
C. Configure S3 Storage Lens to report the incomplete multipart upload object count.
D. Create an S3 Multi-Region Access Point to report the incomplete multipart upload object count.
Answer: C
Explanation: S3 Storage Lens is a cloud storage analytics feature that provides
organization-wide visibility into object storage usage and activity across multiple AWS
accounts in AWS Organizations. S3 Storage Lens can report the incomplete multipart
upload object count as one of the metrics that it collects and displays on an interactive
dashboard in the S3 console. S3 Storage Lens can also export metrics in CSV or Parquet
format to an S3 bucket for further analysis. This solution will meet the requirements with the
least operational overhead, as it does not require any code development or policy changes.
References:
1 explains how to use S3 Storage Lens to gain insights into S3 storage usage and
activity.
2 describes the concept and benefits of multipart uploads.
Question # 38
A company has stored 10 TB of log files in Apache Parquet format in an Amazon S3 bucket. The company occasionally needs to use SQL to analyze the log files. Which solution will meet these requirements MOST cost-effectively?
A. Create an Amazon Aurora MySQL database. Migrate the data from the S3 bucket into Aurora by using AWS Database Migration Service (AWS DMS). Issue SQL statements to the Aurora database.
B. Create an Amazon Redshift cluster. Use Redshift Spectrum to run SQL statements directly on the data in the S3 bucket.
C. Create an AWS Glue crawler to store and retrieve table metadata from the S3 bucket. Use Amazon Athena to run SQL statements directly on the data in the S3 bucket.
D. Create an Amazon EMR cluster. Use Apache Spark SQL to run SQL statements directly on the data in the S3 bucket.
Answer: C
Explanation: AWS Glue is a serverless data integration service that can crawl, catalog,
and prepare data for analysis. AWS Glue can automatically discover the schema and
partitioning of the data stored in Apache Parquet format in S3, and create a table in the
AWS Glue Data Catalog. Amazon Athena is a serverless interactive query service that can
run SQL queries directly on data in S3, without requiring any data loading or
transformation. Athena can use the table metadata from the AWS Glue Data Catalog to
query the data in S3. By using AWS Glue and Athena, you can analyze the log files in S3
most cost-effectively, as you only pay for the resources consumed by the crawler and the
queries, and you do not need to provision or manage any servers or clusters.
References:
AWS Glue
Amazon Athena
Analyzing Data in S3 using Amazon Athena
Question # 39
A pharmaceutical company is developing a new drug. The volume of data that the company generates has grown exponentially over the past few months. The company's researchers regularly require a subset of the entire dataset to be immediately available with minimal lag. However, the entire dataset does not need to be accessed on a daily basis. All the data currently resides in on-premises storage arrays, and the company wants to reduce ongoing capital expenses. Which storage solution should a solutions architect recommend to meet these requirements?
A. Run AWS DataSync as a scheduled cron job to migrate the data to an Amazon S3 bucket on an ongoing basis.
B. Deploy an AWS Storage Gateway file gateway with an Amazon S3 bucket as the target storage. Migrate the data to the Storage Gateway appliance.
C. Deploy an AWS Storage Gateway volume gateway with cached volumes with an Amazon S3 bucket as the target storage. Migrate the data to the Storage Gateway appliance.
D. Configure an AWS Site-to-Site VPN connection from the on-premises environment to AWS. Migrate data to an Amazon Elastic File System (Amazon EFS) file system.
Answer: C
Explanation: AWS Storage Gateway is a hybrid cloud storage service that allows you to
seamlessly integrate your on-premises applications with AWS cloud storage. Volume
Gateway is a type of Storage Gateway that presents cloud-backed iSCSI block storage
volumes to your on-premises applications. Volume Gateway operates in either cache mode
or stored mode. In cache mode, your primary data is stored in Amazon S3, while retaining
your frequently accessed data locally in the cache for low latency access. In stored mode,
your primary data is stored locally and your entire dataset is available for low latency
access on premises while also asynchronously getting backed up to Amazon S3.
For the pharmaceutical company’s use case, cache mode is the most suitable option, as it
meets the following requirements:
It reduces the need to scale the on-premises storage infrastructure, as most of the
data is stored in Amazon S3, which is scalable, durable, and cost-effective.
It provides low latency access to the subset of the data that the researchers
regularly require, as it is cached locally in the Storage Gateway appliance.
It does not require the entire dataset to be accessed on a daily basis, as it is
stored in Amazon S3 and can be retrieved on demand.
It offers flexible data protection and recovery options, as it allows taking point-in-time
copies of the volumes using AWS Backup, which are stored in AWS as
Amazon EBS snapshots.
Therefore, the solutions architect should recommend deploying an AWS Storage Gateway
volume gateway with cached volumes with an Amazon S3 bucket as the target storage and
migrating the data to the Storage Gateway appliance.
References:
Volume Gateway | Amazon Web Services
How Volume Gateway works (architecture) - AWS Storage Gateway
Question # 40
A company runs a three-tier web application in a VPC across multiple Availability Zones. Amazon EC2 instances run in an Auto Scaling group for the application tier. The company needs to make an automated scaling plan that will analyze each resource's daily and weekly historical workload trends. The configuration must scale resources appropriately according to both the forecast and live changes in utilization. Which scaling strategy should a solutions architect recommend to meet these requirements?
A. Implement dynamic scaling with step scaling based on average CPU utilization from the EC2 instances.
B. Enable predictive scaling to forecast and scale. Configure dynamic scaling with target tracking.
C. Create an automated scheduled scaling action based on the traffic patterns of the web application.
D. Set up a simple scaling policy. Increase the cooldown period based on the EC2 instance startup time.
Answer: B
Explanation:
This solution meets the requirements because it allows the company to use both predictive
scaling and dynamic scaling to optimize the capacity of its Auto Scaling group. Predictive
scaling uses machine learning to analyze historical data and forecast future traffic patterns.
It then adjusts the desired capacity of the group in advance of the predicted changes.
Dynamic scaling uses target tracking to maintain a specified metric (such as CPU
utilization) at a target value. It scales the group in or out as needed to keep the metric close to the target. By using both scaling methods, the company can benefit from faster, simpler,
and more accurate scaling that responds to both forecasted and live changes in utilization.
References:
Predictive scaling for Amazon EC2 Auto Scaling
Target tracking scaling policies for Amazon EC2 Auto Scaling
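As a rough illustration of option B (not from the source), the sketch below attaches both a predictive scaling policy and a target tracking policy to an existing Auto Scaling group with boto3. The group name and the 50% CPU target are assumed values.

```python
# Sketch (not from the source): enabling predictive scaling plus a target
# tracking policy on an existing Auto Scaling group. Names and target
# values are placeholders.
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")
ASG_NAME = "web-app-tier-asg"  # hypothetical Auto Scaling group name

# Predictive scaling: forecast daily/weekly patterns and scale ahead of them.
autoscaling.put_scaling_policy(
    AutoScalingGroupName=ASG_NAME,
    PolicyName="predictive-cpu",
    PolicyType="PredictiveScaling",
    PredictiveScalingConfiguration={
        "MetricSpecifications": [
            {
                "TargetValue": 50.0,
                "PredefinedMetricPairSpecification": {
                    "PredefinedMetricType": "ASGCPUUtilization"
                },
            }
        ],
        "Mode": "ForecastAndScale",
    },
)

# Target tracking: react to live changes in utilization between forecasts.
autoscaling.put_scaling_policy(
    AutoScalingGroupName=ASG_NAME,
    PolicyName="target-tracking-cpu",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,
    },
)
```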
Question # 41
A company deployed a serverless application that uses Amazon DynamoDB as a database layer. The application has experienced a large increase in users. The company wants to improve database response time from milliseconds to microseconds and to cache requests to the database.
Which solution will meet these requirements with the LEAST operational overhead?
A. Use DynamoDB Accelerator (DAX).
B. Migrate the database to Amazon Redshift.
C. Migrate the database to Amazon RDS.
D. Use Amazon ElastiCache for Redis.
Answer: A
Explanation: DynamoDB Accelerator (DAX) is a fully managed, highly available caching
service built for Amazon DynamoDB. DAX delivers up to a 10 times performance
improvement—from milliseconds to microseconds—even at millions of requests per
second. DAX does all the heavy lifting required to add in-memory acceleration to your
DynamoDB tables, without requiring developers to manage cache invalidation, data
population, or cluster management. Now you can focus on building great applications for
your customers without worrying about performance at scale. You do not need to modify
application logic because DAX is compatible with existing DynamoDB API calls. This
solution will meet the requirements with the least operational overhead, as it does not
require any code development or manual intervention.
References:
1 provides an overview of Amazon DynamoDB Accelerator (DAX) and its benefits.
2 explains how to use DAX with DynamoDB for in-memory acceleration.
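A minimal sketch (not from the source) of what "no application logic changes" looks like in practice, assuming the amazondax Python package: its DAX client exposes the same low-level API as the DynamoDB client, so only the client object is swapped. The cluster endpoint, table, and key are placeholders.

```python
# Sketch (not from the source): swapping the plain DynamoDB client for the
# DAX client. Assumes the `amazondax` package; the cluster endpoint and
# table/key names are placeholders.
import boto3
from amazondax import AmazonDaxClient

TABLE = "GameSessions"                    # hypothetical table name
KEY = {"SessionId": {"S": "abc-123"}}     # hypothetical key

# Before: reads go straight to DynamoDB (millisecond latency).
dynamodb = boto3.client("dynamodb", region_name="us-east-1")
item = dynamodb.get_item(TableName=TABLE, Key=KEY)

# After: the same call goes through the DAX cluster (microsecond latency
# for cache hits). Only the client object changes; the API calls stay the same.
dax = AmazonDaxClient(
    endpoint_url="daxs://my-dax-cluster.xxxxxx.dax-clusters.us-east-1.amazonaws.com"  # assumed endpoint
)
item = dax.get_item(TableName=TABLE, Key=KEY)
```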
Question # 42
An online video game company must maintain ultra-low latency for its game servers. The game servers run on Amazon EC2 instances. The company needs a solution that can handle millions of UDP internet traffic requests each second.
Which solution will meet these requirements MOST cost-effectively?
A. Configure an Application Load Balancer with the required protocol and ports for the internet traffic. Specify the EC2 instances as the targets.
B. Configure a Gateway Load Balancer for the internet traffic. Specify the EC2 instances as the targets.
C. Configure a Network Load Balancer with the required protocol and ports for the internet traffic. Specify the EC2 instances as the targets.
D. Launch an identical set of game servers on EC2 instances in separate AWS Regions. Route internet traffic to both sets of EC2 instances.
Answer: C
Explanation: The most cost-effective solution for the online video game company is to
configure a Network Load Balancer with the required protocol and ports for the internet
traffic and specify the EC2 instances as the targets. This solution will enable the company
to handle millions of UDP requests per second with ultra-low latency and high performance.
A Network Load Balancer is a type of Elastic Load Balancing that operates at the
connection level (Layer 4) and routes traffic to targets (EC2 instances, microservices, or
containers) within Amazon VPC based on IP protocol data. A Network Load Balancer is
ideal for load balancing of both TCP and UDP traffic, as it is capable of handling millions of
requests per second while maintaining high throughput at ultra-low latency. A Network
Load Balancer also preserves the source IP address of the clients to the back-end
applications, which can be useful for logging or security purposes1.
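The sketch below (not from the source) shows how option C could be wired up with boto3: an internet-facing Network Load Balancer, a UDP target group for the game port, and a UDP listener. All resource IDs and the port number are placeholders, and the health check assumes the game servers also expose a TCP health port.

```python
# Sketch (not from the source): fronting UDP game servers with a Network
# Load Balancer. Subnet, VPC, instance IDs, and the game port (7777) are
# placeholders.
import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")

nlb = elbv2.create_load_balancer(
    Name="game-servers-nlb",
    Type="network",                     # Layer 4, handles UDP at very high request rates
    Scheme="internet-facing",
    Subnets=["subnet-0aaa1111", "subnet-0bbb2222"],
)
nlb_arn = nlb["LoadBalancers"][0]["LoadBalancerArn"]

tg = elbv2.create_target_group(
    Name="game-servers-udp",
    Protocol="UDP",
    Port=7777,
    VpcId="vpc-0abc1234",
    TargetType="instance",
    HealthCheckProtocol="TCP",          # UDP target groups health-check over TCP/HTTP(S)
    HealthCheckPort="7777",             # assumes the game server also listens on TCP 7777
)
tg_arn = tg["TargetGroups"][0]["TargetGroupArn"]

elbv2.register_targets(
    TargetGroupArn=tg_arn,
    Targets=[{"Id": "i-0123456789abcdef0"}],
)

elbv2.create_listener(
    LoadBalancerArn=nlb_arn,
    Protocol="UDP",
    Port=7777,
    DefaultActions=[{"Type": "forward", "TargetGroupArn": tg_arn}],
)
```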
Question # 43
A company maintains an Amazon RDS database that maps users to cost centers. The company has accounts in an organization in AWS Organizations. The company needs a solution that will tag all resources that are created in a specific AWS account in the organization. The solution must tag each resource with the cost center ID of the user who created the resource.
Which solution will meet these requirements?
A. Move the specific AWS account to a new organizational unit (OU) in Organizations from the management account. Create a service control policy (SCP) that requires all existing resources to have the correct cost center tag before the resources are created. Apply the SCP to the new OU.
B. Create an AWS Lambda function to tag the resources after the Lambda function looks up the appropriate cost center from the RDS database. Configure an Amazon EventBridge rule that reacts to AWS CloudTrail events to invoke the Lambda function.
C. Create an AWS CloudFormation stack to deploy an AWS Lambda function. Configure the Lambda function to look up the appropriate cost center from the RDS database and to tag resources. Create an Amazon EventBridge scheduled rule to invoke the CloudFormation stack.
D. Create an AWS Lambda function to tag the resources with a default value. Configure an Amazon EventBridge rule that reacts to AWS CloudTrail events to invoke the Lambda function when a resource is missing the cost center tag.
Answer: B
Explanation: AWS Lambda is a serverless compute service that lets you run code without
provisioning or managing servers. Lambda can be used to tag resources with the cost
center ID of the user who created the resource, by querying the RDS database that maps
users to cost centers. Amazon EventBridge is a serverless event bus service that enables
event-driven architectures. EventBridge can be configured to react to AWS CloudTrail
events, which are recorded API calls made by or on behalf of the AWS account.
EventBridge can invoke the Lambda function when a resource is created in the specific
AWS account, passing the user identity and resource information as parameters. This
solution will meet the requirements, as it enables automatic tagging of resources based on
the user and cost center mapping.
References:
1 provides an overview of AWS Lambda and its benefits.
2 provides an overview of Amazon EventBridge and its benefits.
3 explains the concept and benefits of AWS CloudTrail events.
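A simplified sketch (not from the source) of the Lambda side of option B. The cost-center lookup and the ARN extraction are stubbed as hypothetical helpers, since the real implementation would query the RDS mapping table and parse the specific CloudTrail event for each resource type.

```python
# Sketch (not from the source): a Lambda handler invoked by an EventBridge
# rule on CloudTrail events. The cost-center lookup is stubbed; in the
# scenario it would query the RDS user-to-cost-center table. ARN extraction
# differs per API call, so `extract_resource_arns` is a hypothetical helper.
import boto3

tagging = boto3.client("resourcegroupstaggingapi")


def lookup_cost_center(user_name: str) -> str:
    """Placeholder for the RDS query that maps a user to a cost center ID."""
    return "CC-1234"


def extract_resource_arns(detail: dict) -> list[str]:
    """Hypothetical helper: pull created-resource ARNs out of the CloudTrail
    event detail (the exact fields depend on which API call was recorded)."""
    return [r["ARN"] for r in detail.get("resources", []) if "ARN" in r]


def handler(event, context):
    detail = event["detail"]                               # CloudTrail record
    user = detail["userIdentity"].get("userName", "unknown")
    arns = extract_resource_arns(detail)
    if arns:
        tagging.tag_resources(
            ResourceARNList=arns,
            Tags={"CostCenter": lookup_cost_center(user)},
        )
```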
Question # 44
A company is designing a tightly coupled high performance computing (HPC) environment in the AWS Cloud. The company needs to include features that will optimize the HPC environment for networking and storage.
Which combination of solutions will meet these requirements? (Select TWO.)
A. Create an accelerator in AWS Global Accelerator. Configure custom routing for the accelerator.
B. Create an Amazon FSx for Lustre file system. Configure the file system with scratch storage.
C. Create an Amazon CloudFront distribution. Configure the viewer protocol policy to be HTTP and HTTPS.
D. Launch Amazon EC2 instances. Attach an Elastic Fabric Adapter (EFA) to the instances.
E. Create an AWS Elastic Beanstalk deployment to manage the environment.
Answer: B,D
Explanation: These two solutions will optimize the HPC environment for networking and
storage. Amazon FSx for Lustre is a fully managed service that provides cost-effective,
high-performance, scalable storage for compute workloads. It is built on the world’s most
popular high-performance file system, Lustre, which is designed for applications that
require fast storage, such as HPC and machine learning. By configuring the file system
with scratch storage, you can achieve sub-millisecond latencies, up to hundreds of GBs/s
of throughput, and millions of IOPS. Scratch file systems are ideal for temporary storage
and shorter-term processing of data. Data is not replicated and does not persist if a file
server fails. For more information, see Amazon FSx for Lustre.
Elastic Fabric Adapter (EFA) is a network interface for Amazon EC2 instances that enables
customers to run applications requiring high levels of inter-node communications at scale
on AWS. Its custom-built operating system (OS) bypass hardware interface enhances the
performance of inter-instance communications, which is critical to scaling HPC and
machine learning applications. EFA provides a low-latency, low-jitter channel for interinstance
communications, enabling your tightly-coupled HPC or distributed machine
learning applications to scale to thousands of cores. EFA uses libfabric interface and
libfabric APIs for communications, which are supported by most HPC programming
models. For more information, see Elastic Fabric Adapter.
The other solutions are not suitable for optimizing the HPC environment for networking and
storage. AWS Global Accelerator is a networking service that helps you improve the
availability, performance, and security of your public applications by using the AWS global
network. It provides two global static public IPs, deterministic routing, fast failover, and TCP
termination at the edge for your application endpoints. However, it does not support OS-bypass
capabilities or high-performance file systems that are required for HPC and
machine learning applications. For more information, see AWS Global Accelerator.
Amazon CloudFront is a content delivery network (CDN) service that securely delivers
data, videos, applications, and APIs to customers globally with low latency, high transfer
speeds, all within a developer-friendly environment. CloudFront is integrated with AWS
services such as Amazon S3, Amazon EC2, AWS Elemental Media Services, AWS Shield,
AWS WAF, and AWS Lambda@Edge. However, CloudFront is not designed for HPC and
machine learning applications that require high levels of inter-node communications and
fast storage. For more information, see [Amazon CloudFront].
AWS Elastic Beanstalk is an easy-to-use service for deploying and scaling web
applications and services developed with Java, .NET, PHP, Node.js, Python, Ruby, Go,
and Docker on familiar servers such as Apache, Nginx, Passenger, and IIS. You can
simply upload your code and Elastic Beanstalk automatically handles the deployment, from
capacity provisioning, load balancing, auto-scaling to application health monitoring.
However, Elastic Beanstalk is not optimized for HPC and machine learning applications
that require OS-bypass capabilities and high-performance file systems. For more
information, see [AWS Elastic Beanstalk].
References: Amazon FSx for Lustre, Elastic Fabric Adapter, AWS Global Accelerator,
[Amazon CloudFront], [AWS Elastic Beanstalk].
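A hedged boto3 sketch (not from the source) of the two selected building blocks: an FSx for Lustre file system with a scratch deployment type, and EC2 instances launched with an Elastic Fabric Adapter in a cluster placement group. Subnet, security group, AMI, and placement group names are placeholders.

```python
# Sketch (not from the source): provisioning the storage and networking
# pieces for the HPC environment. All resource IDs are placeholders.
import boto3

fsx = boto3.client("fsx", region_name="us-east-1")
ec2 = boto3.client("ec2", region_name="us-east-1")

# Scratch deployment type: high throughput, no replication, intended for
# temporary and shorter-term processing data.
fsx.create_file_system(
    FileSystemType="LUSTRE",
    StorageCapacity=1200,                   # GiB
    SubnetIds=["subnet-0aaa1111"],
    LustreConfiguration={"DeploymentType": "SCRATCH_2"},
)

# EFA attaches as a network interface with InterfaceType="efa"; an
# EFA-capable instance type and a cluster placement group keep inter-node
# latency low for tightly coupled workloads.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="c5n.18xlarge",
    MinCount=2,
    MaxCount=2,
    Placement={"GroupName": "hpc-cluster-pg"},
    NetworkInterfaces=[
        {
            "DeviceIndex": 0,
            "SubnetId": "subnet-0aaa1111",
            "Groups": ["sg-0abc1234"],
            "InterfaceType": "efa",
        }
    ],
)
```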
Question # 45
A company is running a photo hosting service in the us-east-1 Region. The service enables users across multiple countries to upload and view photos. Some photos are heavily viewed for months, and others are viewed for less than a week. The application allows uploads of up to 20 MB for each photo. The service uses the photo metadata to determine which photos to display to each user.
Which solution provides the appropriate user access MOST cost-effectively?
A. Store the photos in Amazon DynamoDB. Turn on DynamoDB Accelerator (DAX) to cache frequently viewed items.
B. Store the photos in the Amazon S3 Intelligent-Tiering storage class. Store the photo metadata and its S3 location in DynamoDB.
C. Store the photos in the Amazon S3 Standard storage class. Set up an S3 Lifecycle policy to move photos older than 30 days to the S3 Standard-Infrequent Access (S3 Standard-IA) storage class. Use the object tags to keep track of metadata.
D. Store the photos in the Amazon S3 Glacier storage class. Set up an S3 Lifecycle policy to move photos older than 30 days to the S3 Glacier Deep Archive storage class. Store the photo metadata and its S3 location in Amazon OpenSearch Service.
Answer: B
Explanation: This solution provides the appropriate user access most cost-effectively
because it uses the Amazon S3 Intelligent-Tiering storage class, which automatically
optimizes storage costs by moving data to the most cost-effective access tier when access patterns change, without performance impact or operational overhead1. This storage class
is ideal for data with unknown, changing, or unpredictable access patterns, such as photos
that are heavily viewed for months or less than a week. By storing the photo metadata and
its S3 location in DynamoDB, the application can quickly query and retrieve the relevant
photos for each user. DynamoDB is a fast, scalable, and fully managed NoSQL database
service that supports key-value and document data models2.
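A minimal sketch (not from the source) of option B: the photo object goes to S3 with the INTELLIGENT_TIERING storage class, and its metadata and S3 location go into a DynamoDB table. The bucket, table, key names, and attributes are assumed.

```python
# Sketch (not from the source): uploading a photo to S3 Intelligent-Tiering
# and recording its metadata and location in DynamoDB. Bucket, table, and
# key names are placeholders.
import boto3

s3 = boto3.client("s3", region_name="us-east-1")
dynamodb = boto3.resource("dynamodb", region_name="us-east-1")
table = dynamodb.Table("PhotoMetadata")      # hypothetical metadata table

bucket = "photo-hosting-bucket"
key = "photos/user-42/beach.jpg"

with open("beach.jpg", "rb") as photo:
    s3.put_object(
        Bucket=bucket,
        Key=key,
        Body=photo,
        StorageClass="INTELLIGENT_TIERING",  # S3 moves the object between tiers as access changes
    )

table.put_item(
    Item={
        "PhotoId": "user-42#beach",
        "S3Bucket": bucket,
        "S3Key": key,
        "UploadedBy": "user-42",
        "Tags": ["beach", "vacation"],
    }
)
```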
Question # 46
A company is designing a new web application that will run on Amazon EC2 instances. The application will use Amazon DynamoDB for backend data storage. The application traffic will be unpredictable. The company expects that the application read and write throughput to the database will be moderate to high. The company needs to scale in response to application traffic.
Which DynamoDB table configuration will meet these requirements MOST cost-effectively?
A. Configure DynamoDB with provisioned read and write by using the DynamoDB Standard table class. Set DynamoDB auto scaling to a maximum defined capacity.
B. Configure DynamoDB in on-demand mode by using the DynamoDB Standard table class.
C. Configure DynamoDB with provisioned read and write by using the DynamoDB Standard-Infrequent Access (DynamoDB Standard-IA) table class. Set DynamoDB auto scaling to a maximum defined capacity.
D. Configure DynamoDB in on-demand mode by using the DynamoDB Standard-Infrequent Access (DynamoDB Standard-IA) table class.
Answer: B
Explanation: The most cost-effective DynamoDB table configuration for the web
application is to configure DynamoDB in on-demand mode by using the DynamoDB
Standard table class. This configuration will allow the company to scale in response to
application traffic and pay only for the read and write requests that the application performs
on the table.
On-demand mode is a flexible billing option that can handle thousands of requests per
second without capacity planning. On-demand mode automatically adjusts the table’s
capacity based on the incoming traffic, and charges only for the read and write requests
that are actually performed. On-demand mode is suitable for applications with
unpredictable or variable workloads, or applications that prefer the ease of paying for only
what they use1.
The DynamoDB Standard table class is the default and recommended table class for most
workloads. The DynamoDB Standard table class offers lower throughput costs than the
DynamoDB Standard-Infrequent Access (DynamoDB Standard-IA) table class, and is more
cost-effective for tables where throughput is the dominant cost. The DynamoDB Standard
table class also offers the same performance, durability, and availability as the DynamoDB
Standard-IA table class2.
The other options are not correct because they are either not cost-effective or not suitable
for the use case. Configuring DynamoDB with provisioned read and write by using the
DynamoDB Standard table class, and setting DynamoDB auto scaling to a maximum
defined capacity is not correct because this configuration requires manual estimation and
management of the table’s capacity, which adds complexity and cost to the solution.
Provisioned mode is a billing option that requires users to specify the amount of read and
write capacity units for their tables, and charges for the reserved capacity regardless of
usage. Provisioned mode is suitable for applications with predictable or stable workloads,
or applications that require finer-grained control over their capacity settings1. Configuring
DynamoDB with provisioned read and write by using the DynamoDB Standard-Infrequent
Access (DynamoDB Standard-IA) table class, and setting DynamoDB auto scaling to a
maximum defined capacity is not correct because this configuration is not cost-effective for
tables with moderate to high throughput. The DynamoDB Standard-IA table class offers
lower storage costs than the DynamoDB Standard table class, but higher throughput costs.
The DynamoDB Standard-IA table class is optimized for tables where storage is the
dominant cost, such as tables that store infrequently accessed data2. Configuring
DynamoDB in on-demand mode by using the DynamoDB Standard-Infrequent Access
(DynamoDB Standard-IA) table class is not correct because this configuration is not cost-effective
for tables with moderate to high throughput. As mentioned above, the DynamoDB
Standard-IA table class has higher throughput costs than the DynamoDB Standard table
class, which can offset the savings from lower storage costs.
References:
Table classes - Amazon DynamoDB
Read/write capacity mode - Amazon DynamoDB
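A short boto3 sketch (not from the source) of the recommended configuration: a table created in on-demand mode with the Standard table class. The table name and key schema are placeholders.

```python
# Sketch (not from the source): on-demand billing with the Standard table
# class. Table and attribute names are placeholders.
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

dynamodb.create_table(
    TableName="WebAppData",
    AttributeDefinitions=[{"AttributeName": "pk", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "pk", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",   # on-demand mode: no capacity planning
    TableClass="STANDARD",           # lower per-request cost than STANDARD_INFREQUENT_ACCESS
)
```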
Question # 47
A company's web application that is hosted in the AWS Cloud recently increased in popularity. The web application currently exists on a single Amazon EC2 instance in a single public subnet. The web application has not been able to meet the demand of the increased web traffic.
The company needs a solution that will provide high availability and scalability to meet the increased user demand without rewriting the web application.
Which combination of steps will meet these requirements? (Select TWO.)
A. Replace the EC2 instance with a larger compute optimized instance.
B. Configure Amazon EC2 Auto Scaling with multiple Availability Zones in private subnets.
C. Configure a NAT gateway in a public subnet to handle web requests.
D. Replace the EC2 instance with a larger memory optimized instance.
E. Configure an Application Load Balancer in a public subnet to distribute web traffic.
Answer: B,E
Explanation:
These two steps will meet the requirements because they will provide high availability and
scalability for the web application without rewriting it. Amazon EC2 Auto Scaling allows you
to automatically adjust the number of EC2 instances in response to changes in demand. By
configuring Auto Scaling with multiple Availability Zones in private subnets, you can ensure
that your web application is distributed across isolated and fault-tolerant locations, and that
your instances are not directly exposed to the internet. An Application Load Balancer
operates at the application layer and distributes incoming web traffic across multiple
targets, such as EC2 instances, containers, or Lambda functions. By configuring an
Application Load Balancer in a public subnet, you can enable your web application to
handle requests from the internet and route them to the appropriate targets in the private
subnets.
References:
What is Amazon EC2 Auto Scaling?
What is an Application Load Balancer?
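A rough boto3 sketch (not from the source) of the two selected steps: an internet-facing Application Load Balancer in public subnets forwarding to an Auto Scaling group whose instances launch into private subnets across two Availability Zones. All subnet, VPC, security group, and launch template IDs are placeholders.

```python
# Sketch (not from the source): ALB in public subnets, Auto Scaling group
# in private subnets across two AZs. Resource IDs are placeholders.
import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")
autoscaling = boto3.client("autoscaling", region_name="us-east-1")

alb = elbv2.create_load_balancer(
    Name="web-app-alb",
    Type="application",
    Scheme="internet-facing",
    Subnets=["subnet-0aaa1111", "subnet-0bbb2222"],   # public subnets, AZ a and b
    SecurityGroups=["sg-0abc1234"],
)
alb_arn = alb["LoadBalancers"][0]["LoadBalancerArn"]

tg = elbv2.create_target_group(
    Name="web-app-tg",
    Protocol="HTTP",
    Port=80,
    VpcId="vpc-0abc1234",
    TargetType="instance",
)
tg_arn = tg["TargetGroups"][0]["TargetGroupArn"]

elbv2.create_listener(
    LoadBalancerArn=alb_arn,
    Protocol="HTTP",
    Port=80,
    DefaultActions=[{"Type": "forward", "TargetGroupArn": tg_arn}],
)

# Instances launch into private subnets in two AZs and register with the ALB
# target group, so scaling events add or remove them from rotation automatically.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-app-asg",
    LaunchTemplate={"LaunchTemplateId": "lt-0123456789abcdef0", "Version": "$Latest"},
    MinSize=2,
    MaxSize=6,
    VPCZoneIdentifier="subnet-0ccc3333,subnet-0ddd4444",  # private subnets, AZ a and b
    TargetGroupARNs=[tg_arn],
)
```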
Question # 48
A company is designing a web application on AWS. The application will use a VPN connection between the company's existing data centers and the company's VPCs. The company uses Amazon Route 53 as its DNS service. The application must use private DNS records to communicate with the on-premises services from a VPC.
Which solution will meet these requirements in the MOST secure manner?
A. Create a Route 53 Resolver outbound endpoint. Create a resolver rule. Associate the resolver rule with the VPC.
B. Create a Route 53 Resolver inbound endpoint. Create a resolver rule. Associate the resolver rule with the VPC.
C. Create a Route 53 private hosted zone. Associate the private hosted zone with the VPC.
D. Create a Route 53 public hosted zone. Create a record for each service to allow service communication.
Answer: A
Explanation: To meet the requirements of the web application in the most secure manner,
the company should create a Route 53 Resolver outbound endpoint, create a resolver rule,
and associate the resolver rule with the VPC. This solution will allow the application to use
private DNS records to communicate with the on-premises services from a VPC. Route 53
Resolver is a service that enables DNS resolution between on-premises networks and
AWS VPCs. An outbound endpoint is a set of IP addresses that Resolver uses to forward
DNS queries from a VPC to resolvers on an on-premises network. A resolver rule is a rule
that specifies the domain names for which Resolver forwards DNS queries to the IP
addresses that you specify in the rule. By creating an outbound endpoint and a resolver
rule, and associating them with the VPC, the company can securely resolve DNS queries
for the on-premises services using private DNS records12.
The other options are not correct because they do not meet the requirements or are not
secure. Creating a Route 53 Resolver inbound endpoint, creating a resolver rule, and
associating the resolver rule with the VPC is not correct because this solution will allow
DNS queries from on-premises networks to access resources in a VPC, not vice versa. An
inbound endpoint is a set of IP addresses that Resolver uses to receive DNS queries from
resolvers on an on-premises network1. Creating a Route 53 private hosted zone and
associating it with the VPC is not correct because this solution will only allow DNS
resolution for resources within the VPC or other VPCs that are associated with the same
hosted zone. A private hosted zone is a container for DNS records that are only accessible
from one or more VPCs3. Creating a Route 53 public hosted zone and creating a record for
each service to allow service communication is not correct because this solution will expose the on-premises services to the public internet, which is not secure. A public hosted
zone is a container for DNS records that are accessible from anywhere on the internet3.
References:
Resolving DNS queries between VPCs and your network - Amazon Route 53
Working with rules - Amazon Route 53
Working with private hosted zones - Amazon Route 53
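A hedged boto3 sketch (not from the source) of option A: an outbound Resolver endpoint, a FORWARD rule for an assumed on-premises domain (corp.example.com), and the rule's association with the VPC. Subnet, security group, VPC IDs, and the on-premises DNS server address are placeholders.

```python
# Sketch (not from the source): Route 53 Resolver outbound endpoint,
# forwarding rule, and VPC association. All IDs are placeholders.
import uuid

import boto3

r53resolver = boto3.client("route53resolver", region_name="us-east-1")

endpoint = r53resolver.create_resolver_endpoint(
    CreatorRequestId=str(uuid.uuid4()),
    Name="to-on-premises",
    Direction="OUTBOUND",                         # forwards queries out of the VPC
    SecurityGroupIds=["sg-0abc1234"],
    IpAddresses=[
        {"SubnetId": "subnet-0aaa1111"},
        {"SubnetId": "subnet-0bbb2222"},
    ],
)

rule = r53resolver.create_resolver_rule(
    CreatorRequestId=str(uuid.uuid4()),
    Name="forward-corp-domain",
    RuleType="FORWARD",
    DomainName="corp.example.com",                # assumed private on-premises zone
    TargetIps=[{"Ip": "10.10.0.2", "Port": 53}],  # on-premises DNS resolver
    ResolverEndpointId=endpoint["ResolverEndpoint"]["Id"],
)

r53resolver.associate_resolver_rule(
    ResolverRuleId=rule["ResolverRule"]["Id"],
    VPCId="vpc-0abc1234",
)
```

Once the rule is associated, DNS queries from the VPC for corp.example.com resolve against the on-premises servers over the VPN without exposing any records publicly.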
Our Clients Say About Amazon SAA-C03 Exam
Audrey
I had an enjoyable ride with PassExam4Sure and its Amazon SAA-C03 exam preparatory materials. They provided me with the SAA-C03 material and training that I needed to excel in my Amazon SAA-C03 exam. I got my certification, and the Amazon SAA-C03 result gave me a respectable outcome. Thanks a lot, PassExam4Sure!
Gotha
I cleared so many SAA-C03 practice tests while using PassExam4Sure dumps. They helped me clear my exam with flying colors! I faced no trouble, and my entire SAA-C03 preparation with PassExam4Sure Amazon SAA-C03 practice exam dumps was worth it. Thanks a bunch.
Stevens
I have never seen a better site that provides the kind of help that PassExam4Sure does. I desperately needed help to give the Amazon SAA-C03 exam, and if I had not got help from PassExam4Sure, I would have been in big trouble. I cleared the Amazon SAA-C03 exam, and I must thank PassExam4Sure Amazon SAA-C03 exam preparation course for helping me in clearing this tough exam. Thank you, PassExam4Sure, for your help.
Robert
Numerous sites offer courses and tests such as Amazon SAA-C03, but I must tell you that most of them are fraudulent and don't provide any impressive notes or materials. I have had a very bad experience with such fake websites, as I failed Amazon SAA-C03. I was very disappointed, but my friend told me to try again with PassExam4Sure, and it was a sheer success. PassExam4Sure is the best website for Amazon SAA-C03.
Mike
I am very happy that I had an opportunity to use the practice tests offered by PassExam4Sure for getting prepared for the Amazon SAA-C03 exam. These practice tests prepared me for the real Amazon SAA-C03 exam questions, enabling me to pass the Amazon SAA-C03 certification exam easily.