Amazon DBS-C01 Exam Actual Questions

The questions for DBS-C01 were last updated on Sep 30, 2024.

Question No. 1

A company uses an Amazon Redshift cluster to run its analytical workloads. Corporate policy requires that the company's data be encrypted at rest with customer managed keys. The company's disaster recovery plan requires that backups of the cluster be copied into another AWS Region on a regular basis.

How should a database specialist automate the process of backing up the cluster data in compliance with these policies?

Correct Answer: B. Create a new AWS Key Management Service (AWS KMS)

Explanation from Amazon documents:

Amazon Redshift supports encryption at rest using AWS Key Management Service (AWS KMS) customer master keys (CMKs). To copy encrypted snapshots across Regions, you need to create a snapshot copy grant in the destination Region and specify a CMK in that Region. You also need to configure cross-Region snapshots in the source Region and provide the destination Region, the snapshot copy grant, and retention periods for the snapshots. This way, you can automate the process of backing up the cluster data in compliance with the corporate policies.

Option A is incorrect because you cannot copy a CMK from one Region to another. You can only import key material from an external source into a CMK in a specific Region. Option C is incorrect because it involves unnecessary steps of copying snapshots to S3 buckets and using S3 Cross-Region Replication. Option D is incorrect because it is not possible to create a CMK with the same private key as another CMK in a different Region. You can only use customer-supplied key material to create a CMK with a specific key ID in a specific Region.
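
As a rough sketch of the process described above, the following Python (boto3) example creates a customer managed key in the destination Region, creates a snapshot copy grant there, and then enables cross-Region snapshot copy on the source cluster. Every identifier in it (Regions, cluster name, grant name, retention period) is an illustrative assumption, not part of the original question or answer.

    import boto3

    SOURCE_REGION = "us-east-1"       # assumed source Region
    DEST_REGION = "us-west-2"         # assumed disaster recovery Region
    CLUSTER_ID = "analytics-cluster"  # assumed Redshift cluster identifier
    GRANT_NAME = "dr-snapshot-copy-grant"

    # 1. Create a customer managed KMS key in the destination Region.
    kms_dest = boto3.client("kms", region_name=DEST_REGION)
    key_id = kms_dest.create_key(
        Description="Key for copied Redshift snapshots"
    )["KeyMetadata"]["KeyId"]

    # 2. Create a snapshot copy grant in the destination Region so that
    #    Amazon Redshift can use the key to encrypt copied snapshots.
    redshift_dest = boto3.client("redshift", region_name=DEST_REGION)
    redshift_dest.create_snapshot_copy_grant(
        SnapshotCopyGrantName=GRANT_NAME,
        KmsKeyId=key_id,
    )

    # 3. Enable cross-Region snapshot copy on the source cluster,
    #    referencing the grant and a snapshot retention period.
    redshift_src = boto3.client("redshift", region_name=SOURCE_REGION)
    redshift_src.enable_snapshot_copy(
        ClusterIdentifier=CLUSTER_ID,
        DestinationRegion=DEST_REGION,
        RetentionPeriod=7,  # days to keep copied automated snapshots
        SnapshotCopyGrantName=GRANT_NAME,
    )

Once enabled, Amazon Redshift copies each automated and manual snapshot to the destination Region without further intervention, which is what makes the process automated.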


Question No. 2

A coffee machine manufacturer is equipping all of its coffee machines with IoT sensors. The IoT Core application is writing measurements for each record to Amazon Timestream. The records have multiple dimensions and measures. The measures include multiple measure names and values.

An analysis application is running queries against the Timestream database and is focusing on data from the current week. A database specialist needs to optimize the query costs of the analysis application.

Which solution will meet these requirements?

Correct Answer: B. Use time range, measure name, and dimensions in the WHERE clause

Explanation from Amazon documents:

Amazon Timestream is a serverless time series database service that allows you to store and analyze time series data at any scale. To optimize query costs, Amazon recommends the following best practices:

Include only the measure and dimension names essential to the query. Adding extraneous columns increases the amount of data scanned and therefore the query cost.

Include a time range in the WHERE clause of your query. For example, if you only need the last hour of data in your dataset, include a time predicate such as time > ago(1h).

Include the measure names in the WHERE clause of the query when a query accesses only a subset of the measures in a table.

Option B follows these best practices, while option A does not. Option C is incorrect because canceling a query saves cost only when the query will not return the desired results. Option D is irrelevant because exponential backoff is a technique for handling throttling errors, not for optimizing query costs.
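
As a minimal illustration of these practices, here is a Python (boto3) sketch of a query that bounds the time range, names the measure, and filters on a dimension. The database, table, dimension, and measure names are invented for the example and are not taken from the question.

    import boto3

    query_client = boto3.client("timestream-query")

    # The WHERE clause limits the time range, the measure name, and a
    # dimension, so Timestream scans (and bills for) only the data needed.
    query = """
    SELECT machine_id, time, measure_value::double AS water_temp
    FROM "coffee_db"."sensor_readings"
    WHERE time > ago(7d)
      AND measure_name = 'water_temp'
      AND machine_id = 'machine-0042'
    """

    response = query_client.query(QueryString=query)
    for row in response["Rows"]:
        print(row["Data"])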


Question No. 3

A database specialist needs to replace the encryption key for an Amazon RDS DB instance. The database specialist needs to take immediate action to ensure the security of the database.

Which solution will meet these requirements?

Correct Answer: D

Question No. 4

A gaming company is building a mobile game that will have as many as 25,000 active concurrent users in the first 2 weeks after launch. The game has a leaderboard that shows the 10 highest scoring players over the last 24 hours. The leaderboard calculations are processed by an AWS Lambda function, which takes about 10 seconds. The company wants the data on the leaderboard to be no more than 1 minute old.

Which architecture will meet these requirements in the MOST operationally efficient way?

Correct Answer: A

Amazon Timestream is a serverless time series database service that allows you to store and analyze time series data at any scale. It is well suited for gaming applications that generate high volumes of data from player events, such as scores, achievements, and actions. Amazon ElastiCache for Redis is a fully managed in-memory data store that provides fast and scalable performance for applications that need sub-millisecond latency. It can be used as a cache layer to store frequently accessed data, such as leaderboard results, and reduce the load on the database.

AWS Lambda is a serverless compute service that lets you run code without provisioning or managing servers. It can be used to process the data from Amazon Timestream and store the leaderboard results in Amazon ElastiCache for Redis. Amazon EventBridge is a serverless event bus service that makes it easy to connect applications with data from a variety of sources. It can be used to create a scheduled rule that triggers the Lambda function once every minute, ensuring that the leaderboard data is updated regularly. The game server can then query the Redis cluster for the leaderboard data, which will be no more than 1 minute old.
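
For illustration only, a Lambda handler along the lines of option A might look like the following Python sketch, which aggregates the top 10 scores from the last 24 hours in Timestream and caches the result in Redis. The database, table, measure, and Redis endpoint names are assumptions, as is packaging the redis-py client with the function.

    import json

    import boto3
    import redis  # assumption: redis-py is bundled in the deployment package

    # Hypothetical resource names; none of these come from the question.
    LEADERBOARD_QUERY = """
    SELECT player_id, MAX(measure_value::bigint) AS best_score
    FROM "game_db"."player_scores"
    WHERE time > ago(24h) AND measure_name = 'score'
    GROUP BY player_id
    ORDER BY best_score DESC
    LIMIT 10
    """

    timestream = boto3.client("timestream-query")
    cache = redis.Redis(host="leaderboard-cache.example.internal", port=6379)

    def handler(event, context):
        """Invoked by an EventBridge scheduled rule (rate(1 minute))."""
        result = timestream.query(QueryString=LEADERBOARD_QUERY)
        leaderboard = [
            {"player_id": row["Data"][0]["ScalarValue"],
             "best_score": int(row["Data"][1]["ScalarValue"])}
            for row in result["Rows"]
        ]
        # Store the precomputed top 10 so the game server reads from Redis
        # instead of re-running the 10-second aggregation on every request.
        cache.set("leaderboard:top10", json.dumps(leaderboard))
        return {"updated": len(leaderboard)}

Because the rule fires every minute and the function takes about 10 seconds, the cached leaderboard stays comfortably within the 1-minute freshness requirement.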

Option B is incorrect because Amazon DynamoDB is a key-value and document database that delivers single-digit millisecond performance at any scale. It is not designed for time series data, which requires efficient ingestion, compression, and querying of high-volume data streams. Option C is incorrect because Amazon Aurora is a relational database that combines the performance and availability of traditional enterprise databases with the simplicity and cost-effectiveness of open source databases. It is not optimized for time series data, which requires specialized indexing and partitioning techniques. Option D is incorrect because Amazon Neptune is a graph database that supports property graph and RDF models. It is not suitable for time series data, which requires high ingestion rates and temporal queries.


Question No. 5

A retail company uses Amazon Redshift for its 1 PB data warehouse. Several analytical workloads run on a Redshift cluster. The tables within the cluster have grown rapidly. End users are reporting poor performance of daily reports that run on the transaction fact tables.

A database specialist must change the design of the tables to improve the reporting performance. All the changes must be applied dynamically. The changes must have the least possible impact on users and must optimize the overall table size.

Which solution will meet these requirements?

Correct Answer: D
