A company uses an Amazon Redshift cluster to run its analytical workloads. Corporate policy requires that the company's data be encrypted at rest with customer managed keys. The company's disaster recovery plan requires that backups of the cluster be copied into another AWS Region on a regular basis.
How should a database specialist automate the process of backing up the cluster data in compliance with these policies?
Correct Answer: B. Create a new AWS Key Management Service (AWS KMS) customer managed key in the destination Region, create a snapshot copy grant for that key, and configure cross-Region snapshots from the source Region.
Explanation from Amazon documents:
Amazon Redshift supports encryption at rest using AWS Key Management Service (AWS KMS) customer master keys (CMKs). To copy encrypted snapshots across Regions, you need to create a snapshot copy grant in the destination Region and specify a CMK in that Region. You also need to configure cross-Region snapshots in the source Region and provide the destination Region, the snapshot copy grant, and retention periods for the snapshots. This way, you can automate the process of backing up the cluster data in compliance with the corporate policies.
Option A is incorrect because you cannot copy a CMK from one Region to another. You can only import key material from an external source into a CMK in a specific Region. Option C is incorrect because it involves unnecessary steps of copying snapshots to S3 buckets and using S3 Cross-Region Replication. Option D is incorrect because it is not possible to create a CMK with the same private key as another CMK in a different Region. You can only use customer-supplied key material to create a CMK with a specific key ID in a specific Region.
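As a rough illustration of the two Redshift API calls the explanation describes, here is a minimal sketch. It only assembles the request parameters; the actual boto3 calls are named in the comments, and the grant name, key ID, and cluster identifier are all placeholder assumptions.

```python
def snapshot_copy_setup(cluster_id, dest_region, dest_kms_key_id,
                        grant_name, retention_days):
    """Return the parameters for the two Redshift API calls involved:
    CreateSnapshotCopyGrant (issued against the destination Region) and
    EnableSnapshotCopy (issued against the source Region)."""
    create_grant = {               # redshift.create_snapshot_copy_grant(**...)
        "SnapshotCopyGrantName": grant_name,
        "KmsKeyId": dest_kms_key_id,   # customer managed key in the destination Region
    }
    enable_copy = {                # redshift.enable_snapshot_copy(**...)
        "ClusterIdentifier": cluster_id,
        "DestinationRegion": dest_region,
        "RetentionPeriod": retention_days,   # snapshot retention in the destination Region
        "SnapshotCopyGrantName": grant_name,
    }
    return create_grant, enable_copy
```

Once both calls succeed, Redshift copies each automated and manual snapshot to the destination Region and encrypts it with the granted key, with no further manual steps.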
A coffee machine manufacturer is equipping all of its coffee machines with IoT sensors. The IoT core application is writing measurements for each record to Amazon Timestream. The records have multiple dimensions and measures. The measures include multiple measure names and values.
An analysis application is running queries against the Timestream database and is focusing on data from the current week. A database specialist needs to optimize the query costs of the analysis application.
Which solution will meet these requirements?
Correct Answer: B. Use time range, measure name, and dimensions in the WHERE clause of the queries.
Explanation from Amazon documents:
Include only the measure and dimension names essential to the query. Adding extraneous columns increases the amount of data scanned and therefore the query cost.
Include a time range in the WHERE clause of your query. For example, if you only need the last hour of data in your dataset, include a time predicate such as time > ago(1h).
Include the measure names in the WHERE clause when a query accesses only a subset of the measures in a table.
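Applied to the current-week analysis, the tips above might translate into a query like the one this helper builds. This is a sketch only: the table name, dimension columns, and the assumption that measures are scalar doubles (hence `measure_value::double`) are all placeholders.

```python
def build_timestream_query(table, measure_names, dimensions, lookback="7d"):
    """Build a Timestream query that follows the cost tips above:
    select only the needed columns, bound the time range with a
    time predicate, and filter on measure_name."""
    # Project only the essential columns instead of SELECT *.
    cols = ", ".join(["time"] + dimensions + ["measure_name", "measure_value::double"])
    measures = ", ".join(f"'{m}'" for m in measure_names)
    return (
        f"SELECT {cols} FROM {table} "
        f"WHERE time > ago({lookback}) "        # limit the data scanned by time
        f"AND measure_name IN ({measures})"     # limit to the measures actually used
    )
```

For the weekly reports described above, a `lookback` of `7d` keeps the scan bounded to the current week's data.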
A database specialist needs to replace the encryption key for an Amazon RDS DB instance. The database specialist needs to take immediate action to ensure the security of the database.
Which solution will meet these requirements?
A gaming company is building a mobile game that will have as many as 25,000 active concurrent users in the first 2 weeks after launch. The game has a leaderboard that shows the 10 highest-scoring players over the last 24 hours. The leaderboard calculations are processed by an AWS Lambda function, which takes about 10 seconds. The company wants the data on the leaderboard to be no more than 1 minute old.
Which architecture will meet these requirements in the MOST operationally efficient way?
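Whatever architecture is chosen, the leaderboard calculation itself (top 10 scores over the last 24 hours) reduces to a simple aggregation; a minimal sketch, with the event shape assumed:

```python
import heapq
import time

def top_10(score_events, now=None):
    """score_events: iterable of (player_id, score, epoch_seconds) tuples.
    Returns the 10 highest scores recorded in the last 24 hours,
    ordered from highest to lowest."""
    now = time.time() if now is None else now
    cutoff = now - 24 * 60 * 60          # start of the 24-hour window
    recent = [(p, s) for p, s, t in score_events if t >= cutoff]
    return heapq.nlargest(10, recent, key=lambda e: e[1])
```

Since the Lambda function takes about 10 seconds and the data may be up to 1 minute old, running this calculation on a 1-minute schedule and caching the result is what makes the freshness requirement achievable.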
Option B is incorrect because Amazon DynamoDB is a key-value and document database that delivers single-digit millisecond performance at any scale. It is not designed for time series data, which requires efficient ingestion, compression, and querying of high-volume data streams. Option C is incorrect because Amazon Aurora is a relational database that combines the performance and availability of traditional enterprise databases with the simplicity and cost-effectiveness of open source databases. It is not optimized for time series data, which requires specialized indexing and partitioning techniques. Option D is incorrect because Amazon Neptune is a graph database that supports property graph and RDF models. It is not suitable for time series data, which requires high ingestion rates and temporal queries.
A retail company uses Amazon Redshift for its 1 PB data warehouse. Several analytical workloads run on a Redshift cluster. The tables within the cluster have grown rapidly. End users are reporting poor performance of daily reports that run on the transaction fact tables.
A database specialist must change the design of the tables to improve the reporting performance. All the changes must be applied dynamically. The changes must have the least possible impact on users and must optimize the overall table size.
Which solution will meet these requirements?
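Redshift can apply distribution-style and sort-key changes dynamically with ALTER TABLE, without recreating the table. A sketch of the statements such a redesign might issue, assuming automatic table optimization is the intended direction (the table name is a placeholder):

```python
def redesign_statements(table):
    """Return ALTER TABLE statements that hand distribution style and
    sort key over to Redshift automatic table optimization. Both are
    applied in place, so running queries and users are not blocked by
    a table rebuild."""
    return [
        f"ALTER TABLE {table} ALTER DISTSTYLE AUTO;",
        f"ALTER TABLE {table} ALTER SORTKEY AUTO;",
    ]
```

With AUTO in place, Redshift observes the workload and adjusts the distribution style and sort key of the fact tables in the background, which addresses both the "applied dynamically" and "least possible impact on users" requirements.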