You need to set access to BigQuery for different departments within your company. Your solution should comply with the following requirements:
Each department should have access only to their data.
Each department will have one or more leads who need to be able to create and update tables and provide them to their team.
Each department has data analysts who need to be able to query but not modify data.
How should you set access to the data in BigQuery?
A. Create a dataset for each department. Assign the department leads the role of WRITER, and assign the data analysts the role of READER on their dataset.
B. Create a dataset for each department. Assign the department leads the role of OWNER, and assign the data analysts the role of WRITER on their dataset.
C. Create a table for each department. Assign the department leads the role of Owner, and assign the data analysts the role of Editor on the project the table is in.
D. Create a table for each department. Assign the department leads the role of Editor, and assign the data analysts the role of Viewer on the project the table is in.
Correct answer: A
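Dataset-level access is what makes option A satisfy all three requirements: BigQuery's basic dataset roles scope access to a single dataset, WRITER lets leads create and update tables, and READER lets analysts query without modifying anything. A minimal sketch with the google-cloud-bigquery Python client, assuming a hypothetical finance_dept dataset and illustrative member emails:

    # Grant dataset-scoped basic roles; dataset name and emails are illustrative.
    from google.cloud import bigquery

    client = bigquery.Client()
    dataset = client.get_dataset("my-project.finance_dept")

    entries = list(dataset.access_entries)
    # Department lead: can create and update tables in this dataset only.
    entries.append(bigquery.AccessEntry(
        role="WRITER", entity_type="userByEmail", entity_id="lead@example.com"))
    # Data analyst: can query but not modify data in this dataset only.
    entries.append(bigquery.AccessEntry(
        role="READER", entity_type="userByEmail", entity_id="analyst@example.com"))
    dataset.access_entries = entries
    client.update_dataset(dataset, ["access_entries"])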
Question 2:
Which of the following statements about the Wide & Deep Learning model are true? (Select 2 answers.)
A. The wide model is used for memorization, while the deep model is used for generalization.
B. The wide model is used for generalization, while the deep model is used for memorization.
C. A good use for the wide and deep model is a small-scale linear regression problem.
D. A good use for the wide and deep model is a recommender system.
Correct answer: A, D
Explanation: (Visible only to Pass4Test members)
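The split of responsibilities is the key fact: the wide (linear) path memorizes sparse feature co-occurrences, the deep path generalizes through learned dense representations, and combining them suits large sparse problems like recommender systems rather than small linear regressions. A minimal Keras sketch of the architecture, with illustrative input shapes:

    # Wide & Deep in Keras: a linear path joined with a small DNN path.
    import tensorflow as tf

    wide_in = tf.keras.Input(shape=(100,), name="wide")  # e.g. crossed one-hot features
    deep_in = tf.keras.Input(shape=(20,), name="deep")   # e.g. dense or embedded features

    deep = tf.keras.layers.Dense(64, activation="relu")(deep_in)
    deep = tf.keras.layers.Dense(32, activation="relu")(deep)

    combined = tf.keras.layers.concatenate([wide_in, deep])
    out = tf.keras.layers.Dense(1, activation="sigmoid")(combined)

    model = tf.keras.Model(inputs=[wide_in, deep_in], outputs=out)
    model.compile(optimizer="adam", loss="binary_crossentropy")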
Question 3:
You are administering a BigQuery dataset that uses a customer-managed encryption key (CMEK). You need to share the dataset with a partner organization that does not have access to your CMEK. What should you do?
A. Create an authorized view that contains the CMEK to decrypt the data when accessed.
B. Export the tables as Parquet files to a Cloud Storage bucket and grant the storageinsights.viewer role on the bucket to the partner organization.
C. Provide the partner organization a copy of your CMEKs to decrypt the data.
D. Copy the tables you need to share to a dataset without CMEKs. Create an Analytics Hub listing for this dataset.
Correct answer: D
Explanation: (Visible only to Pass4Test members)
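Copying the tables into a dataset without a default CMEK re-encrypts them with Google-managed keys, so the partner never needs your key, and Analytics Hub handles the sharing. A sketch of the copy step with the google-cloud-bigquery client; project, dataset, and table names are illustrative, and depending on your dataset defaults you may need to set destination encryption explicitly via bigquery.CopyJobConfig:

    # Copy a CMEK-protected table into a dataset with no default CMEK.
    from google.cloud import bigquery

    client = bigquery.Client()
    job = client.copy_table(
        "my-project.cmek_dataset.sales",     # source, protected by your CMEK
        "my-project.shared_dataset.sales",   # destination, Google-managed keys
    )
    job.result()  # wait for the copy before creating the Analytics Hub listing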
Question 4:
An external customer provides you with a daily dump of data from their database. The data flows into Google Cloud Storage (GCS) as comma-separated values (CSV) files. You want to analyze this data in Google BigQuery, but the data could have rows that are formatted incorrectly or corrupted. How should you build this pipeline?
A. Enable BigQuery monitoring in Google Stackdriver and create an alert.
B. Run a Google Cloud Dataflow batch pipeline to import the data into BigQuery, and push errors to another dead-letter table for analysis.
C. Use federated data sources, and check data in the SQL query.
D. Import the data into BigQuery using the gcloud CLI and set max_bad_records to 0.
Correct answer: B
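The dead-letter pattern keeps malformed rows out of the main table while preserving them for analysis. A sketch in Apache Beam's Python SDK, assuming illustrative bucket and table names and that both BigQuery tables already exist:

    # Route rows that fail CSV parsing to a separate dead-letter table.
    import csv
    import apache_beam as beam
    from apache_beam.pvalue import TaggedOutput

    def parse_row(line):
        try:
            name, amount = next(csv.reader([line]))
            yield {"name": name, "amount": int(amount)}
        except Exception:
            yield TaggedOutput("bad", {"raw_line": line})  # corrupted row

    with beam.Pipeline() as p:
        results = (
            p
            | beam.io.ReadFromText("gs://my-bucket/daily/*.csv")
            | beam.FlatMap(parse_row).with_outputs("bad", main="good")
        )
        results.good | "WriteGood" >> beam.io.WriteToBigQuery(
            "my-project:dataset.sales",
            create_disposition=beam.io.BigQueryDisposition.CREATE_NEVER)
        results.bad | "WriteBad" >> beam.io.WriteToBigQuery(
            "my-project:dataset.sales_dead_letter",
            create_disposition=beam.io.BigQueryDisposition.CREATE_NEVER)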
Question 5:
You are administering shared BigQuery datasets that contain views used by multiple teams in your organization. The marketing team is concerned about the variability of their monthly BigQuery analytics spend using the on-demand billing model. You need to help the marketing team establish a consistent BigQuery analytics spend each month. What should you do?
A. Create a BigQuery Standard pay-as-you-go reservation with a baseline of 0 slots and autoscaling set to 500 for the marketing team, and bill them back accordingly.
B. Create a BigQuery reservation with a baseline of 500 slots with no autoscaling for the marketing team, and bill them back accordingly.
C. Create a BigQuery Enterprise reservation with a baseline of 250 slots and autoscaling set to 500 for the marketing team, and bill them back accordingly.
D. Establish a BigQuery quota for the marketing team, and limit the maximum number of bytes scanned each day.
Correct answer: B
Explanation: (Visible only to Pass4Test members)
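A reservation with a fixed baseline and no autoscaling turns the team's analytics cost into a flat slot commitment, which is what makes the spend predictable. A sketch with the google-cloud-bigquery-reservation client (project, location, and reservation names are illustrative); you would then create an assignment binding the marketing team's project to this reservation:

    # Create a fixed 500-slot reservation with no autoscaling.
    from google.cloud import bigquery_reservation_v1 as br

    client = br.ReservationServiceClient()
    client.create_reservation(
        parent="projects/my-project/locations/US",
        reservation_id="marketing",
        reservation=br.Reservation(slot_capacity=500),  # fixed baseline
    )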
Question 6:
You have a Standard Tier Memorystore for Redis instance deployed in a production environment. You need to simulate a Redis instance failover in the most accurate disaster recovery scenario possible, and ensure that the failover has no impact on production data. What should you do?
A. Create a Standard Tier Memorystore for Redis instance in a development environment. Initiate a manual failover by using the force-data-loss data protection mode.
B. Add one replica to the Redis instance in the production environment. Initiate a manual failover by using the force-data-loss data protection mode.
C. Initiate a manual failover by using the limited-data-loss data protection mode on the Memorystore for Redis instance in the production environment.
D. Create a Standard Tier Memorystore for Redis instance in the development environment. Initiate a manual failover by using the limited-data-loss data protection mode.
Correct answer: A
Explanation: (Visible only to Pass4Test members)
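Running the failover against a separate development instance protects production data, and the force-data-loss mode most closely simulates an actual failover. A sketch with the google-cloud-redis client, assuming a hypothetical dev-redis instance:

    # Manually fail over a non-production Standard Tier instance.
    from google.cloud import redis_v1

    client = redis_v1.CloudRedisClient()
    operation = client.failover_instance(
        name="projects/my-project/locations/us-central1/instances/dev-redis",
        data_protection_mode=(
            redis_v1.FailoverInstanceRequest.DataProtectionMode.FORCE_DATA_LOSS),
    )
    operation.result()  # block until the failover completes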
Question 7:
You are designing a system that requires an ACID-compliant database. You must ensure that the system requires minimal human intervention in case of a failure. What should you do?
A. Configure a Bigtable instance with more than one cluster.
B. Configure a Cloud SQL for PostgreSQL instance with high availability enabled.
C. Configure a Cloud SQL for MySQL instance with point-in-time recovery enabled.
D. Configure a BigQuery table with a multi-region configuration.
Correct answer: B
Explanation: (Visible only to Pass4Test members)
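Cloud SQL for PostgreSQL is ACID-compliant, and enabling high availability provisions a standby in another zone with automatic failover, so no human intervention is needed. A sketch using the Cloud SQL Admin API via googleapiclient (instance name, tier, and region are illustrative):

    # Create a regional (highly available) Cloud SQL for PostgreSQL instance.
    from googleapiclient import discovery

    sqladmin = discovery.build("sqladmin", "v1")
    body = {
        "name": "orders-db",
        "databaseVersion": "POSTGRES_15",
        "region": "us-central1",
        "settings": {
            "tier": "db-custom-2-7680",
            "availabilityType": "REGIONAL",  # primary plus standby in another zone
        },
    }
    sqladmin.instances().insert(project="my-project", body=body).execute()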
Question 8:
What is the recommended way to switch a Google Cloud Bigtable instance between SSD and HDD storage?
A. Create a third instance and sync the data between the two storage types via batch jobs
B. Export the data from the existing instance and import it into a new instance
C. Run parallel instances, one on HDD and the other on SSD
D. The selection is final; you must continue using the same storage type
Correct answer: B
Explanation: (Visible only to Pass4Test members)
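A Bigtable cluster's storage type is fixed at creation, so the switch means standing up a new instance and migrating the data (the export/import itself is typically done with Dataflow templates). A sketch of the new-instance step with the google-cloud-bigtable client; instance, cluster, and zone names are illustrative:

    # Provision a new instance on SSD; data is then imported from the old one.
    from google.cloud import bigtable
    from google.cloud.bigtable import enums

    client = bigtable.Client(project="my-project", admin=True)
    instance = client.instance(
        "analytics-ssd", instance_type=enums.Instance.Type.PRODUCTION)
    cluster = instance.cluster(
        "analytics-ssd-c1",
        location_id="us-central1-b",
        serve_nodes=3,
        default_storage_type=enums.StorageType.SSD,  # fixed once created
    )
    instance.create(clusters=[cluster]).result(timeout=300)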
Question 9:
You are developing an application on Google Cloud that will automatically generate subject labels for users' blog posts. You are under competitive pressure to add this feature quickly, and you have no additional developer resources. No one on your team has experience with machine learning. What should you do?
A. Build and train a text classification model using TensorFlow. Deploy the model using a Kubernetes Engine cluster. Call the model from your application and process the results as labels.
B. Call the Cloud Natural Language API from your application. Process the generated Entity Analysis as labels.
C. Call the Cloud Natural Language API from your application. Process the generated Sentiment Analysis as labels.
D. Build and train a text classification model using TensorFlow. Deploy the model using Cloud Machine Learning Engine. Call the model from your application and process the results as labels.
Correct answer: B
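Entity analysis returns the people, places, and things a document mentions, which map naturally onto subject labels, and calling the pretrained API requires no ML expertise or training data. A minimal sketch with the google-cloud-language client:

    # Extract entities from a blog post and use their names as labels.
    from google.cloud import language_v1

    client = language_v1.LanguageServiceClient()
    document = language_v1.Document(
        content="Hiking the Appalachian Trail with my new camera.",
        type_=language_v1.Document.Type.PLAIN_TEXT,
    )
    response = client.analyze_entities(document=document)
    labels = [entity.name for entity in response.entities]
    print(labels)  # e.g. ['Appalachian Trail', 'camera']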