You need to design a data pipeline that ingests data from CSV, Avro, and Parquet files into Cloud Storage.
The data includes raw user input. You need to remove all malicious SQL injections before storing the data in BigQuery. Which data manipulation methodology should you choose?
A. ETL
B. ETLT
C. EL
D. ELT
Answer: A
Explanation: (Visible to Pass4Test members only)
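For reference, the "T" in ETL here means cleansing raw user input before it reaches BigQuery. The following is a minimal sketch of that transform step only; the file names and the regex deny-list are illustrative assumptions, not an official sanitization method. It reads a raw CSV, strips common SQL-injection tokens from every field, and writes a cleaned file that would then be loaded into BigQuery.

    import csv
    import re

    # Hypothetical deny-list of common SQL-injection tokens; a production
    # pipeline would rely on a vetted sanitization approach instead.
    SQLI_PATTERN = re.compile(r"('|--|;|\b(?:DROP|DELETE|UNION|INSERT)\b)",
                              re.IGNORECASE)

    def sanitize(value: str) -> str:
        """Remove suspicious SQL tokens from a raw user-input field."""
        return SQLI_PATTERN.sub("", value)

    # Transform step: users_raw.csv -> users_clean.csv (file names assumed).
    with open("users_raw.csv", newline="") as src, \
         open("users_clean.csv", "w", newline="") as dst:
        writer = csv.writer(dst)
        for row in csv.reader(src):
            writer.writerow([sanitize(field) for field in row])
    # The cleaned file is then loaded ("L") into BigQuery.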
Question 2:
You are a Looker analyst. You need to add a new field to your Looker report that generates SQL that will run against your company's database. You do not have the Develop permission. What should you do?
A. Create a calculated field using the Add a field option in Looker Studio, and add it to your report.
B. Create a custom field from the field picker in Looker, and add it to your report.
C. Create a table calculation from the field picker in Looker, and add it to your report.
D. Create a new field in the LookML layer, refresh your report, and select your new field from the field picker.
Answer: B
Explanation: (Visible to Pass4Test members only)
Question 3:
You have a Dataproc cluster that performs batch processing on data stored in Cloud Storage. You need to schedule a daily Spark job to generate a report that will be emailed to stakeholders. You need a fully managed solution that is easy to implement and minimizes complexity. What should you do?
A. Use Dataproc workflow templates to define and schedule the Spark job, and to email the report.
B. Use Cloud Scheduler to trigger the Spark job, and use Cloud Run functions to email the report.
C. Use Cloud Run functions to trigger the Spark job and email the report.
D. Use Cloud Composer to orchestrate the Spark job and email the report.
Answer: A
Explanation: (Visible to Pass4Test members only)
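Whichever orchestrator triggers it, the scheduled task ultimately submits the Spark job to the Dataproc cluster. Below is a minimal sketch using the google-cloud-dataproc Python client; the project, region, cluster, main class, and JAR path are placeholder assumptions.

    from google.cloud import dataproc_v1

    def submit_report_job(project_id: str, region: str, cluster_name: str) -> None:
        """Submit the daily report Spark job to an existing Dataproc cluster."""
        client = dataproc_v1.JobControllerClient(
            client_options={"api_endpoint": f"{region}-dataproc.googleapis.com:443"}
        )
        job = {
            "placement": {"cluster_name": cluster_name},
            "spark_job": {
                "main_class": "com.example.ReportJob",  # hypothetical class
                "jar_file_uris": ["gs://reports-bucket/report-job.jar"],  # assumed path
            },
        }
        client.submit_job(
            request={"project_id": project_id, "region": region, "job": job}
        )

    submit_report_job("my-project", "us-central1", "batch-cluster")  # assumed names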
Question 4:
Your retail company wants to predict customer churn using historical purchase data stored in BigQuery. The dataset includes customer demographics, purchase history, and a label indicating whether the customer churned or not. You want to build a machine learning model to identify customers at risk of churning. You need to create and train a logistic regression model for predicting customer churn, using the customer_data table with the churned column as the target label. Which BigQuery ML query should you use?
A. CREATE OR REPLACE MODEL churn_prediction_model OPTIONS (model_type='logistic_reg') AS SELECT * EXCEPT(churned), churned AS label FROM customer_data;
B. CREATE OR REPLACE MODEL churn_prediction_model OPTIONS (model_type='logistic_reg') AS SELECT churned AS label FROM customer_data;
C. CREATE OR REPLACE MODEL churn_prediction_model OPTIONS (model_type='logistic_reg') AS SELECT * FROM customer_data;
D. CREATE OR REPLACE MODEL churn_prediction_model OPTIONS (model_type='logistic_reg') AS SELECT * EXCEPT(churned) FROM customer_data;
Answer: A
Explanation: (Visible to Pass4Test members only)
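Option A is a standard BigQuery ML statement: SELECT * EXCEPT(churned) keeps every feature column, while churned AS label marks the target for the logistic regression. A minimal sketch of running it through the google-cloud-bigquery Python client follows; the unqualified model and table names are kept from the question, though in practice they would be project.dataset qualified.

    from google.cloud import bigquery

    client = bigquery.Client()  # uses application-default credentials

    query = """
    CREATE OR REPLACE MODEL churn_prediction_model
    OPTIONS (model_type='logistic_reg') AS
    SELECT * EXCEPT(churned), churned AS label
    FROM customer_data;
    """

    # Training runs as a query job; result() blocks until it finishes.
    client.query(query).result()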
Question 5:
You have a Cloud SQL for PostgreSQL database that stores sensitive historical financial data. You need to ensure that the data is uncorrupted and recoverable in the event that the primary region is destroyed. The data is valuable, so you need to prioritize recovery point objective (RPO) over recovery time objective (RTO). You want to recommend a solution that minimizes latency for primary read and write operations. What should you do?
A. Configure the Cloud SQL for PostgreSQL instance for regional availability (HA) with asynchronous replication to a secondary instance in a different region.
B. Configure the Cloud SQL for PostgreSQL instance for regional availability (HA) with synchronous replication to a secondary instance in a different zone.
C. Configure the Cloud SQL for PostgreSQL instance for multi-region backup locations.
D. Configure the Cloud SQL for PostgreSQL instance for regional availability (HA). Back up the Cloud SQL for PostgreSQL database hourly to a Cloud Storage bucket in a different region.
Answer: B
Explanation: (Visible to Pass4Test members only)
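For context, availability is an instance setting in the Cloud SQL Admin API. The sketch below, using the google-api-python-client discovery interface, shows where availabilityType=REGIONAL, the synchronous HA option, is configured; the project and instance names, tier, and API version are assumptions.

    from googleapiclient import discovery

    sqladmin = discovery.build("sqladmin", "v1")  # Cloud SQL Admin API

    body = {
        "name": "finance-db",  # hypothetical instance name
        "region": "us-central1",
        "databaseVersion": "POSTGRES_15",
        "settings": {
            "tier": "db-custom-2-7680",  # assumed machine tier
            # REGIONAL = high availability with synchronous replication
            # to a standby in a different zone of the same region.
            "availabilityType": "REGIONAL",
        },
    }

    sqladmin.instances().insert(project="my-project", body=body).execute()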