For this question, refer to the TerramEarth case study. To be compliant with European GDPR regulation, TerramEarth is required to delete data generated from its European customers after a period of 36 months when it contains personal data. In the new architecture, this data will be stored in both Cloud Storage and BigQuery. What should you do?
A. Create a BigQuery table for the European data, and set the table retention period to 36 months. For Cloud Storage, use gsutil to enable lifecycle management using a DELETE action with an Age condition of 36 months.
B. Create a BigQuery table for the European data, and set the table retention period to 36 months. For Cloud Storage, use gsutil to create a SetStorageClass to NONE action with an Age condition of 36 months.
C. Create a BigQuery time-partitioned table for the European data, and set the partition expiration period to 36 months. For Cloud Storage, use gsutil to enable lifecycle management using a DELETE action with an Age condition of 36 months.
D. Create a BigQuery time-partitioned table for the European data, and set the partition period to 36 months. For Cloud Storage, use gsutil to create a SetStorageClass to NONE action with an Age condition of 36 months.
Correct answer: C
Explanation: (visible to Pass4Test members only)
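Answer C can be sketched with the `bq` and `gsutil` CLIs. The dataset, table, schema, and bucket names below are hypothetical; 36 months is approximated as 1,095 days (94,608,000 seconds for the partition expiration flag, which takes seconds).

```shell
# Create a time-partitioned BigQuery table whose partitions expire
# after roughly 36 months (1095 days expressed in seconds).
bq mk --table \
  --time_partitioning_type=DAY \
  --time_partitioning_expiration=94608000 \
  eu_dataset.telemetry \
  vin:STRING,event_time:TIMESTAMP,payload:STRING

# Lifecycle rule: DELETE objects older than ~36 months (1095 days).
cat > lifecycle.json <<'EOF'
{
  "rule": [
    {
      "action": {"type": "Delete"},
      "condition": {"age": 1095}
    }
  ]
}
EOF
gsutil lifecycle set lifecycle.json gs://example-eu-telemetry
```

Partition expiration deletes the expired partitions automatically, satisfying the 36-month deletion requirement without a scheduled cleanup job.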
Question 2:
For this question, refer to the Helicopter Racing League (HRL) case study. The HRL development team releases a new version of their predictive capability application every Tuesday evening at 3 a.m. UTC to a repository. The security team at HRL has developed an in-house penetration test Cloud Function called Airwolf.
The security team wants to run Airwolf against the predictive capability application as soon as it is released every Tuesday. You need to set up Airwolf to run at the recurring weekly cadence. What should you do?
A. Set up Cloud Tasks and a Cloud Storage bucket that triggers a Cloud Function.
B. Configure the deployment job to notify a Pub/Sub queue that triggers a Cloud Function.
C. Set up Identity and Access Management (IAM) and Confidential Computing to trigger a Cloud Function.
D. Set up a Cloud Logging sink and a Cloud Storage bucket that triggers a Cloud Function.
Correct answer: B
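The event-driven pattern in option B, where the deployment job notifies a Pub/Sub topic that triggers the Cloud Function, can be sketched as below. The topic, function, runtime, and entry-point names are hypothetical.

```shell
# Topic the deployment job publishes to after each Tuesday release.
gcloud pubsub topics create predictive-app-releases

# Deploy the Airwolf Cloud Function so it fires on each release notification.
gcloud functions deploy airwolf \
  --runtime=python311 \
  --trigger-topic=predictive-app-releases \
  --entry-point=run_pentest \
  --region=us-central1

# The deployment job publishes a message as its final step:
gcloud pubsub topics publish predictive-app-releases \
  --message='{"version": "v2.3.1"}'
```

Because the function is triggered by the release event itself, the penetration test runs as soon as the release lands, without a separate schedule to keep in sync.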
Question 3:
Your team is developing a web application that will be deployed on Google Kubernetes Engine (GKE). Your CTO expects a successful launch and you need to ensure your application can handle the expected load of tens of thousands of users. You want to test the current deployment to ensure the latency of your application stays below a certain threshold. What should you do?
A. Enable autoscaling on the GKE cluster and enable horizontal Pod autoscaling on your application deployments. Send curl requests to your application, and validate whether the autoscaling works.
B. Use Cloud Debugger in the development environment to understand the latency between the different microservices.
C. Use a load testing tool to simulate the expected number of concurrent users and total requests to your application, and inspect the results.
D. Replicate the application over multiple GKE clusters in every Google Cloud region. Configure a global HTTP(S) load balancer to expose the different clusters over a single global IP address.
Correct answer: C
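The approach in option C can be illustrated with a minimal, stdlib-only load generator; a real test would use a dedicated tool such as Locust or JMeter against the GKE ingress. The local stand-in server, request count, and concurrency level here are all assumptions for the sketch.

```python
import http.server
import threading
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

# Minimal local stand-in for the deployed service; a real load test
# would target the application's external endpoint instead.
class Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):  # silence per-request logging
        pass

server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_address[1]}/"

def timed_request(_):
    # Measure end-to-end latency of one request.
    start = time.perf_counter()
    with urllib.request.urlopen(url) as resp:
        resp.read()
    return time.perf_counter() - start

# Simulate concurrent users and collect the latency distribution.
with ThreadPoolExecutor(max_workers=20) as pool:
    latencies = sorted(pool.map(timed_request, range(200)))

p95 = latencies[int(len(latencies) * 0.95)]
print(f"p95 latency: {p95 * 1000:.1f} ms")
server.shutdown()
```

Inspecting a percentile such as p95, rather than the average, is what lets you check that latency stays below the threshold for nearly all users under the expected load.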
Question 4:
Your customer is moving their corporate applications to Google Cloud Platform. The security team wants detailed visibility of all projects in the organization. You provision the Google Cloud Resource Manager and set up yourself as the org admin. What Google Cloud Identity and Access Management (Cloud IAM) roles should you give to the security team?
A. Project owner, network admin
B. Org admin, project browser
C. Org viewer, project viewer
D. Org viewer, project owner
Correct answer: C
Explanation: (visible to Pass4Test members only)
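Answer C maps to the `roles/resourcemanager.organizationViewer` and `roles/viewer` IAM roles. A sketch of the grants, using a hypothetical organization ID, project ID, and group address:

```shell
# Org viewer: read-only visibility into the organization resource.
gcloud organizations add-iam-policy-binding 123456789012 \
  --member='group:security-team@example.com' \
  --role='roles/resourcemanager.organizationViewer'

# Project viewer: read-only visibility into project resources.
# Granting roles/viewer at the organization level instead would be
# inherited by every project, covering "all projects" in one binding.
gcloud projects add-iam-policy-binding example-project \
  --member='group:security-team@example.com' \
  --role='roles/viewer'
```

Viewer roles give the detailed read access the security team needs while honoring least privilege; owner or admin roles would grant far more than visibility.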
Question 5:
For this question, refer to the TerramEarth case study.
The TerramEarth development team wants to create an API to meet the company's business requirements. You want the development team to focus their development effort on business value versus creating a custom framework. Which method should they use?
A. Use Google App Engine with Google Cloud Endpoints. Focus on an API for dealers and partners.
B. Use Google App Engine with a JAX-RS Jersey Java-based framework. Focus on an API for the public.
C. Use Google Container Engine with a Tomcat container with the Swagger (Open API Specification) framework. Focus on an API for dealers and partners.
D. Use Google Container Engine with a Django Python container. Focus on an API for the public.
E. Use Google App Engine with the Swagger (Open API Specification) framework. Focus on an API for the public.
Correct answer: A
Explanation: (visible to Pass4Test members only)
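With answer A, Cloud Endpoints handles the API surface (auth, monitoring, quotas) from an OpenAPI description, so the team writes only business logic. A hypothetical fragment of such a spec for a dealer-facing API on App Engine might look like:

```yaml
# Hypothetical Cloud Endpoints OpenAPI spec for a dealer/partner API.
swagger: "2.0"
info:
  title: TerramEarth Dealer API
  version: "1.0.0"
host: "example-project.appspot.com"
paths:
  /vehicles/{vin}/telemetry:
    get:
      operationId: getTelemetry
      parameters:
        - name: vin
          in: path
          required: true
          type: string
      responses:
        "200":
          description: Latest telemetry for the vehicle.
```

Managed App Engine plus Endpoints keeps the team out of framework and infrastructure work, which is exactly the "business value over custom framework" requirement.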
Question 6:
You are implementing the infrastructure for a web service on Google Cloud. The web service needs to receive and store the data from 500,000 requests per second. The data will be queried later in real time, based on exact matches of a known set of attributes. There will be periods where the web service will not receive any requests. The business wants to keep costs low. Which web service platform and database should you use for the application?
A. A Compute Engine autoscaling managed instance group and BigQuery
B. A Compute Engine autoscaling managed instance group and Cloud Bigtable
C. Cloud Run and Cloud Bigtable
D. Cloud Run and BigQuery
Correct answer: C
Explanation: (visible to Pass4Test members only)
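Answer C can be sketched as below; the service, image, instance, and table names are hypothetical. Cloud Run scales to zero during idle periods (keeping costs low), and Bigtable serves high-throughput point lookups by row key, matching the exact-match query pattern.

```shell
# Cloud Run: scales to zero between bursts, out to many instances under load.
gcloud run deploy ingest-service \
  --image=gcr.io/example-project/ingest:latest \
  --region=us-central1 \
  --min-instances=0 \
  --max-instances=1000

# Bigtable: create a table and column family for the request data,
# with the known query attributes encoded into the row key.
cbt -project example-project -instance telemetry-instance createtable requests
cbt -project example-project -instance telemetry-instance createfamily requests attrs
```

BigQuery, by contrast, is an analytics warehouse: it is not designed for 500,000 writes per second of individual rows or for low-latency exact-match reads.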
Question 7:
For this question, refer to the TerramEarth case study.
TerramEarth's 20 million vehicles are scattered around the world. Based on the vehicle's location, its telemetry data is stored in a Google Cloud Storage (GCS) regional bucket (US, Europe, or Asia). The CTO has asked you to run a report on the raw telemetry data to determine why vehicles are breaking down after 100,000 miles.
You want to run this job on all the data. What is the most cost-effective way to run this job?
A. Move all the data into 1 region, then launch a Google Cloud Dataproc cluster to run the job.
B. Launch a cluster in each region to preprocess and compress the raw data, then move the data into a multi-region bucket and use a Dataproc cluster to finish the job.
C. Launch a cluster in each region to preprocess and compress the raw data, then move the data into a regional bucket and use a Cloud Dataproc cluster to finish the job.
D. Move all the data into 1 zone, then launch a Cloud Dataproc cluster to run the job.
Correct answer: C
Explanation: (visible to Pass4Test members only)
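Answer C minimizes egress cost by compressing data in the region where it already lives before moving it. A sketch of the per-region step and the consolidation step, with hypothetical cluster, job, and bucket names:

```shell
# Repeat per region (shown here for Europe): preprocess and compress
# the raw telemetry close to where it is stored.
gcloud dataproc clusters create preprocess-eu --region=europe-west1

gcloud dataproc jobs submit spark \
  --cluster=preprocess-eu --region=europe-west1 \
  --class=com.terramearth.CompressTelemetry \
  --jars=gs://example-jobs/preprocess.jar \
  -- gs://example-telemetry-eu/raw gs://example-telemetry-eu/compressed

# Move only the compressed output into a single regional bucket,
# then run the final Dataproc job in that same region.
gsutil -m cp -r gs://example-telemetry-eu/compressed \
  gs://example-analysis-us-central1/input/eu
```

Transferring compressed rather than raw data cuts cross-region transfer charges, and a regional (not multi-region) destination bucket keeps storage cost lower while co-locating the data with the final cluster.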