A company is storing large numbers of small JSON files (ranging from 1-4 bytes) that are received from IoT devices and sent to a cloud provider. In any given hour, 100,000 files are added to the cloud provider.
What is the MOST cost-effective way to bring this data into a Snowflake table?
A. A stream
B. A copy command at regular intervals
C. A pipe
D. An external table
Correct answer: C
Explanation: (Visible only to Pass4Test members)
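For reference, a pipe (Snowpipe) continuously loads micro-batches of staged files as they land, which avoids keeping a warehouse running for frequent small COPY jobs. A minimal sketch, with illustrative database, stage, and table names (none of these come from the question):

```sql
-- Hypothetical names throughout; assumes an external stage already
-- points at the cloud storage location receiving the IoT files.
CREATE PIPE iot_db.raw.device_events_pipe
  AUTO_INGEST = TRUE                 -- load files as event notifications arrive
AS
  COPY INTO iot_db.raw.device_events
  FROM @iot_db.raw.iot_stage
  FILE_FORMAT = (TYPE = 'JSON');
```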
Question 2:
An Architect is designing a solution that will be used to process changed records in an orders table.
Newly-inserted orders must be loaded into the f_orders fact table, which will aggregate all the orders by multiple dimensions (time, region, channel, etc.). Existing orders can be updated by the sales department within 30 days after the order creation. In case of an order update, the solution must perform two actions:
1. Update the order in the F_ORDERS fact table.
2. Load the changed order data into the special table ORDER_REPAIRS.
This table is used by the Accounting department once a month. If the order has been changed, the Accounting team needs to know the latest details and perform the necessary actions based on the data in the order_repairs table.
What data processing logic design will be the MOST performant?
A. Use two streams and two tasks.
B. Use one stream and one task.
C. Use one stream and two tasks.
D. Use two streams and one task.
Correct answer: C
Explanation: (Visible only to Pass4Test members)
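One common shape for the "one stream, two tasks" answer is a root task that consumes the stream, with a dependent task chained via AFTER. The sketch below uses assumed names and deliberately simplified DML bodies to show the structure only; note that DML against a stream advances its offset, so the second task must read data the first task persisted:

```sql
-- Assumed names; bodies simplified to show the task structure only.
CREATE STREAM orders_stream ON TABLE orders;

-- Task 1: consume the change records into the fact load path.
-- (A real load would MERGE, updating existing orders by key.)
CREATE TASK load_f_orders
  WAREHOUSE = etl_wh
  SCHEDULE = '5 MINUTE'
  WHEN SYSTEM$STREAM_HAS_DATA('ORDERS_STREAM')
AS
  INSERT INTO f_orders
  SELECT order_id, order_date, region, channel, amount
  FROM orders_stream;               -- this DML advances the stream offset

-- Task 2: chained with AFTER; loads the monthly repairs table.
-- Because the stream was consumed above, this reads from a persisted
-- copy of the delta rather than from the stream itself.
CREATE TASK load_order_repairs
  WAREHOUSE = etl_wh
  AFTER load_f_orders
AS
  INSERT INTO order_repairs
  SELECT order_id, order_date, region, channel, amount
  FROM order_changes_staging;       -- assumed staging table written
                                    -- alongside the fact load
```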
Question 3:
How can the Snowpipe REST API be used to keep a log of data load history?
A. Call loadHistoryScan every 10 minutes for a 15-minute range.
B. Call insertReport every 20 minutes, fetching the last 10,000 entries.
C. Call insertReport every 8 minutes for a 10-minute time range.
D. Call loadHistoryScan every minute for the maximum time range.
Correct answer: A
Explanation: (Visible only to Pass4Test members)
Question 4:
An Architect needs to design a Snowflake account and database strategy to store and analyze large amounts of structured and semi-structured data. There are many business units and departments within the company. The requirements are scalability, security, and cost efficiency.
What design should be used?
A. Create a single Snowflake account and database for all data storage and analysis needs, regardless of data volume or complexity.
B. Use a centralized Snowflake database for core business data, and use separate databases for departmental or project-specific data.
C. Use Snowflake's data lake functionality to store and analyze all data in a central location, without the need for structured schemas or indexes.
D. Set up separate Snowflake accounts and databases for each department or business unit, to ensure data isolation and security.
Correct answer: B
Explanation: (Visible only to Pass4Test members)
Question 5:
Which command will create a schema without Fail-safe and will restrict object owners from passing on access to other users?
A. create TRANSIENT schema EDW.ACCOUNTING WITH MANAGED ACCESS
DATA_RETENTION_TIME_IN_DAYS = 1;
B. create schema EDW.ACCOUNTING WITH MANAGED ACCESS;
C. create TRANSIENT schema EDW.ACCOUNTING WITH MANAGED ACCESS
DATA_RETENTION_TIME_IN_DAYS = 7;
D. create schema EDW.ACCOUNTING WITH MANAGED ACCESS
DATA_RETENTION_TIME_IN_DAYS = 7;
Correct answer: C
Explanation: (Visible only to Pass4Test members)
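Why C: TRANSIENT removes the Fail-safe period, WITH MANAGED ACCESS moves grant authority from individual object owners to the schema owner, and the retention parameter only tunes the Time Travel window. Cleanly formatted:

```sql
-- TRANSIENT         -> no Fail-safe for objects in the schema
-- MANAGED ACCESS    -> object owners cannot grant privileges onward;
--                      only the schema owner (or a role with the
--                      MANAGE GRANTS privilege) can
-- DATA_RETENTION_TIME_IN_DAYS -> Time Travel window, not Fail-safe
CREATE TRANSIENT SCHEMA EDW.ACCOUNTING
  WITH MANAGED ACCESS
  DATA_RETENTION_TIME_IN_DAYS = 7;
```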
It is more efficient to study at your own pace, for example by skipping topics you are already comfortable with. Working through the ARA-R01 practice questions is also a good idea.