Question 1:
In which two situations is it appropriate to use a Sparse Lookup? (Choose two.)
A. When accessing DB2 data using the DB2 API stage.
B. When invoking a stored procedure in the database for each row in the streaming link.
C. When the output of the Lookup stage needs to be hashed partitioned.
D. When reference data is significantly larger than the streaming data (100:1).
Correct answer: B, D
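To see why B and D are correct, it helps to contrast the two lookup styles. A normal lookup preloads the entire reference data set into memory; a sparse lookup issues one database request per streaming row, which is what you want when the reference data dwarfs the stream (100:1) or when a stored procedure must fire per row. This is a plain-Python sketch of the idea, not DataStage code; the function and variable names are illustrative only.

```python
# Hypothetical sketch (not DataStage code): normal (in-memory) lookup
# versus sparse (query-per-row) lookup, modeled in plain Python.

def normal_lookup(stream_rows, reference_rows, key):
    """Load ALL reference data into memory, then probe per row.
    Sensible only when the reference data fits comfortably in memory."""
    table = {r[key]: r for r in reference_rows}   # full reference load
    return [(row, table.get(row[key])) for row in stream_rows]

def sparse_lookup(stream_rows, query_db, key):
    """Fire one database request per streaming row -- nothing preloaded.
    Wins when reference data dwarfs the stream, or when a stored
    procedure must be invoked once per row."""
    return [(row, query_db(row[key])) for row in stream_rows]

# Toy "database": one query per incoming row instead of a bulk load.
reference = {1: "gold", 2: "silver"}
rows = [{"cust": 1}, {"cust": 2}, {"cust": 3}]
out = sparse_lookup(rows, lambda k: reference.get(k), "cust")
```

The trade-off is the reverse of the usual one: the sparse lookup pays a round-trip per row, so it only makes sense when loading the reference data up front would cost more.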
Question 2:
Which two properties can be set to read a fixed width sequential file in parallel? (Choose two.)
A. Set the "Number of Readers Per Node" optional property to a value greater than 1.
B. Set the "Read from Multiple Nodes" optional property to a value greater than 1.
C. Set the Execution mode to "Parallel".
D. Set Read Method to "File Pattern".
Correct answer: A, B
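The reason fixed-width files can be read in parallel at all (the idea behind options A and B) is that every record has the same length, so each reader can seek directly to its own byte range without scanning for delimiters. A minimal sketch of that range arithmetic, with an assumed record width:

```python
# Why fixed-width files parallelize: with a constant record length, each
# reader's start offset is pure arithmetic -- no delimiter scanning needed.

RECORD_LEN = 8  # assumed fixed record width in bytes (illustrative)

def reader_ranges(total_records, num_readers):
    """Split the file into contiguous byte ranges, one per reader."""
    per = -(-total_records // num_readers)  # ceiling division
    return [(i * per * RECORD_LEN,
             min((i + 1) * per, total_records) * RECORD_LEN)
            for i in range(num_readers)
            if i * per < total_records]

# 10 records split across 3 readers -> each seeks to its range independently
print(reader_ranges(10, 3))   # [(0, 32), (32, 64), (64, 80)]
```

A delimited file offers no such shortcut, which is why option D ("File Pattern") addresses reading multiple files, not reading one file in parallel.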
Question 3:
Which three methods can be used to import metadata from a Web Services Description Language (WSDL) document? (Choose three.)
A. Job Stage Column tab properties entered using "Load" feature
B. Orchestrate Schema Definitions
C. Web Services WSDL Definitions
D. Web Service Function Definitions
E. XML Table Definitions
Correct answer: C, D, E
Question 4:
DataStage offers database connectivity through connector, native parallel, and plug-in stage types. Which two statements are correct? (Choose two.)
A. ODBC API is a plug-in stage.
B. For maximum parallel performance, scalability, and features it is best to use the native parallel database stages.
C. The connector stage offers better functionality and performance and is the best to use.
D. Next to the connector stage it is best to use the native parallel database stages.
Correct answer: C, D
Question 5:
A job design consists of an input Sequential File stage, a Modify stage, a Filter stage, and an output Sequential File stage. The job is run on an SMP machine with a configuration file defined with three nodes. No environment variables were set for the job. How many osh processes will this job create?
A. 8
B. 16
C. 9
D. 11
Correct answer: C
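One common way to account for the 9 processes, assuming default operator combining merges the Modify and Filter stages into a single combined parallel operator: one conductor, one section leader per node, plus the player processes. The Sequential File source and target each run sequentially (one player each), while the combined operator runs on every node. A sketch of that arithmetic:

```python
# Process accounting sketch (assumption: default operator combining merges
# Modify and Filter into one combined parallel operator).

nodes = 3                       # from the 3-node configuration file

conductor = 1                   # one conductor process for the job
section_leaders = nodes         # one section leader per node

# Player processes:
seq_input = 1                   # Sequential File source runs sequentially
combined_modify_filter = nodes  # combined operator runs on every node
seq_output = 1                  # Sequential File target runs sequentially

total = (conductor + section_leaders
         + seq_input + combined_modify_filter + seq_output)
print(total)                    # 9
```

Disabling operator combining (or adding more stages) changes the player count, which is why the question stipulates that no environment variables were set.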
Question 6:
You have a 3TB dataset hash-partitioned on CustID in a clustered environment. You need to join this dataset with 1GB of reference data on OrderID. Which technique is most appropriate?
A. Use Lookup stage, select auto partitioning for the stream link and hash-partition the reference link on CustID.
B. Use Lookup stage, select auto partitioning for the stream link and entire partitioning for the reference link.
C. Use Lookup stage, select auto partitioning for both links.
D. Use Join stage, hash-partition and sort both links on OrderID.
Correct answer: B
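The point of answer B is that "entire" partitioning replicates the small (1 GB) reference set to every node, so each node can resolve lookups locally and the 3 TB stream never has to be repartitioned off its existing CustID hash. A plain-Python illustration of the two partitioning shapes (not DataStage code; names are illustrative):

```python
# Illustration: "entire" partitioning copies the full reference set to every
# node, while hash partitioning sends each row to exactly one node.

def entire_partition(reference_rows, num_nodes):
    """Every node receives the FULL reference data set."""
    return [list(reference_rows) for _ in range(num_nodes)]

def hash_partition(rows, key, num_nodes):
    """Each row goes to exactly one node, chosen by hashing the key."""
    parts = [[] for _ in range(num_nodes)]
    for row in rows:
        parts[hash(row[key]) % num_nodes].append(row)
    return parts

ref = [{"OrderID": i} for i in range(4)]
copies = entire_partition(ref, 3)
# every node now holds all 4 reference rows, so any stream row on any node
# finds its OrderID match without the stream being repartitioned
```

Answer D (Join with hash-partition and sort on OrderID) would work, but it forces the 3 TB dataset to be repartitioned and sorted, which is exactly the cost the Lookup-with-entire approach avoids.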
Question 7:
Using a DB2 for z/OS source database, a 200 million row source table with 30 million distinct values must be aggregated to calculate the average value of two column attributes. What would provide optimal performance while satisfying the business requirements?
A. Using custom SQL with an ORDER BY clause based on key columns, select all source rows using the DB2 API stage. Aggregate using a Hash Aggregator.
B. Using custom SQL with AVG functions and a DISTINCT clause, select all source rows using a DB2 Enterprise stage.
C. Select all source rows using a DB2 Enterprise stage, use a parallel Sort stage with the specified sort keys, calculate the average values using a parallel Transformer with stage variables and output link constraints.
D. Select all source rows using a DB2 API stage. Aggregate using a Sort Aggregator.
Correct answer: D
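The reason a Sort Aggregator wins here: a hash aggregator must hold all 30 million distinct groups in memory at once, while a sort aggregator working on key-sorted input can emit each group's average the moment the key changes, keeping memory constant regardless of group count. A minimal sketch of sort-based averaging (plain Python, illustrative names):

```python
# Sketch of sort-based aggregation: with input pre-sorted on the grouping
# key, each group's average is emitted as soon as the key changes, so
# memory use stays constant no matter how many distinct groups exist.

def sort_aggregate_avg(sorted_rows, key, value):
    """Yield (group_key, average) from rows pre-sorted on `key`."""
    current, total, count = None, 0.0, 0
    for row in sorted_rows:
        if row[key] != current:
            if count:                       # flush the finished group
                yield current, total / count
            current, total, count = row[key], 0.0, 0
        total += row[value]
        count += 1
    if count:                               # flush the final group
        yield current, total / count

rows = sorted([{"k": "a", "v": 2}, {"k": "b", "v": 4},
               {"k": "a", "v": 4}, {"k": "b", "v": 6}],
              key=lambda r: r["k"])
result = list(sort_aggregate_avg(rows, "k", "v"))
# [("a", 3.0), ("b", 5.0)]
```

The same logic explains why option A (Hash Aggregator over 30 million keys) is the memory-hungry choice the question is steering you away from.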