Question 1:
You execute an SAP Data Services job with Enable recovery activated. One of the dataflows in the job raises an exception that interrupts the execution. You run the job again with Recover from last failed execution enabled. What happens to the dataflow that raised the exception during the first execution?
A. It is rerun only if the dataflow is part of a recovery unit.
B. It is rerun starting with the first failed row.
C. It is rerun from the beginning, and the partially loaded data is always handled automatically.
D. It is rerun from the beginning, and the design of the dataflow must deal with partially loaded data.
Answer: B
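Options C and D differ in who takes care of rows that were already committed to the target before the failure. As an illustration only (this is plain Python, not SAP Data Services code; the sales_target table and the load_id key are hypothetical), the sketch below shows one common way a dataflow design can be made safe to rerun from the beginning: delete whatever an earlier, failed run loaded for the same batch before inserting again.

```python
# Minimal sketch (not SAP Data Services code): an idempotent load that
# tolerates partially loaded data when the dataflow is rerun from the start.
# Table and column names (sales_target, load_id) are hypothetical.
import sqlite3

def idempotent_load(conn: sqlite3.Connection, load_id: str, rows: list[tuple]) -> None:
    """Remove any rows left behind by a failed run of this load, then insert."""
    with conn:  # one transaction: delete and insert commit together
        conn.execute("DELETE FROM sales_target WHERE load_id = ?", (load_id,))
        conn.executemany(
            "INSERT INTO sales_target (load_id, year, month, amount) VALUES (?, ?, ?, ?)",
            [(load_id, *r) for r in rows],
        )

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales_target (load_id TEXT, year INT, month INT, amount REAL)")
idempotent_load(conn, "2024-01-15", [(2014, 1, 100.0), (2014, 2, 110.0)])
idempotent_load(conn, "2024-01-15", [(2014, 1, 100.0), (2014, 2, 110.0)])  # rerun: no duplicates
print(conn.execute("SELECT COUNT(*) FROM sales_target").fetchone()[0])  # -> 2
```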
Question 2:
You want to set up a new SAP Data Services landscape. You must also create new repositories. Which repository types can you create?
A. Central repository
B. Profiler repository
C. Backup repository
D. Local repository
E. Standby repository
Answer: B
Question 3:
In SAP Data Services, which basic operations can you perform with a Query transform?
A. Set a global variable to a value
B. Flag rows for update
C. "Apply functions to columns"
D. Map Columns from an input schema to an output schema"
E. "Join data from several sources"
正解:C
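As a rough illustration only (plain Python with SQLite, not SAP Data Services code; the customer and orders tables are hypothetical), the operations named in options C, D, and E correspond closely to what a SQL SELECT does: mapping input columns to output columns, applying functions to columns, and joining several sources.

```python
# Minimal sketch (not SAP Data Services code): a SQL SELECT that maps columns,
# applies a function to a column, and joins two sources - the kind of work a
# Query transform performs. Table and column names are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customer (cust_id INT, name TEXT);
    CREATE TABLE orders   (order_id INT, cust_id INT, amount REAL);
    INSERT INTO customer VALUES (1, 'Ada'), (2, 'Bob');
    INSERT INTO orders   VALUES (10, 1, 99.5), (11, 2, 20.0);
""")

rows = conn.execute("""
    SELECT o.order_id      AS ORDER_ID,      -- column mapping
           UPPER(c.name)   AS CUSTOMER_NAME, -- function applied to a column
           o.amount        AS AMOUNT
    FROM orders o
    JOIN customer c ON c.cust_id = o.cust_id -- join of two sources
    ORDER BY o.order_id
""").fetchall()
print(rows)  # [(10, 'ADA', 99.5), (11, 'BOB', 20.0)]
```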
Question 4:
You want to execute two dataflows in parallel in SAP Data Services. How can you achieve this?
A. Create a workflow containing two dataflows without connecting them with a line.
B. Create a workflow containing two dataflows and set the degree of parallelism to 2.
C. Create a workflow containing two dataflows and connect them with a line.
D. Create a workflow containing two dataflows and deselect the Execute only once property of the workflow.
Answer: A
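The scheduling idea behind answer A can be sketched as follows (plain Python, not SAP Data Services code; the dataflow names are hypothetical): objects in a workflow that are not connected by a line have no dependency between them, so the engine may start them at the same time, whereas connected objects run one after the other.

```python
# Minimal sketch (not SAP Data Services code): independent tasks submitted
# together overlap in time; chained tasks run sequentially.
from concurrent.futures import ThreadPoolExecutor, wait
import time

def dataflow(name: str) -> str:
    time.sleep(0.2)  # stand-in for the actual extract/transform/load work
    return f"{name} finished"

with ThreadPoolExecutor() as pool:
    # "Unconnected" dataflows: started together, so they run in parallel.
    futures = [pool.submit(dataflow, "DF_Customers"), pool.submit(dataflow, "DF_Orders")]
    wait(futures)
    print([f.result() for f in futures])

# "Connected" dataflows: the second one starts only after the first completes.
print(dataflow("DF_Customers"), "then", dataflow("DF_Orders"))
```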
Question 5:
You have to load a file that contains the following first three lines:
YEAR; MONTH; PLAN_AMOUNT
2014;01;100.00
2014;02;110.00
Which settings do you use when you create a file format for this file?
A. Type: Delimited; Column delimiter: ';'; Skip row header: Yes
B. Type: Delimited; Column delimiter: ';'; Skip row header: No
C. Type: Delimited; Column delimiter: <blank>; Skip row header: Yes
D. Type: Fixed width; Column lengths: 4, 2, and 6; Skip row header: Yes
Answer: A
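To make the meaning of answer A concrete, here is a small Python sketch (an illustration only, not SAP Data Services code): the file is delimited, the column delimiter is the semicolon, and the first row is skipped because it holds column names rather than data.

```python
# Minimal sketch (not SAP Data Services code): parsing the sample file with a
# ';' delimiter and "skip row header: yes" semantics.
import csv, io

sample = "YEAR;MONTH;PLAN_AMOUNT\n2014;01;100.00\n2014;02;110.00\n"

reader = csv.reader(io.StringIO(sample), delimiter=";")
next(reader)  # "skip row header: yes" -> discard the first line
for year, month, plan_amount in reader:
    print(int(year), int(month), float(plan_amount))
# 2014 1 100.0
# 2014 2 110.0
```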
Question 6:
An SAP Data Services job contains many dataflows and runs for several hours every night. If a job execution fails, you want to skip all successful dataflows and start with the failed dataflow. How do you accomplish this?
Note: There are 2 correct answers to this question.
A. Design the dataflow to ensure that a second run does not result in duplicate data.
B. Merge the dataflows from the job and rerun it.
C. Add a Try block before each dataflow and a Catch block after each dataflow.
D. Run the nightly job with the Enable recovery flag turned on.
Answer: A, C
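The checkpoint idea behind the Enable recovery flag can be sketched like this (plain Python, not SAP Data Services internals; the dataflow names and the run_dataflow stub are hypothetical): completed dataflows are recorded, and a recovery run skips everything that already succeeded and resumes at the point of failure.

```python
# Minimal sketch (not SAP Data Services code): record completed dataflows and
# skip them on a recovery run.
completed: set[str] = set()  # stand-in for the recovery state kept between runs

def run_dataflow(name: str, fail: bool = False) -> None:
    if name in completed:
        print(f"skip {name} (already succeeded)")
        return
    if fail:
        raise RuntimeError(f"{name} failed")
    completed.add(name)
    print(f"ran {name}")

job = ["DF_Stage", "DF_Transform", "DF_Load"]

try:
    for df in job:
        run_dataflow(df, fail=(df == "DF_Transform"))  # first execution fails here
except RuntimeError as err:
    print("first run aborted:", err)

for df in job:  # recovery run: skips DF_Stage, retries DF_Transform, continues
    run_dataflow(df)
```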
Question 7:
What is the SAP Data Services dataflow auditing feature used for? Note: There are 2 correct answers to this question.
A. To count the number of rows processed at user defined points to collect runtime statistics
B. To view the data as it is processed by the dataflow in order to ensure its correctness
C. To define rules based on the number of records processed overall once the dataflow is finished
D. To define rules that each record processed by the dataflow has to comply with
Answer: A, D
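As an illustration of the counting described in option A (plain Python, not SAP Data Services code; the audit point names and sample rows are hypothetical), auditing boils down to collecting row counts at defined points in a dataflow and evaluating a rule against them, for example that the number of rows read from the source equals the number of rows written to the target.

```python
# Minimal sketch (not SAP Data Services code): collect counts at two points in
# a dataflow and check a rule against them.
source_rows = [(2014, 1, 100.0), (2014, 2, 110.0), (2014, 3, None)]

audit = {"source_count": 0, "target_count": 0}

loaded = []
for row in source_rows:
    audit["source_count"] += 1   # audit point on the source
    if row[2] is None:           # a row dropped by a transform
        continue
    loaded.append(row)
    audit["target_count"] += 1   # audit point on the target

rule_ok = audit["source_count"] == audit["target_count"]  # audit rule
print(audit, "rule passed" if rule_ok else "rule failed -> raise an exception / notify")
```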
Question 8:
You want to load data from an input table to an output table using the SAP Data Services Query transform. How do you define the mapping of the columns within a Query transform? Note: There are 2 correct answers to this question.
A. Drag one column from the output schema to the input schema
B. Select one input column and enter the mapping manually
C. Select an output column and enter the mapping manually.
D. Drag one column from the input schema to the output schema
Answer: C, D