Question 1:
When generating data for prompt tuning in IBM watsonx, which of the following is the most effective method for ensuring that the model can generalize well to a variety of tasks?
A. Prioritize prompts with repetitive patterns to help the model memorize key responses.
B. Generate a single highly detailed prompt that covers all potential use cases to maximize generalization.
C. Use a diverse set of prompts covering multiple task domains with varying levels of complexity.
D. Focus on generating prompts specific to a single domain to train the model on specialized tasks.
Answer: C
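As a rough illustration of answer C, a tuning dataset can be audited for domain and difficulty coverage before training. The domain and difficulty tags below are hypothetical labels for the sketch, not watsonx fields:

```python
from collections import Counter

# Hypothetical tuning prompts, each tagged with a task domain and difficulty.
prompts = [
    ("summarization",  "easy",   "Summarize this paragraph in one sentence."),
    ("classification", "medium", "Label the sentiment of this product review."),
    ("qa",             "hard",   "Answer the question using only the given context."),
    ("summarization",  "hard",   "Condense this 10-page report into five bullets."),
    ("generation",     "easy",   "Write a polite reply to this customer email."),
]

domain_counts = Counter(domain for domain, _, _ in prompts)
difficulty_counts = Counter(level for _, level, _ in prompts)

# A diverse set spans several task domains and more than one difficulty level.
assert len(domain_counts) >= 3 and len(difficulty_counts) >= 2
```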
Question 2:
When optimizing a generative AI model using the Tuning Studio in IBM watsonx, which two of the following actions can most effectively improve model performance when the model is underfitting? (Select two)
A. Increase the model's complexity by adding more layers
B. Decrease the batch size
C. Reduce the learning rate
D. Increase the number of training epochs
E. Enable early stopping
Answer: A, D
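The intuition behind answers A and D can be shown with a toy model trained for too few epochs: with everything else fixed, additional epochs let an underfit model keep driving its training error down. This is a minimal sketch in plain Python, not the Tuning Studio API:

```python
def train_mse(epochs, lr=0.05):
    """Fit y = w*x + b to points on the line y = 2x + 1 with gradient descent."""
    data = [(x, 2 * x + 1) for x in (0.0, 0.5, 1.0, 1.5, 2.0)]
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in data:
            err = (w * x + b) - y
            w -= lr * err * x
            b -= lr * err
    # mean squared error on the training points after fitting
    return sum(((w * x + b) - y) ** 2 for x, y in data) / len(data)

underfit = train_mse(epochs=5)
converged = train_mse(epochs=500)
assert converged < underfit  # more epochs reduce error on an underfit model
```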
Question 3:
While working on generating a concise response to a user prompt, you notice that the generative AI model in IBM watsonx is producing excessively long outputs. You want to ensure that the response is informative but doesn't exceed a specific length.
Which of the following parameters should you adjust, and what is the most appropriate value to achieve a concise output without cutting off essential information?
A. Set max_tokens to 1000
B. Set max_tokens to 100
C. Set max_tokens to 5
D. Set max_tokens to 10
Answer: B
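The trade-off behind answer B can be seen with a crude whitespace tokenizer standing in for the model's real subword tokenizer (exact parameter names vary across SDKs; some watsonx interfaces call this max_new_tokens). A budget of 100 tokens leaves room for a complete explanation, while 5 cuts the answer off mid-thought:

```python
def truncate(text, max_tokens):
    """Whitespace split as a stand-in for real subword tokenization."""
    return " ".join(text.split()[:max_tokens])

answer = ("Quantum computers use qubits, which can hold superpositions "
          "of states and therefore explore many possibilities at once. ") * 20

assert len(truncate(answer, 100).split()) == 100   # room for a full explanation
assert truncate(answer, 5) == "Quantum computers use qubits, which"  # cut off mid-sentence
```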
Question 4:
You are using IBM watsonx Prompt Lab to experiment with different versions of a prompt to generate accurate and creative responses for a customer support chatbot.
Which of the following best describes a key benefit of using Prompt Lab in the process of prompt engineering?
A. It automatically generates prompts based on industry-specific data without any user input.
B. It provides a real-time environment for testing and refining prompts, helping to improve response quality.
C. It limits the number of iterations a user can test to prevent overfitting the prompt to specific outputs.
D. It allows users to generate AI models without the need for training data.
Answer: B
Question 5:
As an IBM watsonx generative AI engineer, you are tasked with creating a chatbot for a public-facing service. One key concern is ensuring that the model does not generate or propagate hate speech, abusive content, or profanity. To mitigate these risks, you must implement appropriate controls.
Which of the following is the best approach to mitigate hate speech, abuse, and profanity from being generated by your AI model?
A. Apply a simple word-level blacklist filter to detect and remove harmful content from the model output.
B. Fine-tune the model with a highly curated dataset that contains labeled examples of hate speech, abuse, and profanity for the model to learn to avoid.
C. Train the model only on data that excludes all user-generated content to prevent exposure to harmful language.
D. Use IBM watsonx's HAP (Hate, Abuse, and Profanity) filter to dynamically detect and block harmful content at inference time.
Answer: D
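Why answer A falls short can be demonstrated directly: a word-level blacklist only catches exact matches, whereas a contextual filter such as watsonx's HAP filter scores whole passages at inference time. A minimal sketch (the blacklist term is a placeholder, not real profanity):

```python
import re

BLACKLIST = {"badword"}

def blacklist_flags(text):
    """Word-level filter: flags only exact, whole-word matches."""
    return any(word in BLACKLIST for word in re.findall(r"[a-z]+", text.lower()))

assert blacklist_flags("this reply contains badword here")
assert not blacklist_flags("this reply contains b@dword here")   # trivial obfuscation slips by
assert not blacklist_flags("people like you are worthless")      # contextual abuse, no listed word
```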
Question 6:
You are tasked with designing a prompt template to assist a chatbot in generating professional email responses for customer service inquiries. The system should prioritize politeness, clarity, and conciseness.
What elements should be included in the prompt template to achieve the best results, considering optimal behavior of a large language model (LLM)? (Select two)
A. Ask the model to generate multiple versions of the response and rank them
B. Include examples of informal customer service responses for variability
C. Instruct the model to limit responses to a specific character count
D. Specify the output tone as polite and professional
E. Provide the customer's emotional context for better alignment with the tone
Answer: C, D
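Answers C and D translate directly into template text. A sketch using Python's string.Template; the field names and wording are illustrative, not a watsonx schema:

```python
from string import Template

EMAIL_TEMPLATE = Template(
    "You are a customer service assistant.\n"
    "Tone: polite and professional.\n"               # answer D: specify the tone
    "Keep the reply under $max_chars characters.\n"  # answer C: limit the length
    "Customer inquiry:\n$inquiry\n"
    "Reply:"
)

prompt = EMAIL_TEMPLATE.substitute(
    max_chars=500,
    inquiry="My order arrived damaged. Can I get a replacement?",
)
assert "polite and professional" in prompt
assert "under 500 characters" in prompt
```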
Question 7:
You're developing a generative AI system for a medical diagnosis application that uses patient data. Your responsibility includes designing prompts that extract valuable insights without exposing sensitive patient information.
Which of the following steps is the most effective way to reduce model risks related to privacy while ensuring useful outputs from the AI?
A. Employ differential privacy techniques to add noise to the model's outputs.
B. Restrict the model's output length to reduce the risk of sensitive information leakage.
C. Increase the length of the prompts to provide more context, ensuring more accurate results.
D. Utilize a smaller model to minimize the likelihood of overfitting sensitive data.
Answer: A
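Answer A can be sketched with the classic Laplace mechanism: noise scaled to sensitivity/epsilon is added to a released statistic, so no individual record can be inferred from the output. A minimal stdlib-only sketch (a production system would use a vetted DP library, not hand-rolled noise):

```python
import random

def dp_release(true_value, epsilon, rng, sensitivity=1.0):
    """Laplace mechanism: the difference of two Exp(eps/sensitivity) draws
    is Laplace-distributed with scale sensitivity/epsilon."""
    rate = epsilon / sensitivity
    noise = rng.expovariate(rate) - rng.expovariate(rate)
    return true_value + noise

rng = random.Random(7)
releases = [dp_release(100.0, epsilon=1.0, rng=rng) for _ in range(2000)]
mean = sum(releases) / len(releases)
assert abs(mean - 100.0) < 0.5  # noisy per release, but unbiased around the true value
```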
Question 8:
In the lifecycle of deploying a prompt template for a generative AI solution, which of the following best describes the stage where user feedback is integrated to refine the template's performance?
A. Iterative prompt tuning based on A/B test results and feedback loops
B. Initial testing on synthetic datasets and model validation
C. Retraining the model based on emerging trends in data
D. Deployment to production with regular monitoring and logging
Answer: A
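The feedback-loop stage in answer A amounts to comparing prompt-template variants on user ratings and promoting the winner. A toy sketch with made-up A/B feedback data:

```python
# 1 = user rated the response helpful, 0 = not helpful (hypothetical A/B results).
feedback = {
    "template_v1": [1, 0, 1, 1, 0, 1, 1, 1],   # 6/8 helpful
    "template_v2": [0, 1, 0, 0, 1, 0, 1, 0],   # 3/8 helpful
}

def promote(results):
    """Keep the prompt template with the highest helpfulness rate."""
    return max(results, key=lambda name: sum(results[name]) / len(results[name]))

assert promote(feedback) == "template_v1"
```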
Question 9:
You are analyzing prompts submitted to a Generative AI model used for summarizing long research papers.
One user submits the following prompt: "Summarize this 40-page research paper on quantum computing, including details on every section and subsection, providing a detailed description of key points, methodologies, results, discussions, and future work. The summary should be at least 5 pages long."
Why is this prompt considered inefficient, and how should it be optimized?
A. The prompt is inefficient because it requests a 5-page summary, which is unnecessary for summarizing the key information from a research paper.
B. The prompt is inefficient because it does not specify a character limit, which means the model might generate overly verbose output.
C. The prompt is efficient because it clearly outlines the expectations and ensures a comprehensive summary.
D. The prompt is inefficient because it asks for too much detail across all sections, leading to excessive token usage and unnecessary information in the output.
Answer: D