The MX8 Labs platform validates responses to text questions in surveys using AI by default, ensuring that the input matches the expected format or level of specificity. Users can also provide custom validation logic for specialized requirements.
Default AI Validation
By default, text questions are processed through AI validation to verify that responses meet the question's expectations. For instance, if a question requires specific examples or details, the AI determines whether the response meets these criteria. If a response is invalid, the system provides a clarification message to guide the respondent in revising their answer. This happens up to three times; after that, the respondent moves on to the next question.
By default, we don't terminate the respondent. A validation error is logged instead, and you can count these errors using `count_validation_errors()` as usual.
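A minimal sketch of the default behavior, assuming `s` is the survey object used in the examples later in this section and that `count_validation_errors()` is callable in the same scope:

```python
# Default AI validation applies automatically; no extra parameters are needed.
s.text_question("What did you like most about the product?")

# Invalid responses are logged rather than terminating the respondent,
# so they can be tallied afterwards.
errors = count_validation_errors()
```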
Customizing Validation
The AI scores each response on a five-point scale against the following guidelines:
1. Completely irrelevant, nonsensical, or empty (indicating no effort or bot-like behavior).
2. Minimally relevant, lacking meaningful effort, or showing signs of inattentiveness (e.g., rushed or generic responses).
3. Partially relevant and showing effort, but lacking key elements or deviating slightly from the question.
4. Mostly relevant and attentive, but missing minor details or clarity.
5. Entirely relevant and reasonable, directly addressing the question, even if concise.
You can customize the validation in four ways; the three numeric parameters are combined in the sketch after this list:
- The `max_attempts` parameter specifies the number of attempts a respondent has to provide a valid answer. It defaults to 3.
- The `quality_threshold` parameter specifies the minimum score a response needs to pass. It defaults to 3, which excludes only low-effort responses.
- The `termination_threshold` parameter specifies the minimum score the respondent must reach to continue taking the survey after the specified number of attempts. It defaults to 1, which never terminates anyone; setting it to 2 will screen out bot-like responses.
- The `validation_instructions` parameter states explicitly what a response is required to include (see the next section).
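The following sketch combines the three numeric parameters described above. The parameter names come from this section, while the question text and values are illustrative assumptions:

```python
s.text_question(
    "Describe a recent experience with our support team.",
    max_attempts=2,           # allow two tries instead of the default three
    quality_threshold=4,      # require mostly relevant, attentive answers
    termination_threshold=2,  # terminate respondents whose final score is below 2 (bot-like)
)
```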
Customizing Validation with Instructions
The `validation_instructions` parameter allows you to define specific rules for AI validation. These instructions guide the AI in interpreting and validating responses. For example, you might specify that responses should include particular details, such as roles for career aspirations or specific artists for music preferences.
Example:
```python
s.text_question(
    "What is your favorite book?",
    validation_instructions="Responses must refer to specific books."
)
```
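A vague answer such as "something by my favorite author" would fail this check, and the AI would reply with a clarification message asking for a specific title, following the retry behavior described above.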
Advanced Validation with `custom_validator`
If the default AI validation does not meet your needs, you can define a custom validation function using the `custom_validator` parameter. This function receives the response and returns an error message if the response is invalid, or `None` if it is valid. This is useful, for example, for "gotcha" questions designed to ensure respondents are paying attention.
Example:
```python
s.text_question(
    "Enter the verification code:",
    custom_validator=lambda x: "Incorrect code. Please try again." if x != "12345" else None
)
```
Explanation: The custom validator checks whether the response matches the expected value ("12345") and returns a custom error message if it does not.
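A named function works just as well as a lambda. The sketch below is an illustrative assumption rather than part of the platform API; it shows a validator that enforces a minimum word count using the same return convention (an error message for invalid responses, `None` for valid ones):

```python
def require_min_words(response):
    # Reject answers shorter than five words; None signals a valid response.
    if len(response.split()) < 5:
        return "Please answer in at least five words."
    return None

s.text_question(
    "Why did you choose this product?",
    custom_validator=require_min_words
)
```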
Example Input and Output Scenarios
Here are more examples from unit tests:
1. **Question:** What are your career aspirations?
   **Response:** Become a software engineer
   **Validation Instruction:** Specify roles, industries, or goals.
   **Output:** The response is accepted.
2. **Question:** Where would you like to go on vacation?
   **Response:** Overseas
   **Validation Instruction:** Specify particular destinations, not general categories.
   **Output:** Validation fails, and the respondent is asked: "Could you share a specific destination you'd like to visit on vacation? That would really help!"
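For reference, the question behind scenario 2 could be declared as follows; this is a sketch assuming the same `s.text_question` API shown earlier:

```python
s.text_question(
    "Where would you like to go on vacation?",
    validation_instructions="Specify particular destinations, not general categories."
)
```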