Validating Complex Surveys on the MX8 Research Platform
When designing complex surveys, it is crucial to ensure their accuracy and functionality from the outset. To streamline this process, the MX8 Research platform includes a unique feature: every time a survey is saved, it is validated with simulated responses. This automated check is essential to ensuring that each survey flows smoothly and every question works as expected.
How Survey Validation Works
Each time you save a survey on the MX8 platform, the system simulates responses to your survey by randomly selecting answers or using any defaults specified within the survey code.
The purpose of this simulated validation is twofold:
Validate Question Programming: ensure every question is set up correctly and error-free.
Test All Logic Paths: verify that all possible paths, skips, and terminations (e.g., screening respondents out) within the survey are functional.
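For instance, consider a question with no default (the variable name and options below are illustrative, not taken from the platform's documentation). Each time the survey is saved, the validator answers it for every simulated respondent by picking options at random:
favorite_genres = s.multi_select_question(
    question="Which of these genres do you enjoy? Please select all that apply.",
    options=["Drama", "Comedy", "Documentary", "Sports"],
)
# No default is set, so each simulated respondent selects options at random.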
When the simulation is complete, the platform highlights any of the following directly in the survey code:
Syntax Issues: Any programming errors in the survey code.
Unasked Questions: Questions that aren’t presented due to flawed skip or logic rules.
Excessive Term Points: Points at which large numbers of respondents are excluded from the survey, signaling potential issues with the survey’s termination logic.
This automated error logging helps identify potential issues early, minimizing the need for link testing and emphasizing upfront survey quality assurance.
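For example, a flawed skip condition can make a question unreachable. In the sketch below, the questions and the plain Python if statement used for gating are illustrative assumptions (following the Python style of the examples later in this article); the typo in the option label means the condition is never true, so validation would flag the follow-up as an unasked question:
devices = s.multi_select_question(
    question="Which of the following ways do you watch TV/video? Please select all that apply.",
    options=["Streaming services", "Cable Provider", "Antenna"],
)
# Typo: the option is spelled "Cable Provider", so this condition is
# never true and the follow-up question is never presented.
if "Cable provider" in devices:
    cable_channels = s.multi_select_question(
        question="Which channels do you watch through your cable provider?",
        options=["News", "Sports", "Movies", "Kids"],
    )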
Fixing Logic Errors with Default Values
If you encounter errors about unasked questions or high termination rates, you should:
Review Skip or Termination Logic: Identify which questions contribute to these logic pathways and adjust them as necessary.
Set Defaults for Relevant Questions: For questions influencing skip or term decisions, apply defaults to align the simulated responses more closely with expected real-world responses.
Consider this example where the survey screens out anyone who doesn’t watch streaming services:
video_devices = s.multi_select_question(
    question="Which of the following ways do you watch TV/video? Please select all that apply.",
    options=[
        "Streaming services",
        "TV Apps",
        "Cable Provider",
        "Antenna",
        "NextGen TV",
        "Satellite Service",
    ],
    randomize=True,
    other_options=["None of the above"],
)
s.terminate_if(
    "Streaming services" not in video_devices,
    reason="Sorry, this survey is only for people who watch streaming services.",
)
In this scenario, because simulated responses are chosen at random, they often don’t include “Streaming services.” Consequently, most simulated respondents are terminated from the survey. To avoid this, add a default response that reflects real-world behavior. For example, since streaming is widely used, you could set it as the default:
video_devices = s.multi_select_question(
    question="Which of the following ways do you watch TV/video? Please select all that apply.",
    options=[
        "Streaming services",
        "TV Apps",
        "Cable Provider",
        "Antenna",
        "NextGen TV",
        "Satellite Service",
    ],
    default=["Streaming services"],
    randomize=True,
    other_options=["None of the above"],
)
s.terminate_if(
    "Streaming services" not in video_devices,
    reason="Sorry, this survey is only for people who watch streaming services.",
)
By applying a default response, you simulate a more realistic distribution, helping prevent excessive terminations and ensuring your survey logic works as expected.
Conclusion
Although setting defaults requires some additional effort, it significantly reduces the need for repetitive link testing, allowing you to focus on building effective surveys that work correctly from the start.