An important step in completing your project is running simulated data through your survey. The test data feature runs a bug-checking process developed to find potential errors in survey logic or survey flow. The test data process also allows thorough testing of the report to help verify that skips and question logic have been defined properly.
Simulated testing looks for errors in survey logic or survey flow by forcing simulated data through your survey.
1: Running Simulated Data
Note: Running Simulated data is only available during the dev and testing phase of your project. You won’t be able to run simulated data on a live survey, unless you’re working in a temporary version of the project.
From the "Test" section of the menu used throughout the Decipher platform, you can select "Simulated Data," opening the test data window.
Selecting "Run Simulated Data" forces a minimum of 100 simulated responses through your completed project, randomly testing each element set up in your project.
Any simulated data that is run through your project mimics the path that actual respondents might take when the survey is live and displays in the reporting tools. The simulated data must obey the setup of the survey, respecting question logic, quotas, sample sources, languages, etc.
The process could take a few seconds or a few minutes, dependent upon the length and complexity of your survey.
2: Configuring a Test Simulation
To set additional conditions for the test data click "Configure Test Simulation" and more options appear. Descriptions for each option are provided next to the action.
- Run # simulations: Allows you to choose how much test data runs through the survey.
- Qualified data only: Check this box to submit mostly qualified responses.
- Skip validation: When checked, the test data ignores validation in order to maximize the efficiency of the test.
- Respect optionals: When checked, optional answers are respected, leaving some blank.
- Quota increase/decrease: Allows you to increase or decrease the quota count.
2.1: Custom Test Simulation Options
The custom test simulation is helpful if you need more data to reach a certain part of your survey. For general testing purposes it's good practice to let the data fall out naturally, without restrictions, in order to identify any issues or errors.
The "Custom Test Simulation" area allows you to enter specific commands to direct the path of the test data. For example, you can specify list variables or force a specific answer to a question (examples included below). If you are entering multiple commands, enter each on a new line. Once the test simulation is configured, click "Apply" and "Run Test Data."
Instead of completely random execution, test data can also be configured with specific answers to individual questions using this field. Review the examples below to learn how to configure the test data for your question type.
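For instance, several of the forms described below can be combined, with one command per line (the question numbers and answers here are illustrative):

q1: r1 r2
q2: r1:3 r2 r3
q3: 1-5..1

This would restrict q1 to answers r1 and r2, weight r1 on q2, and generate a one-decimal number between 1 and 5 for q3.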
For questions where only a single answer can be selected, enter the question number and answer option(s) in this form:
q4: r1 r2 r3
In this example the test data is run through answers r1, r2 and r3 with equal chance.
To weight an option in a question where only a single answer can be selected, enter the question number and answer option(s) in this form:
q4: r1:3 r2 r3
This weights r1 with 3 points. The default weight is 1, so r1 is chosen 3 out of 5 times, while r2 and r3 are each chosen 1 out of 5 times.
For 2-dimensional questions, follow this form:
q4.r1: c1 c2
q4.r2: c3 c4
In this example the question is 2-dimensional, with rows and columns. As the test data runs through q4, columns c1 or c2 are chosen for row 1, and columns c3 or c4 are chosen for row 2.
Non-numeric data can also be specified in this form:
q4.r1: blue red green
Note: If a response contains spaces, replace the space with _. A colon (:) cannot be used because it specifies weighting.
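For example, to simulate answers that contain spaces (the values here are hypothetical):

q4.r1: light_blue dark_green

Here the underscores stand in for the spaces in "light blue" and "dark green".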
For multi-select questions, a percentage chance is supplied per row/col or for the whole question. For example:
q4: 20%
This gives each item in the multi-select a 20% chance to be checked.
For number questions that allow decimals, specify ..X to decide the number of digits after the decimal point. For example:
q5: 1-2..2 2-3..2:4
This generates 1.00 .. 2.00 20% of the time, and 2.00 .. 3.00 80% of the time.
For number questions that utilize validate sum, specify a number within the value range for each answer option in the question.
For example: Q10 Validate Sum = 10 and Value Range = 0-10, enter -
q10.r1.c1: 1
q10.r2.c1: 1
q10.r3.c1: 1
q10.r4.c1: 1
q10.r5.c1: 1
q10.r6.c1: 1
q10.r7.c1: 2
q10.r8.c1: 2
This meets the sum and range requirements.
3: Viewing Test Data Results
Once complete, the testing status appears in the results. To open the report for further testing of your survey, select "View Simulated Data in Report."
Note: When running test data, quotas with smaller limits may run overquota. This must be done to ensure that enough test data is run through the survey to qualify it for launch.
3.1: Survey is Ready for Launch
Because the survey progress bar relies on simulated data, it's suggested that you run at least 200 attempts through the final version of your survey prior to launch. This helps establish all of the possible paths that a respondent can take and ensures that the progress bar percentage lines up correctly.
If your survey is especially complex, consider running more simulated data to account for those complex paths.
Once at least 100 qualified completes have made it through your survey free of errors, your survey is ready for launch. You'll receive the following message upon a successful attempt.
3.2: Survey Contains Errors
If errors are present in the survey logic or flow, you can view them in the test environment.
There are two types of errors that you might encounter when running simulated data:
3.2.1: Fatal Errors
Fatal Errors indicate that the survey ended unexpectedly due to an error in the survey. This is a serious issue that must be resolved prior to the launch of the survey.
Clicking on an error opens the Fatal Report detail page for that particular error in your survey.
You can review the Fatal Report detail page to locate the source of the error.
If available, click "Previous" to view information about the questions that occur before the error.
You can use the details page to help you troubleshoot the error. The information available after "The traceback:" should give you a precise snapshot of the error that caused the simulated data to fail. Use the information from the test data environment to identify and resolve the error when working in the survey editor or XML editor.
In the example above, the source of the fatal error is "name 'q5' is not defined." We can search through the XML for this reference and try to resolve the issue.
After resolving the error in your survey programming, clear the simulated data in the test environment and re-run the simulated data.
If fatal errors persist, repeat the process for troubleshooting. If you are unable to resolve an error, then please feel free to contact email@example.com.
3.2.2: Failed Attempts
Failed Attempts indicate that the simulated data was unable to continue through the survey due to issues such as validation errors.
If a fail occurs, click "View Test Data History," select the most recent Test Data attempt and click "fail" to view details about the error.
Click "Previous" to view information about the questions that occur before the error.
You can use the details page to help you troubleshoot the error. The details screen displays the page on which the data failed.
Read the error to help you determine the issue. You can also manually correct the error on the screen by updating the fields or answers selected on the screen and hitting "Continue" on the bottom of the screen.
If you are allowed to continue through the survey after submitting the information in the proper format and the question is working as desired, then you know that this was a test data error (and your survey is working properly). You can avoid this error in the future by using the Custom Configuration and specifying the data behavior for the troublesome question in future simulated data runs. If the survey is not working as expected, use the survey editor or XML editor to fix the issue.
After setting the Custom Configuration or resolving the error in your survey programming, clear the simulated data in the test environment and re-run the simulated data.
If simulated data failures persist, repeat the process for troubleshooting. If you are unable to resolve an error, then please feel free to contact firstname.lastname@example.org.
3.2.3: CPU Warning
A CPU warning indicates that a code block in the survey is using too much CPU processing power. This issue must be corrected before launching the survey. Possible causes:
- questions with a long attribute list
- large surveys
- complex quota setup
- extensive use of persistent variables
- poorly written exec blocks
Users with cloud access can run sst --profile to help troubleshoot the problem.
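Decipher exec blocks run Python, so the last cause above (poorly written exec blocks) often comes down to recomputing an expensive structure every time the block executes. As a generic illustration (plain Python, not Decipher-specific API, with made-up data), the sketch below contrasts rebuilding a lookup on every call with building it once and reusing it:

```python
# Generic illustration of a CPU-heavy pattern sometimes seen in exec blocks.
# The attribute list and question codes here are made up for the example.

ATTRIBUTES = ["attr%d" % i for i in range(5000)]  # a long attribute list

def slow_lookup(code):
    # Poor pattern: rebuild the whole mapping every time the block runs,
    # so each respondent pays the full construction cost again.
    mapping = {name: i for i, name in enumerate(ATTRIBUTES)}
    return mapping[code]

# Better pattern: build the mapping once and reuse it on every execution.
MAPPING = {name: i for i, name in enumerate(ATTRIBUTES)}

def fast_lookup(code):
    return MAPPING[code]

# Both return the same result; only the per-call cost differs.
assert slow_lookup("attr42") == fast_lookup("attr42") == 42
```

The same idea applies to quota checks and persistent variables: compute shared data once, outside the per-respondent path, rather than on every page load.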
4: Clearing Simulated Data
Note: Clearing the test data resets any individuals who are currently manually testing the survey. Their place is reset and they need to restart the testing at the beginning of the survey.
Clearing the simulated data removes all old test data and allows you to restart the debugging process by running new simulated data through the survey. Clearing the test data also removes any errors found in the previous simulated data runs, thereby allowing you to re-run simulated data to look for new or persistent errors.
From the "Test" section of the menu used throughout Decipher, you can select "Simulated Data," opening the test data window. Select the "Clear Test Data" button. There may be a delay while the system loads.
You are asked if you want to clear all test data. Click "Yes, Clear Data" to clear the test data.
A message appears confirming that the test data has been cleared. Click "OK" to return to the survey.