How to set up a Monadic Test?

What is a Monadic Test?

In a monadic test, participants are presented with a single concept, and follow-up questions are asked to evaluate it -- likeability, likes/dislikes, ratings of specific attributes, etc. The monadic survey design is used when you need to expose only a single concept to a target audience.

How can I use Group Questions to design a Monadic survey?

For example: 

In this case, you have 2 concepts, you need to expose only 1 concept to each participant, and you want the shown concept to be chosen randomly. For each concept, you have 4 questions (Show Concept, Appeal, Reason for Appeal, Text Highlighter) that you want to ask in a fixed order.

Step 1: Select the sub-questions for the first concept, labeled as Group A, then select "Show in all order" in the Question Display Option. With this setting, the questions will be asked in this order: Concept A -> Appeal A -> Reason A -> Text Highlighter A.

Similarly, select the sub-questions for the second concept, labeled as Group B, and select "Show in all order" again. In this example, the questions will be asked in this order: Concept B -> Appeal B -> Reason B -> Text Highlighter B.

Here is a sample video for creating Step 1:  



Step 2: Add a new Group question and then select the newly formed Group A and Group B.


Step 3: This time, select "Randomize all and randomly pick some" and specify the number of randomly selected questions in the box. In this case, just 1 (i.e. input 1). In this last step, the system will randomly pick which concept to ask -- either Group A or Group B.
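Conceptually, the "Randomize all and randomly pick some" setting in Step 3 behaves like drawing a random sample of size 1 from the two groups. Here is a minimal sketch of that idea (a hypothetical illustration only, not the platform's actual code):

```python
import random

# The two sub-question groups built in Step 1
groups = ["Group A", "Group B"]

# "Randomly pick some" with 1 entered in the box: sample exactly one group.
# The participant only sees the questions inside the picked group.
picked = random.sample(groups, k=1)
```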

Here is a sample video for creating Steps 2 and 3:




In summary, this is how the Group questions will show up in the Question Tree for the above example.


Please note:

  • If we design the Monadic Test in this way, data is collected in two separate sets of questions, one per stimulus, even though we are essentially asking the same questions across stimuli. For example, we ask the same Appeal question for the two stimuli, but the data are recorded in two different questions: Appeal_1 (for stimulus 1) and Appeal_2 (for stimulus 2). As a result, comparing results between the two stimuli using Crosstab is not directly available on the inca dashboard as of now. It may require downloading the raw data, merging or restructuring the relevant data, and then comparing the results in SPSS, Excel, or another tabulation tool.
  • Group questions cannot be used to set quotas. Each stimulus will be selected randomly, but we cannot ensure that any quota needs are fulfilled.
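The merging/restructuring mentioned in the first note can be sketched as follows. This is a hypothetical illustration: the column names Appeal_1 and Appeal_2 follow the example above, but the actual raw export format may differ.

```python
# Hypothetical raw export: each respondent saw only one stimulus, so the
# Appeal answer lands in either Appeal_1 or Appeal_2 (the other is empty).
raw = [
    {"id": 1, "Appeal_1": 5, "Appeal_2": None},
    {"id": 2, "Appeal_1": None, "Appeal_2": 3},
]

# Restructure into one Appeal column plus a Stimulus indicator, so the
# two stimuli can be compared side by side in a single table.
merged = []
for row in raw:
    if row["Appeal_1"] is not None:
        merged.append({"id": row["id"], "Stimulus": 1, "Appeal": row["Appeal_1"]})
    else:
        merged.append({"id": row["id"], "Stimulus": 2, "Appeal": row["Appeal_2"]})
```

The same restructuring can of course be done in SPSS or Excel instead.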


Based on the above notes, there are two alternative methods to design a Monadic Test. They are a bit more complicated to set up, but can partially overcome the constraints mentioned above.

Alternative Method 1 - Randomly Select Stimulus ONLY

In the method above, we put each stimulus together with all its relevant questions in one group and then randomly choose one of the groups. As a result, the questions for each stimulus are separated. If we instead include only the stimulus in the Group for random selection and create ONE set of questions after it, then all the data will be collected together for analysis and comparison purposes.

Let's use the same example as above to illustrate how to do it.

Step 1: Add a new Group question and then select Concept A and Concept B, the two multimedia questions to show the stimulus.

Step 2: Select "Randomize all and randomly pick some" and specify the number of randomly selected questions in the box. In this case, just 1 (i.e. input 1). In this step, the system will randomly pick which concept to ask -- either Concept A or Concept B.




Step 3: Create a Virtual Question indicating which Concept has been randomly picked. This step is important: it records this information in the data for further analysis, and it can also support any other logic or quota needs. More specifically,

  1. Create a Virtual Question named Selected Concept
  2. Add a variable for Concept A, along with the logic rule that the multimedia question Concept A IS DISPLAYED
  3. Add another variable for Concept B, along with the logic rule that the multimedia question Concept B IS DISPLAYED
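The logic of this Virtual Question can be sketched as follows: the variable that gets auto-selected is whichever Concept question was actually displayed to the participant. This is only an illustration of the rule, not the platform's implementation; the function name and inputs are hypothetical.

```python
# Hypothetical sketch of the "IS DISPLAYED" logic rules above:
# the Selected Concept variable mirrors whichever multimedia
# question the participant actually saw.
def selected_concept(displayed_questions):
    if "Concept A" in displayed_questions:
        return "Concept A"
    if "Concept B" in displayed_questions:
        return "Concept B"
    return None

# e.g. a participant who was randomly shown Concept B
result = selected_concept({"Concept B", "Appeal", "Reason"})
```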



Step 4: Add all the relevant questions that are shared by the two concepts, including Appeal and Reason.


Step 5: Add the concept-specific questions, which in this case is the Text Highlighter. As we need to include the stimulus in the Text Highlighter, create two Text Highlighter questions, one for each Concept. For each one, set up a pre-condition to show the question only when the relevant Concept is selected.



In Summary:

  • If we design the Monadic Test in this way, data is collected all together for the questions that are shared by the concepts and also we have created a Virtual Question ("Selected Concept" in this example) to differentiate the data by concepts. This Virtual Question can be used in the Report Filter and Crosstab Header on the dashboard to compare the results. 
  • This Virtual Question can also be used for Quotas. E.g. if we are required to collect n=100 responses for each concept, we can add this Virtual Question to the Audience page and set a quota of n=100 for each concept (please see more details about how to add a Quota here). However, keep in mind that this may not be the most ideal way to control the quota: a participant will be terminated simply because the concept randomly picked for them is already full, and we have no control over that.

Alternative Method 2 - Use URL metadata

In Alternative Method 1, we create ONE set of questions across concepts for analysis or comparison purposes. However, as the concept is selected randomly each time, we have limited control if we need to target a certain quota. We can potentially improve this by using URL metadata instead.

Let's use the same example as above to illustrate how to do it. Before going into the details, please see more details about URL metadata here.

Step 1: Create a Virtual Question with each option depending on the URL metadata. To do this, 

  1. Create a Virtual Question named Selected Concept
  2. Add a variable for Concept A, along with the logic. More specifically, choose URL Metadata as the source question, specify CONCEPT as the URL key, and use the logic rule that CONCEPT Equal [text]A. The Variable Concept A will be TRUE (or auto-selected) when we have "&CONCEPT=A" appended to the survey URL.
  3. Similarly, add a variable for Concept B, along with the logic. More specifically, choose URL Metadata as the source question, specify CONCEPT as the URL key, and use the logic rule that CONCEPT Equal [text]B. The Variable Concept B will be TRUE (or auto-selected) when we have "&CONCEPT=B" appended to the survey URL.
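The "CONCEPT Equal A/B" rule above amounts to reading the CONCEPT key from the survey URL's query string. Here is a minimal sketch of that lookup (an illustration only; the URL below is a made-up example in the style of this article, and the function name is hypothetical):

```python
from urllib.parse import urlparse, parse_qs

# Read the CONCEPT key from a survey URL's query string.
def concept_from_url(url):
    query = parse_qs(urlparse(url).query)
    return query.get("CONCEPT", [None])[0]

# A link with "&CONCEPT=A" appended selects the Concept A variable.
concept = concept_from_url("https://demo.nexxt.in/p/2362?src=DYNATA&PSID=123&CONCEPT=A")
```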


Step 2: Create two multimedia questions to show Concept A and Concept B, with the pre-condition to show the question only when the relevant Concept is selected in the Virtual Question from Step 1.



Step 3: Add all the relevant questions that are shared by the two concepts, including Appeal and Reason.

Step 4: Add the concept-specific questions, which in this case is the Text Highlighter. As we need to include the stimulus in the Text Highlighter, create two Text Highlighter questions, one for each Concept. For each one, set up a pre-condition to show the question only when the relevant Concept is selected.


Step 5: After you have launched the study, remember to append the URL metadata to the survey link(s) you share with the panel, so that each link targets the intended concept.

E.g. if the survey link you get for the study is https://demo.nexxt.in/p/2362?src=DYNATA&PSID=[ID], then the link(s) for each Concept should be as follows, with the CONCEPT key appended at the end.

  • https://demo.nexxt.in/p/2362?src=DYNATA&PSID=[ID]&CONCEPT=A
  • https://demo.nexxt.in/p/2362?src=DYNATA&PSID=[ID]&CONCEPT=B
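Building the per-concept links is simply string concatenation on the base link, which can be sketched as follows (the base link is the example above; [ID] stays as the panel's placeholder):

```python
# Append the CONCEPT key to the base survey link, one link per concept.
base = "https://demo.nexxt.in/p/2362?src=DYNATA&PSID=[ID]"
links = {c: base + "&CONCEPT=" + c for c in ("A", "B")}
```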

In Summary:

  • If we design the Monadic Test in this way, data is collected all together for the questions that are shared by the concepts and also we have created a Virtual Question ("Selected Concept" in this example) to differentiate the data by concepts. This Virtual Question can be used in the Report Filter and Crosstab Header on the dashboard to compare the results.
  • This Virtual Question can also be used for Quotas. E.g. if we need to collect n=100 responses for each concept, we can add this Virtual Question to the Audience page and set a quota of n=100 for each concept (please see more details about how to add a Quota here). Also, as we can target the audience with different links carrying different URL metadata, we have a bit more control here: we can stop pushing sample to a link whose quota is already met, which can help improve the survey's Incidence Rate.

Related Articles

    • How to set up a Sequential Monadic Test?

      What is a Sequential Monadic Test? In a sequential monadic test, the participants are presented with two or more concepts, and follow-up questions are asked to evaluate each of the concepts -- likeability, likes/dislikes, rating of specific ...
    • How to design a Monadic study with Direct Comparison using URL metadata?

      In this article, we will illustrate how to design a monadic study (with two concepts) and with a Direct comparison or Preference question at the end of the monadic questions. The method is similar to designing a monadic study using Alternative Method ...
    • How to republish a live study?

      After the study is live, any changes that users will make on the Audience Page or Question Builder, will require republishing the study again to implement the changes on the live link. To republish a study here are the steps to follow: 1. In your ...
    • What is URL metadata and how to use it?

      What is URL metadata? URL metadata essentially allows additional information to be appended to the survey URL. The additional information can encompass various aspects, such as demographic data of your members, specific countries you wish to target ...
    • How to manage multi-country studies using URL metadata?

      Multi-country studies Conducting a multi-country survey presents unique challenges which include language translations, diverse cultures, and country-specific variables. Please see more details about general guidelines for managing a multi-country ...