How to set up an Open Ended Question and AI Probing?

Main use:


Using an open-ended question, participants can freely type answers in their own words (as opposed to selecting from a pre-determined list, as in Choice questions). Responses can range from short texts to lengthy essays.


How to set up the platform:


1. In the Questionnaire Builder page, add a new question OR insert a new question by hovering over an existing question and clicking the green (+) button (either above or below the existing question).





2. Select Open Ended from the popup menu that appears.



3. Write your question number or label in the upper box and write your question in the Question Text Box. 



Participants now have the option to answer open-ended questions using voice or video, leading to longer, more detailed answers. Then, in real time, inca returns a probe that participants can also answer by voice or video. To learn more about this feature, please go to the Voice+ article here.



Conversational Settings 

When editing your open-ended question, there are two main features in the Conversational Settings tab that allow you to control the kind of responses and probing during surveys: Quality Controls and SmartProbe Settings.

1 Quality Controls

The quality control system is a built-in feature on the inca Platform, which automatically evaluates and may disqualify participants during surveys. A participant’s quality score is determined based on demerit points that they may accumulate, and whether they are disqualified or not depends on the disqualification control you set on your project. To learn more about the disqualification control, click on this article here.

When adding an open-ended question, you may select any combination of the following checks through the Quality Control tab:

 

Gibberish Detection

Toggle on/enable when you want to prompt the participant for another answer after a gibberish response (e.g. dasbjlhflsdbfsjkd).

Toggle off/disable this feature when asking for brand or person names. Please note that some brand and person names may look like gibberish, so this check is not recommended when expected answers are short brand or person names.


Uninformative Answer Detection


Toggle on/enable when you want to prompt for more details when the participant's answer is deemed uninformative or generic (e.g. ok, good, don't know, yes, no, etc.).


Not recommended when low-information responses (e.g. “nothing”) may be considered valid answers.


Duplicate Answer Detection

Toggle on/enable when you want to check for answers that match answers given in other open-ended questions that have this check enabled. 


Demerit AI

Leverages SmartProbe’s demerit confidence capability to intelligently analyze the semantics of a participant’s response to a question. It goes beyond detecting gibberish or uninformative answers: it can potentially identify when people provide a well-written but off-topic response. For example, if people answer "I like going to Tesco for grocery shopping as it's close to me" to the question "What do you like about this ad?".

Questions with any of these checks enabled will have a maximum demerit point influence, which can be controlled on the Quality Control tab: Strong (10 points), Normal (5 points), or Weak (2 points).




Note: If any of the enabled checks are triggered, then the participant will be allocated demerit points. For open-ended questions without probing, this will equal the maximum demerit point influence. When probing is enabled, the maximum demerit point influence is only allocated if all of the responses trigger an enabled check; otherwise, each response contributes a diminishing amount of points, with the prime response having the most influence and each subsequent probe response having less.
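
To make this allocation concrete, here is a minimal sketch of how diminishing demerit points could be computed. The halving weights are an assumption for illustration only; the article does not document the platform's exact formula.

```python
# Illustrative sketch of demerit point allocation. The halving weights are
# an assumption, not the platform's documented formula.
MAX_INFLUENCE = {"strong": 10, "normal": 5, "weak": 2}

def demerit_points(level: str, triggered: list[bool]) -> float:
    """Allocate demerit points for a prime response plus probe responses.

    `triggered[0]` is the prime response; later entries are probe responses.
    If every response triggers an enabled check, the maximum influence is
    allocated. Otherwise each triggering response contributes a diminishing
    share, with the prime response weighted most heavily.
    """
    maximum = MAX_INFLUENCE[level]
    if all(triggered):
        return maximum
    # Assumed diminishing weights: 1, 1/2, 1/4, ... normalised to sum to 1.
    weights = [0.5 ** i for i in range(len(triggered))]
    return maximum * sum(w for w, hit in zip(weights, triggered) if hit) / sum(weights)

# Example: Normal influence (5 points); the prime response triggers a check
# but the single probe response does not -> 5 * (1 / 1.5) ≈ 3.33 points.
print(round(demerit_points("normal", [True, False]), 2))
```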



2 SmartProbe Settings

SmartProbe can intelligently generate follow-up questions ("probes"), assess whether participants' answers warrant demerit points, and detect conversational targets of interest.

When adding an open-ended question, you may select any combination of the following settings in the SmartProbe tab:


Smartprobe

Toggle on/enable when you want the AI to probe participant answers and prompt them for answers that could have more value/insight derived from a follow-up.

When you want to add multiple probes in a single open-ended question, you can enable Multi-Turn Probing; please refer to this article here.

Research Context

Toggle on/enable when you want to add contextual factors to be naturally incorporated into the generated probes. In this probing setting, you need to provide a Question Objectives summary:

  • Question Objectives:
    • Specify in 1-2 sentences what the question objectives are, as though you were briefing a colleague who will be interviewing people for you.
    • We recommend you start the sentence with “We want to understand,” as per the example below.
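
For illustration, a hypothetical objective following this format might read: “We want to understand what people remember about the ad and why those moments stood out. Focus more on the specific scenes they mention; try not to probe on purchase intent.”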

Another useful case for applying Research Context is when you want to set the tonality of your probing (e.g. for a formal audience). For example, you can enable Research Context and inform the bot that the survey is intended for a business sample, asking it to use more formal language when probing.

One more useful example of applying Research Context is when you want to account for regional language differences or use "consumer terms" when probing on a category or topic. For example, you can enable Research Context and instruct the bot to use British terms when probing (trainers vs. sneakers, nappy vs. diaper, football vs. soccer).

Best practices when using Question Objectives:

  • Length: Keep it within 100 words. Longer objectives may result in important instructions being overlooked.
  • Structure: Use the "context" + "instruction" format. Provide 1-2 sentences of context, then add specific instructions like "focus more on..." or "try not to...".
  • Conditional instructions: Ask for different information depending on the type of response provided. For example, use an "if" statement in your question objective such as "If reasons have been stated, we want to know examples of where users have seen those reasons in practice."
  • Testing and adjustment: This feature is inherently non-deterministic, although it will generally follow your instructions. You can experiment to optimize its effectiveness.

Examples of what NOT to do with Question Objectives:

  • Vague or ambiguous instructions, and/or instructions that only apply to a subset of the responses. For example, question objectives such as "We want to check whether the participant noticed the brand that each of the people in the ad were using" could result in SmartProbe asking if the brand was noticed even if the user specifically mentions a brand in their response.
  • Exclusively focusing on, or avoiding specific targets. For such use cases, make use of the "Conversational Targets" feature (please see further below). 
  • Controlling tone or phrases that SmartProbe uses. Although question objectives can be useful here, they may not be reliable or precise; instead, use the "Trained Models" feature (please see below).

Trained Models

Toggle on/enable when you want to teach the inca chatbot ideal probing by providing "probing exemplars", each consisting of a ‘prime question’, a ‘prime response’, and the ideal ‘probe question’ that would be asked following the prime response.

Users can steer the probing to investigate particular issues or needs they are especially interested in, as in the example below:

  • For example: A packaging manufacturer has created a new compostable milk carton. They want to understand barriers to purchase among people who will not buy it. After capturing participants' spontaneous reasons for not purchasing, they want to present further information to participants to see if it impacts their decision.
  • First Question or Prime Question: "You said you would not buy this packaging, why is that?"
    • The following participant Prime Responses, paired with the ideal Probe Question you specified, can serve as exemplars for the Trained Model to imitate:
      • "It's too expensive" → "What would the price difference need to be versus your current milk price to make you consider buying this packaging?"
      • "I don't have a garden!" →"You could add the compostable pack to your food waste bin or even take it to a local park. Does knowing this change your interest in purchasing this pack? Please tell me why."
      • "I don’t see too much wrong with the current milk packaging" →"Did you know that 10% of marine life dies each year due to plastic pollution in the oceans? Does this fact make you reconsider your interest in this compostable milk pack? Please tell me why."

To do this, click the English Trained Model tab and select Custom Trained Models. To add another prime response and probe question, click on ADD A NEW EXAMPLE.

You may also customize the Prime Question for each Prime Response by toggling Use Different Prime Questions.
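
As an illustration, the exemplars above can be thought of as structured prime question / prime response / probe question triples, as in the sketch below. The field names are hypothetical and only show the relationship between the three parts, not the platform's actual data format.

```python
# Hypothetical representation of Trained Model exemplars. Field names are
# illustrative only, not the platform's actual format.
exemplars = [
    {
        "prime_question": "You said you would not buy this packaging, why is that?",
        "prime_response": "It's too expensive",
        "probe_question": (
            "What would the price difference need to be versus your current "
            "milk price to make you consider buying this packaging?"
        ),
    },
    {
        "prime_question": "You said you would not buy this packaging, why is that?",
        "prime_response": "I don't have a garden!",
        "probe_question": (
            "You could add the compostable pack to your food waste bin or even "
            "take it to a local park. Does knowing this change your interest "
            "in purchasing this pack? Please tell me why."
        ),
    },
]
```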

How to implement Trained Models in other languages?

By default, the Project Language you are using on the Overview Page will also be the available language in the Custom Trained Model.


Step 1 - Select the language you want to use for the Trained Models and select Custom Trained Models in the drop-down.

Step 2 - When adding Trained Models in multiple languages, the process for applying the Custom Trained Model is the same as discussed above: add the prime response and probe question. The Custom Trained Model for each additional language will show up as another field directly below the English Trained Model, where users can add the translated version of the prime response and probe questions.

If you are using only one language in the Project Language (Overview Page), then the same language will be set as default in the Trained Models Example. Users can then add the prime response and probe questions in the specific language in the field provided. 


Best practices when using Trained Models for other languages:

  • When a language's support score is low, we recommend setting a trained model specific to each question where you are using SmartProbe. We recommend providing at least three positive exemplars from native speakers.
  • For all languages, trained models can be used to adjust or steer SmartProbe towards using certain types of words, phrases, or tones.

Conversational Targets

Toggle on/enable when you want to set conversational targets such as a topic, phrase or name (including people/brands/products/etc.) that are of interest to your study, which a participant might mention in their open-ended response. 

To add a conversational target, click +Add Targets and specify the Target label (topic/phrase/name), Action, and Detection Type.




Target Action
  • AVOID: ensures that SmartProbe will NOT generate any probe related to this target. If a participant mentions the target, then the probe will divert the conversation away from that target to something else that is still relevant to the research objectives.
  • TRIGGER IF DETECTED: triggers SmartProbe to probe on this target if the target is detected. In other words, SmartProbe will probe for more details when a participant has mentioned this target.
  • TRIGGER IF NOT DETECTED: triggers SmartProbe to probe on this target if the target is NOT detected. In other words, SmartProbe will acknowledge the participant’s answer but then divert the question so that it focuses on this target.
  • TRIGGER ALWAYS: triggers SmartProbe to probe on this target always even if other targets are mentioned. 
  • END CONVERSATION IF DETECTED: triggers SmartProbe to stop probing and move on to the next question in the survey.

Detection Type

There are two Detection Types - Default and Custom. "Default" means the system will detect the target based on the Target label alone. If users want to provide their own training phrases for each target, they can choose "Custom" and then provide a few training examples and counter examples.

  1. Examples are variations of the label, related phrases, or other names that you DO want matched.
  2. Counter examples are other key words or phrases that you do NOT want matched.
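
To make the Custom detection setup concrete, here is a hypothetical sketch of how a target with training examples and counter examples might be described. The structure and field names are illustrative only, not the platform's actual API; the "immunity" label reuses the example discussed later in this article.

```python
# Hypothetical sketch of a Conversational Target using Custom detection.
# Field names and structure are illustrative, not the platform's actual API.
target = {
    "label": "immunity",
    "action": "TRIGGER_IF_DETECTED",
    "detection": {
        "type": "custom",
        # Variations, related phrases, or other names you DO want matched.
        "examples": ["immune system", "immune health", "fighting off colds"],
        # Key words or phrases you do NOT want matched.
        "counter_examples": ["immune to criticism", "legal immunity"],
    },
}
```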

Enable Target Priority

You may provide up to 10 targets per request. Since each request only produces one probe question, if multiple targets are triggered to probe, SmartProbe will randomly choose one of them to focus on. This randomization behaviour is the default, but you can override it by adding a priority field to the target (a number, 1-10, with 1 indicating “highest priority” and 10 indicating “lowest priority”). Each target must have a unique priority number. The order of the targets in the API request is not important: only the priority field indicates a preference between targets.
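
The selection rule described above can be sketched as follows. This is a minimal illustration of the documented behaviour (random choice by default, lowest priority number wins when priorities are set), not the platform's actual code.

```python
import random

# Minimal sketch of SmartProbe's documented selection rule: among triggered
# targets, prefer the lowest priority number (1 = highest priority); fall
# back to a random choice when no priorities are set.
def choose_target(triggered_targets: list[dict]) -> dict:
    prioritised = [t for t in triggered_targets if "priority" in t]
    if prioritised:
        return min(prioritised, key=lambda t: t["priority"])
    return random.choice(triggered_targets)

# Example: "immunity" (priority 1) wins over "energy" (priority 2).
print(choose_target([
    {"label": "energy", "priority": 2},
    {"label": "immunity", "priority": 1},
]))
```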




In the example above, SmartProbe can probe either when "energy" is not mentioned (Trigger if NOT detected) or when "immunity" is mentioned (Trigger if detected). By default the choice between them is random; however, because "immunity" has the higher priority, the bot will prioritise it for probing whenever both are triggered.


Using Canned Questions

For example, if you want to ask “How did you first hear about product X?” whenever a participant mentions “product X”, you can set a target with the label “product X” and choose any of "Trigger if detected”, “Trigger if NOT detected”, or “Trigger always" as the Action. Then select Custom as the Detection Type, toggle "Click here if you want to define a specific (canned) probing question for the target", and type the canned text or specific probing question into the new text field.
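
Continuing the hypothetical structure from the earlier Custom detection sketch, a canned probing question might be attached to a target like this (field names are illustrative only):

```python
# Hypothetical sketch: a target with a canned probing question attached.
# Field names are illustrative, not the platform's actual format.
target = {
    "label": "product X",
    "action": "TRIGGER_IF_DETECTED",
    "detection": {"type": "custom", "examples": ["product x", "prod x"]},
    # The specific (canned) probe asked when this target fires.
    "canned_probe": "How did you first hear about product X?",
}
```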



Quantifying Conversational Targets

In terms of quantifying targets, each mention is only counted once on the codeframe during coding. In the example above, if you set immunity as a target, the probe will follow up on it, but if the participant mentions immunity again in their response to the probe, it will still only count once for quantification on the codeframe.
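
A minimal sketch of this count-once rule is below. The naive substring matching is for illustration only; the real coding pipeline is internal to the platform.

```python
# Illustrative sketch of the count-once rule for target quantification.
# Matching here is naive substring matching, purely for illustration.
def quantify(responses: list[str], targets: list[str]) -> dict[str, int]:
    counts = {t: 0 for t in targets}
    for target in targets:
        # A target counts once per participant, no matter how many of
        # their responses (prime or probe) mention it.
        if any(target in response.lower() for response in responses):
            counts[target] = 1
    return counts

responses = [
    "I drink it for immunity.",                # prime response
    "Mostly immunity, especially in winter.",  # probe response
]
print(quantify(responses, ["immunity"]))  # {'immunity': 1}
```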

Best practices when using Conversational Targets for other languages:

  • Provide the target label in English unless the phrase is specific to the native language. To increase the likelihood of SmartProbe accurately detecting targets, a variant in the native language can also be provided in the training examples.

Other Useful Question Settings

Allow Copy Paste

Allows participants to copy and paste text from other windows or sources. We recommend disabling this when you don't want participants to paste stock answers without thinking.


Allow User Corrections

By default (Yes), participants are allowed to change their answer in this question type.



Related Articles

  • What is Disqualification Control and how to set it up?
  • What is Project-Level Conversational Settings and how to set it up?
  • How to enable Multi-Turn probing in Open Ended Questions?
  • How to use Voice+ in Open Ended Questions?
  • How to set up a Net Promoter Score Question (NPS Plus)