Main use:
With an open-ended question, participants answer freely in their own words (as opposed to choosing from a pre-determined list, as in Choice questions), whether in lengthy essays or short texts.
How to set it up on the platform:
1. On the Questionnaire Builder page, add a new question, OR insert a new question by hovering over an existing question and clicking the green (+) button (either above or below the existing question).

2. Select Open Ended from the popup menu that appears.

3. Write your question number or label in the upper box and write your question in the Question Text Box.
Participants now have the option to answer open-ended questions using voice or video, leading to longer, more detailed answers. Then, in real time, inca returns a probe that participants can also answer by voice or video. To learn more about this feature, please go to the Voice+ article here.
Conversational Settings
When editing your open-ended question, there are two main features in the Conversational Settings tab that allow you to control the kind of responses and probing during surveys: Quality Controls and SmartProbe Settings.
1 Quality Controls
The quality control system is a built-in feature of the inca Platform, which automatically evaluates and may disqualify participants during surveys. A participant's quality score is determined by the demerit points they accumulate, and whether they are disqualified depends on the disqualification control you set on your project. To learn more about the disqualification control, click on this article here.
When adding an open-ended question, you may select any combination of the following checks through the Quality Control tab:
Gibberish Detection
Toggle on/enable when you want to prompt the participant for another answer after a gibberish response (e.g. dasbjlhflsdbfsjkd).
Toggle off/disable this feature when asking for brand or person names. Please note that some brand and person names may look like gibberish, so this check is not recommended when answers will be short brand or person names.
Uninformative Answer Detection
Toggle on/enable when you want to prompt for more details when the participant's answer is deemed uninformative or generic (e.g. ok, good, don't know, yes, no, etc.).
Not recommended when low-information answers (e.g. “nothing”) may be considered valid responses.
Duplicate Answer Detection
Toggle on/enable when you want to check for answers that match answers given in other open-ended questions that have this check enabled.
Demerit AI
Leverages SmartProbe’s demerit confidence capability to intelligently analyze the semantics of a participant’s response to a question. It goes beyond detecting gibberish or uninformative answers: it can potentially identify when people provide a well-formed but off-topic response, for example answering "I like going to Tesco for grocery shopping as it's close to me" to the question "What do you like about this ad?".
Questions with any of these checks enabled will have a maximum demerit point influence, which can be controlled on the Quality Control tab: Strong (10 points), Normal (5 points), or Weak (2 points).
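As a mental model, the checks and the influence level behave like a small configuration object. Here is a minimal sketch; the field names are our own illustration, not inca's actual schema:

```python
# Illustrative only -- these field names are hypothetical, not inca's schema.
quality_controls = {
    "gibberish_detection": True,             # re-prompt on answers like "dasbjlhflsdbfsjkd"
    "uninformative_answer_detection": True,  # re-prompt on "ok", "don't know", etc.
    "duplicate_answer_detection": False,     # compare across questions with this check enabled
    "demerit_ai": True,                      # semantic check for well-formed but off-topic answers
    "demerit_influence": "Normal",           # Strong = 10, Normal = 5, Weak = 2 points
}
```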

Note: If any of the enabled checks are triggered, then the participant will be allocated demerit points. For open-ended questions without probing, this will equal the maximum demerit point influence. When probing is enabled, the maximum demerit point influence is only allocated if all of the responses trigger an enabled check; otherwise, each response contributes a diminishing amount of points, with the prime response having the most influence and each subsequent probe response having less.
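To make the allocation rule concrete, here is a minimal sketch in Python. The exact diminishing weights are not documented, so the geometric halving below is purely an assumption for illustration:

```python
def demerit_points(max_influence: int, triggered: list[bool]) -> float:
    """Sketch of the allocation rule described in the note above.

    triggered[0] is the prime response; later entries are probe responses.
    ASSUMPTION: geometric halving (1, 1/2, 1/4, ...) stands in for the
    undocumented diminishing weights.
    """
    if all(triggered):
        return max_influence  # every response tripped an enabled check
    weights = [0.5 ** i for i in range(len(triggered))]
    share = sum(w for w, hit in zip(weights, triggered) if hit) / sum(weights)
    return max_influence * share

# A "Normal" (5-point) question with two probes, where only the prime
# response triggered a check:
print(demerit_points(5, [True, False, False]))  # ~2.86 of the 5 points
```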
2 SmartProbe Settings
SmartProbe can intelligently generate follow-up questions ("probes"), assess whether participants' answers warrant demerit points, and detect conversational targets of interest.
When adding an open-ended question, you may select any combination of the following settings in the SmartProbe tab:

SmartProbe
Research Context
Toggle on/enable when you want contextual factors to be naturally incorporated into the generated probes. In this probing setting, you need to provide a Question Objectives summary.

Another useful case for applying Research Context is setting the tonality of your probing (e.g. for a formal audience). For example, you can enable Research Context and inform the bot that this is a survey intended for a business sample, asking it to use more formal language when probing.
One more useful application of Research Context is setting regional language differences, or using "consumer terms", when probing for a category or a topic. For example, you can enable Research Context and inform the bot to use British terms when probing (Trainers - British vs Sneakers - American; Nappy - British vs Diaper - American; Football - British vs Soccer - American).
**Best practices when using Question Objectives:**
**Examples of what NOT to do with Question Objectives:**
Trained Models
Toggle on/enable when you want to teach the inca chatbot ideal probing by providing "probing exemplars", each consisting of a ‘prime question’, a ‘prime response’, and the ideal ‘probe question’ that would be asked following the prime response.
Users can steer the probing to investigate particular issues or needs they are especially interested in, as in the example below:
To do this, click the English Trained Model tab and select Custom Trained Models. To add another prime response and probe question, click on ADD A NEW EXAMPLE.

You may also customize the Prime Question for each Prime Response by toggling Use Different Prime Questions.
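Conceptually, a trained model is just a list of such exemplars. A hypothetical sketch of that structure (the field names are ours, not inca's):

```python
# Hypothetical structure -- the field names are illustrative, not inca's schema.
trained_model = {
    "use_different_prime_questions": True,
    "exemplars": [
        {
            "prime_question": "What do you like about this snack?",
            "prime_response": "It tastes great.",
            "probe_question": "Which flavours stand out to you, and why?",
        },
        {
            "prime_question": "What would you improve about this snack?",
            "prime_response": "The packaging.",
            "probe_question": "What about the packaging would you change?",
        },
    ],
}
```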

How to implement Trained Models in other languages?
By default, the Project Language you are using on the Overview Page will also be the available language in the Custom Trained Model.

Step 1 - Select the language you want to use for the Trained Models and select Custom Trained Models in the drop-down.

Step 2 - When adding Trained Models in multiple languages, apply the Custom Trained Model the same way as discussed above, adding the prime response and probe question. The Custom Trained Model for each additional language shows up as another field directly below the English Trained Model, where users can add the translated versions of the prime response and probe questions.

If you are using only one language as the Project Language (Overview Page), then that language will be set as the default in the Trained Models example. Users can then add the prime response and probe questions in that language in the field provided.
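One way to picture the multilingual case: each exemplar carries one version per project language, English plus the translations. A hypothetical sketch:

```python
# Hypothetical sketch of one exemplar carried in two project languages.
exemplar = {
    "prime_response": {
        "en": "It tastes great.",
        "fr": "C'est délicieux.",
    },
    "probe_question": {
        "en": "Which flavours stand out to you, and why?",
        "fr": "Quelles saveurs ressortent pour vous, et pourquoi ?",
    },
}
```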

**Best practices when using Trained Models for other languages:**
Conversational Targets
Toggle on/enable when you want to set conversational targets, such as a topic, phrase, or name (including people/brands/products/etc.), that are of interest to your study and that a participant might mention in their open-ended response.
To add a conversational target, click +Add Targets and specify the Target label (topic/phrase/name), Action, and Detection Type.
Detection Type
There are two Detection Types: Default and Custom. "Default" means the system detects the target based on the Target label itself. If users want to provide their own training phrases for each target, they can choose "Custom" and then supply a few training examples and counter-examples.
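A hypothetical sketch of the two detection types side by side (the field names are illustrative, not inca's schema):

```python
# Hypothetical target definitions -- the field names are illustrative, not inca's schema.
targets = [
    {
        "label": "immunity",
        "action": "Trigger if detected",
        "detection": "Default",  # the system matches on the label itself
    },
    {
        "label": "energy",
        "action": "Trigger if not detected",
        "detection": "Custom",   # matched via user-supplied phrases instead
        "examples": ["keeps me energised", "gives me a boost"],
        "counter_examples": ["energy prices", "my energy bill"],
    },
]
```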

Enable Target Priority
You may provide up to 10 targets per request. Since each request produces only one probe question, if multiple targets are triggered to probe, SmartProbe will randomly choose one of them to focus on. This randomization behaviour is the default, but you can override it by adding a priority field to the target (a number, 1-10, with 1 indicating “highest priority” and 10 indicating “lowest priority”). Each target must have a unique priority number. The order of the targets in the API request is not important: only the priority field indicates a preference between targets.
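Continuing the hypothetical target sketch from above, priorities could be attached like so:

```python
# Hypothetical continuation: 1 = highest priority, and each priority must be unique.
targets[0]["priority"] = 1  # "immunity": probed first whenever it triggers
targets[1]["priority"] = 2  # "energy": probed only if "immunity" did not trigger
```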
In the example above, either target can trigger a probe: "energy" when it is not mentioned (Trigger if not detected), or "immunity" when it is mentioned (Trigger if detected). Without priorities, the bot would choose between them at random; because "immunity" has the highest priority, the bot will prioritize probing on it whenever it is mentioned.
Using Canned Questions
In terms of quantifying targets, mentions are counted only once on the codeframe upon coding. For the example above, if you set "immunity" as a target, the probe will follow up on it; but if the participant mentions immunity again in their response to the probe, it will still be counted only once for quantification on the codeframe.
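In code terms, the counting rule is "once per participant per code", however often the code is repeated. A toy sketch (real coding is semantic, not simple substring matching):

```python
# Toy sketch of the once-per-participant rule; real coding is semantic,
# not simple substring matching.
responses = [
    "I take it for immunity",       # prime response
    "Yes, immunity matters to me",  # answer to the probe -- a repeat mention
]
codeframe = {"immunity": 0}
for code in codeframe:
    if any(code in r.lower() for r in responses):
        codeframe[code] += 1        # counted once, despite two mentions
print(codeframe)  # {'immunity': 1}
```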
**Best practices when using Conversational Targets for other languages:**
Other Useful Question Settings
Allow Copy Paste
Allows participants to copy and paste text from different windows or sources. We recommend disabling it when you don't want participants to repeat stock answers without thinking.
Allow User Corrections
By default (Yes), participants are allowed to change their answers in this question type.