Best practices & other useful information

Running interactive studies online is always a bit challenging, but by following these guidelines you can minimize the chance of encountering issues and maximize data quality.

General Recruitment

  • In general, try to recruit participants at the same time to guarantee minimal waiting times.
  • Do NOT recruit participants from panels, email lists, or other sources that are not intended for real-time interaction (e.g., Qualtrics Panels, alumni mailing lists). If participants are not starting the study around the same time, they won’t be able to interact, and the study will be terminated.
  • Recruit participants only during peak hours (e.g., between 12 and 9pm ET on weekdays) and pause data collection during off-peak hours. Try to recruit within a short time window if possible.
  • Make it clear that your study involves real-time interaction. When you recruit participants, make sure to tell them that they will interact with others in real-time, and ask them to participate only if they can dedicate their next X minutes to your study without any interruption.
  • Pay participants a bit more than in usual (non-interactive) studies. Higher incentives decrease attrition, and low attrition rates are essential for interactive studies. For example, attrition rates are considerably lower when participants expect to earn $15-20/h rather than the usual MTurk/Prolific wages, which range between $8 and $12/h.

Recruitment via MTurk & Prolific

  • IMPORTANT for MTurk users: Do NOT use third-party recruitment services (e.g., TurkPrime / CloudResearch), as these can drastically slow down recruitment speed and most participants will experience issues with matching / long waiting times.
  • If you use regular MTurk, publish small batches (e.g., 30-40 workers) every 10-15 minutes instead of publishing large batches with several hundred workers. This guarantees that your study will always be displayed on the first page of HITs, and you will have a steady flow of participants.
  • IMPORTANT for Prolific users: Ask Prolific support to turn off the “rate-limiter” for your account. The rate-limiter is a system (hidden from researchers) that controls the flow of participants and who gets to see the study; it is designed to give priority to participants who have completed fewer studies recently (i.e., more ‘naive’ participants). This might be a desirable feature for individual studies, but for interactive designs it is a disadvantage: it slows down recruitment and makes real-time interaction difficult, if not impossible, to implement. The good news is that Prolific can easily turn the rate-limiter off; simply send a message to their support asking them to do so for your account. This will dramatically increase recruitment and matching speeds.
  • Use the strictest eligibility filters possible (e.g., a 99%+ or 100% minimum approval rate) to exclude potentially low-quality responses. Note that the approval rating distribution is extremely skewed and almost every worker is above 95%, so setting a threshold that “low” does not really filter out anyone.
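To make the small-batch schedule above concrete, here is a sketch that plans batch sizes and publish times. planBatches is a hypothetical helper name of ours, not part of MTurk or SMARTRIQS; the actual publishing would still happen through the MTurk requester interface or API.

```javascript
// Sketch: plan small MTurk batches (e.g., 35 workers every 12 minutes)
// instead of one large batch of several hundred workers.
// planBatches is a hypothetical helper, not part of MTurk or SMARTRIQS.
function planBatches(totalWorkers, batchSize, intervalMinutes) {
  const batches = [];
  for (
    let published = 0, minute = 0;
    published < totalWorkers;
    published += batchSize, minute += intervalMinutes
  ) {
    batches.push({
      minuteOffset: minute, // when to publish this batch, in minutes from start
      workers: Math.min(batchSize, totalWorkers - published),
    });
  }
  return batches;
}

console.log(planBatches(100, 35, 12));
// → [ { minuteOffset: 0, workers: 35 },
//     { minuteOffset: 12, workers: 35 },
//     { minuteOffset: 24, workers: 30 } ]
```

Keeping each batch small also means that if something goes wrong, you can stop after the current batch instead of cancelling hundreds of published assignments.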

Basic Qualtrics settings and survey structure

  • Hide the back button in your surveys. By default, the back button is disabled in Qualtrics studies, but make sure that the ‘Back Button’ option in ‘Survey / Survey Options’ is unchecked. Allowing participants to go back to previous questions might cause a lot of problems in interactive studies.
  • Always check “Force Response” for decisions and responses that are submitted to others, otherwise participants might leave important decisions blank or might skip responding. Missing responses are interpreted as zero.
  • Set time limits on decisions to guarantee that other participants don’t have to wait too long. Qualtrics has a question type called “Timing” that allows you to advance participants to the next page when their time is up; you can add these timers to any page.
  • Put as much of the study as possible in the ‘non-interactive’ introduction (e.g., consent form, instructions, comprehension checks). Only match participants after they have completed these parts; doing so ensures that only sufficiently motivated participants are matched and start the interaction.
  • Keep the survey as short as possible: Studies that are shorter than 15-20 minutes have much lower attrition rates than longer studies. Compensation for interactive studies longer than 30 minutes should be substantially higher than regular wages to prevent attrition (over $20/h).
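If the built-in “Timing” question does not fit your design, the same time limit can be scripted per question. The sketch below assumes Qualtrics’ documented question JavaScript API (Qualtrics.SurveyEngine.addOnload and clickNextButton); scheduleAutoAdvance and TIME_LIMIT_MS are our own names, not part of Qualtrics or SMARTRIQS.

```javascript
// Sketch only: auto-advance a participant when their decision time is up.
// scheduleAutoAdvance and TIME_LIMIT_MS are hypothetical names.
const TIME_LIMIT_MS = 60 * 1000; // e.g., a 60-second decision window

function scheduleAutoAdvance(question, limitMs, timer = setTimeout) {
  // When the limit elapses, click "Next" on the participant's behalf.
  return timer(() => question.clickNextButton(), limitMs);
}

// In a real survey, something like this goes into the question's
// JavaScript editor ("Add JavaScript" on the question):
//
// Qualtrics.SurveyEngine.addOnload(function () {
//   scheduleAutoAdvance(this, TIME_LIMIT_MS);
// });
```

Injecting the timer function makes the helper easy to test outside Qualtrics; inside a survey, the default setTimeout is used.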

Survey Flow (embedded data and survey blocks)

  • ALWAYS change the studyID after changing any of the main parameters (number of stages, roles, conditions, or group size). Otherwise, SMARTRIQS will not be able to match participants.
  • Do NOT use special characters in the names of studies, roles, conditions, or other embedded data. Use only alphanumeric characters (a-z, A-Z, 0-9), dash ( - ), and underscore ( _ ). Also, remember that the names and values of embedded data (e.g., researcher ID, study name, roles, conditions, operations, etc.) are always case sensitive.
  • Let SMARTRIQS randomize roles and conditions (instead of manual randomization in the Survey Flow). This guarantees much shorter waiting times during matching than when roles and conditions are randomized manually in Qualtrics. For more detail, see Randomization.
  • Always define the required embedded data BEFORE the MATCH, GET, SEND, or CHAT blocks. Otherwise, SMARTRIQS won’t be able to save the results in Qualtrics.
  • When defining the required embedded data for the SEND block, first define the response you want to send as embedded data, then set sendData to that embedded data in a separate ‘Set Embedded Data’ element.
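The naming rule above is easy to check programmatically. This sketch validates a name against the allowed character set (a-z, A-Z, 0-9, dash, underscore); isValidName is our own helper name, not part of SMARTRIQS.

```javascript
// Sketch: check that a study / role / condition / embedded-data name
// uses only the characters allowed above: letters, digits, dash, underscore.
// isValidName is a hypothetical helper, not part of SMARTRIQS.
function isValidName(name) {
  return /^[A-Za-z0-9_-]+$/.test(name);
}

console.log(isValidName("Study_2024-B")); // → true
console.log(isValidName("my study!"));    // → false (contains a space and "!")
```

Remember that even valid names are case sensitive, so "Buyer" and "buyer" are two different roles.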

Testing, data collection, & data analysis

  • ALWAYS test your survey before collecting data and make sure that everything works as intended. Open the study URL on multiple devices and test if you can complete the study without receiving any error messages.
  • Keep track of participants’ messages, monitor their progress, and launch / stop batches whenever necessary. With quick responses and active management, many potential dropouts can be retained should technical or other issues arise.
  • If you are using the default server, do NOT ask participants to submit any personal information (email, name, phone, etc.), see Data Policy.
  • To make the chat logs even more user-friendly, open the results in Excel, select all chat logs, then use the ‘find and replace’ tool to replace <br> tags and &nbsp; entities with blank spaces.
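The same cleanup can be scripted instead of being done by hand in Excel. This sketch assumes the chat logs contain HTML <br> tags and, possibly, &nbsp; entities; cleanChatLog is our own helper name.

```javascript
// Sketch: clean a chat-log cell by replacing <br> tags and &nbsp; entities
// with blank spaces, mirroring the Excel find-and-replace step above.
// cleanChatLog is a hypothetical helper, not part of SMARTRIQS.
function cleanChatLog(log) {
  return log
    .replace(/<br\s*\/?>/gi, " ") // line breaks -> spaces
    .replace(/&nbsp;/g, " ");     // non-breaking spaces -> spaces
}

console.log(cleanChatLog("Hi!<br>Ready?&nbsp;Yes"));
// → "Hi! Ready? Yes"
```

Scripting the cleanup is especially handy when you re-export the data several times, since the replacements are applied identically on every export.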