Hi,
I want to employ participants recruited from Prolific in an interactive situation: a pair of people will communicate in real time, or ‘live’, by sending messages to one another.
In order to manage this, I wish to recruit about 240 individuals to begin simultaneously so that they can be randomized into treatment groups, and then further randomized into pairs. The two individuals within a pair will interact with one another.
All these aspects of the study are already programmed in oTree, but I don’t understand how to use Prolific to get all of them to begin at the same time.
Dear Savreen
Welcome to the forum 
Sorry to be slow.
The most relevant FAQ on this issue is probably this
I think it is going to be very difficult to run all 120 of your pairs at the same time; rather, you are going to have to set up a few time slots and invite participants to say at which ones they are available. To do this you may wish to use a scheduling service such as Calendly.com.
You may wish to prescreen (which will cost you) for those who are available at the time slots, or just keep adding time slots until all your pairs are filled.
Please see also the search results for Calendly here for issues that others have had with scheduling live studies:
https://community.prolific.co/search?expanded=true&q=Calendly.com
Best,
Tim
Yamaguchi University
I will echo timtak’s comment that, as far as I can think of, there’s no way to ensure people are interacting at the same time unless you have them schedule times in the future. So maybe 100 people sign up on 4/1 and indicate time slots, and then your system randomly pairs people who match on time availability and sends follow-up messages with the information they need to enter the chat space (I assume you don’t mean using the Prolific chat). For this, you’d probably pay them a small amount for the original sign-up, and then pay the full study fee once they finish their interactive chat. (In theory you could also try to recruit only people who are all available at the same time, so you can manage the randomization in real time, but I imagine that would be a lot more expensive, since you’d have to pay for the screening and probably only a minority of respondents would be available for the study.)
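If it helps to make that matching step concrete, here is a rough sketch of random pairing within a time slot, under the assumption that your sign-ups can be exported as (Prolific ID, time slot) records; all the names and formats here are hypothetical:

```python
import random
from collections import defaultdict


def pair_by_slot(signups):
    """Randomly pair participants who chose the same time slot.

    `signups` is a list of (prolific_id, time_slot) tuples; this format
    is an assumption about how your sign-up data is exported.
    Returns (pairs, unmatched).
    """
    by_slot = defaultdict(list)
    for pid, slot in signups:
        by_slot[slot].append(pid)

    pairs, unmatched = [], []
    for slot, pids in by_slot.items():
        random.shuffle(pids)
        # Pair people off two at a time; an odd person out stays unmatched.
        while len(pids) >= 2:
            pairs.append((slot, pids.pop(), pids.pop()))
        unmatched.extend(pids)
    return pairs, unmatched


# Example usage with made-up IDs and slots:
pairs, unmatched = pair_by_slot([
    ('id1', '17:30'), ('id2', '17:30'), ('id3', '18:30'), ('id4', '17:30'),
])
print(pairs)      # e.g. [('17:30', 'id4', 'id1')]
print(unmatched)  # the leftover 17:30 person plus 'id3'
```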
The one caution I should give is that people may not show up for the later time slot. A system that sends them a reminder (and, perhaps, also reminds them of the expected compensation) leading up to their scheduled slot might be helpful.
Hi Savreen,
In addition to what others have suggested, I think perhaps the best option is to match participants into pairs based on their time of arrival in the study. See here: Advanced Grouping — An introduction to oTree
(Note that in this case not all participants have to start at the same time; only two participants need to arrive in close temporal proximity, depending on how long they should wait for a match.)
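For reference, arrival-time grouping looks roughly like this; a minimal sketch assuming oTree 5's no-self format, with placeholder names such as 'live_chat' and a bare Chat page standing in for your actual live-chat page (its template is omitted here):

```python
from otree.api import *


class C(BaseConstants):
    NAME_IN_URL = 'live_chat'   # placeholder app name
    PLAYERS_PER_GROUP = 2       # pairs
    NUM_ROUNDS = 1


class Subsession(BaseSubsession):
    pass


class Group(BaseGroup):
    pass


class Player(BasePlayer):
    pass


class PairingWaitPage(WaitPage):
    # Instead of fixed groups, form a group of 2 as soon as
    # two participants have reached this page.
    group_by_arrival_time = True


class Chat(Page):
    pass  # your live chat page; template not shown


page_sequence = [PairingWaitPage, Chat]
```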
If participants only need to be matched once, this is likely to be successful because in my experience there is usually at least one other person who is currently participating in the study. The only case where this could become problematic is towards the end of the study (when almost all the slots are filled), because then only a few new participants are joining and thus finding a “free” partner becomes difficult.
Thank you for your helpful replies @timtak, @rslbliss and @econauofc; I am grateful for your input! Apologies for the late follow-up, the study took a detour but I am now back on the issue of recruiting participants.
I will try to run a few small pilots (with four or eight participants) in the following way: participants sign up for a time slot, and then they will receive the web link and a reminder email. I imagine that there will be significant attrition between sign-up and participation in the study.
@Savreen_Kaur_Nanda I would be interested in hearing how this experiment went. I have set up an oTree experiment that involves 100 participants organised in a social network who need to complete a task in pairs, over 4-6 subsequent pairings with other participants. Not all participants play at the very same time, but the pairs follow a rigid sequence that allows me to end the entire experiment in approx. 60-70 minutes. However, what is required is that all participants show up in the designated time window, and that seems to be the trickiest part. Thank you!
Dear Luca
Welcome to the forum 
The usual way to perform paired studies is to first call participants in (perhaps experienced and reliable participants with a very low fail rate) to schedule a time slot, using a scheduling service such as Calendly.com (mentioned above).
The problem (and it has been reported) is that participants get busy, forget, or for other reasons do not show up.
You might consider allotting 3 participants per slot, because there may be as much as a 33% drop-out rate, bearing in mind that you will have to pay all three (one of them for doing nothing) if all three show up.
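As a back-of-the-envelope check on that over-recruitment logic (treating the ~33% drop-out figure as an independent per-person show-up probability of 0.67, which is of course a simplification):

```python
from math import comb


def p_at_least(k_needed, n_invited, p_show=0.67):
    """Probability that at least k_needed of n_invited invitees show up,
    assuming each shows up independently with probability p_show
    (0.67 is just 1 minus the ~33% drop-out rate mentioned above)."""
    return sum(
        comb(n_invited, k) * p_show**k * (1 - p_show) ** (n_invited - k)
        for k in range(k_needed, n_invited + 1)
    )


# Chance that a slot yields at least one usable pair (2 show-ups):
for n in (2, 3, 4):
    print(f'invite {n}: P(at least 2 show up) = {p_at_least(2, n):.2f}')
```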
Alternatively, you might ask participants to show up early to a waiting room and pay them extra for their time there, aiming to keep a queue of participants in the waiting room so that pairs can start as people show up. Again, this would require more payment than the ideal situation in which pairs turn up promptly at just the right time.
You can send participants emails via their Prolific email address, but I am not sure if they are instant. You are not allowed to ask for personal email addresses.
I have not done a paired study and only hear about them when something goes wrong. Perhaps someone will chip in.
Here’s the Prolific FAQ article thing
I am sorry I can’t help more.
Tim
Hello @Luca_Onnis.
My study went well.
First, I recruited participants using the built-in survey tool. In this initial survey, I provided a description of my study, the expected completion time, and the payment. Within the initial survey, I asked participants to confirm that (1) they would be available during a particular time slot on that day, and (2) they would like to participate in my study.
The cost associated with this recruitment was small because it took participants less than 1 minute to sign up. I would easily get 250 participants to sign up if I posted this recruitment call 1.5-2 hours before the study began.
Second, I used the responses to this initial survey to quickly create a list of IDs of participants who said they were available. I included this list in a custom screener, and about 5-10 minutes before the main study was to begin, I published a separate study on Prolific which was visible only to those participants who had signed up via the initial survey. I also sent a message to these participants via the initial survey to inform them that the study had been published, reiterate the name of the study, and ask them to join the online waiting room by the specified time (which was also stated with great emphasis in the initial survey; I repeated the time and date 2-3 times).
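To illustrate that ID-list step, here is a minimal sketch that assumes the initial survey responses are exported as a CSV with columns named `participant_id` and `available`; the file name and column names are hypothetical and will depend on how your survey records the Prolific ID:

```python
import csv

# Hypothetical export of the initial sign-up survey; the column names
# are assumptions and depend on your survey tool's export format.
with open('signup_responses.csv', newline='') as f:
    rows = list(csv.DictReader(f))

available_ids = [
    row['participant_id']
    for row in rows
    if row['available'].strip().lower() == 'yes'
]

# Prolific's custom allowlist screener accepts a pasted list of IDs
# (newline- or comma-separated, last I checked), so write them out
# ready to paste into the screener.
with open('allowlist.txt', 'w') as f:
    f.write('\n'.join(available_ids))

print(f'{len(available_ids)} participants to allowlist')
```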
On average, 50% of the participants who signed up would join the waiting room on time.
Example: I told them to join the online waiting room by 5:30 PM (it is helpful if all your potential participants live in the same time zone), and also told them that the study would begin at 5:32 PM sharp. Sometimes I would create the oTree session a minute or two late, so that more people would join, but even this small delay would prompt a lot of messages, and many participants would leave because they thought there was some technical problem with the study. It is best to create the session at exactly the time you give them.
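If you prefer not to click “create session” by hand at exactly the right moment, one option might be to script it against oTree 5’s REST API; a rough sketch in which the server URL, REST key, session config name, room name, and start time are all placeholders:

```python
import time
from datetime import datetime, timezone

import requests

SERVER_URL = 'https://your-otree-server.example.com'  # placeholder
REST_KEY = 'your-rest-key'                            # value of OTREE_REST_KEY

# Example start time only; use your advertised time and time zone.
start_at = datetime(2024, 4, 1, 17, 32, tzinfo=timezone.utc)
time.sleep(max(0, (start_at - datetime.now(timezone.utc)).total_seconds()))

# Create the session so the room's waiting participants can start on the dot.
resp = requests.post(
    f'{SERVER_URL}/api/sessions/',
    json={
        'session_config_name': 'live_chat',  # placeholder config name
        'num_participants': 240,
        'room_name': 'prolific_room',        # placeholder room from settings.py
    },
    headers={'otree-rest-key': REST_KEY},
)
resp.raise_for_status()
print('Created session:', resp.json())
```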
Some more notes from my experience:
- Participants don’t like to wait. There were a couple of wait pages in my study, and the longest waiting time was about seven minutes. This waiting time was experienced by those who finished the first part of the study quite quickly. I got many messages while they were waiting, and it is best to keep an eye on your messages to reassure them that the study is progressing as expected and that they should continue waiting.
- oTree waiting rooms and wait pages can be faulty. Some participants who were in the waiting room on time would be told that the ‘session is full’, for reasons I didn’t understand. This is a problem with oTree that I failed to fix; I think it may be associated with poor internet connections and participants who refreshed the waiting-room page at the wrong time.
Other participants got stuck on wait pages within the study, again for reasons I don’t understand. For the vast majority of participants everything worked smoothly, but it is a hassle nonetheless, because those who got stuck would complain, and then it is our onerous job to determine whether their concerns are legitimate and make sure that they are compensated for their time.
Good luck with your study! 