@Chloe_Nguyen , on second thought, wouldn’t this violate Prolific’s terms which call for all eligible participants to be paid, based on pre-screening? In other words, if I withhold the payment for the screener until they complete Study 2, isn’t that essentially withholding compensation via a custom screener?
I was just going to ask exactly the same question! My duplicate button is greyed out too. I am sorry I don’t have an answer either. It is nice to know I am not the only one though. Perhaps something is going on at Prolific, preventing duplication, right now!?
I have rejected one submission. Perhaps until that dispute settles I am not allowed to Duplicate. But your study is completed so that is probably not the reason.
I opened the old study in one window and created a new study in another, then copied all the details across, set up the same prescreeners, and added the extra prescreener (which would have been added automatically had I been able to duplicate) to exclude participants from the study I had wanted to duplicate. It took less than ten minutes, but if anyone knows why our “Duplicate” button is greyed out, please let us know.
That’s a good point. However, Prolific actually has mentioned this in their article so I think we’re safe to go.
Can I delay payment until all parts of the study are completed?
If your study consists of multiple waves, you can wait until the final wave is complete before paying each wave, provided that the waves are within 21 days. To do this, you would leave the submissions from the earlier studies as ‘awaiting review’ until you are ready to approve them. Submissions will be automatically approved after 21 days if they are still awaiting review at this point.
Please specify that payments will be delayed until completion of all parts in your study description, so that participants are fully aware of the requirements in advance of agreeing to take part.
If the waves span a longer period, all participants from each stage will need to be approved within 21 days.
I’m afraid I have a question again. I’m conducting a multi-part (i.e. two-part) study and realised a little too late that I needed a higher balance to run the second part. I topped up my balance by transferring money (from a Dutch bank to Prolific) and unfortunately it still hasn’t arrived. I am aware that this may indeed take 5 working days (it has been three now), but I am a bit stressed as the second part of the study cannot be delayed beyond Wednesday (tomorrow).
So my question: I believe that when I top up my account by credit card, the money is credited to my account immediately. So I am considering topping up the same amount again by credit card, hoping that it will appear on my Prolific balance right away. Is this allowed? Because there will be quite some money that I’ll have to refund after the study. And is it indeed a safe bet that the money will be on my account immediately if I use my credit card?
Once a participant has “reserved” a place on a study, how long do they have until they must begin it?
I’m having issues with participants starting and then lingering on the consent page / first few questions, which holds up the chatrooms and also makes their submission time far longer than it should be. This happens despite the fact that I state clearly in the description that they must join the chatroom within 5 minutes of starting the study.
I have noticed that recruitment seems a lot slower when asking for fewer participants, e.g. I have previously recruited 700+ participants with many prescreening criteria in a few days, with 20-50 taking part at once. But when asking for just 5 participants with only a couple of simple prescreening criteria (leaving 40,000+ pool) I have gone 35 minutes with no one showing up… this is for a study that paid a similar rate, took a similar time, and was recruiting at the same time of week/day. I also noticed this when previously using Mturk, that recruiting larger numbers was quicker than smaller numbers, counterintuitively. Is it because Prolific prioritises studies that require more participants somehow? And if so, can I get around this?
How much of the Prolific Profile do participants have to fill out to take part? e.g. If I restricted my sample to UK residents, would it be the case that some UK residents would be excluded simply because they hadn’t filled in that part of their profile yet and had left it blank?
Finally, because 2-3 participants were oddly taking 20+ minutes to answer some questions, giving them a submission time of over an hour despite the study usually taking only 25 minutes, my average reward per hour decreased. Does this then get displayed in the advert for future participants, potentially putting them off?
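On the reward-per-hour point, an average-based rate is easily dragged down by a few slow outliers. A quick sketch of the arithmetic (illustrative numbers only; I don’t know Prolific’s exact formula):

```python
# Illustrative only: how a couple of slow submissions drag down an
# average-based hourly rate. All numbers are made up.

reward_gbp = 3.75  # flat reward per submission
# 8 typical ~25-minute submissions plus 2 slow outliers (minutes)
times_min = [24, 25, 23, 26, 25, 27, 24, 25, 65, 70]

mean_min = sum(times_min) / len(times_min)
median_min = sorted(times_min)[len(times_min) // 2]

rate_from_mean = reward_gbp / (mean_min / 60)      # hourly rate via mean time
rate_from_median = reward_gbp / (median_min / 60)  # hourly rate via median time

print(f"mean time {mean_min:.1f} min -> £{rate_from_mean:.2f}/hr")
print(f"median time {median_min} min -> £{rate_from_median:.2f}/hr")
```

With these numbers, the two outliers pull the mean-based rate well below the median-based one, even though the typical participant experience is unchanged.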
Many thanks for any answers to any of these ! Or if anyone knows the best place to email / where else to ask for further clarification?
Can I refuse payment if one of the two is incorrect? Is that a fair attention check?
Can I, in future experiments, prevent those who fail these checks from continuing the experiment (i.e. terminate their participation mid-session and not provide any completion code)? If so, how can I implement this on Prolific (so that they drop out immediately, rather than timing out)?
Hi @Lotty , glad to hear things are coming together (somewhat)
10 minutes for a short study, but I don’t know whether this varies based on study length or any other factors. It would be good to have documentation on this included in the help pages (@Josh, could you pass that request along, or would it be best to make a post in the appropriate section of the forum? I could also be missing it due to user error…)
The prolific FAQ on attention checks says that a survey has to be 5 minutes or less.
What if they only fail one attention check?
A single failed attention check can only be used as a basis for rejection if your study is very short (e.g. under 5 mins).
I was wrong. We are now allowed to request that participants return their submission after one failed attention check. I presume that this must be within the first 5 minutes of the survey. Please see this exchange where the “new feature” is mentioned.
You can also request/demand that participants return their submission if they give information that differs from their prescreening answers, provided you use exactly the same wording as the prescreening question. In that case, too, you can use survey logic to direct them to a section where they are met with a request to return their submission. I find this quite an effective way of getting rid of random clickers.
You can also refuse participation in subsequent studies by creating your own custom blocklist of Prolific IDs, or by using participants’ approval history on Prolific and refusing those who have been rejected X times.
This is not quite what you hoped for, but I hope it helps.
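For the custom-blocklist idea, here is a minimal sketch of how you might keep your own list of Prolific IDs and check a study export against it (assumptions: a plain-text blocklist with one ID per line, and a CSV export with a participant-ID column; the column name and IDs below are made up, so adjust them to your own export):

```python
import csv
import io

ID_COLUMN = "Participant id"  # column name may differ in your export

def load_blocklist(lines):
    """Build a set of Prolific IDs from an iterable of lines, ignoring blanks."""
    return {line.strip() for line in lines if line.strip()}

def flag_blocked(csv_file, blocklist):
    """Return rows from a study export whose participant ID is blocklisted."""
    return [row for row in csv.DictReader(csv_file) if row[ID_COLUMN] in blocklist]

# Demo with made-up IDs; in practice, read these from your own files.
blocklist = load_blocklist(["5f0aaa000000000000000001",
                            "5f0aaa000000000000000002"])
export = io.StringIO(
    "Participant id,Status\n"
    "5f0aaa000000000000000001,AWAITING REVIEW\n"
    "5f0aaa000000000000000003,AWAITING REVIEW\n"
)
blocked = flag_blocked(export, blocklist)
print([row[ID_COLUMN] for row in blocked])
```

You would then paste the flagged IDs into the Custom Blocklist prescreener when setting up the next study.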
I published a study on Prolific about half an hour ago and do not have any submissions coming in yet. Usually I get the first submissions within a few seconds. Is anything going wrong, or have things changed?
I have experienced the same sort of lag too, when at other times studies have been published instantaneously. My interpretation was that a Prolific employee has to press a button to approve the study, and that at some times of day there are very few Prolific employees at work. It is now 11:44 pm, just before midnight in the UK, so that may result in a delay.
I think you mixed up am and pm there, it’s around noon in the UK now
Fortunately, the submissions have started coming in by now, although not nearly as fast as I’m used to. But at least I am confident now that everything is working properly and that I will reach my participant target today.
I would be surprised if a Prolific employee would have to approve every study that is published, especially since submissions used to come in within a few seconds before (unless this has changed).
Hi! I ran four separate “studies” simultaneously with the same Qualtrics link, each of which differed on one prescreening criterion (female versus male, smoker trying to quit versus smoker not trying to quit) to get a sample with a specific demographic breakdown. Has anyone had the experience of running the same study with different prescreeners and having the same participants complete it more than once because they changed their prescreening answers between the different studies on Prolific? Is this possible? I did not record IP addresses, so it’s not possible for me to check whether the survey was taken from the same household at different times.
It takes a while (I am not sure how long) to change answers to a prescreening question. A participant would have to delete their answer, which causes the question to disappear and effectively registers “no” to all options, then wait for the question to reappear before selecting one of the other options. So ordinarily it would not be possible to change a prescreener and then enter another simultaneous study, since studies generally fill up quickly (did yours?).
I have suspected in the past, however, that some very small minority of participants may be using VPNs and a variety of similar-but-different prescreening profiles to respond more than once, due to the similarity of some free-response answers, but that was only a hunch. Prolific uses algorithms to detect such fraud.
What made you feel that some of the participants may have been the same person?
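One quick after-the-fact check is to compare the participant-ID columns of your four study exports and list any IDs that appear in more than one. A sketch (assuming CSV exports with a participant-ID column; the column name and demo IDs are placeholders, so adjust them to your own files):

```python
import csv
import io
from collections import Counter

ID_COLUMN = "Participant id"  # column name may differ in your export

def ids_in_export(csv_file):
    """Collect the set of participant IDs in one study export."""
    return {row[ID_COLUMN] for row in csv.DictReader(csv_file)}

def repeat_participants(exports):
    """Return IDs that appear in more than one of the given exports."""
    counts = Counter(pid for export in exports for pid in ids_in_export(export))
    return sorted(pid for pid, n in counts.items() if n > 1)

# Demo with made-up IDs; in practice, open your four export files instead.
study_a = io.StringIO("Participant id\naaa111\nbbb222\n")
study_b = io.StringIO("Participant id\nccc333\nbbb222\n")
print(repeat_participants([study_a, study_b]))
```

An empty result would at least rule out straightforward repeat participation under the same Prolific ID, though not the VPN scenario mentioned above.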
Hello once again, Prolific community! Another question here:
Does the Custom Allowlist override the “Exclude participants from previous studies”?
For example, if I exclude all participants from Study 1 (using “Exclude participants from previous studies”), but I want to make one exception for one participant by adding them to the Custom Allowlist, will that work?
A related question: does the Custom Allowlist override the Custom Blocklist? (I see the “Custom Blocklist” is listed as a separate criterion, distinct from the “Exclude participants from previous studies” list.)