Note: It isn’t possible to increase places on a balanced study after it has been published.
I think it is also the case that it is impossible to use the “Duplicate Study” function on balanced studies. I am not sure, but for some reason I haven’t been able to duplicate recently. If so, then I suggest the above be changed to
Note: It isn’t possible to increase places on a balanced study, nor to duplicate it, after it has been published.
The missing “Duplicate Study” option is confusing people.
Better still, of course, would be the ability to duplicate gender-balanced studies. Especially given the lack of a one-study-multiple-conditions feature and the gender-asymmetrical post-TikTok demographic, the need to duplicate balanced studies is, imho, strong.
We need “Lab” accounts, where we can add lab members who can use the funds. We can currently move money to other accounts by contacting the Prolific team, but that is a billing/invoice nightmare for grant-keeping records. What would be great is a Lab account with members and a director/owner: members can spend the funds with the director’s project-by-project approval. That way, the money always stays in a single account and doesn’t float around in a way that auditors will find suspicious.
It would be nice to have an option for sending different messages in bulk, similar to bonus payments. I have an update for 50% of participants, and it would be great not to have to send it by manually ticking the message checkbox for every second one of them.
I have a similar issue when using my university messaging system to invite students to two conditions of an experiment. Clicking down every other student in a list of 120 or more can lead to mistakes.
So I have asked the maker of the Firefox add-on checkthemall to create a “check every other” option, but no good news on that front.
I am not sure how Prolific would feel about this, but it is possible to achieve a workaround by manipulating the completion code.
For example, you could add an extra character to one or both of the study completion codes given to participants in each condition at the end of the study, or replace a character in the code with one that is not usually used (so that it does not double up on another study). If the Prolific completion code is 43698XC7 and you add an A to make 43698XC7A, or change the C to a lowercase c to make 43698Xc7, then the modified code would, I think, be displayed in the completion code column of the submissions screen after a warning mark (telling you that the code is “wrong”).
This would enable you to reorder the submissions screen using the completion code, and then you could use checkthemall to select the relevant applicants.
I put the wrong code in a study once (from a previous study), and I was still able to pay everyone and complete the study without significant issue.
Are we allowed to manipulate the completion code in this, or any way? It would be nice if we are allowed to add up to X characters to the end of completion codes so that we can group participants into condition groups in this way.
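To make the suffix idea above concrete, here is a minimal sketch of how condition-tagged codes could be generated before handing them to each condition’s survey. The base code and condition names are made-up examples, not real Prolific codes, and this assumes Prolific tolerates modified codes as described above.

```python
# Hypothetical base completion code (as in the example above) plus a
# one-character suffix per condition, so submissions can be sorted by code.
BASE_CODE = "43698XC7"
CONDITION_SUFFIX = {"condition_a": "A", "condition_b": "B"}

def tagged_code(condition: str) -> str:
    """Return the base code with a condition-identifying character appended."""
    return BASE_CODE + CONDITION_SUFFIX[condition]

print(tagged_code("condition_a"))  # 43698XC7A
print(tagged_code("condition_b"))  # 43698XC7B
```

Each condition’s survey would then display its own tagged code, and sorting the submissions screen by completion code groups the conditions together.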
I don’t know how much more complicated the Prolific registration process would become, but I was recently thinking that it would be great to have some control over information that participants currently just self-declare. I guess the most important things to verify would be age and nationality, secondarily sex, and maybe English proficiency via a quick test (this suggestion is inspired by @Dani_Levine 's question of some days ago). Some verification may further increase the reliability of the data collected by researchers.
Love the idea @Veronica - we’ve paused our participant registrations for now, and part of the work we’re doing behind the scenes relates to how we better verify what they tell us. I’ll update you when I know more.
PSR has come up with a workaround for the lack of multiple experiment/condition links in Prolific in the form of a simple randomizer at https://allocate.monster/
It allocates randomly. It would be great if this were part of Prolific, and if there were an “alternate” option as opposed to random, since random allocation can still end up with different numbers in each condition.
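The difference between the two allocation schemes can be sketched in a few lines. This is an illustration of the general idea, not how allocate.monster actually works; the function and condition names are made up.

```python
import itertools
import random

CONDITIONS = ["A", "B"]

def allocate_random(n: int) -> list[str]:
    """Random allocation: the per-condition counts can drift apart by chance."""
    return [random.choice(CONDITIONS) for _ in range(n)]

def allocate_alternating(n: int) -> list[str]:
    """Alternating allocation: per-condition counts never differ by more than one."""
    cycle = itertools.cycle(CONDITIONS)
    return [next(cycle) for _ in range(n)]

groups = allocate_alternating(10)
print(groups.count("A"), groups.count("B"))  # 5 5
```

With random allocation a run of 10 participants can easily come out 7/3; alternating allocation guarantees the split stays balanced, which is exactly the “alternate” option requested above.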
I am not sure whether it is a bug, but would it be possible to increase the window size when writing a message (or make it responsive)? It is a nightmare to write a long message with several lines, as you cannot see the full message. (Perhaps it is a browser issue, but it seems to be the same in both Chrome and Firefox.)
Also, it would be nice to be able to include pictures in messages (for example, since some participants ask exactly where the attention checks were, it would be helpful to send them a screenshot).
Let me start by complimenting you on a great platform. There are a lot of prescreening questions on Prolific that help in selecting a good sample. However, I would like to suggest that Prolific increase the usability of these prescreening questions by introducing the possibility to combine two or more filters using OR arguments. An example would be defining the following population: one that has either taken at least one dose of a COVID-19 vaccine (defined by the question “Have you received a coronavirus (COVID-19) vaccination?”) OR that has had COVID-19 (defined by the question “Please describe your experience with the COVID-19 virus (coronavirus)”).
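In set terms, the requested OR filter is just the union of the two screeners, as opposed to the intersection that stacking filters currently gives. A tiny sketch with hypothetical participant IDs (the IDs and variable names are made up for illustration):

```python
# Hypothetical sets of participant IDs matching each prescreener.
vaccinated = {"p01", "p02", "p03"}   # took at least one vaccine dose
had_covid = {"p02", "p04"}           # reports having had COVID-19

eligible_or = vaccinated | had_covid   # requested OR: union of the screeners
eligible_and = vaccinated & had_covid  # current behaviour when stacking filters

print(sorted(eligible_or))   # ['p01', 'p02', 'p03', 'p04']
print(sorted(eligible_and))  # ['p02']
```

The union is strictly larger, which is the whole point: OR combinations open up samples that no single screener (or AND stack) can reach.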
I wrote about this earlier in the “Ask Anything Thread” and Tim suggested that I should write about it here instead.
I agree with this! I was worried that if I’m only allowed to define “rejectably fast” as more than 3 SD below the mean completion time, I’d want to make sure we were all in agreement about what the mean and SD actually are. Having these numbers calculated for us would reduce error and unnecessary rejections.
Somewhere (or in multiple places) in the researcher GUI, it may be a good idea to let researchers know that they will only have access to participants’ “About You” information if they use the corresponding screener when recruiting for the survey. Researchers, including myself, assume that the data Prolific holds on participants will be available for download, not least because adding screeners with all options selected seems a strange requirement just to enable that download. I misunderstood this requirement in the past, and at least one other researcher has more recently.
I’ve loved using Prolific so far, but one feature I very much miss is the ability to exclude participants from an ongoing study (rather than only from a previous, completed study). I am running the same experiment across multiple studies, each assigning participants to a different order within the experiment in counterbalanced fashion, and I do not want any individual to participate in more than one of these studies (they should do the experiment only once).
This could be done by running each study one by one, but that has several disadvantages. One is that I would like to run the experiment in small batches, with the same number of participants per condition in each batch. Another is that, to equalize things across conditions, it would be better to have participants in each condition run the study on the same day and at the same time of day.
This request seems to have come up several times already, e.g. here and there. I am bumping it up because I think it would be very useful!