New Prolific Features

Thanks for the opportunity to send my feedback. I responded to your survey and signed up for the beta testing. I do hope that a trust system of some sort comes online. And I love the option of being able to use prescreened-information rechecks as a sort of attendance check.

As I wrote in response to your survey, however, flagging and denying access to the platform may be rather too strict, since researchers sometimes want access to a wider, representative sample of both the diligent/attentive AND the not-so-diligent-and-attentive, particularly if they are researching things related to diligence and attentiveness.

I also hope, by the way, that you provide a Japanese localisation, since there is currently no Prolific in Japan (only a click****er-type general-purpose thing), and I would love access to Japanese respondents. I might be able to help with that should you wish; I speak and write Japanese.

And, another idea…
If respondents are to be rated by researchers, I think it would be nice if respondents were given the opportunity to rate researchers.
E.g. negatively: asks for private information, does not pay enough, asks really boring questions, asks really difficult questions.
Or positively: asks interesting questions, provides debriefing, questions are clear and easy to answer.
While perhaps few respondents will pay all that much attention to the researcher ratings, it would at least preserve the reputation for fairness that Prolific currently, and rightly I believe, enjoys.


Tim, wow, thank you very much! It’s a small world: I studied Japanese at Hiroshima Daigaku. There are no current plans to expand to Japan, but I’ll feed this back to the team.

I’ve just read through your responses in the survey; they give us lots of insight. Thank you.

A lot of your views are similar to those of other researchers we’ve been in contact with. We’re trying to shape the principles that guide us in giving participants access to Prolific. Currently we find that honesty and reliability are the foundations, and that they supersede comprehension, accuracy, and diversity; those become more important once a participant is validated as honest. Our current way of handling participants is that if they’re repeatedly dishonest, we remove their access as a participant.

We’ve been trialling an internal trust measure which helps us understand the honesty of a participant.
We’ve also been trialling a researcher and study rating from participants. Would you find it useful to get this feedback too?


I am all for this! As an example, I have an attention check in my study where I ask participants to listen to two music excerpts and tell me what instrument they hear. The music excerpts are identical, playing only guitar music. When a participant then clicks on “piano” and “triangle”, it’s clear that this is beyond being a bit inattentive. I used these checks to screen out the participants at the beginning of the study, and then asked them to return their submission. So I didn’t need to reject them nor pay them. This might be why it didn’t occur to me that I should report them, in addition to the fact that my attention checks aren’t the typical “Please click on this specific response”.

I have a few control questions as well that do not screen out participants, and I pay them regardless of how they respond to these. However, based on their answers, I might have to exclude them from the data set later. These control questions are not that black and white though. I have a cut-off, where values beyond this threshold suggest that participants are either trolling or just responding randomly. I can tell you my reasons for choosing that cut-off (the theory behind it), but there is still room for interpretation. Would it be at all helpful to report behaviour that is highly indicative of trolling/random answers, even though I cannot guarantee it?
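A cut-off of this kind could be sketched roughly as follows. Everything here is hypothetical, not the actual control questions from my study: the participant IDs, scores, and threshold are invented, and within-person variability is just one possible criterion.

```python
import statistics

# Hypothetical control-question scores on a 1-5 scale; the IDs,
# values, and cut-off below are illustrative only.
responses = {
    "p01": [4, 5, 4, 4],   # consistent answering
    "p02": [1, 5, 1, 5],   # erratic -> suggests trolling/random clicks
    "p03": [3, 3, 4, 3],
}

CUTOFF = 1.5  # assumed threshold on within-person standard deviation

def flag_random_responders(data, cutoff):
    """Return participant IDs whose answer variability exceeds the
    cut-off, i.e. behaviour highly indicative of random responding."""
    return [pid for pid, scores in data.items()
            if statistics.stdev(scores) > cutoff]

print(flag_random_responders(responses, CUTOFF))  # -> ['p02']
```

Values just under the threshold stay in the data set, which is exactly the “not black and white” point: the cut-off flags only behaviour beyond reasonable interpretation.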


We’d certainly like to hear about participants who you feel aren’t providing good quality data. We’re actually currently testing a way that would allow you to report them easily. If you have some time, let us know your thoughts on our concept :slight_smile:


I am very glad to have been of use. Here are two more feature ideas, A and B

A) Alipay or some other AliExpress-compatible payment method, since AliExpress sellers do not always accept PayPal, and AliExpress is where many people do their lower-price-point online shopping: for Chinese-produced products at least, prices are generally lower than on eBay.

B) Bespoke survey links for non-Prolific members, so as to

  1. financially reward students and other survey participants recruited outside of the Prolific respondent team,
  2. thereby allowing researchers to use their own contacts and subject pool of interest,
  3. and additionally, recruit more people to take part as Prolific respondents from such subject pools (in my case, Japanese people).

To do this, the bespoke survey link would jump to a form where the respondent would be requested to fill in the minimum amount of information to become a new Prolific member (ideally not requiring a PayPal/Alipay account initially) and then be directed to the survey.
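One minimal way to sketch such a link is to sign the study identifier, so the signup form can verify which survey to forward the new member to. This is purely illustrative: the domain, secret, and parameter names are invented and are not a Prolific API.

```python
import hmac
import hashlib
from urllib.parse import urlencode, urlparse, parse_qs

# Hypothetical platform-side secret; a real deployment keeps this private.
SECRET = b"platform-side-secret"

def bespoke_link(study_id: str, base: str = "https://example.invalid/join") -> str:
    """Build a recruitment link whose signature ties it to one study,
    so the minimal signup form knows where to send the new member."""
    sig = hmac.new(SECRET, study_id.encode(), hashlib.sha256).hexdigest()[:16]
    return f"{base}?{urlencode({'study': study_id, 'sig': sig})}"

def verify_link(url: str) -> bool:
    """Check the signature before showing the signup form and
    redirecting the respondent on to the survey."""
    qs = parse_qs(urlparse(url).query)
    study, sig = qs["study"][0], qs["sig"][0]
    expected = hmac.new(SECRET, study.encode(), hashlib.sha256).hexdigest()[:16]
    return hmac.compare_digest(expected, sig)
```

A tampered or forged link fails verification, so only researcher-issued links reach the signup-then-survey flow.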



We’re loving the ideas, keep ’em coming!

A) We don’t have any current plans to use services beyond PayPal, but if we do expand into new international markets, it’s certainly something we’ll have to look at.

B) Funnily enough, this is something that has been requested several times by researchers. As the idea seems to be quite popular, we’re looking at how we might implement something like this. I’ll keep you updated if we launch something. We’d love to get your feedback if we do :slight_smile:

  1. A way of rewarding participants (not researchers) for recruiting other participants, especially in areas, such as Japan, where there are few participants; e.g. “tell a friend and get a dollar when they have completed a survey.”

  2. A way for researchers to fund (1), such that it is the researchers, rather than Prolific, who pay to increase the respondent subject pool in areas where more subjects are needed. E.g. when a subject finishes a survey they could be given a researcher-funded “share this link with friends [from a certain demographic] to get a bonus.” I’d happily pay an extra dollar per subject to get Japanese data and get more Japanese participants on board. For Prolific it would be win-win.

The existence of a quality data pool will bring researchers to Prolific.

  3. The above can be abused, and perhaps the current system is being abused, by people using VPNs/sock-puppet multiple accounts, so some checking of accounts that always seem to respond to the same surveys might be in order, if it is not already carried out.

  4. Qualtrics, Gorilla, and Google can provide standard Likert-type surveys (and Gorilla can provide more), but while Implicit Association Tests (IATs) are becoming more popular, there is currently not a lot of provision for running these tests online. Millisecond offers a web version of its software for 3,000 dollars per lab. If Prolific offered a survey-creation interface it would make using Prolific easier, and if there were IAT-type tests, that would be pretty unique.

With regard to (4), I have some obsolete PHP- and Flash-based software that I would be happy to donate (it is not really mine to donate, since it is open source, but I paid for it and can tell you where to find it). I don’t think it would be that difficult to convert it to PHP and HTML5 (if that is what people use these days rather than Flash).

I like trying to think of improvement ideas. I hope these are not useless.


Not useless at all! We love hearing new ideas, so keep them coming :slight_smile:

  1. We did have a referral scheme for participants, but people tended to recruit others of similar demographics, whereas we wanted to diversify our sample offering, so we stopped the scheme.
  2. That’s a very interesting approach. We hadn’t considered whether researchers would be willing to pay to recruit their own samples. We’ll keep this in mind when we’re considering ways of expanding our demographic offering.
  3. We do carry out a suite of data quality checks, including VPN/IP address monitoring. We outlined our methods here, and we had an AMA with our Data Team about this a few weeks ago.
  4. We’re actually looking into expanding Prolific’s current offering. I can’t say too much about it yet, but there are some things in the pipeline :wink:

Please do keep these ideas coming, we love ’em!

Hi Daniela, totally agree! I also feel that developing a feature like that is super important. It could be either (1) a dynamic version of the “Participation on Prolific” filter that would allow marking ongoing studies on the list, or (2) an option to add people to the “Custom blocklist” in ongoing studies. This is necessary to facilitate running multiple studies simultaneously without manually filtering out duplicated submissions. As researchers, we usually aim to examine multiple subjects, not receive multiple responses from the same subjects, and for now there is no way to guarantee that across multiple ongoing studies. When running multiple versions of the same experiment, checking these manually is quite cumbersome.


Hi Josh,

Perhaps this has already been addressed, as I am accessing the platform again after a long time: why am I forced to choose between a representative sample (e.g. a UK sample) and custom pre-screening?
For example, I would like to have a representative UK sample but also screen participants on whether or not they are resident in the UK.

I hope I made that clear, and apologies if there is a very easy and quick solution!



Hi Elia, welcome to the community!

Unfortunately, you can’t combine the two tools, because we’re not confident that we’d be able to deliver a representative sample for combinations. For example, we can provide a sample stratified by age, sex, and ethnic group, but if someone also wanted to stratify by political persuasion, it becomes more difficult to find the proper proportions for each group.

But, in your case, if you’re using a UK rep sample, those who respond will be UK residents.

I hope that makes sense! Let me know if I can help further :slight_smile:

(P.S. Do you know someone who uses Prolific, but isn’t on the forum? Get them to register and introduce them, then you could win £££ in Prolific Credits! Click here to find out more)


Thank you for that attention-check suggestion and the way of implementing it prior to the survey. Great.


Thank you. AMA is the motto of my English conversation classes. I did not know it was a well-known acronym.

  1. Participants should be allowed to delete timed-out and rejected submissions from their list of submissions, so that they feel buoyed by looking at it.
  2. A database of permitted attention checks, visible only to paying researchers, of course.
  3. A pro-forma informed consent form with boxes for risks or other things that might change.
  4. The ability to quiz the subject pool to see how many respondents of a certain demographic there are without having to make a test survey.
  5. The ability to save favourite demographic settings for future surveys.
  6. An easier way of downloading “About You” information on the participants in our survey (if we are given access to this information – I can’t find it). If we are not currently allowed access to this information, then please could we access it? Or perhaps access it with a bonus payment. Or perhaps those participants who allow access to their About You information might be included as a demographic selector, so that such participants can be given priority. (We can already access this information by making multiple studies, changing the demographic a little each time based on the About You information.)
  7. An additional demographic selector that allows researchers to choose participants who take longer than x percent of the median completion time. It is generally the people who finish quickly that provide the poor responses. I am aware that participants could deliberately delay their responses to get around this, but they might not be bothered.
  8. An AI estimated survey completion time of a certain demographic at a certain level of funding (tricky to implement perhaps).
  9. In the future, when surveys on Prolific exist, a way of giving bonuses to those that pass, or gain above a certain mark on fuzzy attention checks.
  10. A quicker way of creating a whitelist from those that have completed previous surveys, so that we can inter-correlate our research.
  11. A pink list of participants (perhaps from 10, previous respondents) who are given X hrs to respond before the study is open to everyone.
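Idea 7 could be sketched as a simple filter. The completion times and the x = 10 margin here are invented for illustration only:

```python
import statistics

# Hypothetical completion times in seconds for five past participants.
times = {"p01": 95, "p02": 310, "p03": 280, "p04": 120, "p05": 305}

def slower_than_median(times, pct):
    """Keep participants whose completion time exceeds the median by
    more than pct percent, dropping the suspiciously fast finishers."""
    threshold = statistics.median(times.values()) * (1 + pct / 100)
    return sorted(pid for pid, t in times.items() if t > threshold)

print(slower_than_median(times, 10))  # -> ['p02']
```

Using the median rather than the mean keeps one very slow (or very fast) outlier from shifting the threshold for everyone else.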

@timtak you’re a star :star_struck:

These are such great ideas! I particularly like 2, 3, 5, 7, 8, 9 & 11. I’ve added them to our list of ‘Great Community Ideas’, and when we have the time, we’d love to implement some of them.

For some of your ideas, we already have fixes/functionality:

An easier way of downloading “About You” information on the participants in our survey (if we are given access to this information – I can’t find it).

  • We’ve written a guide on how to export demographic info here. Is that what you’re after?

A quicker way of creating a whitelist from those that have completed previous surveys, so that we can inter-correlate our research.

  • We have a filter called ‘Previous Studies’ which allows you to choose participants from any of your previously run studies. So, there’s no need to export their IDs, and upload them like you would do for a custom allowlist.

Hope that helps!

Just double-checking, you’re in our user-testing group right?


From my side, I put forward the following proposals for new features of the platform: :memo:

  • Allowing for simultaneous interactions between participants, so as to have a way to match participants, at least in pairs. Thinking about two-person interactions, this would mean that subject 1 sees on her screen the input of subject 2 to a certain question (of course, after a waiting time), and vice versa. I don’t know how difficult this would be to implement, but for some kinds of research it would definitely be a plus.

  • Asking participants for a routine check of the information they’ve provided in the “About You” section. It might be important, since opinions, but also statuses, may change over time.

I am not aware whether the above proposals have already been addressed by the Prolific Team. If so, please let me know. Thanks :grin:

These are great ideas!

Allowing for simultaneous interactions between participants, so as to have a way to match participants, at least in pairs.

  • This can already be done via some external survey platforms like Qualtrics. We don’t yet have survey functionality, but if we do, this is a feature we’d like to implement.

Asking participants for a routine check of the information they’ve provided in the “About You” section. It might be important, since opinions, but also statuses, may change over time.

  • We do this already, but I think we can make the notifications more prominent :slight_smile:


I was thinking of a feature that allowed you to pay participants for a minute of their time (to read the information and answer comprehension checks) but then screen them out if they do not seem to comprehend the information properly. That way there is a means of paying them for their minute, and they do not then waste their time (and study funds) filling in information that is unusable because they have failed the comprehension check.



This is a great idea!

Currently, this is possible if you ask them to ‘return’ their submission, and then pay them a bonus payment to compensate them for their time. But, this process could certainly be streamlined! I’ve added it to the list of great community ideas. We’re hoping to action some of them soon :slight_smile:

Thank you. That is very useful, but I was thinking of the whole caboodle: all the data on the “About You” page, from religion to mental illness to answers to e.g. “How often do you engage in physical exercise per week?”

Thank you! I had not noticed that.

I am not sure.


Thank you. That is very useful, but I was thinking of the whole caboodle: all the data on the “About You” page, from religion to mental illness to answers to e.g. “How often do you engage in physical exercise per week?”

Ah, right! Unfortunately, data privacy laws don’t allow us to do that. Participants have to opt in to the data that they provide. So, we can only provide the data that researchers request via the filters.

I am not sure.

We’d love for you to join our beta-testers group. That way you’ll get early access to some of our new features, and you can continue to give your great insights in user research interviews. Let me know if that sounds appealing :slight_smile: