New Prolific Features

If you could add any feature to the platform, what would it be?

Hi Researchers!

As a product team, we’re currently working on three major areas of our product:

  • Improving the usability of our custom prescreening filters
  • Making it easier to let us know when a participant provides poor data in a study, or does something they shouldn’t
  • Simplifying how you top up and manage your account balance

What are your big pain points or ‘wishlist’ items for these parts of our system? Let us know here!

Rob

4 Likes

Hi! I have two features I wish were available:

  • I wish that I could see how many participants have clicked on “Not interested”. I currently have a study with a custom allowlist, where I got the IDs for the list by running a pre-screening study. I am at a point now where I still have spots left, but the remaining participants on the list are not responding to the invite. I don’t know whether these participants have simply clicked on “Not interested” (and therefore cannot see the study anymore?). I don’t want to run the pre-screening study again to get more IDs if I can avoid it by just waiting, but I also don’t want to waste time waiting for participants who will never participate.

  • It would also be helpful to be able to reduce the number of places in a study without having to contact the support team each time.

Rebekka

4 Likes

Hi Josh,

I’ve used both ClickWorker and Prolific, and though I highly prefer Prolific for many reasons, there’s one thing I preferred on ClickWorker: the ability to include multiple experiment links within one order. I often have multiple counterbalanced versions of the same experiment, which leads to anywhere from 4 to 16 different links. It’s quite cumbersome to replicate an order 16 times, especially since they’ll all be sent to the same participant group. In the past I’ve found that participants took part in multiple versions of the same experiment, so I could only use the data from their first attempt, but still, of course, had to pay them for all attempts.

To avoid this, I have to wait for one order to be completed, check the data to see if participants passed the attention checks, then approve their Prolific submissions, and only then does Prolific give me the option to exclude those participants from my next orders. This means I can only run one list at a time. With 16 orders, that can easily take me several days, when data collection itself really takes only an hour or two! So it would be a BIG help to be able to submit a list of links (maybe as a .csv, like ClickWorker), indicating that I want e.g. 20 participants per link.
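Something like the sketch below is what I have in mind; the column names and the little reader are entirely my own invention, not an existing Prolific format:

```python
import csv, io

# Hypothetical sketch of the requested feature (not an existing Prolific
# format): one order defined by a CSV of study links, each with its own
# participant quota. Column names are invented for illustration.
order_csv = """study_link,places
https://example.org/experiment?version=01,20
https://example.org/experiment?version=02,20
https://example.org/experiment?version=03,20
"""

for row in csv.DictReader(io.StringIO(order_csv)):
    print(f"{row['places']} participants -> {row['study_link']}")
```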

I really love Prolific otherwise, but this one thing drives me a little crazy every time!

Best,
Daniela

6 Likes

Thank you very much for your feedback! I completely understand your frustration with that process, and this suggestion has come up from others before. I’m making a note of it and will flag it with our Product Team :slightly_smiling_face:

Glad you’re enjoying using Prolific otherwise!

These are great ideas! Having both those features would certainly make the study publication process much smoother. I’ll send this feedback to our Product Team :slight_smile:

I would like some sort of points-based trust system where researchers can award (or perhaps take away) trust points depending upon how subjects respond to surveys.

I have found that subjects generally respond correctly to the recommended attention checks of the form “To show that you are paying attention, please select 5 (extremely) for this question”.

However, when asked other questions, such as “Does Superman like to serve his community with sewage cocktails?”, quite a high proportion of respondents answered affirmatively, suggesting, to me at least, that they were not reading to the end of the question.

I can appreciate that there is room for interpretation in the latter type of attention check, so non-payment may well be too harsh a measure under Prolific’s rules.

I suggest, therefore, that while payment should be made, researchers might profitably be allowed to award, withhold, or even deduct subject trust points, depending upon how well subjects respond to such non-literal (and admittedly not foolproof) attention checks.

Other attention checks, such as giving similar (or at least not opposite) answers to identical questions, might be used in a similar way.

A trust-points system, coupled with the ability to restrict access to participants with positive trust points (or more than X points) when there are a lot of potential participants, would be very useful.
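To make the idea concrete, here is a rough sketch of the kind of bookkeeping I have in mind; all the names, point values, and the threshold are invented for illustration, and none of this is an existing Prolific mechanism:

```python
# Rough sketch of the suggested trust-points idea; every name and number
# here is invented for illustration, not an existing Prolific feature.
trust_points = {"participant_a": 3, "participant_b": -1, "participant_c": 0}

def record_check(participant_id, passed):
    """Award or deduct a point based on a non-literal attention check."""
    trust_points[participant_id] = trust_points.get(participant_id, 0) + (1 if passed else -1)

def eligible(min_points):
    """Restrict a study to participants at or above a chosen threshold."""
    return [pid for pid, pts in trust_points.items() if pts >= min_points]

record_check("participant_b", passed=True)  # they are still paid; only points change
print(eligible(min_points=1))               # ['participant_a']
```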

I have submitted this suggestion by email but I was wondering how popular this feature would be with other researchers.

In any event, I look forward to continuing to use Prolific, non-foolproof attention checks included, because the data is still worth more than I need to pay.

Thanks Prolific!

3 Likes

That is an interesting idea! We’re actually currently testing a reporting process which would allow you to report a participant without rejecting their submission, so you’d be able to pay them but also let us know that they’re not providing good quality data. I’ll let you know updates on that as soon as I get them. :slight_smile:

I wonder what our other researchers think about this? Let us know!

Hi Tim, thanks for your post and for the kind words about Prolific!

I work on the ‘Trust’ product squad at Prolific. Everything you’ve posted is a hot topic for us at the moment. We’ve got plans to review and develop our attention-check policy and guidance soon, and I’ll bring your points to the table then. It would be great to get your opinions on some somewhat related work that’s currently in progress: https://prolific2.typeform.com/to/qcyHTOFX

1 Like

Thanks for the opportunity to send my feedback. I responded to your survey and signed up for the beta testing. I do hope that a trust system of some sort comes online. And I love the option of being able to recheck prescreening information as a sort of attention check.

As I wrote in response to your survey, however, flagging and denying access to the platform may be rather too strict, since researchers would sometimes like access to a wider, representative sample of both the diligent/attentive AND the not-so-diligent/attentive, if they are researching things related to diligence and attentiveness.

I also hope, by the way, that you provide a Japanese localisation, since there is currently no Prolific in Japan (only a click****er-type general-purpose thing) and I would love access to Japanese respondents. I might be able to help with that should you wish; I speak and write Japanese.

And, another idea…
If respondents are to be rated by researchers, I think it would be nice if respondents were given the opportunity to rate researchers too.
E.g. negatively: asks for private information, does not pay enough, asks really boring questions, asks really difficult questions.
Or positively: asks interesting questions, provides debriefing, questions are clear and easy to answer.
While perhaps few respondents will pay all that much attention to the researcher ratings, it would at least preserve the reputation for fairness that Prolific currently, and rightly I believe, enjoys.

3 Likes

Tim, wow, thank you very much! It’s a small world: I studied Japanese at Hiroshima Daigaku. There are no current plans to expand to Japan, but I’ll feed this back to the team.

I’ve just read through your responses in the survey; they give us lots of insight. Thank you.

A lot of your views are similar to those of other researchers we’ve been in contact with. We’re trying to shape the principles that guide us in giving participants access to Prolific. Currently we find that honesty and reliability are the foundations, superseding comprehension, accuracy, and diversity; the latter become more important once a participant is validated as honest. Our current way of handling participants is that if they’re repeatedly dishonest, we remove their access as a participant.

We’ve been trialling an internal trust measure which helps us understand the honesty of a participant.
We’ve also been trialling a researcher and study rating from participants; would you find it useful to get this feedback too?

1 Like

I am all for this! As an example, I have an attention check in my study where I ask participants to listen to two music excerpts and tell me what instrument they hear. The music excerpts are identical, playing only guitar music. When a participant then clicks on “piano” and “triangle”, it’s clear that this goes beyond being a bit inattentive. I used these checks to screen out participants at the beginning of the study and then asked them to return their submission, so I didn’t need to reject or pay them. This might be why it didn’t occur to me that I should report them, in addition to the fact that my attention checks aren’t the typical “Please click on this specific response”.

I have a few control questions as well that do not screen out participants, and I pay them regardless of how they respond to these. However, based on their answers, I might have to exclude them from the data set later. These control questions are not that black and white though. I have a cut-off, where values beyond this threshold suggest that participants are either trolling or just responding randomly. I can tell you my reasons for choosing that cut-off (the theory behind it), but there is still room for interpretation. Would it be at all helpful to report behaviour that is highly indicative of trolling/random answers, even though I cannot guarantee it?
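To illustrate, the screening logic I apply afterwards is roughly the following; the cut-off value and the scores are placeholders, not the actual numbers from my study:

```python
# Placeholder sketch of the cut-off logic described above; the threshold
# and the scores are invented, not the actual values from my study.
CUTOFF = 2.5  # beyond this, answers look like trolling or random responding

def flag_suspicious(control_scores, cutoff=CUTOFF):
    """Return participant IDs whose control-question score exceeds the cutoff."""
    return [pid for pid, score in control_scores.items() if score > cutoff]

print(flag_suspicious({"p01": 0.4, "p02": 3.1, "p03": 1.8}))  # ['p02']
```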

4 Likes

We’d certainly like to hear about participants who you feel aren’t providing good quality data. We’re actually currently testing a way that would allow you to report them easily. If you have some time, let us know your thoughts on our concept :slight_smile:

I am very glad to have been of use. Here are two more feature ideas, A and B:

A) Alipay, or some other AliExpress-compatible payment option, since AliExpress sellers do not always accept PayPal, and AliExpress is where many people do their lower-price-point online shopping: for Chinese-produced products at least, prices are generally lower than on eBay.

B) Bespoke survey links for non-Prolific members, so as to

  1. financially reward students and other survey participants recruited outside of the Prolific respondent team,
  2. thereby allowing researchers to use their own contacts and subject pool of interest,
  3. and additionally, recruit more people to take part as Prolific respondents from such subject pools (in my case, Japanese people).

To do this, the bespoke survey link would jump to a form where the respondent would be asked to fill in the minimum amount of information needed to become a new Prolific member (ideally not requiring a PayPal/Alipay account initially) and would then be directed to the survey.
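Roughly, I imagine the flow working like the sketch below; every function name and value is hypothetical, just to spell out the steps:

```python
# Hypothetical sketch of the proposed bespoke-link flow; every name and
# value here is invented, just to name the steps described above.
def minimal_signup(name, email):
    """Collect only the minimum info needed to create a new member."""
    return {"id": f"member_{email}", "name": name}  # no PayPal/Alipay yet

def survey_redirect(survey_token, member):
    """Send the freshly created member straight on to the survey."""
    return f"https://example.org/survey/{survey_token}?member={member['id']}"

member = minimal_signup("Taro", "taro@example.org")
print(survey_redirect("abc123", member))
```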

m(._.)m

1 Like

We’re loving the ideas, keep ’em coming!

A) We don’t have any current plans to use services beyond PayPal, but if we do expand into new international markets, it’s certainly something we’ll have to look at.

B) Funnily enough, this is something that has been requested several times by researchers. As the idea seems to be quite popular, we’re looking at how we might implement something like this. I’ll keep you updated if we launch something. We’d love to get your feedback if we do :slight_smile:

  1. A way of rewarding participants (not researchers) for recruiting other participants, especially in areas where there are few participants, such as Japan: e.g. “tell a friend and get a dollar when they have completed a survey.”

  2. A way for researchers to fund (1), such that it is the researchers, rather than Prolific, who pay to increase the respondent pool in areas where more subjects are needed. E.g. when a subject finishes a survey, they could be given a researcher-funded “share this link with friends [from a certain demographic] to get a bonus.” I’d happily pay an extra dollar per subject to get Japanese data and bring more Japanese participants on board. For Prolific it would be win-win.

The existence of a quality data pool will bring researchers to Prolific.

  3. The above can be abused, and perhaps the current system is being abused, by people using VPNs and sock-puppet multiple accounts, so some checking of accounts that always seem to respond to the same surveys might be in order, if it is not already carried out.

  4. Qualtrics, Gorilla, and Google can provide standard Likert-type surveys (and Gorilla can provide more), but while Implicit Association Tests are becoming more popular, there is currently not a lot of provision for delivering these tests online. Millisecond offers a web version of its software for 3,000 dollars per lab. If Prolific offered a survey-creation interface, it would make using Prolific easier, and if there were IAT-type tests, that would be pretty unique.

With regard to (4), I have some obsolete PHP and Flash-based software that I would be happy to donate (it is not mine to donate since it is open source, but I paid for it and can tell you where to find it). I don’t think it would be that difficult to convert it to PHP and HTML5 (if that is what people use these days rather than Flash).

I like trying to think of improvement ideas. I hope these are not useless.

2 Likes

Not useless at all! We love hearing new ideas, so keep them coming :slight_smile:

  1. We did have a referral scheme for participants, but people tended to recruit others of similar demographics, whereas we wanted to diversify our sample offering, so we stopped the scheme.
  2. That’s a very interesting approach. We hadn’t considered whether researchers would be willing to pay to recruit their own samples. We’ll keep this in mind when we’re considering ways of expanding our demographic offering.
  3. We do carry out a suite of data-quality checks, including VPN/IP address monitoring. We outlined our methods here, and we had an AMA with our Data Team about this a few weeks ago.
  4. We’re actually looking into expanding Prolific’s current offering. I can’t say too much about it yet, but there are some things in the pipeline :wink:

Please do keep these ideas coming, we love ’em!

Hi Daniela, totally agree! I also feel that developing a feature like that is super important. It could be either (1) a dynamic version of the “Participation on Prolific” filter that would allow marking ongoing studies on the list, or (2) an option to add people to the “Custom blocklist” of ongoing studies. This is needed to make it practical to run multiple studies simultaneously without manually filtering out duplicate submissions. As researchers, we usually aim to examine multiple subjects, not to receive multiple responses from the same subjects, and for now there is no way to guarantee that across multiple ongoing studies. With multiple versions of the same experiment, checking these manually is quite cumbersome.

2 Likes

Hi Josh,

Perhaps this has already been addressed, as I am accessing the platform again after a long time: why am I forced to choose between a representative sample (e.g. a UK sample) and custom pre-screening?
For example, I would like to have a representative UK sample but also screen participants on whether or not they are resident in the UK.

I hope I have made this clear, and apologies if there is a very easy and quick solution!

Elia

Hi Elia, welcome to the community!

Unfortunately, you can’t combine the two tools because we’re not confident that we’d be able to deliver a representative sample for such combinations. For example, we can provide a sample stratified by age, sex, and ethnic group, but if someone wanted to stratify by political persuasion too, it becomes much more difficult to find the proper proportions for each group.
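To see why, it helps to look at the combinatorics; the stratum counts below are purely illustrative, not Prolific’s actual quotas:

```python
# Illustrative combinatorics only; these stratum counts are invented.
age_bands, sexes, ethnic_groups = 5, 2, 5
cells = age_bands * sexes * ethnic_groups
print(cells)      # 50 quota cells to fill in the right proportions
print(cells * 4)  # 200 cells if we also cross 4 political-persuasion groups
```

Every extra dimension multiplies the number of quota cells to fill, and some cells may contain very few eligible participants.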

But, in your case, if you’re using a UK rep sample, those who respond will be UK residents.

I hope that makes sense! Let me know if I can help further :slight_smile:

(P.S. Do you know someone who uses Prolific, but isn’t on the forum? Get them to register and introduce them, and you could win £££ in Prolific Credits! Click here to find out more)

1 Like