New Prolific Features

These are great ideas! Having both those features would certainly make the study publication process much smoother. I’ll send this feedback to our Product Team :slight_smile:

I would like some sort of points-based trust system where researchers can award (or perhaps take away) trust points depending upon how subjects respond to surveys.

I have found that subjects generally respond correctly to the recommended attention checks of the form “To show that you are paying attention, please select 5 (extremely) for this question.”

However, when asked other questions, such as “Does Superman like to serve his community with sewage cocktails?”, quite a high proportion of respondents answered affirmatively, suggesting, to me at least, that they were not reading to the end of the question.

I can appreciate that there is room for interpretation in the latter type of attention check, so non-payment may well be too harsh a measure, as Prolific’s rules stipulate.

I suggest, therefore, that while payment should be made, researchers may profitably be allowed to award, withhold, or even deduct trust points, depending upon how well subjects respond to such non-literal, less-than-foolproof attention checks.

Other attention checks, such as giving similar answers (or at least non-opposite answers) to identical questions, might be used in a similar way.
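
For example, such a paired-item consistency check could be scored like this; a minimal sketch in Python, where the column names, the data, and the one-point tolerance are all illustrative rather than from any actual study:

```python
import pandas as pd

# Two columns that repeat the same 1-5 Likert item at different
# points in the survey (illustrative names and data).
df = pd.DataFrame({
    "participant_id": ["a1", "b2", "c3"],
    "q7":  [4, 2, 5],
    "q23": [4, 5, 1],  # b2 and c3 contradict their earlier answers
})

# Flag anyone whose paired answers differ by more than one scale
# point, i.e. not even "similar", and possibly outright opposite.
df["inconsistent"] = (df["q7"] - df["q23"]).abs() > 1
print(df.loc[df["inconsistent"], "participant_id"].tolist())  # ['b2', 'c3']
```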

A trust-points system, coupled with the ability to restrict access to participants with positive trust points (or more than X points) in situations where there are a lot of potential participants, would be very useful.

I have submitted this suggestion by email, but I was wondering how popular this feature would be with other researchers.

In any event, I look forward to continuing to use Prolific, including non-foolproof attention checks, because the data is still worth more than what I need to pay.

Thanks Prolific!

3 Likes

That is an interesting idea! We’re actually currently testing a reporting process which would allow you to report a participant without rejecting their submission, so you’d be able to pay them but also let us know that they’re not providing good quality data. I’ll let you know updates on that as soon as I get them. :slight_smile:

I wonder what our other researchers think about this? Let us know!

Hi Tim — Thanks for your post and compliments to Prolific!

I work on the ‘Trust’ product squad at Prolific. The things you’ve posted are all hot topics for us at the moment. We’ve got plans to review and develop our attention check policy and guidance soon, and I’ll bring your points to the table then. It would be great to get your opinions on some work that’s currently in progress which is somewhat related: https://prolific2.typeform.com/to/qcyHTOFX

1 Like

Thanks for the opportunity to send my feedback. I responded to your survey and signed up for the beta testing. I do hope that a trust system of some sort comes online. And I love the option of being able to use rechecks of prescreened information as a sort of attention check.

As I wrote in response to your survey, however, flagging and denying access to the platform may be rather too strict, since sometimes researchers would like access to a wider, representative sample of both the diligent/attentive AND the not-so-diligent-and-attentive, if they are researching things related to diligence and attentiveness.

I also hope, by the way, that you provide a Japanese localisation, since there is currently no Prolific in Japan (only a click****er-type general-purpose thing) and I would love access to Japanese respondents. I might be able to help with that should you wish. I speak and write Japanese.

And, another idea…
If respondents are to be rated by researchers, I think it would be nice if respondents were given the opportunity to rate researchers.
E.g. negatively: asks for private information, does not pay enough, asks really boring questions, asks really difficult questions.
Or positively: asks interesting questions, provides debriefing, questions are clear and easy to answer.
While perhaps few respondents will pay all that much attention to the researcher ratings, it would at least preserve the reputation for fairness that Prolific currently, and rightly I believe, enjoys.

3 Likes

Tim, すごい、どうもありがとうございます (wow, thank you very much)! It’s a small world: I studied Japanese at Hiroshima Daigaku. There are no current plans to expand to Japan, but I’ll feed this back to the team.

I’ve just read through your responses in the survey; they give us lots of insight. Thank you.

A lot of your views are similar to those of other researchers we’ve been in contact with. We’re trying to shape the principles that guide who we give access to Prolific as a participant. Currently, we find that honesty and reliability are the foundations, superseding comprehension, accuracy and diversity; the latter become more important once a participant is validated as honest. Our current way of handling participants is that if they’re repeatedly dishonest, we remove their access as a participant.

We’ve been trialling an internal trust measure which helps us understand the honesty of a participant.
We’ve also been trialling a researcher and study rating from participants. Would you find it useful to get this feedback too?

1 Like

I am all for this! As an example, I have an attention check in my study where I ask participants to listen to two music excerpts and tell me what instrument they hear. The music excerpts are identical, playing only guitar music. When a participant then clicks on “piano” and “triangle”, it’s clear that this is beyond being a bit inattentive. I used these checks to screen out the participants at the beginning of the study, and then asked them to return their submission. So I didn’t need to reject them nor pay them. This might be why it didn’t occur to me that I should report them, in addition to the fact that my attention checks aren’t the typical “Please click on this specific response”.

I have a few control questions as well that do not screen out participants, and I pay them regardless of how they respond to these. However, based on their answers, I might have to exclude them from the data set later. These control questions are not that black and white though. I have a cut-off, where values beyond this threshold suggest that participants are either trolling or just responding randomly. I can tell you my reasons for choosing that cut-off (the theory behind it), but there is still room for interpretation. Would it be at all helpful to report behaviour that is highly indicative of trolling/random answers, even though I cannot guarantee it?
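
To make the cut-off idea concrete, the exclusion step would look something like this; a minimal sketch in Python, where the column name, the data, and the threshold value are illustrative, not my actual study parameters:

```python
import pandas as pd

# Control-question scores per participant (illustrative data).
df = pd.DataFrame({
    "participant_id": ["a1", "b2", "c3", "d4"],
    "control_score": [0.4, 3.6, 1.1, 5.2],
})

# Theory-derived cut-off: values beyond it suggest trolling or
# random responding. Everyone is still paid; flagged cases are
# only excluded from the analysed data set.
CUTOFF = 3.0

flagged = df[df["control_score"] > CUTOFF]
print(flagged["participant_id"].tolist())  # ['b2', 'd4']
```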

4 Likes

We’d certainly like to hear about participants who you feel aren’t providing good quality data. We’re actually currently testing a way that would allow you to report them easily. If you have some time, let us know your thoughts on our concept :slight_smile:

I am very glad to have been of use. Here are two more feature ideas, A and B:

A) Alipay or some other AliExpress-compatible payment option, since AliExpress sellers do not always accept PayPal, and AliExpress is where many people do their lower-price-point online shopping, since, for Chinese-produced products at least, prices are generally lower than on eBay.

B) Bespoke survey links for non-Prolific members, so as to:

  1. financially reward students and other survey participants recruited outside of the Prolific respondent pool,
  2. thereby allowing researchers to use their own contacts and subject pool of interest,
  3. and additionally, recruit more people to take part as Prolific respondents from such subject pools (in my case, Japanese people).

To do this, the bespoke survey link would jump to a form where the respondent would be requested to fill in the minimum amount of information to become a new Prolific member (ideally not requiring a PayPal/Alipay account initially) and then be directed to the survey.
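
As a rough illustration of that flow, here is a minimal sketch in Python/Flask; every route, field, and helper here is hypothetical, invented for the example, and not part of Prolific’s actual system:

```python
from flask import Flask, redirect, render_template_string, request

app = Flask(__name__)

# Hypothetical mapping from bespoke links to survey URLs;
# in a real system this would live in a database.
STUDY_URLS = {"study123": "https://example.com/my-survey"}

SIGNUP_FORM = """
<form method="post">
  Name: <input name="name"> Email: <input name="email">
  <button type="submit">Join Prolific and start the survey</button>
</form>
"""

def create_member(name: str, email: str) -> None:
    # Placeholder for minimal account creation; payment details
    # (PayPal/Alipay) would be collected later, not at signup.
    print(f"registered {name} <{email}>")

@app.route("/join/<study_id>", methods=["GET", "POST"])
def join(study_id: str):
    if request.method == "POST":
        create_member(request.form["name"], request.form["email"])
        return redirect(STUDY_URLS[study_id])  # straight on to the survey
    return render_template_string(SIGNUP_FORM)
```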

m(._.)m

1 Like

We’re loving the ideas, keep em coming!

A) We don’t have any current plans to use services beyond PayPal, but if we do expand into new international markets, it’s certainly something we’ll have to look at.

B) Funnily enough, this is something that has been requested several times by researchers. As the idea seems to be quite popular, we’re looking at how we might implement something like this. I’ll keep you updated if we launch something. We’d love to get your feedback if we do :slight_smile:

  1. A way of rewarding participants (not researchers) for recruiting other participants, especially in areas, such as Japan, where there are few participants, e.g. “tell a friend and get a dollar when they have completed a survey.”

  2. A way for researchers to fund (1), such that it is the researchers, rather than Prolific, that pay to increase the respondent subject pool in areas where more subjects are needed. E.g. when a subject finishes a survey they could be given a researcher-funded “share this link with friends [from a certain demographic] to get a bonus.” I’d happily pay an extra dollar per subject to get Japanese data and get more Japanese participants on board. For Prolific it would be win-win.

The existence of a quality data pool will bring researchers to Prolific.

  3. The above can be abused, and perhaps the current system is being abused, by people using VPNs and sock-puppet multiple accounts, so some checking of accounts that always seem to respond to the same surveys might be in order, if it is not currently carried out.

  4. Qualtrics, Gorilla and Google can provide standard Likert-type surveys (and Gorilla can provide more), but while Implicit Association Tests are becoming more popular, there is currently not a lot of provision for running these tests online. Millisecond offers a web version of its software for 3,000 dollars per lab. If Prolific offered a survey-creation interface it would make using Prolific easier, and if there were IAT-type tests, that would be pretty unique.

With regard to (4), I have some obsolete PHP- and Flash-based software that I would be happy to donate (it is not mine to donate, since it is open source, but I paid for it and can tell you where you can find it). I don’t think it would be that difficult to convert the Flash part to HTML5 (if that is what people use these days rather than Flash).

I like trying to think of improvement ideas. I hope these are not useless.

2 Likes

Not useless at all! We love hearing new ideas, so keep them coming :slight_smile:

  1. We did have a referral scheme for participants, but people tended to recruit others of similar demographics, whereas we wanted to diversify our sample offering, so we stopped the scheme.
  2. That’s a very interesting approach. We hadn’t considered whether researchers would be willing to pay to recruit their own samples. We’ll keep this in mind when we’re considering ways of expanding our demographic offering.
  3. We do carry out a suite of data quality checks, including VPN / IP address monitoring. We outlined our methods here, and we had an AMA with our Data Team about this a few weeks ago.
  4. We’re actually looking into expanding Prolific’s current offering. I can’t say too much about it yet, but there are some things in the pipeline :wink:

Please do keep these ideas coming, we love em!

Hi Daniela, totally agree! I also feel that developing a feature like that is super important. It could be either (1) a dynamic version of the “Participation on Prolific” filter that would allow marking ongoing studies on the list, or (2) an option to add people to the “Custom blocklist” of ongoing studies. This is necessary to facilitate running multiple studies simultaneously without manually filtering out duplicated submissions. As researchers, we usually aim to examine multiple subjects, not to receive multiple responses from the same subjects, and for now there is no way to guarantee that across multiple ongoing studies. When running multiple versions of the same experiment, checking these manually is quite cumbersome.
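
In the meantime, a stop-gap is to check the exports with a short script; a minimal sketch in Python, where the file names and the “Participant id” column header are illustrative and should be adjusted to match your actual exports:

```python
import pandas as pd

# Exports from each ongoing version of the experiment
# (illustrative file names).
exports = ["study_v1.csv", "study_v2.csv", "study_v3.csv"]

ids = pd.concat(
    pd.read_csv(path, usecols=["Participant id"]) for path in exports
)["Participant id"]

# Anyone appearing in more than one export is a duplicated subject;
# their IDs can be added to the Custom blocklist of the other studies.
duplicates = ids[ids.duplicated()].unique()
print(duplicates)
```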

2 Likes

Hi Josh,

Perhaps this has already been addressed, as I am accessing the platform again after a long time: why am I forced to choose between a representative sample (e.g. a UK sample) and custom pre-screening?
For example, I would like to have a representative UK sample but also screen participants on whether or not they are resident in the UK.

I hope I made it clear, and apologies if this has a very easy and quick solution!

Elia

Hi Elia, welcome to the community!

Unfortunately, you can’t combine the two tools, because we’re not confident that we’d be able to deliver a representative sample for such combinations. For example, we can provide a sample stratified by age, sex and ethnic group, but if someone wanted to stratify by political persuasion too, it becomes more difficult to find the proper proportions for each group.

But, in your case, if you’re using a UK rep sample, those who respond will be UK residents.

I hope that makes sense! Let me know if I can help further :slight_smile:

(P.S. Do you know someone who uses Prolific, but isn’t on the forum? Get them to register and introduce them, then you could win £££ in Prolific Credits! Click here to find out more)

1 Like

Thank you for that attention-check suggestion and the way of implementing it prior to the survey. Great.

1 Like

Thank you. AMA is the motto of my English conversation classes. I did not know it was a well-known acronym.

  1. Participants could be allowed to delete timed-out and rejected submissions from their list of submissions, so they feel buoyed by looking at it.
  2. A database of attention checks that are allowed, for only paying researchers to see, of course.
  3. A pro-forma informed-consent form with boxes for risks or other things that might change.
  4. The ability to query the subject pool to see how many respondents of a certain demographic there are, without having to make a test survey.
  5. The ability to save favourite demographic settings for future surveys.
  6. An easier way of downloading “About You” information on the participants in our survey (if we are given access to this information – I can’t find it). If we are not currently allowed access to this information, then please could we access it? Or perhaps access it with a bonus payment. Or perhaps those participants that allow access to their About You information might be included as a demographic selector, so that such participants are given priority. (We can currently approximate this by making multiple studies, changing the demographic a little each time, based on the About You information.)
  7. An additional demographic selector that allows researchers to choose participants that take longer than x percent of the median completion time (see the sketch after this list). It is generally the people that finish quickly that provide the poor responses. I am aware that participants could deliberately delay their responses in order to get around this, but they might not be bothered.
  8. An AI-estimated survey completion time for a certain demographic at a certain level of funding (tricky to implement, perhaps).
  9. In the future, when surveys on Prolific exist, a way of giving bonuses to those that pass, or score above a certain mark on, fuzzy attention checks.
  10. A quicker way of creating a whitelist from those that have completed previous surveys so that we can inter-correlate our research.
  11. A pink list of participants (perhaps from (10), previous respondents) who are given X hrs to respond before the study is open to everyone.
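
Regarding (7), here is a minimal sketch in Python of how such a completion-time filter could work on an exported study file; the file name and column headers are illustrative:

```python
import pandas as pd

df = pd.read_csv("previous_study.csv")  # hypothetical export

# Keep only participants who took longer than x percent of the
# median completion time; fast finishers tend to give poor data.
x = 0.5
threshold = x * df["Time taken"].median()

attentive = df.loc[df["Time taken"] > threshold, "Participant id"]
attentive.to_csv("allowlist.csv", index=False)  # upload as a custom allowlist
```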

@timtak you’re a star :star_struck:

These are such great ideas! I particularly like 2, 3, 5, 7, 8, 9 & 11. I’ve added them to our list of ‘Great Community Ideas’, and when we have the time, we’d love to implement some of them.

For some of your ideas, we already have fixes/functionality:

An easier way of downloading “About You” information on the participants in our survey (if we are given access to this information – I can’t find it).

  • We’ve written a guide on how to export demographic info here. Is that what you’re after?

A quicker way of creating a whitelist from those that have completed previous surveys so that we can inter-correlate our research.

  • We have a filter called ‘Previous Studies’ which allows you to choose participants from any of your previously run studies. So, there’s no need to export their IDs, and upload them like you would do for a custom allowlist.

Hope that helps!

Just double-checking, you’re in our user-testing group right?

From my side, I put forward the following proposals for new features of the platform: :memo:

  • Allowing for simultaneous interactions between participants, so as to have a way to match participants, at least, in pairs. Thinking about two-person interactions, this would mean that subject 1 sees on her screen the input of subject 2 to a certain question (of course, after a waiting time), and vice versa. I don’t know how difficult this would be to implement, but for some kinds of research it would definitely be a plus.

  • Asking participants to routinely check the information they’ve provided in the “About You” section. This might be important, since opinions and also statuses may change over time.

I am not aware whether something like the above proposals has already been addressed by the Prolific Team until now. If that is the case, please let me know. Thanks :grin:

These are great ideas!

Allowing for simultaneous interactions between participants, so as to have a way to match participants, at least, in pairs.

  • This can already be done via some external survey platforms like Qualtrics. We don’t yet have survey functionality, but if we do, this is a feature we’d like to implement.

Asking participants to routinely check the information they’ve provided in the “About You” section. This might be important, since opinions and also statuses may change over time.

  • We do this already, but I think we can make the notifications more prominent :slight_smile:
1 Like