🛠 #TipTuesdays - How to Optimise Your Use of Prolific

Every Tuesday, I’ll be posting a tip to help you optimise the way you use Prolific :slight_smile:

This is a ‘Wiki’ post, which means any community member can edit it. So, if you’ve got any tips or tricks the community would find useful, add them below under ‘Community Tips’!

Click ‘Edit’ in the bottom right corner to add! And comment below if you find this useful.


There might even be a prize for the best tip of the week :eyes:

:mantelpiece_clock: Best Times to Publish Your Study

This graph shows trends in participant activity on Prolific. Publish your study when more participants are likely to be active on the platform! For example, if you want the greatest exposure to users worldwide, the best time to publish is around 3-4pm GMT.

Community Tips

Add yours here!

  • When running a pilot, consider running it on the same day of the week, and at the same time, as the real experiment. This can improve the reliability of your analysis if you plan to analyse both samples together, merging the observations from the two sessions into a single dataset: it makes it more likely that the two samples will be homogeneous with respect to important characteristics (e.g. gender, age). Of course, this only works if the pilot goes well and doesn’t lead to any substantial change in the design or any other key aspect of the subsequent experiment. (from @Veronica )



:white_check_mark: Getting Trusted Participants

Did you know you can use our ‘Approval Rate’ filter to recruit participants with a very high submission approval rating? For example, if you want participants who have never had a submission rejected, set the filter to 100 like so:

Read more about our free prescreening filters here :slight_smile:



:mag_right: Reverifying Participant Info

At Prolific, we do a lot to ensure that you get the best possible data quality. But, if you really want to be sure that your participants are who they say they are, you can run some really simple checks:

  • Re-ask your pre-screeners within your experiment. So, for example, if you’re targeting teachers aged 25, ask participants to reconfirm their age and their occupation. This allows you to confirm that your participants’ prescreening answers are still current and valid, and may reveal people who have forgotten their original answers to prescreening questions.

  • Ask difficult-to-answer questions based on your pre-screeners. So, let’s say you’re targeting people who use a particular medicine. You could ask what brand they use, their normal dosage, and at what times of day they’re supposed to take it. Then compare their answers against accurate information about that medicine.
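If you export both your in-study answers and your original prescreening data, the first check above can be automated. Here’s a minimal sketch in Python; all participant IDs and field names below are hypothetical, so adapt them to your own export format:

```python
# Flag participants whose re-asked answers contradict their original
# prescreening responses. Both arguments are dicts keyed by participant ID.

def flag_mismatches(responses, prescreeners):
    """Return (participant_id, field) pairs where answers disagree."""
    flagged = []
    for pid, expected in prescreeners.items():
        answers = responses.get(pid, {})
        for field, value in expected.items():
            if answers.get(field) != value:
                flagged.append((pid, field))
    return flagged

# Hypothetical example data: p2's stated age no longer matches.
responses = {"p1": {"age": 25, "occupation": "teacher"},
             "p2": {"age": 31, "occupation": "teacher"}}
prescreeners = {"p1": {"age": 25, "occupation": "teacher"},
                "p2": {"age": 25, "occupation": "teacher"}}

print(flag_mismatches(responses, prescreeners))  # → [('p2', 'age')]
```

A mismatch isn’t proof of dishonesty (answers can legitimately change, e.g. age), so treat flagged participants as candidates for a follow-up message rather than automatic rejection.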

These aren’t foolproof, but along with the extensive work Prolific already does, they can act as extra data quality control measures.

You can read more about our free pre-screeners here

Are there any methods you use to verify participant info? Let us know below! :slight_smile:

If you found this post helpful, give it a like :heart:





Hi Josh,
I ran a study that was somewhat sensitive to language and, therefore, made English as a first language an inclusion criterion. However, when I received messages from some participants, it was quite obvious that English was not their first language. I think I would need some sort of language test as a screener if I really wanted to be sure. Fortunately, this was just pilot data.





Good tip Joe!

Sorry to hear that you got participants who weren’t being honest. We now have a reporting feature which will allow you to let us know about things like this in the future :slight_smile:



:video_game: How do you deal with participants who might try to game the system?

We’re super proud of the quality of our participant pool, and the quality of the data it provides. But, no vetting system is perfect! So, here’s how you can filter out the very small minority who may be attempting to ‘game’ the system.

  1. Use speeded tests or questionnaires to prevent participants from having time to google answers.
  2. Ask participants a few questions clarifying the instructions of the task at the end of the experiment (to check they understood the task properly and didn’t cheat inadvertently).
  3. Develop precise data-screening criteria to classify unusual behaviour. These will be specific to your experiment, but may include:
  • Variable cutoffs based on the inter-quartile range
  • Fixed cutoffs based on ‘reasonable responses’ (e.g. consistent reaction times faster than 150ms, or test scores of 100%)
  • Non-convergence of an underlying response model
  • Simple as it seems, it’s been suggested you include a free-text question at the end of your study: “Did you cheat?”
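The first two screening criteria above can be sketched in a few lines of Python. This is illustrative only: the 1.5 × IQR multiplier and the 150ms floor are conventional defaults, not Prolific recommendations, and you should choose thresholds that fit your own task:

```python
# Screen reaction times with (a) a variable cutoff based on the
# inter-quartile range and (b) a fixed cutoff for implausibly fast
# responses. Thresholds (1.5 * IQR, 150 ms) are illustrative.
import statistics

def iqr_bounds(values, k=1.5):
    """Return (lower, upper) bounds at k * IQR beyond the quartiles."""
    q1, _, q3 = statistics.quantiles(values, n=4)
    iqr = q3 - q1
    return q1 - k * iqr, q3 + k * iqr

def screen_reaction_times(rts_ms, floor_ms=150):
    """Return the reaction times (in ms) flagged as unusual."""
    lo, hi = iqr_bounds(rts_ms)
    return [rt for rt in rts_ms if rt < floor_ms or rt < lo or rt > hi]

# Toy data: 30 ms is implausibly fast, 2900 ms is an outlier.
rts = [420, 455, 480, 30, 510, 2900, 465, 440]
print(screen_reaction_times(rts))  # → [30, 2900]
```

Whatever criteria you choose, decide and document them before looking at the data, so your screening can’t be accused of being post hoc.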

If you’re interested in learning more, you can read our full blog post on improving your data quality here.

And if you find these tips helpful, drop a like :heart:


:mag_right: 7 Ways to Check Participant Attentiveness

At Prolific, we do a lot to ensure that you get the best possible data quality. Our pre-print even shows that our participants score higher on attentiveness measures than participants from competing platforms!

But, if you want to be extra sure that they’re paying attention, you can use the following methods:

  1. Use speeded tasks and questionnaires to prevent participants from having time to be distracted by the TV or the rest of the internet.

  2. Ask participants a few questions clarifying the instructions of the task at the end of the experiment (to check they read them properly).

  3. Collect timing and page-view data:

  • Record the time of page load and timestamp every question answered.
  • Record the number of times the page is hidden or minimised.

  4. Monitor the time spent reading instructions:

  • Look for unusual patterns of timing behaviour: who took 3 seconds to read your instructions? Who took 35 minutes to answer your questionnaire, with a 3-minute gap between each question?

  5. Implement attention checks (aka Instructional Manipulation Checks, or IMCs). These are best kept super simple and fair. “Memory tests” are not a good attention check, nor is hiding one errant attention check among a list of otherwise identical instructions!

  6. Include open-ended questions that require more than a single-word answer. Check these for low-effort responses.

  7. Check your data using careless-responding measures such as consistency indices or response-pattern analysis; see Meade and Craig (2012) and Dupuis et al. (2018).
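To make the last point concrete, here is a minimal sketch of one common careless-responding measure in the spirit of Meade and Craig (2012): the “long-string” index, i.e. the longest run of identical consecutive answers on a Likert-style questionnaire. The threshold of 8 below is purely illustrative; calibrate it to your own scale and item count:

```python
# Flag "straight-liners": participants whose longest run of identical
# consecutive answers meets or exceeds a threshold. The threshold (8)
# is illustrative, not a published cutoff.

def longest_run(answers):
    """Length of the longest run of identical consecutive answers."""
    best = run = 1
    for prev, cur in zip(answers, answers[1:]):
        run = run + 1 if cur == prev else 1
        best = max(best, run)
    return best

def flag_straightliners(data, threshold=8):
    """data: dict mapping participant ID to a list of item responses."""
    return [pid for pid, answers in data.items()
            if longest_run(answers) >= threshold]

# Hypothetical responses on a 12-item, 5-point scale.
data = {
    "p1": [3, 4, 2, 5, 3, 4, 1, 2, 3, 4, 5, 2],
    "p2": [4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4],  # straight-lining
}
print(flag_straightliners(data))  # → ['p2']
```

Long-string indices work best alongside the other checks above (timing data, open-ended answers), since a single measure can misfire on genuinely consistent responders.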



Thanks for this tip :raised_hands:. This is useful!


Glad it’s helpful! Is there any particular area that you’d like tips on?

:rocket: How do you get the best out of participants? - Part 1

While participants are ultimately responsible for the quality of data they provide, you as the researcher need to set them up to do their best.

  1. Pilot, pilot, pilot your study’s technology. Run test studies, double and triple check your study URL, ensure your study isn’t password-protected or inaccessible. If participants encounter errors they will, more often than not, plough on and try to do their best regardless. This may result in unusable or missing data. Don’t expect your participants to debug your study for you! :bug:

  2. Make sure you use the ‘device compatibility’ flags on the study page if your study requires (or excludes) a specific type of device. Note that our device flags do not currently block participants from entering your study on ineligible devices (detecting devices automatically is somewhat unreliable and may exclude eligible participants). If you need stricter device blocking, we recommend implementing it in your survey/experimental software.

  3. Keep your instructions as clear and simple as possible. If you have a lot to say, split it across multiple pages, and use bullet points and diagrams to aid understanding. Make sure you explicitly state what a participant is required to do in order to be paid. This will increase the number of participants that actually do what you want them to! :memo:

That’s all for part 1! Next week I’ll give you 3 more tips on how to get the best out of your participants.



:rocket: How do you get the best out of participants? - Part 2

While participants are ultimately responsible for the quality of data they provide, you, as the researcher, need to set them up to do their best.

  • If participants message you with questions, aim to respond quickly and concisely. Be polite and professional (it’s easy to forget, when 500 participants are messaging you at once, that each one is an individual!). Ultimately, participants will respond much better when treated as valued scientific co-creators. :slightly_smiling_face:
  • If you can, make your study interesting and approachable. Keep it easy on the eye and break long questionnaires down into smaller chunks.
  • If you can, explain the rationale of your study to your participants. There is evidence that participants are willing to put more effort into a task when its purpose is made clear, and that participants with higher intrinsic motivation towards a task provide higher quality data.

That’s all folks! Comment below if you’d like tips in a particular area :slightly_smiling_face:



Very interesting finding! I will surely take it into account for my next studies.
In the meantime, I added a suggestion in the “Community Tips” block that comes from my experience with piloting experiments.


I love the tip! Thanks for your contribution :grin: