#TipTuesdays - Our Research Best Practice Guide

Phase 3: Piloting and launching your study

“What’s that noise?” — Of systematic and random errors, or: Why pilot testing is important

If you study social sciences, you might be especially interested in this section about types of errors.

Test theory assumes that every score or observation is composed of two components: the true value of the variable plus an error. This error can be either random or systematic.
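In classical test theory notation, this decomposition is often written as follows (a standard textbook formulation rather than anything specific to this guide):

$$
X = T + E, \qquad E = E_{\text{random}} + E_{\text{systematic}}
$$

where $X$ is the observed score, $T$ is the true score, and $E$ is the measurement error.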

The random error (sometimes also called noise) is caused by factors that affect the measurement of the variable of interest completely at random. One source of random error is, for example, the participants’ mood when taking an intelligence test, which may affect the performance of some participants but not others, either positively or negatively. Random errors do not have a systematic effect on your whole sample, which is why they typically do not affect the average group score(s).

The systematic error (sometimes also called bias), however, is caused by a factor that influences the measurement of the variable of interest in a systematic way, that is, across the whole sample. For example, if you are administering an intelligence test and the building in which the test takes place is being renovated, the noisy environment will most likely lower the scores of all participants. As a researcher, you really want to avoid or minimize this kind of systematic bias – otherwise, your data may not reflect what you are actually investigating!
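To make the distinction concrete, here is a tiny, purely illustrative simulation in Python (all numbers are made up, and the “noisy building” penalty is just an assumption): random error scatters individual scores but leaves the group mean roughly unchanged, whereas a systematic error shifts the mean of the whole group.

```python
import numpy as np

# Purely illustrative: hypothetical "true" intelligence scores for 1,000 participants
rng = np.random.default_rng(42)
true_scores = rng.normal(loc=100, scale=15, size=1_000)

# Random error (noise): varies per participant, averages out to roughly zero
random_error = rng.normal(loc=0, scale=5, size=1_000)

# Systematic error (bias): e.g. a noisy building lowers everyone's score a bit
systematic_error = -4

observed_noisy = true_scores + random_error
observed_biased = true_scores + random_error + systematic_error

print(round(true_scores.mean(), 1))      # ~100.0  (true group mean)
print(round(observed_noisy.mean(), 1))   # ~100.0  (random error cancels out on average)
print(round(observed_biased.mean(), 1))  # ~96.0   (systematic error shifts the whole group)
```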

The good news is that there are some things that you can do about both types of measurement error.

First, please do pilot test your questionnaire or instruments, in order to get feedback from your participants on potential sources of influence and bias. We generally recommend that you pilot test your study with a handful of participants.

Second, we recommend that you also ask your colleagues / your research lab to provide you with feedback on your study. After all, that’s what teams are there for! :slightly_smiling_face: While participants can only see the “appearance” of your study, your colleagues may also be able to sanity-check your branch logic and report any bugs back to you. Trust us, this approach is invaluable in preventing all sorts of biases, bugs or external factors from undermining your study! We’ve seen studies go awry because of easily preventable bugs, so we recommend that you avoid plowing ahead without any sanity checks.

Pro-tip: One way of minimizing measurement error is to use several methods to measure the same variable you are interested in. This is especially useful when you know that a certain method is prone to systematic bias.

Third, assuming you have pilot tested your study and collected all the data you wanted, you should double-check that you did not make any mistakes when importing / handling / merging your data. Sounds obvious, but you’d be surprised how many stories we’ve heard of wrongly coded variables and responses that produced spurious results. Just ask a colleague to take a quick look. Sanity checks like these can go a long way.
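If you handle your data in Python, a few quick assertions after importing and merging can catch many of these mistakes early. The sketch below is only an illustration; the file names, the `participant_id` key and the plausible value ranges are invented and will differ in your own study.

```python
import pandas as pd

# Hypothetical exports from your survey tool and demographic screener
responses = pd.read_csv("responses.csv")
demographics = pd.read_csv("demographics.csv")

# Merge on a shared participant identifier and insist on a one-to-one match
merged = responses.merge(
    demographics, on="participant_id", how="inner", validate="one_to_one"
)

# Simple sanity checks on the merged data
assert len(merged) == len(responses), "Some participants were lost in the merge"
assert merged["participant_id"].is_unique, "Duplicate participant IDs after merging"
assert merged["age"].between(18, 99).all(), "Implausible age values - check your coding"
assert not merged["score"].isna().any(), "Missing scores - check the import step"
```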

Finally, when it comes to data analysis, there are some statistical measures that you can apply to quantify your measurement error. Besides, the point above regarding sanity checks applies to analysis code in exactly the same way! Do check your code for any errors, and ask a buddy to double-check it. All of the above will significantly increase the quality of your research.
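For instance, if a variable is measured with a multi-item scale, internal-consistency estimates such as Cronbach’s alpha give you a rough handle on how much of the observed variance is measurement error. Below is a minimal sketch of the standard formula in Python; the example responses are made up, and in practice you would typically rely on an established statistics package.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a 2D array with rows = participants, columns = scale items."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Made-up responses from 5 participants on a 4-item scale (1-5 ratings)
scale = np.array([
    [4, 5, 4, 5],
    [2, 3, 2, 2],
    [5, 5, 4, 4],
    [3, 3, 3, 4],
    [1, 2, 2, 1],
])
print(round(cronbach_alpha(scale), 2))  # ~0.96 for this toy example
```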

Sources:
The expertise of the Prolific Team & Measurement Error - Research Methods Knowledge Base


Approving Submissions and Rejecting Participants

When reviewing your submissions on Prolific, here’s what works best in our experience.

First, try to approve submissions as quickly and accurately as possible.

Second, nobody likes to be rejected - but sometimes you may decide that it is necessary to reject participants’ submissions. In this case, please provide a detailed message about why you had to reject their submission.

You should also not reject submissions based on participants’ demographic information if you did not specify your target demographics as pre-screening filters.

If a participant has clearly made an effort or engaged with the task for a significant period of time, but their data is not of sufficient quality, then consider approving their submission and simply excluding it from your analysis.

If the participant has clearly made little effort, failed multiple attention checks or has lied their way into your study, then rejection is appropriate. Please read our article on valid and invalid rejection reasons for more guidance.

If you are concerned that a rejected submission might have been made by a bot, then please let us know immediately.

Lastly, communication is key! Misunderstandings are a common pitfall, especially in online communication (and across different cultures). So if participants message you with problems or questions about your study, please provide them with the necessary information.