🤔 Ask Anything Thread (Newbie Friendly)

Hello !
Thank you for proposing your help.

Recently, I published a study that received only 6 of 33 responses, so I decided to stop it. Now I have 20.40 remaining on my account that I can’t use for my new survey. Why not?

Thank you in advance for your response.

Hello @Katya_Garcia and welcome to the Forum!

One possible explanation is that you don’t have enough in your balance to cover the cost of this study. While your total balance might say you have enough, your available balance might not be.

The total balance is the total amount of money in your account minus the funds spent on participant payments. The available balance is how much you will have after any active studies have been completed. You can read more about this here: Total & available balance explained – Prolific. On your account, you can check your different balances here: Prolific.

Could this explain your situation?
Let me know — otherwise I’ll keep thinking about it!
Veronica

Hi! I have multiple (3) tasks associated with the same study, each using different software. If I’m not wrong, Prolific has space for only one link. I was therefore wondering whether I could advertise my research on Prolific so that potential participants could learn about it and contact me directly, instead of reaching the tasks via Prolific. Do you have a solution for this, or is there something else I can do about it on Prolific? Thank you!

Hello @Divyanshi_Shaktawat and welcome.

I was wondering if it was possible for me to advertise my research on prolific so that potential participants could know about it and contact me directly about it instead of going to the tasks via prolific?

If I understood your question correctly, I’m afraid the answer is no. That goes beyond what Prolific was created for. If you recruit participants through this platform, everything (payments, for example) must be done via Prolific — except of course the survey itself, to which participants are redirected through the link.

Have you got any solution to this or is there something else I can do about it on prolific?

If you wish to randomly allocate participants to different surveys (or different versions of the same survey), you can usually do this using the built-in functionality of survey software such as oTree or Qualtrics. If you have the different versions of your survey implemented in different software, and that is why you have different links (I guess that is why you are asking), a good solution may be allocate.monster, a website @PSR suggested (I leave the reference post below if you want to read more).

Does this answer your question?
-Veronica

Thank you for your super quick response! No, I do not have different versions of my surveys. Those are three different types of tasks. But your response makes things clearer. Thanks.

Hello Josh,

We are two students from Norway just starting our Master’s thesis and are going to use Prolific to get our sample.
We are wondering whether it is possible to restrict our sample to participants working only in consultancy companies? Or participants working in finance? Or another field that is characterised by a high-workload environment.

Best regards,
Astrid :slight_smile: :smiley:

Hi Astrid

Welcome to the forums :slight_smile:

You can screen for participants who work in finance, but not consultancy.
There is a prescreening question:
Which of the following best describes the sector you primarily work in?

  • Agriculture, Food and Natural Resources
  • Architecture and Construction
  • Arts
  • Business Management & Administration
  • Education & Training
  • Finance
  • Government & Public Administration
  • Medicine
  • Hospitality & Tourism
  • Information Technology
  • Legal
  • Policing
  • Military
  • Manufacturing
  • Marketing & Sales
  • Retail
  • Science, Technology, Engineering & Mathematics
  • Social Sciences
  • Transportation, Distribution & Logistics
  • Other
  • Rather not say

If you are looking for participants that have a high workload you could also use

Work week in hours

Please try to estimate: How many hours do you work per week?

  • 1-10 hours per week
  • 11-20 hours per week
  • 21-30 hours per week
  • 31-40 hours per week
  • 41-50 hours per week
  • 51-60 hours per week
  • More than 60 hours per week
  • None

You can see what other prescreening criteria are available on the (New) study settings page under
“AUDIENCE
Prescreen participants
Criteria
(Add Another One)”

As well as telling you which prescreening questions are available, it will also tell you how many participants correspond to the criteria you have stipulated, in the top right of the prescreening-question dialogue (“X participants”).

Another prescreener that might be relevant is:

Which of the following have you experienced or are experiencing at work? Please select all that apply.

  • I generally feel passionate about my work (that is, excited and highly motivated)
  • I generally feel respected at work (by my colleagues and/or superiors)
  • I generally feel bored of my job (that is, disinterested and not sufficiently challenged)
  • I’ve experienced workplace conflict (for example due to differing goals, values, or perspectives)
  • I’ve been bullied at work (that is, I’ve been a victim of emotional, physical, and/or verbal abuse)
  • I’ve observed or become aware of wrongdoing (e.g., stealing, cheating) committed by members of my workgroup

The prescreening questions are searchable.

You could also prescreen yourself, but you would have to do that in a separate (low-paying, wide-net) study. Please ask for further details should you wish to know how :slight_smile:

Tim

To prevent fraud, shall I ask participants to provide their Prolific ID and match it with the auto-generated Prolific ID set up in the survey? Also, shall I include other fraud prevention measures in my Qualtrics survey (e.g. duplicate score, fraud score, etc.), besides relying on the Prolific ID?

Dear Lywong

Welcome back :slight_smile:

I am not entirely sure what you mean by fraud, but I think the most pressing issue, which could be called fraud, is lack of attention.

I don’t think asking participants to enter their Prolific ID and comparing it with one passed automatically to Qualtrics will work as an attention check, because even participants who want to complete the survey without paying much attention will want to make sure they get paid, so they will be careful to enter the correct ID.

However, attention checks requiring that participants re-enter information they have already entered into Prolific under “About You” (the participant GUI name) / Prescreeners (the researcher GUI name) may be a pretty good way of preventing fraud, partly because, if you use wording identical to a prescreener used in your study, you can reject those who do not re-enter the same information right there and then in the survey.

Details on how to use this method are explained in the Qualtrics integration guide.
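For what it’s worth, once you have your survey export, the comparison itself is simple. Here is a minimal sketch in Python, assuming your export has one field captured automatically from the study URL and one field participants typed in themselves — the field names and IDs below are invented placeholders, not anything Prolific or Qualtrics guarantees:

```python
# Sketch: flag rows where the self-reported Prolific ID differs from the one
# captured automatically from the study URL. "PROLIFIC_PID" and "entered_id"
# are placeholder names for whatever your survey export actually uses.

def flag_id_mismatches(rows):
    """Return the rows whose typed-in ID does not match the URL-captured ID."""
    return [
        row for row in rows
        if row["entered_id"].strip().lower() != row["PROLIFIC_PID"].strip().lower()
    ]

responses = [
    {"PROLIFIC_PID": "5a1b2c3d4e5f6a7b8c9d0e1f", "entered_id": "5a1b2c3d4e5f6a7b8c9d0e1f"},
    {"PROLIFIC_PID": "5f6e7d8c9b0a1f2e3d4c5b6a", "entered_id": "5f6e7d8c9b0a1f2e3d4c5b6b"},  # typo
]

for row in flag_id_mismatches(responses):
    print(row["PROLIFIC_PID"])
```

As Tim says below, a mismatch here is more likely a typo than fraud, so treat flagged rows as candidates for a closer look rather than automatic rejections.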

Normal attention checks should, afaik, not be used to reject submissions right then and there in the survey, at least because, for anything but the shortest surveys, participants have to fail two checks to be rejected. Information on conventional, “fair” checks is explained here.

You can (and I do) also use attention indicators of the form:

  • I eat rocks.
  • He provides service in the form of sewage cocktails.
  • My car is made of ice.
  • I have two lungs.
  • I have had a fatal heart attack.
  • I have carpet on my ceiling.
  • Do you have 17 fingers?

(Most of these were posted to the participants’ subreddit.)
These could, metaphorically at least, be true, so according to Prolific’s rules they can’t be used to reject submissions; but they can be used as indicators that the participant was not paying attention, so that you, the researcher, still pay but do not use that participant’s data.
I use all three types.

Tim

Thanks for the prompt reply, Tim. The attention check is a good idea. I will certainly use it for my research design.

I am concerned about bots, box stuffing, and other creative fraudulent attempts to participate in a survey. So the self-validation method of matching the self-reported Prolific ID with the auto-generated one may add another layer of defense against potential fraud. Would you agree? Or is that unnecessary?

Lywong

I have been concerned about box stuffing (a new phrase to me, thank you) ever since I had “two” participants list their pet (one a cat, the other a dog) as a “person they admire”. I had no proof. The other clue was that their IDs were close to each other when the submissions were reordered by ID.

Apparently Prolific performs automatic checks (IP? submission time? — they don’t say) to help prevent such types of fraud.

I am not sure that comparing the passed ID with the self-reported ID would help. Participants might get their ID wrong if they are using several IDs, but I think they would be more likely to get other demographic and/or behavioural details wrong (such as age, gender, location, or sports participated in), which they are likely to vary a little in each of their sock-puppet accounts.

You could try, as per my experience:

  • Reordering by ID.
  • Asking for re-entry of some prescreener information, e.g. sports / possessions (and including all of the options as a prescreener).
  • Adding some free-response boxes, since it may be difficult for fraudsters in a rush to come up with convincingly different free responses (such as people they admire, or creative uses for a coat hanger), and comparing them.
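To illustrate the free-response comparison: a rough Python sketch, assuming you have exported the answers as a mapping from participant ID to free text (the IDs and answers below are invented for illustration):

```python
from collections import defaultdict

def find_duplicate_answers(answers):
    """Group participant IDs by their normalized free-text answer; groups of
    more than one ID may point at the same person behind several accounts."""
    groups = defaultdict(list)
    for pid, text in answers.items():
        # Normalize case and whitespace so trivial edits don't hide a match.
        groups[" ".join(text.lower().split())].append(pid)
    return {text: pids for text, pids in groups.items() if len(pids) > 1}

answers = {
    "pid_a": "My cat, Whiskers",
    "pid_b": "my cat,  whiskers",
    "pid_c": "Nelson Mandela",
}
print(find_duplicate_answers(answers))  # -> {'my cat, whiskers': ['pid_a', 'pid_b']}
```

This only catches near-verbatim duplicates; a determined fraudster who rewords each answer would slip through, so treat hits as a prompt to inspect, not proof.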

You could also increase study places slowly — even one at a time, if you have time. I don’t have any proof, but if someone had a virtual private network or ten, giving them ten different IP addresses, and some sort of script (IF SUCH EXISTS) to open ten browser windows, each with a different IP, then they might be able to launch it if they got there in time to see that there were more than 10 places left. But if they had to beat all the other honest Prolific respondents to the click on each of those ten occasions, they would be less likely, imho, to succeed.

But, this is all hypothetical. I have no proof that box stuffing is going on at all.

Bots, again IF SUCH EXIST, could probably be removed by using statistical tests for outliers.
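On the outlier idea, here is a crude sketch using completion times in seconds (the numbers are invented). It uses a modified z-score based on the median absolute deviation, which is robust to the very outliers you are hunting for; the 3.5 cutoff is a common rule of thumb, not a Prolific recommendation:

```python
from statistics import median

def flag_outliers(values, cutoff=3.5):
    """Flag values whose modified z-score (based on the median absolute
    deviation rather than the mean/stdev, so extreme points don't mask
    themselves) exceeds `cutoff`."""
    med = median(values)
    mad = median(abs(v - med) for v in values)
    if mad == 0:  # all values (nearly) identical; nothing stands out
        return []
    return [v for v in values if 0.6745 * abs(v - med) / mad > cutoff]

# Completion times in seconds: one submission is implausibly fast.
print(flag_outliers([300, 320, 280, 310, 15]))  # -> [15]
```

An implausibly fast completion time is only circumstantial evidence, of course — combine it with the other signals above before acting on it.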

Any other ideas?

Tim

Good ideas. Thanks.

I have also used a simple math question as a screener for bots.


I have a study that has completed, but I want to send someone a bonus payment. How can I do this?

Hi @Karen ! Welcome to the Forum.

From the summary page of your study, you can click the top-right Menu button and select the ‘Bulk bonus payment’ option.


Remember that Prolific IDs and payments must be entered as in the example below:

That is, ProlificID,payment pairs, one pair per line.
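If you have many bonuses to pay, you can generate that text rather than typing it by hand. A quick sketch — the IDs below are made-up placeholders, not real Prolific IDs:

```python
# Build the "ProlificID,amount" text for the bulk-bonus box, one pair per line.
bonuses = {
    "5a1b2c3d4e5f6a7b8c9d0e1f": 0.50,  # placeholder IDs
    "5f6e7d8c9b0a1f2e3d4c5b6a": 1.25,
}

bulk_text = "\n".join(f"{pid},{amount:.2f}" for pid, amount in bonuses.items())
print(bulk_text)
```

Then paste the printed text straight into the bulk bonus payment dialogue.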

Here you can find the official Prolific Help Center page on the topic.

Hope it helps!
Veronica

As the PI in my lab, I pay for studies with a credit card and get reimbursed. Is there a way to have my students log in to the same account to set up studies and NOT have access to my payment information?

I paid $1500 through a PO to do a study on Prolific. It has been paid, yet the funds do not show up in my personal account. How do I get the funds we paid into my personal account?

Thanks,
Tim

Hi, I followed the instructions here

to insert an End of Survey message redirecting people to Prolific if they decline to consent to the study.
It does work to pop a person back into Prolific, but the message that is displayed says “Submission received. This completion URL is working correctly. The participant will complete with the code 5DFFAB7A”.
Is this the way it should be? If someone declines to consent and does not do the survey, I don’t want them to appear in my list as if they completed it and should be paid.
thanks!

Thank you!

thank you!


Hi Isabel

Welcome to the forums :slight_smile:

Is your card number displayed in your account? I just checked and it is not displayed in mine but that may be because my card expired.

There are, I’m afraid, no joint accounts, though it is a common feature request.

Tim