I’m running a study that will recruit 4,000 survey respondents and wondering how likely it is that I will get a roughly 50:50 male/female-identifying split in responses without creating separate surveys with equal slots for males and females, each screening out the other gender. The survey topic is relatively gender neutral (a restaurant menu choice simulation), and other screeners are more likely to eliminate slightly more female respondents (e.g. cannot be vegetarian). Any insight would be appreciated! Thank you, Stacy
Hey Stacy, our participant pool is skewed slightly towards female-identifying participants, and the way our studies are assigned to participants means that a 50/50 split is not guaranteed. To be absolutely sure you’ll get a 50/50 split, we recommend that you run two separate studies.
I’m following up on this topic, since my studies have yielded samples of roughly 70% female participants. Although in principle this is not a problem, it might be for the generality of some results. I’m wondering, are there other ways to narrow this gap a bit? For example, are there nationalities for which the imbalance is smaller, or hours of the day for which there is evidence of parity in participants’ activity by gender? If not, the suggestion of two separate studies is still very insightful!
That’s a great question. Let me investigate and get back to you.
I looked into it, but unfortunately there’s no significant difference between nationalities: female participants make up somewhere between 50% and 55% of our pool for every nationality. So, it’s strange that you’ve been getting a 70% split.
But, we distribute our studies evenly, not randomly. Our primary tool is something called adaptive rate limiting. In essence, when the number of active participants is high relative to the number of study places available, we give priority access to participants who’ve spent less time taking studies recently.
So, it could be the case that the majority of those given priority access, in your case, happened to identify as female
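The priority mechanism described above can be sketched in a few lines. This is a hypothetical illustration of the idea (prioritise participants with the least recent study time when demand exceeds supply); the names and structure are my own, not Prolific’s implementation:

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Participant:
    recent_study_minutes: int          # time spent taking studies recently
    pid: str = field(compare=False)    # excluded from ordering

def assign_places(active, places):
    """When active participants outnumber study places, grant the places
    to those who have spent the least time in studies recently."""
    return heapq.nsmallest(places, active)

active = [
    Participant(120, "p1"),
    Participant(5, "p2"),
    Participant(60, "p3"),
    Participant(0, "p4"),
]
chosen = assign_places(active, 2)
print([p.pid for p in chosen])  # the two least-active participants get priority
```

If the least-active slice of the pool happens to skew female at the moment a study opens, that slice, and therefore the sample, will skew female too.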
I see, maybe that’s why it happened.
Anyways, thank you very much for the information!
I can echo this thread. I just finished data collection and found that, out of 200 participants, 84% are female.
Now I am wondering whether the imbalanced sample is coming from differences between job industries. One of the eligibility criteria in my study is that participants must work in a knowledge-intensive industry (e.g., finance, education, technology, healthcare). Most of my participants work in education (K-12) and healthcare, which are generally female-dominant industries, and I have very few participants from finance and technology, which are generally male-dominant industries.
So, I am curious whether Prolific has more participants from female-dominant industries than male-dominant industries. Is there any data showing the breakdown of the participant pool by job industry? Also, is there any data showing whether Prolific has a more significant gender gap in some industries than others?
I am afraid that not so long ago Prolific went viral on TikTok due to a video by a young lady, resulting in there being a lot more participants in a similar (young female) demographic. This was announced here and on the Prolific blog in the following post.
This means it is currently important (if gender is an issue) to create at least three surveys: one for young women (to limit them), one for young men (to get more of them), and one for people over 25 (who probably don’t watch TikTok, and where there is little gender disparity).
Prolific is waiving its fees (33%) on studies that have fewer than 25% males, so you can use these reimbursed funds to run another study for young males only. Representative samples also currently incur no extra charge (but they can’t be combined with prescreeners, such as employment type).
In my humble opinion, it would be a good idea to warn researchers somewhere in the GUI that they need to do some gender balancing. Perhaps when we apply prescreeners, the resultant “participants active in the last 90 days” figure could be given with a male / female / non-binary breakdown. I submitted feedback asking for a feature of this type.
I am sorry things have turned out this way for you.
Yes, I read that article yesterday. I wish I had read it before I ran my study. I sent out an email to my department listserv to let them know about this issue (many people in my department use Prolific).
Having three studies to recruit different demographics sounds clever. I already have so many young female participants (186) and not enough funding to recruit another 186 young male participants, so I decided to just recruit 50 male participants with the refunded money.
[quote=“Ashley_Lee, post:9, topic:241”] I already have so many young female participants (186) and have not enough funding to recruit another 186 young male participants. So, I just decided to just recruit 50 male participants with the refunded money. [/quote]
I hope that the 64 males show no statistical difference to the females, so that you can use all your data.
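That check (no statistical difference between the male and female groups on the outcome measure, so the samples can be pooled) is a standard two-sample comparison. Here is a minimal pure-Python sketch using Welch’s t statistic, with made-up toy scores standing in for the real survey measure:

```python
from math import sqrt
from statistics import mean, variance

def welch_t(a, b):
    """Welch's two-sample t statistic (does not assume equal variances)."""
    va, vb = variance(a), variance(b)
    return (mean(a) - mean(b)) / sqrt(va / len(a) + vb / len(b))

# Toy outcome scores for illustration only
males   = [3.1, 2.8, 3.5, 3.0, 2.9, 3.2]
females = [3.0, 3.3, 2.7, 3.1, 3.4, 2.9, 3.2]

t = welch_t(males, females)
print(f"t = {t:.3f}")
```

Roughly speaking, |t| well below ~2 suggests no evidence of a gender difference, in which case pooling the groups is defensible; with real data you would also want the proper degrees of freedom and p-value (e.g. via `scipy.stats.ttest_ind` with `equal_var=False`).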
We’re glad you’re excited! If you have any feedback about the new feature, leave it in this thread.
For those who don’t have access to this feature yet: we’re doing a staggered rollout, just in case any issues arise. Barring any unforeseen problems, you’ll be able to balance by sex by the end of the month!
You’ll already be on it, I guess, but it would be nice to be able to change the N in balanced mode.
So an n=10 pilot runs OK; just up it to 1,000 and let the balancing algorithm take care of inviting new people. But it’s GREAT to have this! It’s so much easier than creating and managing duplicate M and F projects…
Yep, we’re on it. It might be a while until that feature is available, as the tool is more complicated under the hood than it seems! Until then, you can duplicate your study to increase places.