I am running a few pilot studies with small sample sizes and a different participant filter for each study. My approach is to duplicate the previous study, change a filter, and run it again with 20-50 participants.
The problem is that I keep getting participant IDs that are clearly fraudulent, because they (a) give exactly the same wrong answers to an open question (same spelling, etc.) as other participant IDs within the same study, and (b) give exactly the same wrong answers to an open question (same spelling, etc.) as participant IDs from earlier studies, even though I have a filter that should exclude participants who took part in those studies.
This is clearly the case for around 50% of my participants, and possibly more, but for the rest I cannot be sure. Is there a way to prevent this from happening? If not, is there a way to return these participants without my having to email the support team or report the IDs and then wait a few days before I can continue? (Each study needs to be completed before I can run the next one, so that I can filter out IDs that have already participated in previous studies.)
Thanks a lot for your help!