Post by paulwarren on Mar 28, 2022 6:38:40 GMT
We are using the counterbalancing option (www.psytoolkit.org/lessons/counterbalance.html) to set up 3 groups of 10 participants. We have specified a total of 30 participants and have put 10 10 10 in the 'Counterbalance participants' line of our survey. The relevant part of our survey code is pasted below. The code ran 30 participants for us, but with 10, 9 and 11 in the three groups. Does anyone have any idea why the counterbalancing has not worked? Is it perhaps the case that the group totals are updated only when a participant has finished, so that someone could be allocated to start in one group (effectively as person 11) while someone else (person 10) was still finishing in that group? (See the little simulation sketch after the code below.)
...
l: decide_what_people_get
t: jump
- if $psy_group = 1 then goto Group1
- if $psy_group = 2 then goto Group2
- if $psy_group = 3 then goto Group3
l: Group1
t: experiment
- {fullscreen} Gender_Effort_IAT_Mal
j: background
l: Group2
t: experiment
- {fullscreen} Effort_Size_IAT_Mal
j: background
l: Group3
t: experiment
- {fullscreen} Size_Gender_IAT_Mal
l: background
....
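To check whether that could explain it, here is a minimal simulation sketch in Python (not PsyToolkit code). The timing model and the rule "assign the newcomer to the group that currently has the fewest finished participants" are purely our assumptions about what the server might do, not anything from the documentation:

import random

QUOTA, GROUPS = 10, 3

def simulate(p_still_running=0.5):
    finished = [0] * GROUPS  # counters as the server might see them (updated only at finish)
    started = [0] * GROUPS   # what actually happened
    in_flight = []           # group of each participant who started but has not finished
    for _ in range(QUOTA * GROUPS):
        # some earlier participants finish before the next one arrives
        still = []
        for g in in_flight:
            if random.random() < p_still_running:
                still.append(g)
            else:
                finished[g] += 1
        in_flight = still
        # the newcomer goes to the group that *appears* least full (ties broken randomly)
        g = min(range(GROUPS), key=lambda i: (finished[i], random.random()))
        started[g] += 1
        in_flight.append(g)
    return started

random.seed(1)
print(simulate())  # typically uneven, e.g. something like [11, 10, 9]

With several participants in flight at the same time, the started counts drift away from an exact 10 10 10, which would match what we saw.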
Post by andre on Mar 28, 2022 9:33:41 GMT
I could very well imagine that this is what happens, in particular if you use recruitment services (such as Prolific, Testable Minds, etc.) that recruit a lot of participants in a very short time, so that you always have quite a few participants doing the study at the same time. One reason might be that the participants who *finished* the study are the ones counted for counterbalancing, because participants may terminate early. You could then end up with 10 10 10 complete datasets per group, but with an unequal number of incomplete datasets in each group.
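Here is a little sketch of that mechanism in Python, with an assumed 20% dropout rate and an assumed "fill the emptiest group first" rule; neither of these is known to be PsyToolkit's actual behaviour:

import random

random.seed(2)
QUOTA, GROUPS, DROPOUT = 10, 3, 0.2
finished = [0] * GROUPS    # complete datasets, the only thing counted
incomplete = [0] * GROUPS  # early terminations per group
while min(finished) < QUOTA:
    # the next recruit goes to the group with the fewest finishers so far
    g = min(range(GROUPS), key=lambda i: finished[i])
    if random.random() < DROPOUT:
        incomplete[g] += 1  # never counted towards the quota
    else:
        finished[g] += 1
print(finished)    # always ends at [10, 10, 10]
print(incomplete)  # uneven, e.g. [1, 3, 2]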
See my thread (https://psytoolkit.boards.net/thread/767/suggestion-counterbalancing) as well; it seems that participants (at least in the beginning?) are assigned randomly to groups. Maybe they are assigned randomly throughout (each new participant has a 33% chance of landing in each of the three groups), so that 10 10 10 is only the ideal target and not fully guaranteed?
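If that is what happens, an exact 10 10 10 split would actually be the exception rather than the rule. A quick back-of-the-envelope check in Python (using the 33%-per-group assumption from the previous sentence):

from math import comb

# assignments giving exactly 10 per group, out of all 3**30 possibilities
p = comb(30, 10) * comb(20, 10) / 3 ** 30
print(round(p, 3))  # ~0.027, i.e. only about a 2.7% chance of a perfect split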
In the end I didn't use it anymore, but instead created 2 different experiments with 2 different links (and 2 studies on Testable Minds). Although I've since found out that one could balance this in Testable as well: you write a very brief 'wrapper' experiment in Testable that just assigns the participants to the two groups in a balanced fashion.
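I can't paste the Testable code here, but conceptually such a wrapper only needs a persistent counter. A minimal round-robin sketch in Python (purely illustrative; the names are mine, and a real platform would keep the counter server-side across sessions):

from itertools import count

_counter = count()  # stand-in for a counter shared across all participants

def assign_group(n_groups=2):
    # round-robin: participant i goes to group i mod n_groups
    return next(_counter) % n_groups

print([assign_group() for _ in range(6)])  # [0, 1, 0, 1, 0, 1]

As long as the counter really is shared across participants, the group sizes can never differ by more than one.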
For larger samples or projects where perfect balancing isn't required, I guess it's still a very useful function.