Stop-Signal Task (SST) - excluding participants, etc.


Author
Message
sendero_dorado
New Member (8 reputation)
Group: Forum Members
Posts: 1, Visits: 14
Hi everybody. I'm an old hand with Inquisit, but new to using the SST. While analyzing some data, I came across a recent paper (Verbruggen et al., 2019) describing updated recommendations for processing and analyzing SST data, and it looks like these have already been incorporated into the Inquisit SST scripts. The problem is that I collected nearly all of the data I'm now trying to analyze with the old version, before these recommendations were implemented.
The new recommendations describe several steps that were new to me, including excluding participants (1) when the assumptions of the race model appear to have been violated, (2) when the percentage of go-trial omissions is too high (although it's not clear what that threshold should be), and (3) when the probability of responding on stop-signal trials is higher than 0.75 or lower than 0.50. If I implemented all of these steps, I'd be excluding about 25% of the participants in my study. This is a bit shocking to me: we followed all the other recommendations about the design of the task, gave each participant thorough instructions and feedback after every trial, and observed them during administration, so I expected the data to be quite valid.
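For anyone wanting to screen their own data, here is a minimal sketch of those three participant-level checks. The field layout, helper name, and the go-omission cutoff are my own illustrative assumptions, not the official Verbruggen et al. (2019) implementation; the p(respond | signal) window is the one described in the post.

```python
# Hedged sketch of three SST participant-exclusion checks.
# Data layout, function name, and omission_cutoff are assumptions for
# illustration only; adapt them to your own trial-level export.
from statistics import mean

def sst_exclusion_flags(go_rts, stop_rts, omission_cutoff=0.10):
    """Return exclusion flags for one participant.

    go_rts   -- go-trial RTs in ms, with None for omitted responses
    stop_rts -- stop-trial RTs in ms, with None for successful stops
    omission_cutoff -- assumed go-omission threshold (no consensus value)
    """
    observed_go = [rt for rt in go_rts if rt is not None]
    failed_stop = [rt for rt in stop_rts if rt is not None]

    # (1) Race-model check: responses on unsuccessful stop trials should be
    # faster on average than go responses; if not, the independence
    # assumption of the race model looks violated.
    race_violation = bool(failed_stop) and mean(failed_stop) >= mean(observed_go)

    # (2) Go-trial omission rate against the assumed cutoff.
    omission_rate = 1 - len(observed_go) / len(go_rts)
    too_many_omissions = omission_rate > omission_cutoff

    # (3) Probability of responding on stop-signal trials, flagged when it
    # falls outside the 0.50-0.75 window mentioned in the post.
    p_respond = len(failed_stop) / len(stop_rts)
    bad_p_respond = p_respond < 0.50 or p_respond > 0.75

    return {
        "race_violation": race_violation,
        "too_many_omissions": too_many_omissions,
        "bad_p_respond": bad_p_respond,
        "exclude": race_violation or too_many_omissions or bad_p_respond,
    }
```

Running each participant through a check like this at least makes it easy to see *which* criterion is driving the 25% exclusion rate, which may itself be diagnostic.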
So, my first question is: Is this percentage of excluded participants normal when all of these recommendations are followed, or does it suggest that something went seriously wrong with our administration? I know there's a ton of variability in how people analyze SST data, but in all the studies I can find so far, most seem to drop only a few participants, so I'm a bit concerned about how many this is.
My second question is: The Verbruggen et al. (2019) paper still seems pretty unclear about what percentage of go omissions should be cause for alarm, instead referring readers to its Figure 2, which isn't clear at all. Does anybody have a sense of what percentage of go-trial omissions is reasonable, or what a sensible cutoff for excluding participants would be?
If you have thoughts about either question, I'd love to hear them!
