Latency for first "valid response" AND first "correct response"


supportseeker
Esteemed Member (1.6K reputation)
Group: Forum Members
Posts: 6, Visits: 4

Hi,


If I understand correctly, Inquisit can by default record only one latency per trial (that of the first valid response). Is it somehow possible to make Inquisit record both the first "valid response" (which may also be an incorrect response) and the first "correct response"?


Background: The IAT scoring algorithms either need the latency of the first correct response (D1 and D2), or they need the latency of the first valid response, irrespective of whether that response was correct (D3 through D6). So, if I understand correctly, it is not possible to calculate all D values for a given data set, only either D1/D2 or D3-D6.


Thanks for any comments.


Dave
Supreme Being (1M reputation)
Group: Administrators
Posts: 13K, Visits: 104K

I believe there is some sort of misconception here. I assume that you are referring to the Greenwald, Banaji & Nosek (2003) paper "Understanding and Using the Implicit Association Test: I. An Improved Scoring Algorithm". If you read the article closely, you will find that the web-based IAT they used to collect the data for evaluating the various candidate measures (D1 to D6) works *exactly* like the Inquisit implementations, i.e. only one latency is recorded per trial and this is the latency of the correct response. Quoting page 202 (emphasis added):


Error latencies. It is common practice in studies with latency measures to analyze latencies only for correct responses. By contrast, the conventional IAT algorithm uses error latencies together with those for correct responses. Study 1 included analyses to compare the value of including versus excluding error latencies. A preliminary analysis of the Election 2000 IAT data was limited to respondents (n = 1,904) who had at least two errors in each of Blocks 3, 4, 6, and 7. The analysis indicated that error latencies (M = 1,292 ms; SD = 343) were about 500 ms slower than correct response latencies (M = 790 ms; SD = 301). The increased latency of error trials is explained by the Web IAT’s procedural requirement that respondents give a correct response on each trial. (Error feedback in the form of a red letter X indicated that the initial response was incorrect. Respondents’ instructions were to give the correct response as soon as possible after seeing the red X.) Latencies on error trials therefore always included the added time required for subjects to make a second response.


~Dave


supportseeker
Esteemed Member (1.6K reputation)
Group: Forum Members
Posts: 6, Visits: 4

Hi Dave,


Thanks for your reply. I'm aware of that passage. However, I don't think it is possible to calculate the D3-D6 measures when Inquisit records only the latency of the correct response. In that case, there is an error penalty built into the procedure, because after an initial error subjects need time to figure out the correct answer and press the appropriate key. The D3-D6 measures do not use this "built-in" error penalty, but add a constant to the initial (incorrect) response (i.e., either 600 ms or 2 SD). Consider the introduction to the SPSS syntax provided on Anthony Greenwald's homepage:


For the D-score with built-in error penalty:


"D_biep is the preferred IAT measure when the IAT procedure allows subjects to correct errors *AND* records latency to the occurrence of the eventual correct response. This is the procedure used in the Generic IAT available on my web site, at: http://faculty.washington.edu/agg (follow link to IAT MATERIALS)."



For the D600 algorithm:


"D_600ep is one of two preferred measures when the IAT procedure does not allow subjects to correct errors, and latency on each trial is recorded as of the occurrence of the error response. This is *NOT* the procedure used in the Generic IAT available on my web site: http://faculty.washington.edu/agg (follow link to IAT MATERIALS). For IATs done with the Generic IAT it is appropriate to use the D measure for built-in error correction. The generic syntax for that is in the file: 'D_biep.Inquisit Generic IAT SPSS syntax form.26Nov05.doc'."


In sum, I still think that in order to calculate all six possible D scores, one needs both the latency of the first response (correct or incorrect) to calculate D600 and D2SD, and the latency of the first correct response after an initial error to calculate the D with built-in error penalty. Any ideas how this could be done in Inquisit?


Thanks for any comments.


Dave
Supreme Being (1M reputation)
Group: Administrators
Posts: 13K, Visits: 104K

I think you're just plain, outright wrong on this one: Greenwald et al. simply discard the original latencies of error trials (which -- for the data sets they used -- always *include* the additional time for response correction) and then replace them with a value computed from the remaining correct trials (p. 208):


On the basis of a review of results from the four IAT data sets, six variations of the D measure were selected for Study 6, identified as D1–D6. D1 was the simplest, involving no adjustment beyond the preliminary deletion of latencies over 10,000 ms that was done for all measures. D2 additionally deleted latencies below 400 ms (on the basis of Study 5). The remaining four D variations included error penalties (on the basis of Study 4). D3 replaced error trials with the mean of correct responses in the block in which the error occurred plus a penalty of twice the standard deviation of correct responses in the block in which the error occurred. D4 replaced error trials with the mean of correct responses plus 600 ms. D5 and D6 used the same error penalties as D3 and D4 and additionally deleted latencies below 400 ms.


So, in sum, you *can* compute all six D measures for any data set generated with Inquisit just like Greenwald et al. did. Knowing the actual latency of the (first) error response is unnecessary, because this latency is *never* used in any of the computations.
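For concreteness, the replacement step quoted above can be sketched in Python. This is a hedged illustration, not any official scoring script: the record layout `(block, latency, correct)` and the function name are my own, and real scoring scripts additionally apply the 10,000 ms (and, for D5/D6, 400 ms) exclusions before this step.

```python
from statistics import mean, stdev

def replace_error_latencies(trials, penalty="2sd"):
    """Replace error-trial latencies as in the D3/D4 variants described above.

    trials: list of (block, latency_ms, correct) tuples.
    penalty="2sd":  block mean of correct latencies + 2 * block SD (D3-style).
    penalty="600ms": block mean of correct latencies + 600 ms (D4-style).
    Assumes each block containing an error has at least two correct trials,
    so the sample SD is defined.
    """
    # Collect correct-response latencies per block.
    correct_by_block = {}
    for block, latency, correct in trials:
        if correct:
            correct_by_block.setdefault(block, []).append(latency)

    # Keep correct latencies; replace error latencies with the penalty value.
    out = []
    for block, latency, correct in trials:
        if correct:
            out.append(latency)
        elif penalty == "2sd":
            lats = correct_by_block[block]
            out.append(mean(lats) + 2 * stdev(lats))
        else:
            out.append(mean(correct_by_block[block]) + 600)
    return out
```

Note that the observed error latency never enters either formula, which is the point being made here.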


~Dave


Blackadder
Supreme Being (27K reputation)
Group: Forum Members
Posts: 280, Visits: 147

However, let's assume for a second one would actually require the response latencies of ALL responses to a stimulus to be recorded. What I do is the following:


Define a trial with only one purpose: collect responses.



<trial TRIAL_responsecollector>
/ validresponse = (203, 205)
/ correctresponse = (203)
/ responseinterrupt = trial
/ responsetrial = (205, TRIAL_responsecollector)
/ stimulusframes = [1 = SHAPE_blank]
</trial>



Use this as the responsetrial for incorrect responses in each of the stimulus presentation trials. The responsetrial will recursively call itself as long as an incorrect response is provided.


Note that this will require a tad more postprocessing of the data but even in Excel the whole process can be automated.
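That post-processing step can be sketched as follows. This is a hedged illustration under my own assumptions: each keypress for a given stimulus appears as one row (because the response-collector trial re-invokes itself after every incorrect response), each row's latency is measured from that trial instance's own onset, and the chain ends with the first correct response. Column names are illustrative, not Inquisit's actual data-file columns.

```python
def summarize(rows):
    """Summarize one stimulus's chain of response-collector rows.

    rows: list of dicts with 'latency' (ms from that trial's onset) and
    'correct' (bool), in presentation order, ending at the first correct
    response.

    Returns (first_latency, total_to_correct):
      first_latency    - latency of the very first response, correct or not
                         (the D600/D2SD input),
      total_to_correct - summed latency up to the eventual correct response
                         (the built-in-error-penalty input).
    """
    first_latency = rows[0]["latency"]
    total_to_correct = sum(r["latency"] for r in rows)
    return first_latency, total_to_correct
```

With both numbers per stimulus, all six D variants can be computed from the same data set.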


Kind regards,
  Malte



BTW: Frankly, I've never quite understood why Greenwald et al. did not include a condition D7 in their study which does exactly what supportseeker suggested. Why use computed measures if you can exploit real data instead?


Dave
Supreme Being (1M reputation)
Group: Administrators
Posts: 13K, Visits: 104K

BTW: Frankly, I've never quite understood why Greenwald et al. did not include a condition D7 in their study which does exactly what supportseeker suggested. Why use computed measures if you can exploit real data instead?


I believe their reasoning is that there will usually be only very few error trials, which do not provide a sufficient data basis to "exploit". Greenwald et al. also appear to think that errors should be penalized.


~Dave


BTW: I don't necessarily agree with Greenwald et al. on all accounts, just adding this for the sake of completeness.


supportseeker
Esteemed Member (1.6K reputation)
Group: Forum Members
Posts: 6, Visits: 4

Thanks a lot for this, Blackadder!



Supportseeker

