Millisecond Forums

Web Inquisit recorded negative latencies

https://forums.millisecond.com/Topic12030.aspx

By lwilliams - 1/5/2014

I have data recorded from an experiment launched on Web Inquisit (using v. 4.0.2) - a few of the latencies were recorded as negative values. This makes me concerned about the validity of the rest of the latency data. I've attached the script file.

Any help would be appreciated! (I only see one other note about negative latencies on the help site - from 4 years ago).
By Dave - 1/5/2014

If you think there were negative latencies recorded, you should post one of the *original* data files as output by Inquisit (not something that has already been processed by some other application).
By lwilliams - 1/6/2014

I have attached the data file downloaded directly from the Web Inquisit site (in .dat format) - including the data from 3 participants - two of whom have non-negative latencies (subject numbers 1004 and 1530) and one of whom has three trials with negative latencies (subject number 1533).


By Dave - 1/6/2014

Thanks for the file(s). Interesting -- never seen anything like this before and have no idea what exactly could cause this. Notice the following pattern (these are the 3 trials w/ negative latencies, plus the one trial preceding and the one succeeding each) -- look at the 'elapsedtime' property (which, barring the existence of a time machine, of course should be monotonically increasing):

subj    build    trial#    trialcode    resp    lat     elapsed
1533    4.0.2    22        similarity   3       2581    124006
1533    4.0.2    23        similarity   6      -2347    121736
1533    4.0.2    24        similarity   5       2450    124252
...
1533    4.0.2    57        similarity   1       3152    200534
1533    4.0.2    58        similarity   1      -2384    198263
1533    4.0.2    59        similarity   2       1990    200346
...
1533    4.0.2    107       similarity   7       1554    361971
1533    4.0.2    108       similarity   4      -1808    360300
1533    4.0.2    109       similarity   1       2198    362633

Something appears to be very, very wrong with (presumably) that machine's clock (it goes back in time!). Can you download just that person's data set (do not merge it with any others) and attach it?


By lwilliams - 1/6/2014

See attached! (by the way, thank you for your very prompt help with this!)
By Dave - 1/6/2014

Thanks for that data set. I'm not seeing any further patterns that would shed any light on this mystery. A couple of questions and remarks:

- Are there any other cases or is this participant's data set the only one w/ negative latencies?

- I have been unable to reproduce this so far. Have you been able to?

- In a (very high-level) nutshell, here's how latency is determined:

(A) Ask OS for time when Inquisit starts polling for responses in a given trial.

(B) Ask OS for time upon registering a response.

(B - A) gives latency.

If somewhere between A and B that machine's clock goofs up and gives some bogus time for B (as may have happened here), one can end up with a nonsensical negative latency (see the sketch after these remarks).

- The time-jumps in that data appear to be pretty drastic in at least one instance (> 3 secs). It seems unlikely to me that actual physical hardware running an OS *natively* would exhibit such behavior. Wild guess thus: Is it possible the person was running Windows (here: XP) in a virtual machine? Is there any way to contact the respective participant to check?
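To make the (B - A) scheme a bit more concrete, here is a minimal Python sketch of that failure mode. This is not Inquisit's actual implementation, and the millisecond timestamps are made up purely for illustration:

```python
# Minimal illustration (not Inquisit code): latency = B - A, where
# A = time when response polling starts, B = time when a response arrives.
# All numbers below are hypothetical millisecond timestamps.

def latency(poll_start_ms, response_ms):
    """Compute trial latency as (B - A)."""
    return response_ms - poll_start_ms

# Normal trial: the OS clock only moves forward between A and B.
print(latency(100_000, 102_581))   # 2581 ms -- plausible response time

# Broken trial: the OS clock jumps backward between A and B,
# so B < A and the "latency" comes out negative.
print(latency(100_000, 97_653))    # -2347 ms -- nonsensical
```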

Thanks!
By lwilliams - 1/7/2014

This is the only participant with negative latencies out of a sample of approximately 300. The study was completed anonymously, so we don't have a way of contacting the participant now. I've not seen the problem reproduced.

Ultimately, disregarding the latency data from this participant isn't terribly problematic (the task is not RT based) - but I would like to ensure that it's not a problem with the script (i.e., are the latencies for other participants valid - presuming they are non-negative?).
By Dave - 1/7/2014

lwilliams (1/7/2014)
This is the only participant with negative latencies out of a sample of approximately 300. The study was completed anonymously, so we don't have a way of contacting the participant now. I've not seen the problem reproduced.

Ultimately, disregarding the latency data from this participant isn't terribly problematic (the task is not RT based) - but I would like to ensure that it's not a problem with the script (i.e., are the latencies for other participants valid - presuming they are non-negative?).


#1: I don't see any particular problem w/ the script and it sure does not give me any negative latencies when run. Apparently that's also true for the vast majority of your participants, which should be comforting. Note: Since you have logged elapsedtime you can always check that for inconsistencies -- it should increase monotonically as already noted. If it does not, something's fishy. Also, the difference between elapsedtime on two consecutive trials always ought to be at least equal to (usually slightly greater than) the logged latency. I'd be very surprised if you found any such inconsistencies in your other participants' data sets.
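For what it's worth, here is a rough sketch of such a consistency check in Python. It assumes a tab-delimited Inquisit data file with columns named 'subject', 'trialnum', 'latency', and 'elapsedtime', and trials logged in chronological order per subject -- adjust the column names and the file name ('experiment.dat') to match your own log:

```python
import csv

def check_latencies(path):
    """Flag trials whose elapsedtime does not increase monotonically,
    or increases by less than the trial's logged latency."""
    last_elapsed = {}  # last elapsedtime seen, per subject
    with open(path, newline="") as f:
        for row in csv.DictReader(f, delimiter="\t"):
            subj = row["subject"]
            lat = int(row["latency"])
            elapsed = int(row["elapsedtime"])
            if subj in last_elapsed:
                step = elapsed - last_elapsed[subj]
                # elapsedtime must go up, and by at least the logged latency.
                if step <= 0 or step < lat:
                    print(f"subject {subj}, trial {row['trialnum']}: "
                          f"latency={lat}, elapsedtime step={step}")
            last_elapsed[subj] = elapsed

check_latencies("experiment.dat")
```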

#2: As for validity of the remaining latency data, one would first have to debate the (somewhat philosophical) point of what "valid" actually means / can mean in the given context. When you conduct research using machines you have no control over, you can never fully know their "true" measurement properties. I.e., different machines may -- in theory -- have certain hard- and/or software configurations that result in different forms of measurement biases or errors. Inquisit tries to do the best it can given the environment it's running in, but unknowns remain (to quantify or get rid of those unknowns, one would have to hook up external measurement hardware to every individual machine and profile its properties in detail). That said, barring any other obvious issues (such as negative latencies), the data are most likely fine, or if you prefer, "valid".

Hope this helps.
By seandr - 1/7/2014

This is almost certainly an anomaly specific to that particular session. I'll take a look at the data file to see if I can figure out what might have happened. Offhand, my guesses would be:
1) Some form of memory corruption or integer overflow.
2) Possibly the test was run just as the high-performance CPU clock reached the maximum tick count and reset back to zero (about as rare as Halley's Comet, but it would explain decreasing elapsed times); a toy illustration of this failure mode is sketched below.
In any case, if the latencies in the other data sets appear valid, they almost certainly are. 
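Regarding guess #2, here is a toy Python illustration (not Inquisit internals) of how a tick counter that wraps back to zero at its maximum value can yield a negative elapsed time if the difference is taken naively; the 32-bit wrap point is just an assumption for the example:

```python
# Toy example: a timer that wraps back to zero at WRAP ticks.
WRAP = 2**32  # hypothetical maximum tick count

def naive_elapsed(start, now):
    return now - start               # goes negative if the counter wrapped

def wrap_safe_elapsed(start, now):
    return (now - start) % WRAP      # correct across a single wrap

start = WRAP - 1_000   # sampled just before the counter rolls over
now = 1_500            # sampled just after the rollover

print(naive_elapsed(start, now))      # -4294964796 -- bogus negative value
print(wrap_safe_elapsed(start, now))  # 2500 -- the true number of ticks
```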

-Sean